content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Conroe Math Tutor
Find a Conroe Math Tutor
...I have taught Pre-Algebra, Algebra, Geometry, and Math Models. I have tutored for Sat Solution and other test prep companies. At my school, I helped start and run a TAKS (STAAR) tutoring
program after school.
19 Subjects: including trigonometry, ESL/ESOL, photography, TOEFL
...I am well-versed in the basics and finer nuances of speech preparation and delivery. I have a Master's degree in theology with a concentration in apologetics, which is the science and art of
rational argumentation. I possess 18 credit units at the Master's level in this concentration.
27 Subjects: including logic, English, writing, reading
...I have three years of theatre experience including college, community and professional theatre work. During the 2002-03 academic year, I served as the Production Manager for the Kingwood
College Theater program. With over a dozen shows under my belt, I have served as cast member, stage crew, light design, sound tech, stage manager and director.
73 Subjects: including calculus, chemistry, grammar, business
...Becca’s passion is to encourage youth to love themselves the way God loves them, and to expect the best out of themselves each and every day.I have tutored all levels of math for decades,
including tutoring Business Calculus at The University of Texas. When I was growing up, I was the student wh...
23 Subjects: including geometry, prealgebra, algebra 1, reading
...The most important attributes a drummer should have are 1. Great listening skills 2. Dependability 3.
27 Subjects: including discrete math, algebra 1, algebra 2, ACT Math
|
{"url":"http://www.purplemath.com/Conroe_Math_tutors.php","timestamp":"2014-04-20T13:59:15Z","content_type":null,"content_length":"23288","record_id":"<urn:uuid:5b392257-0ccb-4d55-90bc-ba86f891b0ac>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fleetwood, NY Science Tutor
Find a Fleetwood, NY Science Tutor
...To me the secret to retaining new information whether it be dates or a new language is understanding its value; we forget things when our brain doesn't register them as being worth remembering
thus the learning in itself must be stimulating and interesting. I am passionate about teaching, my hou...
5 Subjects: including philosophy, French, European history, theatre
...Algebra II/Trigonometry:I cover as much topics as the student needs or all the topics, especially the ratio of those most pertinent to the Regents exam. From Algebraic expressions, Functions
and Relations, Composition and Inverses of Functions, Exponential and Logarithmic Functions, Trigonometri...
47 Subjects: including chemistry, ACT Science, physical science, SAT math
...I use diagrams, charts,and examples from real life to help give the topics in Chemistry different perspectives and help in the learning process. My approach to the SAT Math is to review the
basics of Math. These basics create the foundations for the more advanced problems in Algebra, Geometry, Data Analysis and the other topics tested in the SAT Math section.
45 Subjects: including nutrition, biochemistry, zoology, ACT Science
...The students I see often need help with reading comprehension, essay writing, editing, research writing, study skills, time management, stress management, career goals, etc. Prior to that I
worked as a counselor for students who were placed on academic probation and experiencing difficulties in ...
22 Subjects: including psychology, sociology, reading, English
...I also had the opportunity to work with special ed. students. I modified my lessons to meet the needs of all students in my classes. I have the experience of managing a class where students
had behavior issues.
9 Subjects: including chemistry, algebra 1, algebra 2, trigonometry
Related Fleetwood, NY Tutors
Fleetwood, NY Accounting Tutors
Fleetwood, NY ACT Tutors
Fleetwood, NY Algebra Tutors
Fleetwood, NY Algebra 2 Tutors
Fleetwood, NY Calculus Tutors
Fleetwood, NY Geometry Tutors
Fleetwood, NY Math Tutors
Fleetwood, NY Prealgebra Tutors
Fleetwood, NY Precalculus Tutors
Fleetwood, NY SAT Tutors
Fleetwood, NY SAT Math Tutors
Fleetwood, NY Science Tutors
Fleetwood, NY Statistics Tutors
Fleetwood, NY Trigonometry Tutors
Nearby Cities With Science Tutor
Allerton, NY Science Tutors
Bardonia, NY Science Tutors
Bronxville Science Tutors
Heathcote, NY Science Tutors
Hillside, NY Science Tutors
Inwood Finance, NY Science Tutors
Manhattanville, NY Science Tutors
Maplewood, NY Science Tutors
Mount Vernon, NY Science Tutors
Mt Vernon, NY Science Tutors
River Vale, NJ Science Tutors
Scarsdale Park, NY Science Tutors
Throggs Neck, NY Science Tutors
Tuckahoe, NY Science Tutors
Wykagyl, NY Science Tutors
|
{"url":"http://www.purplemath.com/Fleetwood_NY_Science_tutors.php","timestamp":"2014-04-18T23:35:55Z","content_type":null,"content_length":"24104","record_id":"<urn:uuid:f08bd51c-4fca-498a-86a4-4c4e0081c0fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 22
, 2001
"... Maude is a high-level language and a high-performance system supporting executable specification and declarative programming in rewriting logic. Since rewriting logic contains equational logic,
Maude also supports equational specification and programming in its sublanguage of functional modules and ..."
Cited by 170 (62 self)
Add to MetaCart
Maude is a high-level language and a high-performance system supporting executable specification and declarative programming in rewriting logic. Since rewriting logic contains equational logic, Maude
also supports equational specification and programming in its sublanguage of functional modules and theories. The underlying equational logic chosen for Maude is membership equational logic, that has
sorts, subsorts, operator overloading, and partiality definable by membership and equality conditions. Rewriting logic is reflective, in the sense of being able to express its own metalevel at the
object level. Reflection is systematically exploited in Maude endowing the language with powerful metaprogramming capabilities, including both user-definable module operations and declarative
strategies to guide the deduction process. This paper explains and illustrates with examples the main concepts of Maude's language design, including its underlying logic, functional, system and
object-oriented modules, as well as parameterized modules, theories, and views. We also explain how Maude supports reflection, metaprogramming and internal strategies. The paper outlines the
principles underlying the Maude system implementation, including its semicompilation techniques. We conclude with some remarks about applications, work on a formal environment for Maude, and a mobile
language extension of Maude.
, 1993
"... Rewriting logic [72] is proposed as a logical framework in which other logics can be represented, and as a semantic framework for the specification of languages and systems. Using concepts from
the theory of general logics [70], representations of an object logic L in a framework logic F are und ..."
Cited by 147 (52 self)
Add to MetaCart
Rewriting logic [72] is proposed as a logical framework in which other logics can be represented, and as a semantic framework for the specification of languages and systems. Using concepts from the
theory of general logics [70], representations of an object logic L in a framework logic F are understood as mappings L ! F that translate one logic into the other in a conservative way. The ease
with which such maps can be defined for a number of quite different logics of interest, including equational logic, Horn logic with equality, linear logic, logics with quantifiers, and any sequent
calculus presentation of a logic for a very general notion of "sequent," is discussed in detail. Using the fact that rewriting logic is reflective, it is often possible to reify inside rewriting
logic itself a representation map L ! RWLogic for the finitely presentable theories of L. Such a reification takes the form of a map between the abstract data types representing the finitary theories
- ACM COMPUTING SURVEYS , 1998
"... We survey parallel programming models and languages using 6 criteria [:] should be easy to program, have a software development methodology, be architecture-independent, be easy to understand,
guranatee performance, and provide info about the cost of programs. ... We consider programming models in ..."
Cited by 134 (4 self)
Add to MetaCart
We survey parallel programming models and languages using 6 criteria [:] should be easy to program, have a software development methodology, be architecture-independent, be easy to understand,
guranatee performance, and provide info about the cost of programs. ... We consider programming models in 6 categories, depending on the level of abstraction they provide.
, 1996
"... This paper introduces the basic concepts of the rewriting logic language Maude and discusses its implementation. Maude is a wide-spectrum language supporting formal specification, rapid
prototyping, and parallel programming. Maude's rewriting logic paradigm includes the functional and object-oriente ..."
Cited by 123 (28 self)
Add to MetaCart
This paper introduces the basic concepts of the rewriting logic language Maude and discusses its implementation. Maude is a wide-spectrum language supporting formal specification, rapid prototyping,
and parallel programming. Maude's rewriting logic paradigm includes the functional and object-oriented paradigms as sublanguages. The fact that rewriting logic is reflective leads to novel
metaprogramming capabilities that can greatly increase software reusability and adaptability. Control of the rewriting computation is achieved through internal strategy languages defined inside the
logic. Maude's rewrite engine is designed with the explicit goal of being highly extensible and of supporting rapid prototyping and formal methods applications, but its semi-compilation techniques
allow it to meet those goals with good performance. 1 Introduction Maude is a logical language based on rewriting logic [16,23,19]. It is therefore related to other rewriting logic languages such as
Cafe [10], ELAN [...
, 1996
"... . This paper surveys the work of many researchers on rewriting logic since it was first introduced in 1990. The main emphasis is on the use of rewriting logic as a semantic framework for
concurrency. The goal in this regard is to express as faithfully as possible a very wide range of concurrency mod ..."
Cited by 82 (22 self)
Add to MetaCart
. This paper surveys the work of many researchers on rewriting logic since it was first introduced in 1990. The main emphasis is on the use of rewriting logic as a semantic framework for concurrency.
The goal in this regard is to express as faithfully as possible a very wide range of concurrency models, each on its own terms, avoiding any encodings or translations. Bringing very different models
under a common semantic framework makes easier to understand what different models have in common and how they differ, to find deep connections between them, and to reason across their different
formalisms. It becomes also much easier to achieve in a rigorous way the integration and interoperation of different models and languages whose combination offers attractive advantages. The logic and
model theory of rewriting logic are also summarized, a number of current research directions are surveyed, and some concluding remarks about future directions are made. Table of Contents 1 In...
- In Cafe: An Industrial-Strength Algebraic Formal Method , 1998
"... This paper explains the design and use of two equational proving tools, namely an inductive theorem prover -- to prove theorems about equational specifications with an initial algebra semantics
-- and a Church-Rosser checker---to check whether such specifications satisfy the Church-Rosser property. ..."
Cited by 38 (19 self)
Add to MetaCart
This paper explains the design and use of two equational proving tools, namely an inductive theorem prover -- to prove theorems about equational specifications with an initial algebra semantics --
and a Church-Rosser checker---to check whether such specifications satisfy the Church-Rosser property. These tools can be used to prove properties of order-sorted equational specifications in Cafe
[11] and of membership equational logic specifications in Maude [7, 6]. The tools have been written entirely in Maude and are in fact executable specifications in rewriting logic of the formal
inference systems that they implement.
, 1997
"... Eden is a concurrent declarative language that aims at both the programming of reactive systems and parallel algorithms on distributed memory systems. In this paper, we explain the computation
and coordination model of Eden. We show how lazy evaluation in the computation language is fruitfully combi ..."
Cited by 31 (11 self)
Add to MetaCart
Eden is a concurrent declarative language that aims at both the programming of reactive systems and parallel algorithms on distributed memory systems. In this paper, we explain the computation and
coordination model of Eden. We show how lazy evaluation in the computation language is fruitfully combined with the coordination language that is specifically designed for multicomputers and that
aims at maximum parallelism. The two-level structure of the programming language is reflected in its operational semantics, which is sketched shortly.
- In Proc. International SIGMOD Conference on Management of Data , 1993
"... Although the mathematical foundations of relational databases are very well established, the state of affairs for object-oriented databases is much less satisfactory. We propose a semantic
foundation for object-oriented databases based on a simple logic of change called rewriting logic, and a langua ..."
Cited by 19 (2 self)
Add to MetaCart
Although the mathematical foundations of relational databases are very well established, the state of affairs for object-oriented databases is much less satisfactory. We propose a semantic foundation
for object-oriented databases based on a simple logic of change called rewriting logic, and a language called MaudeLog that is based on that logic. Some key advantages of our approach include its
logical nature, its simplicity without any need for higher-order features, the fact that dynamic aspects are directly addressed, the rigorous integration of user-definable algebraic data types within
the framework, the existence of initial models, and the integration of query, update, and programming aspects within a single declarative language. 1 Introduction Although the mathematical
foundations of relational databases are very well established, the state of affairs for object-oriented databases is much less satisfactory. This is unfortunate, because object-oriented databases
seem to have impor...
, 1996
"... Rewriting logic is proposed as a logic of concurrent action and change that solves the frame problem and that subsumes and unifies a number of previous logics of change, including linear logic
and Horn logic with equality. Rewriting logic can represent action and change with great flexibility and ge ..."
Cited by 10 (4 self)
Add to MetaCart
Rewriting logic is proposed as a logic of concurrent action and change that solves the frame problem and that subsumes and unifies a number of previous logics of change, including linear logic and
Horn logic with equality. Rewriting logic can represent action and change with great flexibility and generality; this flexibility is illustrated by many examples, including examples that show how
concurrent object-oriented systems are naturally represented. In addition, rewriting logic has a simple formalism, with only a few rules of deduction; it supports user-definable logical connectives,
which can be chosen to fit the problem at hand; it is intrinsically concurrent; and it is realizable in a wide spectrum logical language (Maude and its MaudeLog extension) supporting executable
specification and programming. Contents 1 Introduction 2 1.1 What the frame problem (in our sense) is . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 What the frame problem (in our sense) is not
. . . . . . ....
- IN FOX AND GETOV, EDITORS, JOINT ACM-ISCOPE CONFERENCE ON JAVA GRANDE, JGI’02 PROCEEDINGS, 2002 , 2003
"... We introduce Jeeg, a dialect of Java based on a declarative replacement of the synchronization mechanisms of Java that results in a complete decoupling of the `business' and the
`synchronization' code of classes. Synchronization constraints in Jeeg are expressed in a linear temporal logic which allo ..."
Cited by 9 (1 self)
Add to MetaCart
We introduce Jeeg, a dialect of Java based on a declarative replacement of the synchronization mechanisms of Java that results in a complete decoupling of the `business' and the `synchronization'
code of classes. Synchronization constraints in Jeeg are expressed in a linear temporal logic which allows to effectively limit the occurrence of the inheritance anomaly that commonly affects
concurrent object oriented languages. Jeeg is inspired by the current trend in aspect oriented languages. In a Jeeg program the sequential and concurrent aspects of object behaviors are decoupled:
specified separately by the programmer these are then weaved together by the Jeeg compiler.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=208800","timestamp":"2014-04-23T23:13:56Z","content_type":null,"content_length":"39296","record_id":"<urn:uuid:692aad29-c1ff-4edd-9960-30b8f605ec84>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex polynomials
March 18th 2012, 08:00 PM #1
Mar 2012
Complex polynomials
Show that if the polynomial p(z) = a[n]z^n+a[n-1]z^n-1+…+a[0] is written in factored form as p(z) = a[n](z-z[1])^d1(z-z[2])^d2…(z-z[r])^dr, then
(a) n = d[1]+d[2]+…+d[r]
(b) a[n-1] = -a[n](d[1]z[1]+d[2]z[2]+…d[r]z[r])
(c) a[0] = a[n](-1)^nz[1]^d1z[2]^d2…z[r]^dr
Re: Complex polynomials
If you multiplied out:
$p(z) = a_n(z-z_1)^{d1}(z-z_2)^{d2}…..(z-z_r)^{dr}$
then how would you form each coefficient so as to make the left hand side look like the right hand side? Note:
Viewed in this way, you are looking at a problem in combinations and permutations.
March 18th 2012, 08:45 PM #2
May 2008
Melbourne Australia
|
{"url":"http://mathhelpforum.com/advanced-applied-math/196125-complex-polynomials.html","timestamp":"2014-04-16T06:30:06Z","content_type":null,"content_length":"32293","record_id":"<urn:uuid:5a6ef0ba-77d7-451a-af1f-72686f07920a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
why are all characters of the maximal torus in a Lie group weights?
up vote 3 down vote favorite
Let $G$ be a compact connected Lie group, $T$ maximal torus, identified with $\mathbb{R}^n/\mathbb{Z}^n$, $X^*(T)$
the set of characters of $T$, naturally identified with $\mathbb{Z}^n$. Let next $\Phi$ denote the set of roots, corresponding to $T$, $$ \Lambda=\{v\in \mathbb{R}^n: \forall \alpha\in \Phi,\ 2(v,\
alpha)/(\alpha,\alpha)\in \mathbb{Z}\}$$ be the lattice of weights. Then $X^*(T)\subset \Lambda$.
What is the easiest and most elementary way to prove it?
It suffices to prove that for any $\lambda\in X^*(T)$ is such a character that for some finite-dimensional complex representation of $G$ there is a non-zero $T$-invariant subspace, on which $T$ acts
by multiplying by $\lambda(\cdot)$. Is there any way to do it without appealing to infinite-dimensional representations (inducing $\lambda$ from $T$ to $G$ and some work after that)?
lie-groups rt.representation-theory
add comment
3 Answers
active oldest votes
"Easiest" depends on how you set things up: everything really hinges on how you want to identify $X^\ast(T)$ with $\mathbb Z^n$. It's probably cleanest if you don't work explicitly with $\
mathbb Z^n$, but instead state everything in terms of Lie algebras and their duals. I personally like the setup given in Knapp, Lie Groups Beyond an Introduction, IV.6--7, which is fairly
standard. In the end it all boils down to associating a copy of $SU(2)$ (or $\mathfrak{su}(2)$ or $\mathfrak{sl}_2(\mathbb C)$ ...) to each root $\alpha$, and then the integrality statement
you're after ultimately follows from the fact that $$\exp 2 \pi i x = 1 \, \implies \, x \in \mathbb Z.$$
Edit: Here's an outline of the details. Let $\mathfrak t_0$ and $\mathfrak t$ denote, respectively, the real and complex Lie algebras of $T$ and set $$ L = \{ \xi \in \mathfrak t_0 \mid \
exp \lambda = 1 \} $$ for the kernel of the exponential map $\exp \colon \mathfrak t_0 \to T$. Also define $$ L^\perp = \{ \lambda \in \mathfrak t^\ast \mid \lambda(L) \subset 2\pi i \
up vote 3 mathbb Z \}. $$ (Admittedly, the $^\perp$ is a slight abuse of notation.) Then there is an isomorphism $$ L^\perp \stackrel{\sim}{\longrightarrow} X^\ast(T) $$ given by sending $\lambda$ to
down vote the character $e^\lambda$ defined by $$ e^\lambda(\exp \xi) = e^{\lambda(\xi)}, \qquad \xi \in \mathfrak t_0. $$ This is basically Proposition 4.58 in Knapp. So our objective now is to show
accepted that $L^\perp$ sits in the weight lattice $\Lambda$ --- in other words, we want to show that $$ 2\frac{(\lambda, \alpha)}{(\alpha,\alpha)} \in \mathbb Z $$ for all $\lambda \in L^\perp$ and
$\alpha \in \Phi$ (Prop 4.59). To this end, let $\psi_\alpha \colon SU(2) \to G$ denote the root morphism corresponding to the root $\alpha \in \Phi$. This morphism has the property that $d
\psi_\alpha(h) = 2 \alpha^\vee/(\alpha,\alpha)$, where $$ h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \in \mathfrak{su}(2). $$ Consequently, $$ 1 = \psi_\alpha(1) = \psi_\alpha(\exp
(2\pi i h)) = \exp (2 \pi i \, d \psi_\alpha(h)), $$ that is, $$ 2 \pi i \, d \psi_\alpha(h) = 2 \pi i \cdot 2 \frac{\alpha^\vee}{(\alpha,\alpha)} \in L $$ whence $$ \lambda(2 \pi i \cdot 2
\frac{\alpha^\vee}{(\alpha,\alpha)}) \in 2 \pi i \mathbb Z \iff 2\frac{(\lambda, \alpha)}{(\alpha,\alpha)} \in \mathbb Z, $$ as desired.
So the fundamental theorem which is being used is that each $\mathfrak{su}(2)$ exponentiates to an $SU(2)$ or $PSU(2)$. I like this argument a lot. – David Speyer May 20 '11 at 10:58
thanks, that's perfect! – Fedor Petrov May 20 '11 at 18:37
add comment
I'm not sure what I'm allowed to use here. Here is a proof, and I'll see whether or not you are happy with it.
Let $L$ be the lattice of those characters of $T$ which occur in finite dimensional representations of $G$. It is clearly a lattice because, if $\lambda$ occurs in $V$ and $\mu$ occurs in
$U$, then $\lambda+\mu$ occurs in $V \otimes U$. As you say, it is enough to show that $L = X^*(T)$. Also, note that the root lattice is in $L$.
Let $\lambda$ be any element of $X^*(T)$. Let $s : T \to \mathbb{C}$ be the function $t \mapsto \sum_{w \in W} \lambda(w(t))$. (Implicitly using that the Weyl group is finite; am I allowed
to assume this?) Then $s$ is a $W$-invariant function on $T$, and it is not zero since it is $|W|$ at the identity. Since every conjugacy class of $G$ intersects $T$, and does so in a
$W$-orbit, we can extend $s$ to a conjugacy invariant function on $G$.
up vote 1
down vote By the Peter-Weyl theorem, there must be some finite dimensional representation $V$ such that $\langle s, \chi_V \rangle \neq 0$. By the Weyl integral formula, $\langle s, \chi_V \rangle$
is an integral over $T$ of a product of three factors: $s$, which is a certain finite sum of characters of $T$; $\chi_V$, which is a certain finite sum of characters in $L$, and the Weyl
integrand, which is a certain sum of characters in the root lattice. So the product of the last two terms is a sum of characters from $L$.
In order for this integrand to be nonzero, one of the characters in $s$ must be negative a character in the root lattice, so $-w(\lambda) \in L$ for some $w\in W$. Using that $L$ is
$W$-stable, this shows that $\lambda \in L$.
Did you switch from $V$ to $W$ at some point? – Keerthi Madapusi Pera May 19 '11 at 22:58
No, but I used W for two things: the Weyl group and the hypothetical representation in the second paragraph. I'll change the later use to $U$. – David Speyer May 19 '11 at 23:02
Thanks, I was just being dense. – Keerthi Madapusi Pera May 20 '11 at 2:52
Thanks, this is nice argument, the onliest problem why it is not appropriate for my students is that they do not know the proof of Peter-Weyl theorem, which requires some functional
analysis (or it actually does not? the proof which I know requires some properties of compact operators.) – Fedor Petrov May 20 '11 at 18:39
Yes, I think there is no way of avoiding some difficult analysis there. That's the main reason I said I wasn't sure this was what you wanted. – David Speyer May 20 '11 at 19:22
add comment
To answer the last question here, it's certainly not necessary to introduce infinite-dimensional representations into the picture when studying compact Lie groups. On the other hand, I don't
quite see the point of trying to relate in isolation the character group of a maximal torus to an abstractly defined "weight" lattice. This is best done within the standard context of finite
dimensional representations of a compact Lie group, which of course takes a while to develop from scratch. Look for instance at a standard 1985 textbook by Brocker and tom Dieck
Representations of Compact Lie Groups (Springer), where all of the classical theory is laid out systematically.
It's possible of course (as shown by Bourbaki) to treat roots and weights more abstractly without specific reference to Lie groups or Lie algebras, but this is rather artificial here. At any
rate, it's risky to start out by identifying the character group of a maximal torus "somehow" with the standard lattice in $\mathbb{R}^n$, since the same could be done with the root lattice
or (abstract) weight lattice. It's the placement of one lattice within another that counts.
up vote
1 down It's important to have examples at hand, as a reminder that the character group of a maximal torus (say in a semisimple compact group) can vary anywhere between the root lattice (adjoint
vote group) and full weight lattice (simply connected group).
P.S. Added observations: (1) You should assume your group is semisimple (having no central torus of positive dimension) if you want to compare $X(T)$ with the dual lattice of the root
lattice, the latter being of finite index in the former. It's also helpful to recall from the general theory that the quotient of these lattices is isomorphic to the center of the simply
connected covering group of your given group and to the fundamental group of the adjoint group. (2) As the answers here suggest, your question doesn't have any "easy" answer until you place
it in one of the well-established textbook settings for studying structure and representations of compact Lie groups. There are various approaches using analysis, algebraic topology,
algebraic geometry, Lie algebras, comparison with complex Lie groups, etc. Your question is mainly pedagogical, so you should first place it in one or more of these traditions.
add comment
Not the answer you're looking for? Browse other questions tagged lie-groups rt.representation-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/65474/why-are-all-characters-of-the-maximal-torus-in-a-lie-group-weights","timestamp":"2014-04-16T22:05:31Z","content_type":null,"content_length":"71701","record_id":"<urn:uuid:48f14179-b005-42e4-8da7-c8bdc4834fbd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Article
Bibliographic References
Add to:
Architecture Scalability of Parallel Vector Computers with a Shared Memory
May 1998 (vol. 47 no. 5)
pp. 614-624
ASCII Text x
Eskil Dekker, "Architecture Scalability of Parallel Vector Computers with a Shared Memory," IEEE Transactions on Computers, vol. 47, no. 5, pp. 614-624, May, 1998.
BibTex x
@article{ 10.1109/12.677257,
author = {Eskil Dekker},
title = {Architecture Scalability of Parallel Vector Computers with a Shared Memory},
journal ={IEEE Transactions on Computers},
volume = {47},
number = {5},
issn = {0018-9340},
year = {1998},
pages = {614-624},
doi = {http://doi.ieeecomputersociety.org/10.1109/12.677257},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Computers
TI - Architecture Scalability of Parallel Vector Computers with a Shared Memory
IS - 5
SN - 0018-9340
EPD - 614-624
A1 - Eskil Dekker,
PY - 1998
KW - Architecture scalability
KW - parallel vector computers
KW - shared memory
KW - sustainable peak performance
KW - theoretical peak performance.
VL - 47
JA - IEEE Transactions on Computers
ER -
Abstract—Based on a model of a parallel vector computer with a shared memory, its scalability properties are derived. The processor-memory interconnection network is assumed to be composed of
crossbar switches of size b×b. This paper analyzes sustainable peak performance under optimal conditions, i.e., no memory bank conflicts, sufficient processor-memory bank pathways, and no
interconnection network conflicts. It will be shown that, with fully vectorizable algorithms and no communication overhead, the sustainable peak performance does not scale up linearly with the number
of processors p. If the interconnection network is unbuffered, the number of memory banks must increase at least with O(p log[b]p) to sustain peak performance. If the network is buffered, this
bottleneck can be alleviated; however, the half performance vector length still increases with O(log[b]p). The paper confirms the validity of the model by examining the performance behavior of the
LINPACK benchmark.
[1] C.N. Arnold, "Methods for Performance Evaluation of Algorithms and Computers," Computers in Physics, vol. 4, no. 5, pp. 514-520, Sept./Oct. 1990.
[2] G. Bilardi and F.P. Preparata, "Horizons of Parallel Computation," J. Parallel and Distributed Computing, vol. 27, pp. 172-182, 1996.
[3] G. Bell, “Ultracomputers: A Teraflop Before Its Time,” Comm. ACM, vol. 35, no. 8, pp. 26-47, Aug. 1992.
[4] V.E. Benes, Mathematical Theory of Connecting Networks and Telephone Traffic.New York: Academic Press, 1965.
[5] C. Clos, "A Sudy of Non-Blocking Switching Networks," Bell System Technical J., vol. 32, pp. 406-424, Mar. 1953.
[6] D.M. Dias and J.R. Jump, "Analysis and Simulation of Buffered Delta Networks," IEEE Trans. Computers, vol. 30, no. 4, pp. 273-282, Apr. 1981.
[7] J. Ding and L. Bhuyan,“Finite buffer analysis of multistage interconnection networks,” IEEE Trans. Computers, vol. 43, no. 2, pp. 243-246, Feb. 1994.
[8] J.J. Dongarra, “Performance of Various Computers Using Standard Linear Equations Software,” Technical Report CS-89-85, Computer Science Dept., Univ. of Tennessee, K noxville, 1989.
[9] J.J. Dongarra, "The LINPACK Benchmark: An Explanation," Lecture Notes in Computer Science, vol. 297, pp. 456-474.Berlin: Springer, 1988.
[10] T.-Y. Feng, "A Survey of Interconnection Networks," Computer, vol. 14, no. 12, pp. 12-27, Dec. 1981.
[11] G.H. Golub and C.F. Van Loan, Matrix Computations, second ed., chapter 3. Baltimore: The Johns Hopkins Univ. Press, 1989.
[12] J.J. Hack, "Peak vs. Sustained Performance in Highly Concurrent Vector Machines," Computer, vol. 19, no. 9, pp. 11-19, Sept. 1986.
[13] M.D. Hill, "What Is Scalability?" Computer Architecture News, vol. 18, no. 4, pp. 18-21, Dec. 1990.
[14] R.W. Hockney, "Super-Computer Architecture," Infotech State of the Art Report: Future Systems 2, pp. 277-305.Maidenhead: Infotech, 1977.
[15] K. Hwang, Advanced Computer Architecture: Parallelism, Scalability, Programmability. McGraw-Hill, 1993.
[16] V. Kumar and A. Gupta,“Analyzing scalability of parallel algorithms and architectures,”Dep. Comput. Sci., Univ. Minnesota, Minneapolis, MN, Tech. Rep. TR 91-18, 1991; to appear inJ. Parallel and
Distrib. Comput., 1994. A shorter version appears inProc. 1991 Int. Conf. Supercomput., 1991, pp. 396–405.
[17] D.H. Lawrie, "Access and Alignment of Data in an Array Processor," IEEE Trans. Computers, vol. 24, no. 12, pp. 1,145-1,155, Dec. 1975.
[18] G.F. Lev, N. Pippenger, and L.G. Valiant, "A Fast Parallel Algorithm for Routing in Permutation Networks," IEEE Trans. Computers, vol. 30, no. 2, pp. 93-100, Feb. 1981.
[19] Y. Mun and H.Y. Youn,“Performance analysis of finite buffered multistage interconnection networks,” IEEE Trans. Computers, vol. 43, no. 2, pp. 153-162, Feb. 1994.
[20] D. Nassimi and S. Sahni, "A Self-Routing Benes Network and Parallel Permutation Algorithms," IEEE Trans. Computers, vol. 30, no. 5, pp. 332-340, May 1981.
[21] D. Nassimi and S. Sahni, "Parallel Algorithms to Set-Up the Benes Permutation Network," Proc. Workshop Interconnection Networks for Parallel and Distributed Processing, pp. 70-71, 1980.
[22] D. Nussbaum and A. Agarwal,“Scalability of parallel machines,”Commun. ACM, vol. 34, pp. 57–61, 1991.
[23] J.H. Patel, "Performance of Processor-Memory Interconnections for Multiprocessors," IEEE Trans. Computers, vol. 30, no. 10, pp. 771-780, Oct. 1981.
[24] C.S. Raghavendra and R.V. Boppana,"On self-routing in Benes and shuffle-exchange networks," IEEE Trans. Computers, vol. 40, no. 9, pp.1057-1064, Sept. 1991.
[25] H.S. Stone, "Parallel Processing with the Perfect Shuffle," IEEE Trans. Computers, vol. 20, no. 2, pp. 153-161, Feb. 1971.
[26] Y. Tamir and H.-C. Chi, "Symmetric Crossbar Arbiters for VLSI Communication Switches," IEEE Trans. Parallel and Distributed Systems, Vol. 4, No. 1, 1993, pp. 13-27.
[27] C.-L. Wu and T.-Y. Feng, "On a Class of Multistage Interconnection Networks," IEEE Trans. Computers, vol. 29, no. 8, pp. 694-702, Aug. 1980.
[28] Y.-M. Yeh, T.-Y. Feng, "On a Class of Rearrangeable Networks," IEEE Trans. Computers, vol. 41, no. 11, pp. 1,361-1,379, Nov. 1992.
[29] H.Y. Youn and Y. Mun, "On Multistage Interconnection Networks with Small Clock Cycles," IEEE Trans. Parallel and Distributed Systems, vol. 6, no. 1, pp. 86-93, Jan. 1995.
Index Terms:
Architecture scalability, parallel vector computers, shared memory, sustainable peak performance, theoretical peak performance.
Eskil Dekker, "Architecture Scalability of Parallel Vector Computers with a Shared Memory," IEEE Transactions on Computers, vol. 47, no. 5, pp. 614-624, May 1998, doi:10.1109/12.677257
Usage of this product signifies your acceptance of the
Terms of Use
|
{"url":"http://www.computer.org/csdl/trans/tc/1998/05/t0614-abs.html","timestamp":"2014-04-17T07:27:10Z","content_type":null,"content_length":"57561","record_id":"<urn:uuid:5a482619-90d4-46d8-9e25-64eb3247cdfb>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
South Plainfield Precalculus Tutor
Find a South Plainfield Precalculus Tutor
...I am a high school senior who loves math, science, and economics, and I am ready to share that passion for learning with others. I am confident that with hard work and a little bit of guidance,
any student can excel academically. I have taken AP-level Biology, Chemistry, Physics, Calculus and M...
21 Subjects: including precalculus, reading, chemistry, physics
...I love working with students from all grade levels and helping them get motivated to set learning goals and to succeed in school. In addition, I specialize in SAT prep and college advising for
high school students. I have a background in psychology; I have an MA in psychology and am currently a PhD student.
25 Subjects: including precalculus, English, reading, statistics
...As a result of work commitments throughout the state, I am fairly flexible with location. I look forward to hearing from you and scheduling a tutoring session! Thank you.I have worked at a non
profit organization for 8 months where I use Microsoft Outlook on a daily basis.
49 Subjects: including precalculus, Spanish, English, reading
...I first aim to build a friendship through common interests and course material, then my primary objective becomes teaching the information to the student and reinforcing it where ever
necessary. In Stevens, I find a passion for tutoring while having motivation to increase the students test score...
46 Subjects: including precalculus, chemistry, physics, calculus
...A graduate of Rutgers University with a Bachelor of Science in mechanical engineering, I am very familiar with the topics covered in the math section of the PSAT. Not only have I taken both the
PSAT and the SAT, but I have taken courses meant to prepare students for both. I am fluent in English and have excellent writing skills.
21 Subjects: including precalculus, chemistry, physics, calculus
|
{"url":"http://www.purplemath.com/south_plainfield_nj_precalculus_tutors.php","timestamp":"2014-04-21T07:39:04Z","content_type":null,"content_length":"24437","record_id":"<urn:uuid:b705718b-f345-4e73-b1dc-7d1a5af3000c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Suwanee SAT Math Tutor
Find a Suwanee SAT Math Tutor
...In my search for a job in this economy, I stumbled upon the idea of tutoring and realized just how well I'm suited for something like this! My long-term educational goal is to gain enough
experience working with children in order to apply to a PhD program in Child Development. I am the oldest of six siblings and love working with kids of all ages.
46 Subjects: including SAT math, Spanish, English, biology
...I am very good at recognizing difficulties in learning math and have the ability to correct those difficulties. I have taught middle school math for 3 years and taught high school math for 6
years. I also taught in the college environment for over 10 years and I am currently teaching Math.
20 Subjects: including SAT math, calculus, geometry, algebra 1
My passion is to launch students to become our next mathematicians, engineers and scientists. I am a GA certified T5 Teacher of 6-12 grade Physics, 6-12 grade Math with a Gifted In-Field
Endorsement. I have taught 7th grade Math, 8th grade Math and Physical Science, 10th grade Physical Science, 11th grade Physics, AP Physics B and AP Calculus AB classes.
25 Subjects: including SAT math, chemistry, physics, calculus
I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry,
algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe...
20 Subjects: including SAT math, chemistry, reading, geometry
...In my professional experience as a marketing rep and a loan officer with credit unions, as well as a retail manager, I frequent career days and speaking engagements, as I relish the opportunity
to reach out to high school students as they prepare for adulthood. My tutoring/coaching method begins...
42 Subjects: including SAT math, English, geometry, reading
|
{"url":"http://www.purplemath.com/suwanee_ga_sat_math_tutors.php","timestamp":"2014-04-17T13:23:15Z","content_type":null,"content_length":"23981","record_id":"<urn:uuid:d1de8b25-17cd-4139-8291-3692df9f11b4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Work Done by a Gas
Thermodynamics is a branch of physics which deals with the energy and work of a system. Thermodynamics deals only with the large scale response of a system which we can observe and measure in
experiments. In rocket science, we are most interested in the thermodynamics of high speed flows, and in propulsion systems which produce thrust by accelerating a gas. To understand how thrust is
created, it is useful to study the basic thermodynamics of gases.
The state of a gas is determined by the values of certain measurable properties like the pressure, temperature, and volume which the gas occupies. The values of these variables and the state of the
gas can be changed. On this figure we show a gas confined in a blue jar in two different states. On the left, in State 1, the gas is at a higher pressure and occupies a smaller volume than in State
2, at the right. We can represent the state of the gas on a graph of pressure versus volume, which is called a p-V diagram as shown at the right. To change the state of the gas from State 1 to State
2, we must change the conditions in the jar, either by heating the gas, or physically changing the volume by moving a piston, or by changing the pressure by adding or removing weights from the
piston. In some of these changes, we do work on, or have work done by the gas, in other changes we add, or remove heat. Thermodynamics helps us determine the amount of work and the amount of heat
necessary to change the state of the gas. Notice that in this example we have a fixed mass of gas. We can therefore plot either the physical volume or the specific volume, volume divided by mass,
since the change is the same for a constant mass. On the figure, we use physical volume.
Scientists define work W to be the product of force F acting through a distance s :
W = F * s
For a gas, work is the product of the pressure p and the volume V during a change of volume.
W = p * V
We can do a quick units check to see that pressure force / area times volume area * length gives units of force times length which are the units of work
W = (force / area) * (area * length) = force * length
In the metric system the unit of work is the Joule, in the English system the unit is the foot-pound. In general, during a change of state both the volume and the pressure change. So it is more
correct to define the work as the integrated, or summed variable pressure times the change of volume from State 1 to State 2. If we use the symbol S [ ] ds for integral, then:
W = S [p] dV
On a graph of pressure versus volume, the work is the area under the curve that describes how the state is changed from State 1 to State 2.
As mentioned above, there are several options for changing the state of a gas from one state to another. So we might expect that the amount of work done on, or by a gas could be different depending
on exactly how the state is changed. As an example, on the graph on the figure, we show a curved black line from State 1 to State 2 of our confined gas. This line represents a change brought about by
removing weights and decreasing the pressure and allowing the volume to adjust according to Boyle's law with no heat addition. The line is curved and the amount of work done on the gas is shown by
the red shaded area below this curve. We could, however, move from State 1 to State 2 by holding the pressure constant and increasing the volume by heating the gas using Charles' law. The resulting
change in state proceeds from State 1 to an intermediate State "a" on the graph. State "a" is at the same pressure as State 1, but at a different volume. If we then remove the weights, holding a
constant volume, we proceed on to State 2. The work done in this process is shown by the yellow shaded area. Using either process we change the state of the gas from State 1 to State 2. But the work
for the constant pressure process is greater than the work for the curved line process. The work done by a gas not only depends on the initial and final states of the gas but also on the process used
to change the state. Different processes can produce the same state, but produce different amounts of work.
Notice that not only does the work done by the gas depend on the process, but also the heat transferred to the gas. In the first process, the curved line from State 1 to State2, no heat was
transferred to the gas; the process was adiabatic. But in the second process, the straight line from State 1 to State "a" and then to State 2, heat was transferred to the gas during the constant
pressure process. The heat transferred to a gas not only depends on the initial and final states of the gas but also on the process used to change the state.
Guided Tours
• Thermodynamics:
Related Sites:
Rocket Index
Rocket Home
Exploration Systems Mission Directorate Home
|
{"url":"http://microgravity.grc.nasa.gov/education/rocket/work2.html","timestamp":"2014-04-20T14:17:30Z","content_type":null,"content_length":"13648","record_id":"<urn:uuid:c12d13b1-dad0-4584-ae6f-a75adfe8f158>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: st: RE: Survey design degrees of freedom help
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: st: RE: Survey design degrees of freedom help
From <Andrew.Clapson@statcan.gc.ca>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: RE: Survey design degrees of freedom help
Date Fri, 4 Sep 2009 12:01:14 -0400
It might also be worth simply running a 'naive regression' (just logit with pweights) to see the difference. (I'm not suggesting this is a valid empirical approach, of course)
On a (somewhat) related topic, I have been working with the 'subpop' option of the -svy- commands for my logit models, and though I understand the theoretical basis for specifying a subpopulation instead of simply using specifying 'if var1 == 1', in my case I found it made next to no difference in my standard errors.
It is sometimes interesting to run the simple, technically incorrect models in order to see what effect the more specialized specifications truly have on your particular dataset.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Stas Kolenikov
Sent: September 4, 2009 11:35 AM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: Survey design degrees of freedom help
You are good to go with the interpretation of your regression coefficients as they relate to the observation level. The way the point estimates are obtained is the pretty much the same as in the plain vanilla data (except that the weights attached to the observations; but you can attach weights without the complex sample structure, too). For instance, in regression case, you would have b = (X' w X)^{-1} (X' w Y) where w is the diagonal matrix of weights. You still use all the observations for point estimation, no compromises are taken there. PSU information only contributes to the variance estimation. Getting the point estimates and getting the variance estimates can (and should) be thought as rather unrelated issues. One of the exercises I give students in my survey statistics classes is to give examples of designs that would produce the same point estimates but different standard errors. I expect them to say something like "SRS vs. stratified sample" or "SRS vs. cluster sa!
mple". So getting the point estimates and getting the variance estimates can (and
should) be thought as rather unrelated issues.
Now, suppose you could collect the data for everybody in the population, and estimate the corresponding model -- let's call this a census regression. The standard errors that Stata gives you measure uncertainty about the census regression parameters, i.e., by how much your point estimates based on your sample might differ from the census parameters. If you sample everybody, your uncertainty is exactly zero. Of course that's not the case in practice when you sample but a tiny fraction of the total population. But still you might sample enough units in a cluster to be pretty sure what the contribution of that cluster is -- that's your independent piece of information for variance estimation (i.e., measuring uncertainty). Even if you took a different sample from that same cluster, you would get roughly the same number. That's why you want to treat the cluster as the independent piece of information. If all your clusters give about the same picture, you will get tight standard e!
rrors; if the clusters are all over, the standard errors will be large.
You can get a very useful measure of how much impact your sampling desigh has had on your estimation by typing -estat effect- after your
-svy- command. It prints DEFF, the design effect, and MEFF, misspecification effect. The first one shows how much the variances of the estimates change because of your complex sampling plan: this is the ratio of the actual variance to the varaince obtained assuming independent data. The second one is measuring by how much you would be mistaken if you applied a naive variance formula. DEFF is somewhat better understood, in general. If you have numbers smaller than 1, you have efficiency gains because of your clever sample design. If you have numbers more than say 3 or 5, your sampling design did not allow you to get a lot of information. Usually that's a consequence of tight clustering or wildly different weights (or both). Sometimes DEFF is interpreted as the efficient sample size: you need [your actual sample size]/DEFF observations in a simple random sample to get the same accuracy of the results. (Keep in mind that SRS are hell of a lot of trouble to set up -- you need th!
e complete list of the population which you never have unless you are The Big Brother aka Census Bureau :)). The largest DEFF I've seen in my practice was about a 100. This was a village level characteristic, access to tap water. Once you have a piped well in the village, every respondent is a 1 on that variable; if you don't have a pipe, everybody is 0. So instead of the total sample size of ~10,000 individuals, I only had ~100 villages that contributed to the estimation of the % with access to tap water. In terms of my above explanation, everybody in the cluster give exactly the same answer, and I am 100% sure about everybody in the cluster having or not having access to tap water (no cluster level uncertainty). The independent piece of information for this variable is given at the cluster level, rather than an individual level. On the other hand, the variables that would reflect information, activity or decision making at the household or individual level, such as age or!
contraception use, would have DEFFs of the order of 1.5 or so in the
same data set.
Now, for your particular variable of interest, I imagine the knowledge of a park might be the cluster level variable (either there is a good park nearby, or there are none), while income is certainly a household level variable (although there would still be a tendency for income to be spatially correlated: there are rich neighborhoods, and there are poor neighborhoods). If you run -svy : mean- and -estat effects- on your park knowledge and income variables, I would imagine the first one would have a higher DEFF.
It's a shame you cannot find anybody to help you with statistics at UMN. Your statistics program is supposed to be one of the top 10 or so in the nation.
On Fri, Sep 4, 2009 at 8:09 AM, Jennifer Schmitt<jorg0206@umn.edu> wrote:
> Stas,
> Thank you for your answer, I've read the paper you suggested, but
> unfortunately my statistical background is very limited (probably part
> of why I've having a difficult time with this as it is), so I'm not
> sure I followed the entire thing. If I may, I have a follow-up
> question. If STATA only uses my villages (PSU) as my independent
> pieces of information and uses all my household data as data to help
> estimate my PSU and its variance does that mean STATA is not testing
> for differences in households, but differences among villages? In
> other words, when I run my logistic regression and find income to be
> positively associated with knowledge of a park does that really mean
> that villages with higher incomes have a greater odds of knowledge or
> that households with higher incomes have greater odds of knowledge? I
> want to be able to speak about households, but your explanation about
> what STATA is doing made me worry that STATA is only telling me about
> villages. Again, thank you for your help, I honestly have not found
> anyone else who can explain the "whys" behind STATA.
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-09/msg00187.html","timestamp":"2014-04-18T00:56:23Z","content_type":null,"content_length":"13225","record_id":"<urn:uuid:7d43d154-a0c9-4654-916a-f511779f48f4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gravitons from a loop representation of linearized gravity
Loop quantum gravity is based on a classical formulation of $3+1$ gravity in terms of a real $\mathrm{SU}\left(2\right)$ connection. Linearization of this classical formulation about a flat
background yields a description of linearized gravity in terms of a real $U\left(1\right)×U\left(1\right)×U\left(1\right)$ connection. A “loop” representation, in which holonomies of this connection
are unitary operators, can be constructed. These holonomies are not well defined operators in the standard graviton Fock representation. We generalize our recent work on photons and $U\left(1\right)$
holonomies to show that Fock space gravitons are associated with distributional states in the $U\left(1\right)×U\left(1\right)×U\left(1\right)$ loop representation. Our results may illuminate certain
aspects of the much deeper (and as yet unkown) relation between gravitons and states in nonperturbative loop quantum gravity. This work leans heavily on earlier seminal work by Ashtekar, Rovelli and
Smolin (ARS) on the loop representation of linearized gravity using complex connections. In the last part of this work we show that the loop representation based on the real $U\left(1\right)×U\left(1
\right)×U\left(1\right)$ connection also provides a useful kinematic arena in which it is possible to express the ARS complex connection-based results in the mathematically precise language currently
used in the field.
DOI: http://dx.doi.org/10.1103/PhysRevD.66.024017
• Received 22 April 2002
• Published 9 July 2002
© 2002 The American Physical Society
|
{"url":"http://journals.aps.org/prd/abstract/10.1103/PhysRevD.66.024017","timestamp":"2014-04-19T20:07:21Z","content_type":null,"content_length":"26242","record_id":"<urn:uuid:28b0b31f-e2ca-4a2d-9ad2-18b8dae92a66>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lectures on Image Processing : Alan Peters : Free Download & Streaming : Internet Archive
Alan Peters (June 1, 2007)
Updated 11 March 2013
There was an error in the Fourier Transform lecture (No. 6). The calculation of the wavelength of a Fourier component from its position on the Fourier plane was incorrect. That has been corrected.
Please download the latest version.
This is a 20-lecture series on Image Processing that I have created over the past 14 years (1999-2013) for my course, EECE/CS 253, at the Vanderbilt University School of Engineering. The files are
all in Adobe Acrobat (.pdf) format and MS Powerpoint (.ppt) format. They are quite large because of the images in them. The topics covered include: Image Enhancement by Point Operations, Color
Correction, The 2-D Fourier Transform and Convolution, Linear Spatial Filtering, Image Sampling and Rotation, Noise Reduction, and Mathematical Morphology for Image Processing.
Please note that these lectures are sets of slides, not videos. I made them using Matlab, Photoshop, Illustrator, Powerpoint, and Acrobat. Also I have not included the assignments since they are the
source of academic credit for the Vanderbilt Course, EECE/CS 253, and I do not want example solutions to be available.
IMPORTANT: If you do not have the Design Science MathType fonts installed on your computer, the Powerpoint files will not display correctly. For some bizarre and completely unexplained reason,
Powerpoint will not embed these fonts. You can download an installer for these fonts from the other files section of this page. Once you download it, double click on the executable to load the fonts.
This works for windows machines only. For macs and linux boxen, go to the Design Science web pages: http://www.dessci.com/en/dl/fonts/getfont.asp
From the syllabus:
This introductory course in image processing should give the student a working knowledge of the most commonly used methods and procedures for image enhancement and restoration. The emphasis of the
course is on practical results: given an image and a goal for its processing (e.g., feature enhancement, color correction, sharpening, warping, etc.) the student should be able to select and
implement an appropriate procedure to achieve that goal. Good practical results often depend on an understanding of the mathematics behind the procedures as well as the ability to write software to
implement the mathematics. Thus, there are significant mathematical and computational components to the course. In the past, most students have spent most of their time associated with this course
writing and debugging computer programs.
Recommended but not required: An introductory course in digital signal processing (such as EECE 214 or EECE 252) and proficiency in writing computer programs in C, C++, Matlab, or Mathematica. Matlab
is used in the class and the labs.
I am making these freely available for noncommercial use. If you use any of my slides or graphics, please cite me in the normal academic fashion. For example, if you were to use any of the slides
from Lecture 6, The Fourier Transform, cite the source as
Peters, Richard Alan, II, "The Fourier Transform", Lectures on Image Processing, Vanderbilt University, Nashville, TN, April 2008, Available on the web at the Internet Archive, http://www.archive.org
If you intend to use any of the slides in a product that you intend to sell, please contact me directly to obtain permission. Alan dot Peters at Vanderbilt dot edu.
Updated 18 July 2012
Two new lectures added: 19: JPEG compression and 20: High Dynamic Range Imaging
About the Fall 2011 - Summer 2012 Updates: I will be uploading new versions of all the lectures as the year progresses so that all the lectures are updated by mid Summer. As of 18 July 2011 Lectures
1-18 have been updated and two new lectures, 19: JPEG Compression and 20: High Dynamic Range Imaging have been added.
About the 28 April 2008 Update: I recompiled the pdf files from the ppt files (the originals) and made sure that all the fonts were embedded and that the start-in-full-screen mode was off. This may
solve the problem some folks were having with the PDF files. Please let me know if they do not work for you. Also, I retitled the lectures so that they would include the file type and appear in order
and I modified the above introduction to, I hope, better explain the contents of this archive. (Thanks to users LizBurl and d012560c for alerting me to the pdf problems!)
About the 10 April 2008 Update: Many of the lecture files have been revised, mainly to correct typos and various small errors. Some new material has been added as well.
Lecture 1 Intro: Included 3 slides on forensic analysis.
Lecture 2 Digital Images: Included four slides on colormapping and two on image scrambling.
Lecture 3 Point Processing: Added to explanations on slide 16.
Lecture 4 Color Perception: No changes other than date from 2006 to 2007.
Lecture 5 Color Correction: Added 19 slides on the color cube and on RGB vs HSV representation.
Lecture 6 Fourier Transform: Reversed the order of slides 85 and 86.
Lecture 7 Convolution: No changes other than date from 2006 to 2007.
Lecture 8 Frequency Filtering: Added slide on on ideal bandpass filter.
Lecture 9 Sharpening: No changes other than date from 2006 to 2007.
Lecture 10 Pixelization, Quantization: Added 12 slides on steganography (hiding one picture in another.)
Lecture 11 Sampling, Aliasing: Minor format changes.
Lecture 12 Resampling: Errors corrected on pp 34 & 108, 9 pages of examples added.
Lecture 13 Rotating: Added seven slides on interpolation and warping.
Lecture 14 Uncorrelated Noise: No changes other than date from 2006 to 2007.
Lecture 15 Correlated Noise: No changes other than date from 2006 to 2007.
Lecture 16 Median Filters: No changes other than date from 2006 to 2007.
Lecture 17 Binary Morphology: Slide 18 corrected. New slide (49) added. Minor changes (clarifications) made to 25 other slides.
Lecture 18 Grayscale Morphology: New slide (43) added.
This educational material is part of the collection: Additional University Lectures
About this Item
Contributor: Alan Peters
Creator: Alan Peters
Date: June 1, 2007
Creative Commons license: Attribution-Noncommercial 3.0
Write a review
Downloaded 119,886 times Reviews
Average Rating:
Reviewer: zein.ibrahim - - March 6, 2014
Subject: Great lectures
Thanks for these professional, valuable and clear slides.
Reviewer: Alan Peters - - May 4, 2011
Subject: Reference books
Thank you all for your kind words about my lecture series. I am very glad that you have found them useful. A few folks have requested the names of reference texts on the subject. My notes are mainly
from my own experience and many references. I do not use a textbook in my course. However, I do provide a list of texts for those who may want to consult one. They are listed in the 2010 syllabus
that I have just uploaded. It is the file EECE253_syllabus_F2010.pdf. You may notice that the syllabus covers topics that are not in the currently available lectures. That is because I am constantly
tweaking the course and adding new subject areas to it. Later this summer, I will be updating the lectures posted here so that they will be current with the course that I teach at Vanderbilt.
Reviewer: ImranShafi - - May 3, 2011
Subject: Reference book
Dear Professor
The slides are great but I am unable to grasp few concepts and would like to consult the book that you have used in making these slides
looking forward
Reviewer: cactuslee - - June 20, 2010
Subject: Reference of your content
Your image analysis lecture notes are really great,thank you for sharing with everybody. Can i ask for the reference of your lecture notes? Do you have text books about this lecture? I'm asking for
this as i need to site this content in my research paper. Thank you.
Reviewer: Ambak, Kamarudin - - April 22, 2009
Subject: Congratulation!! An Good Deed Regain Good Reciprocation
Hi Mr ALAN,
I just a few minute ago surf this very useful and interesting presentation of image processing topic which I really needed for my essential knowledge doing my Ph.D research. I hve already download
all ppt.slides 1-18 and also pdf format. But, it seem I couldn't downloaded for pdf file sildes 14 & 15. I tried for several time but still unable to finished the downloading operation. Do you have
any suggestion to solve this problem? or anyone Please...
Reviewer: Copyme - - March 29, 2009
Subject: EXCELLENT
Very good explanation about image processing and the software.
Reviewer: d012560c - - April 25, 2008
Subject: Alternate downloads (with smaller file size) available ...
To help save bandwidth, I have converted all 18 lecture notes from PDF format (210 MB) into DjVu format (74 MB) and uploaded them to:
Also included is the MPG animation.
------------------ DjVu INFO.------------------------------------------------------------------------
DjVu plug-in (free) for Windows, Mac OS X, or UNIX:
Open Source Reader for Windows or Mac OS X:
To compare quality of PDF vs. DjVu, take a look at:
The upload is archived in the TAR format, which can be opened with any common compression/decompression utility
such as WinZip, WinRar, Stuffit, 7-Zip, and many others.
Reviewer: lizburl - - April 22, 2008
Subject: help downloading
I have Adobe Acrobat but still cannot seem to open up any of the lectures. Is there something special I need to to do? I registered and believe my adobe is 8.1 professional.
Thanks, Lizburl
Reviewer: bala_sbc - - April 8, 2008
Subject: help
i couldn't download your material
even i had registered
Reviewer: ljiljana - - November 27, 2007
Subject: can't find it
ups, I just saw the note from Nov 1 and I find the files. anyway, perhaps it would be welcome that you add a remark on this (download loaction) in the description of the materials; as it is right now
- titled with 'other files' - this does not seem intuitive at all. cheers, ljiljana
Reviewer: rocks_hound - - November 17, 2007
Subject: Lectures are not here
The lecture videos are not here Mr. Peters. By lecture videos I mean the instructor standing in front of the class explaining the material. I do not seem them in the list to the left.
Reviewer: zendebadvatan - - November 1, 2007
Subject: quastoin!
where are lecture file for download!?
Reviewer: Eng. M. Aydi - - October 30, 2007
Subject: good work
thank you for good work
Reviewer: fcyakop - - October 30, 2007
Subject: thanks
it's just gr8
Reviewer: nadu4u - - October 4, 2007
Subject: good
very good lectures
Reviewer: online - - September 21, 2007
Subject: image
these r good
Reviewer: fly51fly - - September 18, 2007
Subject: PERFECT!
Thanks for your excellent work!
Reviewer: Himanshu Jain - - September 14, 2007
Subject: Extremely well explained
your efforts are explicitly reflecting in ur ppts.
Thx for ur great efforts
Reviewer: neemit - - September 13, 2007
Subject: download
whrs d download link...??
Reviewer: Ketroo - - August 30, 2007
Subject: ?
Where is the download???
Reviewer: LitaF - - July 18, 2007
Subject: Amazing
... is all that I can say about this course
Reviewer: zaizai - - June 15, 2007
Subject: ya
quite well
Reviewer: ichsanibrahim - - June 12, 2007
Subject: Good
Very good explanation about image processing
|
{"url":"https://archive.org/details/Lectures_on_Image_Processing","timestamp":"2014-04-16T22:17:02Z","content_type":null,"content_length":"50182","record_id":"<urn:uuid:8fa67557-24e3-4894-aefa-a6e8cfe23857>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rigid Body Dynamics
Two-Dimensional Rigid Body Dynamics
For two-dimensional rigid body dynamics problems, the body experiences motion in one plane, due to forces acting in that plane.
A general rigid body subjected to arbitrary forces in two dimensions is shown below.
The full set of scalar equations describing the motion of the body are:
is the mass of the body
is the sum of the forces in the x-direction
is the sum of the forces in the y-direction
is the acceleration of the center of mass
in the x-direction, with respect to an
inertial reference frame
xyz, which is the ground in this case
is the acceleration of the center of mass
in the y-direction, with respect to ground
is the angular acceleration of the rigid body with respect to ground
is the sum of the moments about an axis passing through the center of mass
(in the z-direction, pointing out of the page). This is defined as the sum of the torque
due to the forces acting on the body (about an axis passing through the center of mass
, and pointing in the z-direction). Using
is simply a different naming convention.
is the rotational inertia of the rigid body about an axis passing through the center of mass
, and pointing in the z-direction (out of the page)
Note that, if the rigid body were rotating about a fixed point
, the final moment equation would retain the same form if we were to choose point
instead of point
. So, the equation would become:
The figure below illustrates this situation.
Where the point
is a fixed point, attached to ground. A specific example of this would be a pendulum swinging about a fixed point.
Here are some examples of problems solved using two-dimensional rigid body dynamics equations:
The Physics Of A Golf Swing Trebuchet Physics Three-Dimensional Rigid Body Dynamics
For three-dimensional rigid body dynamics problems, the body experiences motion in all three dimensions, due to forces acting in all three dimensions. This is the most general case for a rigid body.
A general rigid body subjected to arbitrary forces in three dimensions is shown below.
The first three of the six scalar equations describing the motion of the body are force equations. They are:
is the acceleration of the center of mass
in the
-direction, with respect to ground (an
inertial reference frame
is the acceleration of the center of mass
in the
-direction, with respect to ground
is the acceleration of the center of mass
in the
-direction, with respect to ground
Note that the subscripts
indicate that the quantities are resolved along the
axes. For example, a force acting along the Z-axis is resolved into its components along the
axes in the above three equations. This can generally be done using trigonometry.
However, it is not necessary to resolve the quantities along the
axis. For the above three force equations, one can resolve the quantities along the XYZ axes instead.
To solve three-dimensional rigid body dynamics problems it is necessary to calculate six inertia terms for the rigid body, corresponding to the extra complexity of the three dimensional system. To do
this, it is necessary to define a local
axes which lies within the rigid body and is
to it (as shown in the figure above), so that it moves with the body. The six inertia terms are calculated with respect to
and depend on the orientation of
relative to the rigid body. So, a different orientation of
(relative to the rigid body) will result in different inertia terms. The reason that
is said to "move with the body" is because the inertia terms will not change with time as the body moves. So you only need to calculate the inertia terms
, at the initial position of the rigid body, and you are done. This has the advantage of keeping the mathematics as simple as possible. An added benefit of having
move with the rigid body is when simulating the motion of the body, over time. We can track the orientation of the body by tracking the orientation of
(since they move together).
For two-dimensional rigid body dynamics problems there is only one inertia term to consider, and it is
, as given above. For these problems
can be calculated with respect to any orientation of the rigid body, and it will always be the same, since the problem is planar. Therefore, we don't need to define an axes
that is attached to the rigid body, and has a certain orientation relative to it (like we do in three-dimensional problems). This is because, for planar problems (where motion is in one plane),
would be independent of the orientation of
(relative to the rigid body).
For the general case (where we have an arbitrary orientation of
within the rigid body), the last three equations describing the motion of the rigid body are moment (torque) equations. They are:
is the sum of the moments about the
-axis, passing through the center of mass
G ΣM[Gy]
is the sum of the moments about the
-axis, passing through the center of mass
G ΣM[Gz]
is the sum of the moments about the
-axis, passing through the center of mass
G w[x]
are the components of the angular velocity of the rigid body with respect to ground, and resolved along the local xyz axes. To calculate these components, one must first determine the angular
velocity vector of the rigid body with respect to the global XYZ axes, and then resolve this vector along the
directions to find the components
. This is often done using trigonometry.
are the components of the angular acceleration of the rigid body with respect to ground, and resolved along the local
axes. To calculate these components, one must first determine the angular acceleration vector of the rigid body with respect to the global XYZ axes, and then resolve this vector along the
directions to find the components
. This is often done using trigonometry.
is the rotational inertia of the rigid body about the
-axis, passing through the center of mass
G I[Gy]
is the rotational inertia of the rigid body about the
-axis, passing through the center of mass
G I[Gz]
is the rotational inertia of the rigid body about the
-axis, passing through the center of mass
G I[Gxy]
is the product of inertia (
) of the rigid body, relative to
xyz I[Gyz]
is the product of inertia (
) of the rigid body, relative to
xyz I[Gzx]
is the product of inertia (
) of the rigid body, relative to
The six inertia terms are evaluated as follows, using integration:
The orientation of
relative to the rigid body can be chosen such that
This orientation is defined as the principal direction of
With this simplification, the moment equations become:
These are known as the
Euler equations of motion
. Clearly, it is a good idea to choose the orientation of
so that it lies in the principal direction. For every rigid body a principal direction exists. If a body has two or three planes of symmetry, the principal directions will be aligned with these
planes. For the case where there are no symmetry planes in the body, the principal direction can still be found, but it involves solving a rather complicated cubic equation.
Note that, for the three moment equations and six inertia terms, their quantities
be with respect to the
axes (this is unlike the first three force equations, where this is optional). For the inertia terms, the reason for this is obvious since this is how they are defined. But for the moment equations,
the reason is rather complicated, but basically it comes down to how they are derived, which is discussed
For example, a moment acting about the Y-axis must be resolved into its components along the
axes in order to use the above moment equations. This can generally be done using trigonometry.
Note that, if the rigid body were rotating about a fixed point
, the above moment equations and six inertia terms would retain the same form if we were to choose point
instead of point
. You just replace the subscript
with the subscript
, and everything else stays the same. Note that the
axes would have origin at point
instead of point
The figure below illustrates this situation.
Where the point
is a fixed point, attached to ground. A specific example of this would be a spinning top precessing around a fixed point.
For two-dimensional rigid body dynamics problems the angular acceleration vector is always pointing in the same direction as the angular velocity vector. However, for three-dimensional rigid body
dynamics problems these vectors might be pointing in different directions, as shown below.
These vectors can be expressed as:
In two-dimensions, to find the angular acceleration you simply differentiate the magnitude of the angular velocity, with respect to time. In three-dimensions you have to account for the change in
direction of the angular velocity vector (since both might change with time), so this does complicate matters a bit. This is done by calculating the difference in the angular velocity vector over a
very small time step
, where
→0. To illustrate, see the figure below.
Using calculus, the angular acceleration is calculated as follows (taking the limit as
The three force equations, and three moment equations shown here for three-dimensional rigid body dynamics problems "fully describe"
possible rigid body motion. They comprise the
set of equations you need to solve the most general rigid body dynamics problems. All simplifications can be made from these six equations. For example, if one assumes planar motion, the six
equations reduce to the two-dimensional dynamics equations given above.
Note that the positive directions of the individual X, Y, Z axes are in the directions shown in the first figure. Similarly, the positive directions of the individual
axes are also in the directions shown in the first figure. This choice of sign convention is important when using the rigid body dynamics equations given here (especially the moment equations). This
is explained on the
equations of motion page
For additional background information see:
Parallel Axis And Parallel Plane Theorem A Closer Look At Velocity And Acceleration
Here are some examples of problems solved using three-dimensional rigid body dynamics equations:
The Physics Of Bowling Euler's Disk Physics Gyroscope Physics Return from Rigid Body Dynamics to Useful Physics Formulas page Return from Rigid Body Dynamics to Real World Physics Problems home page
|
{"url":"http://www.real-world-physics-problems.com/rigid-body-dynamics.html","timestamp":"2014-04-21T14:40:31Z","content_type":null,"content_length":"27035","record_id":"<urn:uuid:41a5fcd5-7cc7-4d09-9112-1bfc398fff89>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(Light/Electron Theory Update)
Copyright (C) 1999, 2000, 2001, 2002, 2014
by Lew Paxton Price and Herbert Martin Gibson
Last updated April 2, 2014.
Back - Main Menu
Electron in Its Home
A lightwave is produced by an electron. The electron is a vortex that is bringing into its center many revolutions of dynamic ether (nether) all the time. Most electrons are found in atoms where each
manages to exist within its own limited volume of space (space that is filled with nether). The atomic nucleus is about 10^-15 meter in diameter. Each atom is about 10^-10 meter in diameter. Each
electron center is about 10^-57 meter in diameter. These measurements imply that there is a lot of room between an atomic nucleus and its outer most electrons.
Electrons reside within their own energy levels ("orbits"). The outermost energy level causes the valence part in chemistry. Electrons far away from the nucleus can generate light when activated with
enough energy to rise outward within its residence much like an energetic rubber ball might rise within a funnel. When it is no longer provided with excess energy, it falls back into the bottom of
its funnel while bouncing back and forth from one "side" of the funnel to the other.
As the electron bounces, it must move first in one direction, and then move back the way it came. This process continues as it moves from the top of its residence to the bottom. This is a vibrating
electron that is creating a lightwave. As it rotates, its "mouth" turns from pointing in one direction to pointing in the opposite direction - and it generates a half-wave of light. For convenience
sake, I will call the first direction in which it moved "north" and the second direction "south".
When the electron moves north, its nether may be incoming in a clockwise revolution. When it moves south, its nether may be incoming in a counterclockwise revolution. No one knows which is the actual
direction of revolution of the incoming nether, but it changes according to the direction the electron is moving. So as seen from our viewpoint, when the revolution is clockwise and the electron is
moving northward, the incoming nether direction is to the right. When the electron is moving southward, the incoming nether direction is to the left. This change in incoming nether direction moves
outward as a half-wave of light.
Electrons prefer to move in the direction their mouths are pointing because it requires less energy to do so (there is less nether inertia to overcome). So the electron begins by moving north,
rotates to create a light half-wave, moves south, rotates to create a second light half-wave, moves north again, and continues this cycle until it arrives back at the bottom of its residence within
the atom. The half-waves combine to become full-cycle lightwaves and a "photon" of light is born.
The energy in the a lightwave comes from the electron rotations. The frequency comes from the number of times in one second that the rotations occur. The number of times in one second that rotations
occur is a function of the time an electron requires to travel from one rotation to the next. Today, the energy in two rotations is Planck's constant divided by one second of time. This convention is
a mistake. It should have been set up to have the time divisor incorporated into Planck's constant. But hindsight is always better. This will be explained in detail when we get into red shift.
A Short History
Planck introduced his constant in 1905 to explain the distribution in frequency of radiant energy in a cavity of a body as a function of the temperature of that body. He found that he could derive
the correct law of distribution with two assumptions: (1) each oscillator producing the radiant energy can possess only discrete amounts of energy or "nhf" where "n" is an integer, "h" is Planck's
constant, and "f" is the frequency of that oscillator, and (2) the probability that an oscillator has the energy "nhf" is proportional to e^ - nf / kT where "k" is the Boltzman constant and "T" is
the absolute temperature.
Bear in mind that Planck was experimenting with many oscillators (electrons) providing light in many frequencies over a period of time that was not exactly one second in duration. He was using
temperature to measure the energy.
In the 1880's, Wilhelm Hallwachs and Heinrich R. Hertz discovered the photoelectric effect. This was analyzed further by later experimenters. Incident light can eject electrons from metals. The
velocities of these electrons are independent of the intensity of the light but increase with its frequency. The number of electrons ejected per unit time is proportional to the intensity of the
In 1905, Einstein proposed that the photoelectric effect was caused by light concentrated in bundles or quanta, of energy "hf", where "h" is Planck's constant and "f" is the frequency of the light.
Each of the bundles can be absorbed only as a whole and by an individual electron. Thus the absorbing electron is given an additional kinetic energy equal to "hf". In passing through the surface
barrier of the metal, the electron loses from this energy a portion which can be designated as "hf[o]". The kinetic energy with which the electron emerges is then given by:
E[k] = (1/2)mv^2 = h ( f - f[o] )
This gives the maximum energy of ejection, since electrons can also lose some energy inside the metal before reaching the surface. The equation indicates that unless "f" is greater than "f[o]" the
electrons cannot escape, so there exists a low frequency limit for the ejection of any electrons. The equation gives no reference to the intensity of the light, but gives the energy of the ejected
electrons in terms of frequency only.
This expression, when later modified to take into account the various energies possessed by the electrons before they absorb the light, agreed with the results of the experiments in detail.
The author of some paragraphs in Encyclopedia Americana states: The equation itself, however, is completely paradoxical from the point of view that regards light as an electromagnetic wave and the
electrons as charged material particles. [In nether theory, there is no such paradox.]
In the foregoing, neither Planck nor Einstein considered the possibility that in the energy they were attributing to "hf" they should have considered "h" to have "1/t" included as part of it and "n"
to be separate.
Eddington published his book, The Expanding Universe, in 1933. Hubble's work with red shift came even later. The idea that "hf" is energy created the notions of energy being lost just because
wavelength is lengthened. The theories which followed were the result of a foundation in which math only was used to determine reality. When "hf" is understood as being a what it actually is rather
than just energy when applied to red shift, logic may again prevail. We should always beware of the integer one which is our Achilles heel in math and which, when applied to time, can be very
According to the Encyclopedia Americana:
"The next extremely important fundamental contribution which came from the field of X-ray research was made by Arthur Holly Compton in 1923. It had already been observed by Joseph Alexander Gray
(1920) that short wavelength X-radiation, after scattering from carbon and other low atomic number atoms, was somewhat more absorbable than the primary radiation, but still of a "hardness" so clearly
related to the primary radiation as to exclude its being a characteristic fluorescent radiation from the scatterer.
"Compton gave the following daring explanation of this effect. He supposed the bundles of radiation energy, "hf ", instead of being associated with spreading waves, to be propagated through space
from the source in the form of projectiles. When one of these projectiles (or "photons"), each with momentum "hf/c", was scattered by a loosely bound electron in some low atomic number scattering
material such as carbon, the electron would recoil under the impact, the more so the larger the angle of scattering and the higher the quantum energy (and hence the momentum) of the projectile. The
kinetic energy thus given to the electron at the expense of the photon explained the "softening" of the scattered radiation in a completely and quantitatively correct way.
"Using characteristic line radiation from a molybdenum target tube, Compton showed that in the spectrum of the scattered radiation there appeared lines each of which was shifted toward longer
wavelengths than its corresponding line in the primary radiation by an amount in complete accord with his theoretical explanation.
"The recoil electrons were also detected and shown by Compton and Simon to have the requisite speeds and directions of recoil. Even more than this was ascertained, for Jesse W. M. Dumond and Harry A.
Kirkpatrick succeeded in showing that the shifted line above referred to was notably broader than the unshifted line and this was satisfactorily explained by them (with complete quantitative
verification in all respects), to be due to the randomly directed velocities possessed initially by the atomic electrons which were the agents that scattered the photons. The presumptively dynamic
character of the electronic clouds in atoms both for gases and for solid metallic bodies was thus experimentally verified."
Enter Dynamic Ether (Nether)
The product of Planck's constant, "h", and frequency, "f ", equals the energy in a photon. The product of Planck's constant and a frequency of one is the kinetic energy, E[k], in the passage of one
wave of light. However, the correct quantum for light is the half-wave rather than the complete wave which is composed of two half-waves. This is because a photon is composed of transverse waves
caused by an electron reversing directions regularly and thus creating opposing accelerations in the ether. These accelerations move outward at the speed of light. Each acceleration is created by a
reverse in the electron's direction which reverses the rotation of the incoming nether. Two adjacent acceleration reversals create one lightwave.
It is the electron's reversals in direction during the production of a "photon" that creates the energy in the photon. The passage of time that the electron is moving between reversals in direction
cause the length of the half-wave or wave. One movement of the electron from one reversal to the next is what creates half of a wavelength of the light being produced. Two movements between
consecutive reversals creates a complete wavelength of light.
Very near the center of the electron vortex, tranverse and radial velocity vectors of the incoming nether are equal to "c", the speed of light. The radius where this occurs is the "Schwartzchild
radius" for the electron. The change in electron direction at this radius causes the nether tangential velocity vector to change from its original direction to the opposite direction, for a total
velocity change of "2c". This change happens over the period of time used for the electron to reverse direction, which is "t[s]".
This is not a true change in velocity for any particular volume of nether, but can be likened to changing the direction of a flow in a hose used while watering a lawn. The direction change is quick,
but applies to different volumes of water in the flow rather than the same volume. Of course, the electron is taking in a flow rather than sending it out. Although "2c/t[s]" appears to be an
"acceleration" when using dimensional analysis, such an acceleration would far exceed the reaction speed of the nether.
A single lightwave has a frequency of "1/t". So its energy is
h(1/t) = E[k] = mad (mad is the general formula for energy) This time, the specifics are
m[e] = electron mass, m[e](t[s]/t) = Mass entering the electron in time "t[s]",
2c/t[s] = acceleration, and ct[s]/2 = distance the acceleration pushes the average for the mass.
The distance is velocity multiplied by time. The velocity is "c" and the time is "t[s]". It represents the length of the Mass that must be accelerated as it is entering the electron center. The
distance "ct[s]" is divided by 2 because the lead end of it moves into the center at the beginning of the electron reversal and the tail end does not move into the center at all during the electron
reversal. The average is "ct[s]"/2. (0 + ct[s])/2 = ct[s]/2.
With a half-wave, the frequency is "1/(2t)".
h/(2t) = mad
h/(2t) = [m[e](t[s]/t)](2c/t[s])(ct[s]/2)
which reduces to
Equation 1. h/(2t) = [m[e](t[s]/t)]c^2
Moving Outward
The equation necessary to show the nature of Planck's constant is the one above. Planck's constant is a very strange creation considering what it is supposed to do. When physicists discovered that
the energy in a "photon" can be found as "hf" where "f" is frequency, they made it possible to separate "h" and "f" so that "h" has the units "md^2/t" in which "m" is mass, "d" is distance, and "t"
is time. Frequency has always been "n/t" in which "n" is the number of events and "n/t" is the number of events in one second. So the product of "hf" is "m(d^2/t^2)" which is "mv^2", kinetic energy.
When theoretical physicists created "hf" as energy and separated "h" and "f", the energy for a single lightwave became "h/t", forcing "h" to have the units "md^2/t" which is neither energy nor
momentum. Energy is "m(d^2/t^2)" and momentum is "m(d/t)". If theoretical physicists had incorporated "1/t" into "h" it would not be a bastardized hybrid and "hn" would be the energy in a photon of
any time duration. This would remove the confusion caused by a photon that is limited to one second versus a natural photon which is never one second in duration. It would also eliminate the problems
some physicists are experiencing with red shift.
The energy in a half-wave of light is that in the equation h/(2t) = [m[e](t[s]/t)]c^2.
The energy in a full lightwave is that in the equation h/t = 2[m[e](t[s]/t)]c^2.
By multiplying the above "h/t" by "n", we have the energy in a photon. But this energy must have a means of being transferred from one point to another through the nether. So momentum is employed as
a means to do so. As the acceleration "2c/t[s]" moves outward as a half-wave of light, it affects the momentum of the outward moving "ripple" that is the half-wave mass. The acceleration that is for
the half-wave moves through each circumference of incoming nether at the speed of light "c" - and "c" is the velocity vector of the reactive speed of the nether itself, and does not change.
This acceleration moving outward at speed "c" is like a "thickness" for the mass that is accelerated at each circumference. This thickness (the "ripple") has the same dimension at all distances from
the electron center. Unlike the theoretical spheres used to illustrate gravity in which nether compression is at work, with single electron inflow there is no appreciable compression of nether. So
the half-wave mass of the outward moving ripple grows in direct proportion to its distance (radius) from the source electron center. The distance the mass of the ripple moves in a tangential
direction decreases in direct proportion to its distance (radius) from the source electron center. The distance it moves tangentially is a function of the change in its tangential velocity. So the
momentum "mv" remains constant for the light half-wave regardless of the distance of the ripple from its source.
Because energy is a function of velocity squared, the energy in the half-wave decreases as it moves outward because the tangential velocity of the half-wave decreases. This is not a problem. As the
ripple approaches a receiver for the half-wave, the receiver's consequent ripple takes the momentum of the half-wave. This new ripple grows shorter with its mass decreasing, causing the tangential
velocity to increase. Momentum remains constant, but the interchange between mass and velocity causes them to give the receiving electron the same amount of energy that was there at the start of the
half-wave's journey.
Theoretical physicists agreed that the energy of a lightwave is "h/t" and that "h/ct" is the momentum of a lightwave. But energy is supposed to have the formula (1/2)mv^2 when momentum has the
formula "mv". In the case of light, according to the experimental results momentum is "(1/2)mv" rather than "mv".
h/t = 2[m[e](t[s]/t)]c^2 is energy. And
h/ct = 2[m[e](t[s]/t)]c is momentum.
These means that momentum is half the size that it should be. The question is why? This implies that either Planck's constant is half as large as it should be, or that Compton's momentum should be
twice as large as it should be.
Compton's "hf/c" is absolutely correct for the the momentum imparted from electron spin during an electron reversal such as is the case with a half-wave. For a full wave the change in energy is
double. Possibly because of this doubling for energy, the momentum of Compton appears to be half what is normal.
Or perhaps Compton was measuring half of the half-wave for his momentum. Because the two halves of the half-wave oppose to one another, this is about the only way that momentum can be measured.
Compton's work in discovering momentum has been satisfactorily verified and appears to be correct. The Hallwachs and Hertz experiment in discovering the photoelectric effect is based upon proportions
rather then absolutes. The velocities of the ejected electrons increased with the frequency of the light. What if the velocities of the electrons were based upon half-waves of light providing their
ejections? This might explain the paradox.
The answer is that all of the experimenters were correct. Momentum is "mv" and "v" is relative. As the half-wave passes by we see only the "v" relative to us. It can be to the left or to the right,
but it must be one or the other. Between half-wave passages, and relative to us, the mass of the ripple is moving tangentially in one direction at velocity "v". Its momentum is "mv" relative to us.
After the next half-wave passage, relative to us, the mass of the ripple is moving in the opposite direction at velocity "v" with momentum equal but in the opposite direction relative to us. We only
see momentum as "mv" after each passage of a half-wave.
If we go back to the beginning of the momentum at the electron Schwarzschild radius, we find that it is equal to m[e](t[s]/t)c relative to us. This is precisely the correct momentum relative to us,
but it will be in only one of two directions that oppose one another. So what Compton measured was what we can see relative to us - which is half what it should be in the strictest sense. The actual
change in momentum is
For more information, see:
Black Holes - More about the Electron
The definitions that follow in this section are from engineering physics, mechanics, etc. - things engineers have known and used for many years. If one still thinks that the electron is not a vortex,
and that it is a gyroscope-like particle, the following may cause him to pause.
Electron spin is more formally called electron angular momentum. Angular momentum is a stepchild of linear momentum. Linear momentum is the product of mass and velocity, "mv". Linear momentum is
handy in tracking some kinds of motion without having to use the energy equations. However, momentum is based upon velocity and velocity is always a relative quantity. So momentum changes according
to that to which velocity is relative. Energy is never really noticed until something accelerates something else and really has nothing to do with velocity except as a shortcut in math.
Angular momentum is based upon linear momentum with a radius of curvature added so that it becomes a means of measuring through the use of angular velocity. Its use is necessary when rotation is
Electron spin is, in reality, the means by which the vortex can exist. To those who think of the electron as a particle, spin is angular momentum. By definition, the angular momentum, p, of a
rotating body such as a gyroscope is
p = Iw = mr[g]^2w
where I = moment of inertia, w = angular velocity, r[g] = radius of gyration, and m = mass of the rotating body.
Center of Gyration
The center of gyration of a body is defined as a point that, if all the mass of the body were concentrated at that point, its moment of inertia would be the same as that of the body. In other words,
this is the center about which the body can rotate without moving linearly or vibrating.
Torque (Moment)
When working with rotation, "torque" or "moment" is the product of force and the distance between the force and the center of rotation. The distance is called the "moment arm", and the force is
simply the product of mass and acceleration [F = ma]. So the product of force and the radius or moment arm is the torque "T" or moment [T = Fr = mar].
Moment of Inertia
The moment of inertia "I of a body is defined as the sum of all moments of inertia of its parts. The moment of inertia of a part is defined as the product of its mass and the square of its distance
from the center of gyration. The distance from the center of gyration is "r[g]" known as the radius of gyration. The equation is
I = mr[g]^2
The need for a moment of inertia comes from angular velocity, and the energy and momentum of rotation or gyration. The sum of the moments of inertia of various parts of a body is difficult to
calculate with linear motion. The distance that one part is from the center of gyration is not the same as the distances of the other parts. Therefore, when calculating kinetic energy in a linear
fashion, the velocity squared part of
is not easy to average. Or when calculating the momentum in a linear fashion,
the velocity part is not easy to average. However, if it were possible to have all of the parts move with the same velocity, the calculation would be simple.
By going to a circular measure, the angle of rotation per length of time is the same for all of the parts. When translating to circular measure, in place of "v" we have (2pi)(n/t). The expression
(2pi) is one circumference measured in radians. A radian is the same length as a radius but is a circular linear measure. There are 2pi radians in one circle. The "n/t" is the number of
circumferences of rotation per second. The expression "(2pi){n/t)" is usually known as "w", and is the circular velocity. It applies to all parts of the rotating body equally, making it very
convenient for use.
Radius of Gyration
The radius of gyration of a body is defined as the square root of the quantity that is the moment divided by the mass of the body.
r[g] = (I/m)^1/2
This is just another version of the equation above for the moment of inertia.
I = mr[g]^2
Angular Velocity
Angular velocity of a body is its circular movement per unit of time. The circular movement is usually calculated in radians, with 2pi radians for every 360 degrees. An angle in radians is the arc
distance divided by the radius. Such an angle divided by time is angular velocity. The usual equation is
w = (2pi)(n/t)
This comes from a linear velocity of (2pi)r(n/t) in which "(2pi)r" is one circumference of a circle, and "n/t" is the number circles traversed in one second. For angular measure, it is divided by "r"
which converts it to radians per second rather than a straight linear distance per second.
Linear Momentum
Linear momentum equals the product of mass and velocity.
Linear momentum = mv
Linear momentum of a body rotating about an axis is "mv" in which "v" is (2pi)r[g](n/t).
Linear momentum of a body rotating about an axis = m[(2pi)r[g](n/t)]
Angular Momentum
Angular momentum is the linear momentum about an axis multiplied by its moment arm.
The linear momentum about an axis is m[(2pi)r[g](n/t)].
The moment arm is r[g].
Angular velocity is: w = (2pi)(n/t)
So the angular momentum "p" is
p = m[(2pi)r[g](n/t)]r[g]
p = mr[g]^2[(2pi)(n/t)]
p = mr[g]^2w
Using mr[g]^2 as the moment of inertia "I" makes calculating easier.
p = Iw = mr[g]^2w
As was shown above.
Finding r[g]
Let us assume that we have a cylinder like a length of pipe. The wall thickness of the pipe is infinitely small and it is rotating about an axis at its center where fluid would flow if the pipe were
in use. This means that all the parts of the pipe's mass are the same distance, "r", from the axis of rotation. Then the torque or rotational momentum can be computed in a linear fashion is "mrv"
where "m" is the sum of all the masses in the pipe, "r" is the distance of all of the masses from the axis of rotation, and "v" is the linear velocity at the pipe wall.
"v" may be in radians per second or "(2pi)r(n/t)" where "n" is revolutions and "n/t" is revolutions per second. Then
mrv = mr[(2pi)r(n/t)] = (2pi)mr^2(n/t).
p = mr^2[(2pi)(n/t)} = mr^2w.
The reasoning above is especially true in the case of electron momentum which has the same Mass moving inward at any radius and therefore has the same mass at any radius. So for the electron,
If we can solve the equation "p = mr[g]^2w" for "r[g]", we may have the radius of gyration.
However, the electron vortex extends to infinity and has a disc-like shape that is distorted by other forces at distances very far from its center. So it has a radius of gyration that ranges from
about 10^-57 meter (which is the Schwarzschild radius) to infinity. It has no particular radius of gyration because "m" (the total inward nether flow rate) is the same at all radii and the product of
velocity and circumference is always the same at all radii.
To elaborate, the transverse velocity of the of the incoming nether is the same as the incoming velocity at all radii, and is proportional to 1/r (same as the radius to the minus one power) - while
the circumference at all radii is proportional to the radius. This makes the product of the velocity and the circumference the same at all radii, meaning that the product of the mass, velocity, and
circumference are the same at all radii.
The result of the above is that the angular acceleration for the electron, according to the definition in the book, can be found at any radius where the mass and velocity are known. And there is a
gyroscopic action which depends upon gyration.
The electron produces an acceleration that we call planck's constant and moves outward in the form of a light wave. It takes about 10^-22 second for the electron to rotate to produce half of this
wave. The relatively slow rotation indicates that the electron has a tendency to remain oriented in space until acted upon by a force (one of the properties of a gyroscope). This proves that the
electron does have something like angular momentum. But this acceleration is merely a change in velocity caused by the 180 degree rotation of the electron during the production of a half-wave of
Rigid Body vs a Vortex
The foregoing has been used to prove that the electron has angular momentum when treating it as a rigid body. But the vortex is fluid and very "flexible" as compared to a rigid body. This is not so
true of the vortex center where the forces keep the nether in a strong grip. The incoming nether takes a shape similar to a modified cylinder or a hemisphere. Although the electron Mass increases in
density as it approaches the electron center, the fact that the electron acts like a small gravity funnel except for a difference in shape causes its mass to remain the same at any distance from its
[The electron's incoming nether (that produces micro-gravity) takes almost a disc-like shape as compared to the spherical shape of gravity funnel for a planet.]
What is usually considered the vortex in nether theory and has been treated as a rigid body, is actually the part close to the electron center. This is the part of the electron where the radius is
such that the exiting accelerating half-waves have not achieved lightspeed yet due to a very fast nether inflow. Outside of this distance from the electron center, there is more flexibility and the
speed of the incoming nether is quickly overcome by the light half-wave. This outside part is a reality, but difficult to use in working with angular acceleration because the electron nether flow
becomes masked or distorted by other nether flows. This is not a problem because we can use the part that we know best to establish angular momentum (since any radius will do for this purpose).
Electron Angular Momentum
The electron has an innate tendency to maintain something which seems to be angular momentum even though a specific radius of gyration does not exist. At any radius from the electron center, a
theoretical mass moves perpendicular to the theoretical radius at a theoretical angular velocity. The electron radius of gyration is any radius we can use from about 10^-57 (the Schwarzschild radius)
to infinity. But the easiest radius to use is the Schwarzschild radius itself because here we know that the velocity of both the incoming nether and the tangential nether is the speed of light "c".
Actually, the hole that is the electron center creates a vortex that is the electron. The vortex extends to infinity even though the electron center is constantly re-orienting itself. Because it is a
lightweight entity re-orienting itself frequently, most of the vortex becomes eclipsed by other forces as the radius increases. So it does not act like a body extending to infinity. Also, the
geometrical law for nether inflow into the electron means that the higher speeds of inflowing Mass at shorter distances from the electron center dominate the vortex that extends outward. This
domination is so great that the electron mimics a solid body - even though the electron vortex is actually a flexible disc or hemisphere. Yet it is true that the use of the equation for a solid body
- in the strictest sense - is improper for electron angular momentum.
Quantum Spin
Angular momentum is said to be already known as "-h/2", and the equation which describes it is of theoretical value for use in discovering the nature of a photon. But this is not angular momentum.
In quantum theory, spin is considered a quality that can be accepted but is not really angular momentum as we understand it. The Bohm interpretation of quantum theory is not far from nether theory in
many ways, but is largely designed to express "large-scale" results without knowing the details of how these are achieved. In quantum theory, the Planck unit of action, "h" equals "h/(2pi)", and is
the unit used for spin.
The value h/(2pi) is used for the Planck unit of action. It comes from "hf" as the energy of a photon - "hf" is the same as the energy in one lightwave multiplied by the number of of waves in a
photon. And "f" is merely the number of waves per second. If one assumes that a wave is a cyclic thing produced with a circumference, then "2pi" is the circumference of that wave in radian form. So h
/(2pi" is a logical means of having a unit of action for the electron.
"S" is spin and "S = M[s]h".
"M[s]" is the quantum number which can be "-1/2" for electron spin "up" or "1/2" for spin "down". Most of the time "M[s]" and "h" are combined so that "h" is a spin of one and "h/2" is a spin of "1/
2". This value of electron spin has been determined by math and experiment when dealing with photons.
So for the electron, the unit of quantum spin is considered "h/2" or "h/(4pi)" - and h is called the "Planck unit of action".
Calculating Electron Angular Momentum
Angular momentum, "p", is usually defined as the product of the moment of inertia, "I", and the angular velocity, "w". With "v" for velocity, "n" for number of revolutions, "t" for time, and the
subscript "g" for "gyration", the correct equations follow.
p = Iw = m[g]r[g]^2w n[g] = n/t = v[g]/[(2pi)r[g]]
w = (2pi)(n/t)
w = (2pi)n[g]
w = (2pi){v[g]/[(2pi)r[g]]}
w = v[g]/r[g]
p = m[g]r[g]^2w
p = m[g]r[g]^2(v[g]/r[g])
p = m[g]r[g]v[g]
The value of "r(mv)" for the electron does not change as one moves from its center outward. So we may use the mass for the electron "m[e]", the Schwarzschild radius "r[s]" and the velocity at the
Schwarzschild radius "c" (the speed of light) in the equation.
p = m[e]r[s]c
p = (9.10956x10^-31 kilogram)(1.3530x10^-57 meter) (2.9979x10^8 meters/second)
p = 3.6949819x10^-79 kilogram meter^2/second
The above value is incredibly small, but that appears to be the answer.
Electron Angular Momentum = p = 3.6949819x10^-79 kilogram meter^2/second
|
{"url":"http://www.lewpaxtonprice.us/litelec.htm","timestamp":"2014-04-18T05:30:27Z","content_type":null,"content_length":"36999","record_id":"<urn:uuid:f8c9d61a-388a-41b2-98a9-42a0f04acaf6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reflection Groups in Finite Geometry
A survey paper in
the current Bulletin of the American Mathematical Society
(Vol. 45, No. 1, January 2008, pp. 1-60) is titled "
Reflection Groups in Algebraic Geometry
." That paper deals with groups defined over fields of characteristic zero; this note is to point out some references for reflection groups over fields of positive characteristic.
Recall that a reflection group may be defined as a group of linear transformations of a vector space over a (possibly finite) field that is generated by reflections-- transformations that fix a
hyperplane pointwise.
Characteristic Two:
For characteristic two, there exist easily visualized reflection groups acting on the 2x2 square, the 2x2x2 cube, the 4x4 square, and the 4x4x4 cube. For details, see
Binary Coordinate Systems
Finite Geometry of the Square and Cube
|
{"url":"http://finitegeometry.org/sc/gen/refl.html","timestamp":"2014-04-17T00:49:52Z","content_type":null,"content_length":"6510","record_id":"<urn:uuid:c911e1fc-e489-4b1f-acae-907e66860a33>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
|
13 May 04:47 2013
A type not inferred with RankNTypes
Akio Takano <tkn.akio <at> gmail.com>
2013-05-13 02:47:03 GMT
The attached program does not typecheck if I don't include a type signature for 'bar' (the line C). I can't figure out if this is a limitation in the type system or a bug in GHC. One thing that
confuses me is that replacing the line (B) with (A) makes the program typecheck.
Could anyone help me figuring out what is going on?
I'm using GHC 7.6.2. The error was:
% ghc forall.hs
[1 of 1] Compiling Foo ( forall.hs, forall.o )
Could not deduce (Fractional a) arising from the literal `0.1'
from the context (Num (Scalar t), Scalar t ~ a)
bound by a type expected by the context:
(Num (Scalar t), Scalar t ~ a) => AD t
at forall.hs:18:7-13
Possible fix:
add (Fractional a) to the context of
a type expected by the context:
(Num (Scalar t), Scalar t ~ a) => AD t
or the inferred type of bar :: a
In the first argument of `foo', namely `0.1'
In the expression: foo 0.1
In an equation for `bar': bar = foo 0.1
Takano Akio
Glasgow-haskell-users mailing list
Glasgow-haskell-users <at> haskell.org
|
{"url":"http://comments.gmane.org/gmane.comp.lang.haskell.glasgow.user/23710","timestamp":"2014-04-17T18:59:32Z","content_type":null,"content_length":"20295","record_id":"<urn:uuid:a5b521ef-408e-4965-a734-9715a19e4c16>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about mixture estimation on Xi'an's Og
Here is a review of Finite Mixture Models (2000) by Geoff McLachlan & David Peel that I wrote aeons ago (circa 1999), supposedly for JASA, which lost first the files and second the will to publish
it. As I was working with my student today, I mentioned the book to her and decided to publish it here, if only because I think the book deserved a positive review, even after all those years! (Since
then, Sylvia Frühwirth-Schnatter published Finite Mixture and Markov Switching Models (2004), which is closer to my perspective on the topic and that I would more naturally recommend.)
Mixture modeling, that is, the use of weighted sums of standard distributions as in
$\sum_{i=1}^k p_i f({\mathbf y};{\mathbf \theta}_i)\,,$
is a widespread and increasingly used technique to overcome the rigidity of standard parametric distributions such as f(y;θ), while retaining a parametric nature, as exposed in the introduction of my
JASA review to Böhning’s (1998) book on non-parametric mixture estimation (Robert, 2000). This review pointed out that, while there are many books available on the topic of mixture estimation, the
unsurpassed reference remained the book by Titterington, Smith and Makov (1985) [hereafter TSM]. I also suggested that a new edition of TSM would be quite timely, given the methodological and
computational advances that took place in the past 15 years: while it remains unclear whether or not this new edition will ever take place, the book by McLachlan and Peel gives an enjoyable and
fairly exhaustive update on the topic, incorporating the most recent advances on mixtures and some related models.
Geoff McLachlan has been a major actor in the field for at least 25 years, through papers, software—the book concludes with a review of existing software—and books: McLachlan (1992), McLachlan and
Basford (1988), and McLachlan and Krishnan (1997). I refer the reader to Lindsay (1989) for a review of the second book, which is a forerunner of, and has much in common with, the present book.
Continue reading
|
{"url":"http://xianblog.wordpress.com/tag/mixture-estimation/","timestamp":"2014-04-19T17:03:30Z","content_type":null,"content_length":"84166","record_id":"<urn:uuid:f79bdd60-45b1-4016-b8df-586fe3275514>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 14 - Interest Rate and Currency Swaps
CHAPTER 14 INTEREST RATE AND CURRENCY SWAPS
ANSWERS & SOLUTIONS TO END-OF-CHAPTER QUESTIONS AND PROBLEMS
1. Alpha and Beta Companies can borrow for a five-year term at the following rates:
Alpha Beta
Moody’s credit rating Aa Baa
Fixed-rate borrowing cost 10.5% 12.0%
Floating-rate borrowing cost LIBOR LIBOR + 1%
a. Calculate the quality spread differential (QSD).
b. Develop an interest rate swap in which both Alpha and Beta have an equal cost savings in
their borrowing costs. Assume Alpha desires floating-rate debt and Beta desires fixed-rate debt.
No swap bank is involved in this transaction.
a. The QSD = .5%.
b. Alpha needs to issue fixed-rate debt at 10.5% and Beta needs to issue floating rate-debt at
LIBOR + 1%. Alpha needs to pay LIBOR to Beta. Beta needs to pay 10.75% to Alpha. If this is
done, Alpha’s floating-rate all-in-cost is: 10.5% + LIBOR - 10.75% = LIBOR - .25%, a .25%
savings over issuing floating-rate debt on its own. Beta’s fixed-rate all-in-cost is: LIBOR+ 1% +
10.75% - LIBOR = 11.75%, a .25% savings over issuing fixed-rate debt.
2. Do problem 1 over again, this time assuming more realistically that a swap bank is involved
as an intermediary. Assume the swap bank is quoting five-year dollar interest rate swaps at
10.7% - 10.8% against LIBOR flat.
Solution: Alpha will issue fixed-rate debt at 10.5% and Beta will issue floating rate-debt at
LIBOR + 1%. Alpha will receive 10.7% from the swap bank and pay it LIBOR. Beta will pay
10.8% to the swap bank and receive from it LIBOR. If this is done, Alpha’s floating-rate all-in-
cost is: 10.5% + LIBOR - 10.7% = LIBOR - .20%, a .20% savings over issuing floating-rate debt
on its own. Beta’s fixed-rate all-in-cost is: LIBOR+ 1% + 10.8% - LIBOR = 11.8%, a .20%
savings over issuing fixed-rate debt.
Chapter 14 - Interest Rate and Currency Swaps
3. Company A is a AAA-rated firm desiring to issue five-year FRNs. It finds that it can issue
FRNs at six-month LIBOR + .125 percent or at three-month LIBOR + .125 percent. Given its
asset structure, three-month LIBOR is the preferred index. Company B is an A-rated firm that
also desires to issue five-year FRNs. It finds it can issue at six-month LIBOR + 1.0 percent or at
three-month LIBOR + .625 percent. Given its asset structure, six-month LIBOR is the preferred
index. Assume a notional principal of $15,000,000. Determine the QSD and set up a floating-
for-floating rate swap where the swap bank receives .125 percent and the two counterparties
share the remaining savings equally.
Solution: The quality spread differential is [(Six-month LIBOR + 1.0 percent) minus (Six-month
LIBOR + .125 percent) =] .875 percent minus [(Three-month LIBOR + .625 percent) minus
(Three-month LIBOR + .125 percent) =] .50 percent, which equals .375 percent. If the swap
bank receives .125 percent, each counterparty is to save .125 percent. To affect the swap,
Company A would issue FRNs indexed to six-month LIBOR and Company B would issue FRNs
indexed three-month LIBOR. Company B might make semi-annual payments of six-month
LIBOR + .125 percent to the swap bank, which would pass all of it through to Company A.
Company A, in turn, might make quarterly payments of three-month LIBOR to the swap bank,
which would pass through three-month LIBOR - .125 percent to Company B. On an annualized
basis, Company B will remit to the swap bank six-month LIBOR + .125 percent and pay three-
month LIBOR + .625 percent on its FRNs. It will receive three-month LIBOR - .125 percent from
the swap bank. This arrangement results in an all-in cost of six-month LIBOR + .825 percent,
which is a rate .125 percent below the FRNs indexed to six-month LIBOR + 1.0 percent
Company B could issue on its own. Company A will remit three-month LIBOR to the swap bank
and pay six-month LIBOR + .125 percent on its FRNs. It will receive six-month LIBOR + .125
percent from the swap bank. This arrangement results in an all-in cost of three-month LIBOR
for Company A, which is .125 percent less than the FRNs indexed to three-month LIBOR + .125
percent it could issue on its own. The arrangements with the two counterparties net the swap
bank .125 percent per annum, received quarterly.
Chapter 14 - Interest Rate and Currency Swaps
*4. A corporation enters into a five-year interest rate swap with a swap bank in which it agrees
to pay the swap bank a fixed rate of 9.75 percent annually on a notional amount of €15,000,000
and receive LIBOR. As of the second reset date, determine the price of the swap from the
corporation’s viewpoint assuming that the fixed-rate side of the swap has increased to 10.25
Solution: On the reset date, the present value of the future floating-rate payments the
corporation will receive from the swap bank based on the notional value will be €15,000,000.
The present value of a hypothetical bond issue of €15,000,000 with three remaining 9.75
percent coupon payments at the new fixed-rate of 10.25 percent is €14,814,304. This sum
represents the present value of the remaining payments the swap bank will receive from the
corporation. Thus, the swap bank should be willing to buy and the corporation should be willing
to sell the swap for €185,696.
5. DVR, Inc. can borrow dollars for five years at a coupon rate of 2.75 percent. Alternatively, it
can borrow yen for five years at a rate of .85 percent. The five-year yen swap rates are .64--.70
percent and the dollar swap rates are 2.41--2.44 percent. The current ¥/$ exchange rate is
87.575. Determine the dollar AIC and the dollar cash flow that DVR would have to pay under a
currency swap where it borrows ¥1,750,000,000 and swaps the debt service into dollars. This
problem can be solved using the excel spreadsheet CURSWAP.xls.
Chapter 14 - Interest Rate and Currency Swaps
Solution: Since the dollar AIC is 2.66% and the DVR’s dollar borrowing rate is 2.75%, it should
borrow yen and swap into dollars. The swap locks-in the dollar cashflows DVR needs to cover
the yen debt service. The output from using the excel spreadsheet CURSWAP.xls is:
Cross-Currency Swap Analyzer
FC Bond FC $ Actual
Year Cashflow Received Paid $ Cashflow
0 1,750,000,000 -1,768,027,402 20,188,723 19,982,872
1 -14,875,000 14,875,000 -492,605 -492,605
2 -14,875,000 14,875,000 -492,605 -492,605
3 -14,875,000 14,875,000 -492,605 -492,605
4 -14,875,000 14,875,000 -492,605 -492,605
- -
5 1,764,875,000 1,764,875,000 -20,681,328 20,681,328
AIC 0.85% 0.64% 2.44% 2.66%
Face Value: 1,750,000,000 Bid Ask
Coupon Rate: 0.850% Spot FX Rate: 87.57500 87.57500
OP as % of
Par: 100.000% FC Swap Rate: 0.64% 0.70%
Fee: 0.000% $ Swap Rate: 2.41% 2.44%
6. Karla Ferris, a fixed income manager at Mangus Capital Management, expects the current
positively sloped U.S. Treasury yield curve to shift parallel upward.
Ferris owns two $1,000,000 corporate bonds maturing on June 15, 1999, one with a
variable rate based on 6-month U.S. dollar LIBOR and one with a fixed rate. Both yield 50 basis
points over comparable U.S. Treasury market rates, have very similar credit quality, and pay
interest semi-annually.
Ferris wished to execute a swap to take advantage of her expectation of a yield curve shift
and believes that any difference in credit spread between LIBOR and U.S. Treasury market
rates will remain constant.
Chapter 14 - Interest Rate and Currency Swaps
a. Describe a six-month U.S. dollar LIBOR-based swap that would allow Ferris to take
advantage of her expectation. Discuss, assuming Ferris’ expectation is correct, the change in
the swap’s value and how that change would affect the value of her portfolio. [No calculations
required to answer part a.]
Instead of the swap described in part a, Ferris would use the following alternative derivative
strategy to achieve the same result.
b. Explain, assuming Ferris’ expectation is correct, how the following strategy achieves the
same result in response to the yield curve shift. [No calculations required to answer part b.]
Settlement Date Nominal Eurodollar Futures Contract Value
12-15-97 $1,000,000
03-15-98 1,000,000
06-15-98 1,000,000
09-15-98 1,000,000
12-15-98 1,000,000
03-15-99 1,000,000
c. Discuss one reason why these two derivative strategies provide the same result.
CFA Guideline Answer
a. The Swap Value and its Effect on Ferris’ Portfolio
Because Karla Ferris believes interest rates will rise, she will want to swap her $1,000,000
fixed-rate corporate bond interest to receive six-month U.S. dollar LIBOR. She will continue to
hold her variable-rate six-month U.S. dollar LIBOR rate bond because its payments will increase
as interest rates rise. Because the credit risk between the U.S. dollar LIBOR and the U.S.
Treasury market is expected to remain constant, Ferris can use the U.S. dollar LIBOR market to
take advantage of her interest rate expectation without affecting her credit risk exposure.
Chapter 14 - Interest Rate and Currency Swaps
To execute this swap, she would enter into a two-year term, semi-annual settle, $1,000,000
nominal principal, pay fixed-receive floating U.S. dollar LIBOR swap. If rates rise, the swap’s
mark-to-market value will increase because the U.S. dollar LIBOR Ferris receives will be higher
than the LIBOR rates from which the swap was priced. If Ferris were to enter into the same
swap after interest rates rise, she would pay a higher fixed rate to receive LIBOR rates. This
higher fixed rate would be calculated as the present value of now higher forward LIBOR rates.
Because Ferris would be paying a stated fixed rate that is lower than this new higher-present-
value fixed rate, she could sell her swap at a premium. This premium is called the “replacement
cost” value of the swap.
b. Eurodollar Futures Strategy
The appropriate futures hedge is to short a combination of Eurodollar futures contracts with
different settlement dates to match the coupon payments and principal. This futures hedge
accomplishes the same objective as the pay fixed-receive floating swap described in Part a. By
discussing how the yield-curve shift affects the value of the futures hedge, the candidate can
show an understanding of how Eurodollar futures contracts can be used instead of a pay fixed-
receive floating swap.
If rates rise, the mark-to-market values of the Eurodollar contracts decrease; their yields
must increase to equal the new higher forward and spot LIBOR rates. Because Ferris must
short or sell the Eurodollar contracts to duplicate the pay fixed-receive variable swap in Part a,
she gains as the Eurodollar futures contracts decline in value and the futures hedge increases
in value. As the contracts expire, or if Ferris sells the remaining contracts prior to maturity, she
will recognize a gain that increases her return. With higher interest rates, the value of the fixed-
rate bond will decrease. If the hedge ratios are appropriate, the value of the portfolio, however,
will remain unchanged because of the increased value of the hedge, which offsets the fixed-rate
bond’s decrease.
Chapter 14 - Interest Rate and Currency Swaps
Why the Derivative Strategies Achieve the Same Result
Arbitrage market forces make these two strategies provide the same result to Ferris. The
two strategies are different mechanisms for different market participants to hedge against
increasing rates. Some money managers prefer swaps; others, Eurodollar futures contracts.
Each institutional market participant has different preferences and choices in hedging interest
rate risk. The key is that market makers moving into and out of these two markets ensure that
the markets are similarly priced and provide similar returns. As an example of such an
arbitrage, consider what would happen if forward market LIBOR rates were lower than swap
market LIBOR rates. An arbitrageur would, under such circumstances, sell the futures/forwards
contracts and enter into a received fixed-pay variable swap. This arbitrageur could now receive
the higher fixed rate of the swap market and pay the lower fixed rate of the futures market. He
or she would pocket the differences between the two rates (without risk and without having to
make any [net] investment.) This arbitrage could not last.
As more and more market makers sold Eurodollar futures contracts, the selling pressure
would cause their prices to fall and yields to rise, which would cause the present value cost of
selling the Eurodollar contracts also to increase. Similarly, as more and more market makers
offer to receive fixed rates in the swap market, market makers would have to lower their fixed
rates to attract customers so they could lock in the lower hedge cost in the Eurodollar futures
market. Thus, Eurodollar forward contract yields would rise and/or swap market receive-fixed
rates would fall until the two rates converge. At this point, the arbitrage opportunity would no
longer exist and the swap and forwards/futures markets would be in equilibrium.
7. Rone Company asks Paula Scott, a treasury analyst, to recommend a flexible way to
manage the company’s financial risks.
Two years ago, Rone issued a $25 million (U.S.$), five-year floating rate note (FRN). The
FRN pays an annual coupon equal to one-year LIBOR plus 75 basis points. The FRN is non-
callable and will be repaid at par at maturity.
Scott expects interest rates to increase and she recognizes that Rone could protect itself
against the increase by using a pay-fixed swap. However, Rone’s Board of Directors prohibits
both short sales of securities and swap transactions. Scott decides to replicate a pay-fixed
swap using a combination of capital market instruments.
Chapter 14 - Interest Rate and Currency Swaps
a. Identify the instruments needed by Scott to replicate a pay-fixed swap and describe the
required transactions.
b. Explain how the transactions in Part a are equivalent to using a pay-fixed swap.
CFA Guideline Answer
a. The instruments needed by Scott are a fixed-coupon bond and a floating rate note (FRN).
The transactions required are to:
∙ issue a fixed-coupon bond with a maturity of three years and a notional amount of $25
million, and
∙ buy a $25 million FRN of the same maturity that pays one-year LIBOR plus 75 bps.
b. At the outset, Rone will issue the bond and buy the FRN, resulting in a zero net cash flow at
initiation. At the end of the third year, Rone will repay the fixed-coupon bond and will be repaid
the FRN, resulting in a zero net cash flow at maturity. The net cash flow associated with each
of the three annual coupon payments will be the difference between the inflow (to Rone) on the
FRN and the outflow (to Rone) on the bond. Movements in interest rates during the three-year
period will determine whether the net cash flow associated with the coupons is positive or
negative to Rone. Thus, the bond transactions are financially equivalent to a plain vanilla pay-
fixed interest rate swap.
8. A company based in the United Kingdom has an Italian subsidiary. The subsidiary
generates €25,000,000 a year, received in equivalent semiannual installments of €12,500,000.
The British company wishes to convert the euro cash flows to pounds twice a year. It plans to
engage in a currency swap in order to lock in the exchange rate at which it can convert the
euros to pounds. The current exchange rate is €1.5/£. The fixed rate on a plain vanilla
currency swap in pounds is 7.5 percent per year, and the fixed rate on a plain vanilla currency
swap in euros is 6.5 percent per year.
Chapter 14 - Interest Rate and Currency Swaps
a. Determine the notional principals in euros and pounds for a swap with semiannual payments
that will help achieve the objective.
b. Determine the semiannual cash flows from this swap.
CFA Guideline Answer
a. The semiannual cash flow must be converted into pounds is €25,000,000/2 = €12,500,000.
In order to create a swap to convert €12,500,000, the equivalent notional principals are
∙ Euro notional principal = €384,615,385
∙ Pound notional principal = £256,410,257
b. The cash flows from the swap will now be
∙ Company makes swap payment = €12,500,000
∙ Company receives swap payment = £9,615,385
The company has effectively converted euro cash receipts to pounds.
9. Ashton Bishop is the debt manager for World Telephone, which needs €3.33 billion Euro
financing for its operations. Bishop is considering the choice between issuance of debt
denominated in:
Euros (€), or
U.S. dollars, accompanied by a combined interest rate and currency swap.
a. Explain one risk World would assume by entering into the combined interest rate and
currency swap.
Bishop believes that issuing the U.S.-dollar debt and entering into the swap can lower
World’s cost of debt by 45 basis points. Immediately after selling the debt issue, World would
swap the U.S. dollar payments for Euro payments throughout the maturity of the debt. She
assumes a constant currency exchange rate throughout the tenor of the swap.
Exhibit 1 gives details for the two alternative debt issues. Exhibit 2 provides current
information about spot currency exchange rates and the 3-year tenor Euro/U.S. Dollar currency
and interest rate swap.
Chapter 14 - Interest Rate and Currency Swaps
Exhibit 1
World Telephone Debt Details
Characteristic Euro Currency Debt U.S. Dollar Currency Debt
Par value €3.33 billion $3 billion
Term to maturity 3 years 3 years
Fixed interest rate 6.25% 7.75%
Interest payment Annual Annual
Exhibit 2
Currency Exchange Rate and Swap Information
Spot currency exchange rate $0.90 per Euro ($0.90/€1.00)
3-year tenor Euro/U.S. Dollar
fixed interest rates 5.80% Euro/7.30% U.S. Dollar
b. Show the notional principal and interest payment cash flows of the combined interest rate
and currency swap.
Note: Your response should show both the correct currency ($ or €) and amount for each cash
Answer problem b in the template provided below:
Cash Flows Year 0 Year 1 Year 2 Year 3
of the Swap
World pays
Notional principal
Interest payment
World receives
Notional principal
Interest payment
c. State whether or not World would reduce its borrowing cost by issuing the debt denominated
in U.S. dollars, accompanied by the combined interest rate and currency swap. Justify your
response with one reason.
Chapter 14 - Interest Rate and Currency Swaps
CFA Guideline Answer
a. World would assume both counterparty risk and currency risk. Counterparty risk is the risk
that Bishop’s counterparty will default on payment of principal or interest cash flows in the swap.
Currency risk is the currency exposure risk associated with all cash flows. If the US$
appreciates (Euro depreciates), there would be a loss on funding of the coupon payments;
however, if the US$ depreciates, then the dollars will be worth less at the swap’s maturity.
Year 0 Year 1 Year 2 Year 3
World pays
Notional $3 billion €3.33 billion
Interest payment €193.14 million1 €193.14 million €193.14 million
World receives
Notional $3.33 billion €3 billion
Interest payment $219 million2 $219 million $219 million
€ 193.14 million = € 3.33 billion x 5.8%
$219 million = $ 3 billion x 7.3%
c. World would not reduce its borrowing cost, because what Bishop saves in the Euro market,
she loses in the dollar market. The interest rate on the Euro pay side of her swap is 5.80
percent, lower than the 6.25 percent she would pay on her Euro debt issue, an interest savings
of 45 bps. But Bishop is only receiving 7.30 percent in U.S. dollars to pay on her 7.75 percent
U.S. debt interest payment, an interest shortfall of 45 bps. Given a constant currency
exchange rate, this 45 bps shortfall exactly offsets the savings from paying 5.80 percent versus
the 6.25 percent. Thus there is no interest cost savings by selling the U.S. dollar debt issue and
entering into the swap arrangement.
|
{"url":"http://www.docstoc.com/docs/105414291/INTEREST-RATE-AND-CURRENCY-SWAPS","timestamp":"2014-04-23T18:51:27Z","content_type":null,"content_length":"73743","record_id":"<urn:uuid:7fec2c72-acc9-4e6c-8f7a-27a5c4b286a9>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
|
TWO Sigma Quantitative Research Analyst Salar
TWO Sigma Quantitative Research Analyst Salary
TWO Sigma Quantitative Research Analyst average salary is $139,500, median salary is $142,500 with a salary range from $120,000 to $150,000.
TWO Sigma Quantitative Research Analyst salaries are collected from government agencies and companies. Each salary is associated with a real job position. TWO Sigma Quantitative Research Analyst
salary statistics is not exclusive and is for reference only. They are presented "as is" and updated regularly.
Job Title Company Salary City Year
Quantitative Research Analyst TWO Sigma $150,000 New York, NY, 10001 04/14/2009
Quantitative Research Analyst TWO Sigma $150,000 New York, NY, 10001 09/07/2011
Quantitative Research Analyst TWO Sigma $150,000 New York, NY, 10001 10/01/2012
Quantitative Research Analyst TWO Sigma $150,000 New York, NY, 10001 10/01/2012
Quantitative Research Analyst TWO Sigma $150,000 New York, NY, 10001 10/01/2012
Quantitative Research Analyst TWO Sigma $135,000 New York, NY, 10001 05/24/2010
Quantitative Research Analyst TWO Sigma $135,000 New York, NY, 10001 03/19/2012
Quantitative Research Analyst TWO Sigma $130,000 New York, NY, 10001 04/14/2006
Quantitative Research Analyst TWO Sigma $125,000 New York, NY, 10001 09/16/2009
Quantitative Research Analyst TWO Sigma $120,000 New York, NY, 10001 06/01/2009
TWO Sigma Quantitative Research Analyst Jobs
Calculate how much you could earn
It's FREE. Based on your input and our analysis. How we do it?
All fields are required for calculation accuracy.
• We will send you an email to access your personalized report.
• We won’t share your email address
Related TWO Sigma Quantitative Research Analyst Salary
TWO Sigma Quantitative Research Analyst Jobs
Recent TWO Sigma Quantitative Research Analyst Salaries (April 16, 2014)
Quantitative Research Analyst TWO Sigma $150,000 New York, NY, 10001 10/01/2012
Quantitative Research Analyst TWO Sigma $135,000 New York, NY, 10001 03/19/2012
TWO Sigma Quantitative Research Analyst salary is full-time annual starting salary. Intern, contractor and hourly pay scale vary from regular exempt employee. Compensation depends on work experience,
job location, bonus, benefits and other factors.
|
{"url":"http://www.salarylist.com/company/TWO-Sigma/Quantitative-Research-Analyst-Salary.htm","timestamp":"2014-04-16T20:18:15Z","content_type":null,"content_length":"30420","record_id":"<urn:uuid:6f0f0211-b29f-423b-8689-0aea5212c8fb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How does the Special Permit drawing work? I heard you square points. Does that improve my drawing odds?
No attachments were found.
Each Special Permit category has a separate random drawing. Building up your Special Permit points gives you a better chance in these drawings.
If we compare the drawing to pulling a ticket out of a barrel, building up points gives you more tickets. The system squares your points. That means that a person with one point ends up with one
ticket in the barrel (1x1=1). A person with 10 points ends up with 100 tickets (10x10=100).
The system randomly pulls out all these tickets in order. At the end, each application will keep and use the lowest number it drew. The lower the number the better as spot number one earns the first
overall pick. With more points, an application has a better chance of drawing a better pick.
The system moves down the list of applications, checking hunt choices to see if it can award a permit. For example, if your first hunt choice is no longer available, then your second choice is
checked. If your second is not available, then the third choice is checked. If none of your hunt choices has a permit to award, then you earn a point and the system moves on to the next application.
It is important to think about the order and the amount of your hunt choices. If you only want to hunt one location, you might select only one hunt choice. In this case, drawing a permit is tougher
because the system only considers that one choice. However, you do not risk losing your points to a second or third hunt choice you did not really want. If instead you are open to drawing several
different hunts, using extra hunt choices increases your odds.
The weighted points system improves your odds, but the drawing is still random. Many factors affect the odds. The number of permits offered for a hunt is a major factor. Some hunts have only 1 permit
to award while others have as many as 750.
You also need to consider what hunt choices the applications ahead of you submitted. If they all submitted different choices than you, you could draw a hunt even if your application is way down the
list. On the same note, your application could have the second overall pick, but the application ahead of you chose the same hunt. If that hunt has only one permit to award, you would not draw a
permit from the second spot and instead earn a point for next year.
Related Questions
• When I check my drawing results online, why do I see ‘Points Used’ even though I did not draw a Special Permit?
• If I apply with a group, how many points does our group application enter the Special Permit drawing?
Question Details
Last Updated
12th of December, 2013
Would you like to...
User Opinions
96% 3%
(28 votes)
How would you rate this answer?
Thank you for rating this answer.
|
{"url":"http://wdfw.wa.gov/help/questions/214/How+does+the+Special+Permit+drawing+work%3F+I+heard+you+square+points.+Does+that+improve+my+drawing+odds%3F","timestamp":"2014-04-21T02:23:05Z","content_type":null,"content_length":"22466","record_id":"<urn:uuid:ba8e0ad9-0c55-4e34-9ae0-ebefc95d9fad>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weymouth Algebra 2 Tutor
Find a Weymouth Algebra 2 Tutor
...Algebra is also the shared worldwide language of science and engineering, business, digital arts or any other quantitative field. Students and their parents shouldn't hesitate to spend time on
this subject because a weak foundation in algebra is the most common reason for later struggles with mo...
23 Subjects: including algebra 2, chemistry, physics, calculus
...I have been a Precalculus tutor for more than 25 years. During this same time period,I have both taught and tutored students in Algebra 1, Algebra 2, and Trigonometry. As a result, I am
confident that you will find me to be very qualified to teach this subject.
6 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring
for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. I even earned a perfect score on the Math Subject Test.
36 Subjects: including algebra 2, chemistry, English, reading
...I have a BS in Mathematics. I've been tutoring Algebra for 5 years. I have prior experience totaling 5 years in helping kids prepare for standardized tests.
13 Subjects: including algebra 2, calculus, geometry, algebra 1
...I am also did very well. I am able to teach children the nuances of reading and reading interpretation and the meaning of words and intent and context. I write a lot for my career in insurance,
and therefore write very well.
90 Subjects: including algebra 2, chemistry, English, reading
|
{"url":"http://www.purplemath.com/Weymouth_Algebra_2_tutors.php","timestamp":"2014-04-20T21:44:20Z","content_type":null,"content_length":"23760","record_id":"<urn:uuid:4c07f9b2-7ea1-4a51-a5a6-190a8c4c5ee7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(Click for
Sample Schedules
Mathematics major requirements (B.A.)
The mathematics major is designed to meet the needs of students with a wide variety of interests. All students majoring in mathematics start with a basic core of required mathematics courses. The
student then builds on this foundation with a selection of five upper-division courses, chosen from one of three options. Students in the Teachers Option choose courses that meet the requirements for
state certification in mathematics. In the Applied Mathematics and Pure Mathematics options, students get a measure of depth by taking two year-long sequences in mathematics.
Required Courses
Students majoring in mathematics start with a core of five courses.
• MATH 142 - Calculus I.
• MATH 143 - Calculus II.
• MATH 244 - Calculus III (must be taken at St. Louis University with a grade of at least ā Cā ).
• MATH 266 - Principles of Mathematics.
• MATH 315 - Introduction to Linear Algebra.
Options for upper division courses:
The student must pass at least five additional upper-division mathematics courses to complete the major under the guidelines of the Applied Mathematics option, the Pure Mathematics option, or the
Teachers Option, described below. A GPA of 2.00 (ā Cā average) or higher is required in upper-division mathematics courses counting toward the major. The upper-division courses are built around
year-long sequences of courses in five areas of mathematics:
• MATH 355, and 455 or 457 - Differential Equations;
• MATH 401, 402 - Probability and Statistics;
• MATH 411, and 412 or 415 - Introduction to Abstract Algebra and Linear Algebra or Number Theory;
• MATH 421, and 422 or 423 - Introduction to Analysis and Metric Spaces or Multivariable Analysis;
• MATH 451, and 452 or 453 - Complex Variables.
Applied Mathematics
The Applied Mathematics option requires any two of the five year-long sequences listed above, plus a fifth upper-division mathematics course beyond the core mathematics requirement. This option is
appropriate for students planning on careers in industry, government agencies, actuarial sciences, etc. The student's career ambitions should guide the selection of year-long sequences. For example,
a career as an actuary will require expertise in probability and statistics; a career as an applied mathematician, differential equations.
Pure Mathematics
The Pure Mathematics option requires the two year-long sequences beginning with Introduction to Abstract Algebra and Introduction to Analysis, plus a fifth upper-division mathematics course beyond
the core mathematics requirement. This option is appropriate for students who intend to go on to graduate school in mathematics, or who plan careers in cryptography, computer science, teaching, etc.
Teachers Option
The Teachers Option requires the following courses, which satisfy requirements for teacher certification.
• MATH 401 - Elementary Theory of Probability
• MATH 405 - History of Mathematics
• MATH 411 - Elements of Modern Algebra
• MATH 441 - Foundations of Geometry or MATH 447 - Non-Euclidean Geometry
• One additional course chosen from the following:
□ MATH 355 - Differential Equations
□ MATH 402 - Introductory Mathematical Statistics
□ MATH 415 - Number Theory
□ (An appropriate upper-division mathematics elective may be substituted, with the approval of the student's mathematics mentor.)
Mathematics Minor Requirements
There are two options for students minoring in mathematics: the traditional Mathematics Minor and the Engineering Mathematics Minor.
Mathematics Minor
A minor in mathematics should consist of:
• MATH 142 - Calculus I
• MATH 143 - Calculus II
• MATH 244 - Calculus III
• MATH 266 - Principles of Mathematics
• MATH 315 - Introduction to Linear Algebra
• one further course in upper-division mathematics chosen with attention to prerequisites.
Engineering Mathematics Minor
Students seeking a minor in Engineering Mathematics must complete the three semesters of calculus and also complete four upper-division courses in subjects of importance to engineers, which include:
• MATH 311 - Linear Algebra for Engineers (offered Spring semesters only)
• MATH 320 - Numerical Analysis (offered occasionally)
• MATH 355 - Differential Equations
• MATH 360 - Combinatorics (offered occasionally)
• MATH 370 - Advanced Mathematics for Engineers
• MATH 401 - Elementary Theory of Probability
• MATH 402 - Introductory Mathematical Statistics
• MATH 403 - Probability and Statistics for Engineers
• MATH 451 - Introduction to Complex Variables
• MATH 452 - Complex Variables II
• MATH 453 - Geometric Topology (offered occasionally)
• MATH 455 - Nonlinear Dynamics and Chaos (offered Spring semesters only)
• MATH 457 - Partial Differential Equations (offered Fall semesters only)
• MATH 465 - Cryptography (offered occasionally)
Other upper-division mathematics courses may fulfill the course requirement for the Engineering Mathematics Minor, subject to approval by the Department of Mathematics and Computer Science. Students
must meet the prerequisites for all courses selected.
|
{"url":"http://mathcs.slu.edu/undergrad-math/requirements","timestamp":"2014-04-18T20:45:12Z","content_type":null,"content_length":"28606","record_id":"<urn:uuid:3c38dd9e-d2e8-4993-8ef7-1de3d0bbf411>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hingham, MA Trigonometry Tutor
Find a Hingham, MA Trigonometry Tutor
...I did not really begin to appreciate the genius of Isaac Newton until I was asked, as a young NASA employee, to code a computer program to solve orbital rendezvous problems. To this day I am
overwhelmed whenever I think of his gifts to us. Mechanics is the basis of physics, physics is the basis of most fields of application, particularly in engineering.
7 Subjects: including trigonometry, calculus, physics, algebra 1
...When this is combined with my many years of also teaching Algebra, I believe that you will find that I am well qualified to teach this subject. I have been a Precalculus tutor for more than 25
years. During this same time period,I have both taught and tutored students in Algebra 1, Algebra 2, and Trigonometry.
6 Subjects: including trigonometry, geometry, algebra 2, prealgebra
...In the last 3 years we have upgraded to a chess team where we play different schools around the area. My rating, based on some online play, might be around 1500 give or take a 100 points. I
have been teaching C++ at North Reading High School for 7 years.
19 Subjects: including trigonometry, calculus, physics, algebra 2
...I have had much success over the years with a lot of repeat and referral business. I have tutored Middle School math, High School Math, secondary H.S. entrance exam test prep, Sat, PSAT, ACT
(math and english) and SAT I and SAT II, Math. I have taught mddle school as well as High School.
19 Subjects: including trigonometry, geometry, GRE, algebra 1
...I don't just know the material; I know the student as well.I performed well in my physics courses as an MIT student. I have tutored students in classic mechanics, electricity, and magnetism. I
can offer help in both non-Calculus and Calculus based courses.
24 Subjects: including trigonometry, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/Hingham_MA_Trigonometry_tutors.php","timestamp":"2014-04-18T00:49:36Z","content_type":null,"content_length":"24176","record_id":"<urn:uuid:2c39aa86-fce7-49d4-b482-6943a786963d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US6292192 - System and method for the direct rendering of curve bounded objects
1. Field of the Invention
The present invention relates generally to computer graphics, and more particularly to a texture procedure for rendering curved bounded images on a graphics display device.
2. Related Art
A typical computer generated image of a curve bounded object utilizes a number of line segments to approximate the curved boundary. A pixel is a picture element of a graphics display device. Each
pixel may represent unique attributes such as color, lighting, texture, etc. As is well known in the relevant art(s), texture procedures may be used to provide visual detail for displayed graphical
objects. In a line segment technique, the more line segments used in the representation, the greater accuracy the ultimate display may contain. Each segment may contribute to one or more pixels of
the final image. However, the number of line segments that can be used to render a display of a curved bounded region is limited. It is limited by the resolution of the intended graphics display
device as well as the available memory space on the computer system. Furthermore, the more line segments used, the greater will be the calculation time, and thus the rendering time.
Conventional textures are image-based and composed of texels, which taken collectively form an image. A polygon is rendered with a mapping specified between the polygon's vertices and the texture.
This mapping is usually specified in texture coordinates (s, t). When the polygon is rendered into pixels, the texture coordinates of each pixel are determined and used to lookup a value in the
texture that is used in the drawing of the pixel. This value may be color, transparency, etc. Other conventional methods for rendering curved bounded regions have attempted various polygonal
approximations and other incremental methods that result in the same limitations described above with respect to line segment techniques. Therefore, what is needed is a texture procedure for
rendering curved based objects without the need to convert the graphics object into line segments or perform any type of tessellation.
The present invention is a system and method for rendering a curve bounded object to a graphics display device directly from a high level curve-based description. The method includes receiving a
curve-based description of the graphics object (e.g., character typeface) and dividing the graphics object into a rectangular mesh of texels. Each texel is detailed by defining a combination of
curved geometry functions and a boolean function. Each texel contains a miniature resolution independent image of bounded complexity. Taken collectively, the texels form a continuous or resolution
independent binary image. The result of the above steps transforms the graphics object into a geometric-texture. The method of the present invention, for each pixel to be displayed, then determines
the Cartesian (s, t) coordinate pair within the geometric-texture. The curved geometry functions and the boolean function defined for the texel containing the (s, t) pair are then evaluated. This
step is repeated for each (s, t) pair of each pixel of each polygon to be rendered. The result is an alpha value or color for each pixel and thus the display of the curve bounded object to a graphics
display device.
In a preferred embodiment, the method of the present invention utilizes two horizontal axis functions, f[0](s) and f[1](s), and two vertical axis functions, g[0](t) and g[1](t), within each texel in
creating the curved geometries.
One advantage of the present invention is that unlike conventional methods that used image-based textures, the invention uses procedural textures that have no inherent resolution and will remain
accurate when subject to arbitrary magnification.
Another advantage of the present invention is that the method can be performed using conventional tri-linear interpolation hardware when the curved geometry functions defined for each texel are cubic
Another advantage of the present invention is that the method may also be used to implement a class of procedural alpha texture for selectively drawing (trimming) graphic primitives.
The present invention will be described with reference to the accompanying drawings, wherein:
FIGS. 1A and 1B are a flowchart representing the overall preferred operation of the present invention;
FIG. 2 is an illustration of how Bézier curves divide a texel according to the present invention;
FIG. 3 is an illustration of how a texel is located within a texture according to the present invention;
FIG. 4 illustrates an example of the preprocessing and run-time transformations according to the present invention;
FIG. 5 is a block diagram of an exemplary computer system useful for implementing the present invention;
FIG. 6 is an illustration of a texture map on which the present invention would operate;
FIG. 7 is an illustration of a texture map divided into several texels according to the present invention;
FIG. 8 is an illustration of a texture map divided into several labeled texels according to the present invention;
FIG. 9 is an illustration of one texel of a texture map containing one curve according to the present invention;
FIG. 10 is an illustration of one texel of a texture map containing two curves according to the present invention;
FIG. 11 is an illustration of one texel of a texture map containing three curves according to the present invention;
FIG. 12 is an illustration of a texel containing a Bézier curve, g[0](t), defined as a function of the vertical axis according to the present invention;
FIG. 13 is an illustration of a texel containing a Bézier curve, f[0](s), defined as a function of the horizontal axis according to the present invention;
FIG. 14 is an illustration of a texel containing a Bézier curve, g[1](t), defined as a function of the vertical axis according to the present invention;
FIG. 15 is an illustration of a texel containing a Bézier curve, f[1](s), defined as a function of the horizontal axis according to the present invention;
FIG. 16 is an illustration of a texel containing four Bézier curves and the mapping done according to the present invention; and
FIGS. 17-24 illustrate how the texture procedure of the present invention can be performed using conventional tri-linear interpolation hardware.
The present invention relates to a texture procedure that uses conventional tri-linear interpolation hardware to compute whether a pixel is inside or outside a curved region. The method can be used
to quickly render characters (e.g., PostScript™ typefaces) directly from curved based descriptions.
The present invention is described in terms of a character-rendering example. This is for convenience only and is not intended to limit the application of the present invention. In fact, after
reading the following description, it will be apparent to one skilled in the relevant art how to implement the following invention in alternative embodiments (e.g., to implement a class of procedural
alpha texture for selectively trimming graphic primitives).
Referring to FIGS. 1A and 1B, texture procedure 100 illustrates the overall operation of the present invention. Texture procedure 100 begins at step 102 with control passing immediately to step 104.
In step 104, a computer stored (either digitized or synthesized) curve-based description of a curve bounded object is received and converted into a texture map. For example, a PostScript™ font
(character) might be received. They are defined by curves and straight lines. The texture map is then divided into a rectangular mesh of regions known as texels in a step 106. Next, in a step 108,
the interior of each texel, based on the shape of part of the curve bounded object appearing in the texel, is detailed by defining up to four Bézier curves. Use of the Bézier formulation for
constructing curves to display curved bounded regions (surfaces) is well known in the relevant art. See Hearn, Donald and Baker, M. Pauline, Computer Graphics, (Prentice-Hall: USA 1986) pp. 195-98,
which is incorporated herein by reference in its entirety.
The four Bézier curves are two functions, f[0](s) and f[1](s), of the horizontal axis (s), and two functions, g[0](t) and g[1](t), of the vertical axis (t). In a preferred embodiment, each of the
four functions are defined using the Bézier formulation with four control points, P[0], P[1], P[2 ]and P[3], as will be explained in detail below with reference to FIGS. 8-15). This process results
in four cubic polynomial functions that display (approximately) the desired curved object. Each of these curves divides the texel into two regions, plus (+/0) and minus (−/1) (as shown in FIG. 2 with
reference to a texel 202). The plus region is above or to the right of the curve, whereas the minus region is below or to the left of the curve.
In addition to the four Bézier functions, each texel is also defined, in step 110, by a boolean function based on the shape appearing in each texel and the four Bézier curves. In step 112, a 16-bit
boolean vector is then obtained by evaluating the boolean function (defined in step 110) based on the shape of the part of the curve bounded object appearing in each texel and the four cubic
polynomial Bézier functions (defined in step 108).
In step 113, the resultant geometric-texture is stored (on a host computer memory as will be explained below with reference to FIG. 5). Steps 102 to 113 can be part of a preprocessing procedure for a
set of characters (e.g., a PostScript™ font). Once created and stored, the geometric textures can be loaded into memory at run-time (step 113 b) for continuation of the texture procedure 100. The
geometric-texture is used to draw a polygon that allows the curved object, defined by the geometric-texture, to be drawn. Because the geometric-texture of the present invention behaves like a
conventional texture, many polygons could be used to draw the object, possibly mapping it onto a three-dimensional object.
At run-time processing, for each pixel to be displayed of each graphics primitive (i.e. polygon) to be rendered using the geometric-texture, the texture (s, t) coordinate pair is determined (step 114
). This is done using any form of conventional interpolation. Then, the texel into which the (s, t) pair falls is located. In step 115, the texture (s[local], t[local]) coordinate pair local to the
texel must be computed. Since there are normally a 2 ^n number of texels in each dimension of the texture map, this computation is not costly. For example, step 114 and the computation of step 115
are illustrated in FIG. 3. FIG. 3 shows a texture map 300 divided into sixteen texels (4×4 array). The (s, t) coordinates within the texture 300 range from 0.0 to 1.0. Thus the coordinates of any
pixel within the texture map 300 will be expressed in as a (s, t) pair where s and t are fractions. A pixel 302 is first located within the texture map 300. Its (s, t) coordinate pair, relative to
texture 300, is (⅜, {fraction (9/16)}). This completes step 114.
In step 115, a local (s[local], t[local]) coordinate pair of pixel 302, relative to texel 202, is computed. Texel 202 has an origin whose (s, t) coordinate pair is (¼, ½). The origin is simply the
(s, t) coordinate pair, relative to texture 300, of the bottom left corner of the texel 202. Then, the (s[local],t[local]) coordinate pair of pixel 302 is computed as follows:
(s[local], t[local])=(j*(s—s[origin]), k*(t—t[origin]))
where j and k are the number of texels which divide texture map 300 in the s and t direction respectively. In FIG. 3, the result of the above calculation is an (s[local, t] [local]) coordinate pair
of (½, ¼) for pixel 302.
In step 116, for each (s[local], t[local]) pair, the four functions of the texel where the (s, t) pair lies are evaluated in parallel. As will be explained below (with reference to FIGS. 17-24),
these four evaluations may use the same hardware that is required for tri-linear mapped textures. The difference between each function and the opposing texture coordinate is used to determine whether
the (s[local], t[local]) pair is in the plus (+/0) or minus (−/1) region. The evaluations are illustrated in step 116 b as follows:
Curve 0: sign(t −f[0](s[local])) yields Bit 0
Curve 1: sign(t −f[1](s[local])) yields Bit 1
Curve 2: sign(s −g[0](t[local])) yields Bit 2
Curve 3: sign(s −g[1](t[local])) yields Bit 3
The resultant 4-bit “outcode” (bits 0-3 concatenated), corresponds to the (s[local], t[local]) coordinate pair's relationship with the plus or minus regions with respect to each curve. The outcode is
then used as an index into the boolean vector (step 118) (as will be further explained below with reference to FIG. 16). Step 120 can then determine, for example, the alpha value or “in” or “out” for
each (s[local], t[local]) pair. If the boolean vector bit pointed to by the outcode is set to 0, then the pixel is transparent (step 122). Alternatively, if the boolean vector bit pointed to by the
outcode is set to 1, the pixel is opaque (step 124). Steps 114-124 are thus repeated for every pixel to be displayed of each geometric-texture to be rendered (this recursion is not shown in FIGS 1A
and 1B). The process is thus completed, as indicated by step 126, when the entire curve bounded object is rendered to the graphics display device pixel by pixel.
The run-time processing steps of 113 b to 126 can be repeated any number of times to produce different transformations of the geometric-texture. The texture procedure 100 is illustrated for a set of
characters in FIG. 4. FIG. 4 shows the division between preprocessing (steps 102 to 113) and run-time processing (steps 113 b to 126). In the case of drawing a character, the polygon normally
specifies (s, t) coordinates that completely surround the character to be drawn. The recursion mentioned above would thus be performed for each (s, t) coordinate pair of the polygon's pixels. By
applying transformations to the polygon (only four points) the entire character is transformed. In FIG. 4, an entire alphabet and a polygon with the mapping into the geometric-texture is defined. The
polygon can then be drawn several times by applying different transformations, according to the present invention, that result in four different renderings 402 a-402 d.
Furthermore, it is important to note that the result of texture procedure 100 (as shown in FIGS. 1A and 1B) is a single number (a zero or a one). Therefore, it will be apparent to one skilled in the
relevant art how to implement the method of the present invention to use the result for various purposes (e.g., color) other than transparency.
FIG. 5 is a block diagram of an exemplary computer imaging system 501 useful for implementing the present invention. Computer imaging system 501 includes a host computer 502, geometry engine 504,
rasterizing unit 506, texture engine 508, texture memory 510, attenuation unit 550, and frame buffer 512. Imaging system 501 further includes a separator unit 507. Steps 106-120 are carried out in
texture engine 508 and can be implemented in software, firmware, and/or hardware in one or more processing components. Steps 122-124 would take place on frame buffer 512. For example, any host or
graphics processor can be used to implement texture procedure 100 in software running on a processor(s). In the example of FIG. 5, host 502 can implement step 114 by controlling pixels passed to
separator unit 507. Separator unit 507 can be any type of processing logic (or program code executing on host 502).
The present invention is described in terms of an example computer graphics processing environment. As described herein, the present invention can be implemented as software, firmware, hardware, or
any combination thereof.
Given the description herein, it would be apparent to one skilled in the art to implement the present invention in any computer graphics application, API, or any other system that supports a texture
engine including, but not limited to, a computer graphics processor (single chip or multiple chips), high-end to low-end graphics workstations, gaming platforms, systems and consoles.
Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment. In fact, after reading the following
description, it will become apparent to a person skilled in the relevant art how to implement the invention in alternative environments.
The present invention can be implemented using software running (that is, executing) in an environment similar to that described above. In this document, the term “computer program product” is used
to generally refer to a removable storage unit or a hard disk installed in a hard disk drive. These computer program products are means for providing software to a computer system (e.g., host 502).
Computer programs (also called computer control logic) are stored in main memory and/or secondary memory. Computer programs can also be received via a communications interface. Such computer
programs, when executed, enable the computer system to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable a processor to
perform the features of the present invention. Accordingly, such computer programs represent controllers of a computer system.
In an embodiment where the invention is implemented using software, the software may be stored in a computer program product and loaded into a computer system using a removable storage drive, hard
drive, or communications interface . Alternatively, the computer program product may be downloaded to computer system over a communications path. The control logic (software), when executed by a
processor, causes the processor to perform the functions of the invention as described herein.
In another embodiment, the invention is implemented primarily in firmware and/or hardware using, for example, hardware components such as application specific integrated circuits (ASICs).
Implementation of a hardware state machine to perform the functions described herein will be apparent to persons skilled in the relevant art(s).
Detailed Example of Texture Procedure 100
FIG. 6 is an illustration of a texture map 600 on which the present invention would operate. In a preferred embodiment, the method of the present invention to directly render curve bounded objects
would be used to render characters (e.g., PostScript™ typefaces). Accordingly texture map 600 is a computer stored image of the lowercase letter “b”.
FIG. 7 is an illustration of texture map 600 divided into a rectangular mesh of regions known as texels. Although FIG. 6 shows texture map 600 divided into 25 texels (5×5 rectangular grid), it should
be understood that this is presented as an example and not a limitation. For reasons that will become clear, the method of the present invention allows texture map 600 to be divided into a lesser
number of texels than those needed by conventional texture resolution methods.
FIG. 8 is an illustration of texture map 600 divided into texels as shown in FIG. 7. However, in FIG. 8, five texels 601-605 have been labeled “A” through “E”, respectively, for purposes of the
following explanation of a preferred embodiment of the present invention.
FIG. 9 is a detailed illustration of texel 602 (labeled “B”) of texture map 600. The shape appearing in texel 602, which is simply the part of the lowercase letter “b” of FIG. 7 that falls into texel
602, defines one Bézier curve 0. Curve 0 is defined through the Bézier formulation using four control points P[0], P[1], P[2 ]and P[3]. This curve is simply a function of the vertical axis (t). Thus,
the 16-bit boolean vector would contain half ones corresponding to the Bézier curve 0. The boolean vector would thus be:
FIG. 10 is a detailed illustration of texel 603 (labeled “C”) of texture map 600. The shape appearing in texel 603, which is simply the part of the lowercase letter “b” of FIG. 7 that falls into
texel 603, defines two Bézier curves 0 and 1. Bézier curve 0 is simply a function of the vertical axis (t) as in texel 602. Bézier curve 1 is a function of the horizontal axis (s). Thus, the 16-bit
boolean vector reflects the union of the two half spaces of Bézier curves 0 and 1. The boolean vector would thus be:
Curve 1: 1100110011001100
Curve 0: 1010101010101010
Boolean Function=curve 0 V curve 1
Boolean Vector=1110111011101110
FIG. 11 is a detailed illustration of texel 604 of (labeled “D”) of texture map 600. The shape appearing in texel 604, which is simply the part of the lowercase letter “b” of FIG. 7 that falls into
texel 604, defines three Bézier curves 0 ,1 and 2. Two Bézier curves are function of the vertical axis (t) and one is a function of the horizontal axis (s). The boolean vector is such that the
vertical curves are split by the horizontal curve. The boolean vector would thus be:
Curve 1: 1100110011001100
Curve 0: 1010101010101010
Curve 2: 1111000011110000
Boolean Function [(curve0Λcurve1)V(curve2Λ{overscore (curve)}1)]
Boolean Vector=1011100010111000
Referring to FIG. 8, it can be seen that texel 601 (labeled “A”) has no curves defined as all the pixels within the texel need to be filled during rendering. Thus, no matter what curves are
evaluated, according to the present invention, the result would always be a logical 1 (i.e. inside the curve). Thus the boolean vector for such a texel is all TRUE (“1”s). Still referring to FIG. 8,
texel 605 (labeled “E”) also contains no curves defined. Because no pixels within the texel need to be filled during rendering, no matter what curves are evaluated, according to the present
invention, the result would always be a logical 0 (i.e. outside the curve). Thus the boolean vector for such a texel is all FALSE (“0”s).
Now referring to FIG. 12, a detailed illustration of a texel 900 containing a Bézier curve 0 defined as a function of the vertical axis, g[0](t) is shown. If a (s[local], t[local]) coordinate pair
evaluates in the shaded region, then bit 0 is set to logical TRUE (“1”).
FIG. 13 is an illustration of texel 1200 containing a Bézier curve 1 defined as a function of the horizontal axis, f[0](s). If a (s[local, t] [local]) coordinate pair evaluates in the shaded region,
then bit 1 is set to logical TRUE (“1”).
FIG. 14 is an illustration of texel 1200 containing a Bézier curve 2 defined as a function of the vertical axis, g[1](t). If a (s[local], t[local]) coordinate pair evaluates in the shaded region,
then bit 2 is set to logical TRUE (“1”).
FIG. 15 is an illustration of texel 1200 containing a Bézier curve 3, defined as a function of the horizontal axis, f[1](s). If a (s[local, t] [local]) coordinate pair evaluates in the shaded region,
then bit 3 is set to logical TRUE (“1”).
Now referring to FIG. 16, texel 1200 is shown containing all four Bézier curves 0-3 (shown individually in FIG. 12-15 respectively). Also shown in FIG. 16 is the mapping done according to the present
invention is illustrated. The resulting four bits (0-3) are concatenated to form a 4-bit outcode. The 4-bit outcode is then used as an index into the earlier evaluated 16-bit boolean vector for texel
1200 (see step 112 of FIG. 1A). The bit in the 16-bit boolean vector that corresponds to the 4-bit outcode is then used to render the pixel on the graphics display. The value read from the boolean
vector is the alpha value for the pixel. More specifically, if the boolean vector bit pointed to by the outcode is set to 0, then the pixel is transparent. Alternatively, if the boolean vector bit
pointed to by the outcode is set to 1, the pixel is opaque. In general, the boolean vector will contain 2^n bits for n curves in each texel. This is because each curve provides one bit of the index
into the boolean vector.
Advantages of the Present Invention
An advantage to the above described texture procedure is that by modeling the part of the texture map that falls into each texel with four Bézier curves, more detail is modeled by each texel. Thus, a
texture map can be divided into fewer texels thereby providing a very significant performance increase in rendering objects to graphics display devices.
Another advantage of the present invention is that conventional tri-linear interpolation hardware can be used to implement the texture procedure 100. Tri-linear interpolation, as is well known in the
relevant art, is normally used to compute the weighted average of eight texels. This technique provides some rudimentary filtering. Tri-linear interpolation is briefly described to illustrate the
analogy to the present invention that permits re-use of the tri-linear interpolation hardware for rendering curve bounded regions in accordance with the present invention.
In standard tri-linear interpolation texture mapping, a texture map is stored in varying degrees of pre-filtering. For example, FIG. 17 shows a sample texture pattern 1702 and several pre-filtered or
lower level of detail (LOD) versions 1704-1714 of the same map. Each of the lower LOD maps is one half the height and width of the next higher LOD map and is made by averaging together each group of
four texels of the next higher LOD map. During a mapping operation, the size and shape of a pixel image mapped into the texture map is used to determine which level of detail (LOD) is appropriate for
use in the texture mapping operation. Each LOD is useful for a different mapped pixel size.
When a pixel size corresponds exactly to an existing LOD map (e.g., texture map 1702), the tri-linear interpolation operation simplifies to a bilinear interpolation. For example, to determine the
contribution of the texels of LOD 1702 to the color of a display pixel on a display screen, the location of the pixel is mapped (i.e., transformed) to the texture map. The texture is then “sampled”
at the exact point where the pixel center mapped into the texture map. However, because the pixel center may not coincide exactly with a texel value, a weighted average of the four nearest texel
values is taken. This is illustrated in FIG. 18. The mapped pixel center is indicated at 1802. Note that pixel center 1802 falls between texel centers A, B, C and D of texture map 1702. One way to
take the weighted average of these four texel values is by doing a bilinear interpolation (i.e., a linear interpolation in two dimensions).
In the case where the pixel size does not correspond exactly to any existing LOD map, it will likely fall between two maps (e.g., texture maps 1702 and 1704). In this case, a bilinear interpolation
is performed in both maps and a linear interpolation is used to blend the two results. The two bilinear interpolations followed by a linear interpolation yields a “tri-linear interpolation.” For
example, if the pixel size fall between a size corresponding to map 1702 and map 1704, then a bilinear interpolation operation would be performed in map 1702 as discussed above. In addition, a
bilinear interpolation operation would be performed in map 1704 as shown in FIG. 18. The two resulting values would then be linearly blended based on the actual pixel size relative to the two
bracketing LOD maps to yield a color value for the pixel. FIG. 19 graphically depicts the tri-linear interpolation operation between the two LOD maps 1702 and 1704.
Normally, the tri-linear interpolation hardware simultaneously computes the four component values (R, G, B, α) for a pixel, effectively requiring four tri-linear interpolation engines as graphically
depicted in FIG. 20. In the case of the present invention, the objective is to use the same structure(s) to compute cubic polynomials.
The inventor discovered that a cubic polynomial in Bézier form (using four control points, P[0], P[1], P[2 ]and P[3]), as shown in FIG. 21, can be computed using nested linear interpolation. The
mapping of the four scalar control points between conventional Bézier polynomial calculation (FIG. 21) and tri-linear interpolation (FIG. 20) is shown in FIG. 22. The mapping of the intermediate
terms in the interpolation are shown in FIGS. 23a and 23 b. The complete mapping (a combination of FIGS. 22-23) is shown in FIG. 24.
Thus, the four Bézier curves used to describe the detail in each texel can be computed using the four tri-linear interpolation engines (R, G, B, and α) of typical tri-linear interpolation hardware.
Normally the four tri-linear interpolators use the same sets of weighting values in all four engines. However, to implement the present invention, two sets of weighting values must be used. Two
engines will use the offset of the s texture coordinate within the texel, and the other two engines will use the offset of the t texture coordinate within the texel. As will be apparent to one
skilled in the relevant art, the conventional tri-linear interpolation hardware will need to be augmented with logic to perform the various operations of texture procedure 100 (e.g., subtraction,
looking up the outcode, etc.).
Another advantage of the present invention is that the four Bézier curves are defined by sixteen scalar values (four control points, P[0], P[1], P[2 ]and P[3], for each curve defined in a texel) and
the boolean vector is defined by a 16-bit value. The boolean vector may be stored as a separate 16-bit value (e.g., luminance texture word) or as part of the curve textures. Therefore, if the latter
implementation is chosen, the 16-bit boolean vector can be stored as the low bit for each of the sixteen scalars (P[0], P[1], P[2 ]and P[3 ]for four curves) without using any additional computer
memory resources.
Yet still, an additional advantage of the current invention is that complex geometry can be transformed with very little computational overhead. As illustrated in FIG. 4, a complex figure can have
any projective transform applied to it at the cost of transforming the polygon that uses the figure as a texture which is usually only four points. Normally, the transformation of such a figure would
require that all of the points describing the figure be transformed—a much more costly operation.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to
persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus the present invention should not be
limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
|
{"url":"http://www.google.co.uk/patents/US6292192","timestamp":"2014-04-21T14:59:03Z","content_type":null,"content_length":"105326","record_id":"<urn:uuid:4b03ea7e-da4f-40d3-a6db-99f0ae54e6c6>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parametrization of Hyperboloid
July 31st 2010, 10:35 AM #1
Junior Member
Jan 2010
Parametrization of Hyperboloid
a) Find a parametrization of the hyperboloid $x^2 + y^2 - z^2 = 25$
b) Find an expression for a unit normal to this surface.
c) Find an equation for the plane tangent to the surface at $(x_0,y_0,0)$, where $x_0^2 + y_0^2 = 25$
For the parametrization I got $\Phi(r,\theta) = (rcos\theta, rsin\theta, \sqrt{r^2 - 25})$
is that the correct parametrization?
Let $x = rcos(\theta)$ and $y = rsin(\theta)$.
Think about an auxilliary plane uz, for example, which is perpendicular to the $xy$ plane and makes $\theta$ degress starting from x axis. In this plane, we will make each point $P = ( Pu, Pz )$
rotate. But $Pu$ and $Pz$ will have a parametrization too thus this will nicely describe the hyperboloid. Lets see:
In the $uz$ plane, you parametrize the hyperbola with $Pu = cosh (s)$ and $Pz = sinh (s)$ with $-1\leq s \leq 1$. Notice that this parameter s will describe the contour of the hyperbola.
Now you should use $\theta$ as another parameter, an this would be the "revolution parameter". It will take each contour of hyperbola and rotate it $\theta$ degrees.
So, in the $xy$ plane, we should have $x = 5Pu\cos\theta$ and $y=5Pu\sin\theta$ with $0\leq \theta \leq 2\pi$. If we make $z = 0$ in $x^2 + y^2 - z^2 = 25$, we would end with $x^2 + y^2 = 25$,
which requires the $r = 5$ in the parametrization $x = r\cos\theta$ and $y = r\sin\theta$.
Putting all together, we must have: $\Phi(s,\theta) = ( Pu\cos\theta, Pu\sin\theta, Pz) = ( 5\cosh(s)\cos\theta, 5\cosh(s)\sin\theta, 5sinh (s) )$
For the B) you want the expression of the normal vector?
Only to ellucidate, here's the hyperboloid. We can see the mesh caused by this type of parametrization - there are circunferences parallel to the xy plane, in which we have some z coordinate
fixed, and there are hyperbolas, when we have either x or y fixed.
For the B, I guess you should use the definition of gradient. I'll work on this later. I hope it helps!
a) Find a parametrization of the hyperboloid $x^2 + y^2 - z^2 = 25$
b) Find an expression for a unit normal to this surface.
c) Find an equation for the plane tangent to the surface at $(x_0,y_0,0)$, where $x_0^2 + y_0^2 = 25$
For the parametrization I got $\Phi(r,\theta) = (rcos\theta, rsin\theta, \sqrt{r^2 - 25})$
is that the correct parametrization?
Yes, if $x= r cos(\theta)$, $y= r sin(\theta)$, and $z= \sqrt{r^2- 25}$, then $x^2+ y^2- z^2= r^2 sin^2(\theta)+ r^2 cos^2(\theta)- (r^2- 25)= r^- (r^2- 25)= 25$
(Although there is no such thing as "the" parameterization. A surface has many different parameterizations. Since your x, y, and z satisfy the equation $x^2+ y^2- z^2= 25$, it is a valid
To find a vector perpendicular to the surface $x^2+ y^2- z^2= 25$ think of that as a "level surface" of $F(x,y,z)= x^2+ y^2- z^2$ and use the fact that grad F is always perpendicular to level
surfaces of F.
And, once you have a perpendicular, use the fact that if the vector $A\vec{i}+ B\vec{j}+ C\vec{k}$ at point $(x_0,y_0,z_0)$ in the plane, then the plane is given by $A(x- x_0)+ B(y- y_0)+ C(z-
z_0)= 0$.
July 31st 2010, 10:44 AM #2
Jan 2009
July 31st 2010, 11:16 AM #3
July 31st 2010, 11:24 AM #4
July 31st 2010, 11:46 AM #5
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/calculus/152441-parametrization-hyperboloid.html","timestamp":"2014-04-17T22:24:22Z","content_type":null,"content_length":"50650","record_id":"<urn:uuid:40c86ed2-a315-4a44-8d9e-52cf1a087c10>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Life of Fred - stand-alone text? or supplement? - High School and Self-Education Board
Ah, yes, the math saga continues . . . . .
I would REALLY like to use Chalkdust, but the price - aye, there's the rub.
Dd12 is enjoying Life of Fred - he's doing review work, finished Fractions in about 5 weeks, is progressing well through Decimals/Percents - it's all stuff he's had in SM 6A-B. He has a wacky sense
of humor, and just laughs his way through math reading the stories.
Am I nuts to consider doing the whole LoF series for high school math? Is it really enough? I would buy the extra Home Companion series to provide extra problem sets.
OR - is the best bet to do either Lial's with DVT or CD, and use LoF as a review/reinforcement/motivator?
I suppose another option would be to work through LoF next year (8th grade), and do a placement test in CD to see how he's done. . .. . .
(hoping JanninTX has seen this post and knows a smidge about LoF!)
|
{"url":"http://forums.welltrainedmind.com/topic/80579-life-of-fred-stand-alone-text-or-supplement/","timestamp":"2014-04-20T06:05:38Z","content_type":null,"content_length":"99581","record_id":"<urn:uuid:321d352c-a8e8-4323-8f55-4fb792f72b87>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ratios of Ripped Paper
Date: 11/19/95 at 20:48:32
From: Anonymous
Subject: Brain Teaser ques.
Ok Dr. Math, I'm really stuck with this question.
Grab a piece of 8 by 11 paper. Rip it in half, then put that half over
the other, then rip it again. For example I cut it in half, so now I
have 2 pieces. Now I place them over each other, and rip it again.
Now there are 4. So what is the length (or how high, like 5 miles or
something) when the paper is ripped 30 times?
Thanx, doc.
Date: 11/19/95 at 14:22:58
From: Doctor Ethan
Subject: Re: Brain Teaser ques.
That is a neat question. I would like to mention one thing though. You
actually could never rip a piece of paper that many times in half. But we
can imagine. Let's figure it out.
Let's call w the original width of the paper.
After one rip the width of the two halves combined is 2w.
Then after one more rip we have four pieces so the total width is 4w.
After three rips we have 8 pieces so the total width is 8w.
Let us make a chart. Check my answers by actually ripping paper.
# of rips Thickness
1 2w
2 4w
3 8w
4 16w
5 32w
6 64w
Do you see a pattern?
I hope so. These numbers (2,4,8,16,32,64) are the powers of 2.
Does that make sense to you.
That means that
So after 30 rips you would have a stack 2^30 sheets high.
(That symbol 2^30 means 2*2*2*2... thirty times.)
That is 1,073,741,824w That is really big.
Okay now let's assume that a sheet of paper is 1/500 of an inch
(it is actually a little thicker)
Then w = .002 inches
so 1,073,741,824w = 2147483.648 inches
To convert to miles we divide by 63360 to get 33.89 miles.
Hope this helps,
-Doctor Ethan, The Geometry Forum
|
{"url":"http://mathforum.org/library/drmath/view/57917.html","timestamp":"2014-04-21T12:44:32Z","content_type":null,"content_length":"6697","record_id":"<urn:uuid:5b74e1ad-95dc-4239-ba91-de5d4fb86f46>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The teaching of mathematics in Ancient Rome.
The Roman educational system was very similar to the Greek's, but the emphasis on what should be learnt and why was very different. Roman children were taught at home until about the age of twelve,
and probably learnt similar things to the Greeks, letters, music and, at this stage, a greater proportion of elementary Arithmetic and counting, using both the abacus and their fingers. At the age of
twelve the boys would then progress to a school of Literature where they would learn Grammar and elements of Logic, Rhetoric and Dialectics. As with the Greeks, many Romans would learn little more of
Mathematics than what they acquired from their lessons at home unless required by their occupation. This was not always the case, however, and boys would often also attend lessons given by a special
Mathematics master. This, for purely practical reasons, would be taught through several examples and was heavily calculation based. The Roman who sought to learn more than this small measure was
indeed the exception rather than the rule.
The Roman attitude of utility and practicality is seen in Quintilian's work where he recommends that Geometry is to be studied for two reasons. The first is that the mental training developed by the
subject through the logical progression of axioms and proofs is vital, and the second is that its usage in political discussions, questions on land-measurement and similar problems is very important.
Sophists employed here would be more likely to teach their students the art of speaking, Oratorio, and of current affairs than advances in science and Geometry. During this time many other texts were
written recommending various educational courses for those in the middle and artisan classes, as well as the ruling class. For example Vitruvius, writing for architects, suggests that his students
should include in their general education knowledge of Geometry, Optics, Arithmetic, Astronomy, and others (Law, Medicine, Music, Philosophy and History). Galen recommends to prospective doctors in
the 2nd century that they should have studied such varied subjects as Medicine, Rhetoric, Music, Geometry, Arithmetic and Dialectics, Astronomy, Literature and Law. And there are others, Varro and
Seneca are just two who also recommend Geometry and Arithmetic as being necessary. Boethius used his literary talents in writing and translating Greek texts into Latin. His understanding of
mathematics was rather limited, however, and the text he wrote on arithmetic was of poor quality. His geometry text has not survived but there is little reason to believe that is was any better.
Despite this his mathematics texts were among the best available to the Romans and widely used.
From the above comments it can be seen that, although Mathematics in education was often frowned upon, it must have been taught where it was necessary. The low opinion of Mathematics is probably due
in part to the professions which required mathematical or scientific learning. These professions were generally considered 'illiberal' and were looked down on. Those requiring an advanced level of
Logic, Rhetoric and Oratorio were far preferred. This attitude is reflected in those found in Britain throughout the Mediaeval and Renaissance years, and it is only recently that this has been
Article by: J J O'Connor and E F Robertson based on a University of St Andrews honours project by Elizabeth Watson submitted May 2000.
Main index Mathematical Education Index
Biographies Index History Topics Index
Glossary Index Famous curves index
Mathematicians of the day Anniversaries for the year
Birthplace Maps Time lines
Search Form Societies, honours, etc
JOC/EFR August 2000 School_of_Mathematics_and_Statistics
The URL of this page is:
|
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Education/rome.html","timestamp":"2014-04-18T08:05:40Z","content_type":null,"content_length":"5691","record_id":"<urn:uuid:d27cb2fa-cc2d-46a7-9531-67513cf33329>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zhang Yitang is proof that for mathematicians, life begins at 40
Middle-aged Chinese researcher's prime numbers breakthrough is more evidence that the days of the maths whiz kid are well and truly over
No mathematician should ever allow himself to forget that mathematics, more than any art or science, is a young man's game," the British mathematician G.H. Hardy wrote in A Mathematician's Apology.
But the older guys are now catching up.
Since Hardy wrote those lines in 1940, it has been conventional wisdom that mathematical breakthroughs are most often made in a moment of brilliance by a born genius at a young age, rather than an
experienced practitioner after decades of work.
Last month, however, Zhang Yitang, a 50-year-old lecturer in mathematics at the University of New Hampshire, defied Hardy's glib assertion. Zhang, who had not published any original work since 2001,
submitted a paper to the peer-reviewed Annals of Mathematics in which he solved one of the most longstanding and difficult problems in pure maths. His proof - that there are an infinite number of
consecutive pairs of prime numbers (those that are divisible only by 1 and themselves such as 3, 5, 7, 11) separated by less than 70 million - may be meaningless to the layperson, but to number
theorists it is earth-shaking.
The fact that Zhang is well into middle age gives hope to legions of mid-career mathematicians oppressed by Hardy's dictum that groundbreaking work in their field should be left to the young.
Of course Hardy could point to many examples in the history of mathematics to support his assertion. The French mathematician Evariste Galois laid the foundations for modern algebra in the 1800s
while he was still a teenager and died at the age of 21. During the same era, the Norwegian Niels Abel, aged 19, independently came up with group theory, which is invaluable in many areas of
mathematics and physics. Srinivasa Ramanujan, the Indian maths prodigy mentored by Hardy at Cambridge University, compiled 3,900 results in identity and equations before he died at age 32 in 1920.
In more recent times, there's Terence Tao, whose parents emigrated to Australia from Hong Kong. Tao is a polymath who does brilliant work across many mathematical disciplines such as number theory,
harmonic analysis and combinatorics. He received his PhD in mathematics from Princeton University at 20 years old, was at 24 appointed the youngest ever full professor at the University of California
at Los Angeles, and at 30, in 2006, received the Fields Medal, the highest honour in mathematics.
The media reinforces the stereotype of youthful mathematical creativity. In the movie A Beautiful Mind John Nash, who as a graduate student in his early twenties did pioneering work in game theory,
is depicted hanging out at a bar in Princeton when a sudden insight leads him to the concept that became known as the Nash equilibrium, which is today widely applied in economics and conflict
While such young guns make romantic figures for feel-good movies, Zhang's story may be even more inspirational for being the achievement of age, experience, persistence and sheer hard slog. It took
him over three years of intensive, single-minded research in his late forties to solve the prime numbers problem.
He is not the only late bloomer. At age 41, Andrew Wiles, a Princeton and Oxford University mathematician, cracked Fermat's Last Theorem, which had vexed mathematicians for 358 years since Pierre de
Fermat came up with it in 1637. Wiles had pondered the problem since he discovered it in a library when he was a 10-year-old student in Scotland. After seven years of intense and solitary work, he
presented his results at Cambridge University in 1993, and, like Zhang, stunned his fellow mathematicians. It took another year for him to correct an error in his first proof in collaboration with
his former student Richard Taylor.
An equally difficult problem, the Poincaré conjecture, was also solved by a mature thinker. In 1904 Henri Poincare, one of the most creative mathematicians of all time, made his conjecture about the
topology, or shape and space, of a three-dimensional sphere. The Clay Mathematics Institute in the United States offered US$1 million to the person who could prove the conjecture. In 2006, the
Russian mathematician Grigori Perelman did so. He was 40 years old. Offered the cash award as well as the Fields Medal, Perelman turned both down. Declaring, "I am not interested in money or fame",
he was the first and only person to decline the prestigious medal. He said his contribution was no more significant than that of an American mathematician, Richard Hamilton, who devised the technique
that allowed him to prove the conjecture.
Why have recent mathematical breakthroughs been made by older brains? There is just much more mathematics requiring more time to master than during Hardy's day a century ago. As Jordan Ellenberg, an
expert in algebraic geometry at the University of Wisconsin, has noted, today there are no whiz kids like Galois and Abel. It simply takes them longer to learn from many more intellects.
Wiles tapped into the work of the Japanese mathematicians Yutaka Taniyama and Goro Shimura in two distinct branches of maths to figure out Fermat's Last Theorem. Perelman's proof of the Poincaré
conjecture was aided by Hamilton's work in differential geometry at Columbia University. Zhang's breakthrough in prime numbers built on the work of Dan Goldston at San Jose State University in the
United States, Janos Pintz of the Renyi Institute of Mathematics in Budapest and Cem Yalcin Yildirim of Bogazici University in Istanbul.
As for whether the frontiers of mathematics are best advanced by youthful flashes of intuition or long years of logical deduction, Poincare provided an answer. He wrote: "Logic and intuition have
their necessary role. Each is indispensable."
He should know. He came up with his famous conjecture when he was 50 years old.
Tom Yam is a Hong Kong-based management consultant. He holds a PhD in electrical engineering and an MBA from the Wharton School of the University of Pennsylvania
|
{"url":"http://www.scmp.com/print/lifestyle/technology/article/1256542/zhang-yitang-proof-mathematicians-life-begins-40","timestamp":"2014-04-16T13:21:21Z","content_type":null,"content_length":"14749","record_id":"<urn:uuid:0241583a-7902-4576-af43-59c257d66ed2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Archive for Mathematical Sciences & Philosophy
June 29th 2005
Basil Hiley: Non-Commutative Geometry from the Perspective of A Physicist
Part 1 : Algebras and Process
Part 2 : Algebraic vs Geometric Aspects of Spinors and Twistors
Basil Hiley: Non-Commutative Geometry from the Perspective of A Physicist
Part 1 : Algebras and Process
Part 2 : Algebraic vs Geometric Aspects of Spinors and Twistors
Basil Hiley: Non-Commutative Geometry from the Perspective of A Physicist
Part 1 : Algebras and Process
Part 2 : Algebraic vs Geometric Aspects of Spinors and Twistors
Basil Hiley: Non-Commutative Geometry from the Perspective of A Physicist
Part 1 : Algebras and Process
Part 2 : Algebraic vs Geometric Aspects of Spinors and Twistors
Fabio Frescura, Professor of Physics, University of Witwatersrand, South Africa
Symmetric and Antisymmetric Structures in Quantum Theory and The Program of Geometrisation of Physics
Fabio Frescura, Professor of Physics, University of Witwatersrand, South Africa
Symmetric and Antisymmetric Structures in Quantum Theory and The Program of Geometrisation of Physics (continued)
Fabio Frescura, Professor of Physics, University of Witwatersrand, South Africa
Symmetric and Antisymmetric Structures in Quantum Theory and The Program of Geometrisation of Physics (continued)
Fabio Frescura, Professor of Physics, University of Witwatersrand, South Africa
Symmetric and Antisymmetric Structures in Quantum Theory and The Program of Geometrisation of Physics (concluded)
Melvin Brown, TPRU London
The Bohm Interpretation and Momentum Space : The Relationship between Mechanics and Dynamics in the Bohm
Discussion of A.M. Talks
Discussion of A.M. Talks
Discussion of A.M. Talks (concluded)
Plus 10 – 15 minute additional talk by Freddie van Oystaeyen (audio only) to be added
A category – theoretic framework for thinking about Indistinguishables
|
{"url":"http://www.archmathsci.org/conferences-and-workshops/askloster/2005/videos-1/","timestamp":"2014-04-17T12:37:43Z","content_type":null,"content_length":"19699","record_id":"<urn:uuid:b54d587e-fdf7-41b3-858f-71354f0da69b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
on Denis Poisson (1781 - 1840)
Siméon Denis Poisson (1781 - 1840)
From `A Short Account of the History of Mathematics' (4th edition, 1908) by W. W. Rouse Ball.
Siméon Denis Poisson, born at Pithviers on June 21, 1781, and died at Paris on April 25, 1840, is almost equally distinguished for his applications of mathematics to mechanics and to physics. His
father had been a private soldier, and on his retirement was given some small administrative post in his native village; when the revolution broke out he appears to have assumed the government of the
place, and, being left undisturbed, became a person of some local importance. The boy was put out to nurse, and he used to tell how one day his father, coming to see him, found that the nurse had
gone out, on pleasure bent, having left him suspended by a small cord attached to a nail fixed in the wall. This, she explained, was a necessary precaution to prevent him from perishing under the
teeth of the various animals and animalculae that roamed the floor. Poisson used to add that his gymnastic efforts carried him incessantly from one side to the other, and it was thus in his tenderest
infancy that he commenced those studies on the pendulum that were to occupy so large a part of his mature age.
He was educated by his father, and destined much against his will to be a doctor. His uncle offered to teach him the art, and began by making him prick the veins of cabbage-leaves with a lancet. Wen
perfect in this, he was allowed to put on blisters; but in almost the first case he did this by himself, the patient died in a few hours, and although all the medical practitioners of the place
assured him that ``the event was a very common one,'' he vowed he would have nothing more to do with the profession.
Poisson, on his return home after this adventure, discovered amongst the official papers sent to his father a copy of the questions set at the Polytechnic school, and at once found his career. At the
age of seventeen he entered the Polytechic, and his abilities excited the interest of Lagrange and Laplace, whose friendship he retained to the end of their lives. A memoir on finite differences
which he wrote when only eighteen was reported on so favourably by Legendre that it was ordered to be published in the Recueil des savants étrangers. As soon as he had finished his course he was made
a lecturer at the school, and he continued through his life to hold various government scientific posts and professorships. He was somewhat of a socialist, and remained a rigid republican till 1815,
when, with a view to making another empire impossible, he joined the legitimists. He took, however, no active part in politics, and made the study of mathematics his amusement as well as his
His works and memoirs are between three and four hundred in number. The chief threatises which he wrote were his Traité de mécanique, published in two volumes, 1811 and 1833, which was long a
standard work; his Théorie mathématique de la chaleur, 1835, to which a supplement was added in 1837; and his Recherches sur la probabilité des jugements, 1837. He had intended, if he had lived, to
write a work which should cover all mathematical physics and in which the results of the three books last named would have been incorporated.
Of his memoirs in pure mathematics the most important are those on definite integrals, and Fourier's series, their application to physical problems constituting one of his chief claims to
distinction; his essays on the calculus of variations; and his papers on the probability of the mean results of observations.
Perhaps the most remarkable of his memoirs in applied mathematics are those on the theory of electrostatics and magnetism, which originated a new branch of mathematical physics; he suppose that the
results were due to the attractions and repulsions of imponderable particles. The most important of those on physical astronomy are the two read in 1806 (printed in 1809) on the secular inequalities
of the mean motions of the planets, and on the variation of arbitrary constants introduced into the solutions of questions on mechanics; in these Poisson discusses the question of the stability of
the planetary orbits (which Lagrange had already proved to the first degree of approximation for the disturbing forces), and shews that the result can be extended to the third order of small
quantities: these were the memoirs which led to Lagrange's famous memoir of 1808. Poisson also published a paper in 1821 on the libration of the moon; and another in 1827 on the motion of the earth
about its centre of gravity. His most important memoirs on the theory of attraction are one in 1829 on the attraction of spheroids, and another in 1835 on the attraction of a homogeneous ellipsoid:
the substitution of the correct equation involving the potential, namely, V = 0, was first published in 1813. Lastly, I may mention his memoir in 1825 on the theory of waves.
This page is included in a collection of mathematical biographies taken from A Short Account of the History of Mathematics by W. W. Rouse Ball (4th Edition, 1908).
Transcribed by
D.R. Wilkins
School of Mathematics
Trinity College, Dublin
|
{"url":"http://www.maths.tcd.ie/pub/HistMath/People/Poisson/RouseBall/RB_Poisson.html","timestamp":"2014-04-20T05:48:33Z","content_type":null,"content_length":"6395","record_id":"<urn:uuid:aa4ec381-7848-46e9-805a-43661bb20480>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] FOM: What are FOM for?
Sam Sanders sasander at cage.ugent.be
Tue Feb 28 20:10:42 EST 2012
Martin Davis asked what the Foundations of Mathematics are for.
I present my opinion on these matters.
The following picture is an overly simple ordering of several scientific disciplines from
concrete to more abstract:
Biology < Chemistry < Physics < Mathematics < Mathematical Logic
Everyone will agree that Biology is not just the study of some particular
chemical reactions. Similarly, Chemistry (resp. Physics) is not just the study of some
particular physical (resp. mathematical) systems.
In each case, if X<Y, then X cannot be *reduced* to some subset of Y. Obviously
X does take place in a subset of Y, but it is more than just a subset of Y, something
"holistic" is going on.
A good example is Biology: the notion of "living matter" is not well-defined or understood
yet, and has no reduction (to the best of my knowledge) to simple chemical reactions.
Extrapolating, in the case of Mathematics, we cannot just reduce Mathematics to the study
of some formal logical systems. Something more is going on. Evidence for this, in my opinion,
is provided by the results of Reverse Mathematics. One does not need to agree with the "Big Five"
thesis to admit that Reverse Mathematics reveals surprising properties of "ordinary Mathematics".
These properties (discussed at length on this list) do not have reductions to formal logic, to the best
of my knowledge.
Hence, *a* role for FOM is the study of the properties of Mathematics that differentiate (formalized) Mathematics
from generic logical systems. In other words, not all logical systems have mathematical content and what
makes those that do (e.g. ACA_0, WKL_0, …) different?
Sam Sanders
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2012-February/016258.html","timestamp":"2014-04-19T04:20:41Z","content_type":null,"content_length":"4482","record_id":"<urn:uuid:2f4b191e-3101-45b8-8424-9aac88edfcc4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Charts for Math
in Math Charts
All of the charts on the 100 Numbers web page are 100 numbers charts. Some of the charts are completely filled in, some of them are blank, and the rest of the 100 number charts are partially filled
in - the student is to complete those charts. Letter sizes - there are charts with big letters and charts with small letters. That's about it. To print the charts go to Charts for Math: 100 Numbers
My children needed coordinate grids so often for algebra that I made a notebook for them to use. There are about 148 coordinate grids needed for Saxon Algebra 1 [1999 edition]. My daughter used a
different program for algebra and I looked through the solution's manual that came with her math program to get an idea of how many coordinate grids to print and which ones to print. For the most
part, I printed the 6x6 6-grids on both sides of the paper.
Learning Math Facts: As an alternative to math facts drilling or in addition to drilling, it is acceptable in some cases to let your children look at a multiplication chart while they do their math
lesson. Allowing them to look at a chart will take away some of the their frustration during math. In our case, my child used the chart less over time and eventually she did not need it at all This
using a chart during math has two steps.
1. Allow them to look at charts during math class unhampered in any way.
2. Monitor their use of the charts by blacking out or covering the parts of the chart that they have learned.
We used both drill and charts. Results from math facts drilling revealed which parts of the multiplication charts to cover.
Related Content at DonnaYoung.org
Triangular Flash Cards - The numbers in this set of triangular flash cards are rotated. Cover the answer with your finger when showing the card.
Triangular Flash Cards- The numbers in this set are one-side up instead of rotated. Cover the answer with your finger when showing the card.
Play Money - Page features play money as well as place value "money" and play bank checks
Money Quizzes - The money quizzes are computer based. The child counts the money online, then chooses right answer - we hope.
Store Tags - Improve Adding and Subtracting Skills while Playing Store - page has both filled store tabs and blank store tags
Math Drill Sheets. There are various levels spanning from the younger to the older students.
|
{"url":"http://donnayoung.org/math/charts.htm","timestamp":"2014-04-17T15:33:48Z","content_type":null,"content_length":"31223","record_id":"<urn:uuid:381342d7-d194-4694-b0e8-ec8e83119d13>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Atank of height 5 m is full of water. There is a hole of cross sectional area 1 cm2 in the bottom. The volume of water that will come out from the hole per second is ----
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/508c2bfce4b0ee63c4895b3e","timestamp":"2014-04-17T06:58:53Z","content_type":null,"content_length":"35346","record_id":"<urn:uuid:a379f736-34b3-4193-9f04-2624fae8753f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Curved Spacetime
Shapes with corners appear different depending on the curvature of space. In flat space, the sum of the angles in a triangle is 180 degrees. For a positively curved space, such as the surface of a
sphere, the sum of the interior angles of a triangle exceeds 180 degrees. For a negatively curved space, such as a saddle, the angles of a triangle sum to less than 180 degrees. (Unit: 3)
|
{"url":"http://www.learner.org/courses/physics/visual/visual.html?shortname=spacetime","timestamp":"2014-04-20T08:26:35Z","content_type":null,"content_length":"2934","record_id":"<urn:uuid:7a8d7039-a141-4d21-91f2-e1713910a725>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
dlib C++ Library - linear_manifold_regularizer_ex.cpp
// The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
This is an example illustrating the use of the linear_manifold_regularizer
and empirical_kernel_map from the dlib C++ Library.
This example program assumes you are familiar with some general elements of
the library. In particular, you should have at least read the svm_ex.cpp
and matrix_ex.cpp examples. You should also have read the empirical_kernel_map_ex.cpp
example program as the present example builds upon it.
This program shows an example of what is called semi-supervised learning.
That is, a small amount of labeled data is augmented with a large amount
of unlabeled data. A learning algorithm is then run on all the data
and the hope is that by including the unlabeled data we will end up with
a better result.
In this particular example we will generate 200,000 sample points of
unlabeled data along with 2 samples of labeled data. The sample points
will be drawn randomly from two concentric circles. One labeled data
point will be drawn from each circle. The goal is to learn to
correctly separate the two circles using only the 2 labeled points
and the unlabeled data.
To do this we will first run an approximate form of k nearest neighbors
to determine which of the unlabeled samples are closest together. We will
then make the manifold assumption, that is, we will assume that points close
to each other should share the same classification label.
Once we have determined which points are near neighbors we will use the
empirical_kernel_map and linear_manifold_regularizer to transform all the
data points into a new vector space where any linear rule will have similar
output for points which we have decided are near neighbors.
Finally, we will classify all the unlabeled data according to which of
the two labeled points are nearest. Normally this would not work but by
using the manifold assumption we will be able to successfully classify
all the unlabeled data.
For further information on this subject you should begin with the following
paper as it discusses a very similar application of manifold regularization.
Beyond the Point Cloud: from Transductive to Semi-supervised Learning
by Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin
******** SAMPLE PROGRAM OUTPUT ********
Testing manifold regularization with an intrinsic_regularization_strength of 0.
number of edges generated: 49998
Running simple test...
error: 0.37022
error: 0.44036
error: 0.376715
error: 0.307545
error: 0.463455
error: 0.426065
error: 0.416155
error: 0.288295
error: 0.400115
error: 0.46347
Testing manifold regularization with an intrinsic_regularization_strength of 10000.
number of edges generated: 49998
Running simple test...
error: 0
error: 0
error: 0
error: 0
error: 0
error: 0
error: 0
error: 0
error: 0
error: 0
#include <dlib/manifold_regularization.h>
#include <dlib/svm.h>
#include <dlib/rand.h>
#include <dlib/statistics.h>
#include <iostream>
#include <vector>
#include <ctime>
using namespace std;
using namespace dlib;
// ----------------------------------------------------------------------------------------
// First let's make a typedef for the kind of samples we will be using.
typedef matrix<double, 0, 1> sample_type;
// We will be using the radial_basis_kernel in this example program.
typedef radial_basis_kernel<sample_type> kernel_type;
// ----------------------------------------------------------------------------------------
void generate_circle (
std::vector<sample_type>& samples,
double radius,
const long num
- num > 0
- radius > 0
- generates num points centered at (0,0) with the given radius. Adds these
points into the given samples vector.
// ----------------------------------------------------------------------------------------
void test_manifold_regularization (
const double intrinsic_regularization_strength
- Runs an example test using the linear_manifold_regularizer with the given
// ----------------------------------------------------------------------------------------
int main()
// Run the test without any manifold regularization.
// Run the test with manifold regularization. You can think of this number as
// a measure of how much we trust the manifold assumption. So if you are really
// confident that you can select neighboring points which should have the same
// classification then make this number big.
// ----------------------------------------------------------------------------------------
void test_manifold_regularization (
const double intrinsic_regularization_strength
cout << "Testing manifold regularization with an intrinsic_regularization_strength of "
<< intrinsic_regularization_strength << ".\n";
std::vector<sample_type> samples;
// Declare an instance of the kernel we will be using.
const kernel_type kern(0.1);
const unsigned long num_points = 100000;
// create a large dataset with two concentric circles. There will be 100000 points on each circle
// for a total of 200000 samples.
generate_circle(samples, 2, num_points); // circle of radius 2
generate_circle(samples, 4, num_points); // circle of radius 4
// Create a set of sample_pairs that tells us which samples are "close" and should thus
// be classified similarly. These edges will be used to define the manifold regularizer.
// To find these edges we use a simple function that samples point pairs randomly and
// returns the top 5% with the shortest edges.
std::vector<sample_pair> edges;
find_percent_shortest_edges_randomly(samples, squared_euclidean_distance(), 0.05, 1000000, time(0), edges);
cout << "number of edges generated: " << edges.size() << endl;
empirical_kernel_map<kernel_type> ekm;
// Since the circles are not linearly separable we will use an empirical kernel map to
// map them into a space where they are separable. We create an empirical_kernel_map
// using a random subset of our data samples as basis samples. Note, however, that even
// though the circles are linearly separable in this new space given by the empirical_kernel_map
// we still won't be able to correctly classify all the points given just the 2 labeled examples.
// We will need to make use of the nearest neighbor information stored in edges. To do that
// we will use the linear_manifold_regularizer.
ekm.load(kern, randomly_subsample(samples, 50));
// Project all the samples into the span of our 50 basis samples
for (unsigned long i = 0; i < samples.size(); ++i)
samples[i] = ekm.project(samples[i]);
// Now create the manifold regularizer. The result is a transformation matrix that
// embodies the manifold assumption discussed above.
linear_manifold_regularizer<sample_type> lmr;
// use_gaussian_weights is a function object that tells lmr how to weight each edge. In this
// case we let the weight decay as edges get longer. So shorter edges are more important than
// longer edges.
lmr.build(samples, edges, use_gaussian_weights(0.1));
const matrix<double> T = lmr.get_transformation_matrix(intrinsic_regularization_strength);
// Apply the transformation generated by the linear_manifold_regularizer to
// all our samples.
for (unsigned long i = 0; i < samples.size(); ++i)
samples[i] = T*samples[i];
// For convenience, generate a projection_function and merge the transformation
// matrix T into it. That is, we will have: proj(x) == T*ekm.project(x).
projection_function<kernel_type> proj = ekm.get_projection_function();
proj.weights = T*proj.weights;
cout << "Running simple test..." << endl;
// Pick 2 different labeled points. One on the inner circle and another on the outer.
// For each of these test points we will see if using the single plane that separates
// them is a good way to separate the concentric circles. We also do this a bunch
// of times with different randomly chosen points so we can see how robust the result is.
for (int itr = 0; itr < 10; ++itr)
std::vector<sample_type> test_points;
// generate a random point from the radius 2 circle
generate_circle(test_points, 2, 1);
// generate a random point from the radius 4 circle
generate_circle(test_points, 4, 1);
// project the two test points into kernel space. Recall that this projection_function
// has the manifold regularizer incorporated into it.
const sample_type class1_point = proj(test_points[0]);
const sample_type class2_point = proj(test_points[1]);
double num_wrong = 0;
// Now attempt to classify all the data samples according to which point
// they are closest to. The output of this program shows that without manifold
// regularization this test will fail but with it it will perfectly classify
// all the points.
for (unsigned long i = 0; i < samples.size(); ++i)
double distance_to_class1 = length(samples[i] - class1_point);
double distance_to_class2 = length(samples[i] - class2_point);
bool predicted_as_class_1 = (distance_to_class1 < distance_to_class2);
bool really_is_class_1 = (i < num_points);
// now count how many times we make a mistake
if (predicted_as_class_1 != really_is_class_1)
cout << "error: "<< num_wrong/samples.size() << endl;
cout << endl;
// ----------------------------------------------------------------------------------------
dlib::rand rnd;
void generate_circle (
std::vector<sample_type>& samples,
double radius,
const long num
sample_type m(2,1);
for (long i = 0; i < num; ++i)
double sign = 1;
if (rnd.get_random_double() < 0.5)
sign = -1;
m(0) = 2*radius*rnd.get_random_double()-radius;
m(1) = sign*sqrt(radius*radius - m(0)*m(0));
// ----------------------------------------------------------------------------------------
|
{"url":"http://dlib.net/linear_manifold_regularizer_ex.cpp.html","timestamp":"2014-04-17T10:27:19Z","content_type":null,"content_length":"24757","record_id":"<urn:uuid:7b3e5ae0-d1f4-4744-81fb-791ce4851607>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the values of k so that lines are perpendicular using symetric equations
April 26th 2009, 08:33 AM #1
Mar 2009
Find the values of k so that lines are perpendicular using symetric equations
Find the value of k so that the lines
I got the the vector equation for both lines
L1=(x,y,z) =(4,-4,-3) + k(3,0,2)
L2=(x,y,z) (-4,-8,-12) + k(0,-2,0)
I got the dot product of (3,0,2) and (0,-2,0) and it equals zero, so it is perpendicular. I just don't know what to after
Vector equations of lines
Hello JohnBlaze
Sorry, but you must have misunderstood the question. The one you have asked doesn't make sense. The equation
$\vec{r} = 4\vec{i} - 4\vec{j} -3\vec{k} + \lambda(3\vec{i} +0\vec{j} + 2\vec{k})$
represents a straight line passing through the point with position vector $4\vec{i} - 4\vec{j} -3\vec{k}$, parallel to the vector $3\vec{i} +0\vec{j} + 2\vec{k}$. By varying the values of $\
lambda$, you can create the position vector of any point on the line. E.g.
□ If $\lambda = 0$, you get the original point $\vec{r}=4\vec{i} - 4\vec{j} -3\vec{k}$ itself
□ If $\lambda = 1$, you get the point $\vec{r}=7\vec{i} - 4\vec{j} -\vec{k}$
□ If $\lambda = 2$, you get the point $\vec{r}=10\vec{i} - 4\vec{j} +\vec{k}$
... and so on.
Similarly, $\vec{r}=-4\vec{i} - 8\vec{j} -12\vec{k} + \lambda(0\vec{i} -2\vec{j} + 0\vec{k})$ represents, for different values of $\lambda$, all the points on a line passing through $-4\vec{i} -
8\vec{j} -12\vec{k}$ parallel to $0\vec{i} - 2\vec{j} +0\vec{k}$.
Can you see why what you've put here is meaningless? Would you like to look at the question again, and if you can't see how to do it, to post the original question here?
April 27th 2009, 05:20 AM #2
|
{"url":"http://mathhelpforum.com/pre-calculus/85737-find-values-k-so-lines-perpendicular-using-symetric-equations.html","timestamp":"2014-04-18T11:12:46Z","content_type":null,"content_length":"39398","record_id":"<urn:uuid:80a1f5b3-3941-433e-873f-dfa2b15b75c0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The lifetime risk of maternal mortality: concept and measurement
John Wilmoth ^a
a. Department of Demography, University of California, Berkeley, CA, United States of America.
Correspondence to John Wilmoth (e-mail: jrw@demog.berkeley.edu).
(Submitted: 05 October 2007 – Revised version received: 14 July 2008 – Accepted: 28 July 2008 – Published online: 13 February 2009.)
Bulletin of the World Health Organization 2009;87:256-262. doi: 10.2471/BLT.07.048280
The importance of quantifying the loss of life caused by maternal mortality in a population is widely recognized. In 2000, the UN Millennium Declaration identified the improvement of maternal health
as one of eight fundamental goals for furthering human development. As part of Millennium Development Goal 5, the UN established the target of reducing the maternal mortality ratio by three-quarters
between 1990 and 2015 for all national and regional populations.^1
The maternal mortality ratio (MMRatio) is obtained by dividing the number of maternal deaths in a population during some time interval by the number of live births occurring in the same period. Thus,
the MMRatio depicts the risk of maternal death relative to the frequency of childbearing. A related measure, the maternal mortality rate (MMRate), is found by dividing the average annual number of
maternal deaths in a population by the average number of women of reproductive age (typically those aged 15 to 49 years) who are alive during the observation period. Thus, the MMRate reflects not
only the risk of maternal death per pregnancy or per birth, but also the level of fertility in a population.
In addition to the MMRatio and the MMRate, the lifetime risk, or probability, of maternal death in a population is another possible measure. Whereas the MMRatio and the MMRate are measures of the
frequency of maternal death in relation to the number of live births or to the female population of reproductive age, the lifetime risk of maternal mortality describes the cumulative loss of human
life due to maternal death over the female life course. Because it is expressed in terms of the female life course, the lifetime risk is often preferred to the MMRatio or MMRate as a summary measure
of the impact of maternal mortality.
However, despite its interpretive appeal, the lifetime risk of maternal mortality can be defined and calculated in more than one way. A clear and concise discussion of both its underlying concept and
measurement methods is badly needed. This article addresses these issues and is intended to serve as a basis for official estimates of this important indicator of population health and well-being. In
fact, the measure recommended here was adopted for use with the 2005 maternal mortality estimates published by the UN.^2
Basic concepts
The lifetime risk, or probability, of maternal mortality could reflect at least three different underlying concepts, which can be summarized briefly as follows:
• The fraction of infant females who would die eventually from maternal causes in the absence of competing causes of death from birth until menopause.
• The fraction of infant females who would die eventually from maternal causes when competing causes of death are taken into account.
• The fraction of adolescent females who would die eventually from maternal causes when competing causes of death are taken into account.
In formulae, these three concepts of lifetime risk can be defined as follows:
where each summation is over an age range, with x = 15 to 49 years. Each formula yields a probability of maternal death over some portion of the female life course, given a particular set of
assumptions about other causes of death.
In these three equations, MMRatio[x] is the maternal mortality ratio at age x, MMRate[x] is the maternal mortality rate at age x, f[x] is the fertility rate at age x, ℓ[x] is the number of survivors
at age x in a female life table, and L[x] is the number of woman-years of exposure to the risk of dying from maternal or other causes between ages x and x + 1 for the hypothetical cohort of women
whose lifetime experience is depicted in the same life table. The equivalence between the two expressions in each equation follows from observing that
where, for a given time period, MD[x] is the number of maternal deaths occurring among women aged x, W[x] is the number of woman-years of exposure at age x in the observed population (in contrast to
L[x] , which refers to the hypothetical population of a female life table), and B[x] is the number of live births in women aged x. Therefore, MMRate[x] = MMRatio[x] × f[x].
Note that LR[2] and LR[3] are related as follows:
where ℓ[15]/ℓ[0] is the probability that a woman will survive from birth (i.e. 0 years) to age 15 years, as derived from a female life table. Equation 4 can be used for computing LR[2] from LR[3], or
vice versa.
To understand Equation 2 better, observe that each element of the sum can be represented verbally as follows:
Note that “woman-years lived at age x” refers in one case to the observed population and in the other to the hypothetical population of a female life table. Thus, the observed age-specific maternal
mortality rates are applied to the fictitious life-table population as a means of constructing a synthetic measure of lifetime risk for a given time period.
Summing Equation 2 across age (i.e. x = 15 to 49 years) yields the number of maternal deaths over the life course per female live birth, or in other words, the full lifetime probability of maternal
mortality, with other causes of death taken into account. A similar analysis of Equation 3 illustrates that it represents the adult lifetime probability of maternal mortality per 15-year-old female.
By contrast, Equation 1 contains the implicit assumption that the number of woman-years lived between ages x and x + 1 per female live birth (L[x]/ℓ[0]) is one for all ages, so in effect it ignores
all forms of mortality, including that from maternal causes. Thus, it is theoretically possible within this model for a woman to die more than once from a maternal cause over her lifetime (similar to
having more than one birth). This imprecision is unimportant, however, since MMRate[x] is typically quite small at all ages, usually less than 1 per 1000, and thus higher-order terms are negligible.
in all human life tables, it follows that:
Therefore, of the three concepts of lifetime risk, the first one, LR[1], yields the largest probability of maternal death over a lifetime. However, this value is inflated because deaths due to other
causes are ignored. If such deaths are factored into the calculation, the resulting lifetime risk of maternal death is reduced. A variant of LR[1] was used for computing the lifetime risk of maternal
mortality in UN estimates for the year 2000.^3
The second concept, LR[2], yields the smallest probability of maternal death over a lifetime, while the third concept, LR[3], yields a value that lies between the other two. Both LR[2] and LR[3] take
account of competing risks due to other causes of mortality. However, many deaths from other causes occur in childhood, before the risk of maternal death becomes relevant. If childhood deaths are
eliminated from the calculation, LR[3] reflects the adult lifetime risk of maternal death.
The size of the differences between the three measures in Equation 5 depends strongly on the level of overall mortality in a population. In populations with a high probability of survival to
adulthood, there is very little difference between them; the three measures differ most in populations with relatively high levels of mortality from all causes, including maternal causes.
For all three concepts, the measures of lifetime risk are hypothetical in the sense that they rely on the demographic patterns observed in a population during a single period of time. Thus, they
represent the lifetime risk of maternal mortality for a cohort of females who, hypothetically, are subject throughout their lives to prevailing demographic conditions, as reflected by age-specific
rates of fertility and mortality, including maternal mortality. Like life expectancy at birth, they are examples of “period” measures of population characteristics as used in standard demographic
Age-specific maternal mortality data
The Bangladesh Maternal Health Services and Maternal Mortality Survey of 2001 was a nationally representative survey that collected information about mortality in general and about maternal deaths in
particular.^7 The data presented here are based on births and deaths that occurred within interviewed households during a period of 3 years before the survey. For each reported death, information was
gathered on the age and sex of the deceased. In addition, if the deceased was a woman aged 13–49 years, follow-up questions were asked to determine whether the death was due to a maternal cause.
Using such information, it was possible to compute various age-specific measures of fertility and mortality, including maternal mortality. Table 1 illustrates the results obtained when all three
measures of lifetime risk were calculated for Bangladesh during 1998–2001 using data derived from the 2001 survey and Equation 1, Equation 2 and Equation 3. In these calculations, when age-specific
information about maternal deaths was used to compute the lifetime risk, the value of each measure was the same whether based on MMRatio[x] or MMRate[x].
Summary maternal mortality data for ages 15–49 years
In most situations, the age distribution of maternal deaths is not known and information is limited to summary measures, such as the MMRatio or the MMRate, which are computed using data on maternal
deaths, live births and woman-years of exposure for ages 15–49 years combined. To obtain the formulae for lifetime risk that are used in practice from Equation 1, Equation 2 and Equation 3, one must
assume that either the MMRatio or the MMRate is constant across all ages.
For example, if one assumes the MMRatio is constant across all ages, Equation 1, Equation 2 and Equation 3 can be simplified as follows:
Here, TFR is the total fertility rate, or the number of children per woman implied by age-specific fertility rates, f[x] , if we assume death does not occur until at least the age when menopause is
reached, and NRR is the net reproduction rate, or the expected number of female children per newborn girl given current age-specific fertility and mortality rates. The factor of 2.05 in Equation 2a
and Equation 3a comes from assuming a typical sex ratio at birth (i.e. 105 boys per 100 girls) and is needed here because the NRR is expressed in terms of female births only.
Alternatively, if we assume the MMRate is constant across age, the three equations become the following:
Here, T[15] – T[50] is a life-table quantity representing the number of woman-years lived between ages 15 and 50 years, and the factor of 35 in Equation 1b corresponds to the reproductive interval
from age 15 to 50 years. If a different reproductive interval were used for computing the MMRate, these equations would need to be modified accordingly.
These two sets of formulae can be considered as alternative approximations for Equation 1, Equation 2 and Equation 3. Their accuracy depends on the validity of the underlying assumptions: that either
MMRatio[x] or MMRate[x] has a constant value across the age range. In this regard, it is clear which of the two sets of approximations is preferable: MMRate[x] tends to be more stable over age than
MMRatio[x], as illustrated in Table 1, for the population of Bangladesh between 1998 and 2001. This pattern is expected to be observed in general and follows from the relationship linking these two
measures at a given age x. Recall that MMRatio[x] × f[x] = MMRate[x]. Thus, the relative stability of MMRate[x] over age occurs because the sharp age-related increase in the risk of maternal death
per live birth, MMRatio[x], is balanced by a sharp decline in the fertility rate, f[x], at older ages.
The greater accuracy of approximations based on the MMRate is confirmed in Table 2, which shows all three measures of lifetime risk computed for Bangladesh from 1998 to 2001 using three types of
information about maternal mortality: age-specific data, the MMRatio and the MMRate. The differences between rows in the table are consistent with the inequality in Equation 5. The differences
between columns confirm that estimates of lifetime risk derived using age-specific data are closer to approximations derived using the MMRate than to those derived using the MMRatio. Observe that, in
this example, estimates based on the MMRate have a small but consistent upward bias of around 2–3% in relative terms. However, estimates based on the MMRatio have a much larger downward bias, about
Finally, it is important to note that none of the lifetime risk measures in Table 2 is identical to the one used in the published report of UN maternal mortality estimates for the year 2000.^3 That
measure, here called LR[0], equals 1.2 × LR[1], as computed using Equation 1a. The factor of 1.2 was intended to serve as a means of incorporating maternal deaths associated with pregnancies that did
not result in a live birth. However, this adjustment is inappropriate, since the MMRatio depicts the frequency of maternal deaths in relation to the number of live births, not the number of
In summary, the choice between possible measures of the lifetime risk of maternal death has two dimensions: the desired concept of lifetime risk and the accuracy of the calculation method. Of the
three concepts of lifetime risk considered here, the first should be rejected as inappropriate because it ignores other forms of mortality (i.e. competing risks) and consequently exaggerates the
lifetime risk of maternal mortality. The other two concepts both take competing risks into account and differ only in terms of their starting point: either birth or age 15 years, with the latter
representing an approximate minimum age of reproduction.
There seem to be few precedents to guide the choice between the second and third concepts of lifetime risk. One source defined the “lifetime risk of maternal death” as the “probability of maternal
death during a woman’s reproductive lifetime”.^8 This definition seems to imply a conditional probability in which the pool of women at risk should include only those who survived to the age when
reproduction starts. Members of the working group that produced the UN estimates of maternal mortality for 2005 came to the same conclusion; namely, that the concept of “lifetime risk of maternal
mortality” should refer to the probability of maternal death conditional on survival to age 15 years, with other forms of mortality taken into account (i.e. LR[3]).
Ideally, measures of lifetime risk should be computed using age-specific data. In most situations, however, one does not possess age-specific information about maternal mortality. For international
comparisons, therefore, one needs a method that produces reliable results using either the MMRatio or the MMRate computed for ages 15–49 years. I have demonstrated here that MMRate[x] tends to be
more stable as a function of age than MMRatio[x] and, therefore, that the MMRate yields more accurate estimates of the lifetime risk of maternal death.
Based on these two conclusions about concept and accuracy, I recommend that LR[3] computed using the MMRate be used for international comparisons of the lifetime risk of maternal mortality. As noted
already, this approach was used to derive the 2005 UN estimates.^2
Table 3 compares estimates, for the world as a whole and for various regional groupings, of the lifetime risk of maternal mortality in 2005 derived using all the calculation methods discussed here,
except those that rely on age-specific data. Taking sub-Saharan Africa as an example, the range of estimates extends from 3.41% to 5.76%, or from 1 in 29 to 1 in 17. Note that the measure of lifetime
risk used for the 2000 UN estimates, LR[0], gives the highest value of the lot, whereas the measure recommended here and used for the 2005 estimates (i.e. LR[3] based on the MMRate) gives an
intermediate value of 4.47%, or 1 in 22.
For the population groupings shown in Table 3, the measure of lifetime risk used for the 2000 UN estimates exaggerates the lifetime risk relative to the measure used for the 2005 estimates by an
average of around 20%.
Thus, the two sets of estimates are not directly comparable: a trend analysis based on the 2000 and 2005 estimates of lifetime risk would exaggerate the pace of decline in some cases, while it would
understate the speed of increase or reverse the direction of change in others. For this reason, and because of other changes in the methods used between the 2000 and 2005 UN studies of maternal
mortality, the two sets of estimates should not be used for trend analysis. Any such analysis should focus on the 1990 and 2005 regional estimates of the MMRatio.^2 ■
The analysis presented here was initiated while the author was working for the UN Population Division. The author thanks his colleagues in the Maternal Mortality Working Group for their constructive
comments about this work. Special thanks to Emi Suzuki of the World Bank for assistance with data. The comments of two anonymous reviewers were very helpful.
Funding: Final data analysis and preparation of this article for publication were supported by a grant from the United States National Institute on Aging (R01 AG11552).
Competing interests: None declared.
|
{"url":"http://www.who.int/bulletin/volumes/87/4/07-048280/en/","timestamp":"2014-04-19T02:07:42Z","content_type":null,"content_length":"50064","record_id":"<urn:uuid:6273a0b5-f131-4783-8b3a-b1e9271fbf3b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
S.O.S. Mathematics CyberBoard
I'm just an imagination of your figment...
6 jars: jar A, jar B, jar C, jar D, jar E, jar F.
The 12 candies are dropped at random in the jars, 2 in each jar.
What's the probability that AT LEAST one jar contains 2 candies of same color?
Hello, Denis!
I think I've got it, but normally I miscount somewhere . . . *blush*
12 candies: 2 black, 2 blue, 2 purple, 2 red, 2 white, 2 yellow.
6 jars: jar A, jar B, jar C, jar D, jar E, jar F.
The 12 candies are dropped at random in the jars, 2 in each jar.
What's the probability that AT LEAST one jar contains 2 candies of same color?
There are: .
How many of these have no matching pairs of colors?
Place one of each color in each of the jars.
. . There are:
The other six must be placed so there are no matching pairs.
This is a derangement of the six candles. . There are:
Then there are: .no matching pairs.
Hence, there are: .some matching pairs.
Therefore: .
Nope; way off; probability is below 1/2.
Try a shorter one: 2 red and 2 blue with 2 jars: probability is 1/3, right?
Btw, they're CANDIES, not CANDLES
I'm just an imagination of your figment...
Is there a neat way to approach this problem? The only method I could find was a laborious listing of cases that can occur. Following Denis's suggestion of building up to the case of six colours and
six jars by starting with two of each and working upwards, I got the answer
That's what I got too, Opalg, but by writing a program.
Labelling the 12 candies 1 to 12 (1-2 = black .... 11-12 = yellow),
say a random result is (Jar A to F order):
7-3, 11-5, 6-1, 2-12, 9-8, 4-10
That's really the same as:
1-6, 2-12, 3-7, 4-10, 5-11, 8-9 (keep left < right)
Then check each "jar" for a difference of 1, highest being even.
Assigning variables A to L for 1 to 12, the looping goes:
A = 1
Loop B from A+1 to 12
Loop C from A+1 to 7
Loop D from C+1 to 12
Loop E from C+1 to 8
Loop F from E+1 to 12
Loop G from E+1 to 9
Loop H from G+1 to 12
Loop I from G+1 to 10
Loop J from I+1 to 12
Loop K from I+1 to 11
Loop L from K+1 to 12
Check each jar for at least one "yes":
Is B-A = 1 and B even?
Is L-K = 1 and L even?
Results in 4355 "yes" out of 10395 combos, hence 871/2079
Haven't been able to come up with a "neat way"...the way Soroban does!
Maybe he'll be back with one: to make up for his terrible goof in his post
I'm just an imagination of your figment...
I couldn't find a neat general solution for n candies and n/2 jars. It's way beyond my ability. I vaguely remember once looking for but not finding a solution to the menage problem excluding the
typical restriction that the seating must alternate man/woman. That seems to be a very similar problem.
I'll show my work. Not because it's any good. I'm just hoping that it might give some ideas to many of you here who are smarter and better educated than me. I doubt if this leads anywhere - so I
apologize in advance if this is way off and a complete waste of everyone's time.
First I create a sequence for the color of candies drawn
I list the six possible arrangements of four candies in a matrix:
I then have a matrix which describes the match status of each element in the matrix (i.e., whether or not the element matches its partner in the jar).
For odd values of
For even values of
So my "match-status" matrix for n=4 corresponding to the above outcomes is:
There needs to be a second status matrix which shows the potential new partner for each element if that particular element is shifted to the right by one space (assuming one candy is placed to the
left and one is placed to the right). Candies can be displaced by 0, 1 or 2 spaces. If they are displaced by 0 or 2 spaces they are paired with the same candy as before. If they are displaced by 1
space, they have a new partner. For odd numbered candies, this new partner will be to the left; for even numbered candies it will be to the right.
For the above matrix, this "alternate status" matrix is as follows:
Now I want to get from a matrix for
So, let
I can't take it from here. I don't know whether it goes anywhere from here, and I apologize again for my severe shortcomings in linear algebra. I hope it gives someone some ideas on another tack.
I have a feeling that there is no simple solution, but that possibly the probability approaches 1/2 as
"If I have not seen as far as others, it is because giants were standing on my shoulders." - Hal Abelson
Looking at 3 jars, 3 colors (color1 = 1,2; color2 = 3,4; color3 = 5,6):
as example, if 4,3,6,1,2,5 are random results, then Jars A,B,C = 4-3, 6-1, 2-5;
can be rearranged as 1-6, 2-5, 3-4.
There's 15 possible results:
12 34 56 yes
12 35 46 yes
12 36 45 yes
13 24 56 yes
13 25 46 no
13 26 45 no
14 23 56 yes
14 25 36 no
14 26 35 no
15 23 46 no
15 24 36 no
15 26 34 yes
16 23 45 no
16 24 35 no
16 25 34 yes
7 yes, 8 no; so probability = 7/15
The "1" being fixed, we have 3 combos for each of others:
so (c = nunber of candies) total combos = 3(c - 1)
Works similarly for our 12 candies: 945(c - 1) = 10,395 total combos.
BUT how is the 945 arrived at?
If we figure that out, we're still in trouble:
how is the 4355 "yes cases" arrived at?
I'm just an imagination of your figment...
My approach to the "pairs" problem was to look at this situation: Suppose that you have m+n jars, 2m+n colours and 2m+2n candies. There are n pairs of candies and 2m single candies. Each pair has one
of the n colours; each of the 2m single candies has a different colour. The candies are dropped at random in the jars, 2 in each jar, and P(m,n) denotes the probability that NO jar contains two
candies of the same colour.
Thinking about what happens when two random candies are dropped into the first jar, you get the recurrence relation
(The three terms on the right correspond to the three possibilities for the two candies: (a) they come from distinct pairs, (b) one comes from a pair and the other is single, (c) both are single.)
This recurrence relation satisfies the boundary conditions P(m,0)=1 (because if all the candies have different colours it's obviously impossible for a jar to contain two of the same colour), and P
(0,1)=0 (because then there are two candies, both having the same colour, and only one jar to put them in ...).
Applying the recurrence relation recursively, we get
After a few more recursions, the result comes out as
The pattern for the denominator is clear enough (!), but I don't see much hope of finding a simple pattern for the numerator.
(The original problem asked for 1 – P(0,6), but I think it's more natural to look at the case where there are no jars with candies of the same colour.)
Put a pair of candies in one jar, then fill the rest randomly.
There are 6 ways to choose a pair of candies, and 6 ways to choose a jar. Then there are 10!/2^5 ways to fill the rest.
But this counts too many, because there might be two pairs. So subtract the following number...
There are 6.6.5.5/2! ways to put two pairs of candies in two different jars. Then there are 8!/2^4 ways to fill the rest.
But this counts too many, because there might be three pairs. So subtract the following number...
There are 6.6.5.5.4.4/3! ways to put three pairs of candies in three different jars. Then there are 6!/2^3 ways to fill the rest.
But this counts too many... et cetera.
Answer: 3 135 600.
Divide by 7 484 400 to get 0.41895141895...
aswoods wrote:
> There are 6 ways to choose a pair of candies, and 6 ways to choose a jar.
WHY choose a jar?
> Answer: 3 135 600.
> Divide by 7 484 400 to get 0.41895141895...
Same as what we've already got: 871/2079 (from 4355/10395)
I'm just an imagination of your figment...
|
{"url":"http://sosmath.com/CBB/viewtopic.php?f=18&t=43152&p=184009","timestamp":"2014-04-19T19:33:50Z","content_type":null,"content_length":"61813","record_id":"<urn:uuid:51d0a75b-128e-4269-ac4b-28cf5aef4aa1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Tutor] Need help w/ a for loop
Monte Milanuk monte at milanuk.net
Thu Oct 23 06:09:26 CEST 2008
Hello all,
New guy here, so go easy on me ;)
I'm starting to work my way through Python Programming by Zelle, and have hit a bit of a wall on one of the programming exercises in Chapter 3 (#15 if anyone has the book handy).
What the question ask is: Write a program that approimates the value of pi by summing the terms of this series: 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11+... The program should ask the user for 'n', the number of terms to sum, and then output the sum of the first 'n' terms of this series.
Where I am running into problems is how to do the '-' & '+', depending on the value of 'n'. i.e. if 'n' = 3, it's going to be a - & an +, if 'n' =5 its going to be -, +, -, +, etc. How to make that work in terms of an algorithm is making my head hurt (and its so early in the book yet... ;) )
Here's what I have thus far:
# approximate_pi.py
# Approximates the value of 'pi' by summing the terms of a series.
import math
def main():
print "This program will approximate the value of pi"
print "to a degree determined by the user. "
# get the value of n from the user
n = input("How many terms do you want me to sum? ")
# create a loop from 1 to n+1, odd)
for i in range(1,n + 1,2):
# each term is '4/i' as it steps thru the loop starting with 1
x = 4 / i
# not sure where to go from here
# output the sum - convert it to a float just in case
print "The sum of the numbers you entered is", (float(sum))
# calculate the difference between our approximation and Python's pi
diff = sum - math.pi
# output the difference
print "The difference between your 'pi' & Python's pi is", diff, "."
Any assistance or nudges in the right direction would be most appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/tutor/attachments/20081022/fc3e79c5/attachment.htm>
More information about the Tutor mailing list
|
{"url":"https://mail.python.org/pipermail/tutor/2008-October/065003.html","timestamp":"2014-04-21T13:53:00Z","content_type":null,"content_length":"4655","record_id":"<urn:uuid:5177432b-ef0e-4473-9051-c01f562774c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proving continuity
May 15th 2011, 04:41 AM #1
Mar 2011
Proving continuity
Use the given definition of continuity to prove f is continuos at 0.
Definition: For all £>0, there exists a k such that abs(f(x)-f(a))<£ for all x such at
F(x) = 1+x^2 when x>_0 , f(x)= 1-x^3 when x<0
I have trouble understanding the implications of the definition of continuity.
I have that abs(f(x)-f(0))=abs(f(x)-1) but don't what to do next.
Use the given definition of continuity to prove f is continuos at 0.
Definition: For all £>0, there exists a k such that abs(f(x)-f(a))<£ for all x such at
F(x) = 1+x^2 when x>_0 , f(x)= 1-x^3 when x<0
I have trouble understanding the implications of the definition of continuity.
I have that abs(f(x)-f(0))=abs(f(x)-1) but don't what to do next.
You have to consider two cases:
$\left| {F(x) - F(0)} \right| = \left\{ {\begin{array}{rl} {x^2 ,} & {x > 0} \\ {-x^3 ,} & {x < 0} \\ \end{array} } \right.$.
Can you show me how to do it?
Nothing, I haven't encountered it before.
How can you say with any sincerity?
That is what this entire question is about. You posted it.
Did you post a question that you know absolutely nothing about?
If so, that is exceedingly odd.
I know the definition but I don't really get it and don't know how to apply it to questions. I meant I have not done a question like this from first principles. So if you could post half of the
method I will probably get it. I would prefer if you posted it all but you seem averse to that.
If that is the truth, the show us.
Suppose $\varepsilon > 0$ then let $\delta = \min \left\{ {1,\varepsilon } \right\}$.
Now YOU show that $\left| {x - 0} \right| < \delta \, \Rightarrow \,\left| {F(x) - 1} \right| < \varepsilon .$
I don't really understand what the definition intuitively means. An explanation would be welcomed.
In post 2,how have got that equation?
Anyway my attempt. x<0, abs(x)= -x so -x< sigma if sigma =min(1,e)
abs(f(x)-1)=abs(x^3)= (-x)^3<sigma^3<e.
Probably total rubbish.
Yes, it is rubbish.
Over and out
Anyway my attempt. x<0, abs(x)= -x so -x< sigma if sigma =min(1,e)
abs(f(x)-1)=abs(x^3)= (-x)^3<sigma^3<e.
I think this is pretty correct. I would expand it a little as follows. Let δ (it is called delta) be min(1,ε) and suppose that 0 < -x < δ. Then |f(x) - 1| = |x^3| = (-x)^3 < δ^3 <= δ (since δ <=
1) <= ε (since δ = min(1,ε)).
The case x >= 0 is very similar but simpler since |x| = x.
I don't really understand what the definition intuitively means.
Looks like you need a tutorial on epsilon-delta. I don't have good links, but this one I found just now seems all right. Look especially at this intuitive explanation using Flash.
eminently more helpful. How did you know to set delta to be min(1,epsilon)?
In this example, this was Plato's suggestion. The intuition is that we must make x^2 to be < ε when x < δ. Since δ^2 < δ when 0 < δ < 1, it is sufficient to choose such δ that both δ < 1 and δ <
ε hold. It is also possible to choose $\delta = \min(\sqrt{\varepsilon},\sqrt[3]{\varepsilon})$. And of course, any smaller value of δ also works, such as min(1,ε) / 2.
I general, there is no algorithm for choosing δ given ε; it completely depends on the function. The definition of the continuity just requires that for every ε some δ exists. When proving
continuity from the definition, one can use any means to find a δ. One becomes better at this with practice. It is even possible to prove the existence of δ by contradiction instead of coming up
with a specific example, i.e., to show that the assumption that no suitable δ exists is absurd.
May 15th 2011, 04:58 AM #2
May 15th 2011, 06:04 AM #3
Mar 2011
May 15th 2011, 06:08 AM #4
May 15th 2011, 06:22 AM #5
Mar 2011
May 15th 2011, 08:01 AM #6
May 15th 2011, 08:12 AM #7
Mar 2011
May 15th 2011, 08:22 AM #8
May 15th 2011, 08:57 AM #9
Mar 2011
May 15th 2011, 09:11 AM #10
May 15th 2011, 10:04 AM #11
MHF Contributor
Oct 2009
May 15th 2011, 11:19 AM #12
Mar 2011
May 15th 2011, 11:48 AM #13
MHF Contributor
Oct 2009
|
{"url":"http://mathhelpforum.com/differential-geometry/180643-proving-continuity.html","timestamp":"2014-04-19T23:08:39Z","content_type":null,"content_length":"73855","record_id":"<urn:uuid:90ddae3b-4d93-4bba-a91e-a4f0defe83d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Another PSA – this many cups of coffee will kill you.
Science Scout twitter feed
With Ben providing a nice substance abuse prelude, it seems like a good enough time to also explore other things related to death. Such as lethal doses – i.e. for things we scientists particularly
indulge in (like coffee and alcohol and, yes – the free cookies at Departmental seminars). We’ll start by looking at the fatality of coffee for this post, and here, for the scientist, the first place
to look a little deeper into this is the vaulted MSDS (or Material Safety Data Sheet).
For those not initiated in this lingo, MSDS are those documents that provide risk assessment and health considerations for any and all reagents, compounds, molecules, chemistries you might care to
use in a laboratory setting. Of course, the most press worthy value it often provides is the lethal dose. Which, according to wiki is:
the median lethal dose, LD[50] (abbreviation for “Lethal Dose, 50%”), LC[50] (Lethal Concentration, 50%) or LCt[50] (Lethal Concentration & Time) of a toxic substance or radiation is the dose
required to kill half the members of a tested population.
Anyway, for fun, and to shift focus if only momentarily away from the current remarkable events happening concerning pigs, viruses, and epidemiology, I thought it might be interesting to do some back
of the envelope calculations to bring to you, our dear reader, a public service announcement on not only the value of washing your hands, and initially isolating yourself when exhibiting flu
symptoms, but also to avoid things like drinking “X” many cups of coffee.
Anyway, this calculation is not as easy as it sounds because there’s a certain amount of kinetics that needs to taken into consideration, but at the very least, let’s first start with a few facts and
figures to get the ball going.
Firstly, if we’re going to focus on coffee, probably its most potent chemical component from an oral lethal dose point of view is the caffeine. However, from a purely empirical perspective, it might
actually be its water content that will kill you in the end. In other words, if you drink lots of coffee and plan on doing it to induce a fatality, it might be interesting to see what scenarios are
necessary for that death to be caused by too much caffeine versus too much water.
In any event, here are the numbers to concern ourselves with:
1. Average weight of a human: From wiki:
In the United States National Health and Nutrition Examination Survey, 1999-2002, the mean weight of males between 20 and 74 years of age was 191 pounds (86.6 kg, 13 st 9 lb); the mean weight of
females of the same age range was 164 pounds (74.4 kg, 11 st 10 lb)
Let’s use 80kg as an average.
2. A single cup of coffee on average contains about 250ml of water, and about 135mg of caffeine (link).
3. Lethal dose (oral intake for a rat, which has similar metabolism – although we should note, not identical metabolism) is about 192 mg/kg for caffeine and 90 mL/kg for the water.
4. However, the other part of the equation is we need to evaluate involves rates of elimination.
The half-life of caffeine–the time required for the body to eliminate one-half of the total amount of caffeine–varies widely among individuals according to such factors as age, liver function,
pregnancy, some concurrent medications, and the level of enzymes in the liver needed for caffeine metabolism. In healthy adults, caffeine’s half-life is approximately 4.9 hours.
And for water – this was a little harder, because water turn over rates I found, tended to revolve around the idea of an individual not imbiding in crazy amounts of fluids. So, for the sake of our
calculations, I’ll go with the follow piece of information:
It’s Not How Much You Drink, It’s How Fast You Drink It! The kidneys of a healthy adult can process fifteen liters of water a day! You are unlikely to suffer from water intoxication, even if you
drink a lot of water, as long as you drink over time as opposed to intaking an enormous volume at one time. As a general guideline, most adults need about three quarts of fluid each day. Much of
that water comes from food, so 8-12 eight ounce glasses a day is a common recommended intake. You may need more water if the weather is very warm or very dry, if you are exercising, or if you are
taking certain medications. The bottom line is this: it’s possible to drink too much water, but unless you are running a marathon or an infant, water intoxication is a very uncommon condition.
O.K. so let’s do the math.
First, an oral lethal dose for an 80kg human would extrapolate to 15,360mg of total caffeine. This technically is equivalent to the amount of caffeine absorbed from drinking 113 cups of coffee really
really really quickly. However, the reality is that this figure would instead result in a fatality due to water intoxication since 113 cups is close to 30 litres of water.
So let’s try a different tact: by focusing on a safe water ingestion figure (i.e. 15 litres per day when spread reasonably). This works out to 60 cups of coffee over a full day, or approximate one
cup every 24 minutes. Anyway, this is some pretty nasty math to figure out (since it’s a half life calculation with continual replenishing going on). Anyway, if you do the math, what you find is that
at the end of a 24 hour period, that average body would have retained a little less than 2500mg. Not even close to the 15,000 or so milligrams needed to reach the lethal dose. Presumably still not a
healthy thing to do, but withing the context of our LD[50], it sounds doable.
And the funny thing is, by the next day, that 2500mg would have been metabolized or cleared itself and only about 50mg of this is left behind. Which means that the net total amount of caffeine still
in a person’s system if he or she were to continue drinking a cup of coffee every 24 minutes for a 48 hour period is 2550mg (2500mg + 50mg).
It turns out that your body is potentially quite capable of dealing with such a heavy coffee dosage, because that new 2550mg level becomes 53mg by the next 24hours – therefore three days of drinking
a cup of coffee every 24 minutes will result in a net retention of 2553mg (2500mg + 53mg) and so on.
I haven’t had a chance to extrapolate this over the full year (365 days), but I’m pretty sure that even a constant coffee drinking regime (1 cup every 24minutes for the full year) wouldn’t work out
to a retention amount above the lethal dose.
All to say that your body pretty much kicks ass in its remarkable metabolism. Now, it’ll be interesting to maybe dig a little deeper with regards to how messed up a person gets with that base 2500mg
inside them (as I’m sure the case will be). As well, not sure what the deal would be with 15 litres of expresso shots per day – that may just about be enough!
1. #1 TLP May 1, 2009
What about the caffeine in energy drinks?
2. #2 Harlan May 1, 2009
Hah. With a housemate who was a toxicologist, I once figured out the LD50 of chocolate-covered espresso beans, which I was eating too many of. They’re a much more efficienct way of getting
caffeine into your system, and don’t have all that pesky water. Turns out that the LD50 was something like 2 kg. Which is a lot, but you could certainly fit it in your stomach over the course of
an hour or so.
Of course, you would wish you were dead long before you actually became dead…!
3. #3 ebohlman May 1, 2009
This probably won’t affect your results much, but the mean is a poor measure to use for adult human weight because it’s quite right-skewed (a 191-pound man who gained 125 pounds would lose a few
years of life expectancy; one who lost 125 pounds would lose all of it).
4. #4 majolo May 1, 2009
If your looking for the equilibrium level, I think the half-life of 4.9 days works out to eliminating 5.5% of the caffeine in the body every 24 minutes. So at equilibrium, the intake of 135mg
every 24 minutes would equal 5.5% of the total caffeine in the body, which works out to about 2450mg.
(Actually, I get only 2241mg after day 1, not sure how our models might be differing…)
5. #5 david Ng May 1, 2009
Thanks Majolo – mine was literally a “back of the envelope” calculation so didn’t have a calculator to do the full on first order analysis.
I think the proper term for the common calculation that is analagous to our needs (from a pharmocokinetic point of view) is called Multiple Dose Pharmocokinetic equation or something like that…
Is that how you got your other figure?
6. #6 natural cynic May 1, 2009
Another confounding factor in water intoxication is the concentrations of salts in the coffee, mostly sodium. There is slightly more sodium in coffee than standard tap water [5 mg/cup vs. 2 mg/
cup]. Therefore the amount of fluid coffee to kill you would be slightly higher than the equivalent dosage of water, since it’s usually the hyponatremia that gets you in water intoxication.
7. #7 TheBrummell May 1, 2009
To complicate things, caffeine impacts kidney function, doesn’t it? So is there a synergistic-lethal effect of caffeine dissolved in water? That sounds like rather nasty math, now that I think
about it.
And I’m curious what the sub-lethal effects of carrying 2500mg of caffeine in one’s bloodstream would be. I get jittery and have some minor GI tract issues if I drink too much coffee in too short
a time, but I’ve never gotten anywhere near one cup per 24 minutes.
8. #8 majolo May 1, 2009
I took a decay function f(t)=(1/2)^(t/4.9) to give a half- life of 4.9 hours (not days, sorry for the typo). This gave 1-f(24/60)=0.055 for the 24-minute decay rate. For the level after one day I
took the sum of 135*f(24*i/60), i going from 1 to 60 (you can simplify that as a geometric sum, but I have Mathematica handy). I might borrow this problem for a calculus exercise sometime.
9. #9 anon May 1, 2009
Does anyone want to calculate the equivalent figures for espresso (1.7g/L caffeine)?
10. #10 John May 1, 2009
The math isn’t quite as nifty, but just for fun:
11. #11 Martijn Lafeber May 3, 2009
Pfew, luckily there’s no need to rename my site http://www.getcoffee.at to http://www.gettea.at
12. #12 Leo Nasti June 5, 2012
A unique discussion might be priced at comment. I’m sure that you should generate more on the following topic, may possibly not be a taboo theme but generally people are too little to speak in
such themes. To the next. Kind regards
|
{"url":"http://scienceblogs.com/worldsfair/2009/05/01/lethal-doses-and-substance-abu/","timestamp":"2014-04-19T01:57:43Z","content_type":null,"content_length":"67670","record_id":"<urn:uuid:8eedc0f8-791f-4c36-88ab-f056263bbeba>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exact differential equation
November 13th 2009, 04:00 AM
Exact differential equation
hello all
I am having problem understanding this:
i solved (a) :
(4x^3 y^2 + 3x^2 y^4)dx+(2x^4 y + 4x^3 y^3)dy=0
and got this:
x^4 y^2 +x^3 Y^4 = c
the question is (b) to solve for IC :
and give all the solution. what is there beside x=0 ; y=0 ??
i think maybe x=-y^2 but I'm not sure
thanks for the help
November 13th 2009, 05:38 AM
From the IC:
c= 0
Therfore you would think
x^4 y^2 + x^3 y^4 = 0 is the solution
dy/dx = (4x^3 y^2 + 3x^2 y^4)/2x^4 y + 4x^3 y^3)
reduces to:
dy/dx = (4x^3 y + 3x^2 y^3)/2x^4 + 4x^3 y^2)
which has the equiilibrium solution y = 0 as well
Another way of looking at this is
x^4 y^2 + x^3 y^4 = 0
y^2( x^4 +x^3y^2) =0
which yields y = 0 or x = -y^2 as you suggested.
dy/dx = (4x^3 y + 3x^2 y^3)/2x^4 + 4x^3 y^2)
does not satisfy the conditions of the uniqueness theorem.
November 13th 2009, 05:40 AM
This is the resulting equation after applying the IC:
$<br /> x^3 y^2 \left( {x + y^2 } \right) = 0<br />$
so all the possible solutions are on the following curve:
$<br /> y = \left\{ {\left. { \pm \sqrt { - x} } \right|x \geqslant 0} \right\}<br />$
which is a canonical parabola rotated 90 degrees to the left.
November 13th 2009, 09:44 AM
thank you very much for that. ( just as i thought but i was not sure)
by the way what do you use to write in "math font"?
|
{"url":"http://mathhelpforum.com/differential-equations/114292-exact-differential-equation-print.html","timestamp":"2014-04-20T04:02:55Z","content_type":null,"content_length":"5770","record_id":"<urn:uuid:4341afbe-4463-4445-9267-3269e404a176>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
BlackShot Cheats
BlackShot Cheats
Uploaded by r4ge.
Blackshot Hack Features:
• AIMBOT:
- Autobot
- AimThru( Distance, Automatic, Crosshair )
- AimAt( Head, Neck, Spine )
- AimKey( Customizable )
- AutoSwitch
- No Recoil
- No Spread
- Name
- Distance
- Box
- Health
- 2D Radar
- Ignore friends
- PlayerChams
- XQZ Wallhack
- Draw Fps
- Draw Time
- Draw Resolution
- Crosshair (6 different types of crosshairs (color customizable)
- Menu position( X, Y )
- Menu Width
- Radar position( X, Y )
- Radar fade
- Save/Load to/from 6 different slots!
Also Features:
No Smoke
3D Boxes
FullBright (Works PERFECT!)
Crosshair (My custom rainbow)
False RenderFront/Back (See my False Render explanation)
Cool ass Menu
1) Download All Files !!!
2) Unpacked
3) Run “xxxGarenaForBlackshotGarena.exe”
4) Open “xxxGarenaForBlackshotblackshotshacksD3_D9_inje ct.exe”
5) Run Blackshot
6) Use “Arrow Key”
7) Enjoy
Thats it, Enjoy!
33 thoughts on “BlackShot Cheats”
1. Elyn Patchman
I’ve never read anything in their official site that says this model is bulletproof. But the saleslady told me that they are. I’ve read in one site too that this model are for shooting but it
never said that it’s bulletproof.
But I read in the news papers that some Oakleys withstand shotgun impact. I don’t know whether that’s the same as being bulletproof.
Anyone who has crosshair? Are they bulletproof?
btw, i already bought them. just curious.
2. nothin_nyce1
A rocket that is launched vertically is tracked by a radar station located on the ground 4 mi from the launch site. What tis the speed of the rocket at the instant its distance from the radar
station is 5 mi and the distance is increasing at the rate of 3600 mi/h
3. addmeonxbox360myuserisfallior
A.) At 3 P.M, ship A is 150 km west of ship B. Ship A is sailing east at 35 km/h and ship B is sailing north at 25 km/h. How fast is the distance between the ships changing at 7 P.M.? (Round your
answer to one decimal place.)
B.) A plane flying horizontally at an altitude of 2 mi and a speed of 500 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is
increasing when it is 4 mi away from the station. (Round your answer to the nearest whole number.)
I got 484 for this one….but apparently that is wrong?
4. sam N
A plane flying at a constant speed of 24km/min passes over a ground radar station at an altitude of 6km and climbs at an angle of 45 degrees. At what rate is the distance from the plane
increasing 2 minutes later (in km/min)?
5. heavenly sword
Does it comes from a satellite? is it video? are they pictures? how do they put everything together so u can go down from the sky and everything looks very smooth?
6. Ramblin Spirit
A plane flying with a constant speed of 300 km/h passes over a ground radar station at an altitude of 1 km and climbs at an angle of 30 degrees. At what rate is the distance from the plane to the
radar station increasing a minute later?
The answer is 296 km/ h. Need to know how they got that.
7. isk8at818
A plane flying at a constant speed of 24km/min passes over a ground radar station at an altitude of 6km and climbs at an angle of 45 degrees. At what rate is the distance from the plane
increasing 2 minutes later (in km/min)?
8. opurt
Does it comes from a satellite? is it video? are they pictures? how do they put everything together so u can go down from the sky and everything looks very smooth?
9. easton j
A plane flying horizontally at an altitude of 6 mi and a speed of 510 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is increasing
when it is 8 mi away from the station. Answer in mi/h
ALERT: Both answer below are incorrect. The correct answer is 337.33.
10. Wooooody
A state highway patrol car radar unit uses a frequency of 9.90 109 Hz. What frequency difference will the unit detect from a car approaching at a speed of 31.1 m/s?
Galileo attempted to measure the speed of light by measuring the time elapsed between his opening a lantern and his seeing the light return from his assistant’s lantern. The experiment is
illustrated in Figure 25-24. What distance, d, must separate Galileo and his assistant in order for the human reaction time, Δt = 0.1 s, to introduce no more than a 17% error in the speed of
I dont really know what is what so please help me…
11. krow147
1) A plane flying horizontally at an altitude of 1 mi and a speed of 480 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is increasing
when it is 2 mi away from the station. (Round your answer to the nearest whole number.)
2)A street light is mounted at the top of a 15-ft-tall pole. A man 6 ft tall walks away from the pole with a speed of 7 ft/s along a straight path. How fast is the tip of his shadow moving when
he is 50 ft from the pole?
12. have faith
A stationary radar operator determines that a ship is 7 km south of him. An hour later the same ship is 10 km southeast. If the ship moved at constant speed and always in the same direction, what
was its velocity during this time? (Hint: Take the origin to be the location of the radar.)
13. mmminja
A rocket that is launched vertically is tracked by a radar station located on the ground 4 mi from the launch site. What tis the speed of the rocket at the instant its distance from the radar
station is 5 mi and the distance is increasing at the rate of 3600 mi/h
14. Mistry
A plane flying with a constant speed of 300 km/h passes over a ground radar station at an altitude of 1 km and climbs at an angle of 30 degrees. At what rate is the distance from the plane to the
radar station increasing a minute later?
The answer is 296 km/ h. Need to know how they got that.
15. Con Orpe
A plane is climbing at a constant angle of 30 degrees. It is flying at a constant speed of 240 mph. At 1:00 pm is passes directly over a radar station at an altitude of 1 mile. How fast is the
distance between the radar station and the plane increasing at 1:01 pm?
Please explain and show how to do it!
16. Jonny
A plane flying at a constant speed of 24km/min passes over a ground radar station at an altitude of 6km and climbs at an angle of 45 degrees. At what rate is the distance from the plane
increasing 2 minutes later (in km/min)?
17. fattiemanny
plane flying with a constant speed of 180 km/h passes over a ground radar station at an altitude of 2 km and climbs at an angle of 30°. At what rate is the distance from the plane to the radar
station increasing a minute later?
124.11 was incorrect?
18. blarg blarg
A plane flying horizontally at an altitude of 6 mi and a speed of 510 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is increasing
when it is 8 mi away from the station. Answer in mi/h
ALERT: Both answer below are incorrect. The correct answer is 337.33.
19. Krazy Bob
I read something about the Navy’s new E-2D Advanced Hawkeye. Apparently, its ability to use STAP processing arises because it utilizes multiple receivers in its radar. So, would this feature
indicate that the AN/APY-9 radar on the E-2D is multistatic? If so, wouldn’t the multistatic array, combined with the low frequency of the radar, improve the E-2Ds ability to detect low-RCS
20. Scorch Delta-62
A plane flying horizontally at an altitude of 2 mi and a speed of 570 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is increasing
when it is 5 mi away from the station. (Round your answer to the nearest whole number.)
21. fattiemanny
A plane flying horizontally at an altitude of 1 mi and a speed of 580 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is increasing
when it is 5 mi away from the station. (Round your answer to the nearest whole number.)
22. Lia-lu-li
A plane flying horizontally at an altitude of 1 mi and a speed of 570 mi/h passes directly over a radar station. Find the rate at which the distance from the plane to the station is increasing
when it is 3 mi away from the station. (Round your answer to the nearest whole number.)
i tried and i got 540
23. Muzahid
I read something about the Navy’s new E-2D Advanced Hawkeye. Apparently, its ability to use STAP processing arises because it utilizes multiple receivers in its radar. So, would this feature
indicate that the AN/APY-9 radar on the E-2D is multistatic? If so, wouldn’t the multistatic array, combined with the low frequency of the radar, improve the E-2Ds ability to detect low-RCS
24. Roflcopter
How would they know exactly what they’re bombing without being able to see it? Radar wouldn’t be good enough to show exactly what building they’re hitting because all buildings look the same on
I’m just confused on how they did it.
FYI air to ground radar does show buildings.
25. Thomas A
I am prestege 2 level 17 and ive been playing for 2d 1h 30m and my KD is 0.948 i have tried everything to improve but i just get worse and when i think im improving i just end up being the same
or worse. Can someone Please help me?
26. Brendan O
Why does the fact that the speed of light travels at the same speed in all circumstances (a moving car, from a house lamp) entail that space/time must distort, move? I don’t see how this follows.
Also, if space/time is a fabric and the planets weigh down on it, what’s above the fabric? Wouldn’t it have to be more fabric, space/time?
27. mal_functiongeo
A highway patrol plane flies 3 mi above a level, straight road at a steady 120 mi/h. The pilot sees an oncoming car and with radar determines that at the instant the line-of-sight distance from
plane to car is 5 mi the line-of-sight distance is decreasing at the rate of 160 mi/h. Determine the car’s speed along the highway.
28. mendhak
1) A rocket that is launched vertically is tracked by a radar station located on the ground 3 mi from the launch site. What is the vertical speed of the rocket at the instant that its distance
from the radar station is 5 mi and this distance is increasing at a rate of 5000 mi/h?
2) A circular oil slick of uniform thickness is caused by a spill of 1m3 of oil. The thickness of the oil slick is decreasing at the rate of 0.1 cm/h. At what rate is the radius of the slick
increasing when he radius is 8 m?
I am lost on both of these. Any help would be much appreciated.
29. Praveen
I need someone that can thoroughly explain to me in depth how waves die off. I know that WiFi at homes have only a certain radius you can be at until the signal dies off; however, I don’t
understand why. I also know that as you increase frequency, you have shorter wavelength; I’m confused about the energy part of the wave. I need information on how and why a radar wave (I’m not
sure if radar is another type of wave or not, correct me if I’m wrong) only can travel a certain distance before you can no longer see any objects appear on the display. Please do not talk about
interference, I want to know in a experiment where no outside factors are present, why a wave with a high frequency will die off as it travels farther.
If I have made no sense, let me tell you what I need for a requirement for my presentation.
My topic is about radars and how it relates to the Electromagnetic Spectrum.
The requirement I’m confused about is: I must include all valid relationships between wavelength, frequency and energy of my topic.
I already know wavelength and frequency, I’m just lost as to what to type or research about the energy of the radar waves (again, correct me if there are no such things as radar waves).
If you give me an acceptable answer to this problem, I will give you all the points I can give you; as another incentive: I could help you with any computer questions you may have too, if you’d
Thanks for viewing!
30. Brody S
A port and a radar station are 3 mi apart on a straight shore runnign east and west. A ship leaves the port at noon traveling at a rate of 13 mi/hr. If the ship maintains its speed and course,
what is the rate of change of the tracking angle Ɵ between the shore and the line between the radar station and the ship at 12:30 PM?
31. Mc L
A plane flying with a constant speed of 24km/min passes over a ground radar station at an altitude of 16 km and climbs at an angle of 25 degrees. At what rate, in km/min is the distance from the
plane to the radar station increasing 1 minute later?
32. Sonny
A possible means for making an airplane invisible to radar is to coat the plane with an antireflective polymer. If radar waves have a wavelength of 2.60 cm and the index of refraction of the
polymer is n = 1.60, how thick would you make the coating?
33. Elijah luv
What I mean is, when you look through the scope of a gun, are the crosshairs always black, or do they make them in colors like red, green, etc.
I’m talking about real guns here, not paintball or anything like that.
I didn’t know where else to post this question, so I posted in hunting.
|
{"url":"http://downloadhack.net/blackshot-cheats/","timestamp":"2014-04-19T01:47:43Z","content_type":null,"content_length":"196448","record_id":"<urn:uuid:1904a196-62e2-493d-a9f7-231f29056075>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 1,485
yup! all were right! thanks so much!!:)
The first two please
please help me solve my homework with all the work involved. i have trouble paying attention in class so it's hard for me to do my homework. here are a few of the questions. I need to find out what
the letters equal. 1.) 10(-4+y)=2y 2.)9(b-4)-7b=5(3b-2) 3.)-(8n-2)=3+10(1-3...
A spinner is divided into 8 equal sections. There are 3 yellow sections and 5 green sections. The spinner is spun 32 times. Which proportion can be used to find the number of times, B, that the
spinner could be expected to land on a green section?
Opps I mean dedicate*
I am writing a paragraph about why I want to participate in this school counsil thing and I was wobderi if the word choice for "deticate" in this sentence makes sense: I want to deticate myself to
participate in more school activities.
Is this sentence right? I have never had a pet before.
I'm just courious what "ragweed mix" is/are. Is it one type of plant or a bunch?
the construction of Levitt Towns
stats 101
how do you find the lower class limit,upper class limit, class width, class midpoint, and class boundaries from a set of frequencies data
Two positive charges, each of magnitude 2 10-6 C, are located a distance of 17 cm from each other. (a) What is the magnitude of the force exerted on each charge?
health and diseases
Im not sure how to do it? I need suggestions
health and diseases
I need help on doing a powerpoint presentation about trichomoniasis
I am stuck on this question: what mass of KCl in grams when 2.00 lbs KC10 3 is decomposed?
why is anthropology necessary in a career/life?
abnornal psych
so what are the necessary reason for and aganist establishing a specific disorder related to suicide?
abnornal psych
should suicide be added to the DSM IV- TR and what are the reason for and aganist establishing a specific disorder related to suicide?
If y is directly proportional to x and inversely proportional to z^2, by what value is y multiplied if x is doubled and z is multiplied by 3
You have a mass of 71 kg and are on a 51-degree slope hanging on to a cord with a breaking strength of 165 newtons. What must be the coefficient of static friction to 2 decimal places between you and
the surface for you to be saved from the fire? In the previous problem if the...
College physics
A skier traveling at 30.9 m/s encounters a 21.5 degree slope. If you could ignore friction, to the nearest meter, how far up the hill does he go?
I'm still a little confused. When I use the equation: W = k(x2^2 - x1^2)/2, I get 0.0216 J. This doesn't seem correct.
A spring with k = 54 N/cm is initially stretched 1 cm from its equilibrium length. (a) How much more energy is needed to further stretch the spring to 2 cm beyond its equilibrium length? (b) From
this new position, how much energy is needed to compress the spring to 2 cm short...
Calculate the final pressure on the addition of 2.0 g He(g) to a vessel of 10.0 L fixed volume already containing O2(g) at 25°C and 740 mm Hg
Learning through Play
One of the parents in your school says, I really enjoyed coming to Parents Night last week and seeing all the things the children are doing. I was particularly interested in your comments about the
materials which enhance cognitive abilities. Why is it so much bett...
Calc- Trig Substitution
integral of dx/square root(x^2-64) My answer is: ln (x/8 + square root(x^2-64)/8) +c is this right?
A model of a helicopter rotor has four blades, each 3.40 {m} in length from the central shaft to the tip of the blade. The model is rotated in a wind tunnel at 580 {rev/min}. What is the radial
acceleration of the blade tip, expressed as a multiple of the acceleration g due to...
how many times a proton is heavier than an electron
Write an addition problem in which, when the addends are rounded to the nearest ten, the sum is 90. Then explain how you use estimation to solve the problem.
A skydiver jumps out of a hovering helicopter, so there is no forward velocity. Use this information to answer questions 1 to 9. Ignore wind resistance for this exercise. What is the skydiver's
functions 12
after i solve for x then that is my answer? so x=1/3,-1
functions 12
so would it be (3sinx-1)(4sinx+4)=0?
functions 12
solve 12cos^2X=8+13sinX between the window of -360 ¡Ü x ¡Ü 360
Why are hydrogen ions NEVER found in an aqueous solution?
the top of a giant slide is 30 ft off the ground. the slide rises at a 30 degree angle. to the nearst whole foot, what is the distance down the slide?
I have a few questions if someone could please help with them: Which of the following is true: A.Relative dating tells us how long ago something existed B.Disconformities are less common but usually
far more conspicuous because the strata on either side are essentially C.Estim...
A solution of .12 mol/L of phosphoric acid completely neutralizes .20L of a solution of sodium hydroxide with.a concentration of .15mol/L What is the neutralizing volume of the phosphoric acid?
I am doing a presentation on biomedical issues- abortion. My part is what issues involve problems with consent. Can you help please
Earth Science
Which of the following is true: -Relative dating tells us how long ago something existed -Disconformities are less common but usually far more conspicuous because the strata on either side are
essentially -Estimates indicate that the North American continent is being lowered a...
Are these right? I didnt know some of them. 1. AgNO3 Soluble, rule 3 2. Ag2SO4 Insoluble, rule 4 3. HgCl2 Insoluble, rule 2 4. BaSO4 Insoluble, rule 4 5. CaCl2 Soluble, rule 2 6. NH4OH ? 7. PbCl4
Insoluble, rule 2 8. Mg(C2H3O2)2 Soluble, rule 3 9. HgCl Soluble, rule 2 10. CaF2...
how do you determine the solubility of the following salts: 1. AgNO3 2. Ag2SO4 3. HgCl2 4. BaSO4 5. CaCl2 6. NH4OH 7. PbCl4 8. Mg(C2H3O2)2 9. HgCl 10. CaF2 11. CuO 12. AgI 13. Al(OH)3 14. Fe2(CO3)3
15. CrPO4 16. K2S
what are the rules used to determine the solubility of salt?
Honors Chemistry
1. What is the pH of a solution whose [H3O+] is 1 X 10^ -5 M? 2. What is the [H3O+] concentration of a solution with a pH of 9? 3. What is the pH of a solution whose [H3O+] concentration is 3 X 10^-3
M? 4. What is the pH of a solution with a [H3O+] concentration of 1 X 10^ -12...
Shurley English
What are Single Quotes and when do you use them?
A cannonball is fired into the air with a speed of 42 m/s from a height of 28 m, with what speed does the cannonball hit the ground?
An unmarked police car traveling a constant 95km/h is passed by a speeder traveling 14km/h .Precisely 2.50s after the speeder passes, the police officer steps on the accelerator; if the police car's
acceleration is 2.30m/s^2 , how much time passes before the police car ove...
The object distance for a convex lens is 15.0 cm, and the image distance is 5.0 cm. The height of the object is 9.0 cm. What is the height of the image?
a.16 inches by 16 inches b.15 inches by 15 inches c.19 inches by 19 inches d.17 inches by 17 inches
Janna is creating a multi-tiered cupcake display for a party. She plans on having 16 cupcakes on each tier. -There is a 1-inch space between all cupcakes. -The cupcakes have a uniform width
[diameter] of 3 inches. -Each tier will display the cupcakes in either 2 rows of 8, or ...
physics ! HELP PLEASE !!!!1
what are the methods of computing the veolicty of a particle in one- dimensona motion from a graph ?
physcis ! HELP !!!!
which of collowing situations describe a particle that might have a non-zero velocity? canbe more than one A) a particle that has a constant position as a function of time B) a particle that has its
position changing as a fnction of time c)when you graph the particle's acc...
which of collowing situations describe a particle that might have a non-zero velocity? canbe more than one A) a particle that has a constant position as a function of time B) a particle that has its
position changing as a fnction of time c)when you graph the particle's acc...
what are the methods of computing the veolicty of a particle in one- dimensona motion from a graph ?
Using the formula for BMI: B = 703(w)/ h² When the weight is 100 lbs and the BMI is 20, how do I figure out the persons height?
An automobile traveling 95 overtakes a 1.10--long train traveling in the same direction on a track parallel to the road. Q1: If the train's speed is 75 , how long does it take the car to pass it
Q2:How far will the car have traveled in this time? Q3:What is the time if the...
An automobile traveling 95 overtakes a 1.10--long train traveling in the same direction on a track parallel to the road. Q1: If the train's speed is 75 , how long does it take the car to pass it
Q2:How far will the car have traveled in this time? Q3:What is the time if the...
i got answers to some of the questions Q1: 3.3 mins Q2: ?km <- still need help with Q3:23.3 Q4:?km still need help with
An automobile traveling 95 overtakes a 1.10--long train traveling in the same direction on a track parallel to the road. Q1: If the train's speed is 75 , how long does it take the car to pass it
Q2:How far will the car have traveled in this time? Q3:What is the time if the...
Identify 5 different materials that you encounter or use on a daily basis, and identify a mineral that is mined to supply each of those materials.
Thank you for the help :)
The Longhorns punted the ball from the 5-yard line. The punt was caught at the 41-yard line and returned to the 19-yard line. The ball did not cross the 50-yard line. What was the net yardage on the
Physics Help!!!
its wrong :/
Physics Help!!!
One of the 79.0 -long strings of an ordinary guitar is tuned to produce the note (frequency 245 ) when vibrating in its fundamental mode. Part A- If the tension in this string is increased by 3.2 ,
what will be the new fundamental frequency of the string? F= ? Hz i understood ...
Physics Help!!!
One of the 79.0 -long strings of an ordinary guitar is tuned to produce the note (frequency 245 ) when vibrating in its fundamental mode. Part A- If the tension in this string is increased by 3.2 ,
what will be the new fundamental frequency of the string? F= ? Hz i understood ...
Two point charges are fixed on the y axis: a negative point charge q1 = -23 µC at y1 = +0.23 m and a positive point charge q2 at y2 = +0.33 m. A third point charge q = +7.8 µC is fixed at the origin.
The net electrostatic force exerted on the charge q by the other ...
A cylinder shaped sculpture is 24 meters high with a diameter of 6.8 meters. An artist plans to spray paint the entire surface with silver paint. If 1 can of spray paint covers 50 square meters, how
many cans does the artist need to paint the sculpture?
Chemistry III
I'm sorry thank you so much for all your help but I seem to be getting the wrong answer for part c and I don't understand how to get the individual ion's moles for part b. I have the correct volume
of 0.00197L. For part c I am doing as you say and I get a pOH of 5....
Chemistry III
Assume you dissolve 0.240 g of the weak acid benzoic acid, C6H5CO2H, in enough water to make 1.16 102 mL of solution and then titrate the solution with 0.153 M NaOH. C6H5CO2H(aq) + OH -(aq) C6H5CO2-
(aq) + H2O(l) (a) What was the pH of the original benzoic acid solution? (b) Wh...
Chemistry II
You place 1.234 g of solid Ca(OH)2 in 1.00 L of pure water at 25°C and it partially dissolves. The pH of the solution is 12.40. Estimate the Ksp for Ca(OH)2.
A 20.2 mL sample of a weak monoprotic acid, HX, requires 50.0 mL of 0.060 M NaOH to reach the equivalence point. After the addition of 30.0 mL of NaOH, the pH is 5.50. What is the Ka of HX?
The pet shop owner told jean to fill the new fish tank 3/4 full of water.Jean filled the tank 9/12 full.What fraction of the tank does she still need to fill?
A student is interested in whether students who study with music playing devote as much attention to their studies as do students who study under quiet conditions (he believes that studying under
quiet conditions leads to better attention). he randomly assigns participants to ...
Given info: Reaction: 3A+2B--->2C+D {A] (mol/L) 1.0 ×10^-2 1.0 ×10^−2 2.0 ×10^−2 2.0 ×10^−2 3.0 ×10^−2 [B] (mol/L) 1.0 3.0 3.0 1.0 3.0 Rate of appearance of C (mol/L-hr) 0.30×10^−6 8.10×10^−6 3.2...
Given info: A] (mol/L) [B] (mol/L) Rate of appearance of C (mol/L-hr) 1.0 ×10^-2 1.0 0.30×10^−6 1.0 ×10^−2 3.0 8.10×10^−6 2.0 ×10^−2 3.0 3.24×10^−5 2.0 ×10^−2 1.0 1.20×10^−6 3.0 ×10...
calculate the specific rate constant between substances A and B has been found to given data 3A + 2B ---> 2C + D
calculate the specific rate constant A] (mol/L) [B] (mol/L) Rate of appearance of C (mol/L-hr) 1.0 ×10^-2 1.0 0.30×10^−6 1.0 ×10^−2 3.0 8.10×10^−6 2.0 ×10^−2 3.0 3.24×10^−5 2.0 ×10^−2 1.0 1.20×...
calculate the specific rate constant A] (mol/L) [B] (mol/L) Rate of appearance of C (mol/L-hr) 1.0 ×10^-2 1.0 0.30×10^−6 1.0 ×10^−2 3.0 8.10×10^−6 2.0 ×10^−2 3.0 3.24×10^−5 2.0 ×10^−2 1.0 1.20×...
collect the specific rate constant....
calculate the specific rate constant A] (mol/L) [B] (mol/L) Rate of appearance of C (mol/L-hr) 1.0 ×10^-2 1.0 0.30×10^−6 1.0 ×10^−2 3.0 8.10×10^−6 2.0 ×10^−2 3.0 3.24×10^−5 2.0 ×10^−2 1.0 1.20×...
calculate the specific rate constant A] (mol/L) [B] (mol/L) Rate of appearance of C (mol/L-hr) 1.0 ×10^-2 1.0 0.30×10^−6 1.0 ×10^−2 3.0 8.10×10^−6 2.0 ×10^−2 3.0 3.24×10^−5 2.0 ×10^−2 1.0 1.20×...
How would you analyze a solution known to contain Hg22+ and Cu2+?
Determine the value for the acceleration due to gravity at the surface of the planet Mars, given the following data: Mass of Mars = 6 .42 x 10^23 kg Radius of Mars = 3.37 x 10^6 m Thanks!
Take the radius of the earth's orbit to be 1.5 x 1011 m. a. Using a year to be 365 days, find the speed of the earth as it moves around the sun. __m/s b. Convert this speed to miles/hour (show your
conversion factor in your written backup). __miles/hour
Take the radius of the earth at the equator to be 6380 km a. Find the speed the speed of an object at the equator due to the earth's rotation. ___m/s b. Convert this speed to miles/hour (show your
conversion factor in your written backup). __miles/hour
Explain how to calculate margin of error. What effect does increasing your sample size have on the margin of error? What effect does it have on the confidence level? What effect does it have on the
confidence interval?
a bag contains only nickels and dimes. there are exactly 18 coins in the bag. the amount of money in the bag is $1.25. let x represent the number of nickels in the bag. let y represent the number of
dimes in the bag. 1. write a system of equations that correctly models this si...
you are planning a cookout. you figure that you will need at least 5 packages of hot dogs and hamburgers in total. a package of hot hods costs $1.90 and a package of hamburgers costs $5.20. you can
spend a maximum of $20 in total on the hot dogs and hamburgers. Identify two po...
find each probability if you pick a card, do not replace it, then pick a second 3 white cards 5 black cards and you pick black, then white
Explain how to calculate margin of error. What effect does increasing your sample size have on the margin of error? What effect does it have on the confidence level? What effect does it have on the
confidence interval?
Explain how to calculate margin of error. What effect does increasing your sample size have on the margin of error? What effect does it have on the confidence level? What effect does it have on the
confidence interval?
1.5x+4y=8 x+y=7 Please help me solve this system of equation and show your work.
How many liters of 0.101 m NaCl contains 1.70 mol of this salt? Assume this solvent is H2O. Answer in units of L I do not understand what i am looking for or how to solve for it. Steps would be
A DVD player's laser reader helps to "read" the information on the DVD. A particular laser reader measures 0.56 micron thick. does this fall with the required standards if a company uses |t - 0.03 |
<= 0.525 for determining the part to be acceptable?
A typical DVD can store between 137.9 and 142.1 minutes of data before it is considered full. write an absolute value inequality describing the minutes of data (w) a full DVD will hold.
A bump on a DVD needs to be between 0.45 and 0.55 micron wide and 0.95 and 0.99 micron high. write two absolute value inequalities describing the width (x) and height (y) of the bump.
suggest a reason for the failure of some of the molecules to diffuse through the semipermeable membrane.
.28C = 30C * x/.2kg x = .001867 moles S .001867 moles S = .48g S/x x = 8.0177 molecular formula for S Is this correct?
If 0.48 g of sulfur are added to 200 g of carbon tetracholoride, and the freezing point of the carbon tetrachloride (Kf=30 degrees C/m) is depressed by 0.28 degrees C, what is the molar mass and
molecular formula of the sulfur? .48g * 1mol/32.07g= .01497 mol S .01497/.2 kg = ....
.48g * 1mol/32.07g= .01497 mol S .01497/.2 kg = .07485 molality 273.28K= i * 293K * .07485 i = .08 ??
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Kelly&page=4","timestamp":"2014-04-19T04:28:42Z","content_type":null,"content_length":"31164","record_id":"<urn:uuid:b41b031d-5fbb-403e-9823-fccd67dd58b7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Misunderstandings of Power-Law Distributions
Power laws are ubiquitous. In its most basic form, a power-law distribution has the following form:
$Pr\{x=k; a\} = \frac{k^{-a}}{\zeta(a)}$
where $a>1$ is the parameter of the power-law and $\zeta(a) = \sum_{i=1}^{+\infty} \frac{1}{i^a}$ is the Riemann zeta function that serves as a normalizing constant.
Part A
Often in research we get data sets that seem to follow a power-law. Since power-laws are cool, it is common to examine whether some distribution is indeed a power law and then try to estimate its
One of the most common approaches is to:
• Generate a frequency-count histogram: in the x-axis we have the frequency of an event, and in the y-axis we have how often such events happen (count).
• Plot the histogram in a log-log plot. Since the power law is $Pr\{x=k\} \propto k^{-a}$ then $\log(Pr\{x=k\}) = -a \cdot \log(k) +b$, the resulting plot shows typically a straight line, with a
negative slope.
• Run least-squares regression to estimate the slope and the intersection of the best fitting line.
• If the $R^2$ is high (or if the line looks to fit nicely), then the distribution is a power-law and the parameters are given by the regression.
Unfortunately, this (common!) strategy is wrong. Running a regression in a log-log space violates many of the underlying assumptions of an OLS regression. First of all, the errors in a log space are
distorted and cannot follow a normal distribution. Second, many distributions will tend to fit a straight line in a log-log plot. In fact, I have seen people fitting Gaussians with a few outliers to
a log-log plot and naming the distribution a power-law.
For the reasons above, it is always better to use more robust estimating techniques to estimate the parameters of a power-law. Personally,
I have been using
the maximum likelihood estimator. You can also read
this great paper in arXiv
that discusses estimation techniques for many variations of power-laws. (See Appendix A for a more rigorous discussion of the perils of the least-squares approach.) From SIGMOD, I also recall the
paper on modeling skew in data streams that presents
an approach that converges faster
than the MLE estimate.
Have you seen any Bayesian estimator? The MLE estimator tends to be unstable with only few observed sample values and it would be useful to impose a prior, limiting the acceptable parameter values to
a predefined space. I have seen
a short discussion (Section II.B)
but the approach requires numeric methods. Any method that results in a closed-form expression?
Part B
Another common misunderstanding of the power-law distributions is their interpretation. A common description of a power-law is "
there are only a few events that occur frequently and many events that occur only a few times
." While this is true, it hides the fact that the "
few events that occur frequently
" appear much more often compared to other "
" distributions. In other words, under a probabilistic interpretation, we have
too many
events that "occur frequently"
only a few
For example, word frequencies tend to follow a power-law distribution. This means that there are a few words (like
, and so on) that appear frequently, and many words that occur only a few times. However, if words frequencies followed a Gaussian distribution, then we would never observe words with high frequency.
Most of the words would tend to appear with roughly equal frequencies, and the outliers would be extremely rare.
To inverse the example, if the height of people followed a power-law instead of a Gaussian (say with $a=2$), then we would have 5 billion midgets at 1ft, 1.2 billion short people at 2ft, 300 million
people 4ft tall, .... 1.2 million 64ft tall, 20 thousand at 500ft, and a handful of 30,000ft tall.
In other words, power-laws have
heavy tails
outliers appear frequently.
Not acknowledging the frequent-outliers phenomenon can lead to wrong modeling decisions. See for example Manderbrot's article "
How the Finance Gurus Get Risk All Wrong
Part C
Finally a question and call for help: What is the distribution of the sum of two power-law random variables? In other words, we try to compute the distribution for the random variable $Z=X+Y$ where
$X$ and $Y$ follow power-law distributions with parameters $a_1$ and $a_2$. Following the standard definition, we need to compute the convolution:
$Pr\{ z \} = \sum_{k=1}^{z-1} Pr\{ x=k \} \cdot Pr\{ y=z-k \} =\sum_{k=1}^{z-1} \frac{k^{a_1}}{\zeta(a_1)} \cdot \frac{(z-k)^{a_2}}{\zeta(a_2)}$
(The index $k$ goes from $1$ to $z-1$ because power-laws are only defined for positive integers.)
Now, what is the result of this convolution?
The paper by Wilke et al.
claims that if we sum two power-law random variables with parameters $a_1$ and $a_2$ the result is again a power-law distribution with parameter $\min(a_1,a_2)$.
If we use the
Fourier transform of a power-law
is this transformation correct?
), and compute the convolution by taking the product of the transforms and the going back, then the resulting distribution is again a power-law and the configuring parameter seems to be $a_1 + a_2
+1$. My gut feeling though says that some underlying assumption is violated, and that this result is not correct.
Any other pointers?
Update (1/31/2008): Daniel Lemire reposted my bleg
(I learned from Suresh that
bleg stands for "blog request for help"
), and
Yaroslav Bulatov
pointed to the paper "
The Distribution of Sums of Certain I.I.D. Pareto Variates
" by Colin M. Ramsay, which pretty much answers the question. The derivation uses Laplace transforms (instead of Fourier) and the Bromwich integral, and tends to be quite technical. However, if you
look at Figures 1 and 2, you will pretty much understand the result of the summation. On a first check, the shape of the resulting distribution seems similar to the result of summing two exponential
distributions (see the comments below).
|
{"url":"http://www.behind-the-enemy-lines.com/2008/01/misunderstandings-of-power-law.html","timestamp":"2014-04-20T20:55:41Z","content_type":null,"content_length":"96588","record_id":"<urn:uuid:e5162534-59bc-436b-84db-d5a86164847d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Distributive Law
Year 7 Interactive Maths - Second Edition
The Distributive Law
Consider the expression a(b + c).
This expression represents the area of a rectangle of length b + c and width a. So, this expression can be represented as:
This can be split up into two parts as follows:
Now, the sum of the areas of the rectangles = ab + ac.
This suggests that:
This is called the Distributive Law.
Example 14
Example 15
Example 16
Key Term
Distributive Law
|
{"url":"http://www.mathsteacher.com.au/year7/ch05_algebra/09_dist/law.htm","timestamp":"2014-04-20T00:49:20Z","content_type":null,"content_length":"11953","record_id":"<urn:uuid:55215bb8-7561-49b9-8b9d-5a5ad4df43ab>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
what is the maximum height...
October 10th 2011, 06:06 PM #1
Senior Member
Aug 2009
what is the maximum height...
ok so this problem seems pretty straight forward.
a ball is thrown straight up from the ground with an initial velocity of 256 ft/s. the equation h = 256t -16t^2 describes the height the ball can reach in t seconds.
ok so i take the first derivative and find that t = 8 seconds or in 8 seconds the ball reaches the max height.
when i plugged in the 8 seconds i got 1024 as the max height in feet, my book says the answer is wrong? ok so where did i mess up?
Re: what is the maximum height...
Seems correct to me. Even starting with the acceleration function and integrating to velocity you obtain the same answer.
Re: what is the maximum height...
Because that is a simple quadratic you can also find the maximum height by completing the square:
$h= 256t- 16t^2= -16(t^2- 16t+ 64- 64)= -16(t- 8)^2+ 1024$
Since a square is never negative, $-16(t- 8)^2$ is never positive and h is never larger than 1024 which is its value when t= 8.
October 10th 2011, 06:47 PM #2
Junior Member
Aug 2010
October 11th 2011, 07:40 AM #3
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/calculus/190042-what-maximum-height.html","timestamp":"2014-04-18T14:49:39Z","content_type":null,"content_length":"35509","record_id":"<urn:uuid:aefa9c36-b77a-46bc-a533-3f73ec0a7e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Three Four Five
Copyright © University of Cambridge. All rights reserved.
'Three Four Five' printed from http://nrich.maths.org/
As shown in the diagram, two semi-circles (each of radius $1/2$) touch each other, and a semi-circle of radius 1 touches both of them. Find the radius of the circle which touches all three
semi-circles, and find a 3-4-5 triangle in the diagram.
|
{"url":"http://nrich.maths.org/671/index?nomenu=1","timestamp":"2014-04-20T00:41:34Z","content_type":null,"content_length":"3347","record_id":"<urn:uuid:4efa7c5b-d9c7-40b5-8b56-062c0ce3ed79>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Measure and Integration: A Concise Introduction to Real Analysis
1 History of the Subject.
1.1 History of the Idea.
1.2 Deficiencies of the Riemann Integral.
1.3 Motivation for the Lebesgue Integral.
2 Fields, Borel Fields and Measures.
2.1 Fields, Monotone Classes, and Borel Fields.
2.2 Additive Measures.
2.3 Carathéodory Outer Measure.
2.4 E. Hopf’s Extension Theorem.
3 Lebesgue Measure.
3.1 The Finite Interval [-N,N).
3.2 Measurable Sets, Borel Sets, and the Real Line.
3.3 Measure Spaces and Completions.
3.4 Semimetric Space of Measurable Sets.
3.5 Lebesgue Measure in R^n.
3.6 Jordan Measure in R^n.
4 Measurable Functions.
4.1 Measurable Functions.
4.2 Limits of Measurable Functions.
4.3 Simple Functions and Egoroff’s Theorem.
4.4 Lusin’s Theorem.
5 The Integral.
5.1 Special Simple Functions.
5.2 Extending the Domain of the Integral.
5.3 Lebesgue Dominated Convergence Theorem.
5.4 Monotone Convergence and Fatou’s Theorem.
5.5 Completeness of L^1 and the Pointwise Convergence Lemma.
5.6 Complex Valued Functions.
6 Product Measures and Fubini’s Theorem.
6.1 Product Measures.
6.2 Fubini’s Theorem.
6.3 Comparison of Lebesgue and Riemann Integrals.
7 Functions of a Real Variable.
7.1 Functions of Bounded Variation.
7.2 A Fundamental Theorem for the Lebesgue Integral.
7.3 Lebesgue’s Theorem and Vitali’s Covering Theorem.
7.4 Absolutely Continuous and Singular Functions.
8 General Countably Additive Set Functions.
8.1 Hahn Decomposition Theorem.
8.2 Radon-Nikodym Theorem.
8.3 Lebesgue Decomposition Theorem.
9. Examples of Dual Spaces from Measure Theory.
9.1 The Banach Space L^p.
9.2 The Dual of a Banach Space.
9.3 The Dual Space of L^p.
9.4 Hilbert Space, Its Dual, and L^2.
9.5 Riesz-Markov-Saks-Kakutani Theorem.
10 Translation Invariance in Real Analysis.
10.1 An Orthonormal Basis for L^2(T).
10.2 Closed Invariant Subspaces of L^2(T).
10.3 Schwartz Functions: Fourier Transform and Inversion.
10.4 Closed, Invariant Subspaces of L^2(R).
10.5 Irreducibility of L^2(R) Under Translations and Rotations.
Appendix A: The Banach-Tarski Theorem.
A.1 The Limits to Countable Additivity.
|
{"url":"http://www.maa.org/publications/maa-reviews/measure-and-integration-a-concise-introduction-to-real-analysis","timestamp":"2014-04-16T06:36:39Z","content_type":null,"content_length":"103093","record_id":"<urn:uuid:74950f0e-ab4d-4153-a720-655e65e61b23>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate Monthly Lease Payments - Page 2
Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
How to Calculate Monthly Lease Payments
Hi kyfdhx, thanks for helping out w/ the numbers =). The dealer quoted me on the residual and MF (after some pushing), but I know they're a off from what ridewithg and leasecompare have (they
have a 2010 X5 3.0 w/ 58% and 0.00175 base). I'm just glad my calculations weren't different from yours...I know this would have been our first time leasing but the dealer's 892/mo did NOT seem
right to me at all.
My guess is he was reading from the wrong sheet... but, his computer program put in the correct numbers..
I don't know the numbers for the 2010 model, but the ones you have there seem a little too good.. But, each month, the mix of incentives, money factor and residual can be all over the map, and
usually work out to about the same payment..
Another thing to remember... the dealer can quote any number he wants over the phone or e-mail, but the residual is the only one you can verify from the lease paperwork.. BMW dealers are famous
for marking up the money factor by the full 0.0004, and the acquisition fee to $925 (from $725). If the numbers don't match up, those are the first two places to look for discrepancies.
Moderator - Prices Paid, Lease Questions, SUVs
Thanks for the insight kydfx,
Actually, I confused the money factor from the ridewithg.com's base rate for the 2010 X5 :blush: . Still, even with the money factor I was quoted (which is on the HIGHER side from what I've read
here and on other bimmer forums), that sales guy or general manager goofed up enough (to their advantage) to get the monthly ~$85 more than what you and I calculated. I guess it'll be a mystery
as to how they really did it...
Thanks for your assistance though kydfx! I now have the needed confidence to calculate a monthly in the event I opt to lease in the future :shades:
How difficult is it to obtain the dealer's LEASE WORKSHEET ? I assume it's different than the Spec sheet? When I ask for it, I get the spec sheet and separate numbers for tax etc typed into an
email. What does the Lease Worksheet contain and why do dealers seem reluctant to provide one?
Hi lola225,
The dealer's LEASE WORKSHEET is a computer generated docment that contains all pertinent lease data including sell price, doc fees, bank fees, gross cap, cap reduction, adj cap, money factor,
residual, term, payment, taxes, etc. Dealers are reluctant to provide them because they assume customers will take it to another dealer and play the game "can you beat this"? If you really want
it, then they should give it to you. Otherwise, refuse to do business with them. Once a lease has been consumated, most fund providers require that the dealer submit a similar document together
with the lease agreement and other miscellaneous documents. It's different from a SPEC sheet which describes the vehicle attributes... kind of like the window sticker.
I wouldn't be too concerned about the lease worksheet or place too much credence in what the dealer is offering. Don't let the dealer control the deal. You must be in control. A good start is to
create a lease proposal. I've posted several samples on different message boards (Honda Accord, Infiniti G37, etc). Just click on my screen name and feel free to peruse my posts. These are
one-page proposals that provide all the details of the lease and are designed to save time, money, and aggravation. Once created, you can FAX/email it to the dealer and do all your negotiating
via phone/email in the comfort of your home/office.
Hope this helps.
Can you please tell me how to calculate sales tax on a lease in NJ? Is it .07 x the depreciaton or .07 x the selling price and spread over the lease term? Or just .07 thime the monthly lease
payment which includes the finance charge? Thanks so much.
Greetings jerryken!
Thanks for asking. You didn't describe your particular lease situation, if any, so I have to cover all bases. In short, the answer is none of the above. On October 1, 2005, New Jersey joined a 22
state coalition and bought into the
Streamlined Sales and Use Tax Law
. The key points governing sales tax treatment given to motor vehicle leases in New Jersey, under this law, are briefly summarized in the following document
An important provision is described at Item 12 in the above doc
"12. The Division has indicated that the tax base will be reduced by the value of a trade-in of property owned by the lessee that is accepted by the lessor as partial payment.
(a) Does this rule apply under both the original purchase price method and the total lease payments method?
(b) In determining whether the lessee is the owner of property, what is controlling (i.e. GAAP, UCC, tax treatment)? For example, a lessee trades in property subject to a finance lease. Is the
tax base reduced if the lessee is considered the owner for GAAP purposes?
Since the tax is imposed on the lessee, the trade-in credit is applicable under both calculation methods. However, in both cases, the lessor must disclose the tax base (purchase price or lease
payments), as well as the amount of sales tax due, on the paperwork provided to the lessee. As long as the property traded in was originally acquired by the lessee, it does not matter if there is
an amount owed to pay off a loan. The trade in credit is based on the amount of value allowed by the dealer/lessor against the lease."
This document also describes the two methods for computing sales tax in NJ:
(1) tax rate x manufacturer’s invoice price (Item 7) and;
(2) tax rate x total taxable lease payments; otherwise, known as the total payment
method (Item 8)
Either way, you can roll the tax (finance) into the lease (see below). More than likely, you’ll want to opt for (2- total payment method) as it is usually the cheaper of the two methods.
The best way to illustrate the sales tax calculation methodology is to use a concrete example. Because the first method is straight forward, I’ll describe the second using a hypothetical
example. What follows is somewhat long-winded, so please hang-in. Consider a lease, originating in NJ, with the following data
MSRP . 30,000
Sell Price (S)... . ... 27,000
Acq. Fee (A) . .. 600 (Acq Fees are taxable in NJ)
Trade Equity (Q)... .. (1,000) (we’ll assume financed negative equity- taxable in NJ)
Gross Cap .. 28,600
Cap reduction (D) ... 500 (assume $500 cash down- taxable in NJ)
Cap reduction- trade-in credit 4,000 (assumed trade-in allowance- not taxable in NJ)
Adjusted Cap (C) 24,100
Money factor (F) . 0.00200
Residual Factor . 0.60
Residual Value (R) . 18,000 (Residual Factor x MSRP)
Term (N months).... . 36
NJ sales Tax Rate (t) 7%
Note that the financed items, A & Q, are taxable items and are assumed to be rolled into the lease (i.e., capitalized). The $500 cash down payment (D) is also taxable (see below). However, the
entire trade-in value of $4,000 is non-taxable; regardless of the fact that a $5,000 loan balance remains outstanding producing negative equity in the amount of $1,000.
Taxable Payment = F x (C + R) + (C – R) / 36
= 0.00200 x (24,100 + 18,000) + (24,100 – 18,000) / 36
= 253.64
Total NJ Sales Tax Liability = Total payment tax + Tax on cash cap reduction
= (t x N x Taxable Payment) + (t x D)
= (0.07 x 36 x 253.64) + (0.07 x 500)
= 674.17
The taxable payment is NOT the "lease payment". It's only purpose is to compute tax liability and is, therefore, an intermediate calculation.
Now, if you wish to roll the tax into the lease, then your payment, including taxes, is
= 0.00200 x (28,100 + 674.17 + 18,000) + (28,100 + 674.17 – 18,000) / 36
= 392.83... this is your "lease payment"
Observe that 24,100 becomes 28,100 in the last calculation. This is due to the fact that the trade-in value ($4,000) is
as you owe $5,000 on the trade. The only roll that the trade-in value plays, in this example, is to compute the
taxable payment
. Beyond that, it's irrelevant and is not used to compute the lease payment (including taxes) or your regular lease payment, with or without taxes, for that matter. And so, the $4,000 was added
back in order to compute the "lease payment". Again, this is triggered by the fact that you still owe money (which the dealer pays) on your trade.
Your intitial costs, payable at lease inception, are assumed to be the the 1st payment of $392.83 plus DMV fees and dealer doc fees plus any applicable taxes on the dealer doc fee or other
miscellaneous fees.
Questions? Please let me know.
I just don't get it. I don't understand the whole leasing process or the numbers or the residuals.
Does this mean I shouldn't lease?
I've been looking at used RDXes and Lexi and have been worried about their health.
Today, I saw many cars including a Honda crossover car, that could be leased. I figure if I can pay under $400 for a NEW car that I like, Im ahead of the game. I just don't know how to get eher
without my head spinning.
To wit:
Honda Accord Crosstour EX V-6 Automatic 5Speed 2WD
$309.00 per month for 36 months. $2,299.00 total due at signing.
I suggest that you educate yourself about leasing. Edmunds offers some outstanding easy-to-understand articles on leasing at...
I'm not sure how you "figure" $400 as a threshold value for a Crossover but the danger of not knowing how a lease is structured or how to compute payments can be very costly. It's not unusual
that a vehicle can be leased for much less than their advertised specials.
Hope this helps.
If you drive a lot of miles per year you probably shouldn't lease. Over 15k per.
I have no idea what I was thinking when I computed the taxable payment in Post #29. I should have excluded the $500 cap reduction. The taxable payment should be...
0.00200 x (24,600 + 18,000) + (24,600 – 18,000) / 36 = 268.53
Therefore, the total NJ sales tax liability amounts to...
0.07 x 36 x 268.53 = 676.70
The "lease payment" is based on the following data (we'll roll the tax into the lease)...
MSRP . 30,000
Sell Price (S)... . ... 27,000
Acq. Fee (A) . .. 600.00
Trade Equity (Q)... .. (1,000)
NJ Sales Tax........................ 676.70
Gross Cap .. 29,276.70
Cap reduction (D) ... 500.00
Adjusted Cap (C) 28,776.70
Money factor (F) . 0.00200
Residual Factor . 0.60
Residual Value (R) . 18,000.00
Term (N months).... . 36
NJ sales Tax Rate (t) 7%
and is computed as follows...
Lease Payment = 0.00200 x (28,776.70 + 18,000) + (28,776.70 – 18,000) / 36
= 392.91
I think this is true to an extent, however, I am well over my mileage allowance, but as long as I stick with Honda on another lease I don't have to worry about it. I will be turning it in in the
next month or two and if I am incorrect about that I would sure like to know. They have told me all I have to do is bring it in and drive out in a new lease, no matter the mileage.
Well... technically, it's true.... but, you'll still be paying for the over-mileage charges.. If they are turning your car back into Honda Finance, you'll either have to pay the charges, or
they'll just roll them into your new lease payment (making it higher than it would be otherwise)...
If they are buying your car from Honda Finance, then that extra mileage will certainly make your car worth less to them, and that difference will be rolled into your new lease payment..
So, yeah.... you bring it in, and drive out with a new lease, no matter your mileage.... but, it will be reflected in your new lease payment.
No free lunch, I'm afraid...
Now, I don't necessarily agree that high-mileage drivers shouldn't lease... Properly constructed, a high-mileage lease can be the cheapest way to go, especially if you would trade every three
years, anyway..
Moderator - Prices Paid, Lease Questions, SUVs
You can avoid any over mileage charges and damage by purchasing the car instead of turning it in.
Oh wow, they mislead me I do believe. Amazing I am going on my 4th lease and did not fully understand this. Out of those leases I have never traded one lease in with the same auto maker for
another. I haven't worried about mileage because I was under the impression that if I got another Honda it didn't matter.
I am pretty sure the dealer will buy it for resale, so now it is a matter of negotiating the trade in value? I don't believe Honda will let me sell the vehicle to a 3rd party either? I guess if
they don't give me the payoff amount I can just go turn it in myself, pay the mileage and go lease another vehicle. A little incentive if they really want the car.
This changes things a bit.
Does anyone have an idea what the current residual and money factors are for a 2011 Sorento in NJ?
You may want to try the Sorento message board...
and re-post.
I'm sure that one of the moderators or, a knowledgeable poster, will be happy to help you. This message board is reserved for questions concerning lease payment calculations for which you're not
likely to get a timely response to your question.
I created a new discussion for Sorento leasing for you (Kia leasing used to be non-existent, but it seems to have revived)..
You can find it
Moderator - Prices Paid, Lease Questions, SUVs
I’m planning to lease a 2010 Toyota Prius II, & have done a lot of research—all of it on 12k miles a year. However, as I currently drive under 10,000 miles a year, I realize I should
get a lease for 10,000 miles a year. duh. (Both my lease & insurance would be cheaper, & it’s highly unlikely I’ll be driving 12k miles a year.) Toyota is currently offering specials,
including .00020 money factor. Here’s my dealer’s offer: 0 down, & $200 a month for 36 months re a 12,000 miles a year lease. (This includes taxes, bank fee, destination fee, & DMV
fee--& is way lower than what other dealers have been offering.) My question is: If I lower my mileage down to 10k instead of 12k a year, what should my monthly payment be? (How much less should
I pay a month?) Thank you so much! I appreciate your response. Best, Artwheels
Greetings artwheels!
It's rare times like this that I wish that the edmunds website supported mathematical fonts.
Using differential analysis, the formula for the change in payment, given a percentage change in the residual, with all other variables held constant is...
%P = (f - 1/N)S(%r) **
%P = monetary change in payment
f = money factor
N = term
S = Adj MSRP upon which the residual value is calculated
%r = percentage points change in the residual factor
Consider the following hypothetical data...
Adj. MSRP = 25,000 (some options may only be partially residualized or not
residualized at all which lowers the MSRP hence "Adj"
Money Factor = 0.00200
Term = 36 months
Net Cap = 20,000
Res. Factor = 60%
Res. Value = Res. Factor x Adj. MSRP = 0.60 x 25,000 = 15,000 for 12K miles
Using the money factor formula, the above data yields a payment of...
P = 0.00200 x (20,000 + 15,000) + (20,000 - 15,000) / 36
= 208.89
But, if we lower the mileage to 10K, the residual factor may increase from 60% to 62%... a 2 percentage points increase (+2% or +0.02)... the residual value increases to 15,500 (0.62 x 25,000)
and so, the new lower payment is...
*P = 0.00200 x (20,000 + 15,500) + (20,000 - 15,500) / 36
= 196.00
Observe that the payment dropped by 12.89 (i.e., -12.89).
We can easily calculate this payment change (-12.89) by using one formula instead of two and circumnavigate a lot of work just by using the above forrmula...
%P = (f - 1/N)S(%r) **
= (0.00200 - 1/36)(25,000)(+0.02) (the positive sign indicates an increase)
= -12.89 (the negative sign indicates a decrease)
This formula has the advantage of quickly determining how your payment will change (up (+) of down (-)) and, by how much.
** Toyota is notorious for not residualizing the destination charge and floor
mats. I'm not sure if they deploy residual factors or not. If not, then they use flat dollar amounts instead. In this case, S(%r) is simply the change (up (+), down (-)) in the residual dollar
Questions? Please let me know.
All the best...
|
{"url":"http://forums.edmunds.com/discussion/12602/general/x/how-to-calculate-monthly-lease-payments/p2","timestamp":"2014-04-16T11:39:10Z","content_type":null,"content_length":"152297","record_id":"<urn:uuid:7b3347bf-baa2-4f48-a5b3-84fbabdb8c52>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Time Difference
This post is based on a discussion about Progress Bars of Life, where I was foolish enough to claim that printing a text string representing the difference between two times could not be that hard in
C. It is not hard, but turned out not to be entirely trivial either.
The problem we will consider is; given two tm structs, compute the difference in time between them, in such a way that we can easily format a string that gives a textual representation of it. We want
years, months, days, hours, minutes, seconds.
The first idea you might get is to use difftime() to get the difference in seconds between the two times, and then compute the quantities we want by simple arithmetic. So,
start_time = mktime(&start);
end_time = mktime(&end);
secdiff = difftime(end_time, start_time);
years = secdiff / SEC_IN_YEAR;
months = (secdiff % SEC_IN_YEAR) / SEC_IN_MONTH;
days = (secdiff % SEC_IN_MONTH) / SEC_IN_DAY;
hours = (secdiff % SEC_IN_DAY) / SEC_IN_HOUR;
minutes = (secdiff % SEC_IN_HOUR) / SEC_IN_MINUTE;
seconds = (secdiff % SEC_IN_MINUTE);
You often see something like this in timing code — it works great for showing elapsed time in seconds, minutes, even hours. Do you see any problems with this approach?
How many seconds are there in a month? That depends on which month of course. And even worse, it also depends on which year, due to leap years.
For instance, the difference between Jan 31st and Mar 1st is sometimes 29 days, sometimes 30 days, but always 1 month 1 day. The difference between Jul 2nd and Aug 1st is 30 days, but not a month.
So once the difference in time exceeds hours, we somehow need to use the additional information about where this period of time starts and ends, in order to get answers that make sense to humans.
Luckily this information is already available in the tm structs.
If we accept the convention that the time between the 1st of a month and the 1st of the following is a “month” regardless of how many days are between, then one approach is to use elementary school
subtraction on the tm structs. We subtract one field at a time, starting at the lowest, borrowing from the next one if required.
For instance, we get the difference in seconds by subtracting the tm_sec fields. If the result is negative, we have to borrow a minute.
/* difference in the seconds */
seconds = end.tm_sec - start.tm_sec;
/* if negative, we have to borrow a minute */
if (seconds < 0)
seconds = 60 + seconds;
min_borrow = 1;
But how do we borrow a month? Since we already know the number of days in the end month ( end.tm_mday), and all months in between are full calendar months, we only need to figure out how many days
are in the start month.
/* returns 1 if y is a leap year, 0 otherwise */
static int leap(int y)
return (y % 400 == 0 || (y % 4 == 0 && y % 100 != 0)) ? 1 : 0;
const int md[] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
/* ... */
/* difference in the days of month */
days = end.tm_mday - start.tm_mday - day_borrow;
/* if negative, we have to borrow a month */
if (days < 0)
int start_mon_days;
/* get number of days in start month */
start_mon_days = md[start.tm_mon];
/* if february, correct for leap year */
if (start.tm_mon == 1) start_mon_days += leap(1900 + start.tm_year);
days = start_mon_days + days;
mon_borrow = 1;
So, if we borrow a month, then the difference in the days of the month is the number of days left in the start month, plus the number of days in the end month, minus any borrow from the calculation
of hours of the day. This is the same as the total number of days in the start month plus the negative difference we already computed.
This method gives the desired results on examples like the ones given above — months are what we perceive as months on the calendar.
There is one more detail we may have to consider — how many hours are there in a day? Usually 24, but due to daylight saving time (summer time) it might be 23 or 25.
As an example, consider the difference between 01:45 and 03:15, which is 1 hour 30 minutes in most cases. But the night DST starts, it will be only 30 minutes (from 01:45 STD to 03:15 DST). Even
worse, the night DST ends, it will first be 2 hours 30 minutes (from 01:45 DST to 03:15 STD), then 1 hour 30 minutes (from 01:45 STD to 03:15 STD) as the clock runs through the extra hour.
Much like the relationship between months and days, I think it makes sense to interpret the difference between the same time on consecutive dates as a “day”, no matter if it takes 23, 24, or 25
actual hours. Once the difference goes below 1 day however, it is more logical to use the actual difference in clock time.
So we would like the difference between 12:00 STD the day before DST starts, and 12:30 DST the day DST starts to be 1 day 30 minutes even though it only takes 23 hours 30 minutes. And we want the
difference between 12:30 DST the day before DST ends, and 12:00 STD the day DST ends to be 24 hours 30 minutes but not a day.
Handling of DST is implementation defined in C, so to keep things simple, we can let mktime() handle this for us. We just have to adjust the hours of the day in case the difference is below one day.
2 /* if difference is below one calendar day and there was a DST difference
3 we adjust hour difference to clock time */
4 if (years + months + days == 0 && hours != (secdiff % SEC_IN_DAY) / SEC_IN_HOUR)
5 {
6 int oldhours = hours;
7 hours = (secdiff % SEC_IN_DAY) / SEC_IN_HOUR;
8 /* handle the special case where DST increased hours past 24 */
9 if (oldhours - hours > 11) hours += 24;
10 }
In many cases you can safely ignore these details, and I am sure there are better ways to handle it — but I hope this has given some indication of how it may not be quite as simple as it appears. The
code I wrote while playing around with this problem is available here.
One thought on “Time Difference”
1. Very entertaining read. It’s also interesting to think about options for displaying the duration with different levels of detail depending on the time frame — like when to round and how much
detail to provide based on the duration.
You must be logged in to post a comment.
|
{"url":"http://www.hardtoc.com/archives/261","timestamp":"2014-04-16T04:11:45Z","content_type":null,"content_length":"59429","record_id":"<urn:uuid:58b3aa90-1802-47a8-bc6f-928c337b856c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Re: To K-12 teachers here: Another enjoyable post from Dan Meyer
Replies: 1 Last Post: Aug 28, 2013 7:24 AM
Messages: [ Previous | Next ]
GS Chandy Re: To K-12 teachers here: Another enjoyable post from Dan Meyer
Posted: Aug 28, 2013 12:57 AM
Posts: 5,944
From: Hyderabad, I find I need to modify and add to my post dt. Aug 27, 2013 10:13 PM (http://mathforum.org/kb/thread.jspa?threadID=2592188).
India While re-asserting my claims that Robert Hansen (RH) has systematically been wrongly crediting some Germanic 'Kahn' or the other for the works of the proud Pathan 'Salman Khan'
Registered: 9/29/05 and the 'Khan Academy' in the teaching of ALGEBRA (thereby causing Mr Khan some degree of ANGST), I believe that RH is in essence correct in his statement (at his post dt.Aug 27,
2013 10:33 PM - http://mathforum.org/kb/message.jspa?messageID=9236330) to this effect:
>> Algebra is not about predicting and measuring. It is
>> about symbolic reasoning, period. These "busy work"
>> examples are no substitute for building symbolic
>> reasoning. I am open to an amount of operational
>> stuff in the class. But not to the point where there
>> isn't any symbolic reasoning left. I think my rule of
>> thumb where 25% of the class can be operational while
>> the other 75% should be symbolic reasoning is a
>> pretty fair and decent standard.
RH's 'rule of thumb' of 25% 'operational' Vs. 75% 'symbolic reasoning' seems to be more or less valid. (But this ratio already contains a major difficulty, which I touch on
I don't know that Dan Meyer is doing it all wrong (as RH claims he is) or that Salman Khan is doing it all correct and perfectly A-OK - but I do believe that in general RH is
correct in his claim that "algebra is symbolic reasoning": in any case, that is the impression I've always had.
Underlying questions, to my mind, are:
Why do so many school students not 'get' algebra at all?
How many out of each hundred? I don't know - but from my memories of school I believe it is quite a sizable number.
The issues to be resolved in the case of algebra lie in the questions:
How to help the student build his/her skills to reason symbolically (algebraically)? (I believe it is possible to enable *most* students to do this - at least a great many more
than the educational system is currently succeeding with).
Is it possible to do this for *most* students - or is this something beyond the abilities of *most* students? (I believe this is NOT beyond the capabilities of *most* students).
I believe that the 'math educational system' does need to find out WHY so many students fail to understand how to reason symbolically - and then to create practical means to
overcome this grave deficiency in the process of teaching a very important part of math - in fact, I claim this should be considered a part of 'general critical reasoning'.
I believe that this may be the problem what Dan Meyer is trying to 'fix'. He may not yet have 'resolved' that problem. Is he on the right track? I do not know: RH claims he is
But neither has Salman Khan resolved that problem! The problem Salman Khan has resolved is something else entirely!
Salman Khan has, I believe, more or less satisfactorily resolved the issue of how to help students improve their proficiency who already 'get the underlying idea of algebra'.
(This is my understanding, from only a very cursory glance at the Khan Academy works).
Salman Khan has not found out how to help students who do not at all 'get the idea of algebra' to resolve that very serious difficulty.
I believe that the answer goes back a considerable way - in that many students are turned off math right to begin with: they have been turned off LONG before they ever got to
'algebra'! That ratio of 25% operational to 75% symbolic (mentioned below) is of little use to students who have already been totally turned off the whole idea of math (as
difficult/ boring/ fearsome/ loathsome).
THAT is the real problem the 'math educational system' has to learn to fix (or otherwise resolve) - and it doesn't seem to have the slightest clue of how to go about doing it.
Robert Hansen may like to observe that I am NOT accusing teachers of making students fear/loathe math (though they may play some part in this sad scenario): I AM accusing the
entire 'math educational system' of doing that.
I don't know if RH will be able to understand the distinction between what he has been suggesting I claim and what I actually do claim. The underlying difficulty is, RH has this
habit of *arguing from falsehoods*, so one does not really know how seriously any of his statements should be taken.
*Here are a couple of RH's false arguments that I've encountered:
1. OPMS is just petty list-making (words/ideas to that effect).
2. GSC accuses teachers of making students fear or loathe math.
3. More falsehoods to follow?
("Still Shoveling! Not PUSHING!")
Date Subject Author
8/28/13 Re: To K-12 teachers here: Another enjoyable post from Dan Meyer GS Chandy
8/28/13 Re: To K-12 teachers here: Another enjoyable post from Dan Meyer Robert Hansen
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2592263","timestamp":"2014-04-17T21:26:44Z","content_type":null,"content_length":"22300","record_id":"<urn:uuid:926c873e-b8b0-4d0f-8a60-8e00af8a9ec5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Girls Inherit Math Phobia From Female Teachers
Research conducted at the University of Chicago shows that female elementary school teachers who are anxious about math often pass the phobia on to female students. Published in a recent edition of
PNAS, the findings are the product of a year-long study on 17 first- and second-grade teachers and 52 boys and 65 girls who were their students. The researchers found that boys' math performance was
not related to their teacher's math anxiety while girls' math achievement was affected.
To determine the impact of teachers' mathematics anxiety on students, the team assessed teachers' anxiety about math. Then, at both the beginning and end of the school year, the research team also
tested the students' level of mathematics achievement and the gender stereotypes the students held.
The students were told gender neutral stories about students who were good at mathematics and good at reading and then asked to draw a picture of a student who was good at mathematics and one that
was good at reading. Researchers were interested in examining the genders of the drawings that children produced for each story.
At the beginning of the school year, student math achievement was unrelated to teacher math anxiety in both boys and girls. By the end of the school year, however, the more anxious teachers were
about math, the more likely girls, but not boys, were to endorse the view that "boys are good at math and girls are good at reading." Girls who accepted this stereotype did significantly worse on
math achievement measures at the end of the school year than girls who did not accept the stereotype and than boys overall.
Girls who confirmed a belief that boys are better in math than girls scored six points lower in math achievement than did boys or girls who had not developed a belief in the stereotype (102 for the
girls who accepted the stereotype, versus 108 for the other students).
Other research has shown that elementary school children are highly influenced by the attitudes of adults and that this relationship is strongest for students and adults of the same gender. "Thus it
may be that first- and second-grade girls are more likely to be influenced by their teachers' anxieties than their male classmates, because most early elementary school teachers are female and the
high levels of math anxiety in this teacher population confirm a societal stereotype about girls' math ability," Beilock said.
The authors suggest that elementary teacher preparation programs could be strengthened by requiring more mathematics preparation for future teachers as well as by addressing issues of math attitudes
and anxiety in these teachers.
More than 90 percent of elementary school teachers in the country are women and they are able to get their teaching certificates with very little mathematics preparation, according to the National
Survey of Science and Mathematics Education. Other research shows that elementary education majors have the highest rate of mathematics anxiety of any college major.
The potential of these teachers to impact girls' performance by transmitting their own anxiety about mathematics has important consequences. Teachers' anxiety might undermine female students'
confidence in learning mathematics throughout their years of schooling and also decrease their performance in other subjects, such as science and engineering, which are dependent on mathematical
|
{"url":"http://www.science20.com/news_articles/girls_inherit_math_phobia_female_teachers","timestamp":"2014-04-18T23:30:56Z","content_type":null,"content_length":"43440","record_id":"<urn:uuid:1c5e3bd5-0106-4186-818c-04de1d75cdfa>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: An oo List Of REALS in ZFC
Replies: 10 Last Post: Mar 4, 2013 1:46 PM
Messages: [ Previous | Next ]
george Re: An oo List Of REALS in ZFC
Posted: Mar 3, 2013 12:43 AM
Posts: 800
Registered: 8/5/08 On Mar 2, 9:47 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
> That "An ANTI-DIAGAONAL is not any ROW"
> is a Structural Postulate of 2 dimensional arrays
Exactly. And therefore, it is TRUE OF ALL of them.
> and has no bearing on the DATA in the list.
SO WHAT if it "has no bearing on the DATA"?? It is STILL NOT
any row, and therefore it IS NOT ON the list!
> It's merely DIGITn ~= ~DIGITn
Exactly, and that IS NOT ON the list!
> State your anti-diagonal function and SEE what strings are uncountable on this LIST.
*NO* INDIVIDUAL string is uncountable, DUMBASS!
The anti-diagonal is not uncountable either!
There is only ONE of them if you define an anti-diagonal FUNCTION
of course many other functions are definable, unless you use base 2).
ONE is as countable as it gets!
Date Subject Author
3/2/13 Re: An oo List Of REALS in ZFC Graham Cooper
3/3/13 Re: An oo List Of REALS in ZFC george
3/3/13 Re: An oo List Of REALS in ZFC Graham Cooper
3/3/13 Re: An oo List Of REALS in ZFC Graham Cooper
3/3/13 Re: An oo List Of REALS in ZFC Graham Cooper
3/4/13 Re: An oo List Of REALS in ZFC Graham Cooper
3/4/13 Re: An oo List Of REALS in ZFC Graham Cooper
3/3/13 Re: An oo List Of REALS in ZFC Virgil
3/3/13 Re: An oo List Of REALS in ZFC Graham Cooper
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2438288&messageID=8492102","timestamp":"2014-04-17T10:01:25Z","content_type":null,"content_length":"26239","record_id":"<urn:uuid:5df7de34-5a01-478b-8c0c-9390273f61fc>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
let his soul rest in peace alex we will always remember you.....:'( R.I.Palexpr787
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50aa8199e4b064039cbd44cb","timestamp":"2014-04-17T18:41:40Z","content_type":null,"content_length":"120001","record_id":"<urn:uuid:d5a4d3ce-c6b2-4f58-87ef-9f0a1fe4b82f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The four-color problem
The four-color problem: assaults and conquest
McGraw-Hill International Book Co.
, 1977 -
217 pages
Review: The Four-Color Problem: Assaults and Conquest
User Review - Neil - Goodreads
It would be enough to have the excellent history of the conjecture and explication of Appel & Haken's theorem, but the meaty discussion of outstanding questions in graph coloring is what really makes
this volume shine. The work on the roots of chromials is especially great. Read full review
Historical Setting 3
Map Coloring 21
Solution of the FourColor Problem 52
7 other sections not shown
Bibliographic information
|
{"url":"http://books.google.com/books?id=ez_vAAAAMAAJ&q=directed+graph&dq=related:ISBN0133634655&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-21T05:33:06Z","content_type":null,"content_length":"112450","record_id":"<urn:uuid:845efde7-1323-4507-8a08-0737aa44930e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A regular problem
May 21st 2012, 12:05 AM #1
May 2012
A regular problem
I need help finding an equation to solve the problem found here: aha! : paul scott : a regular problem
i have worked it out using trail and error and stuffing around with match sticks but i have know idea how to make an equation to solve it.
Do i need to use simultaneous equations?
Thanks in advance,
Re: A regular problem
The key is that each side of a figure must be an integer number of matches. And, of course, with "triangle", "square", "pentagon", "hexagon", "octagon", there are 10 pairs. To make "triangle and
square" from n matches we must have 3i+ 4j= n, for some integers i and j. To make "triangle and pentagon" we must have 3u+ 5v= n for integers u and v. To make "triangle and hexagon" we must have
3p+ 6q= n, etc.
Re: A regular problem
I just have a homework question I've been stuck on and hoping you guys can help me out a bit.
COnsider the number 48. If you add 1 to it, you get 49, which is a perfect square. If you add 1 to its (1/2), you get 25, which is also a perfect square. Please find the next 2 numbers with the
same properties. like 48+1=49 (perfect square)
48/2=24, 24+1=25 (perfect square)
Thank you so much!
Re: A regular problem
thanks for that hallsofive i got it all figured out
May 22nd 2012, 11:12 AM #2
MHF Contributor
Apr 2005
May 24th 2012, 04:55 PM #3
May 2012
May 26th 2012, 05:33 PM #4
May 2012
|
{"url":"http://mathhelpforum.com/algebra/199049-regular-problem.html","timestamp":"2014-04-20T03:16:05Z","content_type":null,"content_length":"37832","record_id":"<urn:uuid:49642bf6-5c80-4229-bcdc-14227de666d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exponential function
From New World Encyclopedia
The exponential function is one of the most important functions in mathematics. For a variable x, this function is written as exp(x) or e^x, where e is a mathematical constant, the base of the
natural logarithm, which equals approximately 2.718281828, and is also known as Euler's number. Here, e is called the base and x is called the exponent. In a more general form, an exponential
function can be written as a^x, where a is a constant and x is a variable.
The graph of y = e^x is shown on the right. The graph is always positive (above the x axis) and increases from left to right. It never touches the x axis, although it gets extremely close to it. In
other words, the x axis is a horizontal asymptote to the graph. Its inverse function, the logarithm, $\log_e(x) = y \,$, is defined for all positive x.
Sometimes, especially in the sciences, the term exponential function is more generally used for functions of the form ka^x, where a is any positive real number not equal to one.
In general, the variable x can be any real or complex number, or even an entirely different kind of mathematical object.
Some applications of the exponential function include modeling growth in populations, economic changes, fatigue of materials, and radioactive decay.
Most simply, exponential functions multiply at a constant rate. For example the population of a bacterial culture that doubles every 20 minutes can be expressed (approximatively, as this is not
really a continuous problem) as an exponential, as can the value of a car that decreases by 10 percent per year.
Using the natural logarithm, one can define more general exponential functions. The function
$\,\!\, a^x=(e^{\ln a})^x=e^{x \ln a}$
defined for all a > 0, and all real numbers x, is called the exponential function with base a. Note that this definition of $\, a^x$ rests on the previously established existence of the function $\,
e^x$, defined for all real numbers.
Exponential functions "translate between addition and multiplication" as is expressed in the first three and the fifth of the following exponential laws:
$\,\!\, a^0 = 1$
$\,\!\, a^1 = a$
$\,\!\, a^{x + y} = a^x a^y$
$\,\!\, a^{x y} = \left( a^x \right)^y$
$\,\!\, {1 \over a^x} = \left({1 \over a}\right)^x = a^{-x}$
$\,\!\, a^x b^x = (a b)^x$
These are valid for all positive real numbers a and b and all real numbers x and y. Expressions involving fractions and roots can often be simplified using exponential notation:
$\,{1 \over a} = a^{-1}$
and, for any a > 0, real number b, and integer n > 1:
$\,\sqrt[n]{a^b} = \left(\sqrt[n]{a}\right)^b = a^{b/n}.$
Formal definition
The exponential function e^x can be defined in a variety of equivalent ways, as an infinite series. In particular, it may be defined by a power series:
$e^x = \sum_{n = 0}^{\infty} {x^n \over n!} = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots$
or as the limit of a sequence:
$e^x = \lim_{n \to \infty} \left( 1 + {x \over n} \right)^n.$
In these definitions, n! stands for the factorial of n, and x can be any real number, complex number, element of a Banach algebra (for example, a square matrix), or member of the field of p-adic
Derivatives and differential equations
The importance of exponential functions in mathematics and the sciences stems mainly from properties of their derivatives. In particular,
$\,{d \over dx} e^x = e^x$
That is, e^x is its own derivative. Functions of the form $\,Ke^x$ for constant K are the only functions with that property. (This follows from the Picard-Lindelöf theorem, with $\,y(t) = e^t, y(0)=
K$ and $\,f(t,y(t)) = y(t)$.) Other ways of saying the same thing include:
• The slope of the graph at any point is the height of the function at that point.
• The rate of increase of the function at x is equal to the value of the function at x.
• The function solves the differential equation $\,y'=y$.
• exp is a fixed point of derivative as a functional
In fact, many differential equations give rise to exponential functions, including the Schrödinger equation and the Laplace's equation as well as the equations for simple harmonic motion.
For exponential functions with other bases:
$\,{d \over dx} a^x = (\ln a) a^x$
Thus any exponential function is a constant multiple of its own derivative.
If a variable's growth or decay rate is proportional to its size—as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay
—then the variable can be written as a constant times an exponential function of time.
Furthermore for any differentiable function f(x), we find, by the chain rule:
$\,{d \over dx} e^{f(x)} = f'(x)e^{f(x)}$.
Double exponential function
The term double exponential function can have two meanings:
• a function with two exponential terms, with different exponents
• a function $\,f(x) = a^{a^x}$; this grows even faster than an exponential function; for example, if a = 10: f(−1) = 1.26, f(0) = 10, f(1) = 10^10, f(2) = 10^100 = googol, ..., f(100) =
Factorials grow faster than exponential functions, but slower than double-exponential functions. Fermat numbers, generated by $\,F(m) = 2^{2^m} + 1$ and double Mersenne numbers generated by $\,MM(p)
= 2^{(2^p-1)}-1$ are examples of double exponential functions.
See also
• Carico, Charles C. 1974. Exponential and logarithmic functions (Wadsworth precalculus mathematics series). Belmont, CA: Wadsworth Pub. Co. ISBN 0534003141.
• Fried, H. M. 2002. Green's Functions and Ordered Exponentials. Cambridge, UK: Cambridge University Press. ISBN 0521443903.
• Konyagin, Sergei, and Igor Shparlinski. 1999. Character Sums with Exponential Functions and their Applications. Cambridge, UK: Cambridge University Press. ISBN 0521642639.
External links
All links retrieved October 11, 2013.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons
CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia
contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by
wikipedians is accessible to researchers here:
Note: Some restrictions may apply to use of individual images which are separately licensed.
Research begins here...
|
{"url":"http://www.newworldencyclopedia.org/entry/Exponential_function","timestamp":"2014-04-20T10:47:57Z","content_type":null,"content_length":"30900","record_id":"<urn:uuid:44b38101-b99f-416b-8336-0f6f932c10d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of Walter_A._Shewhart
Walter Andrew Shewhart
(pronounced like "Shoe-heart",
March 18
March 11
) was an American
, sometimes known as the
father of statistical quality control
W. Edwards Deming said of him:
As a statistician, he was, like so many of the rest of us, self-taught, on a good background of physics and mathematics.
Early life and education
Born in New Canton, Illinois to Anton and Esta Barney Shewhart, he attended the University of Illinois before being awarded his doctorate in physics from the University of California, Berkeley in
Work on industrial quality
Bell Telephone’s engineers had been working to improve the reliability of their transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to
reduce the frequency of failures and repairs. When Dr. Shewhart joined the Western Electric Company Inspection Engineering Department at the Hawthorne Works in 1918, industrial quality was limited to
inspecting finished products and removing defective items. That all changed on May 16, 1924. Dr. Shewhart's boss, George D Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a
page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and
followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control. Shewhart's work pointed out the importance of reducing
variation in a manufacturing process and the understanding that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality.
Shewhart framed the problem in terms of assignable-cause and chance-cause variation and introduced the control chart as a tool for distinguishing between the two. Shewhart stressed that bringing a
production process into a state of statistical control, where there is only chance-cause variation, and keeping it in control, is necessary to predict future output and to manage a process
economically. Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical
statistical theories, he understood data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that
observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some
processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.
Shewhart worked to advance the thinking at Bell Telephone Laboratories from their foundation in 1925 until his retirement in 1956, publishing a series of papers in the Bell System Technical Journal.
His work was summarized in his book Economic Control of Quality of Manufactured Product (1931).
Shewhart’s charts were adopted by the American Society for Testing and Materials (ASTM) in 1933 and advocated to improve production during World War II in American War Standards Z1.1-1941, Z1.2-1941
and Z1.3-1942.
Later work
From the late 1930s onwards, Shewhart's interests expanded out from industrial quality to wider concerns in science and statistical inference. The title of his second book Statistical Method from the
Viewpoint of Quality Control (1939) asks the audacious question: What can statistical practice, and science in general, learn from the experience of industrial quality control?
Shewhart's approach to statistics was radically different from that of many of his contemporaries. He possessed a strong operationalist outlook, largely absorbed from the writings of pragmatist
philosopher C. I. Lewis, and this influenced his statistical practice. In particular, he had read Lewis's Mind and the World Order many times. Though he lectured in England in 1932 under the
sponsorship of Karl Pearson (another committed operationalist) his ideas attracted little enthusiasm within the English statistical tradition. The British Standards nominally based on his work, in
fact, diverge on serious philosophical and methodological issues from his practice.
His more conventional work led him to formulate the statistical idea of tolerance intervals and to propose his data presentation rules, which are listed below:
1. Data has no meaning apart from its context.
2. Data contains both signal and noise. To be able to extract information, one must separate the signal from the noise within the data.
Walter Shewhart visited India in 1947-48 under the sponsorship of P. C. Mahalanobis of the Indian Statistical Institute. Shewhart toured the country, held conferences and stimulated interest in
statistical quality control among Indian industrialists.
He died at Troy Hills, New Jersey in 1967.
In 1938 his work came to the attention of physicists W. Edwards Deming and Raymond T. Birge. The two had been deeply intrigued by the issue of measurement error in science and had published a
landmark paper in Reviews of Modern Physics in 1934. On reading of Shewhart's insights, they wrote to the journal to wholly recast their approach in the terms that Shewhart advocated.
The encounter began a long collaboration between Shewhart and Deming that involved work on productivity during World War II and Deming's championing of Shewhart's ideas in Japan from 1950 onwards.
Deming developed some of Shewhart's methodological proposals around scientific inference and named his synthesis the Shewhart cycle.
Achievements and honours
In his obituary for the American Statistical Association, Deming wrote of Shewhart:
As a man, he was gentle, genteel, never ruffled, never off his dignity. He knew disappointment and frustration, through failure of many writers in mathematical statistics to understand his point of
He was founding editor of the Wiley Series in Mathematical Statistics, a role that he maintained for twenty years, always championing freedom of speech and confident to publish views at variance with
his own.
His honours included:
Both pure and applied science have gradually pushed further and further the requirements for accuracy and precision. However, applied science, particularly in the mass production of interchangeable
parts, is even more exacting than pure science in certain matters of accuracy and precision.
Progress in modifying our concept of control has been and will be comparatively slow. In the first place, it requires the application of certain modern physical concepts; and in the second place it
requires the application of statistical methods which up to the present time have been for the most part left undisturbed in the journal in which they appeared.
Shewhart’s propositions
1. All chance systems of causes are not alike in the sense that they enable us to predict the future in terms of the past.
2. Constant systems of chance causes do exist in nature.
3. Assignable causes of variation may be found and eliminated.
Based upon evidence such as already presented, it appears feasible to set up criteria by which to determine when assignable causes of variation in quality have been eliminated so that the product may
then be considered to be controlled within limits. This state of control appears to be, in general, a kind of limit to which we may expect to go economically in finding and removing causes of
variability without changing a major portion of the manufacturing process as, for example, would be involved in the substitution of new materials or designs.
The definition of random in terms of a physical operation is notoriously without effect on the mathematical operations of statistical theory because so far as these mathematical operations are
concerned random is purely and simply an undefined term. The formal and abstract mathematical theory has an independent and sometimes lonely existence of its own. But when an undefined mathematical
term such as random is given a definite operational meaning in physical terms, it takes on empirical and practical significance. Every mathematical theorem involving this mathematically undefined
concept can then be given the following predictive form: If you do so and so, then such and such will happen.
Every sentence in order to have definite scientific meaning must be practically or at least theoretically verifiable as either true or false upon the basis of experimental measurements either
practically or theoretically obtainable by carrying out a definite and previously specified operation in the future. The meaning of such a sentence is the method of its verification.
In other words, the fact that the criterion we happen to use has a fine ancestry of highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that
it works.
Presentation of Data depends on the intended actions
Rule 1. Original data should be presented in a way that will preserve the evidence in the original data for all the predictions assumed to be useful.
Rule 2. Any summary of a distribution of numbers in terms of symmetric functions should not give an objective degree of belief in any one of the inferences or predictions to be made therefrom that
would cause human action significantly different from what this action would be if the original distributions had been taken as evidence.
See also
• W. Edwards Deming (1967) "Walter A. Shewhart, 1891-1967," American Statistician 21: 39-40.
• Bayart, D. (2001) Walter Andrew Shewhart, Statisticians of the Centuries (ed. C. C. Heyde and E. Seneta) pp. 398-401. New York: Springer.
• ------, 2005, "Economic control of quality of manufactured product" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 926-35.
• Fagen, M D, ed. (1975) A History of Engineering and Science in the Bell System: The Early Years (1875-1925).
• ------, ed. (1978) A History of Engineering and Science in the Bell System: National Service in War and Peace (1925-1975) ISBN 0-932764-00-2
• Wheeler, Donald J. (1999). Understanding Variation: The Key to Managing Chaos, 2nd ed. SPC Press, Inc. ISBN 0-945320-53-1.
External links
|
{"url":"http://www.reference.com/browse/wiki/Walter_A._Shewhart","timestamp":"2014-04-16T21:17:27Z","content_type":null,"content_length":"91190","record_id":"<urn:uuid:c7d7f024-ce54-4979-9e93-ab224718e63a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Strongly Complete Profinite Groups.
up vote 0 down vote favorite
I've been reading about profinite groups and have encountered the notion of strong completeness. I.e. that a profinite group $G$ is strongly complete if it is isomorphic to it's profinite completion
or equivalently, if every subgroup of finite index is open. My problem is that I am not understanding why these conditions are equivalent. I cannot find a reference for this fact; every mention of it
I find states that this equivalence is "obvious." I believe the equivalence stems from the fact that if all subgroups of finite index of $G$ are open then the set of subgroups of finite index forms a
fundamental system of open neighborhoods of $1$ in $G$ which allows one to reconstruct the topology. I would appreciate any help understanding this, or any references on this fact.
topological-groups profinite-groups
add comment
1 Answer
active oldest votes
The profinite completion is the inverse limit of all quotients by finite index normal subgroups. Any profinite group is the inverse limit of its quotients by open normal subgroups. Since
open normal subgroups have finite index a profinite group is strongly complete iff the open normal subgroups are cofinal among finite index notmal subgroups. But this is equivalent to all
up vote 2 finite index subgroups are open.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged topological-groups profinite-groups or ask your own question.
|
{"url":"http://mathoverflow.net/questions/122465/strongly-complete-profinite-groups","timestamp":"2014-04-17T15:54:58Z","content_type":null,"content_length":"49193","record_id":"<urn:uuid:47d5cba5-336a-40a8-bb65-4a5d815f4ed7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fields Medalists / Nevanlinna Price Winner
Fields Medalists / Nevanlinna Price Winner 1998
The winner of the Fields Medals are
Richard E. Borcherds (Cambridge University; Kac-Moody algebras, automorphic forms),
W. Timothy Gowers (Cambridge University; Banach space theory, combinatorics),
Maxim Kontsevich (IHES Bures-sur-Yvette; mathematical physics, algebraic geometry and topology),
Curtis T. McMullen (Harvard University; complex dynamics, hyperbolic geometry).
The winner of the Nevanlinna prize is
Peter W. Shor (AT&T Labs Florham Park,New Jersey; quantum computation, computational geometry).
Special Tribute to Andrew Wiles
Andrew J. Wiles (Princeton University)
|
{"url":"http://www.mathunion.org/general/prizes/nevanlinna/prize-winners/o/General/Prizes/Nevanlinna/1998/","timestamp":"2014-04-20T13:30:53Z","content_type":null,"content_length":"2617","record_id":"<urn:uuid:35dfd3b6-b35f-4e0a-89bc-b2ab567cb1f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computed Tomography Method With Helical Relative Movement And Conical Beam Bundle
Patent application title: Computed Tomography Method With Helical Relative Movement And Conical Beam Bundle
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
The invention relates to a
computed tomography method
in which a periodically moving object is irradiated by a conical beam bundle. An nPi-relative movement is generated between a radiation source, which generates the conical beam bundle, and the
object. During the nPi-relative movement, measured values are acquired, which depend on the intensity in the beam bundle on the other side of the object and from these measured values filter values
are determined, which are divided into different groups. The filter values of at least one group are weighted in dependence on the movement of the object, wherein, when filter values of several
groups are weighted, filter values of different groups are weighted differently in dependence on the movement of the object. Finally, a CT image of the object is reconstructed from the filter values.
A computed tomography method having the following steps:a) generation with a radiation source of a conical beam bundle passing through an examination zone and a periodically moving object located
therein,b) generation of a nPi-relative movement between the radiation source on the one hand and the examination zone on the other hand, which comprises a rotation about an axis of rotation and a
displacement in a displacement direction parallel to the axis of rotation and takes the form of a helix,c) acquisition by means of the detector unit, during the nPi-relative movement, of measured
values that are dependent on the intensity in the beam bundle on the other side of the examination zone,d) determination of filter values by filtering the measured values and dividing the filter
values into different groups,e) weighting of the filter values of at least one group in dependence on the movement of the object, wherein, when filter values of several groups are weighted, filter
values of different groups are differently weighted in dependence on the movement of the object,f) reconstruction of a CT image of the examination zone from the filter values.
A computed tomography method as claimed in claim 1, wherein in step f) the reconstruction is carried out by back-projecting the filter values into the examination zone.
A computed tomography method as claimed in claim 1, wherein in step d) the measured values are filtered with a κ-filter.
A computed tomography method as claimed in claim 1, wherein, before the filtering in step d), the measured values are derived, wherein each measured value is assigned a ray and wherein in each case
measured values whose associated rays run parallel are derived partially in accordance with the angular position of the radiation source on the helix, from which radiation source the particular
parallel rays issue.
A computed tomography method as claimed in claim 1, wherein, in step d), the filter values are filtered by filtering the measured values in such a manner and are divided into groups in such a manner
that filter values of different groups comprise contributions of radon planes that intersect the helix with varying frequency.
A computed tomography method as claimed in claim 5, wherein filter values of a group are weighted more heavily in dependence on the movement of the object the more frequently the radon planes whose
contributions are contained by the filter values of the particular group intersect the helix.
A computed tomography method as claimed in claim 1, wherein, in step d), first and second filter values are determined by filtering the measured values in such a manner that the first filter values
comprise contributions of radon planes that intersect the helix at least n times, and the second filter values comprise contributions of radon planes that intersect the helix fewer than n times,in
that the filter values in step d) are divided into different groups in such a manner that the first filter values form a first group and the second filter values form a second group, andin that, in
step e), the filter values of the first group are weighted more heavily in dependence on the movement of the object than the filter values of the second group.
A computed tomography method as claimed in claim 7, wherein, in step d), for each measured value the first and/or second filter values are determined according to the following steps:assignment of at
least one filter line to the particular measured value, wherein, when the particular measured value lies between two Pi-boundary lines, the filter lines F
.sup.(R), G
.sup.(R) and G
.sup.(L) are assigned to the particular measured value, whereinl assumes the values 1, 3 . . . , n,u assumes the values 3, . . . , n-2, andv assumes the values 1, 3, . . . , n-2,wherein, when the
particular measured value lies between two rPi-boundary lines, but not between two (r-2)Pi-boundary lines, wherein r is greater than 1 and less than n, the filter lines F
.sup.(R), F
sup.(R), . . . , F
.sup.(R), G
.sup.(L), G
sup.(L), . . . , G
sup.(L), G
.sup.(R), G
sup.(R), . . . , G
sup.(R) are assigned to the particular measured value,wherein, when the particular measured value lies between two nPi-boundary lines, but not between two (n-2)Pi-boundary lines, the filter line F
.sup.(R) is assigned to the particular measured value,determining in each case an intermediate filter value for each filter line that is assigned to the particular measured value, by filtering the
measured values that lie on the particular filter line and adding the filtered measured values tying on the particular filter line to form an intermediate filter value, so that for each filter line
assigned to the particular measured value an intermediate filter value is determined, which is assigned to the particular measured value and the particular filter line,forming a first filter value by
linear combination of those intermediate filter values of the particular measured value that are assigned to the filter lines F
sup.(R), F
.sup.(R), G
sup.(R) and G
sup.(L), and forming a second filter value by linear combination of those intermediate filter values of the particular measured value that are assigned to the filter lines F.sub.
sup.(R), F.sub.
sup.(R), . . . , F
sup.(R), G.sub.
sup.(L), G.sub.
sup.(L), . . . , G
sup.(L), G.sub.
sup.(R), . . . , G
sup.(R),wherein to determine the course of the associated at least one filter line F.sub.
sup.(R), G
.sup.(R), and G
.sup.(L), first of all dividing lines L
, L.sub.-m, L
and L.sub.-p
with m=1, 3, . . . , n and p=1, 3, . . . , n are determined, the projections of which onto a notional, planar detector surface that contains the axis of rotation and the surface normal of which runs
through the respective radiation source position y(s) extend as follows:the projection of the dividing line L
is parallel to the projection of the derivative {dot over (y)}(s) of the radiation source position y(s) onto the planar detector surface,the projection of the dividing line L
is the same as the projection of the dividing line L
,the projection of the dividing line L
for m>1 is parallel to the projection of L
and tangential to the projection of the upper mPi-boundary line onto the planar detector surface,the projection of the dividing line L.sub.-m for m>1 is parallel to the projection of L
and tangential to the projection of the lower mPi-boundary line onto the planar detector surface,the projection of the dividing line L.sub.-p
is tangential to the projection of the upper mPi-boundary line onto the planar detector surface and tangential to the projection of the lower pPi-boundary line onto the planar detector surface and
has a negative gradient with respect the planar detector surface,the projection of the dividing line L
is tangential to the projection of the upper mPi-boundary line onto the planar detector surface and tangential to the projection of the lower pPi-boundary line onto the planar detector surface and
has a positive gradient with respect to the planar detector surface,wherein the filter lines F
.sup.(R), G
.sup.(R) and G
.sup.(L) that are assigned to the particular measured value extend through the particular measured value, wherein the filter line F
.sup.(R) extends exclusively between the uPi-boundary lines, wherein the filter line G
.sup.(R) extends exclusively between the uPi-boundary lines, wherein the filter line G
.sup.(L) extends exclusively between the vPi-boundary lines andi) wherein the projection of a filter line F
.sup.(R) that has been assigned to a measured value extends onto the planar detector surface tangentially to the projection of the upper lPi-boundary line onto the planar detector surface, when the
measured value lies above a dividing line L
,ii) wherein the projection of a filter line F
.sup.(R) that has been assigned to a measured value extends onto the planar detector surface parallel to the projection of the dividing line L
onto detector surface, when i) does not apply and the measured value lies on the planar detector surface above a dividing line L.sub.-i,iii) wherein the projection of a filter line F
.sup.(R) that has been assigned to a measured value extends onto the planar detector surface tangentially to the projection of the lower lPi-boundary line onto the planar detector surface, when i)
and ii) do not apply,iv) wherein the course of a filter line G
.sup.(R) that has been assigned to a measured value is determined by setting variables t and x to zero and carrying out the following steps α) to δ) until, for a filter line G
.sup.(R) that is assigned to a measured value and for which the course is to be determined, a course has been determined, or until (u-t) is less than 3 or (3+x) is greater than u:α) checking whether
the particular measured value lies on the planar detector surface above the dividing line L
-l, wherein, if this is the case, the projection of the filter line G
.sup.(R) that has been assigned to the measured value extends onto the planar detector surface tangentially to the projection of the upper (u-t)Pi-boundary line onto the planar detector surface,β)
adding the number two to the variable t,γ) if the particular measured value does not lie above the dividing line L
-t+2, that is, if in step α) no course has been determined for the filter line G
.sup.(R), and if (u-t) is greater than or equal to 3, checking whether the particular measured value lies on the planar detector surface above the dividing line L
-l taking into account the variable t increased by two, wherein, if this is the case, the projection of the filter line G
.sup.(R) that has been assigned to the measured value extends onto the planar detector surface tangentially to the projection of the lower (3+x)Pi-boundary line onto the planar detector surface,δ)
adding the number two to the variable x,v) wherein, if in steps α) to δ) for a filter line G
.sup.(R) that is assigned to a measured value no course of the filter line G
.sup.(R) has been determined, the filter line G
.sup.(R) extends tangentially to the lower uPi-boundary line,vi) wherein the course of a filter line G
.sup.(L) that has been assigned to a measured value is determined by setting variables t and x to zero and carrying out the following steps ε) to θ) until, for a filter line G
.sup.(L) that is assigned to a measured value and for which the course is to be determined, a course has been determined, or until (v-t) is less than 1 or (l+x) is greater than v.ε) checking whether
the particular measured value lies on the planar detector surface above the dividing line L.sub.-(l+x)
-t wherein, if this is the case, then the projection of the filter line G
.sup.(L) that has been assigned to the measured value extends onto the planar detector surface tangentially to the projection of the upper (v-t) Pi-boundary line onto the planar detector surface,)
adding the number two to the variable t,η) if the particular measured value does not lie above the dividing line L.sub.-(l+x)
-t+2, that is, if in step ε) no course has been determined for the filter line G
.sup.(L), and if (v-t) is greater than or equal to 1, checking whether the particular measured value lies on the planar detector surface above the dividing line L.sub.-(l+x)
-t taking into account the variable t increased by two, wherein, if this is the case, the projection of the filter line G
.sup.(L) that has been assigned to the measured value extends onto the planar detector surface tangentially to the projection of the lower (1+x) Pi-boundary line onto the planar detector surface.θ)
adding the number two to the variable x,vii) wherein, if in steps ε) to θ) for a filter line G
.sup.(L) that is assigned to a measured value, no course of the filter line G
.sup.(L) has been determined, the filter line G
.sup.(L) extends tangentially to the lower vPi-boundary line.
A computed tomography method as claimed in claim 8, wherein a filter direction is assigned to each filter line, wherein the filter directions that are assigned to the filter lines F
.sup.(R) and G
.sup.(R), projected onto the planar detector surface (60), point substantially from left to right, and the filter directions that are assigned to the filter lines G
.sup.(L), projected onto the planar detector surface, point substantially from right to left.
A computed tomography method as claimed in claim 8, wherein when the projection of a filter line G
.sup.(L) onto the planar detector surface extends tangentially to the projection of one of the upper sPi-boundary lines withs=1, 3, . . . , v the projection of the filter line G
.sup.(L) approaches one of the upper sPi-boundary lines in a region of the projection that is located to the left of the projection onto the planar detector surface of the measured value to be
filtered, in that when the projection of a filter line G
.sup.(R) onto the planar detector surface extends tangentially to the projection of one of the lower sPi-boundary lines withs=13 . . . , v, the projection of the filter line G
.sup.(L) approaches one of the lower sPi-boundary lines in a region of the projection that is located to the right of the projection onto the planar detector surface of the measured value to be
filtered,in that when a projection of the filter line G
.sup.(R) onto the planar detector surface extends tangentially to the projection of one of the upper pPi-boundary lines withp=3, . . . , u onto the detector surface, the projection of the filter line
.sup.(R) approaches the upper pPi-boundary line in a region that is located to the right of the projection onto the planar detector surface of the measured value to be filtered,in that when the
projection of a filter line G
.sup.(R) onto the planar detector surface (extends tangentially to the projection of one of the lower pPi-boundary lines withp=3, . . . , u, the projection of the filter line G
.sup.(R) approaches one of the lower pPi-boundary lines in a region of the projection that is located to the left of the projection onto the planar detector surface of the measured value to be
filtered,in that when the projection of a filter line F
.sup.(R) onto the planar detector surface extends tangentially to the projection of the upper lPi-boundary line onto the planar detector surface and when the location at which the projection of the
dividing line L
onto the planar detector surface tangentially contacts the projection of the upper lPi-boundary line is located to the left of the projection onto the planar detector surface of the measured value to
be filtered, the projection of the filter line F
.sup.(R) approaches the upper lPi-boundary line in a region of the projection that is located to the left of the projection onto the planar detector surface of the measured value to be filtered,in
that when the projection of a filter line F
.sup.(R) onto the planar detector surface extends tangentially to the projection of the upper lPi-boundary line onto the planar detector surface and when the location at which the projection of the
dividing line L
onto the planar detector surface tangentially contacts the projection of the upper lPi-boundary line is located to the right of the projection onto the planar detector surface of the measured value
to be filtered, the projection of the filter line F
.sup.(R) approaches the upper lPi-boundary line in a region of the projection that is located to the right of the projection onto the planar detector surface of the measured value to be filtered,in
that when the projection of a filter line F
.sup.(R) onto the planar detector surface extends tangentially to the projection of the lower lPi-boundary line onto the planar detector surface and when the location at which the projection of the
dividing line L.sub.-l onto the planar detector surface tangentially contacts the projection of the lower lPi-boundary line is located to the left of the projection onto the planar detector surface
of the measured value to be filtered, the projection of the filter line F
.sup.(R) approaches the lower lPi-boundary line in a region of the projection that is located to the left of the projection onto the planar detector surface of the measured value to be filtered,in
that when the projection of a filter line F
.sup.(R) onto the planar detector surface extends tangentially to the projection of the lower lPi-boundary line onto the planar detector surface and when the location at which the projection of the
dividing line L.sub.-l onto the planar detector surface tangentially contacts the projection of the lower lPi-boundary line is located to the right of the projection onto the planar detector surface
of the measured value to be filtered, the projection of the filter line F
.sup.(R) approaches the lower lPi-boundary line in a region of the projection that is located to the right of the projection onto the planar detector surface of the measured value to be filtered.
Computed tomography method as claimed in claim 8, wherein on forming a first filter value by linear combination of those intermediate filter values of the particular measured value which are assigned
to the filter lines F
sup.(R), F
.sup.(R), G
sup.(R) and G
sup.(L),an intermediate filter value, which is assigned to a filter line F
sup.(R), is multiplied by the factor -1,an intermediate filter value, which is assigned to a filter line F
sup.(R), is multiplied by the factor 1,an intermediate filter value, which is assigned to a filter line G
sup.(R), is multiplied by the factor 1/2,an intermediate filter value, which is assigned to a filter line G
sup.(L), is multiplied by the factor -1/2, and on forming a second filter value by linear combination of those intermediate filter values of the particular measured value which are assigned to the
filter lines F.sub.
sup.(R), F.sub.
sup.(R), . . . , F
sup.(R), G.sub.
sup.(L), G.sub.
sup.(L), . . . , G
sup.(L), G.sub.
sup.(R), G.sub.
sup.(R), . . . , G
sup.(R),an intermediate filter value, which is assigned to a filter line F.sub.
sup.(R), is multiplied by n/3,an intermediate filter value, which is assigned to a filter line F
sup.(R), is multiplied by n/(n-2),an intermediate filter value, which is assigned to a filter line F
.sup.(R), with 1<k<n-2, is multiplied by 2n/(k(k+2)),an intermediate filter value, which is assigned to a filter line G
sup.(R), is multiplied by -n/(2(n-2)),an intermediate filter value, which is assigned to a filter line G
.sup.(R), with z<n-2, is multiplied by n/(z(z+2)),an intermediate filter value, which is assigned to a filter line G
sup.(L), is multiplied by n/(2(n-2)),an intermediate filter value, which is assigned to a filter line G
.sup.(L), with d<n-2, is multiplied by n/(d(d+2)).
Computed tomography method as claimed in claim 1, wherein, in step d), each filter value is determined from measured values that have been simultaneously acquired in step c), and in that when
weighting filter values of one group in step e), a filter value of this group that has been determined from measured values that have been acquired whilst the object has moved more quickly, is
weighted more weakly than a different filter value of this group that has been determined from measured values that have been acquired whilst the object has moved more slowly.
Computed tomography method as claimed in claim 1, wherein the object to be examined is a heart, in that during the acquisition in step c) simultaneously an electrocardiogram is recorded and in that
when weighting filter values of one group in step e), a filter value of this group that has been determined from measured values that have been acquired whilst the heart was in the diastolic phase,
is weighted more heavily than a different filter value of this group that has been determined from measured values that have been acquired whilst the heart was in the systolic phase.
Computer tomograph, especially for carrying out the method as claimed in claim 1, havinga radiation source for generating a conical beam bundle passing through an examination zone and an object
located therein,a drive arrangement, in order to cause an object located in the examination zone and the radiation source to rotate relative to one another about an axis of rotation and to be
displaced parallel to the axis of rotation,a detector unit coupled to the radiation source, which detector unit has a detector surface for acquisition of measured values,a reconstruction unit for
reconstructing a CT image of the examination zone from the measured values acquired by the detector unit,a control unit for controlling the radiation source, the detector unit, the drive arrangement
and the reconstruction unit in conformity with the steps disclosed in claim
1. 15.
Computer tomograph as claimed in claim 14, wherein the computer tomograph comprises means for recording the movement of the object, especially an electrocardiograph.
Computer program for a control unit for controlling a radiation source, a detector unit, a drive arrangement and a reconstruction unit of a computer tomograph as claimed in claim 14 for carrying out
the method as claimed in claim
The invention concerns a computed tomography method, in which an examination zone in which there is arranged a periodically moving object is irradiated along a helical trajectory by a conical beam
bundle. The invention also relates to a computer tomograph for carrying out the method according to the invention and to a computer program for controlling the computer tomograph.
Methods of the kind mentioned initially are often used in examinations of the heart and are described, for example, in "Multislice CT in Cardiac Imaging: Technical Principles, Imaging Protocols,
Clinical Indications and Future Perspective", B. M. Ohnesorge et al., Springer Verlag, 2002, ISBN 3540429662. In these methods, measured values are acquired with a detector unit, the cardiac motion
being recorded by means of an electrocardiograph during measuring. The measured values are weighted in dependence on the cardiac motion, and a computed tomography image (CT image) of the heart is
reconstructed from the weighted measured values.
The drawback of these known methods is that, despite the weighting, the reconstructed CT images contain artifacts owing to the movement of the object, which diminish the image quality.
It is an object of the invention to specify a computed tomography method of the kind mentioned initially in which the quality of the reconstructed CT image is enhanced.
That object is achieved in accordance with the invention by a computed tomography method having the following steps:
) generation with a radiation source of a conical beam bundle passing through an examination zone and a periodically moving object located therein,b) generation of a nPi-relative movement between the
radiation source on the one hand and the examination zone on the other hand, which comprises a rotation about an axis of rotation and a displacement in a displacement direction parallel to the axis
of rotation and takes the form of a helix,c) acquisition by means of the detector unit, during the nPi-relative movement, of measured values that are dependent on the intensity in the beam bundle on
the other side of the examination zone,d) determination of filter values by filtering the measured values and dividing the filter values into different groups,e) weighting of the filter values of at
least one group in dependence on the movement of the object, wherein, when filter values of several groups are weighted, filter values of different groups are differently weighted in dependence on
the movement of the object,f) reconstruction of a CT image of the examination zone from the filter values.
The nPi-relative movement and also the nPi-acquisition, that is, the acquisition of measured values during an nPi-relative movement, are generally well known. With this specific relative movement,
the pitch, that is, the spacing of adjacent turns of the helical trajectory, is selected so that from each radiation source position, n+1 turns are projected onto the detector surface, n being a
natural, odd number greater than 1. As a rule, one talks of the 3Pi-, 5Pi-, 7Pi- etc. relative movement or acquisition. The projections of the turns of the helical trajectory onto the detector
surface are known as mPi-boundary lines, with m=1, 3, . . . , n. The two innermost projected turns are the Pi-boundary lines, the two projected turns each adjacent to a Pi boundary line are the
3Pi-boundary lines, the 5Pi-boundary lines come next, and so on. The course of the mPi-boundary lines on the detector surface is explained more precisely below.
In contrast to the prior art mentioned initially, not all measured values are weighted in equal measure in dependence on the object movement; on the contrary, the measured values are filtered, and
the resulting filter values are divided into groups, at least one group of filter values being weighted in dependence on the object movement. If filter values of several groups are weighted, then
filter values of different groups are filtered differently in dependence on the object movement, that is, filter values of a group that have been formed by filtering measured values that have been
acquired during a specific movement phase of the object are weighted differently, for example, multiplied by a larger or smaller weighting factor, from filter values of another group that have been
formed by filtering measured values that have been acquired during the same movement phase of the object. This group-dependent weighting enables filter values which, were they to be weighted in
dependence on the object movement, would not contribute to enhancement of the image quality or would impair the image quality, to be weighted more weakly than other filter values or not at all in
dependence on the object movement. In this way, the image quality is improved compared with the methods mentioned initially.
In the embodiment as claimed in claim 2, the filter values are back-projected, which leads to a good image quality of the reconstructed object for little computation effort.
According to claim 3, the measured values are filtered by means of a κ-filter, which is explained in detail below. The use of a κ-filter and also the derivation of the measured values as claimed in
claim 4 before filtering lead to a further improvement in image quality.
According to claim 5, in step d) the filter values are filtered by filtering the measured values in such a manner and dividing the filter values into groups in such a manner that filter values of
different groups comprise contributions of radon planes that intersect the helix with varying frequency. That is to say, the filter values of one group comprise, for example, contributions of radon
planes that intersect the helix 1 to 3 times, the filter values of another group comprise, for example, contributions of radon planes that intersect the helix 4 to 6 times, and the filter values of
another group comprise, for example, contributions of radon planes that intersect the helix more than 6 times.
The embodiment as claimed in claim 5 and also the embodiment as claimed in claim 6 lead to a further improvement in image quality.
According to claim 7, first filter values are determined, which comprise contributions of radon planes that intersect the helix at least n times, and second filter values are determined, which
comprise contributions of radon planes that intersect the helix fewer than n times. The filter values are then divided into different groups, such that the first filter values form a first group and
the second filter values form a second group. In step e), the filter values of the first group are then weighted more heavily in dependence on the object movement than the filter values of the second
group. For example, in dependence on the object movement, only the first filter values are weighted and not the second filter values, whereby the image quality of the reconstructed CT image is
further enhanced.
Inter alia, a filter value of one group is weighted more heavily in dependence on the object movement than a filter value of a different group, if a filter value of the one group is multiplied by a
larger weighting factor than a filter value of the other group, although both filter values have been determined by filtering measured values that have been acquired while the object was in the same
phase of movement. The fact that the filter values of one group are weighted more heavily in dependence on the object movement than the filter values of a different group includes also the case that
the filter values of the one group are weighted, even if only weakly, in dependence on the object movement, and the filter values of the other group are not weighted in dependence on the object
Radon planes are known, for example, from "The Mathematics of Computerized Tomography", F. Natterer, Wiley, New York, 1986, so that more specific details of the course of radon planes and their
significance for the reconstruction of computed tomography images are not given.
Claim 8 describes a relatively simple manner of determining first filter values comprising contributions of radon planes that intersect the helix at least n times, and second filter values comprising
contributions of radon planes that intersect the helix fewer than n times.
The embodiments of the computed tomography method according to the invention described in claims 9 to 13 further enhance the quality of the reconstructed CT image.
Claims 14 and 15 describe a computer tomograph for carrying out the method according to the invention. Claim 16 defines a computer program for controlling a computer tomograph as claimed in claim 14.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
In the drawings:
FIG. 1 shows a computer tomograph, with which the method according to the invention can be implemented,
FIG. 2 is a flow chart of the method according to the invention,
FIG. 3 is a perspective view of a helical trajectory, a radiation source, a focus-centered and a planar detector surface in the case of a Pi-relative movement,
FIG. 4 is a perspective view of a helical trajectory, a radiation source, a focus-centered and a planar detector surface during a 3Pi-relative movement,
FIG. 5 is a perspective view of parallel rays that emanate from different radiation source positions and are incident on the same row of detectors,
FIG. 6 is a perspective view of a portion of the helical trajectory, the radiation source, and the planar detector surface,
FIG. 7 shows the course of mPi-boundary lines and dividing lines on the planar detector surface during a 5Pi-relative movement,
FIG. 8 shows the course of mPi-boundary lines and dividing lines on the planar detector surface during a 7Pi-relative movement,
FIG. 9 to FIG. 12 show a course of filter lines during a 5Pi-relative movement,
FIG. 13 and FIG. 14 show a course of filter lines during a 7Pi-relative movement, and
FIG. 15 shows a flow chart of a filtered back-projection.
The computer tomograph illustrated in FIG. 1 comprises a gantry 1, which is capable of rotating about an axis of rotation 14 running parallel to the z-direction of the co-ordinate system illustrated
in FIG. 1. For that purpose, the gantry 1 is driven by a motor 2 at a preferably constant but adjustable angular velocity. A radiation source S, for example an X-ray tube, is fixed to the gantry 1.
The X-ray tube is provided with a collimator arrangement 3, which from the radiation produced by the radiation source S extracts a conical beam bundle 4, i.e. a beam bundle that has a finite extent
other than zero both in the z-direction and in a direction perpendicular thereto (i.e. in a plane perpendicular to the axis of rotation).
The beam bundle 4 passes through a cylindrical examination zone 13, in which a periodically moving object (not illustrated) is located. In this exemplary embodiment this object is a beating heart,
which performs proper motion and is possibly additionally moved back and forth by respiratory motion of the patient. In other embodiments, other periodically moving body organs, such as the liver or
the brain, periodically moving parts of body organs or periodically moving technical objects could alternatively be irradiated.
After passing through the examination zone 13, the beam bundle 4 is incident on a detector unit 16 fixed to the gantry 1, the detector unit having a detector surface comprising a plurality of
detector elements, which in this embodiment are arranged matrix-form in rows and columns. The detector columns extend parallel to the axis of rotation 14. The detector rows are located in planes
perpendicular to the axis of rotation, in this embodiment on an arc of a circle around the radiation source S (focus-centered detector surface). In other embodiments they could alternatively be of a
different form, for example, they could describe an arc of a circle about the axis of rotation 14 or be linear. Each of the detector elements on which the beam bundle 4 is incident supplies in each
position of the radiation source a measured value for a beam from the beam bundle 4.
The angle of aperture of the beam bundle 4 denoted by (max determines the diameter of the object cylinder, within which the object to be examined is located during acquisition of the measured values.
The angle of aperture is here defined as the angle that a ray lying in a plane perpendicular to the axis of rotation 14 at the edge of the beam bundle 4 encloses with a plane defined by the radiation
source S and the axis of rotation 14.
The examination zone 13, or rather the object or the patient support table, can be displaced by means of a motor 5 parallel to the axis of rotation 14 and the z-axis. Alternatively and equivalently,
the gantry could be displaced in that direction. If the object is a technical object and not a patient, the object can be rotated during an examination, while the radiation source S and the detector
unit 16 remain stationary.
If the motors 2 and 5 run simultaneously, the radiation source S and the detector unit 16 describe a helical trajectory relative 17 to the examination zone 13. If, however, the motor 5 for advance in
the direction of the axis of rotation 14 is idle, and the motor 2 allows the gantry to rotate, a circular trajectory is produced for the radiation source S and the detector unit 16 relative to the
examination zone 13. Only the helical trajectory will be considered below.
The helical trajectory 17 may be parameterized by
( s ) = ( R cos s R sin s s h 2 π ) ( 1 )
wherein R is the radius of the helical trajectory 17, s is the angular position on the helical trajectory and h is the pitch, that is, the spacing between two adjacent turns of the helical
During acquisition of the measured values, the cardiac motion is recorded in known manner by means of an electrocardiograph 8. For that purpose, the thoracic region of a patient is connected by means
of electrodes (not illustrated) to the electrocardiograph 8. In other embodiments, especially in the case of other moving objects, the movement of the object can be followed in other known manners.
Thus, for example, movement information can be obtained from the values measured by the detector unit 16, so that detection of the movement with an additional device, such as an electrocardiograph,
could be omitted. For that purpose, first of all the measured values are used to prepare a kymogram, from which in known manner the movement can be derived. A detailed description of this method can
be found in "Kymogram detection and kymogram-correlated image reconstruction from subsecond spiral computed tomography scans of the heart", M. Kachelrieβ, D. A. Sennst, W. Maxlmoser, W. A. Kalender,
Medical Physics 29(7):1489-1503, 2002, to which reference is hereby made.
In the present embodiment, it is assumed that the patient is not breathing during the measurement. The respiratory motion can therefore be disregarded. In other embodiments, the respiratory motion
could be measured, for example, using a deformable abdominal belt that is connected to a respiratory motion-measuring device.
The measured values acquired by the detector unit 16 are fed to a reconstruction and image-processing computer 10, which is connected to the detector unit 16, for example, via a contactlessly
operating data transmission (not illustrated). In addition, the electrocardiogram is transmitted from the electrocardiograph 8 to the reconstruction and image-processing computer 10. The
reconstruction and image-processing computer 10 reconstructs the absorption distribution in the examination zone 13 and reproduces it, for example, on a monitor 11. The two motors 2 and 5, the
reconstruction and image-processing computer 10, the radiation source S, the electrocardiograph 8, and the transfer of the measured values from the detector unit 16 to the reconstruction and
image-processing computer 10 are controlled by the control unit 7. The control unit 7 also controls the transmission of the electrocardiogram from the electrocardiograph 8 to the reconstruction and
image-processing computer 10.
In other embodiments, the acquired measured values and the measured electrocardiograms for reconstruction can first be fed to one or more reconstruction computers, which forward the reconstructed
data, for example, via a fiber optic cable, to an image-processing computer.
The individual steps of one embodiment of the computed tomography method according to the invention are explained in the following with reference to the flow chart in FIG. 2.
After initialization in step 101, the gantry rotates at an angular velocity that in this example of embodiment is constant. It may also vary however, for example, as a function of time or of
radiation source position.
In step 102, the examination zone is displaced parallel to the axis of rotation 14 in displacement direction 24 (opposite to the direction of the z-axis of the system of co-ordinates in FIG. 1), for
example by displacing the patient support table, and the radiation of the radiation source S is switched on, so that the detector unit 16 is able to detect the radiation from a plurality of angular
positions s and the radiation source S moves relative to the examination zone 13 on the helical trajectory 17. At the same time, or alternatively even before the radiation source S is switched on,
the electrocardiograph 8 is activated, so that an electrocardiogram is measured simultaneously.
The pitch h is here selected in such a way that for each radiation source position y(s) at least four adjacent turns of the helical trajectory 17 along the rays that issue from the respective
radiation source position y(s) are projected onto the detector surface. If four turns are projected on the onto detector surface, then this is a 3Pi-acquisition or 3Pi-relative movement (FIG. 4). If
n+1 turns are projected onto the detector surface, wherein n is an odd integer greater than or equal to 3, then the acquisition is known as nPi-acquisition and the relative movement as nPi-relative
The course of the projection of individual turns onto a notional, planar detector surface 60, the surface normal of which passes through the respective radiation source position y(s) and which
contains the axis of rotation 14, can be described by the following equations:
v P
1 ( u P 1 ) = + h 2 π ( 1 + ( u P 1 R ) 2 ) ( m π 2 - arctan u P 1 R ) and ( 2 ) v P 1 ( u P 1 ) = - h 2 π ( 1 + ( u P 1 R ) 2 ) ( m π 2 + arctan u P 1 R ) . ( 3 )
In these equations, u
, and v
are co-ordinates of a Cartesian system of co-ordinates 62 on the planar detector surface 60, wherein the u
-co-ordinate axis is oriented perpendicular and the v
-co-ordinate axis is oriented parallel to the axis of rotation 14 in the displacement direction 24. For reasons of clarity, this co-ordinate system 62 is illustrated in FIGS. 7 to 14 below the planar
detector surface 60. The origin of the co-ordinate system 62 lies at the center of the detector surface 60 however. The size of the planar detector surface 60 is chosen so that all rays that are
incident upon the focus-centered detector surface 16 also pass through the planar detector surface 60.
The relationship of the co-ordinates on the notional, planar detector surface 60 to the co-ordinates of the real, focus-centered detector surface 16 is given by the following equations:
u P
1 = R tan β and ( 4 ) v P 1 = R 2 + u P 1 2 tan λ = R tan λ cos β = v F cos β R D . ( 5 )
Here, λ is the cone angle of a beam coming from y(s), that is, the angle that this beam encloses with a plane perpendicular to the axis of rotation 14 and containing the radiation source position y
(s). Furthermore, β is the fan angle of a beam coming from y(s), that is, the angle that the projection of this beam onto a plane oriented perpendicular to the axis of rotation 14 and containing the
radiation source position y(s) encloses with a line passing through the radiation source position y(s) and oriented perpendicular to the axis of rotation 14. The variable D denotes the distance of
the radiation source position y(s) from the middle of the focus-centered, real detector surface 16.
During the Pi-acquisition, only two turns adjacent to the respective radiation source position y(s), are projected onto the real, focus-centered detector surface 16 and onto the notional, planar
detector surface 60. The course 81 of the upper projection in FIG. 3 onto the planar detector surface 60 is described by equation (2), whilst the course 83 of the lower projection in FIG. 3 onto the
planar detector surface 60 is described by equation (3) (in each case with m=1). The upper projection is referred to below as the upper Pi-boundary line 81 and the lower projection is referred to
below as the lower Pi-boundary line 83.
The terms "upper", "lower", "left" and "right" and similar expressions relate within the scope of the invention to the orientation of the planar detector surface 60 and the associated system of
co-ordinates 62, as they are illustrated in FIGS. 3, 4 and 7 to 14. Thus, the displacement direction 24 and the v
-co-ordinate axis point "upwards". Furthermore, the u
-co-ordinate axis points "to the right".
During the 3Pi-acquisition, four adjacent turns are projected onto the real, focus-centered detector surface 16 and onto the notional, planar detector surface 60. The course of the uppermost
projection 85 in FIG. 4 onto the planar detector surface 60 is described by equation (2), whilst the course of the lowermost projection 87 in FIG. 4 onto the planar detector surface 60 is described
by equation (3) (in each case with m=3). The uppermost projection is referred to below as the upper 3Pi-boundary line 85 and the lowermost projection is referred to below as the lower 3Pi-boundary
line 87. During the 3Pi-acquisition, an upper Pi-boundary line 81 and a lower Pi-boundary line 83 and an upper 3Pi-boundary line 85 and a lower 3Pi-boundary line 87 run onto the detector surface.
During an nPi-acquisition, upper Pi-3Pi-, 5Pi-, . . . , nPi-boundary lines (as per equation (2)) and lower Pi-3Pi-, 5Pi-, . . . , nPi-boundary lines (as per equation (3)) run accordingly onto the
planar detector surface 60.
Upper Pi-3Pi-, 5Pi- etc. boundary lines have positive v
-co-ordinates on the planar detector surface, whereas lower Pi-, 3Pi-, 5Pi- etc. boundary lines have negative v
During a 3Pi-acquisition, as is illustrated in FIG. 4, h=57.6 mm can be chosen as the pitch when the fan angle of the beam that inclines most strongly out of a plane containing the axis of rotation
14 and the radiation source position y(s) amounts to 52.1°, and when the acquisition geometry is additionally distinguished by an expansion of the real, focus-centered detector 16 in the direction of
the axis of rotation 14 of 175.1 mm, a spacing of the beam source position y(s) from the axis of rotation 14 of 570 mm and a spacing of the radiation source position y(s) from the center of the real,
focus-centered detector surface 16 of 1040 mm.
In step 103, the measured values are derived in accordance with the following equation partially in conformity with s, that is, in conformity with the angular position of the radiation source S:
D f
' ( y ( s ) , Θ ) = ∂ D f ( y ( s ) , Θ = const . ) ∂ s with ( 6 ) D f ( y ( s ) , Θ = ∫ 0 ∞ lf ( y + l Θ ) . ( 7 )
Here, Θ is a unit vector that differentiates measured values, which, although they have been caused by beams coming from the same radiation source position, are incident upon different detector
elements. The unit vector Θ thus specifies the direction of the beam belonging to the measured value. The direction of the unit vector, that is, the direction of a beam, can be parameterized by the
angular position s of the radiation source on the helical trajectory 17 and by a point x in the examination zone 13 through which the beam runs (Θ=Θ(s, x)). Furthermore, D
(y(s), Θ) describes the measured value for a specific radiation source position y(s) and a specific ray direction Θ that has been measured with the focus-centered detector after the corresponding ray
has passed with the absorption distribution f(x) through the object to be reconstructed.
In the case of the partial derivation of the measured values in conformity with equation (6), it should be noted that Θ remains constant, so that, for the derivation, in each case measured values of
parallel rays have to be taken into account. Since parallel rays have the same cone angle, in the case of the focus-centered detector surface 16 used here, parallel rays 51 are incident on the same
detector row 53 (see FIG. 5, in which only a partial area of the detector surface 16 is illustrated). For partial derivation, the measured values can therefore initially be re-sorted. For that
purpose, measured values that belong to parallel rays 51, that is, to the same detector row 53 but to different angular positions s
, s
, s
of the radiation source, are in each case combined into one quantity. The measured values of each quantity are then derived, for example, numerically using the known finite element method in
conformity with the angular position s of the radiation source, wherein known smoothing techniques can be used.
In step 104, the derived measured values are projected along their rays onto the notional, planar detector surface 60.
In step 105, one or more filter lines are assigned to each measured value, each filter line in turn being assigned to a filter direction. The filter lines and filter directions indicate which
measured values are taken into account in what sequence according to the following equation, in order to obtain a first filter value P
(s, x) and/or a second filter value P
(s, x) for a measured value to be filtered, the ray of which, starting from the radiation source position y(s), passes through the point x in the examination zone:
P g
( s , x ) = q = 1 N g μ q , g ∫ - π π γ sin γ D f ' ( y ( s ) , Θ q , g ( s , x , γ ) ) and ( 8 ) P ug ( s , x ) = q = 1 N ug μ q , ug ∫ - π π γ sin γ D f ' ( y ( s ) , Θ q , ug ( s , x , γ ) ) ( 9 )
Here, N
is the number of filter lines that are assigned to the measured value to be filtered, the ray of which, starting from the radiation source position y(s), runs through the point x in the examination
zone, and are used to determine the first filter value P
(s, x) for this measured value. Furthermore, N
is the number of filter lines that are assigned to the measured value to be filtered, the ray of which, starting from the radiation position y(s), runs through the point x in the examination zone,
and are used to determine the second filter value P
(s, x) for this measured value. In addition, μ
and μ
,ug are filter factors, which are described in detail below. Furthermore, Θ
(s, x, γ) is a unit vector, which, for a measured value to be filtered defined as described above by y(s) and x, indicates the qth filter line with corresponding filter direction, which is assigned
to the measured value to be filtered and which is used to determine the first filter value for this measured value. Correspondingly, Θ
,ug(s, x, γ) is a unit vector, which, for a measured value to be filtered defined as described above by y(s) and x, indicates the qth filter line with corresponding filter direction, which is
assigned to the measured value to be filtered and which is used to determine the second filter value for this measured value. Finally, γ is the κ-angle, which the direction vector Θ(s, x)=Θ
(s, x, 0)=Θ
,ug(s, x, 0) that runs, starting from the radiation source position y(s), through the point x in the examination zone, encloses with the unit vector Θ
(s, x, γ) (when calculating a first mean value according to equation (8)) respectively Θ
,ug(s, x, γ) (when calculating a second mean value according to equation (9)).
The relationship between the κ-angle γ, the qth filter line 71 for a measured value given by y(s) and x, and the unit vector Θ
(s, x, γ) is illustrated by way of example in FIG. 6, the index "g" respectively "ug" having been omitted since this index is irrelevant for the illustration. If the qth filter line 71 of the
measured value predetermined by the radiation source position y(s) and x is indicated at the location 73, then Θ
(s, x, 0) denotes the ray that has caused the predetermined measured value at the location 73, and for different κ-angles γ ≠0 the unit vector Θ
(s, x, γ) samples the measured values on the qth filter line 71.
The filter lines and the associated filter directions for the measured values are now defined below:
Initially, a quantity of straight lines is defined as dividing lines L
, wherein m can assume odd natural values of 1 to n in an nPi-acquisition. L
extends through the middle (u
=0, v
=0) of the planar detector 60 and asymptotically to the upper and lower Pi-boundary line. Furthermore, L
has a positive gradient based on the (u
, v
)-co-ordinate system 62. This is illustrated by way of example in FIG. 7, in which inter alia an upper Pi-boundary line 81, an upper 3Pi-boundary line 85, and upper 5Pi-boundary line 89, a lower
Pi-boundary line 83, a lower 3Pi-boundary line 87 and a lower 5Pi-boundary line 91 of a 5Pi-acquisition can be seen. The line L
runs at the same time parallel to the derivative {dot over (y)}(s) at the respective current angular position s. For m>1, the lines L
extend parallel to L
and tangentially to the upper mPi-boundary line. In the case of the 5Pi-acquisition, in addition to the line L
, the lines L
and L
are defined. Moreover, a quantity of dividing lines L-m is defined, which extend parallel to L
and tangentially to the lower mPi-boundary line. In the case of the 5Pi-acquisition, the lines L
and L
are defined. Furthermore, a dividing line L
is defined that is the same as the dividing line L
Furthermore, a quantity of dividing lines L.sub.-p
is defined, which, in the above-mentioned orientation of the (u
, v
)-co-ordinate system 62 illustrated in FIGS. 7 to 14, have a negative gradient and extend both tangentially to the upper mPi-boundary line and tangentially to the lower pPi-boundary line, wherein p
and m are odd, natural numbers less than or equal to n. Dividing lines L
, L
, L
, L
and L
are illustrated by way of example in FIG. 7.
Within the scope of the invention, the gradient of a line on the notional, planar detector surface relates to the (u
, v
)-co-ordinate system 62 used here and illustrated in FIGS. 7 to 14.
Furthermore, a quantity of dividing lines L
is defined, which, in the above-mentioned orientation of the (u
, v
)-co-ordinate system 62 illustrated in FIGS. 7 to 14, have a positive gradient and extend both tangentially to the upper mPi-boundary line and tangentially to the lower pPi-boundary line, wherein p
and m are odd, natural numbers less than or equal to n. Dividing lines L
, L
, L
, L
and L
are illustrated by way of example in FIG. 8.
In an nPi-acquisition, filter lines F
.sup.(R), G
.sup.(R) and G
.sup.(L) are defined, wherein
l assumes the values 1, 3, . . . , n,
u assumes the values 3, . . . , n-2, and
v assumes the values 1, 3, . . . , n-2.
The filter lines F
.sup.(R) extend exclusively between the lPi-boundary lines, the filter lines G
.sup.(R) extend exclusively between the uPi-boundary lines, and the filter lines G
.sup.(L) extend exclusively between the vPi-boundary lines.
The filter direction 88 for filter lines F
.sup.(R) and G
.sup.(R) extends on the planar detector surface 60 in the orientation of the detector surface 60 illustrated in FIGS. 7 to 14 and of the (u
, v
)-co-ordinate system 62 substantially from left to right, that is, substantially in the direction of the v
-axis of the (u
, v
)-co-ordinate system 62. The filter direction 86 for filter lines G
.sup.(L) extends on the planar detector surface 60 in the orientation of the detector surface 60 illustrated in FIGS. 7 to 14 and of the (u
, v
)-co-ordinate system 62 substantially from right to left, that is, substantially opposite to the direction of the v
-axis of the (u
, v
)-co-ordinate system. Several filter directions are indicated by arrows 86, 88 in FIGS. 9 to 14.
Each measured value to be filtered is assigned at least one of the filter lines F
.sup.(R), G
.sup.(R) and G
.sup.(L), each of these assigned filter lines extending through the respective measured value.
The course of a filter line F
.sup.(R) of a measured value to be filtered in conformity with (8) or (9) is defined so that the projection of the filter line F
.sup.(R) that has been assigned to a measured value extends onto the planar detector surface (60) tangentially to the projection of the upper lPi-boundary line onto the planar detector surface (60),
when the measured value lies on the planar detector surface (60) above a dividing line L
. If this is not the case, then the filter line F
.sup.(R) is so defined that the projection of the filter line F
.sup.(R) that has been assigned to a measured value extends onto the planar detector surface (60) parallel to the projection of the dividing line L
onto the planar detector surface (60), when the measured values lies on the planar detector surface (60) above a dividing line L.sub.-l. If this is also not the case, then the filter line F
.sup.(R) is defined so that the projection of the filter line F
.sup.(R) that has been assigned to a measured value extends onto the planar detector surface (60) tangentially to the projection of the lower lPi-boundary line onto the planar detector surface (60).
The projection of the filter line F
.sup.(R) extends therefore inter alia tangentially to the projection of an upper or lower lPi-boundary line. In that case, it is additionally determined whether the particular filter line F
.sup.(R) approaches the upper or lower lPi-boundary line on the notional, planar detector surface to the left or the right of the measured value to be filtered and to which the particular filter line
.sup.(R) is assigned, that is, whether the particular tangential point is located to the left or to the right of the measured value to be filtered.
To determine the position of a tangential point of a filter line F
.sup.(R) extending tangentially to an upper lPi-boundary line, initially the tangential point of the dividing line L
is determined, that is, the point at which the dividing line L
contacts the upper lPi-boundary line. If the measured value to be filtered to which the filter line F
.sup.(R) is assigned is arranged on the notional, planar detector to the left of this contact point, then the tangential point of the filter line F
.sup.(R) is located to the right of the measured value to be filtered. If that is not the case, then the tangential point of the filter line F
.sup.(R) is located to the left of the measured value to be filtered on the notional, planar detector surface.
To determine the position of a tangential point of a filter line F
.sup.(R) extending tangentially to a lower lPi-boundary line, initially the tangential point of the dividing line L.sub.-l is determined, that is, the point at which the dividing line L.sub.-l
contacts the lower lPi-boundary line. If the measured value to be filtered to which the filter line F
.sup.(R) is assigned is arranged on the notional, planar detector to the left of this contact point, then the tangential point of the filter line F
.sup.(R) is located to the right of the measured value to be filtered. If that is not the case, then the tangential point of the filter line F
.sup.(R) is located to the left of the measured value to be filtered on the notional, planar detector surface.
The course of a filter line G
.sup.(R) of a measured value to be filtered in conformity with equation (8) or (9) is defined by setting variables t and x to zero and carrying out the following steps α) to δ) until, for a filter
line G
.sup.(R) that is assigned to a measured value and for which the course is to be determined, a course has been determined, or until (u-t) is less than 3 or (3+x) is greater than u:
) Checking whether the particular measured value on the planar detector surface 60 lies above the dividing line L
-t. If this is the case, then the projection of the filter line G
.sup.(R) that has been assigned to the measured value extends onto the planar detector surface (60) tangentially to the projection of the upper (u-t)Pi-boundary line onto the planar detector surface
(60).β) Adding the value two to the variable t.γ) If the particular measured value does not lie above the dividing line L
-t+2, (that is, if in step α) no course has been determined for the filter line G
.sup.(R)) and if (u-t) is greater than or equal to 3, then it is checked whether the particular measured value is located on the planar detector surface 60 above the dividing line L
-t, taking into account the variable t increased by two. If this is the case, then the projection of the filter line G
.sup.(R) that has been assigned to the measured value extends onto the planar detector surface (60) tangentially to the projection of the lower (3+x) Pi-boundary line onto the planar detector surface
(60),δ) Adding the value two to the variable x.
If in steps α) to δ) it was not possible to determine a course for a filter line G
.sup.(R) that is assigned to a measured value, that is to say, if determination of the course of the filter line G
.sup.(R) by means of the steps α) to δ) has been terminated, since (u-t) has become less than 3 or (3+x) has become greater than u, then the filter line G
.sup.(R) that is assigned to the particular measured value extends tangentially to the lower u Pi-boundary line.
For a filter line G
.sup.(R), the particular tangential point is located to the left of the measured value to be filtered to which the particular filter line is assigned, when the filter line G
.sup.(R) extends tangentially to a lower 3Pi-, 5Pi, . . . or uPi-boundary line. If, on the other hand, the filter line G
.sup.(R) extends tangentially to an upper 3Pi-, 5Pi, . . . or uPi-boundary line, then the particular tangential point is located to the right of the measured value to be filtered.
The course of a filter line G
.sup.(L) of a measured value to be filtered in conformity with equation (8) or (9) is defined by setting variables t and x to zero and carrying out the following steps ε) to θ) until, for a filter
line G
.sup.(L) that is assigned to a measured value and for which the course is to be determined, a course has been determined, or until (v-t) is less than 1 or (1+x) is greater than v:
) Checking whether the particular measured value lies on the planar detector surface 60 above the dividing line L.sub.-(1+x)
-t. If this is the case, then the projection of a filter line G
.sup.(L) that has been assigned to a measured value extends onto the planar detector surface (60) tangentially to the projection of the lower (1+x)Pi-boundary line onto the planar detector surface
(60).ζ) Adding the value two to the variable t.η) If the particular measured value does not lie above the dividing line L.sub.-(1+x)
-t+2, (that is, if in step α) no course has been determined for the filter line G
.sup.(R)) and if (v-t) is greater than or equal to 1, then it is checked whether the particular measured value lies on the planar detector surface 60 above the dividing line L.sub.-(1+x)
-t, taking into account the variable t increased by the value two. If this is the case, then the projection of the filter line G
.sup.(L) that has been assigned to the measured value extends onto the planar detector surface (60) tangentially to the projection of the lower (1+x)Pi-boundary line onto the planar detector surface
(60).θ) Adding the value two to the variable x.
If in steps ε) to θ) it was not possible to determine a course for a filter line G
.sup.(L) that is assigned to a measured value, that is to say, if determination of the course of the filter line G
.sup.(L) by means of the steps ε) to θ) has been terminated, since (v-t) has become less than 1 or (1+x) has become greater than u, then the filter line G
.sup.(L) that is assigned to the particular measured value extends tangentially to the lower vPi-boundary line.
For a filter line G
.sup.(L), the particular tangential point is located to the left of the measured value to be filtered to which the particular filter line is assigned, when the filter line G
.sup.(L) extends tangentially to an upper Pi-, 5Pi, . . . or vPi-boundary line. If, on the other hand, the filter line G
.sup.(L) extends tangentially to a lower Pi-, 5Pi, . . . or vPi-boundary line, then the particular tangential point is located to the right of the measured value to be filtered.
The course of several filter lines is illustrated as an example in FIGS. 9 to 14, the filter lines in these Figures each being shown by a broken line. In FIG. 9, several filter lines F
.sup.(R) are shown and in FIG. 10 several filter lines F
.sup.(R) are shown, in each case during a 5Pi-relative movement. In FIG. 11, several filter lines G
.sup.(L) are shown and in FIG. 12 several filter lines G
.sup.(L) are shown, likewise during a 5Pi-relative movement. In FIG. 13, several filter lines G
.sup.(R) are shown and in FIG. 14 several filter lines G
.sup.(R) are shown, in each case during a 7Pi-relative movement, the upper 7Pi-line bearing the reference numeral 90 and the lower 7Pi-line bearing the reference numeral 93.
The number of filter lines that is assigned to a measured value therefore depends on its position on the detector surface and on the selected "n" in the nPi-relative movement.
If the measured value lies between two Pi-boundary lines, then the filter lines F
.sup.(R), F
.sup.(R), . . . , F
.sup.(R), G
.sup.(R), G
.sup.(R), . . . , G
.sup.(R), G
.sup.(L), G
.sup.(L), . . . , G
.sup.(L) are assigned to the measured value. If the particular measured value lies between two rPi-boundary lines, but not between two (r-2)Pi-boundary lines, wherein r is greater than 1 and less
than n, then the filter lines F
.sup.(R), F
+2.sup.(R), . . . , F
.sup.(R), G
.sup.(L), G
+2.sup.(L), . . . , G
.sup.(L), G
.sup.(R), G
+2.sup.(R), . . . , G
.sup.(R) are assigned to the measured value. If the particular measured value is between two nPi-boundary lines, but not between two (n-2)Pi-boundary lines, then the filter line F
.sup.(R) is assigned to the measured value.
For example, during a 5Pi-relative movement, the filter lines F
.sup.(R), F
.sup.(R), F
.sup.(R), G
.sup.(R), G
.sup.(L) and G
.sup.(L) are assigned to a measured value that lies between the Pi-boundary lines. The filter lines F
.sup.(R), F
.sup.(R), G
.sup.(R) and G
.sup.(L) are assigned to a measured value that lies between the 3Pi-boundary lines but not between the Pi-boundary lines. Finally, the filter line F
.sup.(R) is assigned to a measured value that lies between the 5i-boundary lines but not between the 3Pi-boundary lines.
The filter lines F
.sup.(R), G
.sup.(R) and G
.sup.(L) are for a specific location, that is, for a specific position of the measured value on the detector surface exclusively dependent on the acquisition geometry used, that is, on the dimensions
of the computer tomograph and on the pitch selected. In known acquisition geometry, the filter lines F
.sup.(R), G
.sup.(R) and G
.sup.(L) can thus be determined immediately and not first in step 105.
Next, the measured values projected onto the planar detector 60 are filtered in step 106 along the filter lines and filter directions predetermined in step 105 in conformity with equation (8) and
(9), so that per measured value and associated filter line a respective intermediate filter value is determined.
For that purpose, first of all a measured value to be filtered and a filter line assigned to this measured value are selected. Along this filter line, measured values that lie on the filter line are
each multiplied in the filter direction with a κ-factor and added. The κ-factor decreases here as the sine of the κ-angle increases. The κ-factor is in particular equal to the reciprocal of the sine
of the κ-angle. The result of the summation is an intermediate filter value. This is repeated for all filter lines of the measured value to be filtered, so that for this measured value and for each
filter line assigned to this measured value a respective intermediate filter value is determined, that is, an intermediate filter value is assigned to each filter line that is assigned to a measured
The reciprocal of the sine of the κ-angle is known as κ-filter, wherein a function that corresponds to the reciprocal of the sine of the κ-angle, but does not represent it exactly, is also known as
κ-filter. For example, the Taylor development of said reciprocal is also known as κ-filter.
To determine an intermediate filter value for a measured value to be filtered and a filter line assigned to this measured value, those measured values that lie on this filter line are preferably
interpolated on the planar detector 60 in such a way that they are arranged on this filter line equidistantly with respect to the κ-angle. The interpolated measured values are then multiplied by the
κ-factor according to equation (8) or (9) along the filter line and added to form an intermediate filter value, wherein multiplication by the κ-factor and the summation can be carried out in known
manner by means of a Fourier transform.
Filtering was here carried out on the planar detector. Alternatively, it can be carried out on any desired detector. The measured values and the filter lines would then have to be projected onto this
detector. In particular it can be sensible to filter the measured values on the focus-centered detector, so that the projection of the measured values onto the planar detector carried out in step 104
can be omitted.
In step 107, first and second filter values are determined from the intermediate filter values. For that purpose, the intermediate filter values of a measured value, as far as they are assigned to
the filter lines F
.sup.(R), F
.sup.(R), G
.sup.(R) and G
.sup.(L) are combined linearly to form a first filter value P
for this measured value (this corresponds to the multiplication by the factors μ
and the summation in equation (8)). Furthermore, the intermediate filter values of a measured value, so far as they are assigned to the filter lines F
.sup.(R), F
.sup.(R), . . . , F
.sup.(R), G
.sup.(L), G
.sup.(L), . . . , G
.sup.(L), G
.sup.(R), G
.sup.(R), . . . , G
.sup.(R), are combined linearly to form a second filter value P
for this measured value (this corresponds to the multiplication by the factors μ
,ug and the summation in equation (9)).
In determining the first filter value P
by means of a linear combination, the intermediate filter values that are assigned to the filter lines F
.sup.(R), F
.sup.(R), G
.sup.(R) and G
.sup.(L), are multiplied by the following filter factors μ
an intermediate filter value, which is assigned to a filter line F
.sup.(R), is multiplied by the filter factor -1,
an intermediate filter value, which is assigned to a filter line F
.sup.(R), is multiplied by the filter factor 1,
an intermediate filter value, which is assigned to a filter line G
.sup.(R), is multiplied by the filter factor 1/2,
an intermediate filter value, which is assigned to a filter line G
.sup.(L) is multiplied by the filter factor -1/2.
In determining the second filter value P
by means of a linear combination, the intermediate filter values that are assigned to the filter lines F
.sup.(R), F
.sup.(R), . . . , F
.sup.(R), G
.sup.(R), G
.sup.(L), . . . , G
.sup.(L), G
.sup.(R), G
.sup.(R), . . . , G
.sup.(R), are multiplied by the following filter factors μ
an intermediate filter value, which is assigned to a filter line F
.sup.(R), is multiplied by the filter factor n/3,
an intermediate filter value, which is assigned to a filter line F
.sup.(R), is multiplied by the filter factor n/(n-2),
an intermediate filter value, which is assigned to a filter line F
.sup.(R), with 1<k<n-2, is multiplied by the filter factor 2n/(k(k+2)),
an intermediate filter value, which is assigned to a filter line G
.sup.(R), is multiplied by the filter factor n/(2(n-2)),
an intermediate filter value, which is assigned to a filter line G
.sup.(R), with z<n-2, is multiplied by the filter factor -n/(z(z+2)),
an intermediate filter value, which is assigned to a filter line G
.sup.(L), is multiplied by the filter factor n/(2(n-2)),
an intermediate filter value, which is assigned to a filter line G
.sup.(L), with d<n-2, is multiplied by the filter factor n/(d(d+2)).
In order to determine a first filter value P
for a measured value, it is first checked which of the filter lines F
.sup.(R), F
.sup.(R), G
.sup.(R) and G
.sup.(L) have been assigned to the measured value. For each of these assigned filter lines a respective intermediate filter value is determined. Each intermediate filter value determined is
multiplied by the corresponding above-mentioned filter factor, and the intermediate filter values multiplied by the respective filter factor are added to form a first filter value P
Correspondingly, in order to determine a second filter value P
for a measured value, it is first checked which of the filter lines F
.sup.(R), F
.sup.(R), . . . , F
.sup.(R), G
.sup.(L), G
.sup.(L), . . . , G
.sup.(L), G
.sup.(R), G
.sup.(R), . . . , G
.sup.(R) have been assigned to the measured value. For each of these assigned filter lines a respective intermediate filter value is determined. Each intermediate filter value determined is
multiplied by the corresponding above-mentioned filter factor, and the intermediate filter values multiplied by the respective filter factor are added to form a second filter value P
This method is repeated for each measured value that during the nPi-relative movement is located between the nPi-boundary lines, so that for each of these measured values, when the corresponding
filter lines have been assigned to it, a first and/or a second filter value is determined.
The first filter values form a first group, and the second filter values form a second group.
In step 108, the first filter values P
, that is, the filter values of the first group, are weighted in dependence on the movement of the object.
Each first filter value has been determined from measured values that have been acquired at the same instant, that is, each first filter values can be assigned an instant and hence, when considering
the electrocardiogram, a cardiac movement phase. For weighting the first filter values, weighting methods known, for example, from cardiac imaging can therefore be used, the sole difference being
that, in place of the measured values, the first filter values are weighted in dependence on the movement of the object. These weighting methods are known to those skilled in the art, for example,
from "Adaptive temporal resolution in medical cardiac cone beam CT reconstruction", R. Manzke, M. Grass, T. Nielsen, G. Shechter, D. Hawkes, Medical Physics, Vol. 30, No. 12, pages 3072 to 3080,
December 2003, so that only brief details of the weighting of the first filter values are given.
The periodic movement of the object to be examined is known on the basis of the electrocardiogram recorded during acquisition of the measured values. In other embodiments, the movement of the object
could, as stated above, be determined in a different manner. It is thus known in which time ranges within a movement period the object has moved more slowly and in which time ranges the object has
moved more quickly, that is, in which time ranges the object has moved more quickly than in other time ranges. Relative to the period duration a base instant is therefore defined, which preferably
lies centrally in a time range within a period in which the object has moved relatively little.
As a rule, the time ranges within a period in which the object is moving relatively little are known, so that the base instant relative to the movement period can be determined in advance. For
example, in the case of the human heart it is known that it moves relatively little in a time range that is arranged around 70%-RR, meaning, when a base instant lies at 70%-RR, that at the base
instant 70% of the particular time range, that is, the interval between two adjacent R-peaks, is deleted.
Base instants that lie in time ranges in which the object is moving as little as possible, can also, for example, be determined by the method described in "Automatic phase-determination for
retrospectively gated cardiac CT", R. Manzke, Th. Kohler, T. Nielsen, D. Hawkes, M. Grass, Medical Physics, Vol. 31, No. 12, pages 3345 to 3362, December 2004.
When the object is a heart, then the base instant preferably lies in the diastolic phase and not in the systolic phase, since in the diastolic phase the heart moves less than in the systolic phase.
Around each base instant there is a time range of pre-determined length, the base instant forming the middle of the respective time range. The length of the time ranges depends on how many measured
values from which angular ranges are required for the reconstruction. The length of the time ranges is therefore to be chosen so that, after weighting, the quantity of measured values necessary for
the applied reconstruction method is available.
In this embodiment, time ranges are determined for each voxel, wherein the time range for the respective voxel is so large that for the respective voxel, after weighting in dependence on the movement
of the object, the Pi-criterion in respect of the weighted first filter values is fulfilled. The Pi-criterion is explained in detail below.
Once the base instants have been determined and time ranges have been arranged around the base instants, first filter values that have been determined from measured values that have been acquired
within these time ranges are multiplied by 1. The remaining first filter values are multiplied by 0, that is, are ignored in the back-projection described below.
It can be checked whether the Pi-criterion for a voxel in respect of the weighted first filter values is fulfilled as follows: First of all a plane that is oriented perpendicularly to the axis of
rotation and runs through this voxel is defined. Then, those regions of the helix which, during the measurement, issue rays that run through this voxel and whose associated measured values have been
acquired at instants that lie within the time ranges around the base instants, are projected onto this plane in the direction of the axis of rotation. When these helix segments projected onto the
plane form a segment of circle, which is bounded by a straight line that runs through this voxel, then the Pi-criterion for that voxel is fulfilled.
As already mentioned above, this weighting in dependence on the movement of the object represents just one example. Other known methods, for example, methods in which the first filter values are
weighted in dependence on their distance from the base instant, can also be used to weight the first filter values in dependence on the movement of the object.
In step 109, the weighted first filter values and the second filter values for reconstruction of the absorption distribution in the examination zone are back-projected substantially in conformity
with the following equation, only measured values and filter values that lie between the two nPi-lines being taken into account:
f g
, ug ( x ) = ( - 1 ) 1 2 π 2 1 n ∫ s x - y ( s ) P g , ug ( s , x ) . ( 10 )
Here, f
(x) is the value of the CT image at the location x in the examination zone, which is obtained by back-projection of the weighted first filter values. Furthermore, f
(x) is the value of the CT image at the location x in the examination zone that is obtained by back-projection of the second filter values.
First of all the weighted first filter values are back-projected.
The back-projection is explained below with reference to the flow chart illustrated in FIG. 15. Alternatively, other known back-projection methods can be used.
In step 201, a location x and a voxel V(x) arranged at this location are predetermined within a predeterminable field of view (FOV) in the examination zone, wherein a voxel V(x) that has not yet been
reconstructed in the preceding back-projection steps is selected.
Then, in step 203, the quantity of angular positions s or radiation source positions y(s) giving off rays that pass centrally through the voxel V(x) and are incident on the detector surface between
the nPi-boundary lines is determined.
Then, in step 205, an angular position s that has not yet been used for reconstruction of the voxel V(x) is predetermined from the quantity of angular positions determined in step 203.
In step 207, a weighted first filter value is determined for the ray starting from the radiation source position y(s) determined by the predetermined angular position s and passing centrally through
the voxel (x). If the detector surface, as in this embodiment, is made up of several rectangular detector elements, each of which records a measured value, then, when the ray is incident centrally on
a detector element, the weighted first filter value that is assigned to this measured value recorded by this detector pixel is determined for this ray. If this ray is not incident centrally on a
detector element, then a weighted first filter value is determined by interpolation from the weighted first filter value that is assigned to the measured value recorded by the detector element on
which the ray is incident, and from adjacent weighted first filter values, for example, by a bilinear interpolation. If, however, as described above, the first filter values are multiplied by one or
zero, then in step 207 a weighted first filter value is, of course, determined only when the radiation source has assumed the angular position predetermined in step 205 at an instant that lies in one
of the time ranges arranged around the base instant. If this is not the case, then the sequence can be continued directly with step 213.
The weighted first filter value determined in step 207 is multiplied in step 209 by a further weighting factor that decreases as the distance of the radiation source y(s) from the location x
predetermined in step 201 increases. In this embodiment, in conformity with equation (10) this weighting factor is equal to 1/|x-y(s)|.
In step 211, the weighted filter value on the voxel V(x), which is preferably initially the same, is added.
In step 213, it is checked whether all angular positions s from the quantity of angular positions determined in step 203 have been taken into account in the reconstruction of the voxel V(x). If this
is not the case, then the flowchart branches to step 205. Otherwise in step 215 it is checked whether all voxels V(x) have been reconstructed in the FOV. If this is not the case, then the sequence is
continued with step 201. If, on the other hand, all voxels V(x) in the FOV have passed through, then the absorption in the entire FOV, and hence a CT image, has been determined, and the
back-projection of the weighted first filter values is terminated.
The second filter values are correspondingly back-projected, that is, the second filter values, as described above in connection with the steps 201 to 215, are back-projected (in the above
description essentially in each case the weighted first filter value is to be replaced by the second filter value).
The CT images that have been determined by back-projection of the weighted first filter values and the second filter values are added voxel-wise, the CT image resulting from the summation being the
definitively reconstructed CT image of the examination zone, whereby the computed tomography method according to the invention is ended (step 110).
Alternatively, the weighted first filter values and the second filter values can be combined before back-projection, that is, for example, can be combined linearly or added, wherein then only the
combined filter values are to be back-projected to the final CT image.
Patent applications by Claas Bontus, Hamburg DE
Patent applications by Michael Grass, Buchholz In Der Nordheide DE
Patent applications by Thomas Köhler, Norderstedt DE
Patent applications by KONINKLIJKE PHILIPS ELECTRONICS N V
Patent applications in class Tomography (e.g., CAT scanner)
Patent applications in all subclasses Tomography (e.g., CAT scanner)
User Contributions:
Comment about this patent or add new information about this topic:
|
{"url":"http://www.faqs.org/patents/app/20080205727","timestamp":"2014-04-16T21:02:01Z","content_type":null,"content_length":"121539","record_id":"<urn:uuid:8609d4f3-3449-44dc-abbf-53298ac95095>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Crofton, MD Calculus Tutor
Find a Crofton, MD Calculus Tutor
...Between 2006 and 2011 I was a research assistant at the University of Wyoming and I used to cover my advisor’s graduate level classes from time to time. And, since August 2012 I have tutored
math (from prealgebra to calculus II), chemistry and physics for mid- and high-school students here in th...
14 Subjects: including calculus, chemistry, physics, geometry
I am currently finishing up my Master's of Arts in Teaching at Loyola University in Maryland. My background is biology and math. I love to work with kids and watching them succeed is the best
gift I could ever receive.
18 Subjects: including calculus, reading, geometry, writing
...I have 5 years of MATLAB experience. I often used it during college and graduate school. I have experience using it for simpler math problems, as well as using it to run more complicated
27 Subjects: including calculus, physics, geometry, algebra 1
...I am a graduate of the University of Maryland, where I completed a Bachelor of Arts in 2011 with performance on trombone as the major focus. Piano proficiency was a part of the degree
requirement, a requirement which I met by demonstrating proficiency in an informal audition. I performed as a trombone instrumentalist in the Navy Band based at Pearl Harbor, Hawaii from 2004 to
15 Subjects: including calculus, statistics, piano, geometry
I have been working as a personal tutor since November 2007 for the George Washington University (GWU) Athletic Department. I have hundreds of hours of experience and am well-versed in explaining
complicated concepts in a way that beginners can easily understand. I specialize in tutoring math (from pre-algebra to differential equations!) and statistics.
16 Subjects: including calculus, geometry, statistics, ACT Math
Related Crofton, MD Tutors
Crofton, MD Accounting Tutors
Crofton, MD ACT Tutors
Crofton, MD Algebra Tutors
Crofton, MD Algebra 2 Tutors
Crofton, MD Calculus Tutors
Crofton, MD Geometry Tutors
Crofton, MD Math Tutors
Crofton, MD Prealgebra Tutors
Crofton, MD Precalculus Tutors
Crofton, MD SAT Tutors
Crofton, MD SAT Math Tutors
Crofton, MD Science Tutors
Crofton, MD Statistics Tutors
Crofton, MD Trigonometry Tutors
|
{"url":"http://www.purplemath.com/crofton_md_calculus_tutors.php","timestamp":"2014-04-20T11:09:09Z","content_type":null,"content_length":"24042","record_id":"<urn:uuid:dc125f5c-919c-453a-a717-0b87cc012d55>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding the Area of Circle
November 9th 2008, 06:23 PM #1
Jul 2008
Finding the Area of Circle
A Piece of String fits exactly around the curved surface of a cylinder whose radius is 3 cm. Find the length of the string.
November 9th 2008, 06:33 PM #2
|
{"url":"http://mathhelpforum.com/geometry/58632-finding-area-circle.html","timestamp":"2014-04-21T09:18:47Z","content_type":null,"content_length":"32823","record_id":"<urn:uuid:e54e04c2-7050-4469-9ffe-1fd318954838>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Urgent Geometry assistance?
February 7th 2009, 05:32 AM #1
Sep 2008
Urgent Geometry assistance?
1. A circle graph has a section marked "Potatoes: 28%." What is the measure of the central angle of this section?
2) http://img9.imageshack.us/img9/5614/...aaaaaaazh4.png
Find the measure of arc YW.
Find the measure of arc WXS.
This is for my Geometry.
1. A circle graph has a section marked "Potatoes: 28%." What is the measure of the central angle of this section?
28% of 360 degrees
2) http://img9.imageshack.us/img9/5614/...aaaaaaazh4.png
Find the measure of arc YW.
Find the measure of arc WXS.
arc length, $s = r\theta$ , where $\theta$ = central angle in radians
arc YW = $30^{\circ} = \frac{\pi}{6} \, rad$
$s = 27 \cdot \frac{\pi}{6} = \frac{9\pi}{2} \approx 14.1 \, in.$
you find the measure of arc WXS
o.k. ?
February 7th 2009, 06:33 AM #2
|
{"url":"http://mathhelpforum.com/math-topics/72281-urgent-geometry-assistance.html","timestamp":"2014-04-21T00:34:50Z","content_type":null,"content_length":"34456","record_id":"<urn:uuid:f62a295c-6efe-4f97-b8cd-cfc78dedf597>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
❤ math
1015 words.
pg ; onew/taemin
summary ; taemin confesses to onew... with a math equation?
Taemin frowns down at his page. “This is a terrible idea,” he mumbles, moving his hand to scribble out the equation -- but he pauses.
He lowers his hand, again. “A terrible idea.” He emphasizes, throwing the pencil down onto the table and groaning in aggravation.
At school, there is a “cute math equation” being passed around. Get your crush to solve it and you’ll get a date for sure, apparently. Taemin’s friend from school swore by it, saying that it was
foolproof and that no one would be able to say no.
9x-7i > 3(3x-7u)
Taemin sighs down at the page, he has no confidence in this trick, whatsoever. Just because it works on the girls at school, that doesn’t mean it will work on Jinki. Jinki is smart, he isn’t going to
be wooed by some stupid, high-school love confession. Taemin sighs, again.
“This will never work,” he says with a frown.
Jinki walks into the room suddenly and Taemin stiffens. Taemin takes in a slow breath as he watches Jinki put a dish into the sink, struggling to build up the courage to show Jinki his math question.
Jinki begins to walk back out the door and Taemin gulps, deciding finally that it’s now or never.
“Hyung?” Taemin says, reaching out quickly to grab Jinki’s wrist before he leaves the room. “Hyung, I need some help with my homework.” He looks up at Jinki, batting his eyelashes in a way that
always makes Kibum give in to his requests. Taemin hopes it will work on Jinki, too.
Jinki beams, pulling the chair beside Taemin out and sitting down at the dining table. Taemin mentally cheers. Jinki leans in, close to Taemin, and looks down at the papers scattered across the
table. “What did you need help with?”
Taemin brings his math homework closer to Jinki and points at the question he’s pretending to have problem with. “This one. No matter what I do, I just can’t get the right answer!” he says, huffing
out a breath and staring at the question with narrowed eyes.
“I’m sure you’re just making it out to be harder than it is.” Jinki says and looks up from the page to smile at Taemin, “I’ll explain it to you, then you’ll be able to do it.”
Taemin grins, “Thanks, hyung!” he hands Jinki his pencil and watches, feeling a mixture of excitement and uncertainty, as Jinki begins working on the question.
“Simplify, right?” Jinki asks, recopying the question onto a new sheet of paper.
Taemin nods once quickly and his hair flops into his face. He frowns and moves to brush the strands out of his eyes, but Jinki pushes them aside with the end of the pencil before Taemin has a chance
to lift his hand.
He feels a blush spread across his face and looks away from Jinki’s smile. “Um, yeah. Simplify.” He says, stumbling over his words.
Jinki goes back to the equation, circling the second half of the question. “First, you expand the brackets, multiplying everything within them by three.” He explains, writing out the new equation
under the old one. He looks over at Taemin, “Do you understand so far?”
Hiding a growing smile beneath his hand, Taemin nods.
“Next, you move this 9x,” he circles the nine on the right side of the equation, “to the other side,” he draws an arrow, “and it becomes a negative.” He indicates this clearly by drawing a thick
negative sign beside the nine. “So, you subtract it from the 9x that is already there.”
He looks over at Taemin, to make sure he’s paying attention. “Which eliminates the ‘x’ from the equation, because nine minus nine is zero.”
Taemin pouts and clutches at Jinki’s sleeve. “Hyung, you make it look so easy!” he whines, over-exaggerating his anguish to make Jinki smile.
Jinki just blinks, holding the pencil out for Taemin. “Do you understand how to finish it now?”
Quickly, Taemin shakes his head, pushing Jinki’s hand back down onto the page. “No! I need you to finish it for me!” his heart pounds hard in his chest, worrying that Jinki will leave without solving
the question.
“Okay, calm down, Taemin!” Jinki laughs, placing his hand on Taemin’s shoulder.
Taemin tenses instantly. He stares over at Jinki, eyes widening in fear.
“I can help you whenever you need me, and if you need me now, then I’ll help you.” Jinki says with a smile.
Taemin relaxes, slightly. “I need you now, hyung.” He says, sounding more breathless than he had intended. He looks away in embarrassment when Jinki’s smile grows.
Jinki takes his hand off Taemin’s shoulder and claps once, “Alright, back to the question!”
Taemin sighs and turns back to the Jinki, watching him write the new equation beneath the old one. “You’re almost finished,” he observes, a smile spreading across his face.
Jinki nods. “All that’s left to do, is move this negative seven,” he circles the number, “to the other side, and divide the -21 by it.” He says, explaining each step carefully. “And, since we’re
dividing by a negative, we need to flip the larger than sign around, making it less than.”
He writes the finished equation beneath the previous step, boxing it with a bright grin. “There you go, that’s how you figure out the answer!” Jinki looks over at Taemin. “Do you understand it?”
He bites his lip and looks at Jinki expectantly. “Yeah, I get it.”
Jinki blinks down at the question, processing it slowly. “Oh,” he says finally, and Taemin’s eyes brighten in joy.
Taemin reaches out for Jinki’s sleeve, tugging it lightly. “Do you get it?” he asks, trembling from the fear of rejection and the adrenaline rush of finally confessing.
“Cute,” Jinki says, smiling.
Discarding all of his previous doubts and worries, Taemin smiles and rests his head on Jinki’s shoulder. “I knew you’d like it.”
/math nerd
:'D I saw a group on Facebook dedicated to this cute little math equation, and that sort've inspired me to write this? Haha, I was actually going to end this with Onew being oblivious and Taemin
hating life, but then I changed my mind.
I hope Onew's explanations about how to solve the equation weren't too confusing. ;~; I tried to make it simple so people wouldn't get lost but idk if i succeeded.
• 52 comments
• 52 comments
|
{"url":"http://datakicker.livejournal.com/4028.html","timestamp":"2014-04-16T05:00:09Z","content_type":null,"content_length":"149838","record_id":"<urn:uuid:e05bf3b5-015d-4354-a880-5c081142470b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modena, PA Math Tutor
Find a Modena, PA Math Tutor
...British Lit. from Chaucer through Shakespeare, Milton, Austen, Dickens, Wordsworth, Shaw et al. I am also conversant with American Lit., including essayists, poets, novelists and playwrights
(Emerson, Thoreau, Whitman, Dickinson, Frost, Poe, Fitzgerald, Hemingway, Updike, Saroyan, T. Williams, A.
32 Subjects: including algebra 1, algebra 2, American history, biology
I have experience tutoring students for the SAT and ACT in all areas of the tests and have taught mathematics at the high school level. I have a proven track record of increasing students' ACT and
SAT scores and improving their skills. My approach is tailored specifically to the student, so no two programs are alike.
19 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have been tutoring for 44 years. It started by helping my brother's friends through their high school math courses. I do this because I love math.
13 Subjects: including precalculus, algebra 1, algebra 2, geometry
...My degree is from RPI in Theoretical Mathematics, and thus included taking Discrete Mathematics as well as a variety of other proof-writing courses (Number Theory, Graph Theory, and Markov
Chains to name a few). I absolutely love writing proofs and thinking problems out abstractly. I favor the S...
58 Subjects: including calculus, differential equations, biology, algebra 2
...Therefore, I treat every student as I would want my children to be treated. I am stern but caring, serious but fun, and nurturing but have high expectations of all of my students. Together as a
team, you and I can help your child to do his or her best.
12 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Modena_PA_Math_tutors.php","timestamp":"2014-04-21T11:07:11Z","content_type":null,"content_length":"23712","record_id":"<urn:uuid:fac63848-2313-4b5d-8cdd-0370554ec5e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A bit of trigonometry help needed
November 27th 2008, 05:17 PM
A bit of trigonometry help needed
I have a couple of math question that i cant seem to figure out
'Kyle has his own plane and he is planning his approach to the Toronto Pearson International Airport.He wants to descend at an angle of 22 degrees and he is starting his descent at a height of
10,000 feet.How long is his path to the runway?'
'An elevator is being installed at the mall.It will make an angle of 30 degrees with the ground and will rise up 20 feet from the ground.What is the length of the elevator?'
Any help on these would be greatly appreciated
November 27th 2008, 07:30 PM
from the diagram....
$\sin 30^0=\frac {20}{x}$
$\implies x=40\ \text{feet}$
So the length of the elevator is 40 feet
November 27th 2008, 07:39 PM
thanks alot :)
|
{"url":"http://mathhelpforum.com/trigonometry/61952-bit-trigonometry-help-needed-print.html","timestamp":"2014-04-18T08:10:08Z","content_type":null,"content_length":"5395","record_id":"<urn:uuid:2d4a80d0-3444-4889-bf3e-3b84a00d4626>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proving a function is larger than another.
September 10th 2009, 11:50 AM
Proving a function is larger than another.
let 0>i>1
prove that (1+i)^t < 1+i*t for 0>t>1.
I am a little bit clueless where to start. At first I thought about integrating the area in between and if that area was positive then it would be true, if negative then it would be false and
that would basically prove it. But it turned out to be a lot more complicated than it first seemed which led me to believe there is probably a much better way. Any help would be appreciated.
September 10th 2009, 12:58 PM
I have tried to make sense of the question by reversing four of the inequality signs. I have also replaced i by x, partly because i tends to mean the square root of –1 and partly because it's
easier to prove the result if you think of x as a variable and t as a constant.
The Mean Value theorem (with remainder) for the function $f(x) = (1+x)^t$ tells you that $(1+x)^t = 1+tx+\tfrac12t(t-1)(1+y)^{t-2}$ for some y between 0 and x. Since $t-1<0$, it follows that $
(1+x)^t < 1+tx$.
(If you don't know the Mean Value theorem, we'll have to think of another approach.)
September 10th 2009, 02:13 PM
I did make a mistake and your sign reversal is correct. I do know the Mean Value Theorem roughly but I must admit that I have no clue what you did there.
Just as a side note, this is not a calculus class it's my actuarial class, so the question might have a non calculus way of being solved(i doubt that though).
If you could give a little bit more explanation as to how you got:
$<br /> <br /> (1+x)^t = 1+tx+\tfrac12t(t-1)(1+y)^{t-2}<br />$
I would really appreciate that.
September 11th 2009, 06:02 AM
The particular case of Taylor's theorem that I was quoting says that if f is a twice differentiable function then $f(x) = f(0) + xf'(0) + R_2(x)$, where $R_2(x)$ ( the remainder after the first 2
terms of the Taylor series) is equal to $\tfrac12x^2f''(y)$ for some y lying between 0 and x. (And I was quoting that result wrongly, because I left out the $x^2$ that I have inserted in red
September 16th 2009, 07:09 PM
Is there a simpler way to solve this question?
Using only derivatives? The Mean Value theorem is above me presently.
September 17th 2009, 03:34 AM
We want to show that $(1+x)^t<1+tx$ when x and t both lie between 0 and 1.
Fix t, with 0 < t < 1, and let $f(x) = 1+tx - (1+x)^t$. Then $f(0)=0$. If we can show that f is an increasing function, it will follow that f(x)>0 whenever x>0, which is what we want to prove.
So differentiate f(x), to get $f'(x) = t - t(1+x)^{t-1}$. Notice that $(1+x)^{t-1} = \frac1{(1+x)^{1-t}}$. But 1+x>1, and 1–t>0. It follows that $(1+x)^{1-t}>1$, so that $(1+x)^{t-1}<1$. That
tells us that $f'(x)>0$, so that f is an increasing function, as required.
|
{"url":"http://mathhelpforum.com/calculus/101540-proving-function-larger-than-another-print.html","timestamp":"2014-04-17T15:55:02Z","content_type":null,"content_length":"12238","record_id":"<urn:uuid:6db565e5-ceea-43cb-8529-30df145ee425>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
November 5th 2012, 08:42 AM #1
Jun 2012
san diego
The function below gives the cost in dollars to rent an RV motor home for d days.
Find the cost of renting the RV for a vacation that will last 4 days.
Can anyone help me solve this?
Re: function
put in your formula d=4 and you will the cost of renting for 4 days
November 5th 2012, 03:03 PM #2
Jul 2011
|
{"url":"http://mathhelpforum.com/algebra/206800-function.html","timestamp":"2014-04-16T11:53:49Z","content_type":null,"content_length":"31061","record_id":"<urn:uuid:e7a2e7d5-f7c3-4d6a-b2b3-600ff196c3d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inverse Problems: Formulas for Finding Properties of Vibrating Systems from Nodal Lines and Surfaces
Thursday, July 25
11:30 AM
Inverse Problems: Formulas for Finding Properties of Vibrating Systems from Nodal Lines and Surfaces
This presentation will focus on the inverse problem of finding properties of two and three dimensional vibrating systems. This problem has important applications in nondestructive testing and design.
The speaker will discuss a new data set...the position of the nodal lines or nodal surfaces. The nodes are places where the system does not vibrate when it's excited at a natural frequency. The nodal
positions are determined by a Doppler shift measurement. She will show three simple formulas for determining the amplitude of a force on a two dimensional membrane or a three dimensional acoustic
medium, and conjecture similar simple formulas for finding the density or soundspeed and support the conjectures with numerical experiments. Previously measured data sets are the natural frequencies
for the vibrating system or the displacement of a single modeshape at all points of the system.
Joyce R. McLaughlin
Ford Foundation Professor of Mathematics
Rensselaer Polytechnic Institute
MMD, 5/20/96
|
{"url":"http://www.siam.org/meetings/archives/an96/ip6.htm","timestamp":"2014-04-21T00:19:45Z","content_type":null,"content_length":"1959","record_id":"<urn:uuid:f501f3d1-c007-4ec5-bb83-100b8021672e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trig limit help
January 8th 2010, 11:55 AM #1
Trig limit help
Where do I even start with this one?
lim x -> 0 of ( (2 - cos(3x) - cos(4x) ) / x )
Is there some identity that I can use that I'm not seeing.. ? I can't multiply by a conjugate can I?
The slacker's way is to use l'Hopital's Rule.
Otherwise you could substitute the Maclaurin series for cos(3x) and cos(4x), simplify the numerator and then cancel the common factor of x. Then take the limit.
Alternatively, you could probably do a clever re-arrangement and get standard forms whose limits are well known. But why hoe the hard road?
The simple little fact that $\frac{1-\cos x}x\to0$ as $x\to0$ will solve this:
$\frac{2-\cos 3x-\cos 4x}{x}=3\cdot \frac{1-\cos 3x}{3x}+4\cdot \frac{1-\cos 4x}{4x}.$ So the limit is zero.
If you were going to be a slacker (as I am $\frac{0}{0}$ by direct substitution.
So you can use L'Hospital's Rule.
$\lim_{x \to 0}\frac{2 - \cos{3x} - \cos{4x}}{x} = \lim_{x \to 0}\frac{\frac{d}{dx}(2 - \cos{3x} - \cos{4x})}{\frac{d}{dx}(x)}$
$= \lim_{x \to 0}\frac{3\sin{3x} + 4\sin{4x}}{1}$
$= \frac{3\cdot 0 + 4\cdot 0}{1}$
$= 0$, as shown previously.
January 8th 2010, 12:00 PM #2
January 8th 2010, 12:14 PM #3
January 8th 2010, 06:43 PM #4
January 8th 2010, 06:51 PM #5
|
{"url":"http://mathhelpforum.com/calculus/122925-trig-limit-help.html","timestamp":"2014-04-18T05:16:38Z","content_type":null,"content_length":"47619","record_id":"<urn:uuid:11ac346d-e550-46fd-9b5d-789a92037429>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tony Smith: My Two Bets With TD
As I explained yesterday, I am in the process of receiving payment for a few bets on possible discoveries at the LHC. Two such bets were on between me and Tony Smith, a long time reader of this blog
and a lawyer with deep interest in particle physics (and a few interesting ideas). Tony now concedes them. These are for a total of $200 and a bottle of Strega (an italian liquor); the latter has
been agreed to be turned into a bottle of good wine, much closer to my taste. I will post here a picture of the wine as I get it; in the meantime, Tony agreed to write something to describe the heart
of the matter to readers of this blog. So the text below is from him.
It's OK to be wrong as long as you correct your mistakes as soon as you find them.
My father was in the mining business. He told me "never trust any geological model until you drill a hole and look to see what is really under the ground". I am a lawyer. To try a case, I have to
have a working model of the facts to the extent that I know them at the time, but to be ready to change that model immediately when new facts emerge (as they often do quite unexpectedly).
Around 1981, I started to try to build a realistic physics model based on those principles. I started with N = 8 supergravity, but its naive 1-1 supersymmetry gave it too many particles and its SO(8)
did not really fit the Standard Model gauge groups. Then I tried to build a model around Division Algebras and Spin(8) with 3 generations of fermions and of W/Z bosons, but experiment said that 3
generations of W/Z was wrong, so I changed it to a model based on F4.
F4 was better than Spin(8), but it ran aground due to lack of complex structure, which led me to build an E6 model.
The E6 model was pretty nice (it can be seen as a bosonic string model with fermions coming from orbifolding), but it only had local Lagrangian structure and did not seem to give a natural Algebraic
Quantum Field Theory (AQFT). To get an AQFT, I needed to use the 8-periodicity of Real Clifford Algebras. Since E6 sits inside E8 which lives inside the Clifford Algebra Cl(16), E8 and Clifford
Algebra is the basic structure of my present model, which has a lot of complicated details that give results that look roughly consistent with experiments up to the LHC Higgs search.
My Higgs sector is based on Higgs as a Tquark condensate with 3 mass states for the Higgs and for the Tquark. Since a Tquark condensate involves a quantum protectorate to allow it to be stable beyond
the very short basic Tquark lifetime, I had in my model T0 and T0c mesons in which a low-mass-state Tquark, stabilized by the condensate quantum protectorate, combined with an Up or Charm (anti)
quark, producing mesons with mass around 125 GeV or so.
The low-mass-state Higgs in my model was around 145 GeV or so, which is roughly where Gfitter says the Higgs should be if the Tquark mass is not fixed (and it would not be fixed in my model with 3
mass states for the Tquark).
Therefore, with the 2011 LHC results, I was happily identifying the 125 GeV digamma bump with my lowTquark T0 meson and the 137 GeV digamma bump with my lowHiggs.
The fact that the 2011 LHC WW cross section (for both CMS and ATLAS) was low (something natural for a T0 meson but not good for Standard Model Higgs) made me confident enough to bet with Tommaso
Dorigo that the 125 GeV bump would not be Higgs.
The 4 July 2012 LHC results told me that I lost the bet because the 137 GeV bump went away in both CMS and ATLAS with the new data and as to the 125 GeV bump, even though the Tevatron announced on 2
July 2012 that it saw a low WW cross section and ATLAS on 4 July 2012 was still reporting a low WW cross section in agreement with CMS 2011 and ATLAS 2011, CMS showed a high WW cross section in
agreement with a Standard Model Higgs.
CMS was able to find the correct result that ATLAS and the Tevatron missed because, as Tommaso Dorigo said in his blog, by CMS "... having put together more advanced multivariate search techniques
and having analyzed in time for the announcement not just the two main channels but all the five important final states (W boson pairs, b-quark pairs, and tau-lepton pairs in addition to the two ...
main channels ... [ digamma and Higgs to ZZ to 4l])...".
Not only was my bet lost, but my model was shown to have errors, so I must revise it
in at least two ways:
1 - There is no quantum protectorate extension of the life of the Tquark, so there are no Tquark mesons.
2 - The LHC indeed found the Higgs at 125 GeV, which is about 0.86 times the value calculated in my model. Since the high digamma strength in the 2012 LHC data could be due to the Higgs being
connected with a Tquark condensate, it seems that the 125 GeV Higgs is really basically a plain vanilla Standard Model Higgs.
It is easy to do 1 (just as it was easy to get rid of high-generation W bosons many years ago)
but it will take some work and rethinking to take care of 2, so thanks to LHC observations for telling me to get to work to try to get my model into better shape.
This is why I like physics:
You can use your imagination to devise models that (in your eyes) are beautiful but Nature (not the magazine) is always the boss, telling you though experiments like the LHC how dumb you were to do
some of the things that you thought were so smart, and then you get a chance to correct your dumb mistakes and try to do better.
It is a life-long process that goes on as long as you have fun playing the game: Even if I get 1 and 2 done, that will not be the end of the road.
My model still has 3 Higgs mass states, with the 125 GeV Higgs being the low state. As to the middle and high mass states, the LHC will have to say whether they exist or not.
In the histograms below, I have colored the low mass state dots green
and some possible middle mass states cyan and high mass states magenta.
The middle (cyan) and high (magenta) possible peaks may go away with more data. Maybe I can get another bet with Tommaso about that.
Whether or not the possible high mass Higgs excesses go away, we now know that the plain old Standard Model is what Nature likes, so, what should physicists do in the future ?
Here are a few things to think about:
Study the High Energy Massless Realm well above Electroweak Symmetry Breaking: What happens to Kobayashi-Maskawa mixing in a Realm with no mass ? How do you tell a muon from an electron if they are
both massless ? Build a Muon Collider to find out.
If conventional 1-1 fermion-boson SuperSymmetry is not Nature's Way, can we get the nice cancellations from a more Subtle SuperSymmetry ? For that, my model uses a Triality-related symmetry between
fermions and gauge bosons based on its 8-dim Kaluza-Klein structure, but in it the Standard Model fermion terms in 8 dimensions cancel the 8-dimensional Standard Model gauge boson terms so although
the cancellation is clear in high-energy 8-dim space-time
it is a subtle effect in the low-energy 4-dim physical spacetime part of the Kaluza-Klein.
What about Dark Matter and Dark Energy? My model uses the Spin(2,4) = SU(2,2) Conformal Group of Irving Ezra Segal to account for both, but it is experimental observation that counts.
My favourite experimental approach is that of Paul A. Warburton at University College London using terahertz frequency Josephson Junctions.
Since the Higgs came from Solid State Physics ideas of people like Anderson, look closely at Solid State Nanostructures (such as Nickel/Palladium that seems to be useful in Cold Fusion) to see
whether they can show new ways to visualize the workings of High-Energy Physics of the Standard Model plus Gravity.
Hi Tonny,
I understand that Einstein had the equivalence principle as a guide in his superb mathematical construction.
You are a mathematician, so I would like to ask you: what is the physical principle behind your model?
NumCracker (not verified) | 07/12/12 | 16:39 PM
Nice post.
Garrett (not verified) | 07/12/12 | 21:05 PM
As to the "physical principle behind my model",
with respect to its Higgs sector and
particularly with respect to my proposal of another bet with Tommaso,
here is my motivation:
I think that the Standard Model has been vindicated
and that there is only one Standard Model Higgs
whose effective ground state is around 125 GeV
that it comes from Higgs as Tquark-Tantiquark condensate
(where the Tquark-Tantiquark pair is protected from decay
by an Anderson Quantum Protectorate
like Cooper Pairs (ElectronSpinUp-ElectronSpinDown) are protected
as members of a superconductor condensate)
the Higgs-Tquark-Tantiquark system
has two somewhat stable excited states:
around 200 GeV where the system encounters a Triviality Boundary
around 250 GeV where the system encounters a Critical Point
related to the Higgs VEV.
Since I have not done detailed analysis of the stability of the excited states,
I do not have a prediction for their cross-section at the LHC,
but I would guess that they are substantially lower than would be
expected for a single SM Higgs there.
On a related topic,
it may be that with the Higgs as a Tquark-Tantiquark condensate
the net result of the Tquark part of SM Higgs cross-section for
the 125 GeV ground state might be substantially different
from the LHC analysis used in July 2012,
thus accounting for the higher-than-expected observation reported in July 2012.
PS - I do not hold the equivalence principle as a guiding light for my physics model because
John F. Donboghue, Barry R. Holstein, and R. W. Robinett in Phys. Rev. D 30, 2561–2572 (1984) say:
"... Using the techniques of finite-temperature field theory we renormalize the electromagnetic and gravitational couplings of an electron which is immersed in a heat bath with T≪me. By taking the
nonrelativistic limit, we demonstrate that the inertial and gravitational masses are unequal. ...".
They have a related paper in
General Relativity and Gravitation 17.3 (1985): 207-214.
Their work is for fermions. A similar result for bosons has been
given by Igor Kulikov in section XVI.2 of hep-th/9609050
Anonymous (not verified) | 07/12/12 | 22:06 PM
But... what are the chances of you coming up with a model that has been missed by theorists with far greater technical competence than you and who devote their full time to creating theoretical
Put another way, do you know the sufficient details of the standard model to then modify it?
I suppose if what you do gives you some pleasure in life and helps you to live your life more fully, then fine.
Jonny (not verified) | 07/13/12 | 08:37 AM
But... what are the chances of you coming up with a model that has been missed by theorists with far greater technical competence than you and who devote their full time to creating theoretical
ha ha Unfortunately the old days of theoretical physics has been supplanted by sheer numbers of 'theoretical' people who aren't really any more technically competent. Name these far greater
'technical' people.
Hank Campbell | 07/13/12 | 12:43 PM
As to the ad hominem attack comment by "Jonny"
here is a bit of my personal history:
When I discussed an early version of my model with Yuval Neeman
many years ago, he said (here I paraphrase as it was a personal
verbal conversation not written dow):
If your model is internally mathematically consistent
produces calculated values in substantial agreement with experiments,
the model stands on its own and you do not need any credentials.
On the other hand,
no matter how many credentials (degrees, prizes, etc) you have,
if your model is wrong, it is still wrong.
Therefore, I will let my model stand for itself,
and it is clearly available on the web for anyone with interest
to study it to whatever extent they like.
Tony Smith (not verified) | 07/13/12 | 14:00 PM
I like that take on it. Very down to Earth, and an actual answer to the question.
Thanks for sharing.
CuriousReader (not verified) | 07/13/12 | 14:31 PM
Very nice post, Tony. I've long been interested in your work (and similar work, for instance Geoffrey Dixon's), but I must admit that I find your style often quite difficult to follow... Reading the
above helped a bit with the conceptual overview. And it's gracious of you to accept the judgement of experiment and man up to mother nature (and pony up your debts) -- I have a sneaking suspicion
that there will be some independent model builders who won't find that kind of courage within themselves. (Not that I can't sympathize; it's hell to have your beautiful structure shattered by
empirical evidence. But ultimately, that's the only way to learn new things, and that's what we're doing it for...)
Jochen (not verified) | 07/15/12 | 13:54 PM
I'm glad you're where trying this sort of model building, of course they are potentially so many possibilities it was always possible even likely that you'd fail on the first tries, you have to go
with you've knowledge of literature and intuition take you and see if the models you build successfully represent nature or not. I've been working lines somewhat similar to you, especially looking
for alternative ways to fit the a standard model generation into E6. The way most common in the literature as to 2 Higgs particles an extra quark and 2 extra neutrinos and I don't believe it fits.
(See for examples papers by S.F. King on ArXiv. In particular its been tried a lot and keeps coming up with predictions that don't match. I've been trying other ways to embed the standard model in
E6, and at the moment fill most happy with a 15+12 decomposing of E6, with two extra quarks, A paper by Stephen Alder claimed computer based trials of symmetry breaking of E6 only realistic break
into 15+12 at lowest energy.
BDOA Adams, Axitronics
Barry Adams | 07/15/12 | 19:36 PM
I've been working lines somewhat similar to you, especially looking for alternative ways to fit the a standard model generation into E6. The way most common in the literature as to 2 Higgs particles
an extra quark and 2 extra neutrinos and I don't believe it fits. Terravita
James (not verified) | 08/14/12 | 13:17 PM
|
{"url":"http://www.science20.com/quantum_diaries_survivor/tony_smith_my_two_bets_td-92037","timestamp":"2014-04-18T14:11:08Z","content_type":null,"content_length":"59368","record_id":"<urn:uuid:198b9648-30f2-4cb6-9cd0-5557d3a7b99a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Big and Small Numbers in Physics
Copyright © University of Cambridge. All rights reserved.
'Big and Small Numbers in Physics' printed from http://nrich.maths.org/
Physics makes use of numbers both small and large. Try these questions involving big and small numbers. You might need to use pieces of physical data not given in the question. Sometimes these
questions involve estimation, so there will be no definitive 'correct' answer; on other occasions an exact answer will be appropriate. Use your judgement as seems appropriate in each context.
1. It is known that the value of $g$ on the moon is about one-sixth that on earth. How high do you think that you would be able to jump straight up on the surface of the moon?
2. The mass of an atom of lead is $3.44\times 10^{-22}\textrm{ g}$. Lead has a density of $11.35\textrm{ g cm}^{-1}$. How many atoms of lead are found in a single cubic centimetre of lead?
3. The earth orbits the sun on an almost circular path of average radius about $149\,598\,000\,000\textrm{ m}$. How fast is the earth moving relative to the sun?
4. The tallest buildings in the world are over $800\textrm{ m}$ high. If I dropped a cricket ball off the top of one of these, estimate how fast it would be moving when it hit the ground.
5. What weight of fuel would fit into a petrol tanker?
6. The charge on a proton is $1.6\times 10^{-19}\textrm{ C}$. What is the total sum of the positive charges in a litre of hydrochloric acid of $\mathrm{pH}$ $1.0$?
7. What is the mass of a molecule of water?
8. How many molecules of water are there in an ice cube?
9. Around $13.4$ billion years ago the universe became sufficiently cool that atoms formed and photons present at that time could propagate freely (this time was called the surface of last
scattering). How far would one of these old photons have travelled by now?
10. How much energy is contained in the matter forming the earth?
An obvious part of the skill with applying mathematics to physics is to know the fundamental formulae and constants relevant to a problem. By not providing these pieces of information directly, you
need to engage at a deeper level with the problems. You might not necessarily know all of the required formulae, but working out which parts you can and cannot do is all part of the problem solving
|
{"url":"http://nrich.maths.org/6504/index?nomenu=1","timestamp":"2014-04-19T02:44:49Z","content_type":null,"content_length":"5955","record_id":"<urn:uuid:5949d9fe-88d5-49fc-bde5-d626af6efad0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Formula for E dimensional elements of an n dimensional perfect figure
Date: Dec 7, 2010 10:04 PM
Author: Robert Carmosino
Subject: Formula for E dimensional elements of an n dimensional perfect figure
Ok, in my math research class, I've been working on a formula for elements on higher dimensional figures. The formula attached is what I have so far, its almost perfect but at least 1 thing is wrong, any ideas?
Formula info
elements are vertices, edges, faces... each of those has a dimensional value, vertices=0, edges=1,faces=2 E is this number
n is the dimension of the perfect figure, by the perfect figure i mean square, cube, hypercube. square being 2, cube - 3, hypercube - 4
How to solve formula:
Let E be the dimension of the element
let n be the dimension of the figure
solve the multiplication notation in terms of x
solve the summation notation no variables
multiply the answer by the 2^n-E
that should give you the answer
i think my problem is the tops of the summation notation and the multiplication notation, any ideas what they should be?
questions? leave below and i will answer tomorrow.
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7324502","timestamp":"2014-04-20T16:24:47Z","content_type":null,"content_length":"2004","record_id":"<urn:uuid:bd39fb8d-5c51-42ee-beee-7577c5a30f2b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: -mean- over id, save to a new dta, then variance matrix
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: -mean- over id, save to a new dta, then variance matrix
From "Austin Nichols" <austinnichols@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: -mean- over id, save to a new dta, then variance matrix
Date Thu, 18 Oct 2007 10:16:02 -0400
The first part is
collapse v2-v17, by(v1)
but I don't understand what you mean by "I would like to calculate the
variance matrix for such new dta and saving it in one more new dta."
On 10/18/07, nicola.baldini2@unibo.it <nicola.baldini2@unibo.it> wrote:
> I have one dta of 17 columns and 354 rows. The first column, named id, contains the panel identifier (an integer from 1 to 63). I would like to produce a new dta file with 17 columns (keeping the original names) and 63 rows, containing the id variable, and the mean of the remaining 16 columns calculated by the id variable.
> I know that I can have such means by -mean- and -tabstat-, but how can easily export the results in a new dataset? Next, I would like to calculate the variance matrix for such new dta and saving it in one more new dta.
> How can I do?
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2007-10/msg00655.html","timestamp":"2014-04-16T04:22:06Z","content_type":null,"content_length":"6182","record_id":"<urn:uuid:821c8141-2a60-416f-9a6a-388530997516>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Convergence Problems With Zero Truncated Negative Binomial Regre
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Convergence Problems With Zero Truncated Negative Binomial Regression
From Muhammad Anees <anees@aneconomist.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Convergence Problems With Zero Truncated Negative Binomial Regression
Date Sat, 3 Mar 2012 16:18:23 +0500
Yes, I would further to suggest you how to detect dispersion in count
data models. In one discussion, Joseph Hilbe email me the following
suggestion. I owe to Jo Hilbe for his kind guidance in this regard. I
think it might be helpful to you while the same has been discussed on
this list earlier.
glm panelcount x1 x2 x3, fam(poi)
glm panelcount x1 x2 x3, fam(poi) cluster(panelvar)
glm panelcount x1 x2 x3, fam(nb ml)
glm panelcount x1 x2 x3, fam(nb ml) cluster(panelvar)
where panelvar is your panel variable, eg id and x1-x3 are predictors.
You'll have your own of course, these are just fillers.
Check the Pearson dispersion statistic for the Poisson model. If it is
under 1.0 the model is Poisson underdispersed. If it is, you cannot
use a negative binomial program. If it is over 1, then try the negati
model, checking the same dispersion statistic. See if it is under or
overdispersed. Then apply the cluster option as i show above to the
model. See if the standard errors change much. If not, then there is
not underdispersion.
Following the above steps would let you know the nature of what you
need to do with your modelling.
Hope this helps.
On Sat, Mar 3, 2012 at 4:26 AM, Michael Lebenbaum <mlebenba@uwo.ca> wrote:
> Hi,
> I am attempting to do an analysis of zero truncated counts for two outcomes
> (both are zero truncated count distributions). So far I have been using
> Stata 10, with ztp and ztnb commands with svy. The zero truncated poisson
> model converges properly for both, however the zero truncated negative
> binomial model does not converge (It gets stuck at the fitting a constant
> only model (not concave)). The sample size is large 10K+, so I do not
> believe this is the problem. I think it has to do with sparse data in parts
> of the distribution as the vast majority (~95%+) of the data is at the low
> end of the range with much of the range having 0 or few observations
> (~80-90% of the range). In addition, when I delete a small percentage of the
> top users (2%), the zero-truncated negative binomial model converges. Does
> anyone know of alternative solutions? I would rather not have to alter the
> sample if it is not necessary. Thank you.
> M
Muhammad Anees
Assistant Professor/Programme Coordinator
COMSATS Institute of Information Technology
Attock 43600, Pakistan
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-03/msg00133.html","timestamp":"2014-04-17T01:00:45Z","content_type":null,"content_length":"11149","record_id":"<urn:uuid:b8175844-c651-4122-96f2-a51a3bd9fa73>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Element Method for proving...
April 11th 2010, 06:00 PM #1
Mar 2010
Element Method for proving...
Use the element method for proving a set equals the empty set to prove the statement. Assume all sets are subsets of a universal set U.
1. For all sets A and B, if B C Aͨthen A ∩ B = Ø.
Well, if crocodiles are not mammals, then there are no animal that is both a crocodile and a mammal. This is just to give an intuition. You can imagine that B is the set of crocodiles, A is the
set of mammals, and start reasoning.
April 12th 2010, 12:21 AM #2
MHF Contributor
Oct 2009
|
{"url":"http://mathhelpforum.com/discrete-math/138568-element-method-proving.html","timestamp":"2014-04-21T13:40:55Z","content_type":null,"content_length":"32635","record_id":"<urn:uuid:3d54098e-3798-4f8c-94c1-d08699a2ce22>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Re: PA with few symbols
Vladimir Sazonov V.Sazonov at csc.liv.ac.uk
Tue Jul 20 15:02:57 EDT 2004
Timothy Y. Chow wrote:
> I think Bill Taylor was asking a different question, namely whether it's
> possible to formulate a recursive set of axioms (in the first-order
> language of arithmetic) that is logically equivalent to PA but that uses
> only a finite number of variables.
> The answer is surely no, and probably follows from the fact that the
arithmetical hierarchy is proper, but I don't see an immediate proof.
It depends on concrete formulations and our fantasy. It seems Bill
Taylor had in mind something that would avoid such kind of
> Perhaps in the case of PA the effect of an unlimited number of
> quantifiers can be cunningly handled
> by using only a small number of them and
> "coding up" the rest in much the same way that all PR functions
> can be coded up using just + and * .
The following comments on using variables are of a general character.
In the line of last postings from Ali Enayat and Randall Holmes,
we can always work in an algebraic (or category theory) manner
without variables at all. Another related example is the combinatory
algebra (no variables) which has the same expressive power as lambda
calculus (based on the play with variables). On the other hand, the
pathos of category theory of working with functions having no
arguments (variables) does not seem to me very exciting (although
in many situations this may be quite useful, interesting and even nice).
Also note that functors still have arguments (variables for objects
and arrows). And we know that, for example, category of cartesian
closed categories is in fact, equivalent to the category of
(intuitionistic(*)) equational typed lambda theories (with variables).
(Analogously - for toposes.) From my point of view, this fact makes
the concept of ccc more intuitively understandable (probably, except
the intuitionistic flavour giving the full generality of ccc) for
the majority of mathematicians -- non-categorists.
Properly speaking, I would say that there should be some good
balance between these two trends.
Another example from database theory is relational algebra vs.
relational calculus (essentially the language of first order
logic with variables). The algebra may be better from the point
of view of (efficient) implementation. However, a human writing
a formal query to DB (or any computer program) would rather prefer
to use (free and quantified) variables like in the query language
Footnote (*)
In intuitionistic (typed) lambda theory, given forall xy s(x,y)=t(x,y),
we can infer forall xyz s(x,y)=t(x,y), with z "fictitious", but not
vice versa as in the classical case. A model for such a lambda theory
should be either a ccc, or a Kripke style model with "growing"
universes for each type. Some types, say, for z above, may be even
empty (either currently or permanently, that is, in the current Kripke
world or in all worlds). In this case we really cannot omit fictitious
variable z.
Vladimir Sazonov
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-July/008343.html","timestamp":"2014-04-21T13:26:37Z","content_type":null,"content_length":"5419","record_id":"<urn:uuid:f42ad203-f403-4f4d-b5c9-4e075a894a78>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diff of /src/general-info/release-19d.txt
59 number with >= 106 bits of precision (about 33 digits). Known number with >= 106 bits of precision (about 33 digits). Known
60 issues: issues:
62 * If you are expecting IEEE-style behavior, you don't get it: * IEEE-style floating point behavior is not supported:
63 - signed zeroes aren't really available. - signed zeroes exist but arithmetic operations may not
64 produce the correct signed zeroes.
65 - overflows don't return infinity but return NaN instead. - overflows don't return infinity but return NaN instead.
66 - rounding might not be quite the same as IEEE - rounding might not be quite the same as IEEE.
67 - SQRT is not accurate to the last bit, as required by IEEE. - SQRT is not accurate to the last bit, as required by IEEE.
68 * Multiplying by a number very close to * Multiplying by a number very close to
69 most-positive-double-float will produce an error even if the most-positive-double-float will produce an error even if the
70 result does not overflow. (This is an artifact of how result does not overflow. (This is an artifact of how
71 multiplication is done. I don't have a solution to this.) multiplication is done.)
72 * Read/write consistency is not working. (Because conversion * Read/write consistency is not working. (Because conversion
73 from a bignum to a double-double-float doesn't really from a bignum to a double-double-float doesn't really
74 understand the internal double-double-float format.) understand the internal double-double-float format.)
75 * INTEGER-DECODE-FLOAT and SCALE-FLOAT aren't "inverses". * INTEGER-DECODE-FLOAT and SCALE-FLOAT aren't "inverses".
76 That is, you can't take the result of integer-decode-float That is, applying the result of integer-decode-float to
77 and use scale-float to produce exactly the same number. scale-float does not produce exactly the same number. This
78 This is because of how bignums are converted to is because of how bignums are converted to double-doubles.
79 * FLOAT-DIGITS always returns 106 even though there could be * FLOAT-DIGITS always returns 106 even though there could be
80 more bits. (Consider the double-double (1d0,1d-200)). This more bits. (Consider the double-double (1d0,1d-200)). This
81 will show up in PRINT where the printed result will have way will show up in PRINT where the printed result will have way
83 number back won't give the same value. number back won't give the same value.
84 * There is probably more consing than is necessary in many of * There is probably more consing than is necessary in many of
85 the standard Common Lisp functions like floor, ffloor, etc. the standard Common Lisp functions like floor, ffloor, etc.
86 * The special functions are not fully tested. I did a few * The special functions are not fully tested. A few random
87 random spot checks for each function and compared the spot checks were done for each function and maxima was used
88 results with maxima to verify them. to verify the results.
89 * The branch cuts for the special functions very likely will * The branch cuts for the special functions very likely will
90 not match the double-float versions, mostly because we don't not match the double-float versions, mostly because we don't
91 have working signed zeroes. have working signed zeroes.
92 * Type derivation for double-double-floats might not be * Type derivation for double-double-floats might not be
93 working quite right. working quite right.
94 * PI is still a double-float. If you want a double-double * PI is still a double-float. The double-double version of pi
95 version of pi, it's KERNEL:DD-PI. (Soon to be EXT:DD-PI.) is EXT:DD-PI.
96 * There are probably still many bugs where double-double-float * There may be bugs where double-double-float support was
97 support was overlooked. overlooked.
98 * The double-double arithmetic operations can be inlined by * The double-double arithmetic operations can be inlined by
99 specifying (SPACE 0). Otherwise, they are not inlined. specifying (SPACE 0). Otherwise, they are not inlined.
100 (Each double-double operation is about 20 FP instructions.) (Each double-double operation is about 20 FP instructions.)
131 - An error is signaled if a declaration is used as the name of a - An error is signaled if a declaration is used as the name of a
132 deftype, condition, or defstruct, and vice versa. deftype, condition, or defstruct, and vice versa.
133 - An error is signaled when trying to generate a namestring from - An error is signaled when trying to generate a namestring from
134 a pathname with just a version component (other than nil, a pathname with just a version component (other than NIL,
135 :newest, or :unspecific). CMUCL cannot print that readably. :NEWEST, or :UNSPECIFIC). CMUCL cannot print that readably.
136 - FLET and LABELS functions will catch errors in keyword - FLET and LABELS functions will catch errors in keyword
137 parameters. Previously, a keyword of NIL was silently parameters. Previously, a keyword of NIL was silently
138 accepted. accepted.
173 out to C varargs functions. Now we always copy any float args out to C varargs functions. Now we always copy any float args
174 to the corresponding int regs (or stack) as required by the to the corresponding int regs (or stack) as required by the
175 ABI. This isn't necessary for non-varargs functions, but ABI. This isn't necessary for non-varargs functions, but
176 CMUCL doesn't know functions which are varargs functions. CMUCL doesn't know which functions are varargs functions.
177 - Callbacks with long-long args or results should work correctly - Callbacks with long-long args or results should work correctly
178 now for Darwin/ppc. now for Darwin/ppc.
179 - DESCRIBE no longer depends on having PCL loaded. - DESCRIBE no longer depends on having PCL loaded.
180 - Tracing with no encapsulation appears to be working now for - Tracing without encapsulation appears to be working now for
181 ppc. ppc.
182 - A simple interface to sysinfo(2) has been added for sparc. - A simple interface to sysinfo(2) has been added for sparc.
183 This is used to provide better values for MACHINE-TYPE and This is used to provide better values for MACHINE-TYPE and
212 - (expt 1 <big number>) doesn't trigger a continuable error - (expt 1 <big number>) doesn't trigger a continuable error
213 anymore and returns 1 immediately. anymore and returns 1 immediately.
214 - Disassembling methods doesn't produce a type error anymore. - Disassembling methods doesn't produce a type error anymore.
215 - The unknown condition type 'LISP:SOCKET-ERROR has been fixed. - The unknown condition type LISP:SOCKET-ERROR has been fixed.
216 It properly signals the EXT:SOCKET-ERROR condition now. It properly signals the EXT:SOCKET-ERROR condition now.
217 - The accuracy of the trig functions (sin, cos, tan) for large - The accuracy of the trig functions (sin, cos, tan) for large
218 arguments has been improved for x86 and ppc. Sparc already arguments has been improved for x86 and ppc. Sparc already
226 been fixed. (CMUCL would sometimes crash to ldb about weird, been fixed. (CMUCL would sometimes crash to ldb about weird,
227 invalid objects.) There are, however, still issues with weak invalid objects.) There are, however, still issues with weak
228 pointers. pointers.
- Hash table entries with a key and value of :EMTPY now work as
- EXT:READ-VECTOR can read binary data from streams with element
type BASE-CHAR or CHARACTER.
230 * Trac Tickets * Trac Tickets
231 3. without-package-locks doesn't work with defmacro 3. without-package-locks doesn't work with defmacro
260 the value of h_errno is returned. the value of h_errno is returned.
261 - A warning is printed when creating a weak key hash table with - A warning is printed when creating a weak key hash table with
262 a test different from EQ. a test different from EQ.
263 - An SCM and project management system is now available at
264 http://trac.common-lisp.net/cmucl. Please use this to submit
265 tickets, documentation, project requests, etc.
267 * Improvements to the PCL implementation of CLOS: * Improvements to the PCL implementation of CLOS:
269 * Changes to rebuilding procedure: * Changes to rebuilding procedure:
|
{"url":"http://common-lisp.net/viewvc/cmucl/src/general-info/release-19d.txt?r1=1.24.2.1&r2=1.24.2.2","timestamp":"2014-04-18T12:43:57Z","content_type":null,"content_length":"39650","record_id":"<urn:uuid:a2f308fa-4fbe-4bdb-b99a-5bbe220ca095>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Learning and Mathematics: Schoenfeld, Metacognition
Replies: 16 Last Post: May 3, 1996 7:46 AM
Messages: [ Previous | Next ]
Re: Learning and Mathematics: Schoenfeld, Metacognition
Posted: Apr 9, 1996 5:59 PM
Laurie Gerber <lgerber1@swarthmore.edu> wrote:
>A new twist is word problems- perhaps just a disguise for the usual rote
>applying of procedures OR perhaps an attempt to bring real world matters
>into the classroom. The children don't know yet so they stick to what they
>know-- which is plugging and chugging through the procedures without
>thinking about the context. Would discussing the question first make a
>difference? Would generating their own "real life" questions or using math
>time to actually answer real questions (like how many papers can we post on
>the bulletin board, how many buses will we need for an actual school trip,
>what grade do I need on my next test to get an 80% average etc.) help bring
>this new message home to kids?
I think that word problems as they are used in many math classes are not much
more than a disguise for applying formulas and procedures. Any word problem
offers the added challenge of figuring out what procedure to apply in that
problem, but that an become just another part of the game. Getting students
to generate à ³real lifeà ² questions and working on à ³real lifeà ² problems in math
class sounds like an excellent idea. I heard of a teacher in Chicago who
teaches algebra by using, among other things, problems involving stops on the
el trains which many of his students ride regularly. I used a similar
approach in teaching a math lesson in New York City, but my lack of detailed
knowledge of the subway system hampered my ability to set the questions up
correctly. The students seemed to enjoy it, though, and for some of them, it
seemed to reinfore the math concept involved.
> Finally, are we ready for this to be the
>message? Do we want kids to learn that math is only useful if it helps us
>with real world things or do we want them to have a place for (blind?)
>procedures as well? If so, should they be taught at the same time,
>integrated or kept separate-- like one day the kids do real life problems
>and one day they do book problems? -Laurie
I think it is important for students to learn about the internal logic and
patterns of math as well as about its real world applications. I think a
certain amount of memorization and drill is useful for learning basic
procedures and operations. However, instead of jumping right into the
procedure, looking at one or two à ³real worldà ² problems is often useful in
motivating why the students would want to learn the procedure. Basically, I
think real world problems, drill work to learn basic operations, and à ³abstract
theoryà ²-- the patterns and à ³whyà ¹sà ² of math should be interspersed in the math
curriculum so that they reinforce each other.
As others have pointed out, students are often able to solve real word math
problems quite well in the real world, but fail to see the connections between
those problems and the drill or concepts they encounter in math class. I
think math teachers should take up the challenge to help students see these
connections. Another thing teachers can do is to encourage students to
explore the patterns in math on their own, perhaps by introducing non-standard
topics like number bases. My fourth grade math teacher used a creative lesson
to introduce that concept to us, and I started exploring it on my own. Ità ¹s
been a topic that has interested me ever since, and it has come up repeatedly
in classes and other problem-solving situations in high school and college.
--Matt Reed
Date Subject Author
3/20/96 Learning and Mathematics: Schoenfeld, Metacognition Sarah Seastone
3/25/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Gypsyamber Berg-Cross
3/25/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Johanna K. Peters-Burton
3/25/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Sasha Clayton
3/26/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Nette Witgert
3/26/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Erika Wenger
3/26/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Tracy L. Rusch
3/26/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Sasha Clayton
3/27/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Stephen Weimar
3/28/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Cathy Glasheen
4/7/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Laurie Gerber
4/7/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Laurie Gerber
4/9/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Matt Reed
4/17/96 Re: Learning and Mathematics: Schoenfeld, Metacognition bliss
4/23/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Richard Tchen
5/2/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Laurie Gerber
5/3/96 Re: Learning and Mathematics: Schoenfeld, Metacognition Pat Ballew
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=1075847","timestamp":"2014-04-19T17:53:18Z","content_type":null,"content_length":"39027","record_id":"<urn:uuid:143c82f9-57b5-4446-b834-d81bace077fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Greenway, VA Precalculus Tutor
Find a Greenway, VA Precalculus Tutor
...My goal is for each of my students to not only feel successful but BE successful. Feel free to read my regular blog posts on math education.Having a strong understanding of Algebra 1 is
quintessential for a student's success in higher mathematics. This is where most students start to struggle, and it continues into the later years.
24 Subjects: including precalculus, reading, calculus, geometry
...After all, we do it every day. But the SAT is not an everyday experience. Specific approaches and strategies can achieve better results, and what works best for a short reading selection may
not be the best approach for a long reading selection.
17 Subjects: including precalculus, chemistry, calculus, geometry
...I am a full time tutor, this is not a part time endeavor for me. I can effectively instruct all the math-related aspects of the GMAT. I am a former high school math teacher with well over 10
years of full time teaching & tutoring experience.
28 Subjects: including precalculus, chemistry, calculus, physics
...Golf: I was on my high school varsity team for three years. I cover swing mechanics (I encourage an easy, smooth swing), course etiquette, strategies, and short game, customized to your needs.
Tennis: I have taken six years of tennis lessons and played in junior tournaments when I was younger.
13 Subjects: including precalculus, writing, calculus, algebra 1
...I have worked with middle school and high school students in this subject. I am very patient and can help you become more confident in your math skills! I have tutored Algebra 2 for several
46 Subjects: including precalculus, English, Spanish, algebra 1
Related Greenway, VA Tutors
Greenway, VA Accounting Tutors
Greenway, VA ACT Tutors
Greenway, VA Algebra Tutors
Greenway, VA Algebra 2 Tutors
Greenway, VA Calculus Tutors
Greenway, VA Geometry Tutors
Greenway, VA Math Tutors
Greenway, VA Prealgebra Tutors
Greenway, VA Precalculus Tutors
Greenway, VA SAT Tutors
Greenway, VA SAT Math Tutors
Greenway, VA Science Tutors
Greenway, VA Statistics Tutors
Greenway, VA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Belleview, VA precalculus Tutors
Berwyn, MD precalculus Tutors
Brambleton, VA precalculus Tutors
Calverton, MD precalculus Tutors
Crystal City, VA precalculus Tutors
Green Meadow, MD precalculus Tutors
Jefferson Manor, VA precalculus Tutors
Kingstowne, VA precalculus Tutors
Lansdowne, VA precalculus Tutors
South Riding, VA precalculus Tutors
Sudley Springs, VA precalculus Tutors
Sully Station, VA precalculus Tutors
W Bethesda, MD precalculus Tutors
West Hyattsville, MD precalculus Tutors
West Mclean precalculus Tutors
|
{"url":"http://www.purplemath.com/Greenway_VA_Precalculus_tutors.php","timestamp":"2014-04-18T05:43:55Z","content_type":null,"content_length":"24223","record_id":"<urn:uuid:0a776ace-6e03-4d63-95cf-01b7dd02bae5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 171
Writing Sentences and Paragraphs
Here is my Paragraph 1 , It is my good copy just wanna a final check before I submit it. PARAGRAPH 1 According to the job listing in The Daily News, I understand you are seeking a Medical
Administrative Assistant to work in Fletcher Allen Hospital in Burlington, Vermont. This ...
Writing Sentences and Paragraphs Paragraph 2
I have problems with fragments and wordiness hehehe :) but with all your help my other paragraph turned out well and I will work some more tomorrow on this one. THanks so much I hope people on here
appreciate your critique cause I've already learnt alot and you've made...
Writing Sentences and Paragraphs Paragraph 2
ASSIGNMENT: Your favorite cousin has moved to your town and is looking for a job. Her previous experiences are working as a cashier and sales clerk at two department stores. You know she plans to
apply at similar stores in your town. But you also know she is a perfect match fo...
Writting Sentences and Paragraphs
Your right , I just assumed it needed the description. I know I have to revise now for proper use of comma's and such . Is the use for a semi colon even needed in there as well? I also have to have
8-12 sentences in the paragraph so is the 9 I have enough? I can't subm...
Writting Sentences and Paragraphs
Background You ve applied for a specific job in your field of study. The Human Resources Department arranges an interview and tells you to bring with you a polished piece of writing for them to
evaluate your writing skills. The paragraph must describe one particular expe...
Writing Sentences and Paragraphs
My Corrected Paragraph: In response to your job listing in my local newspaper, I understand you are seeking a Medical Administrative Assistant, to work in Fletcher Allen Hospital located in
Burlington, Vermont. I understand in this required position, you must have the ability ...
Writing Sentences and Paragraphs
My Corrected Paragraph: In response to your job listing in my local newspaper, I understand you are seeking a Medical Administrative Assistant, to work in Fletcher Allen Hospital located in
Burlington, Vermont. I understand in this required position, you must have the ability ...
Writing Sentences and Paragraphs
In that case I'll have my boyfriend help tonight and repost tomorrow morning :) Great ideas and what a great website for homework help !
Writing Sentences and Paragraphs
Thank you soooooooooooooooo much , your help is very appreciated . I will work on this tonight and tomorrow with the help you've provided and send out a corrected version tomorrow evening once I can
review it all . Good night and thanks again :)
Writing Sentences and Paragraphs
Should I use my point of inspiration my mother's brain surgery rather than my wrist surgery is this what you mean by taking out the you part? I chose my wrist surgery due to the fact I wasn't around
during my mom's brain surgery but I could use that instead. Also I...
Writing Sentences and Paragraphs
Can you give me an example so I completely understand because I'm having trouble understanding what your asking and would really love to be able to correct what you see wrong. If you could pick out
one sentence as an example it could help alot ! :) thanks for your feedback!
Writing Sentences and Paragraphs
Background You ve applied for a specific job in your field of study. The Human Resources Department arranges an interview and tells you to bring with you a polished piece of writing for them to
evaluate your writing skills. The paragraph must describe one particular exper...
A satellite is in geostationary orbit 22,500 miles above the earth. IF the radius of the earth is 3,960 miles, how fast is the satellite moving in miles per day? How fast is the satellite moving in
miles per hour?
physical science
a newton meter is a measure of work also known as the.... what?
Hypothesis Testing A researcher asks whether attending a private high school leads to higher or lower performance on a test of social skills when compared to students attending public schools. A
sample of 100 students from a private school produces a mean score of 71.30. The p...
I need help with the layout on how to prepare a bank reconciliation and record adjustments. As of July 31 Company is Clark Company.
How do you prepare the following adjusting, reversing and next period entry into an unadjusted trial balance? I also need help with the rest of this problem. Cost of supplies still available December
31 is 2,700 The notes payable requires an interest payment to be made every 3...
I got this of this web site but I don't understand where it all goes. this is the one I am working on. I need help. Part 1 BULLSEYE RANGES For Year Ended December 31, 2005 Unadjusted Trial Balance
Adjustments Adjusted Trial Balance Account Title Dr. Cr. Dr. Cr. Dr. Cr. Cas...
I need to add the last question I entered was for Showers Company
Preparing closing entries and a post-closing trial balance for Exercise E4-3 I need help with this
i tried that i dont know why its not right
A 2.15 kg book is dropped from a height of 1.8 m. a) What is its acceleration? Answer in units of m/s2.
Clothing and textiles
What does Disposing Fullness means?
The heights in inches of 14 randomly selected adult males in LA are listed as: 71, 67, 70, 59, 71, 68, 67, 71, 80, 68, 74, 69, 72, 68. 1. Display the data in a stem-and-leaf plot. 2. Find the mean.
3. Find the median. 4. Find the mode. 5. Find the range. 6. Find the variance....
Assume that the committee consists of 4 Republicans and 5 Democrats. A subcommittee consisting of 5 people is to be selected. How many such subcommittees are possible if each subcommittee must
contain at least 1 and no more than 2 Republicans?
mgt 330
List at least one reason why you think workers may be more motivated by job security rather than salary and benefits in difficult economic times.
verb tense
i am to write five sentences about the role of education in successful financial planning, in which i am correctly to use a different verb tense in each sentence
I need ideas to write an introduction about Positive Discipline for Childern in the Descriptive Pattern of development? Thanks
Written Communication
Can someone help me Pleaseeeee .on a writing assignment? I need to write introduction paragraphs on these Topics My title: Positive Ways to Discipline Children. Thanks Positive Discipline (Pattern
of development: Description) Positive discipline Verses ...
Algebra II
9 dozen or 108 pastries(Answer) 1 dozen=12 12 x 9 = 108
Mathematics Proof Logic
prove |x + y| is greater than or equal to |x| - |y|
I am trying to figure out using the distributive property for 8(w-4)=-32
Thank you, you are always a great help!
I tried to use this...can you tell me if this is the wrong approach or please tell me what I am doing wrong as I keep arriving at the wrong answer. I tried finding v=sqrt(t/u) where t is torque & u
is density. I substituted u with mass/Length. I used this value and sub'd i...
I tried to use this...can you tell me if this is the wrong approach or please tell me what I am doing wrong as I keep arriving at the wrong answer. I tried finding v=sqrt(t/u) where t is torque & u
is density. I substituted u with mass/Length. I used this value and sub'd i...
What about v=sqrt(tension/(mass/Length))?
A string along which waves can travel is 2.5 m long and has a mass of 300 g. The tension in the string is 72 N. What must be the frequency of traveling waves of amplitude 6.6 mm for the average power
to be 68 W?
in the story the lottery by shirley jackson what is tess symbolic?
University Math
Let I denote the interval [0,oo). For each r ∈ I, define A={(x,y)∈ RxR:x^2+y^2=r^2} B={(x,y)∈ RxR:x^2+y^2<=r^2} C={(x,y)∈ RxR:x^2+y^2<r^2} Determine UA and ∩A UB and ∩B UC and ∩C
The water flowing through a 1.9 cm (inside diameter) pipe flows out through three 1.3 cm pipes. (a) If the flow rates in the three smaller pipes are 24, 18, and 13 L/min, what is the flow rate in the
1.9 cm pipe? (b) What is the ratio of the speed of water in the 1.9 cm pipe t...
Got it & Thanks! I was just messing up the (-) signs the whole time. Thanks.
Zero, a hypothetical planet, has a mass of 2.7 x 10^23 kg, a radius of 2.8 x 10^6 m, and no atmosphere. A 3.6 kg space probe is to be launched vertically from its surface. (b) If the probe is to
achieve a maximum distance of 4.6 × 10^6 m from the center of Zero, with wha...
Zero, a hypothetical planet, has a mass of 2.7 x 10^23 kg, a radius of 2.8 x 10^6 m, and no atmosphere. A 3.6 kg space probe is to be launched vertically from its surface. (a) If the probe is
launched with an initial kinetic energy of 8.0 × 10^7 J, what will be its kinet...
In printing an article of 48,000 words, a printer decides to use two sizes of type. Using the larger type, a printed page contains 1,800 words. Using smaller type, a page contains 2,400 words. The
article is allotted 21 full pages in a magazine. How many pages must be in small...
2. Illustrate effects on the accounts and financial statements of each of the following transactions. For a company using a job order cost system: (a) Materials purchased on account $176,000 (b)
Materials requisitioned: For production orders $161,500 For general factory use $8...
1. A pressurized spray painter was purchased on April 1 of the fiscal year for $4,800. It has a useful life of 4 years and a residual value of $300. Determine depreciation expense for the first two
years, assuming a fiscal year end of December 31 and using (a) the straight-lin...
we must apply a force of magnitude 82.0 N to hold the block stationary at x=-2.0 cm. From that position, we then slowly move the block so that our force does +8.0 J of work on the spring block
system; the block is then again stationary. What are the block's positions...
K=mv^2 ? And acceleration? Sorry, I am just so confused right now
Thank you, but how do you obtain the speed of the box?
Angle is 42 degrees
A 3.0 kg breadbox on a frictionless incline of angle ¥è = 42 ¢ª is connected, by a cord that runs over a pulley, to a light spring of spring constant k = 110 N/m, as shown in the figure. The box is
released from rest when the spring is unstretched. Assume ...
how do I start out writing a essay on the advantages of being in a relationship
Social Studies
Social Studies
What is the date of Julius Caesars assassanation?
where the scarabs used in the humdi to eat the person alive?
Given the function f described by f(x)=+10, Find each of the following f(0)= Given the function h described by h(x)=9x,find each of the following a) h(-15) b) h(11) c) h(34) ---------- h(-15)
high school
please proofread this paragraph help me with TOPIC,DETAILS AND CONCLUDING SENTENCE? DOES IT HAVE ANY FRAGMENTS OR RUN-ONS DOES IT INCLUDE PRICE LOCATION ETC ? Here are a few resturants you must visit
on your vacation to Maui. If your looking for a very good meal at a price far...
fundamentals of college writing
thats the problem, i dont know anything about pronouns, or what words im missing thats why i posted on here
fundamentals of college writing
i need someone to proofread. looking for fragments, paragraph structure, and run on sentences.....\ Here are a few resturants you must visit on your vacation to Maui. If your looking for a very good
meal at a price far below fancier resturants Paia Fish Market a great place fo...
english HS
heres a few i can think of, google racial slur database!!! great stuff slanty eyed chink gook, In korean, to say America, it's pronounced "me g@@k", g@@k meaning nation. The americans took it
literally and made it sound like they're saying "Me g@@k"...
-6 divide n = -9 and 5 over 6 Find n.
Please double check my answer. On three consecutive passes, a football team gains 7 yards, loses 22 yards, and gains 49 yards. What number represents the total net yardage? My answer: The total net
yardage is 34 yards.
Is this correct? Trains A and B are traveling in the same direction on parallel tracks. Train A is traveling at 60 miles per hour and train B is traveling at 80 miles per hour. Train A passes a
station at 3:25pm. If train B passes the same station at 3:40p.m., at what time wil...
Is this correct? Amy paid $89.18 for a pair of running shoes during a 15% off sale. What was the regular price? My answer is $102.56
I'm lost. Please help me solve. Trains A and B are traveling in the same direction on parallel tracks. Train A is traveling at 60 miles per hour and train B is traveling at 80 miles per hour. Train A
passes a station at 3:25pm. If train B passes the same station at 3:40p.m...
broom is to sweep as brush is to ? answer must star with scr, spr, str or thr
fifteen is to five as nine is to ? must start with one of the following :scr,spr,str, snd thr
Algebra 2
so R would be sqrt41 and then sin=-4/sqrt41 cos=-5/sqrt41 and so on and so forth right?
Algebra 2
Find the exact values of the six trigonometric functions of theta if the terminal side of theta in standard position contains the point (-5/-4)
pls help w/t math ?
in a right triangle ABC, if ab is 14 and bc is 12 wut is bd? there is a pic of a large trangle abc and in it is a line drawn down angle c making it line cd. angle acd then becomes a right trangle as
well. i tried working = proportions and pyth thm but i dont seem to be getting...
there are 40 jump ropes in the gym class. there are 4 times more red jump ropes than blue ropes. how many of each color are there? i can't seem to write the equation. help!!!!!!!! x+ 4x=40 5x=40 x=8
Pages: <<Prev | 1 | 2
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Shawna&page=2","timestamp":"2014-04-16T14:21:58Z","content_type":null,"content_length":"23995","record_id":"<urn:uuid:3d7c3672-9c9e-48d6-b6f7-74689c83ec3c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent application title: ADAPTIVE CHANNEL TRACKING USING PEAK FADE DEPTH ESTIMATION OVER A SLOT
Inventors: Mac L. Hartless (Forest, VA, US)
Assignees: Harris Corporation
IPC8 Class: AH04B1700FI
USPC Class: 375224
Class name: Pulse or digital communications testing
Publication date: 2012-11-29
Patent application number: 20120300821
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
Peak fade depth is measured (202) over a period of time, and a bandwidth of a channel filter (104) is then determined (206) according to the measured peak fade depth (202). In preferred embodiments
the average peak fade depth over two or more time slots is used. In a specific embodiment, an α filter (206) is used to determine the bandwidth of the matched filter (104), in which α is determined
based upon the measured peak fade depth (204). In various embodiments, peak fade depth is correlated to the Doppler shifting of the channel, which in turn is used to determine the bandwidth of the
matched filter by way of the α parameter. Hence, a non-linear equation can be used to determine the value of α which yields a minimum bit error rate for the matched filter (104). More specifically, a
matched filter (104) is matched to a received signal r(t) having k states according to a plurality of matched filters M
and outputs a signal given by |r(t)-C
, in which C
(t) is a channel estimate provide by a channel tracker (106) for a state k at time period t that is given by C
(t-1), c
)), and conj(M
) is the complex conjugate of M
. For each time slot, α is computed from the running average of the peak fade depth (202) according to a predetermined equation (204).
A method for adaptive channel tracking, comprising: measuring peak fade depth over a period of time; and controlling a bandwidth of a matched filter according to the measured peak fade depth.
The method according to claim 1 wherein measuring peak fade depth comprises measuring average peak fade depth over at least two of the periods of time.
The method according to claim 1, comprising: controlling the bandwidth of the matched filter using an α filter, in which α is a filter parameter; and determining a value of α based on the measured
peak fade depth.
The method according to claim 3 further comprising determining a value of α based on a Doppler shift corresponding to the measured peak fade depth.
The method according to claim 3, further comprising using a non-linear equation to determine said value of α that yields a minimum bit error rate for the matched filter.
The method of claim 3 wherein the matched filter is matched to a received signal r(t) having k states according to a plurality of matched filters M
and outputs a signal given by |r(t)-C
, in which C
(t) is a channel estimate for a state k at time period t that is given by C
(t-1), c
)), and conj(M
) is the complex conjugate of M
The method of claim 6 wherein α is determined based upon an average of peak fade depth over a plurality of time periods t.
The method according to claim 1, wherein said period of time is one time slot of data.
A adaptive channel tracker for generating at least a channel estimate for a matched filter of a receiver for each data slot t, the channel tracker comprising: a peak fade depth estimator configured
to measure a peak fade depth of a received signal over at least a time period of a data slot t and output a corresponding peak fade depth estimate; and processing circuitry configured to generate a
current channel estimate according to the peak fade depth estimate and providing the at least a current channel estimate to the matched filter.
The adaptive channel tracker of claim 9 wherein the processing circuitry is configured to further utilize at least one previous channel estimate and the peak fade depth estimate to generate the
current channel estimate.
The adaptive channel tracker of claim 10 wherein the processing circuitry is configured to implement an α filter to generate the current channel estimate.
The adaptive channel tracker of claim 11 wherein the processing circuitry is configured to implement a non-linear equation to determine a current value of α according to the peak fade depth estimate.
The adaptive channel tracker of claim 12 wherein the matched filter is matched to a received signal r(t) having k states according to a plurality of matched filters M
, and wherein the adaptive channel estimator is configured to output to the matched filter a plurality of current channel estimates C
(t) for each state k at time period t that are given by C
(t-1), in which c
)), and conj(M
) is the complex conjugate of M
The adaptive channel tracker of claim 9 wherein the peak fade depth estimator is configured to generate the peak fade depth estimate according to the average peak fade depth over at least two data
slots t.
BACKGROUND OF THE INVENTION [0001]
1. Statement of the Technical Field
The inventive arrangements relate to coherent demodulators, and more particularly to coherent demodulators that use adaptive channel trackers
2. Description of the Related Art
In digital data communication systems, transmit symbols must be reconstructed from a received sequence of transmitted symbols. A common difficulty which must be overcome in such systems is the
problem of inter-symbol interference (ISI), as is frequently caused by multi-path propagation. It is well known that ISI can be reduced by lowering the symbol transmission rate. However, this leads
to lower efficiency and can be avoided by using an equalizer or a maximum likelihood Viterbi algorithm which effectively compensate for the ISI problem. The equalizer effectively inverts the effects
of the channel by functioning as a system in series with the channel.
In order to function effectively, an equalizer must have some knowledge of the channel. However, real mobile radio channels are constantly changing and therefore the equalizer must be constantly
updated with new information about the current state of the channel. This function is performed by a channel tracker (sometimes referred to as a channel estimator) which implements a channel tracking
algorithm. The combination of the equalizer and the channel tracker is sometimes referred to as an adaptive equalizer.
The optimum bandwidth to be used for a filter which is matched to the modulation scheme will vary depending on the Doppler shift associated with a received sequence of transmitted symbols. Doppler
shift is the frequency shift experienced by a radio signal when a wireless receiver and/or transmitter is in motion. Doppler shift can result in Doppler spread in the frequency domain. Accordingly,
the adaptation time of processes which are used by channel trackers are preferably faster than the rate of change of the channel. Current methods used for adaptive channel tracking are processing
intensive and include Kalman filters, pilot sequences and/or multiple filter banks. Accordingly, it would be desirable to provide adaptive channel tracking that is quick, simple and effective.
SUMMARY OF THE INVENTION [0007]
Embodiments of the invention concern adaptive channel tracking, and in particular involve determining an optimal bandwidth for a channel tracking filter. A peak fade depth is measured over a period
of time, and a bandwidth of a channel tracking filter is then determined according to the measured peak fade depth. The average peak fade depth over two or more time slots is advantageously used for
purposes of determining bandwidth. In a specific embodiment, an α filter is used to determine the bandwidth of the channel tracking filter, in which α is determined based upon the measured peak fade
depth. In various embodiments, peak fade depth is highly correlated to the Doppler shift of the channel, which in turn is used to determine the bandwidth of the channel tracking filter by way of the
α parameter. Hence, a non-linear equation can be used to determine the value of α which yields a minimum bit error rate for the demodulation process. In a preferred embodiment the matched filter unit
is matched to a received signal r(t) having k states according to a plurality of matched filters M
and outputs a signal given by |r(t)-C
. An adaptive channel tracker provides C
(t) that is a filtered channel estimate for a state k at time period t that is given by C
(t-1), where the instantaneous estimate of the channel at time t is given by c
)), and conj(M
) is the complex conjugate of M
. For each time slot, α is computed from the running average of the peak fade depth according to a predetermined equation.
BRIEF DESCRIPTION OF THE DRAWINGS [0008]
Embodiments will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures, and in which:
FIG. 1 is a block diagram of a coherent demodulator in which an adaptive channel tracker would be used.
FIG. 2 is a flow chart which is useful for understanding the processing performed by the adaptive channel tracker in FIG. 1.
FIG. 3 is a plot which shows peak fade depth represented in dB, versus Doppler shift in Hertz.
FIG. 4 is a plot which shows optimal values of the variable a which should be used in a channel tracker filter, versus Doppler shift in Hertz.
FIG. 5 is a plot which shows optimal values of the variable a which should be used in a channel tracker filter, versus Doppler shift in Hertz.
DETAILED DESCRIPTION [0014]
The invention is described with reference to the attached figures. The figures are not drawn to scale and they are provided merely to illustrate the instant invention. Several aspects of the
invention are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a
full understanding of the invention. One having ordinary skill in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details or
with other methods. In other instances, well-known structures or operation are not shown in detail to avoid obscuring the invention. The invention is not limited by the illustrated ordering of acts
or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in
accordance with the invention.
Coherent demodulators for communication systems need to adapt to channel conditions for optimum performance. The optimum bandwidth as determined by a channel tracker varies depending on the Doppler
frequency shift of received signals, and hence Doppler tracking can be important for such receivers. However, the Doppler shift is not known in advance, which makes it difficult to know what is the
optimum bandwidth that should be used for the channel tracker. It has been determined, however, that the peak fade depth of a received signal varies as a function of the Doppler shift for that
signal. Accordingly, one embodiment of the present invention provides a simple method for estimating Doppler shift by using the measured peak fade depth of a signal. The estimate of the Doppler shift
is thereafter used as a basis to change the receiver bandwidth. The method is facilitated by use of a simple adaptive filter, and in particular an α filter for the channel tracker. Using such an α
filter, the bandwidth is adjusted in accordance with the parameter α. The method, which is described below, will be more clearly understood as the discussion progresses.
Briefly, there is a relationship between peak fade depth of a signal and the Doppler frequency shift of that signal, which relationship can be determined from computer simulation, empirical data or
combinations of the two. Once the relationship is known, then it is possible to estimate a Doppler frequency shift based on information concerning measured peak fade depth during some time-period,
such as the channel slot time period. For each Doppler frequency shift, there is an optimal matched filter bandwidth that can be used to minimize a bit error rate (BER) when demodulating that signal.
Hence, the optimal matched filter bandwidth may be set as a function of the measured peak fade depth. However, it is typically better to use a filter to estimate the optimal band width rather than
relying on an instantaneous measurement of the peak fade depth, particularly, for example, if the channel is not fast changing. A preferred embodiment uses an α filter as such filters are
computationally easy to implement, although any suitable filter or related algorithm may be used to convert peak fade depth into bandwidth. With regards to an α filter, one can use simulation,
empirical data or a combination of the two to estimate an optimal α to use as a function of the Doppler shift in the received signal to provide the lowest BER. Filter bandwidth is then related to the
value of α, and α is related to the instantaneous peak fade depth. This process is discussed in greater detail in the following.
Doppler shifts occur in the frequency of a transmitted signal due to motion of a transmitter and/or a receiver. The actual amount of shift will vary depending on the frequency of the signal and the
relative velocity of the receiver and transmitter. The Doppler shift will typically result in the frequency of a signal varying over time between a maximum and a minimum value which are determined by
the amount of Doppler shift that has occurred. The Doppler shift will result in spectral broadening of the received signal, which will in turn cause signal fading. Peak fade depth is a measure of the
ratio between a maximum signal power and a minimum signal power, measured during some period of time, where the difference in power is caused by signal fading.
An a filter, as referenced herein is a simple filter having a single tap, in which the output is the function of the input and of the immediately previous output. That is, an α filter has the form: X
(t)=α*x(t)+β*X(t-1), in which the values of α and β are either constants or are computed by other means with each iteration t. For the simplest case, one can set β=(1-α), and hence the α filter has
the form: X(t)=α*x(t)+(1-α)*X(t-1). Although a filters are used in the following, it will be appreciated that other types of filters, or even no filter at all, need be used. For example, more
computationally intensive filters that have greater numbers of taps can also be used.
Referring now to FIG. 1, there is shown a block diagram of an embodiment coherent demodulator system 100. RF signals from an antenna are processed by a receiver (not shown) and converted to an
intermediate frequency (IF), as known in the art. The IF signals are processed in an optional IF filter 102 to remove extraneous signals and noise, as known in the art. Generally, the IF filter 102
is tuned to the bandwidth of the transmitted signal so as to eliminate extraneous noise. The output of the IF filter 102 is a signal r(t) which is intended for demodulation. In an embodiment of the
invention, the coherent demodulator system 100 includes a matched filter 104, a channel tracker 106, a maximum likelihood sequence estimator 108, and a soft decision decoding block 110.
As shown in FIG. 1, the output r(t) of the IF filter 102 is communicated to a matched filter 104 and a channel tracker 106. Generally, for each symbol time t and state k the signal r(t) is compared
to matched filter M
(t) which is matched to the encoding method employed by the transmitter of the signal r(t) and modified in accordance with the best channel estimate C
(t) for that time t and state k. The modified filter can be expressed as C
(t). The best channel estimate C
(t) is generated by the channel estimator 106, such that the modified filter will generate a scalar filtered signal with an increased signal-to-noise ratio (SNR) relative to the original received
signal r(t). That is, the output of the matched filter 104 is given as |r(t)-C
. This filtered scalar signal which represents a difference between what was received and what is the estimate of the transmitted signal at time t and in state k, is then used by the maximum
likelihood sequence estimator 108 for demodulation of the transmitted symbol information by finding the path (the specific state k at time t) through the trellis which minimizes the total measured
difference over a slot.
The channel tracker 106 generates the channel estimate C
(t) for a data slot t and state k that is used by the matched filter 104. Hence, for each slot t, the channel tracker 106 generates k channel estimates and it is therefore desirable that the
complexity of the channel tracker 106 be minimized so as to reduce computational loading. By employing peak fade depth to estimate the value of α in a simple a filter, the channel tracker 106 meets
this criteria.
Because the channel tracker 106 employs an α filter, it is recursive in nature. That is, for each slot t, the channel estimate C
(t) is a function of a current value of α, which itself is a function of the peak fade depth for the slot t, and of the previous channel estimate C
(t-1) for the immediately prior slot (t-1). On startup, i.e., when t=1, the value for C
(0) can be set to the instantaneous value of C
(1). Thereafter, the best channel estimate C
(t) over a slot of data t and state k is given by:
(t-1), (Eqn. 1)
in which:
)), (Eqn. 2)
where M
, a vector value, is the matched filter for the state k, and conj(M
) is the complex conjugate of M
. This scalar value C
(t) of Eqn. 1, which may be thought of as a weighted time average of the instantaneous channel estimate c
(t) of Eqn. 2, is then forwarded on to the matched filter 104 for processing of the input signal r(t), as discussed above.
With respect to the calculation of a for each slot iteration t of the channel tracker 106, reference is drawn to FIG. 2. To predict the current value of α for the current slot t, the channel tracker
106, in a first step 202, estimates the peak fade depth over the slot t and then filters this value by way of averaging. For example, a running average of the peak fade depth can be employed using,
again, a simple a filter, in which α is a constant, for example. Of course, as above, other types of filters can be used, or even no filter at all (i.e., the instantaneous peak fade depth can be used
instead). In a second step 204, the channel tracker 106 uses this averaged (filtered) peak fade depth for the current slot t to determine the current optimum value of α. This can be done, for
example, by way of processing circuitry that employs a mathematical function that uses peak fade depth as an input to output a corresponding value for α, employs a look-up table that indexes based
upon peak fade depth to provide a corresponding α, or employs combinations thereof. Any suitable processing circuitry may be used to perform this conversion operation, such as a digital signal
processor or the like. Methods for finding functions that convert peak fade depth (filtered or otherwise) into a corresponding value for α are discussed below. As indicated, the channel tracker 106
recalls at least the last best channel estimate C
(t-1) for the previous data slot (t-1), such as by storing it in a non-volatile memory region, a register or the like. In a third step 206, the channel tracker 106 uses the immediately previous best
channel estimate C
(t-1) and the computed value of α from the second step 204 to predict the current best channel estimate C
(t) for the current data slot t and state k according to Equations 1 and 2 above.
The channel tracker 106 outputs this best channel estimate C
(t) for the current data slot t and state k to the matched filter 104 and to the soft decision decoder 110, as indicated in FIG. 1. As indicated earlier, the matched filter 104 uses the channel
estimate C
(t), which determines the bandwidth that the matched filter M
will use. Consequently, the output of the matched filter can provide a scalar signal with enhanced SNR to the sequence estimator 108 for subsequent decoding. Further error detection and correction is
then performed by the soft decision decoder 110.
The coherent demodulator 100 includes a sequence estimator 108. According to one embodiment the sequence estimator 108 can be a maximum likelihood sequence estimator (MLSE). As such, the MLSE can
determine a best estimate of the transmitted data by comparing all possible transmitted code words in a data stream with the actual signal output from the matched filter 104. The codeword that is
closest to the received work can be found by exhaustively checking all possible codewords, or by using a more efficient technique that gives better decoding performance. For example, in an embodiment
of the invention, the sequence estimator 108 is advantageously selected to be an MLSE which implements a Viterbi algorithm. As will be appreciated by those skilled in the art, the Viterbi algorithm
can greatly reduce the complexity of an MLSE. Still, the invention is not limited to an MLSE type sequence decoder or Viterbi algorithm and other sequence estimators can also be used, without
limitation. Sequence estimators including MLSEs are well known in the art and therefore will not be descried here in detail.
Finally, as another level of error detection and correction, the coherent demodulator 100 can include a soft decision decoder 110. Any suitable decoder 110 may be employed, as known in the art.
Generally, the soft-decision decoding block 110 will implement an algorithm by way of suitable processing hardware to decode data that has been encoded by the transmitter with an error correcting
As noted earlier, it has been found that there is a relationship between peak fade depth for a slot t and the Doppler shift of the received signal r(t). Knowing the Doppler shift of the signal r(t)
is beneficial for channel tracking purposes. Hence, as a first step for determining α as a function of peak fade depth, one can initially obtain for a slot t the relationship between peak fade depth
and Doppler shift of the signal r(t). In preferred embodiments, the relationship is determined for average peak fade depth as would be measured and reported by the peak fade depth estimator in step
202; however, it will be appreciated that other relationships between peak fade depth and Doppler shift may be investigated, such as instantaneous peak fade depth, or peak fade depth averaged over
more than just two time slots. By way of example, MatLab by MathWorks, El Segunda, Calif., can be used to simulate the relationship between peak fade depth and Doppler shift of the signal r(t). An
example graph of average peak fade depth versus Doppler shift for an embodiment coherent demodulator is shown in FIG. 3.
As a next step, the optimum value of α that yields a minimum BER for a particular Doppler shift can then be determined, such as by experiment or by simulation. That is, for each of a plurality of
Doppler shift values, a corresponding α value is determined, either experimentally, via simulation or combinations thereof, that yields a minimum BER when used in Equations 1 and 2 above for channel
tracking and coherent demodulation purposes. By way of example, optimal α as a function of Doppler shift for an embodiment coherent demodulator is shown in FIG. 4.
Finally, the data obtained from the steps above, i.e., as represented in the graphs of FIGS. 3 and 4, may be combined to generate a function that yields optimum α as a function of peak fade depth,
using, for example, standard mathematical tools known in the art. A graph of optimal α as a function of average peak fade depth for an embodiment coherent demodulator is shown in FIG. 5. The data as
obtained in this step may be encoded in the coherent demodulator 100, such as by way of a formula, lookup tables, combinations thereof or the like to provide a computable algorithm that converts an
input peak fade depth value as generated in step 202 into a corresponding α value that yields an expected minimum BER for channel tracking and demodulation purposes.
Although the above has been discussed with specific reference to a filters, it will be appreciated that other types of filters may be used to determine the bandwidth to employ as a function of
measured peak fade depth. For example, in situations in which the signal strength is known to always be high, one could do away with filters entirely and simply set the filter bandwidth directly as a
function of the instantaneous peak fade depth. Conversely, filters with greater numbers of taps (i.e., using more than one previous time slot) can be employed to estimate the bandwidth as a function
of the averaged peak fade depth or some other function of the instantaneous peak fade depth.
Applicants present certain theoretical aspects above that are believed to be accurate that appear to explain observations made regarding embodiments of the invention. However, embodiments of the
invention may be practiced without the theoretical aspects presented. Moreover, the theoretical aspects are presented with the understanding that Applicants do not seek to be bound by the theory
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Numerous changes to the
disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit or scope of the invention. Thus, the breadth and scope of the present invention should not
be limited by any of the above described embodiments. Rather, the scope of the invention should be defined in accordance with the following claims and their equivalents.
Although the invention has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the
reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several
implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the"
are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, to the extent that the terms "including", "includes", "having", "has", "with", or
variants thereof are used in either the detailed description and/or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention
belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context
of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Patent applications by Harris Corporation
Patent applications in class TESTING
Patent applications in all subclasses TESTING
User Contributions:
Comment about this patent or add new information about this topic:
|
{"url":"http://www.faqs.org/patents/app/20120300821","timestamp":"2014-04-19T06:18:17Z","content_type":null,"content_length":"53622","record_id":"<urn:uuid:104a8b48-f39e-4290-998a-d3e3c05254e3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convergence of a series
May 23rd 2010, 01:34 PM #1
Jan 2010
Convergence of a series
Hi everyone.
I need to prove that if 0 < a < 1 , then the series $\sum_{n=1}^{\infty} (1-\frac{1}{n^a})^n$ converges.
I tried to use the limit test and the comparison test but I couldn't manage to.
Can someone give me a hint how to prove that?
Thanks in advance!
Notice that $\left(1-\frac{1}{n^\alpha}\right)^n=e^{n\ln\left(1-\frac{1}{n^\alpha}\right)}$. But, notice that since $0<\frac{1}{n}\leqslant 1$ we may apply the series to the natural log to obtain
that $e^{n\ln\left(1-\frac{1}{n^\alpha}\right)}=e^{-n\left(\frac{1}{n^\alpha}+\frac{1}{2n^{2\alpha}}+\ cdots\right)}\approx e^{-n^{1-\alpha}}$ so what happens if $0<\alpha<1$?
May 23rd 2010, 08:56 PM #2
|
{"url":"http://mathhelpforum.com/calculus/146134-convergence-series.html","timestamp":"2014-04-20T07:35:11Z","content_type":null,"content_length":"34948","record_id":"<urn:uuid:57d166a3-ea75-4152-987a-3f3907976e77>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Experimental Progress towards Probing the Ground State of an Electron-Hole Bilayer by Low-Temperature Transport
Advances in Condensed Matter Physics
Volume 2011 (2011), Article ID 727958, 22 pages
Review Article
Experimental Progress towards Probing the Ground State of an Electron-Hole Bilayer by Low-Temperature Transport
^1Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, UK
^2Department of Electronic and Electrical Engineering, University College, London WC1E7JE, UK
Received 27 May 2010; Accepted 27 October 2010
Academic Editor: Milica Milovanovic
Copyright © 2011 K. Das Gupta et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Recently, it has been possible to design independently contacted electron-hole bilayers (EHBLs) with carrier densities cm^2 in each layer and a separation of 10–20nm in a GaAs/AlGaAs system. In
these EHBLs, the interlayer interaction can be stronger than the intralayer interactions. Theoretical works have indicated the possibility of a very rich phase diagram in EHBLs consisting of
excitonic superfluid phases, charge density waves, and Wigner crystals. Experiments have revealed that the Coulomb drag on the hole layer shows strong nonmonotonic deviations from a behaviour
expected for Fermi-liquids at low temperatures. Simultaneously, an unexpected insulating behaviour in the single-layer resistances (at a highly “metallic” regime with ) also appears in both layers
despite electron mobilities of above and hole mobilities over . Experimental data also indicates that the point of equal densities () is not special.
1. Introduction
Bringing two layers of 2-dimensional electron gases (2DEG) or a 2-dimensional hole gases (2DHG) in close proximity opens up possibilities that do not exist when the layers are very far apart. We give
a simple example to show why interaction-driven phases can arise more readily in bilayers. Let us recall that the ratio of the kinetic energy of a system of electrons and their potential energies due
to mutual Coulomb interaction is measured by the parameter (where and in 2-dimensions, with electrons per unit area). The ratio is not material independent; it depends on parameters like the relative
dielectric constant and the band effective mass of the material. Confining a large number of particles in a small area makes the interparticle spacing small and hence the Coulomb repulsion large, but
the kinetic energy of the particles increases even faster—making smaller. This somewhat counterintuitive fact is a straightforward consequence of Fermi statistics and is true in all dimensions.
Consider now two parallel layers of electrons or holes with 10^11cm^−2 electrons in each—which is a typical density in many experiments based on GaAs-AlGaAs heterostructures. If they are now brought
closer to each other, the particles in one layer not only interact with others in the same layer but also with those in the other layer. The interparticle spacing in the same layer stays fixed and is
about 30nm. It is now possible to make the distance between the two layers about 10nm with negligible tunneling taking place. 10nm is approximately the excitonic Bohr radius in gallium arsenide
(GaAs) and is an important length scale. We thus get an electron to “see” another electron (or hole) only 10nm away, without paying the kinetic (Fermi) energy cost, because the two layers continue
to be two separate Fermi systems. To get the same average interparticle separation (i.e., 10nm) within a layer, a 9-fold increase in density (and hence Fermi energy) would have been necessary. As a
consequence interaction-driven phases may be expected to occur more readily in bilayers. The case of the electron-hole bilayer may have some remarkable possibilities—particularly if we can make the
interlayer attractive interaction stronger than the intralayer repulsive interactions. This will require a bilayer system where the particles have low enough densities, such that the intralayer
separation between the particles is larger (or of the same order) than the interlayer distance. In practice, this implies that if the layer densities are about 10^11cm^−2, then the interlayer
distance would have to be about 10–20nm.
First, because of the attractive interaction between the electrons and holes, bound pairs may form. Indeed, bound pairs of electrons and holes (excitons) are well known in bulk semiconductors.
However, there is a crucial difference here. In bulk, the lifetime of the excitons can rarely exceed a few nanoseconds, because of radiative decay. The idea that spatially separating electrons and
holes could be a fruitful way of obtaining large exciton lifetimes and possible “bosonic” phases was first proposed in 1975 [1, 2]. Although this is a very exciting possibility, there can be many
legitimate questions about how stable such a condensate would be, whether in 2-dimensions one can get a long-range coherence at all, and so forth. It is not known how to measure the momentum
distribution (characteristic of a condensate) of a bunch of particles by transport—but there is another class of transport-based experiments [3–5] that can turn out to be very useful; these are
measurements of the transresistivity of the bilayer, which can be directly related to the interlayer scattering rate and may also provide indications of a condensate phase [6–8]. Passing current in
both layers in an opposite sense (counterflow) is predicted to couple to the excitons and is expected to be dissipationless for a superfluid [9]. A large increase in the drag resistivity is also
expected [6, 10]. Noise measurements, response to parallel magnetic field, and Josephson junction-like behaviour across a weak link are also anticipated [9, 10].
The first proposals [1, 2] relied on n-semiconductor-insulator-p-semiconductor structures to achieve this. However, only with the rapid improvements in GaAs/AlGaAs, heterostructure technology in the
1980s and subsequent development of closely spaced double quantum-well structures in the 1990s led to the first realistic possibilities of making such a system. In 2D, the relationship between the
critical density and the superfluid transition temperature () is expected to be given by the Kosterlitz-Thouless condition where is the exciton density and is the effective mass of the exciton [2].
In semiconductors, the small effective mass of an exciton () means that the transition temperature is anticipated to be much higher than that required for atomic BEC. The possibility of excitonic BEC
in EHBLs has been reviewed by Littlewood and Zhu [11].
The second possibility is somewhat less intuitive. It involves the densities of the two layers developing a spontaneous periodic modulation. Loosely speaking, it would remind one of a homogeneous
liquid freezing to a solid which has a crystal structure. Such spontaneous ordering may be characterized by the divergence of the relevant susceptibility function at a particular wavevector. Simple
theories describing the susceptibility of a 2DEG (Lindhard response function) would predict that the susceptibility remains nearly constant till (where is the Fermi wavevector) and drops rapidly to
zero after that. This indeed is the correct behaviour at long wavelengths, but it leads to certain unphysical results at short-length scales. A theory of susceptibility also leads to predictions for
the two particle probability distribution . This is not hard to see. Susceptibility is the density-density response function of a system, which by the fluctuation-dissipation theorem is directly
related to the density-density fluctuation or the structure factor. The structure factor, in turn is the Fourier transform of the two particle probability distribution. Among the well-known attempts
[12–14] to get the physically reasonable (nonnegative) values of at small distances is a self-consistent local-field theory of Singwi et al. This approach connects the charge susceptibility,
structure factor, and the local-field corrections for the screened Coulomb potential. With the advent of double quantum well structures in the early 1990s, this was extended successfully to the
bilayer [15–17]. A striking prediction of [17, 18] is that the electron-hole bilayer would be more susceptible to a charge density wave (CDW) formation at wavevectors much smaller than than the
electron-electron bilayer. The density-modulated phases are indicated by the divergence of one of the eigenvalues of the bilayer susceptibility matrix, but this does not require the divergence of the
single-layer susceptibilities—which may still occur at much higher . An excitonic state may be indicated by a divergence in the interlayer pair-correlation function at ; Liu et al. [18] had proposed
that such a divergence would be preceded by a CDW.
How close do we want the two layers to be? If we want to make the interlayer interaction stronger than the intralayer interaction, then we need the interlayer distance () to be smaller than the
intralayer separation () of the particles. Thus, for example, for cm^−2 in each layer, we would want nm. It is easy to see that the ratio would be indicative of the relative strength of the
interlayer and intralayer interactions. At the same time, it is important to ensure that the electron and mobilities are sufficiently high such that their behaviour is not predominantly dictated by
localisation and inhomogeneity.
2. Making Real Bilayers
While remarkable possibilities were predicted for EHBLs, making them experimentally turned out to be difficult and challenging. In this section, we will try to see why the basic requirements for
making transport measurements in EHBLs turned out to be difficult. Since the first attempt by Sivan et al. [19], there was a continued interest in these devices marked by the work of Kane et al.
(1994) [20], Rubel et al. (1998) [21], Pohlt et al. (2002) [22], Keogh et al. (2005) [23], and Seamons et al. (2007) [24]. There are a few key requirements for working with bilayers: (i) independent
ohmic contacts to each layer, (ii) gate voltage control of the densities of each layer, (iii) very low leakage through the barrier separating the two layers.
2.1. Making Independent Ohmic Contacts
Achieving independent ohmic contact to each layer in an EHBL is considerably more difficult than in electron-electron or hole-hole bilayers. Let us first look at the basic idea behind independently
contacting a 2x2DEG. Figure 1 shows how the ohmic contact, usually indium or a gold-germanium-nickel alloy, is deposited at selected places using standard photolithographic techniques. The subsequent
annealing process causes the metal to diffuse into the semiconductor and pass through both wells. The contacts are not independent at this stage, since they pass through and contact both layers. To
achieve independent contact to the bottom layer, the front gate (on the surface side) is biased negatively with respect to the ohmics such that only the upper electron gas is depleted. The gate
raises the conduction band, creating a potential hill just below it. So, the left contact (as in Figure 1) contacts only the bottom 2DEG. If we now have a similar gate at the bottom of the sample,
then a negative voltage on that gate would locally deplete the lower 2DEG, allowing us to contact only the top 2DEG (as shown on the right). The gates below the 2x2DEG must be aligned to the topside
features of the device, correct to a few microns, to ensure that the lower 2DEG is selectively depleted.
There is also another practical point. The topgate is typically only a few hundred nanometers above the upper 2DEG, so a small voltage on that gate will have sufficient effect on the 2DEG. But the
substrate (gallium arsenide) is itself about half a millimeter (500microns) thick. So, a voltage on the backgate (irrespective of whether it is aligned or not), will have 1000 times less effect on
the 2DEG. Therefore, the sample needs to be thinned to lower the biases required. It is practical to make the samples about 50microns in thickness and still handle them. So, the voltages required
would come down to about 50–100volts, which is practical. This was done by Eisenstein et al. in 1990 [25]. Aligning the backgate with topside features is another crucial requirement. This in general
requires a double-sided mask aligner. A process to achieve this with a single-side mask aligner and thinned substrates (≈50μm) was developed by the authors' group and has been described in detail in
There are two possible ways of bringing the backgate almost as close to the quantum wells as the topgate so that the voltages required are comparable. One of these is based on the Focussed Ion Beam
(FIB) technique. The second involves making the sample itself 1-2 microns thick by etching from the back, using an etch stop layer, using a technique named “Epoxy bond and stop etch” (EBASE)
described by Weckwerth et al. [27].
If a conducting region could be grown (during MBE) only at places where we want the backgate (as in Figure 1), then this would be a nice way to selectively deplete the bottom 2DEG. Doped GaAs
conducts because the dopants (Si) occupy some of the Ga lattice sites and contributes one electron to the conduction band. If certain regions are selectively subjected to a beam of heavy ions, then
the regions become nonconducting. Large defect densities may be created which trap the electrons released by the donor atoms or the Si atoms may be displaced—in either case, electrons from the donor
atoms would not be able to populate the conduction band. Experimentally, this can be done by using a beam of Ga ions at 30keV or so from a focussed ion gun. Ion doses like cm^−2 would disrupt the
lattice sufficiently and the layer would be rendered resistive (say at 1.5K). The beam can be directed with high accuracy and write out a desired a pattern. The layers which make up the double
quantum well (DQW) structures are grown after this stage. The entire patterning process and subsequent growth is done without removing the sample from the ultra high vacuum (UHV) environment, to
prevent contamination. This is the basis of the focussed-ion-beam (FIB) method and has also been successfully used for making transport measurements on independently contacted DQW by Hill et al. [28]
and Linfield et al. [29]. Notice that the area on which the active region of the sample will be located is actually not the beam damaged area—this still allows high 2DEG mobilities to be reached. A
pertinent question at this stage is how to align the later photolithographic stages with the damaged/undamaged area pattern written out by the beam. Very high ion doses of 10^17cm^2 or so can be
used to etch alignment marks away from the central region where the 2DEG forms. This level of beam damage makes the subsequent growth on those areas visibly different due to high concentration of
defects. The later stages can thus be aligned with the buried backgates.
3. The Electron-Hole Bilayer
Is it possible to extend a modulation doping-based method of making 2x2DEGs and 2x2DHGs to make an electron-hole bilayer (EHBL)? The answer is that it is possible only if we are satisfied with very
large layer separations nm [21]. At that distance, the interlayer Coulomb interaction would not be dominant. Consider a situation where we try to create two modulation-doped gases, (one 2DEG and one
2DHG) in close proximity, say 10–20nm. The Fermi level must come above the conduction band for a 2DEG to form; similarly, it must fall slightly below the valence band for a hole gas to form. Now,
the top of the valence band and the bottom of the conduction band are separated by 1.5V, which is the bandgap of GaAs. Thus, if a 2DEG and a 2DHG exist at the same electrochemical potential, then
the bands must have a very large slope in the region between the two layers. This implies an electric field of ~10^8V/m, which is too high to sustain. The structure would simply collapse. See Figure
2. As an aside, the bandgap of Silicon is about 1.1V, so the required field would be slightly less. In fact, recently, two groups have succeeded in making EHBLs in Si [29, 30], where the electrons
and holes stay at the same chemical potential. If at some point independent contact to bilayer graphene is made, then it would be very interesting from the point of view of an EHBL, because the
bandgap of graphene is zero. However, in the GaAs-AlGaAs system, the only way around would be to make the electrochemical potential itself discontinuous. This means that we need to connect a battery
from outside between the two layers which would allow the two gases to exist without requiring a huge band slope.
Notice that even before we made independent contacts, it was possible to create a a 2x2DEG. In the case of the EHBL, the contact must exist before the electron and hole gases can be formed. This also
calls for the barrier between the two layers to be exceptionally uniform and robust. At the heart of most bilayer devices (particularly EHBLs) is this barrier that separates the two layers. For
closely spaced (10nm) bilayers, a single growth defect in an area ~100μm × 100μm will cause everything to be dominated by catastrophic leakage and not bilayer physics! This extremely stringent
requirement on the uniformity of the barrier layer is equivalent to placing two sheets of cloth over an area of a football field while maintaining a uniform vertical distance of 1cm between them
throughout (if we scale all the lengths by a factor of a million). These issues were quite well appreciated, and the first EHBL device was made in early 1990s by Sivan et al. But this device had
limited range of operation as far as the density and temperatures (above 9K only) were concerned [19]. Only in last 3-4 years, it has been possible to make EHBLs where transport can be measured down
to millikelvin temperatures, densities can be tuned over a large range, and the interlayer interaction can be made stronger than the intralayer interaction due to values reaching below 1.
3.1. Recent Designs Of Electron-Hole Bilayers
The discussion in the previous section makes it clear that for an EHBL to exist in a GaAs/AlGaAs structure, the electron and hole layers must be held at different electrochemical potentials, and
hence each layer must act as a gate for the other. Thus, a combination of modulation doping and biasing can be used to achieve a stable electron and hole population. Here, we describe the device
fabricated by the authors' group [23, 26, 31]. We begin with an inverted hole gas created with a very low level of doping so that it can be backgated. A high level of doping would prevent the
backgate from acting on the 2DHG. Exactly what level of doping would stop a gate from working is an interesting and somewhat difficult question [26]. Making contact to this hole layer is not
difficult. This is usually done by depositing some gold-beryllium alloy and annealing the metal to make it diffuse into the semiconductor. See Figure 3 for a device schematic representation and
Figure 4 for a self-consistent band structure.
Now, using the hole layer as a gate, we can induce an electron layer on the other side of the AlGaAs barrier (see Figure 4). Electrons start accumulating soon after the bias reaches the bandgap,
provided there is some n-type ohmic contact to the electron layer, from where carriers can be pulled into the heterointerface. Fabricating such a device requires some new thinking. An usual diffused
ohmic would not work, because it would penetrate the barrier and reach the hole layer as well. The method would work only if we can find a “nonspiking” ohmic. Fortunately, there is a way. A heavily
doped capping layer of InAs (cm^−3Si) is used to pin the Fermi-level above the conduction band at the surface of the wafer. A selective etchant (conc. HCl) is used to remove the InAs from all
regions except from where the n-type contacts are to be formed. Any metal which adheres well to this surface (e.g., Ti/Au) can be used to inject electrons into the InAs layer at any infinitesimal
bias. A “Schottky barrier”, normally observed at a metal semiconductor interface, is not formed in this case. Though calculations indicate a small barrier at the interface of the InAs and n-GaAs,
unless the composition is smoothly graded, experimentally, we have not found evidence of such a barrier. A flatband condition (see Figure 4) is maintained in the region below the contacts down to the
2DEG, forming a completely “nonspiking" contact to the electron QW induced above the barrier. However, the 2DEG must not be allowed to extend out to the spiking p-type contacts, else independent
contact between the two layers would be lost. A carefully controlled isolation etch is introduced between each pair of n and p contacts. The etch removes sufficient GaAs to depopulate the upper
electron QW, but does not interrupt the lower hole QW. Fully independent contacts are thus achieved without the need of any depletion gates, focussed ion beam, ion implantation, or shadow masking
during MBE growth. All the necessary processing can be done with standard photolithographic techniques.
A composite IR and visible photo is shown in Figure 5(b) of a finished backgated device. The three independently controllable backgates are shown ( are tied together and ). Each backgate is contacted
at each end so that its continuity can be verified.
Figure 6 shows that the electron (blue) and hole (red) layers behave as 2-dimensional layers as expected.
The crucial point is that there must be independent contacts existing to both the electron and hole layers so that we can apply a voltage bias between them to get both layers to form. Another way was
shown by Seamons et al. [24]. This design relies on two back-to-back field-effect transistors (FETs), one of which is an n-channel device, and the other is p-channel. Under such circumstances, the
gate of the FET needs a small overlap with the ohmic contact to ensure that there is continuous path from the contact to the channel which has the carriers. The device is in reality less than a
micron thick and has to be supported on a substrate using a method described by Weckwerth et al. [27].
3.2. The QHE Bilayer
In closely spaced 2x2DEGs and 2x2DHGs, excitonic BEC is believed to occur in large magnetic fields in the quantum Hall regime and to be observable with transport. When both layers are in the state,
the half-filled Landau levels may be considered to be half full of electrons and half full of holes [32]. Striking experiments in bilayer electrons by Kellogg et al. [33], Tiemann et al. [34] and in
bilayer holes by Tutuc et al. [35] reveal almost dissipationless counterflow transport and vanishing counterflow Hall resistance. While in some ways these systems emulate exciton superfluidity in an
EHBL (for zero magnetic field), there does exist a vacuum of Landau levels and the screening will be very different in magnetic fields. The relation between the physics in the and the EHBL would
doubtlessly be a very interesting area in near future—however, for the purposes of this paper, we have not addressed this question.
4. The Coulomb Drag Experiment
The ability to make independent contacts to bilayers makes some new transport measurements possible. These go by names like Coulomb drag, counterflow, parallel flow transport, and so forth. and can
give us some information that single-layer measurements cannot. The basic importance of the drag measurement lies in the fact that it probes the interlayer scattering rate directly. The measurement
involves sending a known current through one layer () and measuring the open circuit voltage developed in the other layer as a result (). In the linear response regime, we can define a “drag
resistance” , in analogy with normal resistance. In general, this has a strong temperature dependence. The electrons in one layer can see the Coulomb potential due to the electrons (or holes) in the
other layer. Of course, this potential is not the bare Coulomb potential, but it would be the “screened” potential. The net result of this scattering is that the electrons in the drive layer try to
impart a little bit of the momentum they have to the electrons in the other layer. This means that if we closed the circuit in the “drag” layer, a small current would actually flow, which has got
nothing to do with leakage. This is very much like viscous drag between layers of a fluid. Usually, we prevent any current from flowing in the “drag” layer. So, a small pile up of charge occurs in
one end of the layer which results in a voltage appearing across the “dragged” layer. This is the voltage we measure. The interesting (and useful) point is that the magnitude of this voltage is
directly proportional to the scattering rate between the particles in different layers. As in any quantum mechanical calculation, the scattering rate is a product of a “matrix element” and another
factor that gives the density of available states or the phase space" factor. The scattering rate between two electron gases was first measured by Gramilla et al. in 1991 [36].
In reality, one almost always uses low-frequency (few Hz) alternating current for these measurements, the measured voltage thus has an in-phase and an out-of-phase component. It can be shown that the
out of phase component is proportional to the single-layer resistance and the measuring frequency.
4.1. Boltzmann Transport Analysis of the Drag Measurement
This problem has been quite extensively analysed by several authors in the context of 2x2DEGs (or 2x2DHGs) [5, 37–40] and for EHBLs as well [16, 41], using the linearized Boltzmann transport
equation. Linearization is done in the way the Fermi distribution in the drive layer is assumed to change due to the current flow. Here, we quote the final result and point out a few important
relevant features. Summing over all momentum exchange () between particles in the driven layer (layer 2) and dragged layer (layer 1), one gets [5] Here, denotes the probability of the elastic
scattering in which the momentum of a particle changes by . and denote the noninteracting susceptibilities of the layers. and denote the carrier densities. Equation (2) is applicable to 2x2DEGs,
2x2DHGs, and EHBLs. The ratio of drag resistivity to single-layer resistivity is usually a small number, even for high-mobility double quantum well structures, around , or so at most.
Note that individual layer mobilities (, where is the intralayer relaxation time) do not occur in the expression. This is crucial and stems from the fact that while ordinary resistance is a measure
of momentum lost to to all possible channels, the drag resistance is actually a direct measure of the momentum transferred to a single channel only.
Second, at small , Im. This is useful at low temperatures, because as the frequency increases a little, the sinh factor in the denominator would start becoming large. Thus, it is easy to estimate the
low behaviour. We do not expect to have a strong temperature dependence, because the dielectric screening function does not have strong dependence at small . Substituting , then the dominant
dependence is easy to extract. We make a very robust prediction that measured drag (interlayer scattering rate) will be approximately , and go zero as , due to the nature of the Fermi distribution
function alone, independent of many details. It can be shown that for “weak coupling” (i.e., high density) at , and for a peak-to-peak separation of the two wavefunctions, , one gets [5] This
prediction is well verified for 2x2DEGs and 2x2DHGs.
The coefficient of the term, however, requires a good model of dynamical screening of the interlayer Coulomb interaction. For the transition probability, we can use the Born approximation Here,
denotes the matrix element for a transition from state () to a state () The transitions are caused by the screened Coulomb potential of a particle in layer 1, as seen by another particle in layer 2.
Thus, the measured drag gives us a very direct experimental handle on the generic physics of screening in a many body context. Here, we quote some of the relevant important results.
The unscreened Fourier component of the interaction potential due to a point charge in layer 1, as seen in the same layer (), and in the other layer () can be written as where the form factors takes
into account the averaging of the potential over the subband charge distribution of each 2D gas. We can define and similarly For infinitely narrow wells, the charge distributions approach delta
functions. If these are separated by a distance , then and .
The physically important screened components (,) are obtained from the unscreened components (,) by summing the contributions of the original charge and the charges induced by the potential of the
original charge. The connection is provided by the dielectric screening function , which is a matrix in this case The dielectric function can be written as The individual layer susceptibilities can
be determined from the well known expressions given by Stern [42]. From (7), we can determine the screened component and hence determine the matrix element . It is also clear that the result cannot
depend on whether the interaction is attractive (electron-hole) or repulsive (electron-electron and hole-hole). We thus see that within Random Phase Approximation (RPA), the (minor) differences
between the EHBL and a 2x2DEG can arise from the difference in their band effective masses and the shape of the subband wavefunction for the holes. As far as the authors understand, the only way to
appreciate (theoretically) the crucial difference between attractive and repulsive interaction within Born approximation is to take the next step to RPA by introducing the “local-field corrections”.
RPA is known to fail for . By the inclusion of a local-field correction , the short-range potentials can be improved upon (). One approach to solving for is the Hubbard approximation [12] that
includes the effect of exchange. The potentials can also be calculated self-consistently using Singwi et al. (1968) [13] (STLS) approach. This was done for the electron-electron and electron-hole
bilayer by Świerkowski et al. [16, 43], who found that STLS gave a significant drag enhancement over the RPA, due to the effect of short-range correlations.
The drag resistivity in an EHBL was predicted to be larger than the electron-electron bilayer for three reasons, with the larger hole mass responsible for two. First, the excitations in the EHBL are
lower in energy, as is lower for the heavier hole layer. Second, the intralayer correlations are larger in the heavier hole layer (greater ), which reduces the interlayer screening. The third
contribution arises from the attractive interlayer interaction in the EHBL that enhances the interlayer correlations (larger pair-correlation function for small ), with the opposite effect in the
repulsive electron-electron bilayer. Subsequently, the interlayer local-field correction in the EHBL is negative whereas in the electron-electron case, it is positive. Hence, the modified potential
in the EHBL is larger leading to an increased drag resistivity.
If the determinant of vanishes then those regions can make large contributions to . These are the plasmon modes of the bilayer. The location of the (two) plasmon branches with respect to the
single-particle excitation spectrum of the particles in the two layers in the () plane, is an important aspect of the physics of the bilayer. These modes were studied (within RPA) by Das Sarma and
Madhukar (1981) [44] and Hu and Wilkins (1991)[45]. Later work of Liu et al. (1996) [17] and Hwang and Das Sarma [41] that go beyond RPA has also highlighted how the plasmon contribution can differ
in 2x2DEGs and EHBLs. However, it is not possible to get a finite drag at due to contributions from the plasmon modes.
Yurtsever et al. (2003) [38] compared the 2x2DEG drag data of Kellogg et al. [46], with RPA, STLS, and their own method based on the Hubbard approach. The RPA and TF underestimate the drag whereas
the STLS method gives an overestimate of the drag. Good agreement with their Hubbard model was found. Similar work was done by Hwang et al. (2003) [47] looking at data from 2x2DHG of Pillarisetty et
al. [48] that had a drag resistivity 2-3 times larger than drag in corresponding electron-electron bilayers [46]. They used the Hubbard approximation and included scattering with
, appropriate for large , and phonon-mediated drag. A deviation from was found () as for the holes and an enhanced phonon contribution for the hole-hole bilayer compared to the electron-electron
system. The intralayer correlations are dependent on which affects the screening at low densities. A comprehensive comparison of the predictions of various local-field theories, and the Fermi
hypernetted chain approximation for Coulomb drag has been done by Asgari et al. [39], more recently.
For the RPA, STLS, and Hubbard methods, a stronger dependence on density is predicted, [38] (rather than for the TF model [5]), which has been observed [46]. The relationship is only exact in 3D. For
2D, there exist corrections from the divergences in phase space for and , corresponding to forward and backward transitions on the Fermi surface. A correction proportional to is expected [49], but
should be small. This correction is believed to have been observed in low-density electron-electron bilayers [46].
A similar temperature dependence of () is expected [50], when a large amount of disorder (, where is the mean free path) is included at very low temperatures.
Hwang and Das Sarma [41], calculated the drag resistivity (and single-layer resistances) for EHBLs with parameters fitted to the devices of Seamons et al. [51, 52], with 20nm and 30nm barriers.
Using local-field corrections and dynamical screening, they were able to show that the coupled bilayer plasmon modes in the EHBL greatly enhance the drag resistivity with respect to the
electron-electron and hole-hole bilayers.
5. Coulomb Drag in Electron-Hole Bilayer: Experimental Results
Coulomb drag at K in EHBLs in a regime where , has recently been measured by the authors' group and an experimental group in Sandia [51, 53]. Our results show that drag measured in a device with a
10nm barrier (device B138/C4-1) is shown in figure 9 for matched electron and hole densities (). For the two highest densities (cm^−2), the hole drag resistivity is monotonic over this temperature
range and appears to go towards zero as the temperature does. However, for the lower-density traces ( cm^−2), an upturn is seen in the hole drag. The lower-density trace has a larger upturn and the
corresponding electron drag trace (cm^−2) is also shown. Only a very small upturn (if any) is found in the electron drag, with below ~1K.
In another 10nm barrier device (B138/C4-2) fabricated from the same wafer (I.D. A4268), lower-matched densities could be reached. Electron and hole drag resistivities are shown in Figure 10 for
between 4 and cm^−2. The high-temperature drag is a good fit to and . For cm^−2, the coefficient is similar () to the first device B138/C4-1 (see Figure 9). At lower densities (Figure 10), the
upturn does not increase (as seen in Figure 9) but becomes smaller, with the lowest two densities (cm^−2) displaying no upturn at all and a sign-reversal at the lowest temperatures. For an
intermediate density cm^−2, the upturn is followed by a sharp negative downturn around 300mK. In contrast, the electron drag resistivity remains monotonic and follows the expected dependence for two
Fermi liquids. It is interesting to note that the departure from the dependence in the hole drag is relatively insensitive to density and occurs at ~700mK. Note that at cm^−2, is reached.
Nonreciprocity at low temperatures is an unexpected and puzzling result, since the drag is clearly in the linear regime. The drag resistivity was found to be independent of the excitation current
frequency, up to ~100Hz. Switching the grounding point between the layers can detect whether the measurement circuit is equivalent in the two drag configurations. Shifting the grounding and bias
points was found not to affect the anomalous low-temperature data. At higher temperatures, the drag is reciprocal, this too verifies that the measurement circuit is set up correctly. The following
section reports drag measured in EHBL devices below 300mK in a dilution refrigerator.
5.1. Coulomb Drag down to 50mK
Coulomb drag in sample B138/C4-1 was measured down to 50mK. In Figure 11, drag is shown for cm^−2. For the cm^−2 traces, the high-temperature dependence is similar to that in Figure 9 measured in
a different system. An upturn in the hole drag is seen, which appears to be larger for the lower density. Deviation from appears at a higher temperature for the lower density. However, for this
density, below 250mK the hole drag peaks and starts falling. A smaller upturn is seen in the electron layer, despite the I-V plot (Figure 11(b)), showing that the electron drag is still linear down
to lower excitation currents (0.5nA).
In a 25nm barrier, undoped device (B135/C3-4) with resulting higher hole mobilities, the drag resistivity measured down to 35mK is shown in Figure 12 for cm^−2. For the lower-density traces, an
upturn is seen in the hole drag, though it peaks, then falls and below 200mK saturates at a small negative value. As shown in Figure 12(b), even the negative hole drag appears to be linear down to
small currents. But the corresponding electron drag trace still appears to follow the dependence. The features in the low-temperature drag are not hysteretic with temperature. All points were taken
as the sample was cooled, except the I-V traces that were taken as the sample was warmed. The resistances corresponding to these traces are shown as black circles in Figure 12 and agree well with the
other data. Figure 12(c) shows the in- and out-of-phase component of the hole drag signal for cm^−2. This shows no anomalies in the out-of-phase signal coincident with the anomalous behaviour seen
in the in-phase signal, ruling out artefacts from capacitive effects or ohmic contact failure. For the higher-density traces (cm^−2), a small upturn in the electron drag is seen whereas a small
negative downturn is seen in the hole drag (see inset, Figure 12).
For an excitonic superfluid phase, an upturn in the drag is predicted that would diverge, approaching the single-layer resistivities [6], which themselves would diverge as the number of unpaired
electrons and holes that are able to carry single-layer current fall. But such a strong effect is not seen. Besides in the excitonic phase, there is no reason to expect Nonreciprocity. Electrostatic
binding within an exciton may explain the upturn seen in the drag and departure from the behaviour expected for weak particle-particle scattering. However, an excitonic phase is unlikely to have a
preference for the lighter or heavier layer and cannot account for the lack of reciprocity seen at low temperatures. An indicator for the presence of excitons would be an enhancement of the drag at ,
particularly for a BCS-like state where nesting of the electron and hole Fermi surfaces is required.
5.1.1. High-Temperature Dependence
The magnitude of drag in the EHBL is expected to be greater than in electron-electron and hole-hole bilayers, due to the additional plasmon enhancement[41] and due to larger correlations between the
layers [55], including the high-temperature () regime. The coefficient () for the data in Figure 10 (10nm barrier) can be compared with that for 10nm barrier electron-electron and hole-hole
bilayers. The electron-electron data of Kellogg [56] for cm^−2 gives a coefficient of , compared to in the EHBL. This shows an enhancement by a factor of ~9. In the hole-hole bilayer of Pillarisetty
et al. [48], for cm^−2, compared to in the EHBL. These are similar, despite the hole-hole bilayer having a larger in both layers. However, at these densities, for an accurate comparison, the
correction to account for the different must be made.
Comparing the data in Figures 10 and 12 for 10nm and 25nm barrier devices at cm^−2, the dependence on interlayer separation can be examined. The respective coefficients are and , a ratio of ~10.
By measuring the interlayer capacitance, the wavefunction peak to peak can be estimated. The 10nm and 25nm barriers correspond to of 25nm and 40nm, respectively. From (3), one expects . Hence, an
expected ratio of , close to the measured 10. An increase in interlayer correlations with reducing separation may explain the enhancement over the expected value. The density dependence of the
coefficient is examined next.
We have been able to describe our high-temperature (above 1K) drag measurements using the linear Boltzmann formalism as in (2), provided that the average intralayer particle spacing is smaller than
the average interlayer particle spacing. We used a simple model with temperature-dependent Lindhard functions and accounted for the finite-thickness of the wavefunctions. Intralayer correlations were
taken into account using a Hubbard local-field correction as described by Yurtsever et al. [38], but interlayer correlations were neglected. We also neglected phonon effects. This model works well
for 10nm and 25nm barrier devices at high enough densities (see Figure 13). For very low densities, the model underestimates the drag, suggesting that interlayer local-field corrections might be
large because particles in different layers are closer together than particles in the same layer. We have also noticed that the calculated drag is sensitive to the shape of the wavefunction. However,
the low-temperature drag observations cannot be explained by a Boltzmann-type even if local-field corrections are taken into account.
5.2. Drag at Mismatched Densities
Data at constant electron density (cm^−2) with varied, is shown for device B138/C4-1 in Figure 14. At higher temperatures (K and 3K), where the anomalous drag is not seen, agreement is found
between the electron and hole drags, and a good fit to a power-law dependence is found (). At the lowest temperature, the electron drag still has the same power-law dependence on . However, as is
lowered the hole drag no longer agrees with the electron drag, and exhibits the upturn found in Figure 9 at cm^−2. A maximum is seen in the hole drag close to , but for a sharp downturn that goes
negative is found. It is unclear from these observations whether or the value of the hole density (, transition to large-angle scattering) is important, and more work is required to analyse this
point. However, achieving the upturn is not dependent on matching the densities exactly. This point was also investigated by Morath et al. [52]. While a peak is seen in the hole drag close to matched
densities (Figure 14), it cannot be concluded that this is excitonic (or phonon/plasmon) in origin.
Considering the data in Figure 10 for , the coefficients are plotted against layer density () in Figure 15. A power-law dependence for , with is found. This total has been predicted by the RPA,
Hubbard-like and STLS calculations performed by Yurtsever et al. [38] for the electron-electron bilayer. These go beyond the (high-density) weak coupling limit (TF) where is predicted (3). In Figure
14, a dependence of is found, which together implies a weaker dependence () on rather than . For drag taken when the hole density is held constant at cm^−2 and the electron density is varied (data
not shown), the coefficient is plotted in Figure 15 against , showing . Some inaccuracy will occur as the interlayer separation () is to some extent a function of the interlayer bias that determines
the electron density. This effect was studied by Morath et al. [57]. Similarly, the position of the hole wavefunction will be affected by the backgate bias. Depleting holes will push the wavefunction
peak towards the barrier. Nevertheless, it is expected that the drag will be more sensitive to the layer with greater [47, 48]. Earlier work in the EHBL showed the opposite result that the drag was
found to be more sensitive to the electron density [19], but this work was performed above K, where other processes such as phonon-mediated drag will be significant.
5.3. Interlayer Leakage
In all biased structures, finite leakage currents will exist. In the EHBL, the interlayer bias (~1.5V across ~10nm) acts across a small distance and measurable leakage exists, though it is
typically far smaller than the measurement currents used. In the best devices, pA, while the measurement currents are typically between nA. The effect of the leakage current on transport
measurements is important, particularly if it can influence the drag measurement or the state of the system. There are several possible mechanisms whereby leakage can influence measurements directly.
Firstly, the electrons/holes that leak through the barrier will be much hotter than the 2DHG and 2DEG, which will have reached thermal equilibrium with the lattice. It must be possible for these
energetic particles to lose this energy on a shorter timescale (thermalisation time) than the characteristic lifetime of any coherent phase existing in the bilayer, so that the leakage event will be
forgotten quickly by the system. Likewise, the leakage events must be infrequent, relative to the phase lifetime (such as the lifetime of an exciton).
As discussed earlier, the leakage is most likely caused by barrier defects and carriers will probably lose energy via transitions through defect states that exist mid-gap. In this respect, backgate
leakage is not likely to be as important. While much larger biases are used, a charged particle is unlikely to travel the distance from the backgate to the 2DHG ballistically, and energy will be
dissipated to the lattice. It is necessary for the particles to be in local equilibrium for the true ground state to emerge, despite the two layers being at different electrochemical potentials. It
is unknown how much leakage will affect this condition. For a superfluid-like state, the lifetime due to leakage between the layers must be larger than , where is the energy gap. For the effect of
the gap to be observed, we need , which assuming a lowest measurable temperature of 50mK, gives a bound of ps. A typical interlayer leakage current in our device is ~50pA, over an area () of
0.14mm^2 (including any leakage due to radiative recombination). The characteristic timescale between leakage events (how long the particle remains in one layer) is . Hence, an approximate leakage
lifetime (for cm^−2) is s, which is much longer than any typical transport lifetime (ns, ps) in these devices and any lifetime corresponding to a gapped phase that could exist within the measured
The drag resistivity at low temperatures is typically smaller than , and stray currents can adversely affect the measurement due to the larger single-layer resistivities. All measurements were
conducted with a.c. phase sensitive detection. Incoherent d.c. leakage cannot contribute directly to an error in the measurement. However, a weak point in the barrier may allow a path for the a.c.
excitation current to cross into the other layer and return via the interlayer bias supply (battery) (Figure 16), which appears to the a.c. as a low resistance path. As shown in Figure 16(a), by
changing the interlayer biasing points, the effect can be reversed. For device B138/C4-2 at cm^−2, changing the bias has little effect on the Nonreciprocity or the upturn in the hole drag. The
electron drag is also unaffected.
6. Discussion
Features are seen in the drag resistivity at low temperatures that cannot be explained within the framework of Fermi-liquid theory [41]. For two Fermi gases, the phase space allowed for interlayer
particle-particle scattering must go to zero as the temperature does.
Qualitatively, similar anomalous behaviour is seen in two 10nm barrier devices, with an upturn below 0.5K that may be followed by a downturn or sign reversal at the lowest temperatures, for the
lowest densities (Figures 10 and 14). The magnitude of the upturn differs by a factor of ten between devices (for the cm^−2 at 300mK, Figures 9 and 10), with the high-temperature drag () agreeing
well. This would suggest that sample-dependent factors such as disorder might be important in the anomalous regime. Anomalous behaviour also occurs at larger layer separations (25nm barrier),
consistent with the findings of Seamons et al. (2009) [51], where a small upturn was found in 20nm barrier samples but not for 30nm.
Third-order corrections to the interlayer interaction by Levchenko and Kamenev [58], showed that nonzero drag at was possible, and so does not necessarily indicate the presence of strong interlayer
correlations. However, the effect they find is small (~1), particularly for high-mobility samples, and cannot explain the anomalous drag seen (~1).
The deviation is too large to be caused by a plasmon or phonon enhancement [28, 36], which would be peaked at matched densities and higher temperatures. Plasmon enhancement is expected at (K, K),
while below the Bloch-Grüneisen temperature (~1K), the phonon contribution is heavily suppressed.
6.1. Coulomb Drag Upturn
The upturn in the Coulomb drag (Figure 9) at the lowest temperature, may be a signature of an increased interlayer coupling due to the formation of excitons. This coupling is not suppressed by the
falling phase space with temperature, restricting scattering at the Fermi surfaces. Within the exciton regime, distinct Fermi surfaces no longer exist when the binding energy exceeds , lifting this
phase-space restriction.
The transition temperature for an excitonic superfluid state (assuming a 2D Kosterlitz-Thouless type transition), is expected to increase with exciton density (in 2D , (1)), where is the exciton
density. Seamons et al. [51] attempted to identify the temperature of the minimum in the drag as the transition temperature, arguing that this point occurs at higher temperatures for larger
densities. It is possible to see the same trend in Figures 9 and 11, though in the latter the deviation from clearly occurs at a higher temperature for the lower-density data (cm^−2). This point of
deviation may be more significant than the drag minimum.
However, the upturn is far smaller than that predicted for an excitonic state [6, 8, 10]. The drag is anticipated to reach a value approaching the single-layer resistivities, with a sharp change
(discontinuity) signifying the phase transition (unless only a small fraction of electrons and holes enter a paired state). Hu [8] predicted an enhancement due to electron-hole pair fluctuations
above the transition rising as . Fitting this to an upturn to obtain is possible (Morath et al. (2009) [52]), but it cannot explain the subsequent downturn seen in Figures 11 and 12. The prefactor to
the expression predicted by Hu [8], is larger (by a factor of ~1000) than the upturn measured [52]. For the drag at mismatched densities (Figure 14), the peak expected at is not seen. The peak in the
figure is not sharp enough and is asymmetric. As discussed earlier, it is unclear whether or plays a role in the peak. Crucially, matched densities are not necessary to see the upturn.
Other indicators for an excitonic phase could include a temperature dependent Hall voltage, since neutral excitons would not feel the Lorentz force. The authors had looked for this effect, but not
observed it in their experiments.
6.2. Coulomb Drag Sign Reversal
At the lowest temperatures, in 10nm and 25nm barrier samples, a sign reversal of the drag resistivity has been seen. In this situation, driving the current in one layer in one direction causes
particles in the other layer to move in the opposite direction. Sign reversal has been seen for drag between layers at large filling factors (moderate magnetic fields) in the quantum Hall regime [59–
63]. Partly filled Landau levels, possess electron and hole character. If the highest Landau levels in the drag and drive layers have opposite deviation from half filling, then is negative
(electron-hole like) at low temperatures. At higher temperatures when is larger than the disorder broadened width of the Landau level (), then the drag returns to the zero magnetic field (positive)
sign. For nonzero drag, the excitations at the Fermi surface must not have particle-hole symmetry [60, 63]. At the centre of a disorder broadened Landau level, this symmetry is acquired and the drag
goes to zero. Varying the magnetic field will change the Landau level populations and a complex series of positive and negative drag oscillations results.
Figure 12 shares many of the features seen in the temperature dependence of magnetodrag in an electron-electron bilayer [63] at , where a peak is followed by a downturn. It is not immediately clear
how the banding (density of states) required for this would occur at zero magnetic field. For 2DHG and 2DEG, is expected to be continuous and the concept of disorder broadening cannot be applied.
Alkauskas et al. (2002) [64] have proposed that a sign reversal in the drag resistivity may result from the inclusion of an in-plane periodic potential, with wavelength much greater than that of the
underlying atomic lattice. This extra periodicity creates an additional Bragg plane and the formation of minibands, with a bandgap at much smaller , at the Brillouin zone boundary corresponding to
the large wavelength of this additional potential. They found that as the density was increased, a sign reversal would occur. Normally, particles exist within the parabolic bottom of the band and are
scattered before they reach the zone boundary. If the Brillouin zone is smaller, then the point at which () changes sign is attainable at experimental densities, leading to a sign change in the
effective mass and a sign reversal in the drag resistivity. But for this to be relevant, a periodic solid-like phase must appear in the EHBL. Candidates for this include the Wigner crystal, where one
or both layers has crystallised into a periodic array overlaying each other [65] or a spontaneous periodic modulation in charge density, known as a charge density wave [55]. These possibilities would
be discussed further in the context of the single-layer resistivity measurements.
6.3. Drag Reciprocity
It is expected that for measurements in the linear regime, the hole and electron drag resistivities should be equal [54, 66, 67]. At the lowest temperatures, where the deviation from is found, a
Nonreciprocity between and is observed, despite the measurements appearing to be in the linear regime. The measurement circuit is not the cause of the Nonreciprocity as the drag is reciprocal at
higher temperatures. The effect of interlayer leakage was discussed before as a possible source of error, but in our experiments, we have verified that this gives no noticeable contribution (Figure
Measurements of Coulomb drag down to 300mK on EHBLs has been performed by Seamons et al. (2009) [51, 52], on samples with 20nm and 30nm barriers and similar densities. For the narrower barrier, a
small upturn is found only in the drag measured on the holes (). This Nonreciprocity was explained as a result of additional Joule heating caused by passing current through the more resistive hole
layer for the electron drag configuration. They report saturation in the electron Shubnikov-de Haas oscillations at about 1K, when the same current used for drag is passed through the hole layer.
Single-layer measurements reported by Seamons et al. on the hole layer require a much smaller current than that used for drag, so heating was avoided.
Joule heating cannot explain the Nonreciprocity found in this work. Heating will be a nonlinear effect (R), but no nonlinearity is found in the I-V traces of the drag resistivity. For the data taken
in Figure 9 at cm^−2, at the lowest temperature, the two-probe resistance of the electron and hole layers differed by a factor of 9 (hole 2-probe , electron 2-probe ). The two and four-probe
(single-layer) resistances of the hole and electron layers are shown in Figure 17 as a function of temperature. Reducing the current by a factor of three will give the same Joule heating for the
electron drag configuration as for the hole drag. But nonlinearities are not seen in this range (Figure 9). Indeed the same currents were used for single-layer measurements as for drag and no
saturation of Shubnikov-de Haas amplitude with temperature was seen.
7. Features in Single-Layer Resistivity: An Interaction-Driven Insulating State
In our EHBL devices it is possible to perform experiments with only the 2DHG present. This is achieved by keeping the interlayer bias below the threshold for electron accumulation (V) and biasing
both backgates negatively to induce holes in the QW. The hole density can then be controlled with the central backgate . The temperature dependence (0.3 to 1.5K) of the single-layer resistivity of
the holes is shown in Figure 18(a). As the density is lowered, there is a transition from metallic to insulating behaviour between 5 and cm^−2. These features are consistent with results from
several 2DMIT studies of Silicon MOSFETs and GaAs/AlGaAs-based devices. The crossover occurs as expected close to or equivalently to , with for and for . At , the mean free path () is approximately
equal to the interparticle separation (). Non-monotonic behaviour is observed for the cm^−2 trace, which is insulating above K and metallic below. This has been observed before in 2DHGs [68] and in
an EHBL with a 30nm barrier [57] and can be explained within Fermi liquid theory [69].
Most striking is the change in behaviour of the hole layer with the addition of the electron layer (kept in an open-circuit configuration), [70, 71] shown in Figure 18(b). All traces are now clearly
insulating by K, even those that had been metallic. This insulating state is occurring at , with resistivity at K far below the regime where a transition to insulating behaviour is expected.
Localisation due to background disorder (impurities/dopants/defects) cannot account for this because the 2DHG sees the same disorder with and without the 2DEG present. Placing a plane of mobile
charge next to the 2DHG is expected to improve the hole mobility. Adding the electrons does result in a significant three-fold increase in hole mobility at K; for cm^−2, falls to . This is
consistent with the results of Morath et al. [57], from a 30nm barrier EHBL device, where the dependence of the hole mobility with electron density was explored. The effect is much larger here,
possibly due to the smaller barrier.
Improvement of the high-temperature mobility is likely to be the result of several processes. Background impurities will be screened by the presence of the 2DEG. Inducing the electrons requires a
large electric field across the barrier, and then to reach matched densities () a depleting backgate bias is also required. Both of these cause the wavefunction to be “squeezed” against the AlGaAs
barrier, improving the screening as the holes become more greatly confined (though also potentially harming the mobility due to increased interface roughness scattering and the higher level of
background impurities found in AlGaAs). In this regime, the hole mobility is limited by remote ionised impurity scattering caused by the intentional p-dopants (verified by comparison to undoped
structures). The interlayer bias will pull the holes towards the barrier and increase the effective spacer thickness (between the 2DHG and dopants) and accordingly improve the mobility.
Placing a conducting electron layer close to the 2DHG will also improve the screening, as image charges will form in this layer. If the interlayer separation is , then the dipolar field (charge and
its image) will drop faster than at distances greater than . This effect has been studied in gated 2DEGs [72] and in double QW structures (2x2DHG) with nm [73].
While the change at K can be accounted for by a combination of effects, the insulating behaviour at low temperature is unexpected as these arguments always improve the intralayer screening and lower
the effective . Matthiessen's rule-based addition of scattering rates cannot explain the increase in mobility [74] at K. Adding the contribution of the interlayer scattering rate to the impurity
scattering rate will cause a reduction in mobility. Going from the situation of a single hole gas () (where , whose mobility is primarily dictated by impurity scattering (), to the mobility of the
holes with the addition of the electron layer (bilayer configuration) () must introduce a term corresponding to the presence of the electron layer () and so This must mean that is changing in the
presence of the second layer. Note that [74], where is the density of the electron layer. The anomalous drag cannot account for the increase in at lower temperatures, as it is too small. This
suggests that a new single-layer scattering mechanism has emerged due to the presence of the electron layer.
The insulating state is also seen in the electron layer. Figure 19 shows both electron and hole layer resistivities down to ~50mK. Both layers exhibit a similar behaviour, though the relative change
in resistance appears to be much larger in the hole layer. However, the loss of conductivity in both layers between 2K and 50mK (inset Figure 19) is similar (~0.2mS), over which the insulating
behaviour is seen. This is much larger than the change weak localisation (quantum interference) can account for (μS) [75]. Weak localisation predicts that , where and are the temperature-dependent
lifetimes for eigenstates of energy and momentum, respectively. The insulating state was also seen in a third device (B141/C5-2) that had been processed with a shorter hall bar (250μm as opposed to
500μm), and the effect was found to be independent of the length to width ratio.
Figure 18(e) is an Arrhenius plot ( versus ) for cm^−2, and the resistivity shows a good fit to an activated behaviour (It is difficult to distinguish between a power law and an exponential rise as
the insulating phase occurs over a small temperature range, less than one order of magnitude.) (), yielding an energy gap of K. Similar analysis for the corresponding electron trace (data not shown)
gives a far smaller gap of K. As the density is lowered (Figure 18) and the interaction strength is increased (larger ), the fit to an exponential rise becomes poorer. But the traces are expected to
be (weakly) insulating for cm^−2, regardless of the presence of the electrons. The important result is the emergence of a strongly insulating state at large (). In Figure 19, for both layers appears
to saturate at the lowest temperatures. This is likely to an artefact of the electron temperature not reaching the thermometry temperature (Shubnikov-de Haas oscillation amplitude had saturated by
~100mK, in this measurement run).
7.1. Mismatched Densities
Matching the densities is not crucial to achieving the insulating state. In Figure 21, the hole density was held at cm^−2 with the electron density varied (cm^−2). This was chosen so that for the
lowest electron density cm^−2, both layers have similar resistivities at K. As the electron density is increased, both layers undergo a transition from metallic to insulating behaviour. The
transition occurs between 4 and cm^−2 when in this instance, far from matched densities. A transition to insulating behaviour as the density is raised is very striking and incompatible with a
disorder-driven mechanism; this strengthens the argument for an interlayer interaction-driven effect.
A similar experiment was performed (down to 50mK) where was fixed at cm^−2 and was varied (Figure 22). For the largest hole density (cm^−2), both layers are metallic, and as the hole density was
lowered, both become insulating. Some degree of matching appears to be required for the insulating state to occur, though one would expect that in the limit of becoming large relative to (and the
hole screening improving accordingly), they would behave as two isolated gases. Arrows in Figure 22 indicate the approximate points of transition to insulating behaviour, suggesting that there may be
no abrupt transition as the density is lowered, but a shift in transition temperature.
As the hole density is varied with the electron density held constant (cm^−2), varies monotonically for both layers across (Figure 23) within the insulating regime at 300mK. The electron resistance
does increase slightly as the hole density is lowered. Matching the densities exactly does not appear to play a significant role. In all the experiments performed, the insulating state appeared to
occur in both layers simultaneously, though at higher densities has little temperature dependence over the range measured.
7.2. Inhomogeneity
It is important to determine whether the emergent insulating state in the hole layer at large can be attributed to (device-driven) density inhomogeneity. If the hole gas were highly inhomogeneous,
the average resistance might be determined by high-density (low-resistance) regions, while the temperature dependence was dominated by low-density insulating regions. It can be established that
without the electron gas present, the hole gas is homogeneous, from the hole layer magnetoresistance (Figure 18). The Shubnikov-de Haas oscillations are periodic in and the reciprocal of the carrier
density. A variation in carrier density over the Hall bar region would smear the oscillation. In Figure 18, magnetoresistance traces are shown for four carrier densities, which clearly show
well-pronounced oscillations with minima that go to zero for the higher densities. At lower densities, the larger sheet resistance and shorter transport lifetime broaden the Landau levels, requiring
lower temperatures for them to be as clearly resolved. Considering only the higher two densities (8 and cm^−2), the resolution of the oscillations is consistent with the densities measured by the
Hall probes at each end of the Hall bar, which record a difference of cm^−2.
With the introduction of the electrons, verifying the homogeneity of the 2DHG with magnetoresistance is no longer possible. Indeed, if the insulating state corresponds to a density-modulated phase,
then it might be expected that the Shubnikov-de Haas oscillations would no longer be resolved due to the inherent spatial density-variation. Indeed, there are no strong oscillations at 300mK in the
hole layer, but normal oscillations persist in the electron layer at low fields.
If in-built inhomogeneity is the source of the insulating behaviour, it must only be present when the electrons are induced across the barrier. Variation in the thickness of a 10nm barrier will
cause spatial density fluctuations due to a change in interlayer capacitance. MBE growth is capable of producing interfaces that are smooth to a couple of monolayers (0.3nm for GaAs), giving a
possible variation of ~1nm in the barrier width. This results in a density fluctuation of %. (This is an overestimate as the appropriate distance corresponding to the interlayer capacitance is
~25nm for a 10nm barrier) and cannot force regions to become insulating (cm^−2) if the average density is cm^−2. Such inhomogeneity would also be mirrored equally in the electron layer and
detectable in the 2DEG magnetoresistance as described above. However, even in the insulating regime the Shubnikov-de Haas oscillations are clearly resolved at 50mK in the electron layer.
To go from to requires (as well as an increase in interlayer bias) a depleting backgate bias as opposed to an inducing one. It is unclear how backgate action (50μm away) can produce density
variation on the short-length scale required (<60μm, width of Hall bar/probes). The disagreement in density taken from the Hall slope at each end of the Hall bar is no worse with the electron layer
present, (still about cm^−2).
7.3. Two Component Plasma and the Significance of Unequal Electron and Hole Masses
Most of the early theoretical (and numerical) work on the EHBL made the simplifying assumption that the effective masses of the electrons and holes are equal. qSTLS was used by Moudgil et al. 2002 [
76] to study the ground state of electron-electron and electron-hole bilayers (). For the EHBL, a divergence for at small (CDW) was found for with a crossover to the WC state above (). Unlike [17], a
divergence in was found for the electron-electron bilayer. They found that the local fields in the electron-electron bilayer are weaker than the EHBL, and the density-modulated phases require larger
and smaller . The results were compared with diffusion Monte Carlo simulations performed on the EHBL by De Palo et al. [65], who were able to show a transition to an excitonic condensate state
(BCS-like state) and WC. The WC transition was in good agreement with the qSTLS data. However the CDW state was not considered by De Palo et al. [65].
Subsequent qSTLS work by Moudgil [77] studied the mass-asymmetric EHBL, with (appropriate to GaAs) and included the finite widths of the electron and hole gases. The mass-asymmetry pushes the CDW and
WC transitions to higher density, though the WC is found to exist only at an intermediate well separation, with the CDW favoured for smaller separations. The larger in the hole layer is found to be
significant, with Wigner crystallisation predicted at . Interestingly, the correlation in the hole layer () for the density-modulated phases is found to be stronger than in the electron layer ().
Including a finite QW width was found to lower the critical density for Wigner crystallisation to .
More recent work implemented Monte Carlo methods for studying mass-asymmetric EHBLs, with the mass ratio varied [78] between 1 and 100, with the interlayer separation () fixed. Electron densities
studied corresponded to , but for a large layer separation (nm in GaAs). They found that by increasing the mass ratio, the hole layer evolves from a homogeneous to a localised state (WC), with the
electron layer remaining in a relatively homogeneous state. Periodic structure in exists by , a ratio appropriate to GaAs. While these particle densities are considerably lower than those achieved in
these experiments, the ratios are similar at ~1.
These results provide a large contrast to the low densities predicted to be required for a WC to occur in a single 2D gas, by Tanatar and Ceperley (1989) [79]. This corresponds to an electron density
of cm^−2; this is difficult to achieve experimentally in GaAs while maintaining a high mobility such that localisation due to Coulomb repulsion can be distinguished from that driven by disorder.
Hole densities as low as cm^−2 (in GaAs/AlGaAs) have been reported [80], where a much larger is reached (relative to electrons), but at these densities, the 2DHG is in the insulating regime. But in
the EHBL, the density-modulated phases are predicted at experimentally accessible that lie within the “metallic” regime.
8. Conclusion
The idea of an excitonic condensate was put forward nearly forty years ago by Blatt et al. [81] and Moskalenko and Snoke [82]. But excitonic phases were initially thought to be necessarily insulating
and not accessible by transport because they consisted of charge neutral particles. However, the key experimental development that has radically changed this perspective is the ability to make
independent ohmic contacts to the electron-like and the hole-like parts of a system. Experimentally, 2x2DEGs, and 2x2DHGs in a magnetic field have shown striking evidence of transport by neutral
objects driven by counterflow currents [33–35]. More recently, electron-hole bilayers in zero magnetic field have shown evidence of an emerging non-Fermi liquid phase [51–53, 70]. In these systems,
excitonic phases and collective modes, characteristic of a 2-component plasma, may be competing in determining the ground state. It is very likely that this field will lead to exciting experimental
and theoretical results in near future.
The work reported here was funded by EPSRC (UK) under the Grants EP/D008506/1 and EP/H017720/1. A. F. Croxall acknowledges Trinity College for a fellowship. I. Farrer acknowledges Toshiba for
support. The authors acknowledge several useful discussions with D. Neilson and A. R. Hamilton.
1. Yu. E. Lozovik and V. I. Yudson, “A new mechanism for superconductivity: pairing between spatially separated electrons and holes,” Soviet Physics Journal of Experimental and Theoretical Physics,
vol. 44, p. 389, 1976.
2. Yu. E. Lozovik and V. I. Yudson, “Feasibility of superfluidity of paired spatially separated electrons and holes; a new superconductivity mechanism,” Journal of Experimental and Theoretical
Physics Letters, vol. 22, p. 274, 1975.
3. P. J. Price, “Hot electron effects in heterolayers,” Physica B+C, vol. 117-118, no. 2, pp. 750–752, 1983. View at Scopus
4. M. B. Pogrebinskii, “Mutual drag of carriers in a semiconductor-insulator-semiconductor system,” Soviet Physics: Semiconductors, vol. 11, no. 4, pp. 372–376, 1977. View at Scopus
5. A. P. Jauho and H. Smith, “Coulomb drag between parallel two-dimensional electron systems,” Physical Review B, vol. 47, no. 8, pp. 4420–4428, 1993. View at Publisher · View at Google Scholar ·
View at Scopus
6. G. Vignale and A. H. MacDonald, “Drag in paired electron-hole layers,” Physical Review Letters, vol. 76, no. 15, pp. 2786–2789, 1996. View at Scopus
7. S. Conti, G. Vignale, and A. H. MacDonald, “Engineering superfluidity in electron-hole double layers,” Physical Review B, vol. 57, no. 12, pp. R6846–R6849, 1998. View at Scopus
8. B. Y. K. Hu, “Prospecting for the superfluid transition in electron-hole coupled quantum wells using Coulomb drag,” Physical Review Letters, vol. 85, no. 4, pp. 820–823, 2000. View at Publisher ·
View at Google Scholar · View at Scopus
9. A. V. Balatsky, Y. N. Joglekar, and P. B. Littlewood, “Dipolar superfluidity in electron-hole bilayer systems,” Physical Review Letters, vol. 93, no. 26, Article ID 266801, 2004. View at
Publisher · View at Google Scholar · View at Scopus
10. Y. N. Joglekar, A. V. Balatsky, and M. P. Lilly, “Excitonic condensate and quasiparticle transport in electron-hole bilayer systems,” Physical Review B, vol. 72, no. 20, Article ID 205313, 6
pages, 2005. View at Publisher · View at Google Scholar
11. P. B. Littlewood and X. Zhu, “Possibilities for exciton condensation in semiconductor quantum-well structures,” Physica Scripta, vol. T68, p. 56, 1996.
12. J. Hubbard, “The description of collective motions in terms of many-body perturbation theory II. The correlation energy of a free electron gas,” Proceedings of the Royal Society of London A, vol.
243, p. 336, 1958.
13. K. S. Singwi, M. P. Tosi, R. H. Land, and A. Sjölander, “Electron correlations at metallic densities,” Physical Review, vol. 176, no. 2, pp. 589–599, 1968. View at Publisher · View at Google
Scholar · View at Scopus
14. K. S. Singwi and M. P. Tosi, “Correlations in electron liquids,” in Solid State Physics: Advances in Research and Applications, H. Ehrenreich, Ed., vol. 36, pp. 177–266, Academic Press, New York,
NY, USA, 1981.
15. L. Zheng and A. H. MacDonald, “Correlation in double-layer two-dimensional electron-gas systems: Singwi-Tosi-Land-Sjölander theory at B=0,” Physical Review B, vol. 49, no. 8, pp. 5522–5530, 1994.
View at Publisher · View at Google Scholar · View at Scopus
16. L. Świerkowski, J. Szymański, and Z. W. Gortel, “Coupled electron-hole transport: beyond the mean field approximation,” Physical Review Letters, vol. 74, no. 16, pp. 3245–3248, 1995. View at
Publisher · View at Google Scholar · View at Scopus
17. L. Liu, L. Świerkowski, D. Neilson, and J. Szymański, “Static and dynamic properties of coupled electron-electron and electron-hole layers,” Physical Review B, vol. 53, no. 12, pp. 7923–7931,
1996. View at Scopus
18. L. Liu, L. Świerkowski, and D. Neilson, “Exciton and charge density wave formation in spatially separated electron - Hole liquids,” Physica B, vol. 249–251, pp. 594–597, 1998. View at Scopus
19. U. Sivan, P. M. Solomon, and H. Shtrikman, “Coupled electron-hole transport,” Physical Review Letters, vol. 68, no. 8, pp. 1196–1199, 1992. View at Publisher · View at Google Scholar · View at
20. B. E. Kane, J. P. Eisenstein, W. Wegscheider, L. N. Pfeiffer, and K. W. West, “Separately contacted electron-hole double layer in a $\text{GaAs}/{\text{Al}}_{x}{\text{Ga}}_{1-x}\text{As}$
heterostructure,” Applied Physics Letters, vol. 65, no. 25, pp. 3266–3268, 1994. View at Publisher · View at Google Scholar · View at Scopus
21. H. Rubel, A. Fischer, W. Dietsche, K. von Klitzing, and K. Eberl, “Fabrication of indepedently contacted and tuneable 2D electron-hole systems in GaAs-AlGaAs double quantum wells,” Materials
Science and Engineering B, vol. 51, p. 205, 1998.
22. M. Pohlt, M. Lynass, J. G. S. Lok et al., “Closely spaced and separately contacted two-dimensional electron and hole gases by in situ focused-ion implantation,” Applied Physics Letters, vol. 80,
no. 12, p. 2105, 2002. View at Publisher · View at Google Scholar · View at Scopus
23. J. A. Keogh, K. Das Gupta, H. E. Beere, D. A. Ritchie, and M. Pepper, “Fabrication of closely spaced, independently contacted electron-hole bilayers in GaAs-AlGaAs heterostructures,” Applied
Physics Letters, vol. 87, no. 20, Article ID 202104, 3 pages, 2005. View at Publisher · View at Google Scholar · View at Scopus
24. J. A. Seamons, D. R. Tibbetts, J. L. Reno, and M. P. Lilly, “Undoped electron-hole bilayers in a GaAs/AlGaAs double quantum well,” Applied Physics Letters, vol. 90, no. 5, Article ID 052103,
2007. View at Publisher · View at Google Scholar · View at Scopus
25. J. P. Eisenstein, L. N. Pfieffer, and K. W. West, “Independently contacted 2-dimensional electron systems in double quantum wells,” Applied Physics Letters, vol. 57, p. 2324, 1990.
26. A. F. Croxall, K. Das Gupta, C. A. Nicoll et al., “Patterned backgating using single-sided mask aligners: application to density-matched electron-hole bilayers,” Journal of Applied Physics, vol.
104, no. 11, Article ID 113715, 2008. View at Publisher · View at Google Scholar
27. M. V. Weckwerth, J. A. Simmons, N. E. Harff et al., “Epoxy bond and stop-etch (EBASE) technique enabling backside processing of (AI)GaAs heterostructures,” Superlattices and Microstructures, vol.
20, no. 4, pp. 561–567, 1996. View at Publisher · View at Google Scholar
28. N. P. R. Hill, J. T. Nicholls, E. H. Linfield et al., “Correlation effects on the coupled plasmon modes of a double quantum well,” Physical Review Letters, vol. 78, no. 11, pp. 2204–2207, 1997.
29. M. Prunnila, S. J. Laakso, J. M. Kivioja, and J. Ahopelto, “Electrons and holes in Si quantum well: a room-temperature transport and drag resistance study,” Applied Physics Letters, vol. 93, no.
11, Article ID 112113, 2008. View at Publisher · View at Google Scholar
30. K. Takashina, K. Nishiguchi, Y. Ono et al., “Electrons and holes in a 40nm thick silicon slab at cryogenic temperatures,” Applied Physics Letters, vol. 94, no. 14, Article ID 142104, 2009. View
at Publisher · View at Google Scholar · View at Scopus
31. I. Farrer, A. F. Croxall, K. D. Gupta et al., “MBE growth and patterned backgating of electron-hole bilayer structures,” Journal of Crystal Growth, vol. 311, no. 7, pp. 1988–1993, 2009. View at
Publisher · View at Google Scholar · View at Scopus
32. J. P. Eisenstein and A. H. MacDonald, “Bose-Einstein condensation of excitons in bilayer electron systems,” Nature, vol. 432, no. 7018, pp. 691–694, 2004. View at Publisher · View at Google
Scholar · View at PubMed
33. M. Kellogg, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Vanishing hall resistance at high magnetic field in a double-layer two-dimensional electron system,” Physical Review Letters, vol.
93, no. 3, Article ID 036801, 2004. View at Publisher · View at Google Scholar
34. L. Tiemann, J. G. S. Lok, W. Dietsche et al., “Exciton condensate at a total filling factor of one in Corbino two-dimensional electron bilayers,” Physical Review B, vol. 77, no. 3, Article ID
033306, 2008. View at Publisher · View at Google Scholar
35. E. Tutuc, M. Shayegan, and D. A. Huse, “Counterflow measurements in strongly correlated GaAs hole bilayers: evidence for electron-hole pairing,” Physical Review Letters, vol. 93, no. 3, Article
ID 036802, 2004. View at Publisher · View at Google Scholar
36. T. J. Gramila, J. P. Eisenstein, A. H. MacDonald, L. N. Pfeiffer, and K. W. West, “Mutual friction between parallel two-dimensional electron systems,” Physical Review Letters, vol. 66, no. 9, pp.
1216–1219, 1991. View at Publisher · View at Google Scholar
37. M. C. Bønsager, K. Flensberg, B. Y. K. Hu, and A. H. MacDonald, “Frictional drag between quantum wells mediated by phonon exchange,” Physical Review B, vol. 57, no. 12, pp. 7085–7102, 1998.
38. A. Yurtsever, V. Moldoveanu, and B. Tanatar, “Many-body effects in the Coulomb drag between low density electron layers,” Solid State Communications, vol. 125, no. 11-12, pp. 575–579, 2003. View
at Publisher · View at Google Scholar
39. R. Asgari, B. Tanatar, and B. Davoudi, “Comparative study of screened interlayer interactions in the Coulomb drag effect in bilayer electron systems,” Physical Review B, vol. 77, no. 11, Article
ID 115301, 2008. View at Publisher · View at Google Scholar
40. S. Das Sarma and E. H. Hwang, “In-plane magnetodrag in dilute bilayer two-dimensional systems: a Fermi-liquid theory,” Physical Review B, vol. 71, no. 19, Article ID 195322, 5 pages, 2005. View
at Publisher · View at Google Scholar
41. E. H. Hwang and S. Das Sarma, “Transport and drag in undoped electron-hole bilayers,” Physical Review B, vol. 78, no. 7, Article ID 075430, 2008. View at Publisher · View at Google Scholar
42. F. Stern, “Polarizability of a two-dimensional electron gas,” Physical Review Letters, vol. 18, no. 14, pp. 546–548, 1967. View at Publisher · View at Google Scholar
43. L. Świerkowski, J. Szymański, and Z. W. Gortel, “Linear-response theory for multicomponent fermion systems and its application to transresistance in two-layer semiconductor structures,” Physical
Review B, vol. 55, no. 4, pp. 2280–2292, 1997.
44. S. Das Sarma and A. Madhukar, “Collective modes of spatially separated, two-component, two-dimensional plasma in solids,” Physical Review B, vol. 23, no. 2, pp. 805–815, 1981. View at Publisher ·
View at Google Scholar
45. B. Y. K. Hu and J. W. Wilkins, “Two-stream instabilities in solid-state plasmas caused by conventional and unconventional mechanisms,” Physical Review B, vol. 43, no. 17, pp. 14009–14029, 1991.
View at Publisher · View at Google Scholar
46. M. Kellogg, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Evidence for 2k electron-electron scattering processes in Coulomb drag,” Solid State Communications, vol. 123, no. 12, pp. 515–519,
2002. View at Publisher · View at Google Scholar
47. E. H. Hwang, S. Das Sarma, V. Braude, and A. Stern, “Frictional drag in dilute bilayer 2D hole systems,” Physical Review Letters, vol. 90, no. 8, Article ID 086801, 4 pages, 2003.
48. R. Pillarisetty, H. Noh, D. C. Tsui, E. P. De Poortere, E. Tutuc, and M. Shayegan, “Frictional drag between two dilute two-dimensional hole layers,” Physical Review Letters, vol. 89, no. 1,
Article ID 016805, 2002.
49. C. Hodges, H. Smith, and J. W. Wilkins, “Effect of fermi surface geometry on electron-electron scattering,” Physical Review B, vol. 4, no. 2, pp. 302–311, 1971. View at Publisher · View at Google
50. L. Zheng and A. H. MacDonald, “Coulomb drag between disordered two-dimensional electron-gas layers,” Physical Review B, vol. 48, no. 11, pp. 8203–8209, 1993. View at Publisher · View at Google
51. J. A. Seamons, C. P. Morath, J. L. Reno, and M. P. Lilly, “Coulomb drag in the exciton regime in electron-hole bilayers,” Physical Review Letters, vol. 102, no. 2, Article ID 026804, 2009. View
at Publisher · View at Google Scholar
52. C. P. Morath, J. A. Seamons, J. L. Reno, and M. P. Lilly, “Density imbalance effect on the Coulomb drag upturn in an undoped electron-hole bilayer,” Physical Review B, vol. 79, no. 4, Article ID
041305, 2009. View at Publisher · View at Google Scholar
53. A. F. Croxall, K. Das Gupta, C. A. Nicoll et al., “Anomalous coulomb drag in electron-hole bilayers,” Physical Review Letters, vol. 101, no. 24, Article ID 246801, 2008. View at Publisher · View
at Google Scholar
54. H. B. G. Casimir, “On Onsager's principle of microscopic reversibility,” Reviews of Modern Physics, vol. 17, no. 2-3, pp. 343–350, 1945. View at Publisher · View at Google Scholar
55. J. Szymański, L. Wierkowski, and D. Neilson, “Correlations in coupled layers of electrons and holes,” Physical Review B, vol. 50, no. 15, pp. 11002–11007, 1994. View at Publisher · View at Google
Scholar · View at Scopus
56. M. J. Kellogg, Evidence for excitonic superfluidity in a two dimensional electron system, Ph.D. thesis, California Institute of Technology, 2005.
57. C. P. Morath, J. A. Seamons, J. L. Reno, and M. P. Lilly, “Layer interdependence of transport in an undoped electron-hole bilayer,” Physical Review B, vol. 78, no. 11, Article ID 115318, 2008.
View at Publisher · View at Google Scholar · View at Scopus
58. A. Levchenko and A. Kamenev, “Coulomb drag at zero temperature,” Physical Review Letters, vol. 100, no. 2, Article ID 026805, 2008. View at Publisher · View at Google Scholar · View at Scopus
59. M. P. Lilly, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, “Coulomb drag in the extreme quantum limit,” Physical Review Letters, vol. 80, no. 8, pp. 1714–1717, 1998. View at Scopus
60. X. G. Feng, S. Zelakiewicz, H. Noh et al., “Negative electron drag and holelike behavior in the integer quantum hall regime,” Physical Review Letters, vol. 81, no. 15, pp. 3219–3222, 1998. View
at Scopus
61. N. P. R. Hill, J. T. Nicholls, E. H. Linfield et al., “Electron-electron scattering between closely spaced two-dimensional electron gases,” Physica B, vol. 249–251, pp. 868–872, 1998. View at
62. J. G. S. Lok, S. Kraus, M. Pohlt et al., “Spin effects in the magnetodrag between double quantum wells,” Physical Review B, vol. 63, no. 4, Article ID 041305, 4 pages, 2001.
63. K. Muraki, J. G. S. Lok, S. Kraus et al., “Coulomb drag as a probe of the nature of compressible states in a magnetic field,” Physical Review Letters, vol. 92, no. 24, Article ID 246801, 2004.
View at Publisher · View at Google Scholar · View at Scopus
64. A. Alkauskas, K. Flensberg, B. Y. K. Hu, and A. P. Jauho, “Sign reversal of drag in bilayer systems with in-plane periodic potential modulation,” Physical Review B, vol. 66, no. 20, Article ID
201304, 2002. View at Scopus
65. S. De Palo, F. Rapisarda, and G. Senatore, “Excitonic condensation in a symmetric electron-hole bilayer,” Physical Review Letters, vol. 88, no. 20, Article ID 206401, 4 pages, 2002. View at
66. L. Onsager, “Reciprocal relations in irreversible processes. I,” Physical Review, vol. 37, no. 4, pp. 405–426, 1931. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View
at Scopus
67. L. Onsager, “Reciprocal relations in irreversible processes. II,” Physical Review, vol. 38, no. 12, pp. 2265–2279, 1931. View at Publisher · View at Google Scholar · View at Scopus
68. A. P. Mills Jr., A. P. Ramirez, L. N. Pfeiffer, and K. W. West, “Nonmonotonic temperature-dependent resistance in low density 2D hole gases,” Physical Review Letters, vol. 83, no. 14, pp.
2805–2808, 1999.
69. S. Das Sarma and E. H. Hwang, “Calculated temperature-dependent resistance in low-density two-dimensional hole gases in GaAs heterostructures,” Physical Review B, vol. 61, no. 12, pp.
R7838–R7841, 2000.
70. A. F. Croxall, K. Das Gupta, C. A. Nicoll et al., “Possible effect of collective modes in zero magnetic field transport in an electron-hole bilayer,” Physical Review B, vol. 80, no. 12, Article
ID 125323, 2009. View at Publisher · View at Google Scholar · View at Scopus
71. A. F. Croxall, K. Das Gupta, C. A. Nicoll et al., “Towards the ground state of an electron-hole bilayer,” Physica E, vol. 42, no. 4, pp. 1247–1250, 2010. View at Publisher · View at Google
Scholar · View at Scopus
72. J. Huang, D. S. Novikov, D. C. Tsui, L. N. Pfieffer, and K. W. West, “Interaction effects in the transport of two dimensional holes in GaAs,” http://arxiv.org/abs/cond-mat/0610320.
73. L. H. Ho, W. R. Clarke, A. P. Micolich et al., “Effect of screening long-range Coulomb interactions on the metallic behavior in two-dimensional hole systems,” Physical Review B, vol. 77, no. 20,
Article ID 201402, 2008. View at Publisher · View at Google Scholar
74. L. Świerkowski, J. Szymański, and Z. W. Gortel, “Intrinsic limits on carrier mobilities in double-layer systems,” Journal of Physics Condensed Matter, vol. 8, no. 18, pp. L295–L300, 1996.
75. G. Bergman, “Weak Localisation in thin films: a time of flight experiment with conduction electrons,” Physics Reports, vol. 107, no. 1, pp. 1–58, 1984.
76. R. K. Moudgil, G. Senatore, and L. K. Saini, “Dynamic correlations in symmetric electron-electron and electron-hole bilayers,” Physical Review B, vol. 66, no. 20, Article ID 205316, 10 pages,
77. R. K. Moudgil, “Coupled electron-hole quantum well structure: mass asymmetry and finite width effects,” Journal of Physics Condensed Matter, vol. 18, no. 4, pp. 1285–1301, 2006. View at Publisher
· View at Google Scholar
78. P. Ludwig, A. Filinov, YU. E. Lozovik, H. Stolz, and M. Bonitz, “Crystallization in mass-asymmetric electron-hole bilayers,” Contributions to Plasma Physics, vol. 47, no. 4-5, pp. 335–344, 2007.
View at Publisher · View at Google Scholar
79. B. Tanatar and D. M. Ceperley, “Ground state of the two-dimensional electron gas,” Physical Review B, vol. 39, no. 8, pp. 5005–5016, 1989. View at Publisher · View at Google Scholar
80. J. Huang, D. S. Novikov, D. C. Tsui, L. N. Pfeiffer, and K. W. West, “Nonactivated transport of strongly interacting two-dimensional holes in GaAs,” Physical Review B, vol. 74, no. 20, Article ID
201302, 2006. View at Publisher · View at Google Scholar
81. J. M. Blatt, K. W. Böer, and W. Brandt, “Bose-einstein condensation of excitons,” Physical Review, vol. 126, no. 5, pp. 1691–1692, 1962. View at Publisher · View at Google Scholar
82. S. A. Moskalenko and D. W. Snoke, Bose-Einstein Condensation of Excitons and Biexcitons and Coherent Nonlinear Optics with Excitons, Cambridge University Press, Cambridge, UK, 2000.
|
{"url":"http://www.hindawi.com/journals/acmp/2011/727958/","timestamp":"2014-04-19T21:02:24Z","content_type":null,"content_length":"530386","record_id":"<urn:uuid:e0d878ae-7df9-452d-bb48-911b8b4a43e0>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diffusion sample paths as deformed Brownian sample paths
up vote 6 down vote favorite
Suppose $X$ is a non-explosive diffusion with dynamics
$dX_t = \mu(X_t)dt + \sigma(X_t)dW_t$,
where $W$ is a standard Brownian motion. My intuition about $X$ is that if $\mu$ and $\sigma$ are sufficiently nice, then the sample paths of $X$ are in some sense "deformed" sample paths of $W$. Is
there any way to formalise this idea? For example, is it possible to define a suitable topology on sample paths of $W$ and construct diffusion sample paths $X(\omega)$ as homeomorphisms of $W(\omega)
Part of the motivation for this question comes from the observation that it's possible to do something very similar in the discrete-time case. Given the Euler approximation
$\Delta X_{t+1} = \mu(X_t)\Delta t + \sigma(X_t)\sqrt{\Delta t} W_t $
with $W_t \sim N(0,1)$, then if one knows the values of $\Delta X_t$, then one can unambiguously recover the driving noise $W$. In that sense, one can view $X$ as a transformed version of $W$.
pr.probability stochastic-processes
1 A (one-dimensional) diffusion is expressed fairly explicitly in terms of so-called "scale measure" and "speed measure", you can easily find the formula in old classical textbooks. – zhoraster Jan
24 '11 at 18:30
@Zhoraster Thanks - I'll work through the material in Karatzas & Shreve. – Simon Lyons Jan 24 '11 at 18:32
I mean, expressed as a transform of a Wiener process path. – zhoraster Jan 24 '11 at 18:33
Just give a shout if you don't find the formula, I'll find a reference. – zhoraster Jan 24 '11 at 18:34
add comment
6 Answers
active oldest votes
This is an interesting question
I don't have the complete answer to your question rather some leads about it, but it seems to me that what you need to show for solutions of your SDE, is that the natural filtrations of $X$
is the same as the natural filtration of $W$.
If you have this done, then by some kind of Doob's Lemma you should be able to write your Brownian path with respect to a "measurable" functional of the path of $X$ (i.e. $W_t=f((X_s,s\le
t))$ for some functional $f$). This not a constructive way of showing the result though (i.e. you only have existence)
up vote
4 down Anyway I think this is not the case for very broad class of SDEs even if I don't have a counterexample at hand, but there must be some litterature about this (maybe in Revuz and Yor's book).
You can also look at the Lamperti's Transform (beware I think that there two Lamperti transform in the litterature), which says that under some conditions you can transform a SDE of the form
: $dX_t=\sigma(X_t,t)dW_t$ into some SDE of the form $dX_t=\mu_{\sigma}(X_t,t)dt+dB_t$ but I can't remember if this is done in a path-by-path way (i.e. if B_t =f(W.) where f is a function
over the path space). You should have a look at the proof yourself. Here is a paper where the Lamperti's Transform is disccussed "Moller, Madsen - From State Dependent Diffusion to Constant
Diffusion in SDEs by the Lamperti Transformation" and the references therein.
Best Regards and let us know if you find something interesting
Thanks, this is the type of idea I was looking for. Yes, I can see that this kind of strategy will not work for many SDEs. Perhaps one could construct a counterexample based on Tanaka's
formula. I'll take a look at the Lamperti transform. – Simon Lyons Jan 25 '11 at 15:56
The pleasure is mine if I was of any help – The Bridge Jan 25 '11 at 16:10
Simon, you are right, for $X_t=|B_t|$ it seems difficult to rebuilt $B_t$ from $X_t$'s paths. – The Bridge Jan 25 '11 at 17:49
add comment
As other have said, in the one dimensional case at least, you can suppose that the volatility is constant. Then the solution of the SDE is nothing else than a solution of the integral
equation $$X(t) = \int_0^t \mu(X_s) ds + \sigma W_t \qquad \forall t \in [0,T].$$ You can then check that if $\mu(\cdot)$ is a Lipschitz function, say, then the function $\Psi$ that maps $
up vote (W_t)_{t \in [0,T]}$ to the solution of the above integral equation is continuous (Gronwall Lemma) on $C([0,T],\mathbb{R})$ with the supremum norm. Hence you can indeed write $X = \Psi(W)$
3 down and see the path $(X_t)_{t \in [0,T]}$ as a 'deformation' of the Brownian path $(W_t)_{t \in [0,T]}$. The function $\Psi: C([0,T],\mathbb{R}) \to C([0,T],\mathbb{R})$ is sometimes called the
vote 'Ito map' in the literature.
add comment
You should probably look at the Girsanov's theorem http://en.wikipedia.org/wiki/Girsanov_theorem
up vote 1 The process $X$ is a probability distrubution on the space of continuous functions, so is the Wiener process $W$. Girsanov' theorem states that the distribuitons of $X$ and $W$ are
down vote equivalent, and gives explicitly the Radon-Nykodim derivative.
Hm, Girsanov's theorem isn't really what I'm looking for. It tells you how to transform the laws of $X$ and $W$, whereas I'm looking for a transform on the paths themselves, if such a
thing exists. That said, it wasn't me who voted your answer down. – Simon Lyons Jan 24 '11 at 20:15
add comment
1. If you want to have $X_t$ as a "deformed" $W_t$ - at first I advise to assume $\sigma\neq 0$ a.s. Otherwise you will have some problems (really in such points you may have almost
deterministic dynamics).
2. If $\mu = 0$ then you can just change the time since all continuous martingales are time-transformed Brownian motion (it seems to me that zhoraster talked about something closer).
up vote 1
down vote 3. If $\mu \neq 0$ then you can either apply change of measure +change of time, but since you do not want to apply the change of measure - please make your question more precise. What do
you want? If we have the function $t^2$ and $t$ are their paths "homeomorphic"? The same question I would like to ask you about all functions of bounded variation.
add comment
Another way to approach the problem is as follows.
One can notice that $X$ is a semimartingale (probably under some mild assumptions on $\sigma,\mu$). The martingale $M$ part of $X$ can be represented as
up vote 0 down vote
$$M_t = B_{< M,M>_t},$$ where $B$ is some Brownian motion. This is know as "Dambis, Dubins-Schwarz Theorem" (e.g. see Chapter V of Revuz, Yor - "Continous martingales ...").
add comment
Suppose $\mu$ and $\sigma$ are sufficiently well-behaved so that we may define $$ B_t := - \int_0^t\frac{\mu(X_s)}{\sigma(X_s)}\\,ds + \int_0^t \frac1{\sigma(X_s)}\\,dX_s. $$ This should be
the case, for example, if $\mu$ and $\sigma$ are globally Lipschitz with linear growth and $|\sigma|$ is bounded below. We then obtain $$ B_t = - \int_0^t\frac{\mu(X_s)}{\sigma(X_s)}\\,ds +
up vote \int_0^t\frac{\mu(X_s)}{\sigma(X_s)}\\,ds + \int_0^t \\,dW_s = W_t. $$ If $\sigma$ is continuous, then the stochastic integral in the definition of $B$ can be realized as a limit in
0 down probability of left-endpoint Riemann sums, and so it is evident that $B$ (i.e. $W$) is adapted to the natural filtration of $X$. Since $X$ is also adapted to the natural filtration of $W$,
vote these two filtrations must be the same.
add comment
Not the answer you're looking for? Browse other questions tagged pr.probability stochastic-processes or ask your own question.
|
{"url":"http://mathoverflow.net/questions/53073/diffusion-sample-paths-as-deformed-brownian-sample-paths?sort=votes","timestamp":"2014-04-20T21:10:36Z","content_type":null,"content_length":"78047","record_id":"<urn:uuid:d45a98c3-c585-4e84-ae8e-2f76eeb7d719>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Noise-insensitive image optimal flow estimation using higher-order statistics
A new algorithm is presented that estimates the displacement vector field from two successive image frames. In the case where the sequence is severely corrupted by additive (Gaussian or not, colored)
noise of unknown covariance, then second-order statistics methods do not work well. However, we have studied this topic from a viewpoint different from the above to explore the fundamental limits in
image optimal flow estimation. Our scheme is based on subpixel optimal flow estimation using the bispectrum in the parametric domain. The displacement vector of a moving object is estimated by
solving linear equations involving third-order holograms and the matrix containing the Dirac delta function. To prove the feasibility of the proposed method, we compared it with a phase correlation
technique and the nonparametric bispectrum method described in Res. Lett. Signal Process., ID 417915 (2008) . Our results show that our method is considerably more immune to the presence of noise.
© 2009 Optical Society of America
OCIS Codes
(100.4145) Image processing : Motion, hyperspectral image processing
(110.4153) Imaging systems : Motion estimation and optical flow
ToC Category:
Imaging Systems
Original Manuscript: May 20, 2008
Revised Manuscript: November 21, 2008
Manuscript Accepted: December 18, 2008
Published: April 20, 2009
Virtual Issues
Vol. 4, Iss. 7 Virtual Journal for Biomedical Optics
El Mehdi Ismaili Alaoui, Elhassane Ibn-elhaj, and El Houssaine Bouyakhf, "Noise-insensitive image optimal flow estimation using higher-order statistics," J. Opt. Soc. Am. A 26, 1212-1220 (2009)
Sort: Year | Journal | Reset
1. R. M. Armitano, R. W. Schafer, F. L. Kitson, and V. Bhaskaran, “Robust block-matching motion-estimation technique for noisy sources,” in Proceedings of 1997 IEEE International Conference on
Acoustics, Speech and Signal Processing (IEEE, 1997), pp. 2685-2688. [CrossRef]
2. S. Bhattacharya, N. C. Ray, and S. Sinha, “2-D signal modelling and reconstruction using third-order cumulants,” Signal Process. 62, 61-72 (1997). [CrossRef]
3. E. M. Ismaili Aalaoui and E. Ibn-Elhaj, “Estimation of subpixel motion using bispectrum,” Res. Lett. Signal Process. , ID 417915 (2008).
4. J. M. Anderson and G. B. Giannakis, “Image motion estimation algorithms using cumulants,” IEEE Trans. Image Process. 4, 346-357 (1995). [CrossRef] [PubMed]
5. R. P. Kleihorst, R. L. Lagendijk, and J. Biemond, “Noise reduction of severely corrupted image sequences,” in Proceedings of 1993 IEEE International Conference on Acoustics, Speech, and Signal
Processing (IEEE, 1993), pp. 293-296. [CrossRef]
6. E. Ibn-elhaj, D. Aboutajdine, S. Pateux, and L. Morin, “HOS-based method of global motion estimation for noisy image sequences,” Electron. Lett. 35, 1320-1322 (1999). [CrossRef]
7. E. Sayrol, A. Gasull, and J. R. Fonollosa, “Motion estimation using higher order statistics,” IEEE Trans. Image Process. 5, 1077-1084 (1996). [CrossRef] [PubMed]
8. A. N. Netravali and J. D. Robbins “Motion-compensated television coding: Part I,” Bell Syst. Tech. J. 58, 629-668 (1979).
9. V. Murino, C. Ottonello, and S. Pagnan, “Noisy texture classification: A higher-order statistics approach,” Pattern Recogn. 31, 383-393 (1998). [CrossRef]
10. M. R. Raghuveer and C. L. Nikias, “Bispectrum estimation: A parametric approach,” IEEE Trans. Acoust., Speech, Signal Process. ASSP-33, 1213-1230 (1985). [CrossRef]
11. G. B. Giannakis, “On the identifiability of non Gaussian ARMA models using cumulants,” IEEE Trans. Autom. Control 35, 18-26 (1990). [CrossRef]
12. J. M. Mendel, “Tutorial on higher order statistics (spectra) in signal processing and systems theory: Theoretical results and some applications,” Proc. IEEE 79, 278-305 (1991). [CrossRef]
13. C. L. Nikias and R. Pan, “Time delay estimation in unknown Gaussian spatially correlated noise,” IEEE Trans. Acoust., Speech, Signal Process. ASSP-36, 1706-1714 (1988). [CrossRef]
14. B. M. Sadler and G. B. Giannakis, “Shift- and rotation-invariant object reconstruction using the bispectrum,” J. Opt. Soc. Am. A 9, 57-69 (1992). [CrossRef]
15. J. Heikkilä, “Image scale and rotation from the phase-only bispectrum,” in Proceedings of the 2004 IEEE International Conference on Image Processing (IEEE, 2004).
16. A. P. Petropulu and H. Pozidis, “Phase reconstuction from bispectrum slices,” IEEE Trans. Image Process. 46, 527-530 (1998).
17. C. L. Nikias and A. P. Petropulu, Higher-Order Spectra Analysis: A Nonlinear Signal Processing Framework (Prentice-Hall, 1993).
18. Y. T. Chan, “Notes on: Time delay estimation, ARMA processes, tracking filters” (Department of Electrical Engineering, Royal Military College Canada, Kingston, Ontario, Canada K7L2W3, April
19. G. Madec, “Half pixel accuracy in blockmatching,” in Proceedings of the Picture Coding Symposium (PCS 90), Cambridge, Massachusetts, USA, March 1990.
20. E. M. Ismaili Aalaoui and E. Ibn-Elhaj, “Estimation of motion fields from noisy image sequences using generalized cross-correlation methods,” in Proceedings of IEEE International Conference on
Signal Processing and Communications 2007 (IEEE, 2007).
21. E. M. Ismaili Aalaoui and E. Ibn-Elhaj, “Estimation of displacement vector field from noisy data using maximum likelihood estimator,” in 14th IEEE International Conference on Electronics,
Circuits and Systems (IEEE, 2007).
22. W. K. Pratt, Digital Image Processing, PIKS Scientific Inside, 4th ed. (Wiley, 2007).
23. S. G. Johnson and M. Frigo, “A modified split-radix FFT with fewer arithmetic operations,” IEEE Trans. Signal Process. 55, 111-119 (2007). [CrossRef]
24. K. S. Lii and K. N. Helland, “Cross bispectrum computation and variance estimation,” ACM Trans. Math. Softw. 7, 284-294 (1981). [CrossRef]
25. J.-M. L. Caillec and R. Garello, “Comparison of statistical indices using third order statistics for nonlinearity detection,” Signal Process. 84, 499-525 (2004). [CrossRef]
26. S. A. Kruger and A. D. Calway, “A multiresolution frequency domain method for estimating affine motion parameters,” in 1996 Proceedings of International Conference on Image Processing (IEEE,
1996), Vol. 1, 113-116. [CrossRef]
27. A. K. Jain, Fundamentals of Digital Image Processing (Prentice Hall, 1989).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article »
|
{"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-26-5-1212","timestamp":"2014-04-16T15:21:13Z","content_type":null,"content_length":"145188","record_id":"<urn:uuid:537a0014-77ed-40c9-9cd2-bff30f1f92f4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pricing American options by si
Mark Broadie
“Pricing American options by simulation using a stochastic mesh with optimized weights”
Coauthor(s): Paul Glasserman, Zachary Ha.
Editors: Stanislav P. Uryasev
This chapter develops a simulation method for pricing path-dependent American options, and American options on a large number of underlying assets, such as basket options. Standard numerical
procedures (lattice methods and nite difference methods) are generally inapplicable to such high-dimensional problems, and this has motivated research into simulation-based methods. The optimal
stopping problem embedded in the pricing of American options makes this a nonstandard problem for simulation. This chapter extends the stochastic mesh introduced in Broadie and Glasserman. In its
original form, the stochastic mesh method required knowledge of the transition density of the underlying process of asset prices and other state variables. This chapter extends the method to settings
in which the transition density is either unknown or fails to exist. We avoid the need for a transition density by choosing mesh weights through a constrained optimization problem. If the weights are
constrained to correctly price su ciently many simple instruments, they can be expected to work well in pricing a more complex American option. We investigate two criteria for use in the optimization
| maximum entropy and least squares. The methods are illustrated through numerical examples.
Source: Probabilistic constrained optimization: Methodology and applications
Exact Citation:
Broadie, Mark, Paul Glasserman, and Z. Ha. "Pricing American options by simulation using a stochastic mesh with optimized weights." In Probabilistic constrained optimization: Methodology and
applications, 26-44. Ed. Stanislav P. Uryasev. Dorwell, MA: Kluwer, 2000.
Pages: 26-44
Place: Dorwell, MA
Date: 2000
|
{"url":"http://www0.gsb.columbia.edu/whoswho/more.cfm?uni=mnb2&pub=3976","timestamp":"2014-04-18T06:59:15Z","content_type":null,"content_length":"4873","record_id":"<urn:uuid:fddd32d3-b547-4c3c-8c13-04fbab0c2bb6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elizabethport, NJ Science Tutor
Find an Elizabethport, NJ Science Tutor
...I have helped students prepare for MCAT exams and DAT exams as well as the math and chemistry classes during the academic year. I am patient, attentive, and I believe in teaching through
practice problems versus just lecturing. I have flexible hours and am willing to travel within NYC.
2 Subjects: including chemistry, organic chemistry
Hi everyone! I'm a Princeton graduate in Mechanical Engineering specializing in math, science, and test prep. I scored 790M/780W/760CR on my SATs; I am a National Merit Finalist for the PSAT, and
I earned perfect 5s on: Physics C E&M, Calculus BC, Physics C Mech, Biology, Psychology, Physics B, and English Lang.
26 Subjects: including mechanical engineering, psychology, ACT Science, English
...I fell in love with genetics when I was taking my basic science classes for my MD degree. I can help students relate their knowledge of biochemistry to genetics. Organic chemistry is like
33 Subjects: including psychology, general music, geometry, physics
...My areas of expertise are Ancient Philosophy and contemporary analytic philosophy. I have been tutoring in philosophy for several years at the Universities of Oxford and New York University. I
am happy to provide my student evaluations upon request.
2 Subjects: including philosophy, Italian
...Knowledge is power and it transformed me in many ways. I have had the pleasure of helping hundreds and hundreds of students for the last 15 years and see them improve. I love to see students
empowered, realize their own potential and master challenges that they never thought possible before.
55 Subjects: including nursing, calculus, SAT math, anatomy
Related Elizabethport, NJ Tutors
Elizabethport, NJ Accounting Tutors
Elizabethport, NJ ACT Tutors
Elizabethport, NJ Algebra Tutors
Elizabethport, NJ Algebra 2 Tutors
Elizabethport, NJ Calculus Tutors
Elizabethport, NJ Geometry Tutors
Elizabethport, NJ Math Tutors
Elizabethport, NJ Prealgebra Tutors
Elizabethport, NJ Precalculus Tutors
Elizabethport, NJ SAT Tutors
Elizabethport, NJ SAT Math Tutors
Elizabethport, NJ Science Tutors
Elizabethport, NJ Statistics Tutors
Elizabethport, NJ Trigonometry Tutors
Nearby Cities With Science Tutor
Avenel Science Tutors
East Newark, NJ Science Tutors
Elizabeth, NJ Science Tutors
Hillside, NJ Science Tutors
Kenilworth, NJ Science Tutors
Linden, NJ Science Tutors
Midtown, NJ Science Tutors
Millburn Science Tutors
North Elizabeth, NJ Science Tutors
Parkandbush, NJ Science Tutors
Peterstown, NJ Science Tutors
Rahway Science Tutors
Roselle Park Science Tutors
Roselle, NJ Science Tutors
Union Square, NJ Science Tutors
|
{"url":"http://www.purplemath.com/Elizabethport_NJ_Science_tutors.php","timestamp":"2014-04-19T19:47:25Z","content_type":null,"content_length":"23944","record_id":"<urn:uuid:60406433-fe06-4156-9c9a-ecbd7496ded7>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Scalars and Operators - devshed
Scalars and Operators
In this second part of a five-part series on scalars in Perl, you’ll learn about operators (both arithmetic and bitwise), among other things. This article is excerpted from chapter two of the book
Beginning Perl
, written by James Lee (Apress; ISBN: 159059391X).
Alternative Delimiters
That’s all very well, of course, until we want a / in the string. Suppose we want to replace “Slashdot” with “/.”—now we’re back where we started, having to escape things again. Thankfully, Perl
allows us to choose our own delimiters so we don’t have to stick with // . Any nonalphanumeric (that is, nonalphabetic and nonnumeric) character can be used as a delimiter, provided it’s the same on
both sides of the text. Furthermore, you can use {} , [] , () , and <> as left and right delimiters. Here’s a few ways of doing the print qq/…/; , all of which have the same effect:
#!/usr/bin/perl -w
# quotes6.pl
print qq|’"Hi," said Jack. "Have you read /. today?"’n|;
print qq#’"Hi," said Jack. "Have you read /. today?"’n#;
print qq(‘"Hi," said Jack. "Have you read /. today?"’n);
print qq<’"Hi," said Jack. "Have you read /. today?"’n>;
We’ll see more of these alternative delimiters when we start working with regular expressions.
There’s one final way of specifying a string—by means of a here-document. This idea was taken from the Unix shell, and works on any platform. Effectively, it means that you can write a large amount
of text within your program, and it will be treated as a string provided it is identified correctly. Here’s an example:
#!/usr/bin/perl -w
# heredoc.pl
print <<EOF;
This is a here-document. It starts on the line after the two arrows,
and it ends when the text following the arrows is found at the beginning
of a line, like this:
A here-document must start with << and then a label. The label can be anything, but is traditionally EOF (end of file) or EOT (end of text). The label must immediately follow the arrows with no
spaces between, unless the same number of spaces precedes the end marker. It ends when the label is found at the beginning of a line. In our case, the semicolon does not form part of the label,
because it marks the end of the print() function call.
By default, a here-document works like a double-quoted string. In order for it to work like a single-quoted string, surround the label in single quotes. This will become important when variable
interpolation comes into play, as we’ll see later on.
{mospagebreak title=Converting Between Numbers and Strings}
Perl treats numbers and strings on an equal footing, and where necessary, Perl converts between strings, integers, and floating point numbers behind the scenes. There is a special term for this:
automatic conversion of scalars. This means that you don’t have to worry about making the conversions yourself, like you do in other languages. If you have a string literal "0.25", and multiply it by
four, Perl treats it as a number and gives you the expected answer, 1 . For example:
#!/usr/bin/perl -w
# autoconvert.pl
print "0.25" * 4, "n";
The asterisk ( * ) is the multiplication operator. All of Perl’s operators, including this one, will be discussed in the next section.
There is, however, one area where this automatic conversion does not take place. Octal, hex, and binary numbers in string literals or strings stored in variables don’t get converted automatically.
#!/usr/bin/perl -w
# octhex1.pl
print "0x30n";
print "030n";
gives you
$ perl octhex1.pl
If you ever find yourself with a string containing a hex or octal value that you need to convert into a number, you can use the hex() or oct() functions accordingly:
#!/usr/bin/perl -w
# octhex2.pl
print hex("0×30"), "n";
print oct("030"), "n";
This will now produce the expected answers, 48 and 24. Note that for hex() or oct() , the prefix 0x or 0 respectively is not required. If you know that what you have is definitely supposed to be a
hex or octal number, then hex(30) and oct(30) will produce the preceding results. As you can see from that, the string "30" and the number 30 are treated as the same.
Furthermore, these functions will stop reading when they get to a digit that doesn’t make sense in that number system:
#!/usr/bin/perl -w
# octhex3.pl
print hex("FFG"), "n";
print oct("178"), "n";
These will stop at FF and 17 respectively, and convert to 255 and 15. Perl will warn you, though, since those are illegal characters in hex and octal numbers.
What about binary numbers? Well, there’s no corresponding bin() function but there is actually a little trick here. If you have the correct prefix in place for any of the number systems ( 0 , 0b , or
0x ), you can use oct() to convert it to decimal. For example,
print oct("0b11010") prints 26.
{mospagebreak title=Operators}
Now we know how to specify our strings and numbers, let’s see what we can do with them. The majority of the things we’ll be looking at here are numeric operators (operators that act on and produce
numbers) like plus and minus, which take two numbers as arguments, called operands, and add or subtract them. There aren’t as many string operators, but there are a lot of string functions. Perl
doesn’t draw a very strong distinction between functions and operators, but the main difference between the two is that operators tend to go in the middle of their arguments—for example: 2 + 2.
Functions go before their arguments and have them separated by commas.
Both of them take arguments, do something with them, and produce a new value; we generally say they return a value, or evaluate to a value. Let’s take a look.
Numeric Operators
The numeric operators take at least one number as an argument, and evaluate to another number. Of course, because Perl automatically converts between strings and numbers, the arguments may appear as
string literals or come from strings in variables. We’ll group these operators into three types: arithmetic operators, bitwise operators, and logic operators.
Arithmetic Operators
The arithmetic operators are those that deal with basic mathematics like adding, subtracting, multiplying, dividing, and so on. To add two numbers together, we would write something like this:
#!/usr/bin/perl -w
# arithop1.pl
print 69 + 118, "n";
And, of course, we would see the answer 187. Subtracting numbers is easy too, and we can subtract at the same time:
#!/usr/bin/perl -w
# arithop2.pl
print "21 from 25 is: ", 25 – 21, "n"; print "4 + 13 – 7 is: ", 4 + 13 – 7, "n";
$ perl arithop2.pl
21 from 25 is: 4
4 + 13 – 7 is: 10
Our next set of operators (multiplying and dividing) is where it gets interesting. We use the * and / operators to multiply and divide respectively.
#!/usr/bin/perl -w
# arithop3.pl
print "7 times 15 is ", 7 * 15, "n";
print "249 divided by 3 is ", 249 / 3, "n";
The fun comes when you want to multiply something and then add something, or add then divide. Here’s an example of the problem:
#!/usr/bin/perl -w
# arithop4.pl
print 3 + 7 * 15, "n";
This could mean one of two things: either Perl must add the 3 and the 7, and then multiply by 15, or multiply 7 and 15 first, and then add. Which does Perl do? Try it and see . . .
Perl should have given you 108, meaning it did the multiplication first. The order in which Perl performs operations is called operator precedence. Multiply and divide have a higher precedence than
add and subtract, and so they get performed first. We can start to draw up a list of precedence as follows:
To force Perl to perform an operation of lower precedence first, we need to use parentheses, like so:
#!/usr/bin/perl -w
# arithop5.pl
print (3 + 7) * 15, "n";
Unfortunately, if you run that, you’ll get a warning and 10 is printed. What happened? The problem is that print() is a function and the parentheses around 3 + 7 are treated as the only argument to
print() .
print() as an operator takes a list of arguments, performs an operation (printing them to the screen), and returns a 1 if it succeeds, or no value if it does not. Perl calculated 3 plus 7, printed
the result, and then multiplied the result of the returned value (1) by 15, throwing away the final result of 15.
To get what we actually want, then, we need another set of parentheses:
#!/usr/bin/perl -w
# arithop6.pl
print((3 + 7) * 15, "n");
This now gives us the correct answer, 150, and we can put another entry in our list of precedence:
List operators
Next we have the exponentiation operator, ** , which simply raises one number to the power of another—squaring, cubing, and so on. Here’s an example of some exponentiation:
#!/usr/bin/perl -w
# arithop7.pl
print 2**4, " ", 3**5, " ", -2**4, "n";
That’s 2*2*2*2, 3*3*3*3*3, and –2*–2*–2*–2. Or is it?
The output we get is
$ perl arithop7.pl
16 243 -16
Hmm, the first two look OK, but the last one’s a bit wrong. –2 to the 4th power should be positive. Again, it’s a precedence issue. Turning a number into a negative number requires an operator, the
unary minus operator. It’s called unary because unlike the ordinary minus oper ator, it only takes one argument. Although unary minus has a higher precedence than multiply and divide, it has a lower
precedence than exponentiation. What’s actually happening, then, is -(2**4) instead of
(-2)**4 . Let’s put these two operators in our list of precedence as well:
List operators
Unary minus
The last arithmetic operator is % , the remainder, or modulo operator. This calculates the remainder when one number divides another. For example, 6 divides into 15 twice, with a remainder of 3, as
our next program will confirm:
#!/usr/bin/perl -w
# arithop8.pl
print "15 divided by 6 is exactly ", 15 / 6, "n";
print "That’s a remainder of ", 15 % 6, "n";
$ perl arithop8.pl
15 divided by 6 is exactly 2.5
That’s a remainder of 3
The modulo operator has the same precedence as multiply and divide.
{mospagebreak title=Bitwise Operators}
Up to this point, the operators worked on numbers in the way we think of them. However, as we already know, computers don’t see numbers the same as we do; they see them as a string of bits. These
next few operators perform operations on numbers one bit at a time—that’s why we call them bitwise operators. These aren’t used quite so much in Perl as in other languages, but we’ll see them when
dealing with things like low-level file access.
First, let’s have a look at the kind of numbers we’re going to use in this section, just so we get used to them:
1. 0 in binary is 0, but let’s write it as 8 bits: 00000000.
2. 51 in binary is 00110011.
3. 85 in binary is 01010101.
4. 170 in binary is 10101010.
5. 204 in binary is 11001100.
6. 255 in binary is 11111111.
Does it surprise you that 10101010 (170) is twice as much as 01010101 (85)? It shouldn’t, when we multiply a number by 10 in base 10, all we do is slap a 0 on the end, so 21 becomes 210. Similarly,
to multiply a number by 2 in base 2, we do exactly the same.
People think of bitwise operators as working from right to left; the rightmost bit is called the least significant bit and the leftmost is called the most significant bit.
The AND Operator
The easiest bitwise operator to fathom is called the and operator, and is written &. This compares pairs of bits as follows:
1. 1 and 1 gives 1.
2. 1 and 0 gives 0.
3. 0 and 1 gives 0.
4. 0 and 0 gives 0.
For example, 51 & 85 looks like this:
Sure enough, if we ask Perl the following:
#!/usr/bin/perl -w
# bitop1.pl
print "51 ANDed with 85 gives us ", 51 & 85, "n";
it’ll tell us the answer is 17. Notice that since we’re comparing one pair of bits at a time, it doesn’t really matter which way around the arguments go, 51 & 85 is exactly the same as 85 & 51 . Oper
ators with this property are called associative operators. Addition (+) and multiplication (*) are also associative: 5 * 12 produces the same result as 12 * 5. Subtraction (–) and division (/) are
not associative: 5 – 12 does not produce the same result as 12 – 5.
Here’s another example—look at the bits, and see what you get:
The OR Operator
As well as checking whether the first and the second bits are 1, we can check whether one or another is 1, the or operator in Perl is |. This is how we would calculate 204 | 85 :
Now we produce 0s only if both the bits are 0; if either or both are 1, we produce a 1. As a quick rule of thumb, X & Y will always be smaller or equal to the smallest value of X and Y , and X | Y
will be bigger than or equal to the largest value of X or Y .
The XOR Operator
What if you really want to know if one or the other, but not both, are one? For this, you need the exclusive or operator, written as the ^ operator:
Please check back next week for the third part of this article.
Google+ Comments
/?php comment_form(); ?>
Google+ Comments
/?php get_sidebar(); ?>
|
{"url":"http://www.devshed.com/c/a/perl/scalars-and-operators/1/","timestamp":"2014-04-25T09:42:43Z","content_type":null,"content_length":"40684","record_id":"<urn:uuid:1b33dd2e-0989-4b7f-94b9-db63c0beceb3>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Partitions into parts from an arithmetic progresion
up vote 11 down vote favorite
Fix an arithmetic progression $R=(a, a+m, a+2m, \ldots)$, and assume that $gcd(a,m)=1$. Define $q_R(n)$ as the following coefficients: $$\prod_{i=0}^\infty (1+ t^{a+mi}) = \sum_{n=0}^\infty q_R(n) t^
n $$ In other words, $q_R(n)$ is number integer partitions of $n$ into distinct parts from $R$.
Problem 1. Prove that $q_R(n)$ are increasing for $\ n\ge n(a,m)$ large enough.
I first assumed this is either standard, well known, or easily follows from the existing results. Now I am less sure. My literature search gives only papers like this (A. Tripathi, "Coin exchange
problem for arithmetic progressions"). Note that for $a=m=1$, we get the usual partitions into distinct parts and the claim follows from Euler's theorem that they are equinumerous with partitions
into odd parts.
More generally, I need to prove that all finite differences are positive for large enough $n$. Formally, define $$(t-1)^r \prod_{i=0}^\infty (1+ t^{a+mi}) = \sum_{n=0}^\infty q_R(n,r) t^n $$
Problem 2. For every $r\ge 1$, prove that $q_R(n,r)>0$ for $\ n\geq n(a,m,r)$ large enough.
partitions nt.number-theory co.combinatorics
Have you checked Nyblom & Evans, On the enumeration of partitions with summands in arithmetic progression, Australasian J Combinatorics 28 (2003) 149--159, ajc.maths.uq.edu.au/pdf/28/
ajc_v28_p149.pdf ? – Gerry Myerson Nov 5 '12 at 4:30
There is also Munagi and Shonhiwa, On the partitions of a number into arithmetic progressions, J Integer Sequences 11 (2008) 08.5.4, cs.uwaterloo.ca/journals/JIS/VOL11/Shonhiwa/shonhiwa13.ps –
Gerry Myerson Nov 5 '12 at 4:33
1 @Gerry - yeas, I saw these. If you open the links, you will see these papers are unrelated. – Igor Pak Nov 5 '12 at 4:50
If m=1, the statement is trivial for all a, you don't have to use Euler's theorem or anything else. Just increase the biggest term by 1 to go from a partition of n to a partition of n+1 (such that
the largest term-1 is not used). – domotorp Nov 5 '12 at 8:42
Here's a trivial observation that might help with the "follows from known results" angle. We have $n=ra+sm$ with $r>0, s \geq 0$ for around $n/(ma)$ values of $r$, and $q_R(n)$ is the sum over
these $r$ of the number of ways of partitioning the corresponding $s$ into at most $r$ parts. – Ben Barber Nov 5 '12 at 14:01
show 3 more comments
3 Answers
active oldest votes
Problem 1 is solved completely, in the affirmative, in the following paper of Grosswald:
Emil Grosswald, Some theorems concerning partitions, Trans. Amer. Math. Soc. 89, 1958, 113–128.
up vote 9
down vote Grosswald in fact gives a very accurate estimate for the asymptotics of $q_R(n)$, showing that they grow exponentially fast with $n$, and generalizes things to the case that R consists
of any finite union of arithmetic progressions as well. (If you look at his rather intricate paper, look at the function that he calls $H(x)$).
Great! Let me take a look first. Many thanks, – Igor Pak Nov 8 '12 at 5:45
add comment
The answer is indeed affirmative, but was worked out before Grosswald's paper. The earliest paper I found which deals with a general version of this problem is
Roth, K.F., Szekeres, G. "Some asymptotic formulae in the theory of partitions", Quart. J. Math., Oxford Ser. (2) 5, (1954). 241–259, MR0067913
up vote Suppose $\lbrace u_k\rbrace$ is an eventually increasing sequence of positive integers satisfying some mild technical conditions. The paper above gives accurate asymptotics for $p_u(n)$, the
6 down number of partitions of $n$ with distinct parts from $\lbrace u_k\rbrace$. The most relevant result is that for $n$ greater than some $n_0$ which depends on $\lbrace u_k\rbrace$ and $\delta$
vote we have a constant $c$ so that $$p_u(n+1)-p_u(n)\geq cn^{-\frac{s}{s+1}-\delta}p_u(n),$$ where $s=\lim_{k \to \infty}\frac{\log u_k}{\log k}$. In particular their result works for sequences
$u_k=p(k)$ where $p$ is a polynomial taking integers to integers with $\gcd(p(1),p(2),\dots)=1$.
Many thanks! This is very useful. – Igor Pak Nov 10 '12 at 5:10
add comment
[This is more of a comment than an answer, but I lack the reputation.] For partitions with repetitions allowed, the analogue of your Problem #2 is solved by a very general theorem of
Bateman and Erdos:
Let $A$ be an arbitrary set of natural numbers. For each nonnegative integer $k$, define $p_k(n)$ so that $$ \sum_{n=0}^{\infty} p_k(n) X^n = (1-X)^k \prod_{a \in A} (1-X^a)^{-1}. $$
up vote 4 down
vote They show that $p_k(n)$ is positive for all sufficiently large $n$ if and only if the following holds: There are more than $k$ elements in $A$, and if we remove an arbitrary subset of
$k$ elements of $A$, the remaining elements have greatest common divisor $1$.
Unfortunately they remark that the problem of partitions into distinct parts is much harder and refer to the paper of Roth and Szekeres already mentioned in Gjergji's answer.
add comment
Not the answer you're looking for? Browse other questions tagged partitions nt.number-theory co.combinatorics or ask your own question.
|
{"url":"http://mathoverflow.net/questions/111507/partitions-into-parts-from-an-arithmetic-progresion","timestamp":"2014-04-16T04:45:40Z","content_type":null,"content_length":"66686","record_id":"<urn:uuid:5905e45a-be2f-42da-addd-b054303e5b41>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sewanee: The University of the South
Senior Talk - Elizabeth Killinger
WL 121
Wed, January 30, 2013 - 6:00 pm Wed, January 30, 2013 - 7:00 pm'; } else { echo ' – 7:00 pm'; } ?>
Chvatal's Art Gallery Theorem and Fisk's Proof
The question of how many guards were needed to guard a simple polygon was posed by Victor Klee to Chvatal in 1973 during a math conference. Chvatal quickly proved the theorem, but Fisk later
simplified the proof. We can solve this theorem using a 3-coloring argument proved by Fisk.
|
{"url":"http://math.sewanee.edu/news/event/senior-talk-elizabeth-killinger","timestamp":"2014-04-16T18:56:42Z","content_type":null,"content_length":"8456","record_id":"<urn:uuid:22d07730-3d31-4505-97bd-6c0c46bc6582>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rankine Cycle Problems
Calculate the thermal efficiency of a simple Rankine cycle for which steam leaves the boiler as saturated vapor at 3 x 10^6 N/m^2 and is condensed to saturated liquid at 7000 N/m^2. The pump and
turbine have isentropic efficiencies of 0.6 and 0.8, respectively. The pump inlet is saturated liquid.
|
{"url":"http://www.mhtl.uwaterloo.ca/old/courses/me354/topic6.html","timestamp":"2014-04-16T08:26:28Z","content_type":null,"content_length":"6573","record_id":"<urn:uuid:894015f5-b3fc-470d-a48e-f79edc4865fb>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How long will it take a spacecraft to decompress?
Vacuum Exposure
How long will it take a spacecraft to decompress?
Is there a formula or rule-of-thumb for making a rough estimate of the rate of air loss in a space craft for a given size air leak?
The quick approximation is that the air will flow out of the hole at the speed of sound.
For a more detailed calculation, Professor Andrew Higgins of McGill University gives the following answer:
The air will leak through the hole at sonic velocity (Mach one at constriction of the leak).
So, the mass flow rate is:
dm/dt = rho V A (eqn. 1)
where rho is density, V is velocity, and A is the area of the hole. The velocity equals the speed of sound (sonic orifice), but this is slightly lower than the speed of sound in the spacecraft cabin
due to expansion of gas as it flows through the hole. Density is lower also. So, it is more practical to express the mass flow rate in terms of stagnation conditions, i.e., the conditions in the
cabin, which I will denote p[o] and T[o]:
dm/dt = A p[o] Sqrt [(g/(R T[o])) (2/(g+1))^((g+1)/(g-1))) ] (eqn. 2)
here "g" is gamma, the ratio of specific heats (g = 1.4 for air) and R is the gas constant (R = 287 J/kg-K for air). You can find this derived in any compressible fluid dynamics textbook (or any
fluids book with a chapter on compressible flow).
For air, this simplifies to:
dm/dt = 0.04042 A*p[o]/Sqrt[T[o]] (eqn. 3, "Fliegner's formula")
if you stick to MKS units (using Pascals for pressure and K for temperature), this will give you the mass flow rate of air leak in kg/s.
So far, we have assumed that the spacecraft remains at the same p[o], T[o]. Of course, as the leak progresses, the pressure in the spacecraft begins to drop, and this affects the mass flow rate
through the leak. Thus, dm/dt is no longer constant, and we have to integrate the above differential equation coupled to the decrease in p[o] and T[o] as the spacecraft leaks. You can find the
details of this in Saad's Compressible Fluid Flow (2nd Ed., pp. 103-106). The answer is that, to leak from an initial pressure of p[i] to a final pressure of p[f], the time required is:
t = 0.43 V/A [(p[i]/p[f])^0.143 - 1]/(Sqrt[T[i]]) (eqn. 4)
Again using MKS units, where V is the volume of the spacecraft (T[i] = initial temperature), this gives you the time "t" to leak down to p[f] in seconds (assuming the cabin gas is air).
This assumed that the blow-down was isentropic. In practice, any blow-down that will last tens of seconds to minutes, the process in the spacecraft is more likely to be isothermal: mass of spacecraft
has huge thermal capacity compared to the (decreasing) mass of gas inside and will keep the gas warm as it expands. With the assumption of isothermal blow-down, the time required becomes:
t = 0.086 (V/A) Ln[p[i]/p[f]]/(Sqrt[T]) (eqn. 5)
where T is the (constant) spacecraft temperature.
If the atmosphere inside the spacecraft starts out at room temperature, 293K, this simplifies to:
t = 0.005 (V/A) Ln[p[i]/p[f]] (eqn. 6)
A spacecraft with a volume V=10 m^3 is initially pressurized with air at 300 K. It has a 1 cm x 1 cm hole. V/A is (10/10^-4)= 10^5, so the time it takes the pressure to drop from 1 atm to 0.5 atm (p
[i]/p[f] = 2) is (from equation 5):
t = 0.086 *(10^5) * Ln[2]/(Sqrt[300]) = 344.2 s
or about six minutes.
Andrew J. Higgins
Mechanical Engineering Deptartment
Assistant Professor, Shock Wave Physics Group
McGill University, Montreal, Quebec CANADA
1. Demetriades, S.T., "On the Decompression of a Punctured Pressurized Cabin in Vacuum Flight," Jet Propulsion, January-February, 1954, pp. 35-36.
2. Saad, M., Compressible Fluid Flow, 2nd Ed., Pearson Education, 1998.
This document is not a work of the U.S. government, and any opinions expressed in it are the views of the author, and not NASA or the U.S. government. Links
Page by Geoffrey A. Landis, copyright 2003
|
{"url":"http://www.geoffreylandis.com/higgins.html","timestamp":"2014-04-18T10:35:09Z","content_type":null,"content_length":"5852","record_id":"<urn:uuid:727c601d-23f0-47d8-80a0-8ffb5bcfc88e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Metuchen Prealgebra Tutor
Find a Metuchen Prealgebra Tutor
...Whatever the needs of the student in Calculus I can provide them appropriately and teach satisfactorily. Since I have experience of working in public schools I am fully enriched with the
content knowledge, skills, applications in projects. Also I am equipped with the geometry models, geometrica...
10 Subjects: including prealgebra, calculus, geometry, algebra 1
...I completed honors and AP courses in High school, and successfully completed the SAT, ACT, and GRE. I am open to help anyone that I can. As a result of work commitments throughout the state, I
am fairly flexible with location.
49 Subjects: including prealgebra, Spanish, English, reading
...My specialties include Mathematics, from basic math to Calculus,I am certified in teaching secondary Math. copy of certification available upon request. I hold a BA in Math. I am also a native
speaker of French and Spanish, which I tutor on a regular basis. finally I have a Master of Science in Computer Sciences.
14 Subjects: including prealgebra, Spanish, calculus, ASVAB
...I specialize in SAT/ACT Math. I teach students how to look at problems, how to break them down, which methods, strategies, and techniques to apply, and how to derive the quickest solution. I go
through problems step-by-step and show students what to look for and what tools are necessary.
30 Subjects: including prealgebra, reading, English, physics
...I acquired my Bachelor's with high honors (GPA 3.72) in Mathematics and Economics as well as my Master's in Statistics (GPA 4.00) from Rutgers University. I think that anyone can learn and love
mathematics when the material is delivered in a fashion that is conductive to the person's understandi...
18 Subjects: including prealgebra, calculus, statistics, algebra 1
|
{"url":"http://www.purplemath.com/metuchen_nj_prealgebra_tutors.php","timestamp":"2014-04-16T16:19:50Z","content_type":null,"content_length":"23952","record_id":"<urn:uuid:04739742-3ba1-4e37-b833-4f3e972f09e9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Local Behavior of a Polynomial near a Root
All properties described only hold
near the root. For example, a locally increasing function may decrease a short distance away.
If is a root of the polynomial , then can be factored as , where is a positive integer and is another polynomial without a root at . The number is called the degree of the root. If the roots of the
polynomial are all real, the sum of the degrees of all the roots is the degree of the polynomial.
The local behavior of a polynomial at a root depends on whether the degree of the root is even or odd; the linear term of is positive, zero, or negative; and the sign of its leading coefficient is
positive or negative—a total of twelve possible cases.
The higher the degree, the flatter the function near the root.
If the degree of the root is odd, there is an inflection point at the root. If the degree of the root is even, there is a maximum or minimum at or near the root.
Suppose the coefficient of the linear term is zero, so that the function has a critical point at the root. If the degree of the root is even, there is a minimum or maximum at the root, depending on
whether the sign of is positive or negative. If the degree of the root is odd, there is a flat inflection point at the root, and the function is nondecreasing or nonincreasing near the root
according to whether the sign of the leading coefficient of is positive or negative.
|
{"url":"http://demonstrations.wolfram.com/LocalBehaviorOfAPolynomialNearARoot/","timestamp":"2014-04-17T12:51:44Z","content_type":null,"content_length":"44482","record_id":"<urn:uuid:69dc582c-ff27-43cc-b4c5-9dea6c84ffd8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/gopeder/answered","timestamp":"2014-04-19T07:34:03Z","content_type":null,"content_length":"104790","record_id":"<urn:uuid:c4fc62d6-dbc0-4ff7-8c8e-b3e12f9fcfcd>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Desk-Mates (Stable Matching) with Privacy of Preferences, and a New Distributed CSP Framework
Marius C. Silaghi and Amit Abhyankar, Florida Institute of Technology; Markus Zanker, Universität Klagenfurt; and Roman Bartak, Charles University
The desk-mates matcher application places students in pairs of two for working in projects (similar to the well known problems of stable matchings or stable roommates). Each of the students has a
(hopefully stable) secret preference between every two colleagues. The participants want to find an allocation satisfying their secret preferences and without leaking any of these secret preferences,
except for what a participant can infer from the identity of the partner that was recommended to her.
The peculiarities of the above problem require solvers based on old distributed CSP frameworks to use models whose search spaces are higher than those in centralized solvers, with bad effects on
efficiency. Therefore we introduce a new distributed constraint satisfaction (DisCSP) framework where the actual constraints are secrets that are not known by any agent. They are defined by a set of
functions on some secret inputs from all agents. The solution is also kept secret and each agent learns just the result of applying an agreed function on the solution. The expressiveness of the new
framework is shown to improve the efficiency (O(2^{m^3-\log(m)}) times) in modeling and solving the aforementioned problem with m participants. We show how to extend our previous techniques to solve
securely problems modeled with the new formalism, and exemplify with the problem in the title. An experimental implementation in the form of an applet-based solver is available.
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
|
{"url":"http://www.aaai.org/Library/FLAIRS/2005/flairs05-110.php","timestamp":"2014-04-16T04:18:49Z","content_type":null,"content_length":"3580","record_id":"<urn:uuid:66f0e5f8-cf62-459f-84d2-bb1998e75af5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|