Negative Type Numbers - STABS
5.1.3 Negative Type Numbers
This is the method used in XCOFF for defining builtin types. Since the debugger knows about the builtin types anyway, the idea of negative type numbers is simply to give a special type number which
indicates the builtin type. There is no stab defining these types.
There are several subtle issues with negative type numbers.
One is the size of the type. A builtin type (for example the C types int or long) might have different sizes depending on compiler options, the target architecture, the ABI, etc. This issue doesn't
come up for IBM tools since (so far) they just target the RS/6000; the sizes indicated below for each type are what the IBM RS/6000 tools use. To deal with differing sizes, either define separate
negative type numbers for each size (which works but requires changing the debugger, and, unless you get both AIX dbx and GDB to accept the change, introduces an incompatibility), or use a type
attribute (see String Field) to define a new type with the appropriate size (which merely requires a debugger which understands type attributes, like AIX dbx or GDB). For example,
.stabs "boolean:t10=@s8;-16",128,0,0,0
defines an 8-bit boolean type, and
.stabs "boolean:t10=@s64;-16",128,0,0,0
defines a 64-bit boolean type.
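As a rough illustration of the string format in these examples (a sketch written for this page, not part of GDB, dbx, or the AIX tools), the size attribute and the negative type number can be pulled out of such a stab string mechanically:

import re

# Hypothetical helper: extract the pieces of a stab string of the form
# "name:tNN=@sBITS;-M", e.g. "boolean:t10=@s8;-16".
STAB_RE = re.compile(r"^(?P<name>[^:]+):t(?P<typenum>\d+)=@s(?P<bits>\d+);(?P<builtin>-\d+)$")

def parse_sized_builtin(stab_string):
    m = STAB_RE.match(stab_string)
    if m is None:
        raise ValueError("not a sized builtin-type stab: %r" % stab_string)
    return {
        "name": m.group("name"),             # e.g. "boolean"
        "type_number": int(m.group("typenum")),
        "size_bits": int(m.group("bits")),   # from the @s type attribute
        "builtin": int(m.group("builtin")),  # the negative type number
    }

print(parse_sized_builtin("boolean:t10=@s8;-16"))
# {'name': 'boolean', 'type_number': 10, 'size_bits': 8, 'builtin': -16}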
A similar issue is the format of the type. This comes up most often for floating-point types, which could have various formats (particularly extended doubles, which vary quite a bit even among IEEE
systems). Again, it is best to define a new negative type number for each different format; changing the format based on the target system has various problems. One such problem is that the Alpha has
both VAX and IEEE floating types. One can easily imagine one library using the VAX types and another library in the same executable using the IEEE types. Another example is that the interpretation of
whether a boolean is true or false can be based on the least significant bit, most significant bit, whether it is zero, etc., and different compilers (or different options to the same compiler) might
provide different kinds of boolean.
The last major issue is the names of the types. The name of a given type depends only on the negative type number given; these do not vary depending on the language, the target system, or anything
else. One can always define separate type numbers—in the following list you will see for example separate int and integer*4 types which are identical except for the name. But compatibility can be
maintained by not inventing new negative type numbers and instead just defining a new type with a new name. For example:
.stabs "CARDINAL:t10=-8",128,0,0,0
Here is the list of negative type numbers. The phrase integral type is used to mean twos-complement (I strongly suspect that all machines which use stabs use twos-complement; most machines use
twos-complement these days).
int, 32 bit signed integral type.
char, 8 bit type holding a character. Both GDB and dbx on AIX treat this as signed. GCC uses this type whether char is signed or not, which seems like a bad idea. The AIX compiler (xlc) seems to
avoid this type; it uses -5 instead for char.
short, 16 bit signed integral type.
long, 32 bit signed integral type.
unsigned char, 8 bit unsigned integral type.
signed char, 8 bit signed integral type.
unsigned short, 16 bit unsigned integral type.
unsigned int, 32 bit unsigned integral type.
unsigned, 32 bit unsigned integral type.
unsigned long, 32 bit unsigned integral type.
void, type indicating the lack of a value.
float, IEEE single precision.
double, IEEE double precision.
long double, IEEE double precision. The compiler claims the size will increase in a future release, and for binary compatibility you have to avoid using long double. I hope when they increase it
they use a new negative type number.
integer, 32 bit signed integral type.
boolean, 32 bit type. GDB and GCC assume that zero is false, one is true, and other values have unspecified meaning. I hope this agrees with how the IBM tools use the type.
short real, IEEE single precision.
real, IEEE double precision.
stringptr. See Strings.
character, 8 bit unsigned character type.
logical*1, 8 bit type. This Fortran type has a split personality in that it is used for boolean variables, but can also be used for unsigned integers. 0 is false, 1 is true, and other values have unspecified meaning.
logical*2, 16 bit type. This Fortran type has a split personality in that it is used for boolean variables, but can also be used for unsigned integers. 0 is false, 1 is true, and other values have unspecified meaning.
logical*4, 32 bit type. This Fortran type has a split personality in that it is used for boolean variables, but can also be used for unsigned integers. 0 is false, 1 is true, and other values have unspecified meaning.
logical, 32 bit type. This Fortran type has a split personality in that it is used for boolean variables, but can also be used for unsigned integers. 0 is false, 1 is true, and other values have unspecified meaning.
complex. A complex type consisting of two IEEE single-precision floating point values.
complex. A complex type consisting of two IEEE double-precision floating point values.
integer*1, 8 bit signed integral type.
integer*2, 16 bit signed integral type.
integer*4, 32 bit signed integral type.
wchar. Wide character, 16 bits wide, unsigned (what format? Unicode?).
long long, 64 bit signed integral type.
unsigned long long, 64 bit unsigned integral type.
logical*8, 64 bit unsigned integral type.
integer*8, 64 bit signed integral type.
|
{"url":"http://sourceware.org/gdb/onlinedocs/stabs/Negative-Type-Numbers.html","timestamp":"2014-04-20T19:08:03Z","content_type":null,"content_length":"9835","record_id":"<urn:uuid:e34941fb-d3f6-4e84-a61e-1bee8dd3b87f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some Classes of Recursive Functions
Results 1 - 10 of 25
, 1995
"... This paper is the first one in a sequel of papers resulting from the authors Habilitationsschrift [22] which are devoted to determine the growth in proofs of standard parts of analysis. A
hierarchy (GnA # )n#I N of systems of arithmetic in all finite types is introduced whose definable objects of ..."
Cited by 34 (21 self)
Add to MetaCart
This paper is the first one in a sequel of papers resulting from the author's Habilitationsschrift [22] which are devoted to determine the growth in proofs of standard parts of analysis. A hierarchy (GnA^ω)_{n∈ℕ} of systems of arithmetic in all finite types is introduced whose definable objects of type 1 = 0(0) correspond to the Grzegorczyk hierarchy of primitive recursive functions. We establish the following extraction rule for an extension of GnA^ω by quantifier-free choice AC-qf and analytical axioms Δ having the form ∀x^1 ∃y ≤ sx ∀z F0 (including also a 'non-standard' axiom F⁻ which does not hold in the full set-theoretic model but in the strongly majorizable functionals): From a proof GnA^ω + AC-qf + Δ ⊢ ∀u^1, k^0 ∀v ≤ tuk ∃w^0 A0(u, k, v, w) one can extract a uniform bound Φ such that ∀u^1, k^0 ∀v ≤ tuk ∃w ≤ Φuk A0(u, k, v, w) holds in the full set-theoretic type structure. In case n = 2 (resp. n = 3) Φuk is a polynomial (resp. an elementary recursive function) in k, u^M := λx. max(u0, . . . , ux). In the present paper we show that for n ≥ 2, GnA^ω + AC-qf + F⁻ proves a generalization of the binary König's lemma yielding new conservation results since the conclusion of the above rule can be verified in G_max(3,n)A^ω in this case. In a subsequent paper we will show that many important ineffective analytical principles and theorems can be proved already in G2A^ω + AC-qf + Δ for suitable Δ.
- Journal of Complexity , 2002
"... We study a restricted version of Shannon's General . . . ..."
, 2001
"... Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the the first program class solve fewer problems than the second? The answer is
no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In pa ..."
Cited by 24 (1 self)
Add to MetaCart
Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the first program class solve fewer problems than the second? The answer is no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In particular, higher-order values may be first-order simulated by use of the list constructor ‘cons’ to build function closures. This paper uses complexity theory to prove some expressivity results about small programming languages that are less than Turing complete. Complexity classes of decision problems are used to characterize the expressive power of functional programming language features. An example: second-order programs are more powerful than first-order, since a function f of type [Bool] → Bool is computable by a cons-free first-order functional program if and only if f is in PTIME, whereas f is computable by a cons-free second-order program if and only if f is in EXPTIME. Exact characterizations are given for those problems of type [Bool] → Bool solvable by programs with several combinations of operations on data: presence or absence of constructors; the order of data values: 0, 1, or higher; and program control structures: general recursion, tail recursion, primitive recursion.
- ACM TOIS , 2006
"... XPath is the standard language for navigating XML documents and returning a set of matching nodes. We present a sound and complete decision procedure for containment of XPath queries, as well as
other related XPath decision problems such as satisfiability, equivalence, overlap, and coverage. The con ..."
Cited by 16 (3 self)
Add to MetaCart
XPath is the standard language for navigating XML documents and returning a set of matching nodes. We present a sound and complete decision procedure for containment of XPath queries, as well as
other related XPath decision problems such as satisfiability, equivalence, overlap, and coverage. The considered XPath fragment covers most of the language features used in practice. Specifically, we
propose a unifying logic for XML, namely, the alternation-free modal μ-calculus with converse. We show how to translate major XML concepts such as XPath and regular XML types (including DTDs) into
this logic. Based on these embeddings, we show how XPath decision problems, in the presence or absence of XML types, can be solved using a decision procedure for μ-calculus satisfiability. We provide
a complexity analysis of our system together with practical experiments to illustrate the efficiency of the approach for realistic scenarios.
- SIAM Journal of Computing , 1998
"... Abstract. Traditional results in subrecursion theory are integrated with the recent work in “predicative recursion ” by defining a simple ranking ρ of all primitive recursive functions. The
hierarchy defined by this ranking coincides with the Grzegorczyk hierarchy at and above the linearspace level. ..."
Cited by 10 (1 self)
Add to MetaCart
Abstract. Traditional results in subrecursion theory are integrated with the recent work in “predicative recursion ” by defining a simple ranking ρ of all primitive recursive functions. The hierarchy
defined by this ranking coincides with the Grzegorczyk hierarchy at and above the linearspace level. Thus, the result is like an extension of the Schwichtenberg/Müller theorems. When primitive
recursion is replaced by recursion on notation, the same series of classes is obtained except with the polynomial time computable functions at the first level.
- Proceedings of CSL'94, number 933 in LNCS , 1994
"... We are motivated by finding a good basis for the semantics of programming languages and investigate small classes in subrecursive hierarchies of functions. We do this with the help of pairing
functions because in this way we can explore the amazing coding powers of S-expressions of LISP within t ..."
Cited by 9 (8 self)
Add to MetaCart
We are motivated by finding a good basis for the semantics of programming languages and investigate small classes in subrecursive hierarchies of functions. We do this with the help of pairing
functions because in this way we can explore the amazing coding powers of S-expressions of LISP within the domain of natural numbers. In the process of doing this we introduce a missing stage in
Grzegorczyk-based hierarchies which solves the longstanding open problem of what is the precise relation between the small recursive classes and those of complexity theory. 1 Introduction We
investigate subrecursive hierarchies based on pairing functions and solve a longstanding open problem in small recursive classes of what is the relationship between these and computational complexity
classes (see [11]). The problem is solved by discovering that there is a missing stage in Grzegorczyk-based hierarchies [7, 11]. The motivation for this research comes from our search for a good
programming langu...
"... We consider various extensions and modifications of Shannon's General Purpose Analog Computer, which is a model of computation by differential equations in continuous time. We show that several
classical computation classes have natural analog counterparts, including the primitive recursive function ..."
Cited by 8 (2 self)
Add to MetaCart
We consider various extensions and modifications of Shannon's General Purpose Analog Computer, which is a model of computation by differential equations in continuous time. We show that several
classical computation classes have natural analog counterparts, including the primitive recursive functions, the elementary functions, the levels of the Grzegorczyk hierarchy, and the arithmetical
and analytical hierarchies.
, 1997
"... In recent years a number of conditions has been established that a monoid must necessarily satisfy if it is to have a presentation through some finite convergent stringrewriting system. Here we
give a survey on this development, explaining these necessary conditions in detail and describing the rela ..."
Cited by 6 (5 self)
Add to MetaCart
In recent years a number of conditions has been established that a monoid must necessarily satisfy if it is to have a presentation through some finite convergent stringrewriting system. Here we give
a survey on this development, explaining these necessary conditions in detail and describing the relationships between them. 1 Introduction String-rewriting systems, also known as semi-Thue systems,
have played a major role in the development of theoretical computer science. On the one hand, they give a calculus that is equivalent to that of the Turing machine (see, e.g., [Dav58]), and in this
way they capture the notion of `effective computability' that is central to computer science. On the other hand, in the phrase-structure grammars introduced by N. Chomsky they are used as sets of
productions, which form the essential part of these grammars [HoUl79]. In this way string-rewriting systems are at the very heart of formal language theory. Finally, they are also used in
combinatorial semig...
- PROC. 4TH CONFERENCE ON REAL NUMBERS AND COMPUTERS , 2000
"... We study a restricted version of Shannon’s General Purpose Analog Computer in which we only allow the machine to solve linear differential equations. This corresponds to only allowing local
feedback in the machine’s variables. We show that if this computer is allowed to sense inequalities in a dif ..."
Cited by 6 (1 self)
Add to MetaCart
We study a restricted version of Shannon’s General Purpose Analog Computer in which we only allow the machine to solve linear differential equations. This corresponds to only allowing local feedback
in the machine’s variables. We show that if this computer is allowed to sense inequalities in a differentiable way, then it can compute exactly the elementary functions. Furthermore, we show that if
the machine has access to an oracle which computes a function f(x) with a suitable growth as x goes to infinity, then it can compute functions on any given level of the Grzegorczyk hierarchy. More
precisely, we show that the model contains exactly the nth level of the Grzegorczyk hierarchy if it is allowed to solve n − 3 non-linear differential equations of a certain kind. Therefore, we claim
that there is a close connection between analog complexity classes, and the dynamical systems that compute them, and classical sets of subrecursive functions.
- Data & Knowledge Engineering , 2007
"... XPath is the standard language for addressing parts of an XML document. We present a sound and complete decision procedure for containment of XPath queries. The considered XPath fragment covers
most of the language features used in practice. Specifically, we show how XPath queries can be translated ..."
Cited by 5 (2 self)
Add to MetaCart
XPath is the standard language for addressing parts of an XML document. We present a sound and complete decision procedure for containment of XPath queries. The considered XPath fragment covers most
of the language features used in practice. Specifically, we show how XPath queries can be translated into equivalent formulas in monadic second-order logic. Using this translation, we construct an
optimized logical formulation of the containment problem, which is decided using tree automata. When the containment relation does not hold between two XPath expressions, a counter-example XML tree
is generated. We provide practical experiments that illustrate the efficiency of the decision procedure for realistic scenarios.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=746440","timestamp":"2014-04-19T23:42:23Z","content_type":null,"content_length":"38484","record_id":"<urn:uuid:cd481de9-cded-4205-b0ec-d6b91624463d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A small part of a chain rule proof I don't get
January 10th 2011, 09:22 AM
A small part of a chain rule proof I don't get
I've been trying to get my head round part of a proof and I don't see the following implication.
Suppose we have
$\lim_{x \to x_0} E(g(x)) = E(g(x_0)) = 0$
then does it follow that
$\lim_{x \to x_0}[f'(g(x_0) + E(g(x)))]\frac{g(x) - g(x_0)}{x - x_0} = f'(g(x_0))g'(x_0)$?
Doesn't the above require that $f'$ be continuous at $g(x_0)$? And if it does, why is this necessarily true?
Thanks :)
January 10th 2011, 11:33 AM
I've been trying to get my head round part of a proof and I don't see the following implication.
Suppose we have
$\lim_{x \to x_0} E(g(x)) = E(g(x_0)) = 0$
then does it follow that
$\lim_{x \to x_0}[f'(g(x_0) + E(g(x)))]\frac{g(x) - g(x_0)}{x - x_0} = f'(g(x_0))g'(x_0)$?
Doesn't the above require that $f'$ be continuous at $g(x_0)$? And if it does, why is this necessarily true?
Thanks :)
There's too little context here, but from what I can tell you used the continuity of $f$ at $g(x_0)$ to conclude that $\displaystyle \lim_{x\to x_0}f(g(x))=f(g(x_0))$ it seems. Care to fill in
the rest of the proof?
January 10th 2011, 11:39 PM
The proof is from Trench's Introduction to Real Analysis, which can be freely downloaded here
William Trench - Trinity University Mathematics
I was wary of posting the entire proof verbatim in case of any infringement (not sure what the laws are concerning mathematical proofs but I thought it better to be safe).
The proof in question is on page 78 of the book.
EDIT: Actually I believe there may be a bracket missing from equation (10) in the proof, immediately after $f'(g(x_0))$. Can anyone confirm this?
January 10th 2011, 11:54 PM
The proof is from Trench's Introduction to Real Analysis, which can be freely downloaded here
William Trench - Trinity University Mathematics
I was wary of posting the entire proof verbatim in case of any infringement (not sure what the laws are concerning mathematical proofs but I thought it better to be safe).
The proof in question is on page 78 of the book.
EDIT: Actually I believe there may be a bracket missing from equation (10) in the proof, immediately after $f'(g(x_0))$. Can anyone confirm this?
Ha! Luckily for you I own Trench, I don't think many people would download it :). Anyways the part you mention reads
$\displaystyle \frac{h(x)-h(x_0)}{x-x_0}=\left[f'\left(g(x_0)+E(g(x))\right)\right]\frac{g(x)-g(x_0)}{x-x_0}$
when it should read
$\displaystyle \frac{h(x)-h(x_0)}{x-x_0}=\left[f'\overbrace{(g(x_0))}^{\text{here}}+E(g(x))\right]\frac{g(x)-g(x_0)}{x-x_0}$
January 11th 2011, 03:03 AM
That makes a lot more sense now, thanks :)
|
{"url":"http://mathhelpforum.com/differential-geometry/167961-small-part-chain-rule-proof-i-dont-get-print.html","timestamp":"2014-04-17T16:40:16Z","content_type":null,"content_length":"11255","record_id":"<urn:uuid:5f5d3ae9-eb99-4853-b299-51b11da18e21>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Measurement unit conversion: mTorr
›› Measurement unit: mTorr
Full name: millitorr
Plural form: millitorr
Symbol: mTorr
Category type: pressure
Scale factor: 0.13332237
›› SI unit: pascal
The SI derived unit for pressure is the pascal.
1 pascal is equal to 7.50061673821 mTorr.
Valid units must be of the pressure type.
›› Definition: Millitorr
The SI prefix "milli" represents a factor of 10^-3, or in exponential notation, 1E-3.
So 1 millitorr = 10^-3 torrs.
The definition of a torr is as follows:
The torr is a non-SI unit of pressure, named after Evangelista Torricelli. Its symbol is Torr.
›› Sample conversions: mTorr
mTorr to nanobar, mTorr to ton/square foot [long], mTorr to inch of water [4 °C], mTorr to terapascal, mTorr to pascal, mTorr to kip/square inch, mTorr to water column [millimeter], mTorr to kilobar, mTorr to kilopond/square millimeter, mTorr to femtopascal
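All of these conversions are just multiplications by a scale factor. A small sketch (in Python; not part of the converter site) of the mTorr/pascal arithmetic quoted above:

# 1 mTorr = 0.13332237 Pa, the scale factor given above.
MTORR_TO_PASCAL = 0.13332237

def mtorr_to_pascal(p_mtorr):
    return p_mtorr * MTORR_TO_PASCAL

def pascal_to_mtorr(p_pa):
    return p_pa / MTORR_TO_PASCAL

print(pascal_to_mtorr(1.0))       # ~7.5006 mTorr, matching "1 pascal is equal to 7.50061673821 mTorr"
print(mtorr_to_pascal(760000.0))  # 760 Torr (one standard atmosphere) is ~101325 Pa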
|
{"url":"http://www.convertunits.com/info/mTorr","timestamp":"2014-04-18T20:44:12Z","content_type":null,"content_length":"24743","record_id":"<urn:uuid:3e0e22dd-6df5-40d3-beb0-982b7ee7ce27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
|
discrete distribution - find the constant
June 18th 2008, 08:18 AM
math beginner
discrete distribution - find the constant
This is a question from Degroot and Schervish's Probability and Statistics Textbook: p 103 qn 9
Suppose that a random variable X has a discrete distribution with the following probability function:
f(x) = c/x^2 for x = 1, 2, ...
f(x) = 0 otherwise
Find the value of the constant c.
I tried working it out and my answer is c = 1.
But the back of the textbook states the answer as 6 / pi squared.
Please could someone help me understand why?
June 18th 2008, 08:32 AM
Well, this is a well known sum: $\sum\limits_{x = 1}^\infty {\frac{1}{{x^2 }}} = \frac{{\pi ^2 }}{6}$.
Does that help you understand?
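(To spell out the step the replies leave implicit, which is standard reasoning rather than a quote from the textbook: the probabilities must sum to 1, so $c\sum\limits_{x = 1}^\infty {\frac{1}{{x^2 }}} = c \cdot \frac{{\pi ^2 }}{6} = 1$, which gives $c = \frac{6}{{\pi ^2 }} \approx 0.608$.)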
June 18th 2008, 05:13 PM
math beginner
Thank you! I'd be interested to read the proof if that's possible.
June 18th 2008, 05:29 PM
mr fantastic
|
{"url":"http://mathhelpforum.com/advanced-statistics/41892-discrete-distribution-find-constant-print.html","timestamp":"2014-04-21T04:40:10Z","content_type":null,"content_length":"5520","record_id":"<urn:uuid:d9087aea-dfec-4501-86c1-ea14443f5a8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Now we move on to the second part of Exercise 5.2, which requires implementing regularized logistic regression using Newton's Method. Plot the data:
Machine Learning Ex 5.1 – Regularized Linear Regression
The first part of Exercise 5.1 requires implementing a regularized version of linear regression. Adding a regularization parameter can prevent the problem of over-fitting when fitting a high-order polynomial.
Machine Learning Ex4 – Logistic Regression
Exercise 4 required implementing Logistic Regression using Newton's Method. The dataset in use is 80 students and their grades on 2 exams; 40 students were admitted to college and the other 40 students were not. We need to implement a binary classification model that estimates college admission based on the student's scores on...
ggplot2 Version of Figures in “25 Recipes for Getting Started with R”
In order to provide an option to compare graphs produced by the basic internal plot function and ggplot2, I recreated the figures in the book, 25 Recipes for Getting Started with R, with ggplot2. The code used to create the images is in separate paragraphs, allowing easy comparison.
the batman equation
HardOCP has an image with an equation which apparently draws the Batman logo.
ProjectEuler-Problem 46
It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square. 9 = 7 + 2×1², 15 = 7 + 2×2².
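As an aside, a brute-force search for the smallest odd composite with no such representation (the question Project Euler asks) takes only a few lines. This sketch is in Python rather than the blog's own code, which is in the linked post:

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def fits_conjecture(n):
    # Is n a prime plus twice a (positive) square?
    k = 1
    while 2 * k * k < n:
        if is_prime(n - 2 * k * k):
            return True
        k += 1
    return False

n = 9  # the first odd composite
while True:
    if not is_prime(n) and not fits_conjecture(n):
        print(n)  # the smallest odd composite that breaks the conjecture
        break
    n += 2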
R package DOSE released
Disease Ontology (DO) provides an open source ontology for the integration of biomedical data that is associated with human disease. DO analysis can lead to interesting discoveries that deserve further clinical investigation. DOSE was designed for semantic similarity measure and enrichment analysis.
[Project Euler] – Problem 58
Starting with 1 and spiralling anticlockwise in the following way, a square spiral with side length 7 is formed.
37 36 35 34 33 32 31
38 17 16 15 14 13 30
[Project Euler] – Problem 57
It is possible to show that the square root of two can be expressed as an infinite continued fraction. √2 = 1 + 1/(2 + 1/(2 + 1/(2 + … ))) = 1.414213… By expanding this for the first four iterations, we get:
Machine Learning Ex3 – Multivariate Linear Regression
Part 1. Finding alpha. The first question to resolve in Exercise 3 is to pick a good learning rate alpha. This requires making an initial selection, running gradient descent and observing the cost function.
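A minimal sketch of that procedure (in Python with NumPy rather than the post's own code, and with a made-up toy dataset purely for illustration): run gradient descent with several candidate learning rates and compare the resulting cost.

import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.normal(size=(50, 2))]        # toy design matrix with an intercept column
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=50)

def cost(theta):
    # Squared-error cost J(theta), the quantity one watches while tuning alpha.
    r = X @ theta - y
    return r @ r / (2 * len(y))

for alpha in (0.01, 0.1, 1.0, 3.0):
    theta = np.zeros(3)
    for _ in range(100):
        theta -= alpha * (X.T @ (X @ theta - y)) / len(y)  # one gradient-descent step
    print(alpha, cost(theta))  # too small: slow convergence; too large: the cost blows up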
|
{"url":"http://www.r-bloggers.com/author/ygc/page/3/","timestamp":"2014-04-21T14:45:00Z","content_type":null,"content_length":"36698","record_id":"<urn:uuid:e0f1b53c-b0b3-4dc8-9325-9237f958b260>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cupertino Prealgebra Tutor
Find a Cupertino Prealgebra Tutor
...It's the writers' responsibility to use proper grammar to clearly convey their thoughts and to avoid ambiguity. I enjoy studying European History by means of books, videos, University lectures
and seminars. A knowledge of history is essential in the modern world in terms of policy and decision making.
37 Subjects: including prealgebra, reading, English, physics
...I am extremely patient with them, and I will explain it as many times and in as many different ways as I need to until the idea is clear. I find tutoring one of the most rewarding experiences one can have. I look forward to helping students grow and mature academically. The foundation for all other math courses,...
9 Subjects: including prealgebra, calculus, geometry, algebra 1
...Parent testimonials from my previous clients are available upon request. Parents have raved about my role in their child's achievement of 1-2 letter grade improvements on average, and growth in
intellectual excitement and confidence. In the past, most students make significant improvement in the first month or two of tutoring, and they also enjoy their sessions.
40 Subjects: including prealgebra, reading, English, writing
...I have tutored several high school and college students to help them improve grades in math, including algebra, calculus, geometry and probability. I graduated from high school as top student
in my class, especially the math subject. I have been extensively using calculus for my PhD computational chemistry research.
15 Subjects: including prealgebra, chemistry, GRE, Chinese
...I was taught to love to learn and to love to teach. I am currently teaching various subjects to 8th grade students in East Palo Alto and have been teaching in various classroom settings for
about 4 years. My approach is very similar to the way that Common Core should work--I like to foster crit...
13 Subjects: including prealgebra, reading, English, writing
|
{"url":"http://www.purplemath.com/Cupertino_prealgebra_tutors.php","timestamp":"2014-04-18T08:18:53Z","content_type":null,"content_length":"24192","record_id":"<urn:uuid:4fcff561-6d3c-40c7-8cbb-92ada7afd387>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The discovery, rediscovery, and re-rediscovery of computed tomography
Note: This post is my contribution to The Giant’s Shoulders #2, to be held at The Lay Scientist. I thought I’d cover something a little more recent than my previous entries to the classic paper
carnival; in truth, I need a break from translating 30-page papers written in antiquated German/French!
One of the fascinating things about scientific progress is what you might call its inevitability. Looking at the history of a crucial breakthrough, one often finds that a number of researchers were
pushing towards the same discovery from different angles. For example, Isaac Newton and Gottfried Leibniz developed the theory of calculus independently and nearly simultaneously. Another example is
the prehistory of quantum mechanics: numerous experimental researchers independently began to discover ‘anomalies’ in the behavior of systems on the microscopic level.
I would say that the development of certain techniques and theories become ‘inevitable’ when the discovery becomes necessary for further progress and a number of crucial discoveries pave the way to
understanding (in fact, one might say that this is the whole point of The Giant’s Shoulders). Occasionally it turns out that others had made a similar discovery earlier, but had failed to grasp the
broader significance of their result or were missing a crucial application or piece of evidence to make the result stand out.
A good example of this is the technique known as computed tomography, or by various other aliases (computed axial tomography, computer assisted tomography, or just CAT or CT). The pioneering work was
developed independently by G.N. Hounsfield and A.M. Cormack in the 1960s, and they shared the well-deserved 1979 Nobel Prize in Medicine “for the development of computer assisted tomography.” Before
Hounsfield and Cormack, however, a number of researchers had independently developed the same essential mathematical technique, for a variety of applications. In this post I’ll discuss the
foundations of CT, the work of Hounsfield and Cormack, and note the various predecessors to the groundbreaking research.
What is computed tomography, and how does it work? CT is a technique for non-destructive imaging of the interior of an object. Its best-known application is in medical imaging, in which a portion of
the patient’s body is imaged using x-rays, often to check for cancer. A picture of a modern CT scanner (from Wikipedia) is shown below:
The word tomography is derived from the Greek tomos (slice) and graphein (to write); it refers to the manner of image reconstruction: images of the body are put together in two-dimensional slices,
which can then be assimilated into a full three-dimensional structure, if desired. An example of a CT image of the lungs is shown below (image from RadiologyInfo):
It is important to note that a CT image is far superior to a normal x-ray image such as one might get at the doctor’s office after breaking a bone (also from RadiologyInfo):
A standard x-ray, as shown above, is a single picture taken of a human ankle. The object to be imaged is placed between the x-ray source and a photographic plate, as schematically (and crudely)
illustrated below:
Different materials in the human body absorb x-rays to a greater or lesser extent; bone absorbs the most. The image recorded on the photographic plate is in essence the x-ray shadow of the human
body. This technique, though extremely useful for medical diagnosis, has a number of severe limitations. First and foremost, there is no depth information recorded about the object: the photographic
plate records a ‘shadowgram’ of everything that lies between itself and the source. Small tumors could in principle be hidden (overshadowed) by a large piece of bone which lies directly above or
below them. Second, a standard x-ray is extremely insensitive to soft tissue. As one can see in the ankle image above, the bone is clearly visible but the muscle and skin leaves hardly a trace on the
plate. The technique will not detect a tumor in its earliest stages, which is the best time for successful treatment.
How does computed tomography differ? A computed tomogram is derived from a large number of x-ray shadowgrams, each taken at different angles of ‘attack’:
Each of these shadowgrams gives a different view of the interior of the patient’s body. By a nontrivial mathematical process requiring the use of a computer (hence the ‘computed’ in computed
tomography), the information from this collection of shadowgrams can be combined to give a picture of a particular ‘slice’ of the body. Unlike a single shadowgram, this computed picture gives an
exact cross-sectional picture of the body, and gives quantitative values for the absorption of different tissues of the body. Also unlike a single shadowgram, a CT scan can distinguish clearly
between different types of soft tissue, as can be seen in the sample image above. A CT scan can find tumors at an earlier stage than a standard shadowgram.
We’ll discuss how CT actually works at the end of the post; for now, let’s look at the development of the process. The first work was done by Allan M. Cormack in the 1950s; in his own words (from his
Nobel lecture),
In 1955 I was a lecturer in Physics at the University of Cape Town when the Hospital Physicist at the Groote Schuur Hospital resigned. South African law required that a properly qualified
physicist supervise the use of any radioactive isotopes, and since I was the only nuclear physicist in Cape Town, I was asked to spend 1 1/2 days a week at the hospital attending to the use of
isotopes, and I did so for the first half of 1956. I was placed in the Radiology Department under Dr. J. Muir Grieve, and in the course of my work I observed the planning of radiotherapy
treatments. A girl would superpose isodose charts and come up with isodose contours which the physician would then examine and adjust, and the process would be repeated until a satisfactory
dose-distribution was found. The isodose charts were for homogeneous materials, and it occurred to me that since the human body is quite inhomogeneous these results would be quite distorted by
the inhomogeneities – a fact that physicians were, of course, well aware of. It occurred to me that in order to improve treatment planning one had to know the distribution of the attenuation
coefficient of tissues in the body, and that this distribution had to be found by measurements made external to the body. It soon occurred to me that this information would be useful for
diagnostic purposes and would constitute a tomogram or series of tomograms, though I did not learn the word “tomogram” for many years.
In simpler terms: radiation therapy requires being able to send a precise dosage of radiation to a particular location in a patient. The amount of radiation reaching a location in the body depends
significantly on the internal structure, and no method existed at that time for measuring the absorption properties of an individual’s body; Cormack therefore decided to look for one. An initial
literature search produced no prior results, so Cormack developed a mathematical technique for determining internal structure from a series of x-ray shadowgrams. The technique is now known as the
Radon transform, and we will come back to it shortly.
Over the next few years, Cormack tested his new technique with systems, and targets, of increasing complexity. In 1957 in Cape Town he measured a circularly symmetric sample consisting of a cylinder
of aluminum surrounded by an annulus of wood. As Cormack noted,
Even this simple result proved to have some predictive value for it will be seen that the three points nearest the origin [of his data set] lie on a line of a slightly different slope from the
other points in the aluminum. Subsequent inquiry in the machine shop revealed that the aluminum cylinder contained an inner peg of slightly lower absorption coefficient than the rest of the cylinder.
By 1963, Cormack was prepared to do work on a "phantom" (simulated patient made of aluminum and lucite) without circular symmetry, using the device pictured below (taken from the Nobel lecture):
Quite a far cry from the CT machines of today: the cylinders are collimators containing the source (Co-60 gamma rays) and the detector. Quite good measurements of the properties of the phantom
were found, and the results were published in a pair of papers in 1963 and 1964*. From Cormack’s Nobel lecture again,
Publication took place in 1963 and 1964. There was virtually no response. The most interesting request for a reprint came from a Swiss Centre for Avalanche Research. The method would work for
deposits of snow on mountains if one could get either the detector or the source into the mountain under the snow!
Cormack did little else on this subject for a number of years. Meanwhile, about the time that Cormack was moving on to other things, Godfrey N. Hounsfield, working at EMI Central Research
Laboratories in Hayes, UK, started thinking about the problem along similar lines. Now quoting from his Nobel lecture,
Some time ago I imagined the possibility that a computer might be able to reconstruct a picture from sets of very accurate X-ray measurements taken through the body at a multitude of different
angles. Many hundreds of thousands of measurements would have to be taken, and reconstructing a picture from them seemed to be a mammoth task as it appeared at the time that it would require an
equal number of many hundreds of thousands of simultaneous equations to be solved.
When I investigated the advantages over conventional X-ray techniques however, it became apparent that the conventional methods were not making full use of all the information the X-rays could give.
Hounsfield put together his own crude initial apparatus to test his ideas:
The equipment was very much improvised. A lathe bed provided the lateral scanning movement of the gamma-ray source, and sensitive detectors were placed on either side of the object to be viewed
which was rotated 1” at the end of each sweep. The 28,000 measurements from the detector were digitized and automatically recorded on paper tape. After the scan had been completed this was fed
into the computer and processed.
Many tests were made on this machine, and the pictures were encouraging despite the fact that the machine worked extremely slowly, taking 9 days to scan the object because of the low intensity
gamma source. The pictures took 2 1/2 hours to be processed on a large computer… Clearly, nine days for a picture was too time-consuming, and the gamma source was replaced by a more powerful
X-ray tube source, which reduced the scanning time to nine hours. From then on, much better pictures were obtained; these were usually blocks of perspex. A preserved specimen of a human brain was
eventually provided by a local hospital museum and we produced the first picture of a brain to show grey and white matter.
Disappointingly, further analyses revealed that the formalin used to preserve the specimen had enhanced the readings, and had produced exaggerated results. Fresh bullock’s brains were therefore
used to cross-check the experiments, and although the variations in tissue density were less pronounced, it was confirmed that a large amount of anatomic detail could be seen… Although the speed
had been increased to one picture per day, we had a little trouble with the specimen decaying while the picture was being taken, so producing gas bubbles, which increased in size as the scanning proceeded.
The picture of Hounsfield’s prototype CT machine is shown below, taken from his Nobel lecture.
The use of fresh brains led to some entertaining moments, as he notes in his Nobel autobiography,
As might be expected, the programme involved many frustrations, occasional awareness of achievement when particular technical hurdles were overcome, and some amusing incidents, not least the
experiences of travelling across London by public transport carrying bullock’s brains for use in evaluation of an experimental scanner rig in the Laboratories.
The initial tests demonstrated the principle, but a faster, more sophisticated machine needed to be built in order to make it a worthwhile clinical tool. By 1972 a machine similar in appearance to
contemporary CT scanners was installed at Atkinson Morley’s Hospital, London, and was used on a woman with a suspected brain lesion. Hounsfield published a paper describing his new system, and naming
it “Computerized transverse axial scanning tomography,” in 1973**.
The paper includes many technical details and a photograph of the various components, such as the basic machine, shown below:
This early machine had many differences with the machines of today. The early system took some five minutes to image a slice, but modern systems can finish such a scan in seconds. In the early
system, a patient’s head was enclosed in a water-filled rubber cap, which reduced the range of x-ray intensities arriving at the detectors. The early system also, not surprisingly, had a relatively
low resolution: pictures were 80 x 80 ‘picture points’, derived from 28,800 readings.
This work seems to have been immediately recognized as groundbreaking and of fundamental importance (at the very least this can be seen by how quickly the Nobel prize was awarded for the
achievement). Most hospitals now have a radiology department with some sort of CT scanner. CT is also commonly used for nondestructive testing of materials in industrial applications. Methods
involving other types of waves have also been developed: CT-like algorithms have been used in geological exploration and oil prospecting, as well as in magnetic resonance imaging (MRI), another
important medical diagnostic technique. The same tomographic methods are now also used for reconstructing quantum wavefunctions (which is worth a post in itself at a later date). Techniques which are
inspired by or generalize CT, such as diffraction tomography, diffusion tomography, and optical coherence tomography, are also in use or under investigation.
Perhaps more broadly, CT was the first widely successful application of the theory of inverse problems. An inverse problem is a solution of a physical problem in the opposite direction from the usual
’cause-effect’ sequence of events. Each inverse problem is associated with a more familiar ‘forward problem’ in physics which represents the ’cause-effect’ process. For instance, the problem of
determining the absorption of x-rays given the structure which they are incident upon is a forward problem; the problem of determining the structure of an object based on how x-rays are absorbed by
the object is an inverse problem. Computed tomography effectively created its own 'inverse problems' subfield of mathematical physics.
Interestingly, though, the mathematical problem solved by Cormack turns out to have been solved numerous times previously by other researchers for other applications though, as noted at the beginning
of this post, those works either were too far ahead of their time or too limited in application to gain much attention. All of these were acknowledged by Cormack in his Nobel lecture.
The first to develop the mathematics behind computed tomography was Johann Radon*** in 1917, in his paper, “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser
Mannigfaltigkeiten.” (“On the determination of functions by their integral values along certain manifolds”.)
In probability theory, the problem was developed by H. Cramér and H. Wold in 1936****. In 1968, D.J. De Rosier and A. Klug***** used a similar technique in electron microscopy to reconstruct the
three-dimensional structure of the tail of a bacteriophage, work which eventually led Klug to win the 1982 Nobel Prize in Chemistry “for his development of crystallographic electron microscopy and
his structural elucidation of biologically important nucleic acid-protein complexes.”
Perhaps the most unusual precursor to computed tomography is the 1956 work by R.N. Bracewell******, who derived exactly the same mathematical technique for the reconstruction of celestial radio sources.
Enough history, for now: how does computed tomography (and its predecessors) work? The exact mathematics of CT, i.e. Radon's elegant formula for reconstructing an object, is rather involved, and will
be left for a future post. Instead we give two simple illustrations of the principles behind the process, one completely conceptual and one which involves a little bit of math.
First, a conceptual illustration: suppose we have a ‘black box’ which contains a perfectly absorbing object inside of it. We wish to roughly determine the shape of that object by measuring the
absorption of x-rays which pass through the box. Let’s pretend that the object within the box is a square; how many measurements do we need to prove this? Suppose we first shine x-rays horizontally
through the box; the result is as follows:
The intensity of the x-rays as a function of position is plotted on the right. This clearly doesn’t tell us the shape of the object, because the following objects would result in the same shadow:
We can eliminate the possibility of the rectangular shape by shining our x-rays from above:
This second measurement, however, still cannot distinguish between a square object and a round object. We can then take a diagonal measurement:
This measurement suggests that the object is wider along the diagonal, but we are still left with the following possibility for an object:
By making even more measurements, we can narrow down the shape of the object. In a realistic CT measurement, there are no ‘perfectly absorbing’ objects, but the same principle applies: by measuring
the absorption properties of the patient/target from many directions, one can develop the interior absorption profile of the object.
Obviously, determining the actual absorption profile from the massive amount of x-ray data is not a trivial process. Radon's (and Cormack's) theoretical formulation of the problem describes how the
x-ray data is related to the object absorption. With the help of a computer, one can substitute the data into Radon’s formula to get an exact description of the object’s properties.
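A very small sketch of that reconstruction step (my own illustration, not from the original post, and leaning on scikit-image's ready-made Radon transform rather than implementing Radon's formula by hand):

import numpy as np
from skimage.transform import radon, iradon

# A toy "patient": a square of absorbing material inside an otherwise empty slice.
image = np.zeros((128, 128))
image[40:88, 40:88] = 1.0

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(image, theta=angles)            # the stack of shadowgrams, one column per angle
reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection

print(np.abs(reconstruction - image).mean())     # small residual: the slice is recovered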
Let’s try and show how tomography works in another, more algebraic, way: suppose we reduce our ‘patient’ to a block of 9 different regions, each with its own absorption coefficient $x_i$:
We wish to determine each of these numbers, but the only way we can measure them is by passing an x-ray through the box and measuring the total of any row, column, or diagonal; for example:
According to basic linear algebra, in order to uniquely solve a system of equations for 9 unknowns (the $x_i$), we need 9 independent equations relating those unknowns. We make the following
‘measurements’ through the box, with the following results:
This gives the following set of 12 equations:
$x_1+x_2+x_3=6$, $x_4+x_5+x_6=9$, $x_7+x_8+x_9=8$,
$x_1+x_4+x_7=7$, $x_2+x_5+x_8=11$, $x_3+x_6+x_9=5$,
$x_2+x_6=3$, $x_1+x_5+x_9=9$, $x_4+x_8=7$,
$x_2+x_4=7$, $x_3+x_5+x_7=9$, $x_6+x_8=3$.
It turns out that we need 11 of these equations to solve uniquely for the values within the squares, as the equations are not independent of one another. This can be seen by noting that the sum of
the first three equations gives
$x_1+x_2+x_3+x_4+x_5+x_6+x_7+x_8+x_9=23,$
which is exactly the same equation as the sum of the second three equations,
$x_1+x_4+x_7+x_2+x_5+x_8+x_3+x_6+x_9=23.$
Using a computer to solve the equations, the numbers within the boxes are:
$x_1=1$, $x_2=3$, $x_3=2$, $x_4=4$, $x_5=5$, $x_6=0$, $x_7=2$, $x_8=3$, $x_9=3$.
If we have a larger system of boxes, we will need even more measurements to make a unique solution. In the idealized limit of an infinite (continuous) set of boxes, we would need to measure the
transmission through the square from every direction.
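As a quick check of the little example above (my own code, not part of the original post), the twelve measurements can be stacked into a matrix and handed to a least-squares solver:

import numpy as np

# Each row of A marks the squares crossed by one measurement; b holds the measured totals.
A = np.array([
    [1,1,1,0,0,0,0,0,0],  # x1+x2+x3 = 6
    [0,0,0,1,1,1,0,0,0],  # x4+x5+x6 = 9
    [0,0,0,0,0,0,1,1,1],  # x7+x8+x9 = 8
    [1,0,0,1,0,0,1,0,0],  # x1+x4+x7 = 7
    [0,1,0,0,1,0,0,1,0],  # x2+x5+x8 = 11
    [0,0,1,0,0,1,0,0,1],  # x3+x6+x9 = 5
    [0,1,0,0,0,1,0,0,0],  # x2+x6 = 3
    [1,0,0,0,1,0,0,0,1],  # x1+x5+x9 = 9
    [0,0,0,1,0,0,0,1,0],  # x4+x8 = 7
    [0,1,0,1,0,0,0,0,0],  # x2+x4 = 7
    [0,0,1,0,1,0,1,0,0],  # x3+x5+x7 = 9
    [0,0,0,0,0,1,0,1,0],  # x6+x8 = 3
])
b = np.array([6, 9, 8, 7, 11, 5, 3, 9, 7, 7, 9, 3])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x).astype(int))  # -> [1 3 2 4 5 0 2 3 3], the absorption values quoted above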
This ‘brute force’ method of doing tomography was in essence the solution used by Hounsfield in his original paper, as he was unaware of the work of Radon and Cormack; he notes:
If the body is divided into a series of small cubes each having a calculable value of absorption, then the sum of the absorption values of the cubes which are contained within the X-ray beam will
equal the total absorption of the beam path. Each beam path, therefore, forms one of a series of 28,800 simultaneous equations, in which there are 6,400 variables and, providing that there are
more equations than variables, then the values of each cube in the slice can be solved. In short there must be more X-ray readings than picture points.
We can see that this method is inefficient even from our simple example, as we ended up having more measurements than we needed. The use of Radon’s née Cormack’s formula gives a better understanding
of how to sort, arrange, and efficiently process the acquired data.
The history of Cormack and Hounsfield’s discovery makes a good argument by example for the usefulness of ‘cross-pollination’ between fields and subfields of science. Radon’s original calculation had
to be ‘rediscovered’ numerous times by different authors working in different fields, a process which is not uncommon in the physical sciences.
On the flip side, authors often make major breakthroughs or simplify their work greatly by searching for results which have been forgotten or restricted to a little-explored subfield. The now hot
topic of metamaterials began in essence with the rediscovery of a 1967 paper by Russian scientist V. Veselago. The field of quantum optics was advanced rapidly by applying results that had already
been developed in the field of nuclear magnetic resonance.
None of this should be taken as a slight on the achievements of Cormack and Hounsfield; they saw possibilities in the development of tomographic methods that numerous others who came before them did
not. Their researches came at a time when there was a genuine need for new diagnostic techniques, coupled with the computational ability to carry them out.
* A.M. Cormack, “Representation of a function by its line integrals, with some radiological applications,” J. Appl. Phys. 34 (1963), 2722-2727. A.M. Cormack, “Representation of a function by its line
integrals, with some radiological applications. II,” J. Appl. Phys. 35 (1964), 2908-2913.
** G.N. Hounsfield, “Computerized transverse axial scanning (tomography): Part I. Description of system,” Brit. J. Radiol. 46 (1973), 1016-1022.
*** J. Radon, “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten,” Ber. König Säch. Aka. Wiss. (Leipzig), Math. Phys. Klasse 69 (1917), 262-267.
**** H. Cramér and H. Wold, “Some theorems on distribution functions,” J. London Math. Soc. 11 (1936), 290-294.
***** D.J. De Rosier and A. Klug, “Reconstruction of three dimensional structures from electron micrographs,” Nature 217 (1968), 130-134.
****** R.N. Bracewell, “Strip integration in radio astronomy,” Austr. J. Phys. 9 (1956), 198-217.
13 Responses to The discovery, rediscovery, and re-rediscovery of computed tomography
1. The correct translation of “gewisser Mannigfaltigkeiten”, in the title of Radon’s paper, is ‘certain manifolds’. Manifold is a catchall for sets, collections, spaces and similarly defined
mathematical objects.
2. I’m sorry I shouldn’t correct translations before breakfast! I only noticed the second ‘error’ after having posted my first correction. This time I will give a full translation of Radon’s German
“Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten.”
“On the determination of functions by their integral values along certain manifolds”.
3. Thony C.: Thanks for the translation! I’ve updated the post accordingly. I was in a hurry when I first wrote it and simply used babelfish, though I could tell it wasn’t quite right. Somewhere
I’ve got a translation of Radon’s paper, but I was too lazy to hunt for it at the time.
4. Excellent post! This is exactly where science blogging shines most — taking a difficult subject that laymen are nonetheless interested in, and giving a clear explanation in everyday terms.
Your posts do a great job of helping people understand that science isn’t a magical thing done only by geniuses. It’s a pragmatic activity done by real people for real, commonsense reasons.
5. “Thanks for the translation!”
You’re welcome, I’ll send the bill at the end of the month! ;)
I’ll just re-iterate what Mr Walker said and say thanks for some excellent post on scientific subjects. Living in a town that is one of the major centres in the world for CT production and being
a historian of science myself I was fascinated by an obviously well researched and well-written piece on their origins.
6. Wade and Thony C.: A belated thanks for the comments, and compliments!
7. Nice introduction to CT. Can you comment on how realistic this system of linear equations is? These equations assume forward scattering and no diffraction (which would complicate the equations).
I ask because the (nonlinear) inverse problems I have come across end up being formulated as an optimization (minimization) problem.
8. Trey: To the best of my knowledge, for medical x-ray CT the linear system of equations works just fine. Because of the high energy/short wavelength nature of the x-rays, and the relatively small
refractive index contrasts of the human body at those wavelengths, the x-rays follow essentially straight-line paths through the body. One can show, in fact, that linearized scattering reduces to
the CT form in the limit of short wavelengths; see G. Gbur and E. Wolf, “Relation between computed tomography and diffraction tomography”, J. Opt. Soc. Am. A 18 (2001), 2132, for instance.
There are a couple of caveats:
If a person has metal implants, the implants strongly scatter x-rays and ordinary CT won’t work. There are researchers actively seeking ways to account for this.
Ordinary CT neglects phase changes of the x-rays on propagation through tissue, but this influence is still there and leads to propagation changes far enough downstream. In recent years such
‘phase contrast tomography’ has become an important research topic.
You’re right that most inverse scattering problems are nonlinear and require techniques which are quite involved. CT is somewhat special because it uses high-energy photons (x-rays), which barrel
through the body with little deviation. Most inverse scattering theory since then seems to have involved trying to achieve similar success with photons at lower energies, where multiple
scattering and diffraction effects are significant.
9. Pingback: Advances in the History of Psychology » Blog Archive » More Classic(?) Science from “The Giants’ Shoulders”
10. Great post! Ever since I discovered the inverse radon transform, I assumed that this was how a CT scan must work.
I’d also like to mention that, as an engineer who does a lot of image processing, I suspect that real CT scan computations use some sort of least squares solution to the system of equations. I
find this to be the most practical way of solving complex image transforms in the presence of noise and imperfections, where a real solution is often inconsistent.
2-D phase unwrapping, for example, doesn’t really work when you have missing data points, but a weighted least squares solution is indistinguishable from perfect in most cases.
11. Tercel: Thanks for the comment! Least square solutions certainly play a big role in the theory of inverse problems in general, though it seems that the early CT work was done by, as I noted,
brute force methods.
12. There are good reasons to claim that Paul Funk pre-empted Radon with a 1916 publication:
Funk, P. (1916). "Über eine geometrische Anwendung der Abelschen Integralgleichung." Math. Ann. 77, 129-135.
Funk’s work is limited to integrals on great circles of the sphere. Is this more evidence for Arnold’s law: Discoveries are rarely attributed to the correct person?
□ Kieran: Interesting! I’ve not seen Funk’s paper, though it wouldn’t surprise me that others had done similar things to Radon in the same era. My experience, in looking through the history of
science, is that many discoveries are inevitable, in the sense that multiple researchers start working towards the same goal independently.
|
{"url":"http://skullsinthestars.com/2008/08/05/the-discovery-rediscovery-and-re-rediscovery-of-computed-tomography/","timestamp":"2014-04-20T23:47:25Z","content_type":null,"content_length":"127454","record_id":"<urn:uuid:a88a95a6-e283-4026-a6ee-b53bb95fd174>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Physics Lab: what happens if we ignore gravity?
You could do the same experiment with the spring in space, as long as the coils of the spring were not touching (easy to do by designing the spring right or by over stretching it a bit). The period
of oscillation would be the same.
The sine function comes from the solution to the equation of motion of a spring for which the restoring force is proportional to the displacement.
Why must the coils of the spring not touch? How come the position function is a perfect sine wave with gravity? It doesn't make sense to me. As the mass is moving up while below the y-axis, wouldn't it be slowed down by gravity? And when the mass is moving down while above the y-axis, wouldn't it be sped up by gravity?
Does the solution to the equation of motion of a spring explicitly give the sine function as the only solution? So it would be impossible to model the wave if trigonometry was not studied and the
sine function was not discovered?
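For reference, here is a sketch of the standard result behind those answers (ideal spring, no damping); this is general physics background rather than anything specific to the lab in question:

$$ m\ddot{x} = -kx \quad\Rightarrow\quad x(t) = A\sin(\omega t + \varphi), \qquad \omega = \sqrt{k/m}, $$

where the amplitude A and phase φ are fixed by the initial position and velocity, so every solution of this equation is a sinusoid. Hanging the mass vertically adds a constant force mg, which only shifts the equilibrium position; the motion about the new equilibrium is still the same sine wave with the same period.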
|
{"url":"http://www.physicsforums.com/showpost.php?p=4241532&postcount=3","timestamp":"2014-04-20T00:58:30Z","content_type":null,"content_length":"8767","record_id":"<urn:uuid:ab5af8ea-ba0d-47f2-a13c-c42b6e9d091f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Puzzles are in all cultures throughout time... And the 9 Dot puzzle is as old as the hills. Even though it appears in Sam Loyd’s 1914 “Cyclopedia of Puzzles”, the Nine Dot puzzle existed long before
Loyd under many variants. In fact, such a puzzle belongs to the large labyrinth games family.
The 9 Dot puzzle is also a very well known problem used by many psychologists, philosophers and authors (Paul Watzlawick, Richard Mayer, Norman Maier, James Adams, Victor Papanek...) to explain the mechanism of ‘unblocking’ the mind in problem solving activities. It is probable that this brainteaser gave rise to the expression ‘thinking outside the box’.
Solving it
We hope you don’t mind if we use nice ladybugs instead of boring dots to make our puzzle demonstrations... Well, below are nine ladybugs arranged in a set of 3 rows. The challenge is to draw with a
pencil four continuous STRAIGHT lines which go through the middle of all of the 9 ladybugs without taking the pencil off the paper.
The most frequent difficulty people encounter with this puzzle is that they tend to join up the dots as if they were located on the perimeter (boundary) of an imaginary square, because:
- they assume a boundary exists since there are no dots to join a line to outside the puzzle.
- it is implicitly presumed that tracing out lines outside the ‘invisible’ boundary is outside the scope of the problem.
- they are so close to doing it that they keep trying the same way, but harder. Unfortunately, repeating the same wrong process again and again with more determination doesn't work... No matter how many times they try to draw four straight lines without lifting the pencil, a dot is always left over!
Trial-and-error strategy
By trial and error, most people come up with attempts like those shown in figures a and b below.
That intuition turns out, in fact, to be the relevant ‘insight’. Thanks to your imagination, the curved line can be stretched as much as needed to obtain 4 straight lines! (fig. c). Obviously, there
are other ways to approach the puzzle...
See the final unique solution
Lessons to be learned from this puzzle
- Analyze the definition to find out what is allowed and what is not.
- Look for other definitions of problems (if a problem definition is wrong, no number of solutions will solve the real problem).
In conclusion, sometimes to solve a problem we need to remove a mental (and unnecessary) constriction or assumption we initially imposed on ourselves (the lines must be straight, the lines must be
drawn inside a ‘subjective’ square, etc.). In fact, mental constrictions always limit our investigation field.
Here are more tips and puzzle-solving strategies to consider.
Alternative solutions
These solutions seem less mathematical/logical but more creative!
3 line solution:
From a mathematical point of view, a dot/point has no dimension, but on the paper, the dots appear like small discs... Then, we can use the thickness of the lines to solve the puzzle with just 3
contiguous segments:
Tridimensional solution:
The problem is formulated in a way that makes us implicitly assume it must be solved in plane geometry... It is, however, possible to solve it using a different surface, like a sphere or a cylinder, and by drawing only one single line (see example below).
The origami-like solution:
This is our favorite one! Reproduce the puzzle on a square sheet of paper. By ingeniously folding it, according to the example below, it is possible to align the 9 dots in order to connect them
together with a final pencil stroke.
Source: MateMagica, Sarcone & Waeber, ISBN: 88-89197-56-0.
Sixteen Dot Version
Can you solve the Sixteen Dot (4 x 4) puzzle variant shown below? Again, you just have to join the dots together without lifting your pencil. What is the MINIMUM number of straight lines required to
solve it? Do you notice any correlation between number of dots and number of connecting lines?
See the solution
|
{"url":"http://archimedes-lab.org/How_to_Solve/9_dots.html","timestamp":"2014-04-17T01:29:24Z","content_type":null,"content_length":"50566","record_id":"<urn:uuid:955dd68d-57b0-4324-8f42-ff90cbab0658>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Greenbrae Geometry Tutor
Find a Greenbrae Geometry Tutor
...I have been doing it professionally now for over ten years. I love it when my students understand a new concept. Their eyes light up, they smile at me and say, "but that's easy!" Most students
who come to me have been having trouble for a long time, but managed to get by.
10 Subjects: including geometry, calculus, precalculus, algebra 1
...I was enrolled in Harvard Divinity School before leaving to start my first video production company. I named the company after a Native American religion and have continued my religious studies
since. I am married to a Biblical scholar and have produced numerous instructional pieces on women's spirituality and Biblical subjects.
22 Subjects: including geometry, English, writing, physics
...I also teach people how to excel at standardized tests. Unfortunately for many people with test anxiety, test scores are very important in college and other school admissions and can therefore
have a huge impact on your life. If you approach test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve.
48 Subjects: including geometry, Spanish, English, reading
...Just as there are hundreds of ways to prove the Pythagorean Theorem, there's bound to be an explanation for you. Math isn't an abstract subject for geniuses, it's a practical tool set that just
requires a lot of practice and hard work. My goal for you isn't to just memorize formulas and regurgitate facts, but to fall in love with the subject the same way I did.
19 Subjects: including geometry, calculus, physics, writing
I graduated from the Florida State University with a B.S. in chemistry, and minors in Computer Science and Mathematics. Currently, I am attending the University of San Francisco to obtain my
master's degree in chemistry. While in college, I did research in an organic chemistry lab for more than a year, in addition to teaching chemistry labs to college students.
4 Subjects: including geometry, chemistry, oboe, Microsoft Windows
|
{"url":"http://www.purplemath.com/greenbrae_ca_geometry_tutors.php","timestamp":"2014-04-19T20:03:47Z","content_type":null,"content_length":"24243","record_id":"<urn:uuid:2c796c26-1e04-4a3f-92d9-6bde933af3d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This blog concerns the mathematical analysis of Candyland®, a children’s game produced by Hasbro (before that Milton Bradley). The original game was designed by Eleanor Abbot in 1945. Over the
years, subtle rule changes have been made. This analysis is based on the rules in the version of the game that our family owns!
Before reading this article, if you have not done so already, you might want to read the previous two articles regarding the analysis of Chutes and Ladders and Risk. These will set some context.
The Game
There is no skill required to play Candyland (other than being able to recognise colours). A deck of movement cards is shuffled, and players take turns to move their tokens according to the
instructions on the card.
The board consists of a linear track containing 134 spaces, mostly coloured red, green, blue, yellow, orange or purple. In addition there are six pink spaces containing named characters and two bridges, and three of the coloured spaces are sticky (more on this later).
Players alternate in drawing movement cards, most of which show one of six colours, and then moving their token ahead to the next space of that colour. Some cards have two marks of a colour, in which
case the player moves his or her marker ahead to the second-next space of that colour. The deck has one pink card for each named location, and drawing such a card moves a player directly to that
board location (This move can be either forward or backward). Finally, if a player lands on one of the sticky spaces, he misses his next turn.
Landing on the space at the start of one of the two bridges allows the player to take a short cut. The rules are not explicit about where/how the player moves on a bridge, but in our house, landing on the bridge places you in limbo on the bridge at the end of that turn, and the card you turn over on your subsequent move determines your destination past the exit of the bridge.
The first player to the last space on the board is the winner.
The Board
My first task was to model the board. There are 134 visible spaces on the board, which can be stored as a simple array. In addition, there is a space at index zero for the starting location (players start off the board). I modeled the bridges as their own state (when a player is on a bridge, they are on either space 135 or space 136). Finally, to represent the sticky locations, there are three additional ‘hidden’ squares 137-139, one behind each sticky location. Landing on a sticky location moves the player to this square for their ‘missed turn’, then they move forward on their next turn.
First I created a quick stylised map in Excel (shown on the left) from the game board (shown below). This enabled me to enter the data for the cell colours into an array.
Once done, I could then get code to render the picture:
Distribution of cards
I counted the distribution of cards in the game. Assuming my children had not lost any, there were 64 cards supplied in our version of the game.
Crippled Markov Chain
The previous systems I have analysed with stochastic Markov chains (RISK dice battles, and Chutes & Ladders) have been memoryless; the probability of moving to different states did not depend on events that have happened in the past. This is not the case with Candyland.
As cards are drawn, they are used and discarded and so the probability of which card will be turned up next is dependent on cards that have already been seen. (This is easy to confirm with a thought
experiment: For instance, if all four double red cards have already been played, the probability that the next unseen card is also a double red will be zero). To work through every possible
combination of ways that the cards could be arranged in the deck is computationally infeasible.
I’m going to make a simplification to turn the system back into a memoryless stochastic system. (We’ll check later on how good this approximation is).
How I’m going to simplify the system is that, after drawing a card and acting on it, we replace it back into the deck and shuffle. At each draw, there are always 64 cards in the deck, and so a 1/64 chance of drawing any distinct card (essentially making every draw just like the first draw). Yes, it does mean that it is mathematically possible to draw the same card again (especially dangerous when drawing one of the special pink cards, meaning that a player could be sent back more than once), but I’m assuming the probabilities of these events occurring will be ‘noise’ compared to the main probabilities, especially in short games. (In longer games, which sometimes happen, all the cards in the deck are used up anyway and need to be reshuffled, so this results in much the same situation anyway.)
Transition Matrix
The transition matrix (for explanation of a transition matrix, review the posting about Chutes and Ladders) for this game is 140x140, with each element (i, j) describing the probability of a player moving from location i to location j on the next move. There is a row/column for each of the visible squares on the board, as well as locations for the ‘hidden’ states (the start, the bridges, and the temporary sticky states ‘behind’ the three licorice spaces).*
By stochastic definition, the sum of all the probabilities in a row adds up to 1.0 — Something has to happen on each iteration.
*Yes, yes, I know that we can reduce the size of the matrix slightly by reducing the redundant rows and columns associated with the start of the bridges.
A snippet of the transition matrix is shown below for the follow section of board.
If the player is currently on square 7 there is a 6/64 chance of drawing a Single Purple card and moving to square 8. Square 9 is a special pink character, and there is only a 1/64 chance of drawing
the unique character card for this space. (Similarly 20 and 42 and all the other pink squares.)
Squares 10-14 are also reachable with single cards with probability 6/64. Further out, the squares that are accessible with double cards have different probabilities because of their different
frequencies in the deck.
With care it is possible to construct a transition matrix for the game which enumerates the probabilities of moving from every game state to every other game state.
All that is needed now is to create a column identity vector with 1.0 in row zero and the rest zeros (all games are started with the player token off the board with 100% certainty!), and multiply
this by the transition matrix. The row vector output will show the probability distribution for where the player token could be after one card draw. Below are the results represented in graphical
form, with shading denoting the probabilities. Darker red representing high probabilities and Lighter red representing low probabilities.
After one draw, the darkest areas are, obviously, the ones that are reached by the cards that have the highest distribution in the deck (the single colours). The spaces obtained by the double colours
are lighter shaded (some lighter than others because there are less of these cards in the deck). You can see that it is possible to reach the first bridge with a single green, and each of the special
pink locations is shaded very light to represent the 1/64 chance of turning up the special character card for that location. It’s not possible to complete the game in one move, so the percentage
chance to finish is 0.000%
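A minimal NumPy sketch of this propagation (not the original code; filling in the matrix and the index of the finish space are assumptions based on the description above):

```python
import numpy as np

N = 140                       # 0 = start, 1..134 = board, 135-139 = hidden states
T = np.zeros((N, N))          # to be filled so that T[i, j] = P(token moves i -> j)
# ... fill T from the board layout and the card counts, as described in the text ...
T[134, 134] = 1.0             # assumed: the finish space is absorbing

v = np.zeros(N)
v[0] = 1.0                    # every game starts off the board with certainty

finish_by_draw = []
for draw in range(1, 101):
    v = v @ T                 # distribution over board states after one more card
    finish_by_draw.append(v[134])   # cumulative probability the game has finished
```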
If we multiply the above output vector by the transition matrix again, the new output vector represents the superposition of probabilities of where the player token could be after the second drawn
card (starting from all the weighted starting positions of the first vector). The result is shown below. The probability ‘cloud’ is thinning out a little, and the most probable space is the red
square 14 (I’ll leave it as an exercise to the reader to convince themselves why this would be the case). The pink squares are the same shade, and there is still no way to reach the finish line with
just two card draws. You can see a second probability ‘cloud’ appearing downstream of the first bridge (and also very, very faint clouds downstream of each pink square, but these are so faint that
contrast does not allow the eye to differentiate them from white using the shading scheme I’m using here).
After the third card draw, the probability distribution looks like the picture below. The clouds are getting wider, and more of the board is shaded. (It’s possible now that the second bridge might
have been used).
After four draws, an interesting event occurs — it is mathematically possible to win the game! (The probability of finishing the game in just four moves is approximately once in every 10,000 games). There are multiple ways to achieve this victory, but all require that the first card drawn be the pink special card that takes the player to square 102. (In our version of the game, this is the Ice Cream).
By five draws, there are more ways to achieve the victory condition as you can see from the probability increase of the token being on the finish space.
Below are thumbnails of the board showing the percentage of finishing increasing with each drawn card.
Here is a short video animation showing the probability distribution for the first 50 card draws.
How long is an ‘average’ game?
Below is a graph of the probability of a single person finishing the game by move-n.
99.2% of games will finish within 100 card draws.
The Modal number of card draws is 22. (More games will finish using this number of moves than any other number).
The Median number of card draws is 25. (Half the number of games will be shorter than this, and half longer).
The Arithmetic Mean (average) number of card draws is 30.56. (If you play N games, on average, you’ll draw N*30.56 cards).
How good an approximation is our simplification?
I was curious to learn just how close the replace-and-reshuffle simplification of the model was to real results, so I decided to run an objective comparison. I created a Monte-Carlo simulation of the game so that I could automate the game and execute it millions of times. (This did not take long to code, as most of the work in creating the first simulation was the data structures and serialization of state, and this could all be re-used).
Another benefit of writing the simulation is that it is possible to model a game with more than one player. There are two reasons why this is important:
1) When there are more players, more cards are drawn, so it is much more likely that the deck will need to be reshuffled (sometimes many times). Reshuffling creates the situation that the special
pink cards are encountered multiple times.
2) In real-life, a game ends when the first person crosses the finish line. With multiple players, there are multiple chances for the game to end in a round.
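A drastically simplified sketch of the simulated draw loop for one player (colour cards only, ignoring the pink character cards, bridges and sticky spaces; the author's actual simulation handled all of the rules):

```python
import random

COLOURS = ["red", "purple", "yellow", "blue", "orange", "green"]

def play_one_game(board, deck):
    """Simulate one single-player game and return the number of cards drawn.
    `board` is a list of space colours and `deck` a list of colour cards;
    this toy version moves to the next space of the drawn colour."""
    cards = []
    pos, draws = 0, 0
    while pos < len(board) - 1:
        if not cards:                      # deck exhausted: reshuffle, as in the real game
            cards = deck[:]
            random.shuffle(cards)
        colour = cards.pop()
        draws += 1
        # advance to the next space of the drawn colour, or to the finish if none remains
        pos = next((i for i in range(pos + 1, len(board)) if board[i] == colour),
                   len(board) - 1)
    return draws

# e.g.: board = [random.choice(COLOURS) for _ in range(134)]
#       deck  = COLOURS * 10
#       draws = [play_one_game(board, deck) for _ in range(100_000)]
```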
Quick Compare
Here I've plotted the results of The Markov Chain Analysis and Monte-Carlo Simulation (100,000,000 games) on the same chart.
They look pretty close! This is a good thing, and shows that we've probably not made any simple logic errors in the code — Getting the same results using two different mechanisms of calculation always makes one feel warm inside. The fact that the curves are pretty close also confirms that our simplification approximation was a valid one to make.
The 'true' curve, the Monte-Carlo one, peaks a little higher at the mode, which is to be expected. Why? Well, to complete the game in a small number of moves requires the use of one of the pink cards, and so these will be used up. Until the deck is reshuffled (post 64 drawn cards), there is zero chance of encountering that pink card again. However, in the Markov Chain model, it is possible to encounter the same pink card again, and encountering the same card twice will send you backwards. This does not happen often, but it does happen enough to slightly lower the percentage chance at the modal number of draws.
Another way to look at the data is to view the cumulative probability of a game finishing by move n. As you can see, the simplified Markov model seems to slightly over-estimate the probabilities once about half the deck has been drawn.
The Mode and Median of the two methods are the same at 22 and 25 respectively, but the Mean decreases from 30.56 for the Markov approximation to 28.98 for the Monte-Carlo simulation.
Multiple Players
When there is more than one player in a game, the statistics change quite considerably because of the two reasons mentioned earlier.
Below is the graph of the percentage chance of completing a game in n rounds. Note: here 'round' refers to each person taking a card, which is subtly different from counting drawn cards. With four players in the game, a 'round' uses four times as many cards, so the deck is reshuffled much more often. Remember, the mean number of cards taken in a single-player game is less than half a deck. This means that in half of the single-player games, the Ice Cream card (which teleports you most of the way across the board) will never have been selected. With four players eating through cards at four times this pace, every time there is a re-shuffle it's a certainty that one player has used this card.
Here is the same data presented in cumulative probability format.
Effect on the averages
Here are the changes to the average number of rounds, based on the number of players.
         1 Player   2 Players   3 Players   4 Players
Mode        22          14          14          13
Median      25          19          16          14
Mean       28.98       19.97       16.70       14.87
|
{"url":"http://www.datagenetics.com/blog/december12011/index.html","timestamp":"2014-04-20T23:50:56Z","content_type":null,"content_length":"21404","record_id":"<urn:uuid:918b56a1-197b-400e-a16e-3b8451692db1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interpreting results: Kaplan-Meier curves
Kaplan-Meier survival fractions
Prism calculates survival fractions using the product limit (Kaplan-Meier) method. For each X value (time), Prism shows the fraction still alive (or the fraction already dead, if you chose to begin
the curve at 0.0 rather than 1.0). This table contains the numbers used to graph survival vs. time.
The calculations take into account censored observations. Subjects whose data are censored--either because they left the study, or because the study ended -- can't contribute any information beyond
the time of censoring. This makes the computation of survival percentage somewhat tricky. While it seems intuitive that the curve ought to end at a survival fraction computed as the total number of
subjects who died divided by the total number of subjects, this is only correct if there are no censored data. If some subjects were censored, then subjects were not all followed for the same
duration, so computation of the survival fraction is not straightforward (which is exactly what the Kaplan-Meier method is for).
If the time of death of some subjects is identical to the time of censoring for others, Prism does the computations assuming the deaths come first.
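As a point of reference, a bare-bones sketch of the product-limit computation in Python (a generic illustration of the method, not GraphPad Prism's implementation):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times  : time of death or censoring for each subject
    events : 1 if the subject died at that time, 0 if censored
    Ties between deaths and censorings are broken with deaths first,
    matching the convention described above."""
    order = sorted(range(len(times)), key=lambda i: (times[i], -events[i]))
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= (at_risk - deaths) / at_risk   # multiply in this time's survival factor
            curve.append((t, surv))
        at_risk -= deaths + censored               # censored subjects leave the risk set here
    return curve
```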
Confidence intervals of survival percentages
Prism reports the uncertainty of the fractional survival as a standard error or 95% confidence intervals. Standard errors are calculated by the method of Greenwood.
You can choose between two methods of computing the 95% confidence intervals:
•Asymmetrical method (recommended). It is computed using the log-log transform method, which has also been called the exponential Greenwood formula. It is explained on pages 42-43 of Machin (reference below); the standard forms of both it and the Greenwood standard error are sketched after this list. You will get the same results from the survfit R function by setting error to Greenwood and conf.type to log-log. These intervals apply to each time point. The idea is that at each time point, there is a 95% chance that the interval includes the true population survival. We call the method asymmetrical because the distance that the interval extends above the survival fraction does not usually equal the distance it extends below. These are called pointwise confidence limits. It is also possible (but not by Prism) to compute confidence bands that have a 95% chance of containing the entire population survival curve. These confidence bands are wider than pointwise confidence limits.
•Symmetrical method. These intervals are computed as 1.96 times the standard error in each direction. In some cases the confidence interval calculated this way would start below 0.0 or end above 1.0
(or 100%). In these cases, the error bars are clipped to avoid impossible values. We provide this method only for compatibility with older versions of Prism, and don't recommend it.
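For reference, the usual textbook forms of the two quantities mentioned above are sketched below; consult Machin et al. or the Prism documentation for the exact conventions Prism follows. Greenwood's variance estimate is

$$ \widehat{\operatorname{Var}}\!\left[\hat S(t)\right] = \hat S(t)^{2} \sum_{t_i \le t} \frac{d_i}{n_i\,(n_i - d_i)}, $$

where d_i deaths occur among n_i subjects at risk at time t_i, and the log-log (exponential Greenwood) 95% limits are

$$ \hat S(t)^{\exp\left(\pm 1.96\,\sqrt{\hat v(t)}\right)}, \qquad \hat v(t) = \frac{1}{\left[\ln \hat S(t)\right]^{2}} \sum_{t_i \le t} \frac{d_i}{n_i\,(n_i - d_i)}, $$

which automatically stay between 0 and 1, hence the asymmetry.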
Number of subjects at risk at various times
One of the pages (or 'views') in the survival analysis results is "# of subjects at risk". Since the number at risk applies to a range of days, and not to a single day, the table is a bit ambiguous.
Here are the top six rows of that table for the sample data (comparing two groups) that you can choose from Prism's Welcome dialog.
Days Standard Experimental
The experiment starts with 16 subjects receiving standard therapy and 14 receiving experimental therapy. On day 90, one of the patients receiving standard therapy died. So the value next to 90 tells
you that there were 16 subjects alive up until day 90, and 15 at risk between day 90 and 142. At day 142, the next patient dies, also on standard therapy. So between days 142 and 150 (the next
death), 14 subjects are at risk in the standard group.
Prism does not graph this table automatically. If you want to create a graph of number of subjects at risk over time, follow these steps:
1.Go to the results subpage of number of subjects at risk.
2.Click New, and then Graph of existing data.
3.Choose the XY tab and a graph with no error bars.
4.Change the Y-axis title to “Number of subjects at risk” and the X-axis title to “Days”.
|
{"url":"http://www.graphpad.com/guides/prism/6/statistics/the_fraction_(or_percent)_survival_at_each_time.htm","timestamp":"2014-04-21T12:09:22Z","content_type":null,"content_length":"21675","record_id":"<urn:uuid:dab04ead-dd1f-458d-8d29-44e870cfd102>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Now we move on to the second part of Exercise 5.2, which requires implementing regularized logistic regression using Newton's Method. Plot the data:
Machine Learning Ex 5.1 – Regularized Linear Regression
The first part of Exercise 5.1 requires implementing a regularized version of linear regression. Adding a regularization parameter can prevent the problem of over-fitting when fitting a high-order polynomial.
Machine Learning Ex4 – Logistic Regression
Exercise 4 required implementing Logistic Regression using Newton's Method. The dataset in use is 80 students and their grades on 2 exams; 40 students were admitted to college and the other 40 students were not. We need to implement a binary classification model that estimates college admission based on the student's scores on...
ggplot2 Version of Figures in “25 Recipes for Getting Started with R”
In order to provide an option to compare graphs produced by the basic internal plot function and ggplot2, I recreated the figures in the book, 25 Recipes for Getting Started with R, with ggplot2. The code used to create the images is in separate paragraphs, allowing easy comparison.
the batman equation
HardOCP has an image with an equation which apparently draws the Batman logo.
ProjectEuler-Problem 46
It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square. For example, 9 = 7 + 2×1^2 and 15 = 7 + 2×2^2.
R package DOSE released
Disease Ontology (DO) provides an open source ontology for the integration of biomedical data that is associated with human disease. DO analysis can lead to interesting discoveries that deserve further clinical investigation. DOSE was designed for semantic similarity measure and enrichment analysis.
[Project Euler] – Problem 58
Starting with 1 and spiralling anticlockwise in the following way, a square spiral with side length 7 is formed:
37 36 35 34 33 32 31
38 17 16 15 14 13 30
[Project Euler] – Problem 57
It is possible to show that the square root of two can be expressed as an infinite continued fraction: √2 = 1 + 1/(2 + 1/(2 + 1/(2 + …))) = 1.414213… By expanding this for the first four iterations, we get:
Machine Learning Ex3 – Multivariate Linear Regression
Part 1. Finding alpha. The first question to resolve in Exercise 3 is to pick a good learning rate alpha. This requires making an initial selection, running gradient descent and observing the cost function.
|
{"url":"http://www.r-bloggers.com/author/ygc/page/3/","timestamp":"2014-04-21T14:45:00Z","content_type":null,"content_length":"36698","record_id":"<urn:uuid:e0f1b53c-b0b3-4dc8-9325-9237f958b260>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This module defines the functions that can be used to simulate the running of QIO computations.
type Pure = VecEqL CC HeapMap
A Pure state can be thought of as a vector of classical basis states, stored as Heaps, along with complex amplitudes.
updateP :: Pure -> Qbit -> Bool -> Pure
The state of a qubit can be updated in a Pure state, by mapping the update operation over each Heap.
newtype Unitary
A Unitary can be thought of as an operation on a HeapMap that may produce a Pure state.
Monoid Unitary The Unitary type forms a Monoid
unitaryRot :: Rotation -> Bool
A function that checks if a given Rotation is in fact unitary. Note that this is currently a dummy stub function, and states that any rotation is unitary. (This is only o.k. at the moment as all the rotations defined in the QIO library are unitary, but won't catch non-unitary user-defined Rotations.)
uMatrix :: Qbit -> (CC, CC, CC, CC) -> Unitary
Given the four complex numbers that make up a 2-by-2 matrix, we can create a Unitary that applies the rotation to the given qubit.
runU :: U -> Unitary
Any member of the U type can be "run" by converting it into a Unitary.
data StateQ
A quantum state is defined as the next free qubit reference, along with the Pure state that represents the overall quantum state
free :: Int
pure :: Pure
pa :: Pure -> RR
Given a Pure state, return the sum of all the amplitudes.
data Split
A Split is defined as a probability, along with the two Pure states.
split :: Pure -> Qbit -> Split
Given a Pure state, we can create a Split, by essentially splitting the state into the part where the qubit is True, and the part where the qubit is False. This is how measurements are implemented in QIO.
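(As a language-neutral illustration of the idea just described, here is a rough Python sketch of splitting an amplitude map on one qubit; it mirrors the prose above and is not the QIO library's Haskell code.)

```python
# Illustration only: a "Pure" state modelled as {basis assignment (tuple of bools): amplitude}.
def split(pure, qbit):
    true_part  = {b: a for b, a in pure.items() if b[qbit]}
    false_part = {b: a for b, a in pure.items() if not b[qbit]}
    total  = sum(abs(a) ** 2 for a in pure.values())
    p_true = sum(abs(a) ** 2 for a in true_part.values()) / total
    return p_true, true_part, false_part

# e.g. an (approximately) equal superposition of |00> and |11>:
# split({(False, False): 0.707, (True, True): 0.707}, 0) -> roughly (0.5, {...}, {...})
```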
class Monad m => PMonad m where
We can extend a Monad into a PMonad if it defines a way of probabilistically merging two computations, based on a given probability.
PMonad IO IO forms a PMonad, using the random number generator to pick one of the "merged" computations probabilistically.
PMonad Prob Prob is also a PMonad, where the result of both computations are combined into a probability distribution.
data Prob a
The type Prob is defined as a wrapper around Vectors with Real probabilities.
Monad Prob Prob forms a Monad
PMonad Prob Prob is also a PMonad, where the result of both computations are combined into a probability distribution.
Show a => Show (Prob a) We can show a probability distribution by filtering out elements with a zero probability.
evalWith :: PMonad m => QIO a -> State StateQ (m a)
Given a PMonad, a QIO Computation can be converted into a Stateful computation over a quantum state (of type StateQ).
eval :: PMonad m => QIO a -> m a
A QIO computation is evaluated by converting it into a stateful computation and then evaluating that over the initial state.
run :: QIO a -> IO a
Running a QIO computation evaluates it in the IO PMonad
sim :: QIO a -> Prob a
Simulating a QIO computation evaluates it in the Prob PMonad
|
{"url":"http://hackage.haskell.org/package/QIO-1.2/docs/QIO-Qio.html","timestamp":"2014-04-23T07:30:05Z","content_type":null,"content_length":"20447","record_id":"<urn:uuid:ba2870a9-9427-4851-9e65-8392b2723258>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Re: what is the difference between '.psel' and '.xbs' for heckman postestimation
Re: st: Re: what is the difference between '.psel' and '.xbs' for heckman postestimation
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Re: what is the difference between '.psel' and '.xbs' for heckman postestimation
Date Tue, 20 Apr 2010 11:05:50 +0000 (GMT)
--- On Tue, 20/4/10, jing liu wrote:
> So after run -heckman- command, I should use -predict-
> command.
Not necessarily, but it can sometimes be useful; another
possibility is to use the -margins- command.
> For stata 8 it was -predict varname, xbs-. The -xbs-
> is used to generate the probability that an observation
> will be selected or selection probability. But there is
> a -psel- option talking about the same thing in Stata11.
> So now I don't know which one I should choose, and
> what is the difference between the -xbs- and -psel- now.
-xbs- is the linear predictor, and -psel- is the
probability. You can use -xbs- to create -psel-, or you
can use -psel- directly.
*------- begin example --------------
webuse womenwk, clear
heckman wage educ age, ///
select(married children educ age)
predict double psel,psel
predict double xbs,xbs
gen double psel2 = normal(xbs)
twoway function y=x || ///
scatter psel2 psel, ///
*--------- end example --------------
> thirdly, it was possible to use the following code to
> calculate the inverse mills ratio (gen testpr =
> normden(selxbpr)/norm(selxbpr)), now we are able
> to use the subcommand (mills) to do the same thing, but
> the commands (normden and norm) don't exist
> any more. Are they changed? And what are they now?
they are now -normal()- and -normalden()-, alternatively
you can use the -nshazard- or -mills- options in
-predict- to get those directly without any further calculation.
> fourthly, it says twostep doesn't work with any
> weight subcommand in heckman
> regression. But I run my heckman with both twostep and
> pweight/iweight.
Well, that answers your first question: if you have
weights then you have no choice, you'll have to use
the ML.
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-04/msg01065.html","timestamp":"2014-04-16T16:10:38Z","content_type":null,"content_length":"10076","record_id":"<urn:uuid:46d74275-b10b-4e83-b84b-697efe66a692>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User user2734
May 17, comment on "Restriction of a complex polynomial to the unit circle": [deleted the comment referred to in the OP's comment from Mar. 5, 4:37]
May 17, comment on "What is the “right” universal property of the completion of a metric space?": [deleted previous comments]
May 17, comment on "What is the “right” universal property of the completion of a metric space?": [deleted previous comments]
May 17, comment on "Practical applications of algebraic number theory?": I honestly don't know [deleted previous comment]
May 17, comment on "Given a finite field $K$, what are the possible degrees of a polynomial $p\in K[x]$ such that $x\longmapsto p(x)$ is one-to-one?": There seem to be some discussion on the subject of permutation polynomials in Ch. 7 of Lidl-Niederreiter (books.google.com/…). [I hope I understood the question correctly. In the title shouldn’t $x\mapsto f(x)$ really be $c\mapsto f(c)$?]
May 15, comment on "Reference request for category theory works which quickly prove the theorem which generalises the 1st isomorphism theorem for groups/rings/…": Perhaps instead of category theory, you should look at some basic book on universal algebra, for example, you can try part 3 of Cohn’s algebra.
May 14, comment on "is the presheaf category of a locally small category locally small?": Thanks for the pointers! The link doesn't seem to work, but I will look at nLab now that I know what to look for. I suppose that at the moment (before finished reading Mac Lane) I'll stick to the foundations described in Section 1.6 of Mac Lane. Although limited (as I now see..), these foundations seem to be sufficient in Mac Lane (as Mike Shulman told me in a comment...), and I should probably not "dive" into something else right now. Thanks again for your help.
May 14, comment on "is the presheaf category of a locally small category locally small?": Thank you very much! Having such tips from an expert is extremely helpful. I have a million more question to ask, but I guess these comments aren't the right place for such a mini
May 14, comment on "is the presheaf category of a locally small category locally small?": Thank you very much for your answer! I hope it is OK that I ask another silly question: Is my comment above correct, but just useless, because (for some reason that I still don't understand) "locally small" can refer to non-small hom-sets that are in bijection with small sets? [I'm assuming a single universe, as in Mac Lane.]
May 13, comment on "is the presheaf category of a locally small category locally small?": @Todd Trimble: Is this simple argument wrong? Consider a non-empty hom-set $\widehat{C}(F,G)$, and let $\tau$ be in this hom-set. If the hom-set is small, then by transitivity of the universe, $\tau$ is small too. But $\tau$ is just a function with domain $\operatorname{obj}(C)$ (so $\tau$ is a triple with $\operatorname{obj}(C)$ as its first component). But then from transitivity again we get $\operatorname{obj}(C)\in U$, a contradiction. (So, in general, the answer to the original question is ``no'')
May 13, comment on "If G is monadic and the comparison functor is an equivalence that is not an isomorphism, does G create limits?": @Mike Shulman: Thanks for your answers!
|
{"url":"http://mathoverflow.net/users/2734/user2734?tab=activity","timestamp":"2014-04-20T01:50:01Z","content_type":null,"content_length":"46320","record_id":"<urn:uuid:51df7e45-3393-4e9c-afdc-7eabc539f5a5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Mathematical Gift, III: The interplay between topology, functions, geometry, and algebra

Mathematical World, Volume 23
2005; 129 pp; softcover
ISBN-10: 0-8218-3284-0
ISBN-13: 978-0-8218-3284-4
List Price: US$32
Member Price: US$25.60
Order Code: MAWRLD/23
This item is also sold as part of the following set: MAWRLD-GSET

This book brings the beauty and fun of mathematics to the classroom. It offers serious mathematics in a lively, reader-friendly style. Included are exercises and many figures illustrating the main concepts.

The first chapter talks about the theory of manifolds. It includes discussion of smoothness, differentiability, and analyticity, the idea of local coordinates and coordinate transformation, and a detailed explanation of the Whitney imbedding theorem (both in weak and in strong form). The second chapter discusses the notion of the area of a figure on the plane and the volume of a solid body in space. It includes the proof of the Bolyai-Gerwien theorem about scissors-congruent polygons and Dehn's solution of the Third Hilbert Problem.

This is the third volume originating from a series of lectures given at Kyoto University (Japan). It is suitable for classroom use for high school mathematics teachers and for undergraduate mathematics courses in the sciences and liberal arts. The first and second volumes are available as Volume 19 and Volume 20 in the AMS series, Mathematical World.

Readership: Advanced high-school students and undergraduates in mathematics.

Contents:
The story of the birth of manifolds
• The prelude to the birth of manifolds
• The birth of manifolds
The story of area and volume from everyday notions to mathematical concepts
• Transition from the notion of "size" to the concept of "area"
• Scissors-congruent polygons
• Scissors-congruent polyhedra
|
{"url":"http://ams.org/bookstore?fn=20&arg1=mawrldseries&ikey=MAWRLD-23","timestamp":"2014-04-19T00:16:54Z","content_type":null,"content_length":"16046","record_id":"<urn:uuid:6dea84ce-6d42-4f22-9dac-1be3c4c7554e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simplifying with Assumptions
Simplify[expr,assum] simplify expr with assumptions
This tells Mathematica to make the stated assumption, so that simplification can proceed.
Element[x,dom] state that x is an element of the domain dom
Element[{x[1],x[2],...},dom] state that all the x[i] are elements of the domain dom
Reals real numbers
Integers integers
Primes prime numbers
Some domains used in assumptions.
|
{"url":"http://reference.wolfram.com/mathematica/tutorial/SimplifyingWithAssumptions.html","timestamp":"2014-04-19T09:37:09Z","content_type":null,"content_length":"36188","record_id":"<urn:uuid:737c878b-f109-44fb-9ed6-91ed78fc5ffb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sword Gestures
It has been two weeks since I have done the slightest amount of work on the sword game. Kudos to Ben for keeping it moving along. Anyway, it’s time for some brainstorming.
Our current idea (as of last week) for a control scheme for the sword game is (1) to have it in slow motion for strategic, rather than strictly action, play, and (2) to control the sword based on
gestures, of sorts. Not pre-programmed gestures like (ugh) Black and White; more like drawings of what you want the sword to do. Draw a Z and you’ll get a Zorro motion. I like this idea a lot, and I
think it has a lot of potential. So, I have to tackle the problem of, given a gesture, how do you figure out what to do with the sword.
Well, we want it to be sort of vague and interpretive. We also want it to be natural, the way a human would interpret when given that gesture. These two things point me straight toward setting up
descriptive equations, filling in known information, and solving for the rest of the details. But what kind of information does the gesture specify?
We know it starts about where the sword currently is when you draw the gesture, and it ends in the same direction as the end of the gesture. We can intuit velocities, but I think it’s better to let
the equations figure those out. We also know a few points on the curve (as many as we need, really). We’re trying to solve for the force to exert at every step. Oh, and we know we have a maximum
force we are allowed to exert, which should play a role in the solving process (so the solution doesn’t try to do anything impossible).
A Bezier-like spline might suffice. We are given a curve, i.e. a bunch of points, and we’re trying to find a smooth interpolant. Let’s look at what information we have at each point:
• A 2D position at each point.
• A 3D position and velocity at the first point.
• (perhaps) A time at the last point.
Not very much information. And the information we need:
• A 3D force for the handle at each point.
• A 3D force for the tip at each point.
I’m making the assumption that we linearly interpolate the forces between each point, so the force half-way between two points is the average of the forces at each of the two points. So we have 6
unknowns. We need 6 equations (for each point).
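To spell out the assumption just stated (my notation, with s the fraction of the way from gesture point i to point i+1):

$$ F_{\mathrm{handle}}(s) = (1-s)\,F_{\mathrm{handle}}^{\,i} + s\,F_{\mathrm{handle}}^{\,i+1}, \qquad F_{\mathrm{tip}}(s) = (1-s)\,F_{\mathrm{tip}}^{\,i} + s\,F_{\mathrm{tip}}^{\,i+1}, \qquad 0 \le s \le 1, $$

so the unknowns attached to gesture point i are the two 3-vectors F_handle and F_tip: six numbers, which is why six constraint equations are needed per point.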
How do we interpret the 2D data? I’m going to say for simplicity’s sake that we put a plane in front of the fighter and require that the sword pass through the corresponding point. This may be the
weakest assumption of the algorithm. And there’s another problem with this approach: in order to compute that, we need the position and orientation of the sword at each point, another five unknowns
in exchange for only two equations!
Hmm, as I hash this approach out, it seems to be breaking down. Maybe I should scrap and come from a different angle.
One thought on “Sword Gestures”
1. Look into a game called "Die by the Sword": mouse-controlled sword movement in 3D.
|
{"url":"http://lukepalmer.wordpress.com/2007/02/08/sword-gestures/","timestamp":"2014-04-21T14:58:16Z","content_type":null,"content_length":"58913","record_id":"<urn:uuid:bfb73093-78fb-47e4-a0be-fd6359022f0b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• Abstract - The Sum-over-Forests Density Index: Identifying Dense Regions in a Graph
This work introduces a novel nonparametric density index defined on graphs, the Sum-over-Forests (SoF) density index. It is based on a clear and intuitive idea: high-density regions in a graph are
characterized by the fact that they contain a large amount of low-cost trees with high outdegrees while low-density regions contain few ones. Therefore, a Boltzmann probability distribution on the
countable set of forests in the graph is defined so that large (high-cost) forests occur with a low probability while short (low-cost) forests occur with a high probability. Then, the SoF density
index of a node is defined as the expected outdegree of this node on the set of forests, thus providing a measure of density around that node. Following the matrix-forest theorem and a statistical
physics framework, it is shown that the SoF density index can be easily computed in closed form through a simple matrix inversion. Experiments on artificial and real data sets show that the proposed
index performs well on finding dense regions, for graphs of various origins.
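Schematically, the definitions described in the abstract amount to the following (the notation here is mine and may differ from the paper's):

$$ P(F) = \frac{e^{-\theta\,C(F)}}{\sum_{F'} e^{-\theta\,C(F')}}, \qquad \operatorname{SoF}(i) = \sum_{F} P(F)\,\deg_{F}^{+}(i), $$

where the sums run over the forests F of the graph, C(F) is the total cost of forest F, θ sets how strongly low-cost forests are favoured, and deg_F^+(i) is the outdegree of node i in F. Per the abstract, this expectation can be computed in closed form through a single matrix inversion via the matrix-forest theorem.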
Index Terms:
Indexes, Equations, Vegetation, Probability distribution, Physics, Correlation, Mining methods and algorithms, Trees, Graph Theory, Discrete Mathematics, Mathematics of Computing, Data mining
Mathieu Senelle, Silvia Garcia-Diez, Amin Mantrach, Masashi Shimbo, Marco Saerens, François Fouss, "The Sum-over-Forests Density Index: Identifying Dense Regions in a Graph," IEEE Transactions on
Pattern Analysis and Machine Intelligence, 21 Nov. 2013. IEEE computer Society Digital Library. IEEE Computer Society, <http://doi.ieeecomputersociety.org/10.1109/TPAMI.2013.227>
|
{"url":"http://www.computer.org/portal/csdl/trans/tp/preprint/06656802-abs.html","timestamp":"2014-04-19T14:38:02Z","content_type":null,"content_length":"53480","record_id":"<urn:uuid:4e92e4de-9fff-4d4a-9563-0edb870efb59>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
online assignment 2
Below is a small sample set of documents:
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 16Manage this Assignment: Print Version with Answers 2a. Electric PotentialDue: 11:00pm on Sunday, January 17, 2010Note: To understand how points are
awarded, read your instructor's Grading Policy.Ele
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 12Manage this Assignment: Print Version with Answers 4a. DC CircuitsDue: 11:00pm on Sunday, January 31, 2010Note: To understand how points are
awarded, read your instructor's Grading Policy.Capacitors
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 10Manage this Assignment: Print Version with Answers 3a. Electric CurrentsDue: 11:00pm on Sunday, January 24, 2010Note: To understand how points are
awarded, read your instructor's Grading Policy.Alte
CSU Chico - PHYS - 101213
MasteringPhysics Tutorials, Chapter 21MasteringPhysics tutorials assume that you have the formula in front of you. Open your book to the Chapter Summary on page 751. Many questions can usually be
answered by examining or using the formula. Resistance and
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 11Manage this Assignment: Print Version with Answers7. Induction Part 2Due: 11:00pm on Thursday, February 25, 2010Note: To understand how points are
awarded, read your instructor's Grading Policy.Ener
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print Viewfile:/C:/Documents%20and%20Settings/lbr/Desktop/Ch6_files/assi.Physics 220Assignment 6Assignment is due at 11:59pm on Tuesday, October 17, 2006Credit for
problems submitted late will decrease to 0% over the cour
CSU Chico - PHYS - 101213
Assignment 15Due: 11:00pm on Friday, December 4, 2009Note: To understand how points are awarded, read your instructor's Grading Policy . [ Return to Standard Assignment View][ Print ]Electricity and
Water AnalogyLearning Goal: To understand the analo
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print Viewfile:/c:/Documents%20and%20Settings/lbr/Desktop/HW3_files/ass.Physics 220Assignment 3Assignment is due at 11:59pm on Tuesday, September 26, 2006Credit for
problems submitted late will decrease to 0% over the cou
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 16Manage this Assignment: Print Version with Answers 5. MagnetismDue: 11:00pm on Thursday, February 11, 2010Note: To understand how points are
awarded, read your instructor's Grading Policy.Mass Spect
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 16Manage this Assignment: Print Version with Answers 6a. InductionDue: 11:00pm on Sunday, February 14, 2010Note: To understand how points are awarded,
read your instructor's Grading Policy.Understandi
CSU Chico - PHYS - 101213
MasteringPhysics: Assignment Print ViewPage 1 of 13Manage this Assignment: Print Version with Answers8a. Electromagnetic wavesDue: 11:00pm on Sunday, February 28, 2010Note: To understand how points
are awarded, read your instructor's Grading Policy.
US Standard Area
These are the most common measurements:
• Square Inch
• Square Foot
• Square Yard
• Acre
• Square Mile
Small surfaces are usually measured in square inches, rooms and buildings in square feet, carpet in square yards, property in acres, and territory in square miles.
Square Inch
Area is length by length, so a square inch is a square that is 1 inch on each side.
The Unit is inch × inch, which can be written many ways, such as:
• in^2
• inch^2
• sq in
• sq inch
• square inch
• etc ...
Square Foot
A square foot is a square that is 1 foot on each side.
Here Ariel the Dog sits next to a square foot made with tape measures.
The Unit is foot × foot, which can be written many ways, such as
• ft^2
• foot^2
• sq ft
• sq foot
• sq feet
• etc ...
Square Yard
A square yard is a square that is 1 yard on each side. It is equal to 9 square feet.
Acre
One acre is equal to 43,560 square feet.
Acres are commonly used to measure land.
Square Mile
A square mile is a square that is 1 mile on each side.
Square miles are commonly used to measure large areas of land.
More Examples
An acre is:
• 4,840 square yards
• 43,560 square feet
• 4,046.8564224 square meters
• a square with sides about 208.71 feet (or 69.57 yards)
• about 0.4 hectare (40% of the area of a hectare)
• a bit less than a football field
• about 16 tennis courts
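As a quick cross-check of the figures above, here is a small sketch in Python (an illustration added here, not part of the original page; it uses 5,280 feet per mile):

SQ_FT_PER_SQ_YD = 3 * 3          # 1 yard = 3 feet, so 1 square yard = 9 square feet
SQ_FT_PER_ACRE = 43_560
SQ_FT_PER_SQ_MILE = 5_280 ** 2   # 1 mile = 5,280 feet

print(SQ_FT_PER_ACRE / SQ_FT_PER_SQ_YD)    # 4840.0 square yards in an acre
print(SQ_FT_PER_SQ_MILE / SQ_FT_PER_ACRE)  # 640.0 acres in a square mile
print(SQ_FT_PER_ACRE ** 0.5)               # about 208.71 feet on each side of a square acre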
st: RE: using loop to generate distributions with different means and standard deviations
st: RE: using loop to generate distributions with different means and standard deviations
From "Sarah Edgington" <sedging@ucla.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: using loop to generate distributions with different means and standard deviations
Date Fri, 20 May 2011 11:44:35 -0700
Is your actual data really different from the simulated data you've shown?
If not I don't understand why the solution I suggested before doesn't solve
your problem. If your variables are actually numbered like you've shown,
it's still just a -forvalues- loop to get the drawnorm part working.
forv i=1/3 {
drawnorm product`i', m(max_product`i') sd(sd_product`i')
}
Unless you give more specific information about the actual problem you're
trying to solve and why the suggested solution doesn't work, I don't think
you're going to get much help.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Lance Wiggains
Sent: Friday, May 20, 2011 11:29 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: using loop to generate distributions with different means and
standard deviations
Sorry for the vagueness. Right now I'm just using simulated data for 3
different products. Here is my code:
My data looks like this
Week Product 1 Product 2 Product 3
tsset week
gen n=_n
egen max_n=max(n)
ds week n max_n, not
foreach var in `r(varlist)'{
tssmooth ma ms_`var'= `var', weights(1 1<2>1)
}
ds ms*
foreach var in `r(varlist)' {
gen week3_`var'=`var' if n==max_n
egen max_week3_`var'=max(week3_`var')
}
drop week3*
drop ms*
ds week n max_*, not
foreach var in `r(varlist)' {
gen max_`var'=max_week3_ms_`var'
}
drop max_week*
keep if n+3>=max_n
ds week n max*, not
foreach var in `r(varlist)'{
egen sd_`var'=sd(`var')
}
rename max_n maximum_n
ds max_* sd* week, not
foreach var in `r(varlist)'{
drop `var'
}
drawnorm product1, m(max_product1) sd(sd_product1)
On Wed, May 18, 2011 at 1:51 PM, Sarah Edgington <sedging@ucla.edu> wrote:
> Lance,
> That's a different problem. From your original post I assumed you had
> all the variables already created.
> One strategy for writing loops is to write out the code for the first
> two examples of something repetitive you want to do. Then identify
> the parts of the example that remain the same across the examples.
> If you post the code your trying to repeat we may be able to help you
> but your current description is too vague for me to do much more than
> offer vague suggestions of how to think about loops.
> -Sarah
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu
> [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Lance
> Wiggains
> Sent: Wednesday, May 18, 2011 10:36 AM
> To: statalist@hsphsun2.harvard.edu
> Subject: Re: st: RE: using loop to generate distributions with
> different means and standard deviations
> I've tried that but the problem is that I'm pre-calculating the means
> and sd's for the variable because I'm only using the last 3-4
> observations for each variable to calculate those values. I'm doing
> this because I want it to reflect the changes that happen recently. My
> mean function uses tssmooth, with weights (1 1<2>), to average the
> last 3 weeks of sales. So if sales were 70,80,90, and 100 I get a
> value of 92.5 for my mean. It also calculates a SD for the last 3-4
> observations. Then I want to plug those numbers into the drawnorm function
using a loop. Any idea about how that would work?
> Lance
> On Wed, May 18, 2011 at 1:16 PM, Sarah Edgington <sedging@ucla.edu> wrote:
>> Lance,
>> Try something like this:
>> forv i=1/3 {
>> drawnorm name`i', m(mean_var`i') sd(sd_var`i')
>> }
>> You'll run into problems, though, if your data actually includes the
>> variable names you list since there isn't a sd_var1.
>> -Sarah
>> -----Original Message-----
>> From: owner-statalist@hsphsun2.harvard.edu
>> [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Lance
>> Wiggains
>> Sent: Wednesday, May 18, 2011 10:09 AM
>> To: statalist@hsphsun2.harvard.edu
>> Subject: st: using loop to generate distributions with different
>> means and standard deviations
>> Statalist members,
>> I'm trying to get Stata to generate a distribution of data from
>> variables in my data set.
>> My appended data looks like this
>> mean_var1=90
>> standard_deviation_var1=5
>> mean_var2=100
>> sd_var2=10
>> mean var3=110
>> sd_var3=15
>> and so on
>> I'm need a loop that will take my variables and create the
>> distributions for me.
>> I've been using the drawnorm command
>> drawnorm name1, m(mean_var1) sd(sd_var1) but I can't get it to
>> recognize more than 1 variable at a time
>> I want it to perform the distribution command for each pair of my
> variables.
>> I.e. (m_var1, sd_var1), (m_var2, sd_var2) , (m_var3, sd_var3)
>> Thanks for your consideration,
>> Lance
[Tutor] Floating point exercise 3 from Learn python the hard way
Sander Sweers sander.sweers at gmail.com
Tue Jun 14 16:55:10 CEST 2011
On 14 June 2011 15:20, amt <0101amt at gmail.com> wrote:
>>> But I can't understand at line 2 and 3. I mean it makes no difference
>>> for me. Saying 30 or 30.0 is the same thing.
>>> As well as saying 97 or 97.0.
>> Precisely, thats why I asked the question.
> As a beginner at line 2 and 3 I see no point of using floating numbers
> except the fact to serve as an example that if I have a floating point
> number it affects the rest of the expression evaluation.
Correct, 2 and 3 do not need floating point arithmetic.
> So, should I keep them or not?It's unclear to me and sadly the author of the
> book doesn't provide the solutions to the exercises. The only way I can
> verify myself is using this list.
If you tell Python to divide 2 integers (whole numbers) by default it
will use integer arithmetic. So in your calculation on line 5 you get
a result of 7 as Python does not switch to floating point arithmetic.
You can ask Python to use floating point arithmetic a couple of ways.
For example if you want 2 divided by 3 you can tell Python to use
floating point arithmetic by:
>>> float(2) / 3
>>> float(20) / 7
>>> 2.0 / 3
>>> 20.0 / 7
What happens here is that Python on one side of the division has a
floating point number. This will force it to use floating point
arithmetic instead of integer. Generally the second form is used and
the first is for demonstration purpose only.
>>> 2 / 3
>>> 20 / 7
Here we have 2 integers and therefore Python uses integer arithmetic
and gives 0 and 2 as result. This is because the results of the
division can not be stored as an integer so everything after the
decimal place gets cut off.
In Python 3 this has been changed so that by default division between
integers does floating point (true) division. You can also get this in
Python 2 by importing division from __future__.
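For example (an added illustration, not from the original message; Python 2
syntax, since that is what the thread is about):

from __future__ import division  # a __future__ import must appear near the top of the module

print 20 / 7    # true division now: roughly 2.857
print 20 // 7   # the floor-division operator still gives 2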
The Python tutorial [1] on floating point arithmetic will give you
much more info on floating point arithmetic in Python.
[1] http://docs.python.org/tutorial/floatingpoint.html
Integration by parts and with trig
Integration by parts and with trig
I have two problems I don't really know how to work through.
The first is the integral from 0 to 1 of r^3/(sqrt(4+r^2))
I started by integrating by parts and eventually ended up with (4+r^2)^(7/2) so I know I'm doing something wrong.
The other problem is: find the integral of cosPIx * cos4PIx
I don't know where to begin here. Any help would be appreciated.
For the first one you don't need to integrate by parts. Simply sub $u = 4 + r^2.$
For the second one, you can either integrate by parts twice and treat the integral as the unknown or you can use the product to sum identity:
$\cos{a}\cos{b} = \frac{1}{2}(\cos{(a-b)} + \cos{(a+b)})$
$\int \frac{r^3}{\sqrt{4 + r^2}}\,dr$
define: $4 + r^2 = x^2 \Rightarrow r\,dr = x\,dx$
Now I will do something that may look wrong, but it helps when I substitute for dx:
$\int \frac{r^3}{x} \cdot \frac{x}{r}\,dx = \int r^2\,dx$
but $r^2 = x^2 - 4$, then
$\int (x^2 - 4)\,dx = \frac{x^3}{3} - 4x + C$
now restoring the original variable we obtain the result
In this case you could integrate by parts, but with the identity given by Chop Suey the solution is direct
Link's method is not double substitution and it works.
If you use Chops' method,
$u = 4 + r^2$ yields
$du = 2r \, dr$.
Note that $\int \frac{r^3}{\sqrt{4 + r^2}} dr = \frac{1}{2} \int \frac{2r^3 \, dr}{\sqrt{4 + r^2}} = \frac{1}{2} \int \frac{r^2 \cdot 2r \, dr}{\sqrt{4 + r^2}}$, so you can substitute du for 2r
dr and u - 4 for r^2:
$\frac{1}{2} \int \frac{u - 4}{\sqrt{u}}du$
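(For reference, completing the evaluation, which was not in the original post: $\frac{1}{2} \int \frac{u - 4}{\sqrt{u}}\,du = \frac{1}{2}\int (u^{1/2} - 4u^{-1/2})\,du = \frac{u^{3/2}}{3} - 4\sqrt{u} + C$, and since $u = 4 + r^2$ runs from $4$ to $5$ as $r$ goes from $0$ to $1$, the definite integral equals $\frac{16 - 7\sqrt{5}}{3} \approx 0.116$.)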
For the second one, I get to the integral of 1/2((cosPIx*cos4PIx + sinPIx*sin4PIx) + (cosPIx*cos4PIx - sinPIx*sin4PIx))dx
I'm pretty sure I use substitution but I don't know what I substitute. Am I allowed to set u=cosPIx du=-PIsinPIx and putting a 4 in front of u for the cos4PIx?
Why did you expand the cosines using the addition identity? You're complicating things. Just integrate $\cos{(-3\pi x)} = \cos{(3\pi x)}$ as you would integrate a cos(x).
If the negative sign inside is throwing you off, recall that Cosine is an even function:
$\cos{(-x)} = \cos{(x)}$
There's no negative in the cos. I just don't know if I'm "allowed" to take the 4 out of cos(4PIx) and put it in front of the cos. [4cosPIx]
No, you can't do that.
$\cos{(\pi x)}\cos{(4\pi x)}$
$= \frac{1}{2}(\cos{(\pi x - 4 \pi x)} + \cos{(\pi x + 4 \pi x)})$
$= \frac{1}{2}(\cos{(-3 \pi x)} + \cos{(5 \pi x)})$
$= \frac{1}{2}(\cos{(3 \pi x)}) + \frac{1}{2}(\cos{(5 \pi x)})$
Just integrate it directly!
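(For reference, not part of the original reply: integrating the last line term by term gives $\int \cos{(\pi x)}\cos{(4\pi x)}\,dx = \frac{\sin{(3\pi x)}}{6\pi} + \frac{\sin{(5\pi x)}}{10\pi} + C$.)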
no: whenever you have this kind of question, test it by substituting numbers
in fact:
$\cos \left( {4\pi x} \right) = \cos \left( {2\pi x + 2\pi x} \right) = \cos ^2 \left( {2\pi x} \right) - \sin ^2 \left( {2\pi x} \right)$
Oh. I'm so sorry I read the question wrong. Thank you all very much for the help. I greatly appreciate it.
SQL for the Balanced Number Partitioning Problem
I noticed a post on AskTom recently that referred to an SQL solution to a version of the so-called Bin Fitting problem, where even distribution is the aim. The solution, How do I solve a Bin Fitting
problem in an SQL statement?, uses Oracle's Model clause, and, as the poster of the link observed, has the drawback that the number of bins is embedded in the query structure. I thought it might be
interesting to find solutions without that drawback, so that the number of bins could be passed to the query as a bind variable. I came up with three solutions using different techniques, starting
An interesting article in American Scientist, The Easiest Hard Problem, notes that the problem is NP-complete, or certifiably hard, but that simple greedy heuristics often produce a good solution,
including one used by schoolboys to pick football teams. The article uses the more descriptive term for the problem of balanced number partitioning, and notes some practical applications. The Model
clause solution implements a multiple-bin version of the main Greedy Algorithm, while my non-Model SQL solutions implement variants of it that allow other techniques to be used, one of which is very
simple and fast: this implements the team picking heuristic for multiple teams.
Another poster, Stew Ashton, suggested a simple change to my Model solution that improved performance, and I use this modified version here. He also suggested that using PL/SQL might be faster, and I
have added my own simple PL/SQL implementation of the Greedy Algorithm, as well as a second version of the recursive subquery factoring solution that performs better than the first.
This article explains the solutions, considers two simple examples to illustrate them, and reports on performance testing across dimensions of number of items and number of bins. These show that the
solutions exhibit either linear or quadratic variation in execution time with number of items, and some methods are sensitive to the number of bins while others are not.
After I had posted my solutions on the AskTom thread, I came across a thread on OTN, need help to resolve this issue, that requested a solution to a form of bin fitting problem where the bins have
fixed capacity and the number of bins required must be determined. I realised that my solutions could easily be extended to add that feature, and posted extended versions of two of the solutions
there. I have added a section here for this.
Updated, 5 June 2013: added Model and RSF diagrams
Greedy Algorithm Variants
Say there are N bins and M items.
Greedy Algorithm (GDY)
Set bin sizes zero
Loop over items in descending order of size
• Add item to current smallest bin
• Calculate new bin size
End Loop
Greedy Algorithm with Batched Rebalancing (GBR)
Set bin sizes zero
Loop over items in descending order of size in batches of N items
• Assign batch to N bins, with bins in ascending order of size
• Calculate new bin sizes
End Loop
Greedy Algorithm with No Rebalancing - or, Team Picking Algorithm (TPA)
Assign items to bins cyclically by bin sequence in descending order of item size
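To make the pseudocode above concrete, here is a minimal sketch of the three variants in Python (an illustration added here, not part of the original article, whose own implementations below are in SQL and PL/SQL; the function names simply follow the acronyms):

def gdy(items, n_bins):
    # Greedy: each item, largest first, goes to the currently smallest bin.
    bins = [0] * n_bins
    for v in sorted(items, reverse=True):
        b = min(range(n_bins), key=lambda i: bins[i])
        bins[b] += v
    return bins

def gbr(items, n_bins):
    # Batched rebalancing: take N items at a time and pair the largest
    # remaining items with the currently smallest bins.
    bins = [0] * n_bins
    desc = sorted(items, reverse=True)
    for start in range(0, len(desc), n_bins):
        batch = desc[start:start + n_bins]
        order = sorted(range(n_bins), key=lambda i: bins[i])  # bins in ascending order of size
        for v, b in zip(batch, order):
            bins[b] += v
    return bins

def tpa(items, n_bins):
    # Team picking: assign cyclically in descending order, no rebalancing.
    bins = [0] * n_bins
    for k, v in enumerate(sorted(items, reverse=True)):
        bins[k % n_bins] += v
    return bins

For instance, gdy([6, 5, 4, 3], 2) returns the perfectly balanced [9, 9], while tpa([6, 5, 4, 3], 2) returns [10, 8].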
Two Examples
Example: Four Items
Here we see that the Greedy Algorithm finds the perfect solution, with no difference in bin size, but the two variants have a difference of two.
Example: Six Items
Here we see that none of the algorithms finds the perfect solution. Both the standard Greedy Algorithm and its batched variant give a difference of two, while the variant without rebalancing gives a
difference of four.
SQL Solutions
Original Model for GDY
See the link above for the SQL for the problem with three bins only.
The author has two measures for each bin and implements the GDY algorithm using CASE expressions and aggregation within the rules. The idea is to iterate over the items in descending order of size,
setting the item bin to the bin with current smallest value. I use the word 'bin' for his 'bucket'. Some notes:
• Dimension by row number, ordered by item value
• Add measures for the iteration, it, and number of iterations required, counter
• Add measures for the bin name, bucket_name, and current minimum bin value, min_tmp (only first entry used)
• Add measures for each item bin value, bucket_1-3, being the item value if it's in that bin, else zero
• Add measures for each bin running sum, pbucket_1-3, being the current value of each bin (only first two entries used)
• The current minimum bin value, bin_tmp[1] is computed as the least of the running sums
• The current item bin value is set to the item value for the bin whose value matches the minimum just computed, and null for the others
• The current bin name is set similarly to be the bin matching the minimum
• The new running sums are computed for each bin
Brendan's Generic Model for GDY
SELECT item_name, bin, item_value, Max (bin_value) OVER (PARTITION BY bin) bin_value
FROM (
SELECT * FROM items
MODEL
DIMENSION BY (Row_Number() OVER (ORDER BY item_value DESC) rn)
MEASURES (item_name,
Row_Number() OVER (ORDER BY item_value DESC) bin,
item_value bin_value,
Row_Number() OVER (ORDER BY item_value DESC) rn_m,
0 min_bin,
Count(*) OVER () - :N_BINS - 1 n_iters
)
RULES ITERATE(100000) UNTIL (ITERATION_NUMBER >= n_iters[1]) (
min_bin[1] = Min(rn_m) KEEP (DENSE_RANK FIRST ORDER BY bin_value)[rn <= :N_BINS],
bin[ITERATION_NUMBER + :N_BINS + 1] = min_bin[1],
bin_value[min_bin[1]] = bin_value[CV()] + Nvl (item_value[ITERATION_NUMBER + :N_BINS + 1], 0)
)
)
WHERE item_name IS NOT NULL
ORDER BY item_value DESC
My Model solution works for any number of bins, passing the number of bins as a bind variable. The key idea here is to use values in the first N rows of a generic bin value measure to store all the
running bin values, rather than as individual measures. I have included two modifications suggested by Stew in the AskTom thread.
• Dimension by row number, ordered by item value
• Initialise a bin measure to the row number (the first N items will remain fixed)
• Initialise a bin value measure to item value (only first N entries used)
• Add the row number as a measure, rn_m, in addition to a dimension, for referencing purposes
• Add a min_bin measure for current minimum bin index (first entry only)
• Add a measure for the number of iterations required, n_iters
• The first N items are correctly binned in the measure initialisation
• Set the minimum bin index using analytic Min function with KEEP clause over the first N rows of bin value
• Set the bin for the current item to this index
• Update the bin value for the corresponding bin only
Recursive Subquery Factor for GBR
WITH bins AS (
SELECT LEVEL bin, :N_BINS n_bins FROM DUAL CONNECT BY LEVEL <= :N_BINS
), items_desc AS (
SELECT item_name, item_value, Row_Number () OVER (ORDER BY item_value DESC) rn
FROM items
), rsf (bin, item_name, item_value, bin_value, lev, bin_rank, n_bins) AS (
SELECT b.bin,
i.item_name,
i.item_value,
i.item_value,
1,
b.n_bins - i.rn + 1,
b.n_bins
FROM bins b
JOIN items_desc i
ON i.rn = b.bin
UNION ALL
SELECT r.bin,
i.item_name,
i.item_value,
r.bin_value + i.item_value,
r.lev + 1,
Row_Number () OVER (ORDER BY r.bin_value + i.item_value),
r.n_bins
FROM rsf r
JOIN items_desc i
ON i.rn = r.bin_rank + r.lev * r.n_bins
)
SELECT r.item_name,
r.bin, r.item_value, r.bin_value
FROM rsf r
ORDER BY item_value DESC
The idea here is to use recursive subquery factors to iterate through the items in batches of N items, assigning each item to a bin according to the rank of the bin on the previous iteration.
• Initial subquery factors form record sets for the bins and for the items with their ranks in descending order of value
• The anchor branch assign bins to the first N items, assigning the item values to a bin value field, and setting the bin rank in ascending order of this bin value
• The recursive branch joins the batch of items to the record in the previous batch whose bin rank matches that of the item in the reverse sense (so largest item goes to smallest bin etc.)
• The analytic Row_Number function computes the updated bin ranks, and the bin values are updated by simple addition
Recursive Subquery Factor for GBR with Temporary Table
Create Table and Index
DROP TABLE items_desc_temp
CREATE GLOBAL TEMPORARY TABLE items_desc_temp (
item_name VARCHAR2(30) NOT NULL,
item_value NUMBER(8) NOT NULL,
rn NUMBER
)
CREATE INDEX items_desc_temp_N1 ON items_desc_temp (rn)
Insert into Temporary Table
INSERT INTO items_desc_temp
SELECT item_name, item_value, Row_Number () OVER (ORDER BY item_value DESC) rn
FROM items;
RSF Query with Temporary Table
WITH bins AS (
SELECT LEVEL bin, :N_BINS n_bins FROM DUAL CONNECT BY LEVEL <= :N_BINS
), rsf (bin, item_name, item_value, bin_value, lev, bin_rank, n_bins) AS (
SELECT b.bin,
i.item_name,
i.item_value,
i.item_value,
1,
b.n_bins - i.rn + 1,
b.n_bins
FROM bins b
JOIN items_desc_temp i
ON i.rn = b.bin
UNION ALL
SELECT r.bin,
i.item_name,
i.item_value,
r.bin_value + i.item_value,
r.lev + 1,
Row_Number () OVER (ORDER BY r.bin_value + i.item_value),
r.n_bins
FROM rsf r
JOIN items_desc_temp i
ON i.rn = r.bin_rank + r.lev * r.n_bins
)
SELECT item_name, bin, item_value, bin_value
FROM rsf
ORDER BY item_value DESC
The idea here is that in the initial RSF query a subquery factor of items was joined on a calculated field, so the whole record set had to be read, and performance could be improved by putting that
initial record set into an indexed temporary table ahead of the main query. We'll see in the performance testing section that this changes quadratic variation with problem size into linear variation.
Plain Old SQL Solution for TPA
WITH items_desc AS (
SELECT item_name, item_value,
Mod (Row_Number () OVER (ORDER BY item_value DESC), :N_BINS) + 1 bin
FROM items
)
SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_total
FROM items_desc
ORDER BY item_value DESC
The idea here is that the TPA algorithm can be implemented in simple SQL using analytic functions.
• The subquery factor assigns the bins by taking the item rank in descending order of value and applying the modulo (N) function
• The main query returns the bin totals in addition by analytic summing by bin
Pipelined Function for GDY
CREATE OR REPLACE PACKAGE Bin_Fit AS
TYPE bin_fit_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, bin NUMBER);
TYPE bin_fit_list_type IS VARRAY(1000) OF bin_fit_rec_type;
TYPE bin_fit_cur_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER);
TYPE bin_fit_cur_type IS REF CURSOR RETURN bin_fit_cur_rec_type;
FUNCTION Items_Binned (p_items_cur bin_fit_cur_type, p_n_bins PLS_INTEGER) RETURN bin_fit_list_type PIPELINED;
END Bin_Fit;
CREATE OR REPLACE PACKAGE BODY Bin_Fit AS
c_big_value CONSTANT NUMBER := 100000000;
TYPE bin_fit_cur_list_type IS VARRAY(100) OF bin_fit_cur_rec_type;
FUNCTION Items_Binned (p_items_cur bin_fit_cur_type, p_n_bins PLS_INTEGER) RETURN bin_fit_list_type PIPELINED IS
l_min_bin PLS_INTEGER := 1;
l_min_bin_val NUMBER;
l_bins SYS.ODCINumberList := SYS.ODCINumberList();
l_bin_fit_cur_rec bin_fit_cur_rec_type;
l_bin_fit_rec bin_fit_rec_type;
l_bin_fit_cur_list bin_fit_cur_list_type;
BEGIN
l_bins.Extend (p_n_bins);
FOR i IN 1..p_n_bins LOOP
l_bins(i) := 0;
END LOOP;
LOOP
FETCH p_items_cur BULK COLLECT INTO l_bin_fit_cur_list LIMIT 100;
EXIT WHEN l_bin_fit_cur_list.COUNT = 0;
FOR j IN 1..l_bin_fit_cur_list.COUNT LOOP
l_bin_fit_rec.item_name := l_bin_fit_cur_list(j).item_name;
l_bin_fit_rec.item_value := l_bin_fit_cur_list(j).item_value;
l_bin_fit_rec.bin := l_min_bin;
PIPE ROW (l_bin_fit_rec);
l_bins(l_min_bin) := l_bins(l_min_bin) + l_bin_fit_cur_list(j).item_value;
l_min_bin_val := c_big_value;
FOR i IN 1..p_n_bins LOOP
IF l_bins(i) < l_min_bin_val THEN
l_min_bin := i;
l_min_bin_val := l_bins(i);
END IF;
END LOOP;
END LOOP;
END LOOP;
RETURN;
END Items_Binned;
END Bin_Fit;
SQL Query
SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_value
FROM TABLE (Bin_Fit.Items_Binned (
CURSOR (SELECT item_name, item_value FROM items ORDER BY item_value DESC),
:N_BINS))
ORDER BY item_value DESC
The idea here is that procedural algorithms can often be implemented more efficiently in PL/SQL than in SQL.
• The first parameter to the function is a strongly-typed reference cursor
• The SQL call passes in a SELECT statement wrapped in the CURSOR keyword, so the function can be used for any set of records that returns name and numeric value pairs
• The item records are fetched in batches of 100 using the LIMIT clause to improve efficiency
Performance Testing
I tested performance of the various queries using my own benchmarking framework across grids of data points, with two data sets to split the queries into two sets based on performance.
Query Modifications for Performance Testing
• The RSF query with staging table was run within a pipelined function in order to easily include the insert in the timings
• A system context was used to pass the bind variables as the framework runs the queries from PL/SQL, not from SQL*Plus
• I found that calculating the bin values using analytic sums, as in the code above, affected performance, so I removed this for clarity of results, outputting only item name, value and bin
Test Data Sets
For a given depth parameter, d, random numbers were inserted within the range 0-d for d-1 records. The insert was:
INSERT INTO items
SELECT 'item-' || n, DBMS_Random.Value (0, p_point_deep) FROM
(SELECT LEVEL n FROM DUAL CONNECT BY LEVEL < p_point_deep);
The number of bins was passed as a width parameter, but note that the original, linked Model solution, MODO, hard-codes the number of bins to 3.
Test Results
Data Set 1 - Small
This was used for the following queries:
• MODO - Original Model for GDY
• MODB - Brendan's Generic Model for GDY
• RSFQ - Recursive Subquery Factor for GBR
Depth W3 W3 W3
Run Type=MODO
D1000 1.03 1.77 1.05
D2000 3.98 6.46 5.38
D4000 15.79 20.7 25.58
D8000 63.18 88.75 92.27
D16000 364.2 347.74 351.99
Run Type=MODB
Depth W3 W6 W12
D1000 .27 .42 .27
D2000 1 1.58 1.59
D4000 3.86 3.8 6.19
D8000 23.26 24.57 17.19
D16000 82.29 92.04 96.02
Run Type=RSFQ
D1000 3.24 3.17 1.53
D2000 8.58 9.68 8.02
D4000 25.65 24.07 23.17
D8000 111.3 108.25 98.33
D16000 471.17 407.65 399.99
• Quadratic variation of CPU time with number of items
• Little variation of CPU time with number of bins, although RSFQ seems to show some decline
• RSFQ is slightly slower than MODO, while my version of Model, MODB is about 4 times faster than MODO
Data Set 2 - Large
This was used for the following queries:
• RSFT - Recursive Subquery Factor for GBR with Temporary Table
• POSS - Plain Old SQL Solution for TPA
• PLFN - Pipelined Function for GDY
This table gives the CPU times in seconds across the data set:
Depth W100 W1000 W10000
Run Type=PLFN
D20000 .31 1.92 19.25
D40000 .65 3.87 55.78
D80000 1.28 7.72 92.83
D160000 2.67 16.59 214.96
D320000 5.29 38.68 418.7
D640000 11.61 84.57 823.9
Run Type=POSS
D20000 .09 .13 .13
D40000 .18 .21 .18
D80000 .27 .36 .6
D160000 .74 1.07 .83
D320000 1.36 1.58 1.58
D640000 3.13 3.97 4.04
Run Type=RSFT
D20000 .78 .78 .84
D40000 1.41 1.54 1.7
D80000 3.02 3.39 4.88
D160000 6.11 9.56 8.42
D320000 13.05 18.93 20.84
D640000 41.62 40.98 41.09
• Linear variation of CPU time with number of items
• Little variation of CPU time with number of bins for POSS and RSFT, but roughly linear variation for PLFN
• These linear methods are much faster than the earlier quadratic ones for larger numbers of items
• Its approximate proportionality of time to number of bins means that, while PLFN is faster than RSFT for small number of bins, it becomes slower from around 50 bins for our problem
• The proportionality to number of bins for PLFN presumably arises from the step to find the bin of minimum value
• The lack of proportionality to number of bins for RSFT may appear surprising since it performs a sort of the bins iteratively: However, while the work for this sort is likely to be proportional
to the number of bins, the number of iterations is inversely proportional and thus cancels out the variation
Solution Quality
The methods reported above implement three underlying algorithms, none of which guarantees an optimal solution. In order to get an idea of how the quality compares, I created new versions of the
second set of queries using analytic functions to output the difference between minimum and maximum bin values, with percentage of the maximum also output. I ran these on the same grid, and report
below the results for the four corners.
Method: PLFN RSFT POSS
Point: W100/D20000
Diff/%: 72/.004% 72/.004% 19,825/1%
Point: W100/D640000
Diff/%: 60/.000003% 60/.000003% 633499/.03%
Point: W10000/D20000
Diff/%: 189/.9% 180/.9% 19,995/67%
Point: W10000/D640000
Diff/%: 695/.003% 695/.003% 639,933/3%
The results indicate that GDY (Greedy Algorithm) and GBR (Greedy Algorithm with Batched Rebalancing) generally give very similar quality results, while TPA (Team Picking Algorithm) tends to be quite
a lot worse.
Extended Problem: Finding the Number of Bins Required
An important extension to the problem is when the bins have fixed capacity, and it is desired to find the minimum number of bins, then spread the items evenly between them. As mentioned at the start,
I posted extensions to two of my solutions on an OTN thread, and I reproduce them here. It turns out to be quite easy to make the extension. The remainder of this section is just lifted from my OTN
post and refers to the table of the original poster.
Start OTN Extract
So how do we determine the number of bins? The total quantity divided by bin capacity, rounded up, gives a lower bound on the number of bins needed. The actual number required may be larger, but
mostly it will be within a very small range from the lower bound, I believe (I suspect it will nearly always be the lower bound). A good practical solution, therefore, would be to compute the
solutions for a base number, plus one or more increments, and this can be done with negligible extra work (although Model might be an exception, I haven't tried it). Then the bin totals can be
computed, and the first solution that meets the constraints can be used. I took two bin sets here.
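As a rough illustration (the capacity bind value is not shown in the post, so the numbers here are assumed): if the item quantities summed to 3,509 and the bin capacity :MAX_QTY were 1,000, the lower bound would be Ceil(3509/1000) = 4 bins, which matches the four bins of the first bin set in the output below; the second bin set then uses 5.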
WITH items AS (
SELECT sl_pm_code item_name, sl_wt item_amt, sl_qty item_qty,
Ceil (Sum(sl_qty) OVER () / :MAX_QTY) n_bins
FROM ow_ship_det
), items_desc AS (
SELECT item_name, item_amt, item_qty, n_bins,
Mod (Row_Number () OVER (ORDER BY item_qty DESC), n_bins) bin_1,
Mod (Row_Number () OVER (ORDER BY item_qty DESC), n_bins + 1) bin_2
FROM items
)
SELECT item_name, item_amt, item_qty,
CASE bin_1 WHEN 0 THEN n_bins ELSE bin_1 END bin_1,
CASE bin_2 WHEN 0 THEN n_bins + 1 ELSE bin_2 END bin_2,
Sum (item_amt) OVER (PARTITION BY bin_1) bin_1_amt,
Sum (item_qty) OVER (PARTITION BY bin_1) bin_1_qty,
Sum (item_amt) OVER (PARTITION BY bin_2) bin_2_amt,
Sum (item_qty) OVER (PARTITION BY bin_2) bin_2_qty
FROM items_desc
ORDER BY item_qty DESC, bin_1, bin_2
SQL Pipelined
SELECT osd.sl_pm_code item_name, osd.sl_wt item_amt, osd.sl_qty item_qty,
tab.bin_1, tab.bin_2,
Sum (osd.sl_wt) OVER (PARTITION BY tab.bin_1) bin_1_amt,
Sum (osd.sl_qty) OVER (PARTITION BY tab.bin_1) bin_1_qty,
Sum (osd.sl_wt) OVER (PARTITION BY tab.bin_2) bin_2_amt,
Sum (osd.sl_qty) OVER (PARTITION BY tab.bin_2) bin_2_qty
FROM ow_ship_det osd
JOIN TABLE (Bin_Even.Items_Binned (
CURSOR (SELECT sl_pm_code item_name, sl_qty item_value,
Sum(sl_qty) OVER () item_total
FROM ow_ship_det
ORDER BY sl_qty DESC, sl_wt DESC),
:MAX_QTY)) tab
ON tab.item_name = osd.sl_pm_code
ORDER BY osd.sl_qty DESC, tab.bin_1
Pipelined Function
CREATE OR REPLACE PACKAGE Bin_Even AS
TYPE bin_even_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, bin_1 NUMBER, bin_2 NUMBER);
TYPE bin_even_list_type IS VARRAY(1000) OF bin_even_rec_type;
TYPE bin_even_cur_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, item_total NUMBER);
TYPE bin_even_cur_type IS REF CURSOR RETURN bin_even_cur_rec_type;
FUNCTION Items_Binned (p_items_cur bin_even_cur_type, p_bin_max NUMBER) RETURN bin_even_list_type PIPELINED;
END Bin_Even;
CREATE OR REPLACE PACKAGE BODY Bin_Even AS
c_big_value CONSTANT NUMBER := 100000000;
c_n_bin_sets CONSTANT NUMBER := 2;
TYPE bin_even_cur_list_type IS VARRAY(100) OF bin_even_cur_rec_type;
TYPE num_lol_list_type IS VARRAY(100) OF SYS.ODCINumberList;
FUNCTION Items_Binned (p_items_cur bin_even_cur_type, p_bin_max NUMBER) RETURN bin_even_list_type PIPELINED IS
l_min_bin SYS.ODCINumberList := SYS.ODCINumberList (1, 1);
l_min_bin_val SYS.ODCINumberList := SYS.ODCINumberList (c_big_value, c_big_value);
l_bins num_lol_list_type := num_lol_list_type (SYS.ODCINumberList(), SYS.ODCINumberList());
l_bin_even_cur_rec bin_even_cur_rec_type;
l_bin_even_rec bin_even_rec_type;
l_bin_even_cur_list bin_even_cur_list_type;
l_n_bins PLS_INTEGER;
l_n_bins_base PLS_INTEGER;
l_is_first_fetch BOOLEAN := TRUE;
BEGIN
LOOP
FETCH p_items_cur BULK COLLECT INTO l_bin_even_cur_list LIMIT 100;
EXIT WHEN l_Bin_Even_cur_list.COUNT = 0;
IF l_is_first_fetch THEN
l_n_bins_base := Ceil (l_Bin_Even_cur_list(1).item_total / p_bin_max) - 1;
l_is_first_fetch := FALSE;
l_n_bins := l_n_bins_base;
FOR i IN 1..c_n_bin_sets LOOP
l_n_bins := l_n_bins + 1;
l_bins(i).Extend (l_n_bins);
FOR k IN 1..l_n_bins LOOP
l_bins(i)(k) := 0;
END LOOP;
END LOOP;
END IF;
FOR j IN 1..l_Bin_Even_cur_list.COUNT LOOP
l_bin_even_rec.item_name := l_bin_even_cur_list(j).item_name;
l_bin_even_rec.item_value := l_bin_even_cur_list(j).item_value;
l_bin_even_rec.bin_1 := l_min_bin(1);
l_bin_even_rec.bin_2 := l_min_bin(2);
PIPE ROW (l_bin_even_rec);
l_n_bins := l_n_bins_base;
FOR i IN 1..c_n_bin_sets LOOP
l_n_bins := l_n_bins + 1;
l_bins(i)(l_min_bin(i)) := l_bins(i)(l_min_bin(i)) + l_Bin_Even_cur_list(j).item_value;
l_min_bin_val(i) := c_big_value;
FOR k IN 1..l_n_bins LOOP
IF l_bins(i)(k) < l_min_bin_val(i) THEN
l_min_bin(i) := k;
l_min_bin_val(i) := l_bins(i)(k);
END IF;
END LOOP;
END LOOP;
END LOOP;
END LOOP;
RETURN;
END Items_Binned;
END Bin_Even;
Output POS
Note BIN_1 means bin set 1, which turns out to have 4 bins, while bin set 2 then necessarily has 5.
ITEM_NAME ITEM_AMT ITEM_QTY BIN_1 BIN_2 BIN_1_AMT BIN_1_QTY BIN_2_AMT BIN_2_QTY
--------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1239606-1080 4024 266 1 1 25562 995 17482 827
1239606-1045 1880 192 2 2 19394 886 14568 732
1239606-1044 1567 160 3 3 18115 835 14097 688
1239606-1081 2118 140 4 4 18988 793 17130 657
1239606-2094 5741 96 1 5 25562 995 18782 605
1239606-2107 80 3 4 2 18988 793 14568 732
1239606-2084 122 3 4 3 18988 793 14097 688
1239606-2110 210 2 2 3 19394 886 14097 688
1239606-4022 212 2 3 4 18115 835 17130 657
1239606-4021 212 2 4 5 18988 793 18782 605
Output Pipelined
ITEM_NAME ITEM_AMT ITEM_QTY BIN_1 BIN_2 BIN_1_AMT BIN_1_QTY BIN_2_AMT BIN_2_QTY
--------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1239606-1080 4024 266 1 1 20627 878 15805 703
1239606-1045 1880 192 2 2 18220 877 16176 703
1239606-1044 1567 160 3 3 20425 878 15651 701
1239606-1081 2118 140 4 4 22787 876 14797 701
1239606-2094 5741 96 4 5 22787 876 19630 701
1239606-2089 80 3 4 1 22787 876 15805 703
1239606-2112 141 3 4 2 22787 876 16176 703
1239606-4022 212 2 1 1 20627 878 15805 703
1239606-4021 212 2 2 1 18220 877 15805 703
1239606-2110 210 2 3 2 20425 878 16176 703
End OTN Extract
• Various solutions for the balanced number partitioning problem have been presented, using Oracle's Model clause, Recursive Subquery Factoring, Pipelined Functions and simple SQL
• The performance characteristics of these solutions have been tested across a range of data sets
• As is often the case, the best solution depends on the shape and size of the data set
• A simple extension has been shown to allow determining the number of bins required in the bin-fitting interpretation of the problem
• Replacing a WITH clause with a staging table can be a useful technique to allow indexed scans
One thought on “SQL for the Balanced Number Partitioning Problem”
1. Most interesting and enlightening stuff, Brendan, thanks for sharing all this!
Math Forum Discussions - Re: Matheology § 246
Date: Apr 13, 2013 12:47 AM
Author: Scott Berg
Subject: Re: Matheology § 246
"WM" <mueckenh@rz.fh-augsburg.de> wrote in message
On 12 Apr., 21:42, "AMiews" <inva...@invalid.com> wrote:
> "WM" <mueck...@rz.fh-augsburg.de> wrote in message
> > WM <mueck...@rz.fh-augsburg.de> wrote:
>> wrong. repeating sequences of bits in an infinitely long string indicate
>> representation as a fraction.
>Since there is no topology defined for Cantor's binary sequences,
>there is no chance to determine a limit of wmwmwmwm...
you are only complaining about the three dots or periods " ... "
indicating repeating in that fashion, so get over it...
>> >Most of them cannot be written by finite expressions. And they cannot
>> >be written as infinite expressions.
>> wrong. you seem ill at ease with infinite representations of numbers
>Have you ever seen an infinite expression? Do you think that 0.111...
>is an infinite expression?
it is short hand for one,
the "..." mean repeated, usally one uses a bar over the last repeated
numbers, but cant do that with text.
> 1/9 or 0.111... are very finite expressions
yes and you are fussing over notation convention, meaning you are unfamiliar
with math(s)
>for infinite sequences. But those sequences are not available.
why ? where did they go ? if they were infinite, they would fill up your
>every d_n of a numerical Cantor-list is the last digit of a
>terminating decimal.
>Never, do you understand, never anybody has seen or used a d_n that
>does not belong to a terminating decimal.
you seem confused by standard math notation here. With irrationals, no one
has seen the end.
>Therefore Cantor proves that the countable set of rationals is
that is what you say, but study up on common math notation first.
>Regards, WM
Inverse formula for counting marginals
I am interested in a formula relating two functions over a multiset.
I have a multiset $X$ of sets where each element in $X$ is a set $x \subseteq \{1,2,\ldots,m\}$. Now I have two "count" functions
$p_s = |\{x \in X : s = x\}|$
$\eta_s = |\{x \in X : s \subseteq x\}|$
One can expand the formula for the marginal count $\eta_s$ as
$\eta_s = \sum_{s \subseteq t,|t|\leq m} p_t$
I have confirmed for up to $m = 4$ that the following result holds
$p_s = \sum_{s \subseteq t,|t|\leq m} (-1)^{|s|-|t|}\eta_t$
Does the above result hold for arbitrary $m$? This seems like it must be related to the inclusion/exclusion principle (http://en.wikipedia.org/wiki/Inclusion-exclusion_principle) but there is a
subtle difference, in that the summation is over sets which include $s$ as a subset. Perhaps this difference is immaterial, but I don't see the argument just yet. Also, in the general problem that I
wish to solve I will have $x \subseteq \{(i,a_i)\}_{i \in I}$ where $I$ is all combinations of $\{1,\ldots,m\}$ and $a_i$ is drawn from a finite set $A:|A|=n$.
set-theory co.combinatorics combinatorial-identities
1 Answer
The answer is yes, and this is known as Moebius inversion. See Section E.1, p.286 in Graphical models, exponential families, and variational inference.
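For reference (not part of the original answer): on the lattice of subsets of $\{1,\ldots,m\}$ ordered by inclusion, the Moebius function is $\mu(s,t) = (-1)^{|t|-|s|}$ for $s \subseteq t$, so Moebius inversion of $\eta_s = \sum_{s \subseteq t} p_t$ yields exactly $p_s = \sum_{s \subseteq t} (-1)^{|t|-|s|}\,\eta_t$ for every $m$.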
Comparisons for while loop
08-31-2010, 11:35 AM
Comparisons for while loop
How would I write the following as a comparison for a while loop or if statement:
verticalVelocity equals 0 and angle.getValue() does not equal 0 or 90
08-31-2010, 11:42 AM
Try replacing each of those comparison "phrases" with the corresponding symbols and see what you come up with.
08-31-2010, 11:47 AM
I've tried but if you do a direct "translation" into Java you get:
verticalVelocity == 0 && angle.getValue() != 0 || 90
which doesn't work. The best I can come up with is:
verticalVelocity == 0 && angle.getValue() != 0 || verticalVelocity == 0 && angle.getValue() != 90
but this isn't the same as my original statement in my first post.
08-31-2010, 11:52 AM
Starting point is your first attempt. Put parens around the rest of the statement after the && and make the part after the || "complete" (i.e. add the condition you are comparing "90" to).
The second example would also work (but is "wordy") if you put parens around the two "halves" of the statement (i.e. on each side of the ||).
The thing you are not thinking of is the "grouping". You achieve that through selective use of parens.
08-31-2010, 11:58 AM
o ye I see it now. thanks
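For reference (not from the original thread): the condition described in the first post can be written as
verticalVelocity == 0 && angle.getValue() != 0 && angle.getValue() != 90
or, equivalently, verticalVelocity == 0 && !(angle.getValue() == 0 || angle.getValue() == 90). Note that joining the two != tests with || instead of && would make the angle check always true, since no value equals both 0 and 90.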
Tarau, Paul - Department of Computer Science and Engineering, University of North Texas
• Isomorphic Data Encodings and their Generalization to Hylomorphisms on
• Proceedings of CICLOPS 2009
• Declarative Combinatorics: Boolean Functions, Circuit Synthesis and BDDs in Haskell
• Pairing Functions, Boolean Evaluation and Binary Decision Diagrams
• Exact Combinational Logic Synthesis and Non-Standard Circuit Design
• A Functional Hitchhiker's Guide to Hereditarily Finite Sets, Ackermann Encodings and Pairing Functions
• Ranking and Unranking of Hereditarily Finite Functions and Permutations
• Executable Set Theory and Arithmetic Encodings in Prolog
• Logic Engines as Interactors Department of Computer Science and Engineering
• Ranking Catamorphisms and Unranking Anamorphisms on Hereditarily Finite Datatypes
• Isomorphic Data Encodings and their Generalization to Hylomorphisms on
• Fluents: a Uniform Extension of Kernel Prolog for Reflection and Interoperation with External Objects
• Mobile Threads through First Order Continuations
• Multicast Protocols for Jinni Agents Satyam Tyagi and Paul Tarau
• The Power of Partial Translation: an Experiment with the Cification of
• Virtual World Brokerage with BinProlog and D'epartement d'Informatique
• Lowlevel Issues in Implementing a HighPerformance Continuation Passing
• MultiEngine Horn Clause Prolog Paul Tarau 1
• Fluents: A Refactoring of Prolog for Uniform Re ection and Interoperation with External
• Under consideration for publication in Theory and Practice of Logic Programming 1 HighLevel Networking with Mobile Code and
• HigherOrder Programming in an ORintensive Style
• A Most Speci c Method Finding Algorithm for Re ection Based Dynamic Prolog-to-Java
• J. LOGIC PROGRAMMING 1993:12:1--199 1 LogiMOO: an Extensible MultiUser Virtual World with
• J. LOGIC PROGRAMMING 1993:12:1--199 1 ON DELPHI LEMMAS AND OTHER
• Multicast Protocols for Jinni Agents Satyam Tyagi, Paul Tarau, and Armin Mikler
• Logic Programming with Monads and Comprehensions
• Architecture of the Jinni 2000 Runtime System Department of Computer Science
• Datalog Grammars Veronica Dahl
• 1 Inference and Computation Mobility with Paul Tarau 1
• Assumption Grammars for Generating Dynamic VRML Pages
• Jinni: Intelligent Mobile Agent Programming at the Intersection of Java and Prolog
• A Novel Term Compression Scheme and Data Representation in the BinWAM
• High Level Logic Programming Tools for Remote Execution, Mobile Code and Agents
• Backtrackable State with Linear Affine Implication and Assumption Grammars
• Knowledge Intensive Conversational Agents: a Logic Programming Approach
• Assumption Grammars: Parsing as Hypothetical Reasoning 1
• 1 Inference and Computation Mobility with Paul Tarau 1
• Jinni: Intelligent Mobile Agent Programming at the Intersection of Java and Prolog
• The BinProlog Experience: Implementing a HighPerformance Continuation Passing
• A Logic Programming Based Software Architecture for Reactive Intelligent Mobile Agents
• Jinni: Intelligent Mobile Agent Programming at the Intersection of Java and Prolog
• Logic Programming and Logic Grammars with Binarization and Firstorder Continuations
• J. LOGIC PROGRAMMING 1993:12:1--199 1 HighLevel Networking with Mobile Code and
• Agent Programming Constructs and Object Oriented Logic Programming in Jinni 2003
• J. LOGIC PROGRAMMING 1993:12:1--199 1 LogiMOO: an Extensible MultiUser Virtual World with
• Language Embedding by Dual Compilation and State Mirroring
• Partial Translation: Towards a Portable and Efficient Prolog Implementation Technology
• Fluents: A Refactoring of Prolog for Uniform Reflection and Interoperation with External
• Under consideration for publication in Theory and Practice of Logic Programming 1 High-Level Networking with Mobile Code and
• A Functional Hitchhiker's Guide to Hereditarily Finite Sets, Ackermann Encodings and Pairing Functions
• Integrated Symbol, Engine Table and Heap Memory Management in Multi-Engine Prolog
• Computing with Hereditarily Finite
• A Most Specific Method Finding Algorithm for Reflection Based Dynamic Prolog-to-Java
• Assumptive Logic Programming Veronica Dahl
• Concurrent Programming Constructs in Multi-Engine Prolog Parallelism just for the cores (and not more!)
• Emulating Primality with Multiset Representations of Natural Numbers
• Integrated Symbol Table, Engine and Heap Memory Management in Multi-Engine Prolog
• Circuit Morphing: Declarative Modeling of Reconfigurable Combinational Logic
• Fluents: A Refactoring of Prolog for Uniform Reflection and Interoperation with External
• Declarative Combinatorics: Boolean Functions, Circuit Synthesis and BDDs in Haskell
• Ranking and Unranking of Hereditarily Finite Functions and Permutations
• Jinni: Intelligent Mobile Agent Programming at the Intersection of Java and Prolog
• Proceedings of CICLOPS 2009
• Shared Axiomatizations and Virtual Datatypes Department of Computer Science and Engineering
• Isomorphic Data Encodings and their Generalization to Hylomorphisms on
• Executable Set Theory and Arithmetic Encodings in Prolog
• Coordination and Concurrency in Multi-Engine Department of Computer Science and Engineering
• Isomorphic Data Encodings and their Generalization to Hylomorphisms on
• Ranking Catamorphisms and Unranking Anamorphisms on Hereditarily Finite Datatypes
• Declarative Modeling of Finite Mathematics Department of Computer Science and Engineering
• Pairing Functions, Boolean Evaluation and Binary Decision Diagrams
• Under consideration for publication in Theory and Practice of Logic Programming 1 Declarative Combinatorics: Exact Combinational
• Logic Engines as Interactors Department of Computer Science and Engineering
• ZU064-05-FPR paper 3 September 2009 20:13 Under consideration for publication in J. Functional Programming 1
• An Embedded Declarative Data Transformation Language Department of Computer Science and Engineering
• Bijective Godel Numberings for Term Algebras Department of Computer Science and Engineering
• Object Oriented Logic Programming as an Agent Building Infrastructure
• Exact Combinational Logic Synthesis and Non-Standard Circuit Design
• Computing with Hereditarily Finite
• Interoperating Logic Engines Paul Tarau1
• Under consideration for publication in Theory and Practice of Logic Programming 1 The BinProlog Experience: Architecture and
• Knowledge Based Conversational Agents and Virtual Storytelling
• Agent Mobility with Weak Local Inheritance and Transactional Remote Logic Invocation
• Ranking and Unranking Functions for Ordered Decision Trees with Applications to Circuit
• Architecture and Implementation Aspects of the Lean Prolog System
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/49/099.html","timestamp":"2014-04-18T13:59:17Z","content_type":null,"content_length":"20824","record_id":"<urn:uuid:9f670076-5f6c-43f3-9e3e-e4acf4043a51>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Things I've Tagged (weekly)
Saturday, March 3, 2012
Posted from Diigo. The rest of my favorite links are here.
2 comments:
Lsquared said...
I love the factoring game. Where did you find it? I'd love to give credit to the right people if/when I share it.
KFouss said...
I find a lot of the links I tag from googling random things. :) I just tried to backtrack and find a source for the factoring game (other than the link) but can't.
Looks like a cute idea though, huh!
|
{"url":"http://myweb20journey.blogspot.com/2012/03/things-i-tagged-weekly.html","timestamp":"2014-04-18T16:47:58Z","content_type":null,"content_length":"117728","record_id":"<urn:uuid:6c5958a1-e2a8-40ce-bc97-a94c33a9724b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: How do you TraditionalForm inside your TextCell?
Date: Dec 9, 2012 11:30 PM
Author: Murray Eisenberg
Subject: Re: How do you TraditionalForm inside your TextCell?
Within a Text cell, type Ctrl ( to begin the in-line mode, then type the equation or whatever you want, and finally type Ctrl ) to end the in-line mode.
On Dec 9, 2012, at 10:58 AM, sabio@isabio.com wrote:
> I used to be able to transform a portion of my text cell into TraditionalForm by highlighting it then pressing control-shift-T. Now it does not work.
> Generating an output using TraditionalForm, then copying it into my text cell does not work either.
> In fact, I don't know of any way to signal that I am going to start typing an equation. The only way I figured out is by using the Palette and falsely starting with the exponent template, then not entering the exponent.
> Did anyone figure out how to insert equations inside a text cell?
> James Choi
Murray Eisenberg
Mathematics & Statistics Dept.
Lederle Graduate Research Tower phone 413 549-1020 (H)
University of Massachusetts 413 545-2838 (W)
710 North Pleasant Street fax 413 545-1801
Amherst, MA 01003-9305
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7934723","timestamp":"2014-04-16T22:01:29Z","content_type":null,"content_length":"2283","record_id":"<urn:uuid:d106c5c4-dd7b-46f6-9107-230173c24514>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The arXiv has enabled MathJax!
By Christian Perfect. Posted in News
A bit of slightly overdue but welcome news: the arXiv has enabled MathJax on paper abstract pages. Authors have regularly been using LaTeX syntax in their titles and abstracts, but now the arXiv
typesets them automatically for you.
There’s some explanation of how it works and what it means on a new help page. I’ve picked out at random a few examples of abstracts containing LaTeX for you to look at:
It shouldn’t come as a big surprise that many authors use custom LaTeX macros in their abstracts, which cause the typesetting to fail. But maybe people will be more careful to use standard LaTeX now
that the paper submission page gives you a preview of the typeset maths.
Grouches, grumps, and good-for-nothings can disable MathJax, if their cold hearts tell them they must, by clicking a link on the arXiv’s MathJax help page.
MathJax on the arXiv help page
via David Roberts on Google+
|
{"url":"http://aperiodical.com/2013/10/the-arxiv-has-enabled-mathjax/","timestamp":"2014-04-20T05:42:14Z","content_type":null,"content_length":"40506","record_id":"<urn:uuid:bc56655d-5842-4e38-a747-5b6e50a02702>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Olney, MD Precalculus Tutor
Find an Olney, MD Precalculus Tutor
...My interest in tutoring begins with a deep love for the subject matter, which means that for me there's no substitute for actually understanding it: getting the right answer isn't nearly as
important as being able to explain why it's right. As a tutor, my main job isn't to talk, but to listen: I...
18 Subjects: including precalculus, writing, calculus, geometry
...To this day, I speak, read and write in Russian fluently, which is due in large part to the fact that my family speaks Russian at home and that we have a large collection of Russian books. I
have taken a large number of drawing and painting classes from Montgomery College. I am intimately famil...
27 Subjects: including precalculus, chemistry, English, reading
...I truly believe that math can be fun and easy if it's broken down for you in a way that you can comprehend it. If you just need a tune-up, we can skip straight to the nuts and bolts. I'm very
passionate about helping students succeed in math, because I believe it is the foundation of having a good set of life skills in general.
15 Subjects: including precalculus, chemistry, calculus, geometry
...While working on my Molecular Biology BS from Johns Hopkins University, I tutored college students on Math (including Calculus) and Science (including Chemistry). I have worked with individual
students and small groups. I like to get feedback from my students often in order to improve their expe...
40 Subjects: including precalculus, English, reading, chemistry
...I also love playing chess and have taught many young people the game. By starting out with just a few pieces at a time, new players are not overwhelmed by the complexity of the game.I love
teaching chess to new players. By only introducing one piece at a time, newcomers to the game are never overwhelmed by the complexity of the game.
14 Subjects: including precalculus, calculus, geometry, ASVAB
|
{"url":"http://www.purplemath.com/Olney_MD_Precalculus_tutors.php","timestamp":"2014-04-16T05:03:46Z","content_type":null,"content_length":"24246","record_id":"<urn:uuid:d5144571-1daf-4f69-bcaf-f3699a30c8d9>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Clifton, NJ Trigonometry Tutor
Find a Clifton, NJ Trigonometry Tutor
...I am a results-oriented tutor, and I will do my best to help you in math.Algebra I is a very critical subject. A good solid foundation in Algebra I will make Algebra II and Algebra III
(precalculus) a breeze. It is important that you have a very good understanding of this subject...otherwise you will be miserable as you take more advanced math classes.
11 Subjects: including trigonometry, calculus, algebra 1, geometry
...I will also show you how to keep your work structured, which will help you understand it. I have taught classroom and computer based algebra courses at the university level and I have tutored
many algebra students at the high school and university level with great success. Calculus is the most difficult math most students will ever study - unless I tutor you.
23 Subjects: including trigonometry, calculus, geometry, ASVAB
...Tutoring Subjects: All levels of Mathematics including, but not limited to, Trigonometry, Algebra, AP Calculus AB and BC, Calculus Honors, Pre-calculus Honors, and Advanced Mathematics. For the
last 10 years I have been teaching AP Calculus AB and BC, Advanced Pre-calculus Honors and Calculus...
8 Subjects: including trigonometry, calculus, geometry, algebra 1
...My chess rating is 1370 in the Elo rating system, which is by no means a professional chess rating. I am qualified to teach an introduction to chess only. Discrete Math is a collection of
various other Math subjects including Logic, Combinatorics, Graph Theory, Algorithms, and more.
32 Subjects: including trigonometry, calculus, physics, geometry
...If that's not enough to convince you that I'm an awesome choice of tutor, how about this: I'm a senior graduate student working on my PhD at New York University's School of Medicine. Pretty
soon, I'm gonna be a doctor! (No, not THAT type of doctor. Please don't send me pictures of your aunt's moles.)Alright, if you're still reading, then you must be interested!
17 Subjects: including trigonometry, calculus, geometry, biology
|
{"url":"http://www.purplemath.com/Clifton_NJ_Trigonometry_tutors.php","timestamp":"2014-04-20T11:21:51Z","content_type":null,"content_length":"24429","record_id":"<urn:uuid:7bf7f678-939e-442d-8718-d030a862dfd4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
|
pragmatist comments on Natural Laws Are Descriptions, not Rules - Less Wrong
I'm a bit skeptical of your claim that entropy is dependent on your state of knowledge; It's not what they taught me in my Statistical Mechanics class, and it's not what my brief skim of
Wikipedia indicates. Could you provide a citation or something similar?
Sure. See section 5.3 of James Sethna's excellent textbook for a basic discussion (free PDF version available here). A quote:
"The most general interpretation of entropy is as a measure of our ignorance about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the
initial conditions except for the conserved quantities... This interpretation -- that entropy is not a property of the system, but of our knowledge about the system (represented by the ensemble of
possibilities) -- cleanly resolves many otherwise confusing issues."
The Szilard engine is a nice illustration of how knowledge of a system can impact how much work is extractable from a system. Here's a nice experimental demonstration of the same principle (see here
for a summary). This is a good book-length treatment of the connection between entropy and knowledge of a system.
Let's say you start with some prior over possible initial microstates. You can then time evolve each of these microstates separately; now you have a probability distribution over possible final
microstates. You then take the entropy of the this system.
Yes, but the prior over initial microstates is doing a lot of work here. For one, it is encoding the appropriate macroproperties. Adding a probability distribution over phase space in order to make
the derivation work seems very different from saying that the Second Law is provable from the fundamental laws. If all you have are the fundamental laws and the initial microstate of the universe
then you will not be able to derive the Second Law, because the same microscopic trajectory through phase space is compatible with entropy increase, entropy decrease or neither, depending on how you
carve up phase space into macrostates.
EDITED TO ADD: Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring what I say in the next paragraph). The entropy of the probability
distribution won't change if you follow that procedure, so you won't recover the Second Law asymmetry. This is a consequence of Liouville's theorem. In order to get entropy increase, you need a
periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
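(For reference, a gloss of the point being invoked here, not part of the original comment: the fine-grained Gibbs entropy of a phase-space distribution is
$$ S[\rho] \;=\; -k_B \int \rho(q,p,t)\,\ln \rho(q,p,t)\; d\Gamma , $$
and Liouville's theorem, which says $\rho$ is constant along the Hamiltonian flow, implies $dS/dt = 0$ for the exactly evolved distribution. Entropy growth only appears once $\rho$ is periodically averaged over macroscopic cells, which is the coarse-graining step referred to above.)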
In any case, it is not so clear that even the procedure you propose works. The main account of why the entropy was low in the early universe appeals to the entropy of the gravitational field as
compensation for the high thermal entropy of the initial state. As of yet, I haven't seen any rigorous demonstration of how to apply the standard tools of statistical physics to the gravitational
field, such as constructing a phase space which incorporates gravitational degrees of freedom. Hawking and Page attempted to do something like this (I could find you the citation if you like, but I
can't remember it off the top of my head), but they came up with weird results. (ETA: Here's the paper I was thinking of.) The natural invariant measure over state space turned out not to be
normalizable in their model, which means that one could not define sensible probability distributions over it. So I'm not yet convinced that the techniques we apply so fruitfully when it comes to
thermal systems can be applied to universe as a whole.
Also, simply starting with a prior and evolving the distribution in accord with the laws will not work (even ignoring what I say in the next paragraph). The entropy of the probability
distribution won't change if you follow that procedure, so you won't recover the Second Law asymmetry. This is a consequence of Liouville's theorem. In order to get entropy increase, you need a
periodic coarse-graining of the distribution. Adding this ingredient makes your derivation even further from a pure reduction to the fundamental laws.
Dang, you're right. I'm still not entirely convinced of your point in the original post, but I think I need to do some reading up in order to:
• Understand the distinction in approach to the Second Law you're proposing is not sufficiently explored
• See if it seems plausible that this is a result of treating physics as rules instead of descriptions.
This has been an interesting thread; I hope to continue discussing this at some point in the not super-distant future (I'm going to be pretty busy over the next week or so).
|
{"url":"http://lesswrong.com/lw/ct3/natural_laws_are_descriptions_not_rules/77lp","timestamp":"2014-04-21T14:42:47Z","content_type":null,"content_length":"36322","record_id":"<urn:uuid:cff3e7b5-fabd-4246-9411-83fd6a4f79f6>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Elmhurst Trigonometry Tutor
...I also provide much more individual feedback than could be possible in a large classroom, so you will find that time spent with me is much more productive. Whether you have taken a class
before, or if you have just been working on your own--or even if you have not yet begun to study, I can help ...
55 Subjects: including trigonometry, reading, writing, English
I hold a bachelors degree in microbiology, a masters degree in biochemistry and molecular biology, and have five years of Laboratory experience in medical centers like UT Southwestern and Mount
Sinai medical center. These experiences have helped me to understand the subject matters in depth. As a ...
16 Subjects: including trigonometry, chemistry, geometry, biology
...I keep the lesson objectives very clear. I encourage the student’s involvement throughout the lesson. I plan lessons using hands on activities and manipulatives.
39 Subjects: including trigonometry, English, reading, ESL/ESOL
...I am looking for NYC students for the weekdays and NJ students for weekends. My previous experience includes: Public and private tutoring, public and private school teaching, prestigious
fellowship teaching math and science in middle school classrooms. The subjects I can tutor are as follows: ALL K-12 mathematics courses including AP courses.
31 Subjects: including trigonometry, English, statistics, geometry
I am determined to discover how a person thinks, not just what they know. Reasoning ability is a central component of my educational philosophy. When you work with me, you are working with a
seasoned math expert and an enthusiastic (slightly nerdy) engineering professional.
11 Subjects: including trigonometry, calculus, algebra 2, algebra 1
|
{"url":"http://www.purplemath.com/East_Elmhurst_Trigonometry_tutors.php","timestamp":"2014-04-19T07:08:11Z","content_type":null,"content_length":"24283","record_id":"<urn:uuid:0ee1f3f0-dc70-4950-9230-7a0f8e43c17f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics 3704 - Thermal Physics
Instructor: Michel Pleimling
Email: Michel.Pleimling@vt.edu
Office: Robeson 221
Phone: (540) 231-2675
Office hours: Tuesday, Thursday: 2.30 - 3.30 p.m., Monday: 1.30 - 2.30 p.m., and by appointment
Lectures: Thursday, Tuesday: 5.00 - 6.15 p.m., Robeson 101
Homework: Three to four problems will be assigned each Thursday. Solutions are due one week later in class (unless otherwise noted). Teamwork in solving these problems is encouraged, but
solutions must be handed in separately. Your solutions will be graded, and sample solutions will be provided.
First midterm Tuesday, February 19, 5.00 - 6.15 p.m.
Second midterm Tuesday, March 25, 5.00 - 6.15 p.m.
Final exam Saturday, May 3, 7.00 - 9.00 p.m.
All are closed-book exams. A sheet with useful equations will be provided. The honor code pertains to all three in-class exams.
Grade: 30% homework, 20% each midterm exam, 30% final exam
Prerequisite: PHYS 2306 - Foundations of Physics I
Recommended Draft chapters of Thermal and Statistical Physics by Harvey Gould and Jan Tobochnik, freely available at http://stp.clarku.edu/notes/
M. Kardar, Statistical Physics of Particles (Cambridge 2007)
R. Baierlein, Thermal Physics (Cambridge 1999)
A. H. Carter, Classical and Statistical Thermodynamics (Prentice Hall 2001)
C. Kittel and H. Kroemer, Thermal Physics (Freeman 1980)
D. V. Schroeder, An Introduction to Thermal Physics (Addison-Wesley 1999)
Topics: 1. Fundamentals of Statistical Mechanics
2. The microcanonical, the canonical and the grand-canonical ensembles
3. Thermodynamic processes
4. Quantum systems: Bose- and Fermi-statistics
5. Interacting systems and phase transitions
|
{"url":"http://www.phys.vt.edu/~pleim/thphys.html","timestamp":"2014-04-19T06:52:14Z","content_type":null,"content_length":"10497","record_id":"<urn:uuid:8037b004-e76a-4bd3-8187-08a9b9a8eaff>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coordinate Locations on a Map
11.16: Coordinate Locations on a Map
Created by: CK-12
Practice Coordinate Locations on a Map
Have you ever created a map?
Kevin and his pen pal Charlotte are both creating maps of their neighborhoods to show each other what it looks like where they live. Kevin has decided to name the most important things on his map. He
has decided to include his house, his school, the skate park and the library. Since Kevin lives close to each of these things, he is sure that he can draw them on a map.
Kevin has decided to use a coordinate grid to show each location. He wants to send Charlotte a key that will match each location with its accurate coordinates.
Here is Kevin's grid.
Given this map, which coordinates should Kevin use to name each location?
Pay close attention to this Concept and you will learn how to write coordinates to name locations.
When we graphed geometric figures, we used integer coordinates to find the location of each point. Then we graphed each point according to its location. Maps also use integer coordinates to identify
different locations. If you look at a map, you will see some numbers and sometimes letters around the border of the map. These can assist you in figuring out the location of cities or even different places.
Some maps use integers to identify different locations. Let's look at a map that does this. Here we have used a coordinate grid to identify where different places are in a town. Let's look at this map.
We can say that Kara’s house is blue, Mark’s house is pink and Chase’s house is green. Each house has coordinates. We can say that the center of each house marks its coordinates on the map.
Kara’s house is at (-3, 1)
Mark’s house is at (3, -2)
Chase’s house is at (3, 4)
Local maps use letters and numbers to identify locations. World maps use degrees written in latitude and longitude. Let’s learn about this real life use of coordinates.
Longitude is the measure of lines vertically on a map.
Latitude is the measure of lines horizontally on a map.
We can measure longitude and latitude using degrees. These degrees are written as ordered pairs.
Here you can see degrees of latitude as horizontal measures. The degrees of longitude are the vertical measures.
We can identify different locations on a map if we have the coordinates of the location. Notice that the degrees of latitude are written first, those are the horizontal degrees, and the degrees of
longitude are written second. Those are the vertical degrees.
Practice working in degrees. Identify the states according to their locations in latitude and longitude.
Example A
$30^\circ, 83^\circ$
Example B
$42^\circ, 100^\circ$
Solution: South Dakota
Example C
$30^\circ, 100^\circ$
Solution: Texas
Now back to the map.
Here is the original problem once again. Reread the problem and then use what you have learned to write the coordinates to match Kevin’s map.
Kevin and his pen pal Charlotte are both creating maps of their neighborhoods to show each other what it looks like where they live. Kevin has decided to name the most important things on his map. He
has decided to include his house, his school, the skate park and the library. Since Kevin lives close to each of these things, he is sure that he can draw them on a map.
Kevin has decided to use a coordinate grid to show each location. He wants to send Charlotte a key that will match each location with its accurate coordinates.
Here is the grid that Kevin starts off with.
Given this map, which coordinates should Kevin use to name each location?
Now that you have finished this Concept, let’s work on writing coordinates to match Kevin’s map.
First, let’s start with his home. His house is located at (4, 5).
His school is located close to his home at (4, 2)
The library is located at (-1, 3).
Finally, the skate park is the farthest away from his home at (0, -3).
Kevin is ready to send his map and coordinates to Charlotte. He can’t wait to see her map.
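For readers comfortable with a little code, here is one way Kevin's key could be written down as ordered pairs (a small sketch, not part of the original lesson; it simply restates the coordinates above):

# Kevin's key: each place is stored with its (x, y) coordinates.
locations = {
    "house": (4, 5),
    "school": (4, 2),
    "library": (-1, 3),
    "skate park": (0, -3),
}

for name, (x, y) in locations.items():
    print(f"The {name} is at ({x}, {y})")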
Here are the vocabulary words in this Concept.
Quadrant
one of the four sections of a coordinate grid
Origin
the place where the $x$-axis and the $y$-axis cross
Ordered Pair
the $x$ and $y$ values of a point, written as $(x,y)$
$x$-axis
the horizontal axis on the coordinate grid
$y$-axis
the vertical axis on the coordinate grid
Coordinates
the $x$ and $y$ values that name a point's location
Longitude
vertical measure of degrees on a map
Latitude
horizontal measure of degrees on a map
Guided Practice
Which state is at $45^\circ, 70^\circ$?
To answer this question, we start with the horizontal degrees, the latitude. That puts us at $45^\circ$. Then we move to the vertical degrees, the longitude, at $70^\circ$.
You can see that we are at the state of Maine.
Maine is our answer.
As long as you have values on a map, you can use coordinates to identify any location.
Video Review
Here is a video for review.
http://video.about.com/geography/Latitude-and-Longitude.htm - This is a video from about.com on latitude and longitude.
Directions : Identify each place on the map according to latitude and longitude.
1. What is at $35^\circ, 70^\circ$
2. What is at $30^\circ, 90^\circ$
3. What is at $85^\circ, 70^\circ$
4. What is at $55^\circ, 90^\circ$
5. What is at $20^\circ, 40^\circ$
6. What is at $40^\circ, 80^\circ$
7. What is at $85^\circ, 100^\circ$
8. What is at $95^\circ, 80^\circ$
9. What is at $95^\circ, 100^\circ$
10. What is at $60^\circ, 30^\circ$
11. What is at $75^\circ, 30^\circ$
12. What is at $45^\circ, 40^\circ$
13. What is at $45^\circ, 45^\circ$
14. What is at $50^\circ, 70^\circ$
15. What is at $80^\circ, 50^\circ$
|
{"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r2/section/11.16/","timestamp":"2014-04-23T13:01:26Z","content_type":null,"content_length":"134054","record_id":"<urn:uuid:0441612a-9363-450e-ab79-e4066b12d716>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: [ap-stat] Proofs for APStat Teachers and Others
Replies: 3 Last Post: Jul 23, 2012 11:23 AM
re:[ap-stat] Proofs for APStat Teachers and Others
Posted: Jul 23, 2012 11:23 AM
Among the important results of APStat that teachers should know how
to prove mathematically are the following three:
1. The Central Limit Theorem (CLT)
2. The Chi-Square Test Stat Is Approximately Chi-Square Distributed
3. The T Test Stat Has A Student's t Distribution (exactly so when
sampling from a normal population)
For those still in a proof-reading frame of mind, feel free to
request either of the first two off-List.
Briefly, there are two versions of each: The first for the CLT is
the usual version (sans the rigor of such --- thanks to JS for
making me attentive of such) and is good for those who have not
seen a proof of the CLT before. The second proof of the CLT uses two
applications of L'Hospital's Rule to complete the proof.
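(For orientation only, here is the standard moment-generating-function sketch of that argument, not the author's write-up: for iid $X_i$ with mean $0$, variance $1$, and MGF $M(t)$ finite near $0$, the standardized sum $Z_n = n^{-1/2}\sum_{i=1}^{n} X_i$ has $M_{Z_n}(t) = [M(t/\sqrt{n})]^n$, and
$$ \lim_{n\to\infty} n \ln M\!\left(\frac{t}{\sqrt{n}}\right) \;=\; \lim_{s\to 0}\frac{\ln M(st)}{s^2} \;=\; \frac{t^2}{2}, $$
where the last equality uses exactly two applications of L'Hospital's rule together with $M(0)=1$, $M'(0)=0$, $M''(0)=1$; hence $M_{Z_n}(t) \to e^{t^2/2}$, the MGF of the standard normal.)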
Wrt 2 above, the first version is Fisherian in nature (especially
good for those teaching BCCalc) and the second version is Neyman-
Pearsonian in nature and good for those who were introduced to
math-stat from books with such an approach.
Thus, feel free to let me know off-List if you'd like any of the
proofs. (Suggestion: You should request one at a time and request
another one if you were comfortable following the one requested.)
Wrt 3 above, in a way, such might seem like an entire math prob-and-
stat course by itself, and so I'm holding it off for a while. (If
anyone else wants to read and comment on it, then let me know off-
List and I'll forward a copy to you --- btw, it runs six pages,
even though the above theorem [3] and proof of such itself run only
six lines...)
-- David Bee
PS: Please allow for some time for me to respond. (I should be able
to do so within the same day.)
PPS: Thanks to those who have offered comments.
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=7852487","timestamp":"2014-04-20T00:57:08Z","content_type":null,"content_length":"22124","record_id":"<urn:uuid:21a4e3a5-0bb5-49af-95e5-72846cf9aa4a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding a set of linearly independent columns?
Hello, everyone! I apologize if this is a basic question. I am looking for a way to find a maximal set of linearly independent columns. I know that dgesvd can be used to find the rank of a matrix.
Can it also be used to find the actual indices of the linearly independent columns?
Thank you very much for your time!
Best regards,
Re: Finding a set of linearly independent columns?
You might want to use QR with pivoting for that. (DGEQP3.)
Re: Finding a set of linearly independent columns?
Thank you! Okay, I just want to be absolutely sure that I have this right. The Lapack call dgeqp3 takes the following:
* JPVT (input/output) INTEGER array, dimension (N)
* On entry, if JPVT(J).ne.0, the J-th column of A is permuted
* to the front of A*P (a leading column); if JPVT(J)=0,
* the J-th column of A is a free column.
* On exit, if JPVT(J)=K, then the J-th column of A*P was the
* the K-th column of A.
Since I don't know anything about the rank and columns of my matrix beforehand, I initialize the array JPVT to zero, and then pass it to dgeqp3. On exit, I walk the array JPVT, and find *any* column
with a non-zero entry. These columns form a maximal linearly independent set.
Or, do I need to know the rank of A, and find only the columns that have been permuted to the 1 -> rank(A) position?
Thank you very much! I am sure this is a very basic question, so I apologize for taking time!
Very best regards,
Re: Finding a set of linearly independent columns?
Well I indeed forgot myself how this routine worked ...
OK so I re-read the routine, I think I am up to speed now.
JPVT is an array of INTEGER. It is input/ouput.
In input, what matters is whether it is different or zero or not.
If it is equal to 0 then the column j is treated in the general case.
The general case is the following:
(1) you find the column with greatest norm, say column k
(2) you permute the k-th column with the first
(3) you factorize with the first column that is: you orthogonalize all columns from 2 to n with column 1
And you loop on these steps (1)-(2)-(3).
So at each step you extract the column with largest norm from the column space.
Now you might want to specify a few columns that no matter how large their norms are, you want to orthonormalize against them first. Then you set their JPVT in input to a value different from 0 (say
1). The first thing the routine DGEQP3 will do is to orthonormalize against these columns using standard QR factorization (without pivoting). This is the special case.
OK so that said, yes, just set up JPVT to 0 for JPVT(1) to JPVT(N).
To get the rank, you have several ways to go. You however need to realize that your question is really tricky in finite precision arithmetic (see remark after).
One way to go is the following: the columns of the R factor are ordered from the largest in norm to the smallest. So you can scan the columns of R, actually you simply need to scan the diagonal of R,
and look for a big drop in the diagonal values. Where the drop occurs is your rank, and the corresponding columns are given by JPVT(1) to JPVT(RANK).
Well, so of course all this is very tricky with finite precision arithmetic. The concept of rank is tricky in itself. To define it properly (to define the numerical rank) you need the SVD. But then
you cannot relate the rank to a given set of columns of the matrix A, as you asked to do in your first post. So please try the "one way to go" and if it works for you, that's great. Otherwise ....
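For illustration, here is a minimal sketch of the same recipe using SciPy's pivoted QR, which wraps LAPACK's DGEQP3 (my example, not part of the original thread; the relative tolerance rtol is an assumption you must tune for your application):

import numpy as np
from scipy.linalg import qr

def independent_columns(A, rtol=1e-10):
    # Column-pivoted QR: A[:, piv] = Q @ R, with |diag(R)| non-increasing.
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    d = np.abs(np.diag(R))
    # Estimate the numerical rank by looking for the drop relative to d[0].
    rank = int(np.sum(d > rtol * d[0])) if d.size and d[0] > 0 else 0
    return piv[:rank], rank

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 0., 1.]])   # the second column is twice the first
cols, rank = independent_columns(A)
print(rank, cols)              # rank 2, plus a maximal independent set of column indices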
Re: Finding a set of linearly independent columns?
If possible, can you please quantify the big drop in the diagonal of R?
Re: Finding a set of linearly independent columns?
Not really ... this really depends on your application.
If the singular values of the matrix are 1, 1, 1, 1e-3, 1e-3, 1e-10, 1e-10, what is the rank of the matrix?
Some will tell you that the numerical rank is 7, others that the numerical rank is 5, and others that the numerical rank is 3.
We cannot answer this question for the users.
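To make the tolerance-dependence concrete, here is a small illustration (my example, not from the thread), using NumPy's SVD-based rank estimate with an explicit threshold on the singular values:

import numpy as np
s = np.array([1, 1, 1, 1e-3, 1e-3, 1e-10, 1e-10])
A = np.diag(s)                               # a matrix with exactly these singular values
print(np.linalg.matrix_rank(A, tol=1e-12))   # 7
print(np.linalg.matrix_rank(A, tol=1e-6))    # 5
print(np.linalg.matrix_rank(A, tol=1e-1))    # 3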
|
{"url":"http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=1545&p=8136","timestamp":"2014-04-20T08:38:38Z","content_type":null,"content_length":"24429","record_id":"<urn:uuid:dec7e346-4261-4b2d-becf-84575a250c34>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lakeside, CO Precalculus Tutor
Find a Lakeside, CO Precalculus Tutor
I have taught students from kindergarten through eighth grade for 20 years in Boulder Valley School District, spending the majority of that time teaching middle school math and science. I have
also tutored high school students and adults in Honors Geometry, Advanced Algebra 2, and Precalculus. I l...
14 Subjects: including precalculus, reading, geometry, algebra 1
I hold a Ph.D. degree in Mathematics from the University of Leipzig (Germany). My area of expertise is Differential Equations with Variational Structure. I have been doing Mathematics for quite
some time now. You probably came across my profile because you have some troubles in Mathematics.
15 Subjects: including precalculus, French, calculus, algebra 1
...Through helping others, my excitement for learning was enhanced, which inspired me to pursue tutoring opportunities. I now have experience in tutoring algebra, geometry, pre-calculus,
trigonometry, calculus, and physics at the high school and university levels, as well as experience in elementar...
13 Subjects: including precalculus, physics, calculus, geometry
...I have extensive coursework in math and science, but also took Advanced Placement courses in History and English in high school. Additionally, Notre Dame requires each student to obtain a broad
liberal arts foundation regardless of major, which enables me to think critically about a variety of t...
15 Subjects: including precalculus, calculus, algebra 1, algebra 2
...My name is Peter, and I am a CU graduate, going to graduate school to earn my PhD in chemistry in the Fall. I have TA'd organic, general, and introductory level courses, as well as taken
classes in pedagogy to further improve my skills as a teacher. I have served as a private tutor for many stu...
6 Subjects: including precalculus, chemistry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Lakeside_CO_Precalculus_tutors.php","timestamp":"2014-04-18T15:52:19Z","content_type":null,"content_length":"24243","record_id":"<urn:uuid:0bc00a40-a96d-42ba-a6c5-0427384fecfc>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
{"url":"http://openstudy.com/updates/4ee28957e4b0a50f5c56459e","timestamp":"2014-04-17T18:58:14Z","content_type":null,"content_length":"51261","record_id":"<urn:uuid:029d3f33-2ff1-4b18-8f9c-c9608779e904>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic Algebra/Solving Quadratic Equations/Solving Quadratic Equations by Factoring
The easiest way to solve a quadratic equation is to factor it. Steps:
1. Set y equal to zero.
2. List all the factors of "a" in y = ax^2 + bx + c.
3. List all the factors of "c" in y = ax^2 + bx + c.
4. See which factors, when added or subtracted together, equal "b".
Example Problems
Solve 0 = x^2 + 2x + 1.
1. a = 1, and 1 * 1 = 1
2. c = 1, and 1 * 1 = 1
3. b = 2, and 1 + 1 = 2
So 0 = (x + 1)(x + 1), or 0 = (x + 1)^2, which gives x = -1.
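A small sketch of the factor search in step 4 (not part of the original wikibook; it assumes integer coefficients with a = 1):

def factor_pairs_for(b, c):
    """Return integer pairs (p, q) with p * q == c and p + q == b."""
    pairs = []
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                pairs.append((p, q))
    return pairs

# x^2 + 2x + 1 factors as (x + 1)(x + 1):
print(factor_pairs_for(b=2, c=1))   # [(1, 1)]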
Practice Games
Practice Problems
|
{"url":"http://en.m.wikibooks.org/wiki/Basic_Algebra/Solving_Quadratic_Equations/Solving_Quadratic_Equations_by_Factoring","timestamp":"2014-04-18T13:17:26Z","content_type":null,"content_length":"15478","record_id":"<urn:uuid:22649bf3-d75f-4597-8960-0e5b2182d6f0>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Lazy Sequence
The following is some theoretical learning I have done about functions of functions in Haskell and F#. It uses Haskell's type and function notation but has a couple of F#'s (perhaps actually OCaml's?)
names for functions, because they are clearer for my purposes: they have clear directionality, and the flipped versions show some of the similar properties more clearly than the commonly used Haskell forms.
There are two main operations you can do with function values: apply them, and compose them. One other operation referenced here is flip, which swaps the argument order of the function.
First up, the function application operator, with its Haskell name and then the F# versions that I'm going to prefer:
$ :: (a -> b) -> a -> b
<| :: (a -> b) -> a -> b -- Function application
<| = ($)
|> :: a -> (a -> b) -> b -- Pipeline
|> = flip ($)
The second common operator on functions is function composition; again the Haskell name followed by the F# names. These are not to be confused with Haskell's monadic sequencing operator of the same name, >>.
. :: (b -> c) -> (a -> b) -> (a -> c)
<< :: (b -> c) -> (a -> b) -> (a -> c)
<< = (.)
>> :: (a -> b) -> (b -> c) -> (a -> c)
>> = flip (.)
In Haskell it's not immediately obvious that monadic bind is related to function application; in this case, applying a function to a value in a monadic context. With the flipped function application (aka
pipeline) operator we can see clear similarities in the types:
|> :: a -> (a -> b) -> b
>>= :: Monad m => m a -> (a -> m b) -> m b
<| :: (a -> b) -> a -> b
=<< :: Monad m => (a -> m b) -> m a -> m b
Note that in both cases there is a function applied to a value, in the case of the bind operators that value is in a Monad type. The difference is that bind has a more specific type than plain
application, in the form of a type that implements the monad interface. This provides richer semantics about the function application.
We know that there is a relationship between function application and function composition. This is particularly clear with pipeline and right facing compose. In this expression, g and h are both functions:
f val = val |> g |> h == f = g >> h
And given we know there is a relationship between bind and application, it can easily be supposed that there is a similar operation to composition specifically for monadic functions.
If we replace the three functions in the type of compose with monadic operations, we get an operator with a type such as:
>> :: (a -> b) -> (b -> c) -> (a -> c)
>=> :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
<< :: (b -> c) -> (a -> b) -> (a -> c)
<=< :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
Indeed this operator does exist and is known as Kleisli composition of monads.
Generalized functions of functions
A key feature that Monads bring is that they allow a single operator to generalize many different kinds of function applications. The types that implement the Monad typeclass in haskell each provide
different contexts to manage how various functions are applied. The identity monad effectively is the function application operator dressed up with a more complex type.
Given there is a generalization for application, it seems likely that there would be a generalisation for composition too as we have already seen two general patterns. It turns out that there is, and
in Haskell it is known as the Arrow typeclass. Of particular interest here, Arrow provides an operator:
>>> :: Arrow ar => ar b c -> ar c d -> ar b d
An arrow type 'ar a b' represents a transformation from some type a to type b. For example:
a -> b -- becomes:
Arrow ar => ar a b
Monad m => a -> m b -- becomes:
(Arrow ar, Monad m) => ar a (m b)
This, the function composition (>>) and Kleisli composition (>=>) operators can be represented in terms of Arrows as:
>> :: (a -> b) -> (b -> c) -> (a -> c)
>>> :: Arrow ar => ar b c -> ar c d -> ar b d
>=> :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
>>> :: (Arrow ar, Monad m) => ar b (m c) -> ar c (m d) -> ar b (m d)
What's the point?
All this just shows a correspondence between monads, arrows and the primitive operations of functions, and in particular how those operations can be generalised when more specific types are involved.
|
{"url":"http://brehaut.net/blog/2009/functions_of_functions","timestamp":"2014-04-17T12:29:37Z","content_type":null,"content_length":"19947","record_id":"<urn:uuid:170d9f89-f1c5-4890-b94c-b4aeabe8684f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graphing Functions: Translation
Methods for changing a function to shift it left, right, up, or down. Includes three examples.
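(For reference, and not part of the original course blurb, the standard translation rule is: given $y = f(x)$, the graph of
$$ y = f(x - h) + k $$
is the original graph shifted $h$ units to the right and $k$ units up, with negative $h$ or $k$ shifting left or down.)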
Graphing Functions: Changing Scale
Ways to stretch or shrink a function by changing the expression used to define it, with an example.
Graphing Functions: Even and Odd Functions
How to reflect a function across either of the coordinate axes, including definitions for even and odd functions. Rules for the behavior of even and odd functions are given, along with examples.
Graphing Functions: Trigonometric Functions
Graphs of the sine, cosine, and tangent functions, including definitions of periodicity and the general sinusoidal wave, with examples.
Graphing Functions: Inverses
Reflecting a graph across the line y=x to create an inverse function. Includes examples and discussion of the need to restrict the domain of the inverse function in some cases.
Complete Graph Analysis
Graphing a function and finding its asymptotes, maxima, minima, inflection points, and regions where the graph is concave up or concave down.
Nine questions involving translation, change of scale, even functions, odd functions, inverses, and trigonometric functions.
Sketching Graphs
Three problems which involve sketching the graph of a function.
|
{"url":"http://ocw.mit.edu/high-school/mathematics/exam-prep/analysis-of-graphs/graphs-of-functions/","timestamp":"2014-04-20T21:13:29Z","content_type":null,"content_length":"47245","record_id":"<urn:uuid:2f354f99-8948-4bce-af28-eac442282b03>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Landover Hills, MD ACT Tutor
Find a Landover Hills, MD ACT Tutor
...I enjoy math and I am very patient. All 3 of my children took Algebra 2 when they were in high school. I was very successful as their tutor.
15 Subjects: including ACT Math, chemistry, physics, ASVAB
...We can work together to significantly improve your scores and grades. I have very flexible hours and am happy to come work with you wherever is most convenient. I can meet you at a library,
coffee shop or even your house whatever is most comfortable for you!
22 Subjects: including ACT Math, calculus, geometry, GRE
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including ACT Math, calculus, physics, GRE
...I have always enjoyed tutoring – especially at the high school level. I now devote more time and effort to it than was possible before retirement. I have hundreds of hours of experience with
more than 50 students.
13 Subjects: including ACT Math, chemistry, physics, calculus
...I have a PhD in Organic Chemistry and over 10 years tutoring experience. I also offer study skills for sciences and maths. Classes Offered are Organic Chemistry, Chemistry for nursing
students, General/introductory Chemistry to college students, High school Chemistry, and AP Chemistry.I have a PhD in Organic Synthesis.
7 Subjects: including ACT Math, chemistry, algebra 1, organic chemistry
|
{"url":"http://www.purplemath.com/landover_hills_md_act_tutors.php","timestamp":"2014-04-20T23:30:53Z","content_type":null,"content_length":"23963","record_id":"<urn:uuid:5101427f-70e5-4868-8bca-6e8bbcdc2bf4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Somerville, MA Geometry Tutor
Find a Somerville, MA Geometry Tutor
...In previous years, I have taught Pre-Algebra, Algebra I, Geometry, Algebra II, Trigonometry and Calculus. I graduated from Rensselaer Polytechnic Institute (RPI) cum laude, and have a Masters
degree from Boston University. I'm very confident in my skills to tutor students and have been tutoring math and SAT prep for many years.
10 Subjects: including geometry, calculus, algebra 1, algebra 2
As an undergraduate I studied the math/physics track as a geology major and have a love for math and physics! While in high school I tutored algebra students, and while an undergraduate I graded
homework for a calculus class as well as tutored students in a calculus class. I also worked in a first grade classroom and frequently helped individual students with math lessons.
12 Subjects: including geometry, calculus, physics, precalculus
...I earned my undergraduate degree in pharmacy and my master's degree in biomedical science. I took pharmacology in both programs and got an A in both classes. I used to help my classmates pass the exam.
10 Subjects: including geometry, chemistry, Chinese, algebra 1
I'm a high school and middle school math teacher and tutor, MA licensed and certified, currently teaching (part time) and tutoring math students at grades 7-12. I'm a good teacher, listen and
explain things well, enjoy teens and am patient and understanding. I have excellent tutoring references.
15 Subjects: including geometry, calculus, physics, statistics
...My references will gladly provide details about their own experiences. I have a master's degree in computer engineering and run my own data analysis company. Before starting that company, I
developed software for large and small companies and was most recently the IT director at a large accounting firm.
11 Subjects: including geometry, algebra 1, algebra 2, precalculus
|
{"url":"http://www.purplemath.com/Somerville_MA_Geometry_tutors.php","timestamp":"2014-04-16T04:41:31Z","content_type":null,"content_length":"24179","record_id":"<urn:uuid:4333d278-f014-408f-ba4f-eff99e738e53>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fulton, MD Prealgebra Tutor
Find a Fulton, MD Prealgebra Tutor
...I am comfortable teaching writing - from basic spelling, vocabulary, and sentence construction to research papers and creative fiction. I have worked with a student whose severe spelling
deficit masked a gifted storyteller, building his confidence and skills. I have an extensive background in mathematics rarely found in elementary teachers.
32 Subjects: including prealgebra, reading, English, chemistry
After graduating from Indiana University of Pennsylvania with a Bachelor's of Science in Business Administration with a concentration in Accounting, I have been working as an accountant with over
25 years of professional experience under my belt. My volunteer experience includes many years at my ch...
13 Subjects: including prealgebra, geometry, accounting, algebra 1
...The abstract notation of algebra often gives new students difficulty, but the concept, when properly explained, is not difficult. A good working knowledge of algebra is essential to fields
like science, engineering, math, economics, and finance. It is also useful in real life, from figuring out...
4 Subjects: including prealgebra, algebra 1, elementary math, baseball
...From NY to GA. I coach youths from 10 years upwards. Chess has been my indoor pastime ever since I could remember.
12 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I currently teach Technology Fluency at Coppin State University as adjunct faculty. I love tutoring and I am passionate about transferring knowledge and seeing people succeed. Patience is one
of the key things that a good tutor should have, and I can proudly say that I do have it in me. EDUCATION ...
15 Subjects: including prealgebra, calculus, precalculus, trigonometry
|
{"url":"http://www.purplemath.com/fulton_md_prealgebra_tutors.php","timestamp":"2014-04-21T07:24:35Z","content_type":null,"content_length":"24002","record_id":"<urn:uuid:d6533df1-c7c7-48d1-b3c8-59e768caaf8f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Case for a Positive Lambda-Term - V. Sahni & A. Starobinsky
3.1. Closed universe models (k = +1)
Consider a particle moving with negative total energy under the influence of the potential V(a) shown in fig. (2), then the following situations arise (the one dimensional particle coordinate is
equivalent to the value of `a' - the expansion factor.)
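(For context, and as an assumption about the paper's equation (7) rather than a quotation of it: the particle analogy comes from rewriting the Friedmann equation, in units with c = 1, as
$$ \tfrac{1}{2}\dot a^{2} + V(a) = -\tfrac{k}{2}, \qquad V(a) = -\frac{4\pi G}{3}\,\rho(a)\,a^{2} - \frac{\Lambda}{6}\,a^{2}, $$
so a closed universe, k = +1, corresponds to a particle with negative total energy moving in the potential V(a).)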
(i) Oscillating models: The particle moves from left to right (starting from a = 0) but with insufficient energy to surmount the potential barrier. Consequently the expansion factor a(t) first
increases then decreases describing a universe which, after expanding, contracts into a singularity. Such models are called oscillating models of the first kind [142].
(ii) Bouncing models: The particle moves from right to left (starting from a = ∞), so the expansion factor a(t) first decreases then increases and the universe rebounds after collapsing without ever reaching a singular state.
Such models are called bouncing models or oscillating models of the second kind; an example of such a model is provided by the complete de Sitter space-time with a(t) ∝ cosh(Ht),
where -∞ < t < ∞.
(iii) Static Einstein Universe (SE): The particle is placed at the top of the potential with exactly zero kinetic energy; setting ȧ = ä = 0 for pressureless matter (w = 0) we obtain Λ = 4πGρ_m/c^2 = 1/a_0^2,
which relates the value of the cosmological constant to the density of matter and the curvature of space. The volume and mass of a SE universe are respectively V = 2π^2 a_0^3, M = V ρ_m = 2π^2 a_0^3
ρ_m. As a result M = (πc^2 / 2G)a_0, and one finds lim_{a_0 → 0} M = 0, i.e. the mass of the static Einstein universe decreases as its radius shrinks to zero, consequently a static empty universe simply
cannot exist ! This feature of SE found favour with the proponents of Mach's principle as discussed in section 2.
(iv) Loitering Universe: The static Einstein universe is clearly unstable: small fluctuations can make it either contract or expand (these correspond to tiny perturbations of a particle located at
the hump of V(a) in fig. (2), which cause it to roll either towards the left (a decreasing) or towards the right (a increasing)). A loitering universe arises if Λ slightly exceeds the critical value Λ_crit corresponding to the static Einstein universe. In this case the universe begins from the Big Bang, approaches the static Einstein universe and remains close
to it for a substantial period of time before re-expanding [53, 122]. (If Λ < Λ_crit the universe will contract instead of expanding.) The quasi-static or loitering phase, during which the universe
remains close to a ≈ a_0, has several appealing features not present in models which expand monotonically [169]: (i) density perturbations grow at the exponential (Jeans) rate ∝ exp[(4πGρ_m)^{1/2} t] and not at the
weaker rate t^2/3 characteristic of an Einstein-de Sitter universe; (ii) a prolonged quasi-static phase results in an older universe, ameliorating the `age' problem which can arise in matter
dominated flat cosmologies if the value of the Hubble parameter turns out to be large (see section 4.1).
Interest in loitering models rose dramatically in the late 1960's when observations suggested the existence of an excess of quasars near a particular redshift z_l [160, 175, 166]. (Loitering at z_l arises if the cosmological constant exactly balances ρ_m, leading to the relation (1 + z_l)³ = Ω_Λ/Ω_m, where Ω_Λ = Λ/3H². A decaying cosmological constant will lead to loitering at higher values of z_l, which has certain advantages from the standpoint of current observations [169].)
(v) Monotonic Universe: The particle approaches the potential from the left (a = 0) with sufficient energy to surmount it and travel on towards a → ∞.
(vi) Nonsingular Oscillating model: Another cosmological model deserving mention combines a Λ-term, ρ_Λ = Λ/8πG, with matter whose density scales as ρ ∝ a^(-p), p = 3, 4 for matter and radiation respectively. The potential V(a) = -(4πG/3)ρa² associated with this model has a broad minimum which leads to a non-singular oscillatory motion of the expansion factor a(t). This toy model is interesting since it exhibits an infinite number of expansion and contraction cycles without ever becoming singular.
(vii) Other possibilities not shown in Figure (1) include `asymptotic models' in which the universe asymptotically approaches or moves away from the static Einstein universe. The reader is referred
to [62, 142] for a more quantitative discussion of these issues.
Figure 1. Four distinct possible solutions of the Einstein equations with a cosmological constant are schematically shown for a closed universe.
Although the above discussion referred to cosmological models filled with matter having non-negative pressure and a cosmological constant, it is easy to show that the qualitative behaviour of the universe described in (i) - (vii) remains valid if we generalize the definition of the strong energy condition so that ρ_Λ + 3P_Λ < 0 [169].
Figure 2. The `effective potential' V(a) describing the expansion of the universe in the presence of matter and a cosmological constant (see equation (7)). The large variety of solutions to the Einstein equations can be analyzed by studying the kindred problem of the motion of a particle moving under the influence of the potential V.
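To make the effective-potential analysis concrete, here is a small numerical sketch in Python. The unit choice 4πG/3 = 1, c = 1, k = +1 and the parameter values below are illustrative assumptions, not taken from the article; the sketch simply evaluates V(a) for pressureless matter plus a Λ-term and checks whether a "particle" of energy -k/2 can surmount the barrier.

import numpy as np

# Closed FRW universe written as an energy equation:
#   (1/2) * adot**2 + V(a) = -k/2,   with  V(a) = -(4*pi*G/3) * rho(a) * a**2
# and rho(a) = rho_m0 / a**3 + rho_L  (dust plus a Lambda-term, rho_L = Lambda / 8*pi*G).
# Illustrative units: 4*pi*G/3 = 1 and c = 1, so V(a) = -(rho_m0 / a + rho_L * a**2).

def V(a, rho_m0, rho_L):
    return -(rho_m0 / a + rho_L * a**2)

def classify(rho_m0, rho_L, k=1.0):
    a = np.linspace(1e-3, 50.0, 20001)
    barrier = V(a, rho_m0, rho_L).max()   # top of the potential hump
    E = -k / 2.0                          # "energy" of the particle
    if np.isclose(barrier, E, rtol=1e-3):
        return "near-critical: loitering / asymptotic behaviour"
    if barrier < E:
        return "monotonic: the universe surmounts the barrier and expands forever"
    return "barrier above E: oscillating (starting from a = 0) or bouncing (from large a)"

# rho_L = 1/54 is roughly the critical value for rho_m0 = 1 in these illustrative units.
for rho_L in (0.0, 1/54, 0.3):
    print(rho_L, "->", classify(rho_m0=1.0, rho_L=rho_L))

With these made-up numbers, ρ_Λ = 0 gives an oscillating model, ρ_Λ ≈ 1/54 sits near the loitering regime, and a larger ρ_Λ gives monotonic expansion.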
|
{"url":"http://ned.ipac.caltech.edu/level5/March02/Sahni/Sahni3_1.html","timestamp":"2014-04-20T13:27:34Z","content_type":null,"content_length":"13927","record_id":"<urn:uuid:8627a11a-0d31-4f79-9747-0ae308e8cf41>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Heaviest Convex Polygon
Suppose we have an arbitrary function $f : \mathbb{R}^2 \to \mathbb{R}$. For any subset $s \subseteq \mathbb{R}^2$, we can define $g_f(s)$ as the integral* of $f$ over the region $s$. Suppose further
that we have access to an oracle that will tell us the value of $g_f(s)$ for any $s$.
Now, restrict our attention to subsets of $s$ that are the convex hull of a given subset of points $\bar x_c \subseteq \{x_1, \ldots, x_N \}$ with $x_i \in \mathbb{R}^2$. Assuming calls to the oracle
are O(1), what is the complexity (in terms of $N$) of finding $\bar x_c^* = \arg \max_{\bar x_c} g_f(conv(\bar x_c))$? Is there a known algorithm or reduction to a known problem?
EDIT: *Previous statement that Scott answered said "average value" here.
algorithms geometry
g_f doesn't always exist without some kind of assumption on f. For example, is f continuous? – Qiaochu Yuan Feb 2 '10 at 18:26
If you're asking about computational complexity, then you'll need to be more specific about the inputs. How will f be described as an input? – user2498 Feb 2 '10 at 18:28
We can assume $f$ is bounded. Is that good enough? – Andrew Feb 2 '10 at 18:30
I'm only interested in complexity in terms of the number of points N. – Andrew Feb 2 '10 at 18:31
@Konrad, there are no computability issues if one assumes that the oracle g works. g, restricted to subsets of the given set of points, only knows a finite amount of data - more precisely, the integral of f over the regions cut out by all lines among the given points. (Also, the answer I gave below - which I have deleted - was in response to the original formulation of the question.) – Qiaochu Yuan Feb 2 '10 at 20:27
2 Answers
It should be polynomial (probably O(N^3)) in the number of input points using the dynamic programming technique in my paper with Overmars et al, "Finding minimum area k-gons", Disc. Comput. Geom. 7:45-58, 1992, doi:10.1007/BF02187823.
The idea is: for each three points p, q, r, let W[p,q,r] be the optimal convex polygon that has p as its bottommost point (smallest y-coordinate) and qr and rp as edges. We can calculate W[p,q,r] by looking at all choices of s for which psqr is convex and combining the (previously computed) value W[p,s,q] with the weight of triangle pqr.
As described above this takes time O(N^4), but I think that, for each pair of p and q, one can examine the points s and r in the order of the slopes of the lines sq and sr, keeping track of the best s seen so far and using that choice of s for each r in this slope ordering, to reduce the time to O(N^3).
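A rough Python rendering of this recurrence follows. It is one reading of the answer above, not code from the paper: tri_weight(p, q, r) stands in for the oracle value of g on triangle pqr, degenerate and collinear configurations are ignored, and it is the O(N^4) version (the slope-ordering trick described above would bring it down to O(N^3)).

from math import atan2

def cross(o, a, b):
    """Positive if the turn o -> a -> b is counter-clockwise (a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def max_weight_convex_polygon(points, tri_weight):
    """points: list of (x, y) tuples; tri_weight(p, q, r): oracle value of g on triangle pqr.
    Returns the best total weight over convex polygons with at least 3 vertices,
    or None if no such polygon exists."""
    best_overall = None
    for b in points:                                   # b plays the role of the bottommost vertex
        # candidates strictly "above" b (lexicographic tie-break), sorted by angle around b
        cand = [p for p in points if (p[1], p[0]) > (b[1], b[0])]
        cand.sort(key=lambda p: atan2(p[1] - b[1], p[0] - b[0]))
        best = {}                                      # best[(q, r)]: max weight of a convex fan ending with edge q -> r
        for j, q in enumerate(cand):
            for r in cand[j + 1:]:
                w = tri_weight(b, q, r)                # g is additive over the fan triangulation from b
                val = w                                # base case: the triangle b, q, r by itself
                for i in range(j):                     # try to extend a chain that currently ends with edge s -> q
                    s = cand[i]
                    if (s, q) in best and cross(s, q, r) > 0:   # keep the boundary convex at q
                        val = max(val, best[(s, q)] + w)
                best[(q, r)] = val
                if best_overall is None or val > best_overall:
                    best_overall = val
    return best_overall

Because g is an integral, it is additive over the fan of triangles from the bottommost vertex, which is what lets per-triangle oracle calls sum to the weight of the whole polygon; recording the maximising s for each state would recover the optimal vertex set itself.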
Excellent--this makes sense and I will think about it further. Thank you! – Andrew Feb 2 '10 at 19:57
If I could downvote my own reply, I would: Scott Carnahan's is much better. – David Eppstein Feb 2 '10 at 20:07
I responded to his comment and upvoted his answer. Somehow you read what I intended to write even though I completely mis-stated it. – Andrew Feb 2 '10 at 20:11
I'm assuming the N points are fixed ahead of time. In that case, it seems to me that you can just use the oracle on each triple of points, since any convex polygon with more than three sides will have average at most the maximum of the averages over triangles in any triangulation. This gives you O(N^3) at worst.
Oh oh, of course. I apologize, I want $g$ to be the integral, not average value. Somehow David knew what I was talking about even though I wrote it completely wrong. I updated the question and profusely apologize for the mis-statement of the problem. – Andrew Feb 2 '10 at 20:10
No problem. I'm glad the confusion got cleared up. – S. Carnahan♦ Feb 2 '10 at 20:24
|
{"url":"http://mathoverflow.net/questions/13844/heaviest-convex-polygon/13863","timestamp":"2014-04-20T16:39:09Z","content_type":null,"content_length":"65024","record_id":"<urn:uuid:dcb9a1d3-1789-474d-9077-1bfe5a9a2b77>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: Re: st: RE: AW: ratio function
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
RE: Re: st: RE: AW: ratio function
From "Roman Kasal" <kasal@trexima.cz>
To <statalist@hsphsun2.harvard.edu>
Subject RE: Re: st: RE: AW: ratio function
Date Thu, 22 Apr 2010 08:57:17 +0200
thank you for the code, but I have found a problem:
if I calculate over(foreign), the bounds are computed with "e(N_psu)-e(N_strata)" degrees of freedom, but not separately for each level of foreign (the degrees of freedom are for the whole dataset), and this is wrong, I assume.
thank you
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Steve Samuels
Sent: Friday, April 02, 2010 2:58 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: Re: st: RE: AW: ratio function
Perhaps we misunderstand what you are asking for. We have been
assuming that you want the ratio of the means of two variables
("columns"?) measured possibly on the same person. Perhaps you want
the ratio of the means of one variable for two subpopulations. Both
analyses will ignore missing values.
If this is not what you desire, then please demonstrate by hand what
you do want on a small, non-survey data set. Also I'd like to know
which R function does what you are asking for.
The following do file computes the ratio of means with CI and then
does the same for the log ratio and transforms to the original scale.
**************************CODE BEGINS**************************
capture program drop _all
program antilog
local lparm el(r(b),1,1)
local se sqrt(el(r(V),1,1))
local bound invttail(e(df_r),.025)*`se'
local parm exp(`lparm')
local ll exp(`lparm' - `bound')
local ul exp( `lparm' + `bound')
di "parm =" `parm' " ll = " `ll' " ul = " `ul'
sysuse auto, clear
svyset _n
svy: mean mpg, over(foreign)
nlcom (myratio1: _b[Domestic]/_b[Foreign]) //ratio
nlcom (myratio2: log(_b[Domestic]/_b[Foreign])) // log ratio
// Confidence interval of last -nlcom- on antilog scale
antilog
***************************CODE ENDS***************************
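For readers who want the same idea outside Stata, here is a minimal Python sketch of the log-ratio / delta-method interval. The array names are hypothetical, and it treats the two groups as independent simple random samples, so it only mirrors the unweighted -svyset _n- case above, not a full complex-survey analysis.

import numpy as np
from scipy import stats

def log_ratio_ci(x, y, alpha=0.05):
    """Approximate CI for mean(x)/mean(y) for two independent samples,
    built on the log scale with the delta method (no weights or strata)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)   # variances of the two means
    log_r = np.log(mx / my)
    se = np.sqrt(vx / mx**2 + vy / my**2)                      # delta method on log(mx/my)
    df = len(x) + len(y) - 2                                   # crude df; Stata uses e(df_r)
    half = stats.t.ppf(1 - alpha / 2, df) * se
    return np.exp(log_r), np.exp(log_r - half), np.exp(log_r + half)

# e.g. ratio, lo, hi = log_ratio_ci(mpg_domestic, mpg_foreign)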
On Fri, Apr 2, 2010 at 2:37 AM, Roman Kasal <kasal@trexima.cz> wrote:
> I don't agree...so how to do it when you want to find out ratio between
> years, male X female, ...? So there is no solution? Just to keep N,mean,
> SE, degrees of freedom, N_strata, N_psu, .... and calculate it manually?
> I think it is not appropriate solution, at least to have it as an
> option. I think there is missing a lot with complex survey in Stata and
> complex survey is needed for almost every survey research, even freeware
> R-project is better equipped :(
> so have a hope Stata will get it soon....immediately we are buying it
> again :)
> And it should. Data (x,y) (1,2) (2,4) (3,6) (100,.) will give an
> entirely different view of the data if the unpaired observation is
> included in a mean or ratio calculation. Or consider data with x
> missing in half the pairs and y missing in the other half; the ratio
> of means would be meaningless.
> The formulas for standard errors for ratios assume that the data are
> paired. Formally, they are based on the residual MSE of a regression
> of y on x through the origin. You cannot do that regression with
> unpaired data.
> If your concern is missing data, the solution is to impute the missing
> values before analysis.
> Steve
Steven Samuels
18 Cantine's Island
Saugerties NY 12477
Voice: 845-246-0774
Fax: 206-202-4783
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-04/msg01193.html","timestamp":"2014-04-18T13:15:07Z","content_type":null,"content_length":"12004","record_id":"<urn:uuid:f83d64b9-9d17-4f95-bbca-fef1bfca37e6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brookside Village, TX Algebra Tutor
Find a Brookside Village, TX Algebra Tutor
I have taught math and science as a tutor since 1989. I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact
me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible.
35 Subjects: including algebra 2, algebra 1, chemistry, physics
...I possess a special talent for making learning fun by utilizing creative ways in which each student can relate. I also have experience tutoring for the Texas State Test (TAKS & STAAR)
resulting in 'recognized' or 'advanced' in the Reading and Science and outstanding achievements in Mathematics. ...
12 Subjects: including algebra 1, reading, English, geometry
...Try me and it may become a piece of cake for you. I have tutored lots of students in chemistry. All of my students have increased their grades at least by two letter grades, if they follow my
lesson plan.
19 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I have also taught the advanced and lower levels too. So I have taught all levels of ability. Since retiring from the classroom in 2009, I have been able to do what I enjoy the most which is
individual tutoring!
5 Subjects: including algebra 1, geometry, ASVAB, prealgebra
...I am the previous director of a Career & Technical Education program that encompassed career development pathways, internships, job readiness, industry certification acquisitions and soft-skills training. I'm a graduate of the University of Pennsylvania in Liberal Arts. Former Deputy Director for the Mayor's Office of Philadelphia.
15 Subjects: including algebra 1, reading, writing, basketball
|
{"url":"http://www.purplemath.com/Brookside_Village_TX_Algebra_tutors.php","timestamp":"2014-04-20T21:24:55Z","content_type":null,"content_length":"24390","record_id":"<urn:uuid:47c949f6-15e0-4490-91be-f58c52ca3ce7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Questions on Polygons Inscribed in Circles
Go through the previous post on this topic before trying these questions.
Question 1: Four points that form a polygon lie on the circumference of the circle. What is the area of the polygon ABCD?
Statement I: The radius of the circle is 3 cm.
Statement II: ABCD is square.
Notice that you have been given that angles B and D are right angles. Does that imply that the polygon is a square? No. You haven't been given that the polygon is a regular polygon. The diagonal AC is a diameter, since the arc ADC subtends the right angle ABC. Hence arc ADC and arc ABC are semi-circles. But the sides of the polygon (AB, BC, CD, DA) may not be equal. Look at the diagram given below:
Statement I: The radius of the circle is 3 cm.
This statement alone is not sufficient. Look at the two figures given above. The area in the two cases will be different depending on the length of the sides. Just knowing the diagonal AC is not
enough. Hence this statement alone is not sufficient.
Statement II: ABCD is square.
This tells us that the first figure is valid, i.e. the polygon is actually a square. But this statement alone doesn't give us the measure of any side or diagonal. Hence this statement alone is not sufficient.
Using both statements together, we know that ABCD is a square with a diagonal of length 6 cm. This means that the side of the square is 6/√2 cm, giving us an area of (6/√2)^2 = 18 cm^2.
Answer (C)
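A quick numerical check of that computation (illustrative Python, not part of the article):

from math import sqrt, isclose

diagonal = 6.0                        # AC = diameter = 2 * 3 cm
side = diagonal / sqrt(2)             # side of a square from its diagonal
area = side ** 2
print(area)                           # ~18.0 cm^2
print(isclose(area, diagonal**2 / 2)) # True: area of a square = d^2 / 2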
Let’s look at a more complicated question now.
Question 2: A regular polygon is inscribed in a circle. How many sides does the polygon have?
Statement I: The length of the diagonal of the polygon is equal to the length of the diameter of the circle.
Statement II: The ratio of area of the polygon to the area of the circle is less than 2:3.
In this question, we know that the polygon is a regular polygon i.e. all sides are equal in length. As the number of sides keeps increasing, the area of the circle enclosed in the regular polygon
keeps increasing till the number of sides is infinite (i.e. we get a circle) and it overlaps with the original circle. The diagram given below will make this clearer.
Let’s look at each statement:
Statement I: The length of one of the diagonals of the polygon is equal to the length of the diameter of the circle.
Do we get the number of sides of the polygon using this statement? No. The diagram below tells you why.
Regular polygons with an even number of sides are symmetrical about a diagonal through the centre, and hence that diagonal is a diameter. So the polygon could have 4, 6, 8, 10, etc. sides. Hence this statement alone is not sufficient.
Statement II: The ratio of area of the polygon to the area of the circle is less than 2:3.
Let’s find the fraction of area enclosed by a square.
In the previous post we saw that
Side of the square = √2 * Radius of the circle
Area of the square = Side^2 = 2*Radius^2
Area of the circle = π*Radius^2 = 3.14*Radius^2
Ratio of area of the square to area of the circle is 2/3.14 i.e. slightly less than 2/3.
So a square encloses less than 2/3 of the area of the circle. This means a triangle will enclose even less area. Hence, we see that already the number of sides of the regular polygon could be 3 or 4.
Hence this statement alone is not sufficient.
Using both statements together, we see that the polygon has 4, 6, 8, etc. sides but the area enclosed should be less than 2/3 of the area of the circle. Hence the regular polygon must have 4 sides. Since the area of a square is a little less than 2/3 of the area of the circle, we can say with a fair amount of certainty that the area of a regular hexagon will be more than 2/3 of the area of the circle.
But just to be sure, you can do this:
Side of the regular hexagon = Radius of the circle
Area of a regular hexagon = 6*Area of each of the 6 equilateral triangles = 6*(√3/4)*Radius^2 = 2.6*Radius^2
2.6/3.14 is certainly more than 2/3 so the regular polygon cannot be a hexagon. The regular polygon must have 4 sides only.
Answer (C)
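To see how the enclosed fraction grows with the number of sides, here is a quick check (illustrative Python; it uses the standard formula (n/2)·r²·sin(2π/n) for the area of a regular n-gon inscribed in a circle of radius r, which is not derived in the article):

from math import pi, sin

def inscribed_fraction(n, r=1.0):
    """Fraction of a circle's area covered by an inscribed regular n-gon."""
    polygon_area = 0.5 * n * r**2 * sin(2 * pi / n)
    return polygon_area / (pi * r**2)

for n in (3, 4, 6, 8):
    print(n, f"{inscribed_fraction(n):.3f}")
# 3 0.413, 4 0.637, 6 0.827, 8 0.900 -- only n = 3 and n = 4 stay below 2/3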
Karishma, a Computer Engineer with a keen interest in alternative Mathematical approaches, has mentored students in the continents of Asia, Europe and North America. She teaches the GMAT for Veritas
Prep and regularly participates in content development projects such as this blog!
|
{"url":"http://www.veritasprep.com/blog/2013/07/quarter-wit-quarter-wisdom-questions-on-polygons-inscribed-in-circles/","timestamp":"2014-04-16T07:45:32Z","content_type":null,"content_length":"50214","record_id":"<urn:uuid:46cc4476-57ac-49ab-bb68-9ea9f7f0cfb4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Add and Subtract Whole Numbers
Addition and Subtraction of Whole Numbers
The use of a base-ten positional number system for writing numbers led to the development of powerful algorithms for arithmetic operations. An algorithm is an organized procedure for performing a
given type of calculation. In the addition and subtraction algorithms, digits are aligned according to place value, and the computation is completed from right to left.
Example: Add 3,378 + 2,983
Add the ones: 8 + 3 = 11 ones = 1 ten + 1 one. Write the 1 one and carry the 1 ten.
Add the tens: 7 + 8 + 1 = 16 tens = 1 hundred + 6 tens. Write the 6 tens and carry the 1 hundred.
Add the hundreds: 3 + 9 + 1 = 13 hundreds = 1 thousand + 3 hundreds. Write the 3 hundreds and carry the 1 thousand.
Add the thousands: 3 + 2 + 1 = 6 thousands.
The sum is 6,361.
Example: Subtract 3,150 − 1,732
Regroup to subtract the ones: 5 tens 0 ones = 4 tens 10 ones.
Subtract the ones: 10 − 2 = 8.
Subtract the tens: 4 − 3 = 1.
Regroup to subtract the hundreds: 3 thousands 1 hundred = 2 thousands 11 hundreds.
Subtract the hundreds: 11 − 7 = 4.
Subtract the thousands: 2 − 1 = 1.
The difference is 1,418.
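The same right-to-left, place-value procedure translates directly into code. Here is an illustrative Python version of the addition algorithm (not part of the lesson; the subtraction algorithm is analogous, with borrowing in place of carrying):

def add_by_place_value(a, b):
    """Right-to-left column addition with carrying, mirroring the worked example."""
    a_digits = [int(d) for d in str(a)][::-1]
    b_digits = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        column = carry
        if i < len(a_digits):
            column += a_digits[i]
        if i < len(b_digits):
            column += b_digits[i]
        result.append(column % 10)      # digit written in this place
        carry = column // 10            # e.g. 11 ones = 1 ten + 1 one
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(add_by_place_value(3378, 2983))   # 6361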
Expressions and Equations
An arithmetic expression consists of numbers and operations using parentheses, exponents, multiplication, division, addition, and subtraction. An algebraic expression is like an arithmetic
expression, but contains at least one variable. A variable is a letter that represents a number.
When evaluating the expressions in this chapter, students should evaluate first within the parentheses; second, exponents; third, multiplication and division from left to right; and fourth, addition
and subtraction from left to right. The equality of two expressions gives an equation. To solve an equation means to find the value of the variable that will make the equation true. These equations
are simple enough to be solved by inspection or by using a guess-and-check strategy. Explain that not all equations are as simple as the equations in this chapter, so it is necessary to learn to use
inverse operations to solve simple equations before learning to solve more difficult equations.
Use inverse operations:
p + 4 = 11 → p = 11 − 4 → p = 7
x − 3 = 9 → x = 9 + 3 → x = 12
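The inverse-operation idea can also be expressed mechanically (illustrative Python, not part of the lesson):

def solve_plus(a, b):
    """Solve p + a = b using the inverse operation (subtract a)."""
    return b - a

def solve_minus(a, b):
    """Solve x - a = b using the inverse operation (add a)."""
    return b + a

print(solve_plus(4, 11))    # p = 7
print(solve_minus(3, 9))    # x = 12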
In future chapters, students will learn more about evaluating expressions and solving equations.
Teaching Model 2.1: Expressions and Addition Properties
|
{"url":"http://www.eduplace.com/math/mw/models/overview/5_2_1.html","timestamp":"2014-04-21T02:08:15Z","content_type":null,"content_length":"7068","record_id":"<urn:uuid:037a17cb-28b0-4cdd-97a7-9965ea15bec1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ten challenges redux: recent progress in propositional reasoning and search
- In Proceedings of the National Conference on Artificial Intelligence (AAAI , 2005
"... Over the past decade general satisfiability testing algorithms have proven to be surprisingly effective at solving a wide variety of constraint satisfaction problem, such as planning and
scheduling (Kautz and Selman 2003). Solving such NPcomplete tasks by “compilation to SAT ” has turned out to be a ..."
Cited by 28 (0 self)
Over the past decade general satisfiability testing algorithms have proven to be surprisingly effective at solving a wide variety of constraint satisfaction problem, such as planning and scheduling
(Kautz and Selman 2003). Solving such NPcomplete tasks by “compilation to SAT ” has turned out to be an approach that is of both practical and theoretical interest. Recently, (Sang et al. 2004) have
shown that state of the art SAT algorithms can be efficiently extended to the harder task of counting the number of models (satisfying assignments) of a formula, by employing a technique called
component caching. This paper begins to investigate the question of whether “compilation to model-counting ” could be a practical technique for solving real-world #P-complete problems, in particular
Bayesian inference. We describe an efficient translation from Bayesian networks to weighted model counting, extend the best model-counting algorithms to weighted model counting, develop an efficient
method for computing all marginals in a single counting pass, and evaluate the approach on computationally challenging reasoning problems.
- In proceedings of AAAI , 2004
"... Algorithms based on following local gradient information are surprisingly effective for certain classes of constraint satisfaction problems. Unfortunately, previous local search algorithms are
notoriously incomplete: They are not guaranteed to find a feasible solution if one exists and they cannot b ..."
Cited by 24 (0 self)
Algorithms based on following local gradient information are surprisingly effective for certain classes of constraint satisfaction problems. Unfortunately, previous local search algorithms are
notoriously incomplete: They are not guaranteed to find a feasible solution if one exists and they cannot be used to determine unsatisfiability. We present an algorithmic framework for complete local
search and discuss in detail an instantiation for the propositional satisfiability problem (SAT). The fundamental idea is to use constraint learning in combination with a novel objective function
that converges during search to a surface without local minima. Although the algorithm has worst-case exponential space complexity, we present empirical results on challenging SAT competition
benchmarks that suggest that our implementation can perform as well as state-of-the-art solvers based on more mature techniques. Our framework suggests a range of possible algorithms lying between
tree-based search and local search.
"... We focus on the random generation of SAT instances that have computational properties that are similar to real-world instances. It is known that industrial instances, even with a great number of
variables, can be solved by a clever solver in a reasonable amount of time. This is not possible, in gene ..."
Cited by 3 (2 self)
We focus on the random generation of SAT instances that have computational properties that are similar to real-world instances. It is known that industrial instances, even with a great number of
variables, can be solved by a clever solver in a reasonable amount of time. This is not possible, in general, with classical randomly generated instances. We provide different generation models of
SAT instances, extending the uniform and regular 3-CNF models. They are based on the use of non-uniform probability distributions to select variables. Our last model also uses a mechanism to produce
clauses of different lengths as in industrial instances. We show the existence of the phase transition phenomena for our models and we study the hardness of the generated instances as a function of
the parameters of the probability distributions. We prove that, with these parameters we can adjust the difficulty of the problems in the phase transition point. We measure hardness in terms of the
performance of different solvers. We show how these models will allow us to generate random instances similar to industrial instances, of interest for testing purposes. 1
- In Proceedings, 36th International Symposium on Multiple-Valued Logics (ISMVL , 2006
"... We define the MaxSAT problem for many-valued CNF formulas, called many-valued MaxSAT, and establish its complexity class. We then describe a basic branch and bound algorithm for solving
many-valued MaxSAT, and an exact many-valued MaxSAT solver we have implemented. Finally, we report the experimenta ..."
Cited by 2 (1 self)
We define the MaxSAT problem for many-valued CNF formulas, called many-valued MaxSAT, and establish its complexity class. We then describe a basic branch and bound algorithm for solving many-valued
MaxSAT, and an exact many-valued MaxSAT solver we have implemented. Finally, we report the experimental investigation we have performed to compare our solver with Boolean MaxSAT solvers on graph
coloring instances. The results obtained indicate that many-valued CNF formulas can become a competitive formalism for representing and solving combinatorial optimization problems. 1
"... Abstract. During this decade, it has been observed that many realworld graphs, like the web and some social and metabolic networks, have a scale-free structure. These graphs are characterized by
a big variability in the arity of nodes, that seems to follow a power-law distribution. This came as a bi ..."
Cited by 2 (0 self)
Abstract. During this decade, it has been observed that many realworld graphs, like the web and some social and metabolic networks, have a scale-free structure. These graphs are characterized by a
big variability in the arity of nodes, that seems to follow a power-law distribution. This came as a big surprise to researchers steeped in the tradition of classical random networks. SAT instances
can also be seen as (bi-partite) graphs. In this paper we study many families of industrial SAT instances used in SAT competitions, and show that most of them also present this scale-free structure.
On the contrary, random SAT instances, viewed as graphs, are closer to the classical random graph model, where arity of nodes follows a Poisson distribution with small variability. This would explain
their distinct nature. We also analyze what happens when we instantiate a fraction of the variables, at random or using some heuristics, and how the scale-free structure is modified by these
instantiations. Finally, we study how the structure is modified during the execution of a SAT solver, concluding that the scale-free structure is preserved. 1
- Int. J. Embedded Systems , 2005
"... ..."
, 2005
"... A CSP lookahead search algorithm, like FC or MAC, explores a search tree during its run. Every node of the search tree can be associated with a CSP created by the refined domains of unassigned
variables. If the algorithm detects that the CSP associated with a node is insoluble, the node becomes a ..."
Cited by 2 (0 self)
A CSP lookahead search algorithm, like FC or MAC, explores a search tree during its run. Every node of the search tree can be associated with a CSP created by the refined domains of unassigned
variables. If the algorithm detects that the CSP associated with a node is insoluble, the node becomes a dead-end. A strategy of pruning ”by analogy ” states that the current node of the search tree
can be discarded if the CSP associated with it is ”more constrained ” than a CSP associated with some dead-end node. In this paper we present a method of pruning based on the above strategy. The
information about the CSPs associated with dead-end nodes is kept in the structures called responsibility set and kernel. The method that uses these structures for pruning is termed Responsibility
set, Kernel, Propagation- RKP. The resulting combined algorithms are FC-RKP and MAC-RKP. Under certain restrictions, FC-RKP is shown theoretically to simulate FC-CBJ. Experimental evaluation is
presented demonstrating that MAC-RKP outperforms MAC-CBJ on random CSPs and on random graph coloring problems. .
"... Quantum algorithms and circuits can, in principle, outperform the best non-quantum (classical) techniques for some hard computational problems. However, this does not necessarily lead to useful
applications. To gauge the practical significance of a quantum algorithm, one must weigh it against the be ..."
Quantum algorithms and circuits can, in principle, outperform the best non-quantum (classical) techniques for some hard computational problems. However, this does not necessarily lead to useful
applications. To gauge the practical significance of a quantum algorithm, one must weigh it against the best conventional techniques applied to useful instances of the same problem. Grover's quantum
search algorithm is one of the most widely studied. We identify requirements for Grover's algorithm to be useful in practice: (1) a search application S where classical methods do not provide
sufficient scalability; (2) an instantiation of Grover's algorithm Q(S) for S that has a smaller asymptotic worst-case runtime than any classical algorithm C(S) for S; (3) Q(S) with smaller actual
runtime for practical instances of S than that of any C(S). We show that several commonly-suggested applications fail to satisfy these requirements, and outline directions for future work on quantum
"... • Terrance Swift has financial interest in mdlogix, Inc which is mentioned in this talk • Terrance Swift has financial interest in XSB, Inc which is not mentioned in this talk – The programming
system XSB Prolog is opensource and freely available. It is not controlled in any way by XSB, Inc, althoug ..."
• Terrance Swift has financial interest in mdlogix, Inc which is mentioned in this talk • Terrance Swift has financial interest in XSB, Inc which is not mentioned in this talk – The programming
system XSB Prolog is opensource and freely available. It is not controlled in any way by XSB, Inc, although they have generously contributed to its development 2 Learning Objectives • to survey how
formal logic and logic programming have been used for knowledge representation and reasoning for a variety of medical applications • to sketch the current state of logic programming systems, focusing
on open-source systems • to indicate how some current research directions in logic programming may be relevant to areas such as adaptive workflow management for healthcare or for dynamic decision
support.... a little learning is a dangerous thing, and so is writing your learning objectives before your talk:-) 3
, 2013
"... Abstract. In this paper, a novel hybrid and complete approach for propositional satisfiability, called SATHYS (Sat Hybrid Solver), is introduced. It efficiently combines the strength of both
local search and CDCL based SAT solvers. Considering the consistent partial assignment under construction by ..."
Abstract. In this paper, a novel hybrid and complete approach for propositional satisfiability, called SATHYS (Sat Hybrid Solver), is introduced. It efficiently combines the strength of both local
search and CDCL based SAT solvers. Considering the consistent partial assignment under construction by the CDCL SAT solver, local search is used to extend it to a model of the Boolean formula, while
the CDCL component is used by the local search one as a strategy to escape from a local minimum. Additionally, both solvers heavily cooperate thanks to relevant information gathered during search.
Experimentations on SAT instances taken from the last competitions demonstrate the efficiency and the robustness of our hybrid solver with respect to the state-of-the-art CDCL based, local search and
hybrid SAT solvers. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=161799","timestamp":"2014-04-21T02:13:24Z","content_type":null,"content_length":"38342","record_id":"<urn:uuid:2fe4b8b9-2bd6-4363-b440-2af0f7d85db2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving for x when x's cancel out?
February 11th 2008, 11:03 AM #1
Feb 2008
Solving for x when x's cancel out?
(I also posted this in the Algebra forum, where it got a lot of views but no replies, so I'm trying here instead...)
I'm sure the answer to this is incredibly obvious, but something is slipping my mind:
│x-3│= x + 2
If you move the x out of the absolute value bracket, the x's cancel... so how do you solve it?
February 11th 2008, 01:21 PM #2
There are 3 possible cases.
Case 1:
$x-3 > 0$
$x > 3$
If it's positive, we can say that $x-3 = x+2$ which doesn't have a root.
Case 2:
It's negative, we can rewrite it as $x-3 = -(x+2)$
$2x = 1$
$\boxed{x = \frac{1}{2}}$ --! We found a root
Case 3:
$x-3 = 0$
For $x=3$, $|x-3| \neq x+2$ so we don't have roots in this region.
Conclusion: The equation has only the root $x=\frac{1}{2}$.
We can rewrite the equation as $|x-3|-x-2=0$ and plot this function.
See where this graph crosses the x axis.
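A quick numerical check of this case analysis (illustrative Python, not part of the original thread):

def g(x):
    """Left side minus right side of |x - 3| = x + 2."""
    return abs(x - 3) - (x + 2)

# scan a grid for roots of g between -5 and 5
roots = [x / 100 for x in range(-500, 501) if abs(g(x / 100)) < 1e-9]
print(roots)   # [0.5]  -> the only solution is x = 1/2
print(g(3))    # -5     -> x = 3 is not a solution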
|
{"url":"http://mathhelpforum.com/pre-calculus/27984-solving-x-when-x-s-cancel-out.html","timestamp":"2014-04-18T07:37:57Z","content_type":null,"content_length":"34935","record_id":"<urn:uuid:520bc4cb-7458-4bb9-9e6f-275e0494c50c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Divide 1 by a polynomial
January 20th 2009, 08:29 PM
Divide 1 by a polynomial
I know how to use synthetic division to divide a polynomial by one of lower degree, but how does one divide a poly. by one of higher degree?
In particular, a text I am studying says
1 divided by x-1 =1 + x + x^2 + x^3 + ...
iff -1 < x < 1,
but I don't know how to find this out myself.
How do you divide 1 by x-1 ?
January 20th 2009, 08:42 PM
I know how to use synthetic division to divide a polynomial by one of lower degree, but how does one divide a poly. by one of higher degree?
In particular, a text I am studying says
1 divided by x-1 =1 + x + x^2 + x^3 + ...
iff -1 < x < 1,
but I don't know how to find this out myself.
How do you divide 1 by x-1 ?
This is done using the geometric series formula:
$\displaystyle \sum_{k=0}^{\infty} ar^k = \frac{a}{1-r}$, valid for $|r| < 1$.
In your case r = x, a = 1:
$\displaystyle \frac{1}{1-x} = \sum_{k=0}^{\infty} x^k = x^0 + x^1 + x^2 + x^3 + x^4 + \ldots$
(Note that this is $1/(1-x)$; for $1/(x-1)$ you would get the negative of this series, so the series quoted in your text corresponds to $1/(1-x)$.)
As you can see, the result goes on to infinity, so synthetic division or ordinary long division will never terminate; however, dividing in ascending powers of x does generate the series one term at a time.
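That term-by-term division is easy to mimic in code. A small Python sketch (illustrative, not from the thread) that computes the power-series coefficients of 1/denominator:

def power_series_inverse(denom, n_terms):
    """Coefficients of 1/denom as a formal power series.
    denom is given as [c0, c1, ...] with c0 != 0 (here 1 - x -> [1, -1])."""
    coeffs = [1 / denom[0]]
    for n in range(1, n_terms):
        s = sum(denom[k] * coeffs[n - k] for k in range(1, min(n, len(denom) - 1) + 1))
        coeffs.append(-s / denom[0])
    return coeffs

print(power_series_inverse([1, -1], 6))   # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0] -> 1 + x + x^2 + ...

For denom = [1, -1] (that is, 1 − x) every coefficient is 1, reproducing 1 + x + x² + …, which is a numerical identity only for |x| < 1.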
|
{"url":"http://mathhelpforum.com/algebra/69158-divide-1-polynomial-print.html","timestamp":"2014-04-20T01:48:47Z","content_type":null,"content_length":"5267","record_id":"<urn:uuid:5d355ca6-10ec-4664-ab35-2aa2388cb0f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 2 Tutors
Glendale, CA 91208
Alleviate Math Anxiety and Computer Programming Tutor
...Whether your concern is solving quadratic equations, graphing, solving linear systems, matrices, function evaluation, etc., I can facilitate the Algebra 2 learning process. I have provided individual and class assistance in these areas and I am confident that I...
Offering 9 subjects including algebra 2
|
{"url":"http://www.wyzant.com/Los_Angeles_Algebra_2_tutors.aspx","timestamp":"2014-04-19T05:59:42Z","content_type":null,"content_length":"65020","record_id":"<urn:uuid:89ed8885-25cf-4e32-b322-2a43309583d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definition of Degrees & Radians | Chegg.com
Degrees and radians are the two common units for measuring angles. Degrees are defined by dividing a circle into 360 equal parts; radians work the same way, except that a full circle contains 2π radians. When the arc length equals the radius, the central angle has a measure of one radian.
The arc swept out by a central angle in one full revolution is the circumference, 2πr. This means a full circle contains 2π radius-lengths of arc, i.e. 2π radians. Thus 2π radians equal 360 degrees, meaning that one radian is equal to 180/π degrees.
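The conversion itself is a one-liner in code; a tiny Python illustration (the math module's radians and degrees helpers do the same thing):

from math import pi, radians, degrees

def deg_to_rad(deg):
    return deg * pi / 180              # 360 degrees -> 2*pi radians

def rad_to_deg(rad):
    return rad * 180 / pi              # 1 radian -> about 57.3 degrees

print(deg_to_rad(180), radians(180))        # both print pi (~3.14159)
print(rad_to_deg(pi / 2), degrees(pi / 2))  # both print 90.0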
|
{"url":"http://www.chegg.com/homework-help/definitions/degrees-radians-65?cp=CHEGGFREESHIP","timestamp":"2014-04-18T13:17:10Z","content_type":null,"content_length":"17774","record_id":"<urn:uuid:ea7ead70-bb95-4397-aec4-7219c4cdf649>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GMAT Overlapping Sets
AEK Post subject: GMAT Overlapping Sets
Posted: Sun Sep 13, 2009 6:59 pm
Joined: Sun Apr 19, 2009 6:56 pm
Posts: 32
In a village of 100 households, 75 have at least one DVD player, 80 have at least one cell phone, and 55 have at least one MP3 player. Every household has at least one of these three devices. If x and y are respectively the greatest and lowest possible number of households that have all three of these devices, x – y is:
Could you please provide a more detailed explanation of how the minimum value y is 10?
GMATguru Post subject: Re: Test 5, Question 23
Posted: Sun Sep 13, 2009 7:07 pm
Joined: Mon Apr 06, 2009 5:44 pm
Posts: 81
This is a set question.
75 have one or more DVD players
80 have one or more cell phones
55 have one or more MP3 players
Total households is 100.
The limiting factor is clearly the MP3 player, so the greatest number of homes that have all three devices is 55.
If we add up 80 and 75 we get 155. There are only 100 households, so minimally there is an overlap of 55: 155 – 100 = 55 households must have both a DVD player and a cell phone.
We also know that 100 – 55 = 45 households do not have an MP3 player, and it is possible that ALL of these households are among the 55 households that have both a DVD player and a cell phone. This means that, at minimum, 10 households (55 – 45) have an MP3 player, a DVD player and a cell phone.
Gennadiy Post subject: Re: Test 5, Question 23
Posted: Tue Aug 03, 2010 2:56 am
Joined: Fri Apr 09, 2010 2:11 pm
Posts: 453
Let us use a Venn diagram to analyze this question. In the most general way it will look:
Let us consider only DVD players and cell phones.
We denote the number of dorm rooms that have both a DVD player and a cell phone by z. Then the number of dorm rooms that have a DVD player but don't have a cell phone is 75 – z. The number of dorm rooms that have a cell phone but don't have a DVD player is 80 – z.
So the number of dorm rooms that have at least a DVD player or a cell phone is (75 – z) + (80 – z) + z = 155 – z.
We know that this number doesn't exceed 100, so 155 – z ≤ 100, or 55 ≤ z.
Now, let us consider the number of dorm rooms that have both a DVD player and a cell phone, together with the number of dorm rooms that have an MP3 player.
Let us denote by y the number of dorm rooms that have all three: a DVD player, a cell phone and an MP3 player.
Then the number of dorm rooms that either have both a DVD player and a cell phone or have an MP3 player is not less than
(55 – y) + (55 – y) + y = 110 – y.
We know that this cannot exceed 100, so 110 – y ≤ 100, or 10 ≤ y.
So y is not less than 10. And it can be 10 if the Venn diagram is as follows:
Therefore y = 10 is the lowest possible number of dorms that have all three of these devices.
questioner Post subject: Re: Test 5, Question 23
Posted: Mon Jan 10, 2011 1:34 pm
Joined: Tue Apr 13, 2010 8:48 am
Posts: 477
The answer provided seems too complicated.
The question is asking for the difference between the greatest and lowest possible number of dorms that have all three devices.
The maximum cannot be more than 100 given that there are only 100 units, and the minimum cannot be lower than 55 since that's the lowest number of units that have at least one of the three devices. Therefore the greatest and lowest possible numbers of units that have all three of these devices are 100 and 55, respectively. Is my logic correct?
I found it difficult trying to follow the explanation provided by 800score.
Gennadiy Post subject: Re: Test 5, Question 23
Posted: Mon Jan 10, 2011 2:36 pm
Joined: Fri Apr 09, 2010 2:11 pm
Posts: 453
The maximum cannot be more than 100 given that there are only 100 units
This statement is correct: the maximum value can be no more than 100, so in fact x ≤ 100. But it does NOT mean that the maximum possible x can be 100.
Many inequalities, such as x ≤ 101, or x ≤ 200, etc., are true. But can x equal 100? No, because in that case each room would have every device installed. But the question statement tells us there are only 75 units with a DVD player, 80 units with a cell phone and 55 with an MP3 player.
So the greatest possible value for x is incorrect.
the minimum cannot be lower than 55 since that's the lowest number of units that have at least one of the three devices.
The least possible value for y is incorrect. The following Venn diagram shows that y can be 10:
How to obtain the proper minimum and maximum values?
Obtaining the maximum possible value is easy.
Since only 55 units are equipped with an MP3 player, the greatest possible number of units that have all three devices CANNOT exceed 55 (x ≤ 55). And it can equal 55 if all the rooms that are equipped with an MP3 player are also equipped with the other devices.
The least possible value can be calculated using Venn diagrams as shown above. Or we can reason a little bit differently.
Suppose we decide which units have particular devices. In the beginning we have just 100 empty units:
Then we place 80 cell phones and 75 DVD players so that there are as few units as possible that have both:
Then we add MP3 players. We place as many MP3 players as possible in the units that have only one device, so that the rooms that already have two devices get as few MP3 players as possible:
10 is the least possible value.
questioner Post subject: Re: Test 5, Question 23
Posted: Wed Jan 12, 2011 2:41 pm
Joined: Tue Apr 13, 2010 8:48 am
Posts: 477
Thanks for the clarity Gennadiy.
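The bounds derived in the thread (greatest x = 55, least y = 10) can also be confirmed by a small brute-force search over household configurations. This is an illustrative Python sketch, not part of the original discussion: dc, dm and cm count households owning exactly the two named devices, and the remaining region sizes are forced by the totals.

def feasible(t, d=75, c=80, m=55, total=100):
    """Can exactly t households own all three devices, given the totals?"""
    for dc in range(total + 1):              # own DVD & cell only
        for dm in range(total + 1):          # own DVD & MP3 only
            cm = (d + c + m - total) - 2 * t - dc - dm   # cell & MP3 only, forced by the counts
            if cm < 0:
                continue
            only_d = d - t - dc - dm
            only_c = c - t - dc - cm
            only_m = m - t - dm - cm
            if min(only_d, only_c, only_m) >= 0:
                return True
    return False

possible = [t for t in range(0, 56) if feasible(t)]
print(min(possible), max(possible))          # 10 55

The search reproduces y = 10 and x = 55, so x – y = 45.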
• GMAT Prep (USA)
• GMAT Prep (International)
• Favorite GMAT Articles
□ 800score GMAT Course Reviews
□ GMAT Course Comparision
□ GMAT Essay Tips & Grading from 800score
• Free GMAT Prep
□ Free GMAT CAT test
□ Free GMAT Course
• MBA Admissions
□ MBA Admissions Info
□ Admissions Consultants
□ Admission Essay Tips
• Forums
□ Forums Home
□ GMAT Strategies and Content
□ Quantitative Reasoning
□ Verbal Reasoning
□ GMAT Strategies
□ GMAT Essays
□ MBA Admissions Discussions
□ MBA Admissions Help
□ MBA Admissions Essays
□ Commercial GMAT Preparation
□ 800score
□ Manhattan GMAT
□ Veritas
• Contributors
□ Manhattan GMAT
□ Ron Purewal
□ Stacey Koprince
□ VeritasPrep
□ Jim Stekelberg
□ 800score
□ Steve Alexandris
□ GMAT MBA Prep
□ JP Morgan
• Board contributors include instructors with "800" GMAT scores.
• 95% of posts have replies within 24 hours.
• Join for discounts with 800score, VeritasPrep and ManhattanGMAT
FAQ - Register - Search - Login
View unanswered posts | View active topics
All times are UTC - 7 hours
GMAT Overlapping Sets
Page 1 of 1 [ 6 posts ]
Print view Previous topic | Next topic
Author Message
AEK Post subject: GMAT Overlapping Sets
Posted: Sun Sep 13, 2009 6:59 pm
In a village of 100 households, 75 have at least one DVD player, 80 have at least one cell phone, and 55 have at least one MP3 player. Every village has at least one of
these three devices. If x and y are respectively the greatest and lowest possible number of households that have all three of these devices, x – y is :
Joined: Sun Apr 19, Kindly can you provide much more detail explanation how the minimum value y is 10.
2009 6:56 pm
Posts: 32
GMATguru Post subject: Re: Test 5, Question 23
Posted: Sun Sep 13, 2009 7:07 pm
This is a set question.
75 have one or more DVD players
Joined: Mon Apr 06, 80 have one or more Cell phone
2009 5:44 pm 55 have one or more MP3 player
Posts: 81
Total households is 100
The limiting factor is clearly the MP3 player, so the greatest number of homes that have all three devices is going to be 55.
If we add up 80 and 75 we get 155. There are only 100 households, so minimally, there is an overlap of 55. 155 – 100 = 55 households must have both a DVD player and a cell
We also know that 100 – 55 = 45 households do not have an MP3 player, and it is possible that ALL of these households is among the 55 households that have both a DVD
player and a cell phone. This means that, at minimum, 10 households (55 – 45) have an MP3 player, DVD player and cell phone.
Gennadiy Post subject: Re: Test 5, Question 23
Posted: Tue Aug 03, 2010 2:56 am
Let us use Venn diagram to analyze this question. In most general way it will look:
Joined: Fri Apr 09,
2010 2:11 pm Let us consider only DVD players and cell phones.
Posts: 453
We denote number of dorm rooms that have both a DVD player and a cell phone by z. Then the number of dorm rooms that have a DVD player but don't have a cell phone is 75 –
z. The number of dorm rooms that have a cell phone but don't have a DVD player is 80 – z.
So the number of dorm rooms that have at least DVD player or a cell phone is (75 – z) + (80 – z) + z = 155 – z
We know that this number doesn't exceed 100, so 155 – z ≤ 100 or 55 ≤ z.
Now, let us consider the number of dorm rooms that have both: a DVD player and cell phone, together with the number of dorm rooms that have an MP3 player.
Let us denote the number of dorm rooms that have all: a DVD player, a cell phone and an MP3 player by y.
Then the number of dorm rooms that have either both: a DVD player and a cell phone or have an MP3 player is not less than
(55 – y) + (55 – y) + y = 110 – y
We know that this can not exceed 100, so 110 – y ≤ 100
or 10 ≤ y.
So y is not less than 10. And it can be 10 if Venn diagram is following:
Therefore y = 10 is the lowest possible number of dorms that have all three of these devices.
questioner Post subject: Re: Test 5, Question 23
Posted: Mon Jan 10, 2011 1:34 pm
The answer provided seems too complicated.
The question is asking for the difference between the greatest and lowest possible number of dorms that have all three devices.
Joined: Tue Apr 13,
2010 8:48 am The maximum cannot be no more than 100 given that there are only 100 units and the minimum cannot be lower than 55 since that's the lowest number of units that have at
Posts: 477 least on of the three devices. Therefore the greatest and lowest possible number of units that have all three of these devices are 100 and 55, respectively. Is my logic
I found it difficult trying to follow the explanation provided by 800score
Gennadiy Post subject: Re: Test 5, Question 23
Posted: Mon Jan 10, 2011 2:36 pm
The maximum cannot be no more than 100 given that there are only 100 units
Joined: Fri Apr 09, This statement is correct, the maximum value can be no more than 100, so in fact x ≤ 100. But it does NOT mean that the maximum possible x can be 100.
2010 2:11 pm Many inequalities, such as x ≤ 101, or x ≤ 200, etc. are true. But can x equal 100? No, because in that case each room would have every device installed. But the question
Posts: 453 statement tells there are only 75 units with a DVD player, 80 units with a cell phone and 55 with an MP3 player.
So the greatest possible value for x is incorrect.
the minimum cannot be lower than 55 since that's the lowest number of units that have at least on of the three devices.
The least possible value for y is incorrect. The following Venn diagram shows that y can be 10:
How to obtain the proper minimum and maximum values?
Obtaining the maximum possible value is easy.
Since only 55 units are equipped with an MP3 player, the greatest possible value of units that have all three devices CANNOT exceed 55 (x ≤ 55). And it can equal 55 if all
the rooms that are equipped with an MP3 player are also equipped with other devices.
The least possible value can be calculated using Venn diagrams as shown above. Or we can reason a little bit differently.
Suppose, we decide what units have particular devices. In the beginning we have just 100 empty units:
Then we place 80 cell phones and 75 DVD players so that there are as little units that have both as possible:
Then we add MP3 players. We place as many MP3 players as possible in the units that have only 1 device. So that the rooms that already have two devices will get as little
MP3 players as possible:
10 is the least possible value.
questioner Post subject: Re: Test 5, Question 23
Posted: Wed Jan 12, 2011 2:41 pm
Thanks for the clarity Gennadiy.
Joined: Tue Apr 13,
2010 8:48 am
Posts: 477
Page 1 of 1 [ 6 posts ]
All times are UTC - 7 hours
Who is online
Users browsing this forum: No registered users and 1 guest
You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot post attachments in this forum
• GMAT Prep (USA)
• GMAT Prep (International)
• Favorite GMAT Articles
□ 800score GMAT Course Reviews
□ GMAT Course Comparision
□ GMAT Essay Tips & Grading from 800score
• Free GMAT Prep
□ Free GMAT CAT test
□ Free GMAT Course
• MBA Admissions
□ MBA Admissions Info
□ Admissions Consultants
□ Admission Essay Tips
• Forums
□ Forums Home
□ GMAT Strategies and Content
□ Quantitative Reasoning
□ Verbal Reasoning
□ GMAT Strategies
□ GMAT Essays
□ MBA Admissions Discussions
□ MBA Admissions Help
□ MBA Admissions Essays
□ Commercial GMAT Preparation
□ 800score
□ Manhattan GMAT
□ Veritas
• Contributors
□ Manhattan GMAT
□ Ron Purewal
□ Stacey Koprince
□ VeritasPrep
□ Jim Stekelberg
□ 800score
□ Steve Alexandris
□ GMAT MBA Prep
□ JP Morgan
Author Message
AEK Post subject: GMAT Overlapping Sets
Posted: Sun Sep 13, 2009 6:59 pm
In a village of 100 households, 75 have at least one DVD player, 80 have at least one cell phone, and 55 have at least one MP3 player. Every village has at least one of these
three devices. If x and y are respectively the greatest and lowest possible number of households that have all three of these devices, x – y is :
Joined: Sun Apr 19, Kindly can you provide much more detail explanation how the minimum value y is 10.
2009 6:56 pm
Posts: 32
Post subject: GMAT Overlapping Sets
Posted: Sun Sep 13, 2009 6:59 pm
In a village of 100 households, 75 have at least one DVD player, 80 have at least one cell phone, and 55 have at least one MP3 player. Every village has at least one of these three devices. If x and
y are respectively the greatest and lowest possible number of households that have all three of these devices, x – y is :
Kindly can you provide much more detail explanation how the minimum value y is 10.
GMATguru Post subject: Re: Test 5, Question 23
Posted: Sun Sep 13, 2009 7:07 pm
This is a set question.
75 have one or more DVD players
Joined: Mon Apr 06, 80 have one or more Cell phone
2009 5:44 pm 55 have one or more MP3 player
Posts: 81
Total households is 100
The limiting factor is clearly the MP3 player, so the greatest number of homes that have all three devices is going to be 55.
If we add up 80 and 75 we get 155. There are only 100 households, so minimally, there is an overlap of 55. 155 – 100 = 55 households must have both a DVD player and a cell
We also know that 100 – 55 = 45 households do not have an MP3 player, and it is possible that ALL of these households is among the 55 households that have both a DVD player and
a cell phone. This means that, at minimum, 10 households (55 – 45) have an MP3 player, DVD player and cell phone.
Post subject: Re: Test 5, Question 23
Posted: Sun Sep 13, 2009 7:07 pm
This is a set question.
75 have one or more DVD players
80 have one or more Cell phone
55 have one or more MP3 player
Total households is 100
The limiting factor is clearly the MP3 player, so the greatest number of homes that have all three devices is going to be 55.
If we add up 80 and 75 we get 155. There are only 100 households, so minimally, there is an overlap of 55. 155 – 100 = 55 households must have both a DVD player and a cell phone.
We also know that 100 – 55 = 45 households do not have an MP3 player, and it is possible that ALL of these households is among the 55 households that have both a DVD player and a cell phone. This
means that, at minimum, 10 households (55 – 45) have an MP3 player, DVD player and cell phone.
Gennadiy
Post subject: Re: Test 5, Question 23
Posted: Tue Aug 03, 2010 2:56 am
Let us use a Venn diagram to analyze this question.
First, consider only the DVD players and the cell phones.
Denote the number of households that have both a DVD player and a cell phone by z. Then the number of households that have a DVD player but no cell phone is 75 – z, and the number of households that have a cell phone but no DVD player is 80 – z.
So the number of households that have at least a DVD player or a cell phone is (75 – z) + (80 – z) + z = 155 – z.
We know that this number does not exceed 100, so 155 – z ≤ 100, i.e. 55 ≤ z.
Now consider the households that have both a DVD player and a cell phone, together with the households that have an MP3 player.
Denote the number of households that have all three (a DVD player, a cell phone and an MP3 player) by y.
Then the number of households that have either both a DVD player and a cell phone, or an MP3 player, is at least (55 – y) + (55 – y) + y = 110 – y.
We know that this cannot exceed 100, so 110 – y ≤ 100, i.e. 10 ≤ y.
So y is not less than 10, and y = 10 can actually occur for a suitable Venn diagram.
Therefore y = 10 is the lowest possible number of households that have all three of these devices.
questioner
Post subject: Re: Test 5, Question 23
Posted: Mon Jan 10, 2011 1:34 pm
The answer provided seems too complicated.
The question asks for the difference between the greatest and lowest possible number of households that have all three devices.
The maximum can be no more than 100, given that there are only 100 households, and the minimum cannot be lower than 55, since that is the lowest number of households that have at least one of the three devices. Therefore the greatest and lowest possible numbers of households that have all three of these devices are 100 and 55, respectively. Is my logic correct?
I found it difficult to follow the explanation provided by 800score.
Gennadiy
Post subject: Re: Test 5, Question 23
Posted: Mon Jan 10, 2011 2:36 pm
"The maximum can be no more than 100, given that there are only 100 households."
This statement is correct: the maximum value can be no more than 100, so indeed x ≤ 100. But that does NOT mean that the maximum possible x is 100. Many inequalities, such as x ≤ 101 or x ≤ 200, are also true. Can x equal 100? No, because in that case every household would have every device, while the question states that only 75 households have a DVD player, 80 have a cell phone and 55 have an MP3 player.
So 100 is not the greatest possible value of x.
"The minimum cannot be lower than 55, since that is the lowest number of households that have at least one of the three devices."
Nor is 55 the least possible value of y. A Venn diagram can be drawn in which y is 10.
How do we obtain the proper minimum and maximum values?
Obtaining the maximum possible value is easy. Since only 55 households are equipped with an MP3 player, the number of households that have all three devices CANNOT exceed 55 (x ≤ 55). And it can equal 55 if all the households that have an MP3 player also have the other two devices.
The least possible value can be calculated using Venn diagrams as shown above, or we can reason a little differently.
Suppose we decide which households get which devices. In the beginning we have 100 empty households. Then we place the 80 cell phones and the 75 DVD players so that as few households as possible have both. Then we add the MP3 players, putting as many of them as possible into households that have only one device, so that the households that already have two devices get as few MP3 players as possible.
10 is the least possible value.
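For readers who prefer an exhaustive check to the Venn-diagram reasoning, the sketch below (not from the thread) enumerates every feasible way to fill the seven Venn regions. Here t is the triple overlap and dc, dm, cm are the pairwise overlaps, each counted including the triple; these names are mine.

```python
N, DVD, CELL, MP3 = 100, 75, 80, 55

feasible = set()
for t in range(0, MP3 + 1):                      # candidate triple overlap
    for dc in range(t, min(DVD, CELL) + 1):      # |DVD ∩ Cell|
        for dm in range(t, min(DVD, MP3) + 1):   # |DVD ∩ MP3|
            # Inclusion-exclusion, using that every household owns something:
            # N = DVD + CELL + MP3 - (dc + dm + cm) + t
            cm = DVD + CELL + MP3 + t - N - dc - dm
            if not (t <= cm <= min(CELL, MP3)):
                continue
            # The three "exactly one device" regions must be non-negative.
            only_dvd = DVD - dc - dm + t
            only_cell = CELL - dc - cm + t
            only_mp3 = MP3 - dm - cm + t
            if min(only_dvd, only_cell, only_mp3) >= 0:
                feasible.add(t)

print(max(feasible), min(feasible))              # 55 10
```

Running it confirms the bounds argued above: the triple overlap can be anywhere from 10 to 55 households.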
questioner
Post subject: Re: Test 5, Question 23
Posted: Wed Jan 12, 2011 2:41 pm
Thanks for the clarity, Gennadiy.
Denote by pk the kth prime number. Show that p1p2...pn +1 cannot be the perfect square of an integer.
I am not sure I understand the question, but the product of n primes (or the first n primes) cannot be a perfect square, by the fundamental theorem of arithmetic. \[n=p_1^{\alpha _1}p_2^{\alpha _2}\cdots p_j^{\alpha _j}\implies n^2=p_1^{2\alpha _1}p_2^{2\alpha _2}\cdots p_j^{2\alpha _j}\]
You missed the plus one at the end of the product of primes
Ooooh I see, sorry. I thought that was a subscript. So this number leaves a remainder of one when divided by each smaller prime. I am sure you can use that to show it is not the square of an integer.
As @satellite73 said, any number n can be written as \[n=p_1^{\alpha _1}p_2^{\alpha _2}\cdots p_j^{\alpha _j}\] \[\therefore n^2=p_1^{2\alpha _1}p_2^{2\alpha _2}\cdots p_j^{2\alpha _j}\] and we know that \[p_1p_2p_3\cdots p_j+1\] is not divisible by any of the primes \(p_1\) to \(p_j\), therefore it cannot be the square of an integer? I /think/ this proves it?
because the square HAS to be divisible by primes less than itself and \[p_1p_2\cdots p_j+1\] is NOT divisible by any of the primes less than it.
My proof is: suppose that \[p_{1}p_{2}p_{3}p_{4}p_{5}p_{6}\cdots p_{j} + 1 = m^{2}.\] Then \[p_{1}p_{2}p_{3}p_{4}p_{5}p_{6}\cdots p_{j} = (m-1)(m+1).\] Since 2 is a prime number, m+1 or m-1 must be an even number, so m must be odd. If m is odd, then m + 1 and m - 1 are both even numbers. But there is only one even number in the set of prime numbers. Therefore p1p2...pn + 1 cannot be the perfect square of an integer.
how do you conclude the last part? But there is only one even number in the set of prime numbers <--- TRUE Therefore p1p2...pn +1 cannot be the perfect square of an integer. <--- WHY?
wait - I just got it! - your method is quite clever
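As a quick numerical sanity check of the claim for small n (not part of the original discussion), the self-contained Python sketch below builds the first n primes by trial division and confirms that their product plus one is never a perfect square:

```python
from math import isqrt

def first_primes(n):
    """Return the first n primes by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

for n in range(1, 16):
    m = 1
    for p in first_primes(n):
        m *= p
    m += 1                                    # p_1 p_2 ... p_n + 1
    assert isqrt(m) ** 2 != m, f"{m} is a perfect square"
    print(n, m)
```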
Super Version of 2-Plectic Geometry for Classical Superstrings?
Posted by John Baez
In our paper Categorified symplectic geometry and the classical string, Alex Hoffnung, Chris Rogers and I described a Lie 2-algebra of observables for the classical bosonic string. The idea was to
generalize the usual Poisson brackets coming from symplectic geometry, which make the observables for a classical point particle into a Lie algebra. The key was to replace symplectic geometry by the
next thing up the dimensional ladder: 2-plectic geometry.
Now I have a slight hankering to do the same thing for the classical superstring. Ideally this would be a formal exercise in ‘super-thinking’ — replacing everything in sight by its ‘super’ (meaning $
\mathbb{Z}/2$-graded) analogue. But maybe it’s not. Either way, I have a lot of catching up to do. So, here are some basic questions.
First, is there a nice ‘superspace formulation’ of the classical superstring in 3, 4, 6 and 10 dimensions? That is, can we describe such a superstring as a map from a (super?) Riemann surface to some
supermanifold? And can we write the action nicely in these terms?
For some reason the catchphrase ‘superspace formulation’ seems to get used a lot more often for supergravity and super-Yang-Mills theory than for superstrings. Why is that?
Second, does anyone know references where the phase space of a classical superparticle has been described as a ‘supersymplectic manifold’ or ‘super-Poisson manifold’? There seems to be at least a
little work along these lines, e.g.:
Third — and now perhaps I’m pushing my luck — has anybody studied supersymmetric field theories using ‘super-multisymplectic geometry’, in analogy to the treatment of ordinary field theories (e.g.
nonlinear sigma-models) using multisymplectic geometry?
I just don’t want to reinvent a wheel, or waste my time inventing a square one.
Posted at December 16, 2008 7:14 PM UTC
Re: Super Version of 2-Plectic Geometry for Classical Superstrings?
I was about to call it quits for today when I saw this post. So just very quickly:
the big technical problem with superstrings is that
- there is a formulation where the worldsheet is a supermanifold, but target space is not. This is called the NSR superstring. Its quantization is tractable and leads to 2-dimensional superconformal field theory (SCFT) on the worldsheet. The big problem is that using this the supersymmetry on the effective target space is, while present, not manifest.
- there is a formulation where the worldsheet is taken to be an ordinary manifold, but target space is taken to be a supermanifold. This is the GS-superstring (Green-Schwarz). It has the advantage
that target space supersymmetry is manifest, but the disadvantage that it is hard to quantize. Or impossible even. The thing is that this involves certain second-class constraints which nobody, as
far as I know, has managed to handle in the quantum theory.
- There are attempts to combine the advantages of the two approaches while at the same time getting rid of their disadvantages. One such attempt is Berkovits’ formulation of the superstring. I am not
sure what the status of this is, precisely. It started with an extremely ad-hoc guess, which later was seen to have nice relations to the chiral deRham complex. The point here is that one “solves”
the complicated dynamics of the fermions by, roughly, covering the complicated target space for the fermions (the “pure spinors” in this approach) by open subsets. Restricting maps to go just to one
such subset one is left with a free worldsheet theory, which can directly be quantized. This way one obtains a presheaf of SCFTs over the pure spinor target. The whole problem has now been moved into
gluing that back into one global structure.
This is the idea, which is quite beautiful I think. But I don’t know to which extent superstring quantization has been understood this way entirely.
For what you want to do, super-2-plectic geometry, the RNS superstring is the right approach.
Posted by: Urs Schreiber on December 16, 2008 9:03 PM | Permalink | Reply to this
Re: Super Version of 2-Plectic Geometry for Classical Superstrings?
Thanks for all this information, Urs!
For what you want to do, super-2-plectic geometry, the RNS superstring is the right approach.
I trust you, but can you say why this is better than the Green–Schwarz approach? Naively I’d be happier with a manifold mapped into a supermanifold than vice versa, for some reason.
(I’m not completely sure why — maybe because I understand the loop space of a supermanifold better than the superloop space of a manifold!)
Obviously the formulation that’s sufficiently tractable to give a 2d SCFT upon quantization is likely to be better. But I bet you have some more precise reason.
Posted by: John Baez on December 17, 2008 7:07 AM | Permalink | Reply to this
Re: Super Version of 2-Plectic Geometry for Classical Superstrings?
For some reason the catchphrase superspace formulation seems to get used a lot more often for supergravity and super-Yang-Mills theory than for superstrings. Why is that?
For search results that better fit your needs, try combinations like “super worldsheet”.
I am not sure where best to point you short of the standard superstring text books and the standard online lecture notes which you know yourself.
I see that there is a 1-page summary of the RNS superstring on p. 5 of D’Hoker-Phong: Lectures on Two-Loop Superstring. Maybe you want to chase references from there.
Posted by: Urs Schreiber on December 16, 2008 9:19 PM | Permalink | Reply to this
Re: Super Version of 2-Plectic Geometry for Classical Superstrings?
Appendix B p. 53 of D’Hoker-Phong Two-Loop Superstrings II summarizes useful stuff for $N=1$, $d=2$ supergeometry. This is what you want.
More details in their The geometry of string perturbation theory.
Posted by: Urs Schreiber on December 16, 2008 9:28 PM | Permalink | Reply to this
Re: Super Version of 2-Plectic Geometry for Classical Superstrings?
Thanks for these pointers. I know basic superstring references, but I’m looking for mathematically minded stuff. Not just for superstrings, but also superparticles. Do you know stuff that describes
the phase space of a superparticle as some sort of ‘supersymplectic manifold’?
Posted by: John Baez on December 17, 2008 7:13 AM | Permalink | Reply to this
Problem solving
Problem solving and word problem resources online
Find here an annotated list of problem solving websites and books, and a list of math contests. There are many fine resources for word problems on the net! I have personally checked & reviewed each
website, to make sure it is truly useful.
Websites Books Contests Math Projects
Resources on my site
Scales Problems
My video lesson where I solve 14 different balance problems, starting from the most simple and advancing to some that have double scales.
Do's and don'ts of teaching problem solving
My article. Why do most students have so much trouble with word problems? I feel the reason is too many one-step word problems in math textbooks.
Brain growth and the value of mistakes in math learning
In this article I discuss brain plasticity -- or the huge potential for our brains to grow -- which means that EVERY student CAN learn math. Students need to have a growth mindset where they value
mistakes and see them as opportunities for brain growth and learning.
Problem solving websites
Word Problems for Kids
A great selection of word problems for grades 5-12. A hint and a complete solution available for each problem.
Problem Solving Decks from North Carolina Public Schools
Includes a deck of problem cards for grades 1-8, student sheets, and solutions. Many of these problems are best solved with calculators. All of these problems lend themselves to students telling and
writing about their thinking.
Math Stars Problem Solving Newsletter (grades 1-8)
These newsletters are a fantastic, printable resource of problems to solve, along with their solutions.
Word Problem Worksheets from DadsWorksheets.com
Very simple, mostly one-operation word problem worksheets for grades 1-4. Some worksheets have problems for two different operations.
www.dadsworksheets.com/v1/Worksheets/Word Problems.html
MathCounts School Handbook
This handbook contains 300 creative problems for grades 6-8. All problems are mapped according to topic, to difficulty level, and to the Common Core State Standards.
Enriching Mathematics
Open-ended, rich, and investigative math challenges & activities for all levels.
Maths Problem Solving
A collection of math problems in two levels.
Virtual Math Club
Problem sets & puzzles similar to those found on math contests such as the AMC 8, AMC 10, MATHCOUNTS, or the middle school math olympiads, including answers and video solutions posted a week later.
For middle school/early high school level.
Open-Ended Math Problems
Collection of problems that lend themselves to more than one way of solving.
Math Kangaroo Problem Database
Easily make worksheets of challenging math problems based on actual past Math Kangaroo competition problems.
Thinking Blocks
Learn to model or make visual representations of word problems with this interactive program.
Math Circle Presentations
Math circle presentations for grades 6-12 from University of Waterloo and their related student exercises, available as PDF files. These can be used as enrichment, as challenging word problems or as
review of certain topics.
Figure This! Math Challenges for Families
Word problems related to real life. They don't always have all the information but you have to estimate and think. For each problem, there is a hint, other related problems, and interesting trivia.
Website supported by National Council of Teachers of Mathematics.
Learn to Solve Word Problems
A collection of school algebra word problem solvers that solve your problems and help you understand the solutions. All problems are customizable, meaning that you can change all parameters.
Math Word Problems for Children - MathStories.com
Over 12,000 interactive and non-interactive NCTM compliant math word problems, available in both English and Spanish. Helps elementary and middle school children boost their math problem solving and
critical-thinking skills. Need to pay for membership.
Massachusetts Tests for Educator Licensure, Mathematics Subtest
This is a downloadable math practice test for prospective elementary school teachers, and it contains a lot of good problems for problem solving practice, mostly in the middle school, some in the
high school level.
See also:
• Word Problems Solving Strategies
Gives one example of each strategy: Find a pattern, Make a table, Work backwards, Guess and check, Draw a picture, Make a list, Write a number sentence, Use logical reasoning.
Math Word Problems Worksheets
Free worksheets from About.com for various grades in PDF form.
Problem Makeovers by Dan Meyer
Textbook problems that have been changed to be more open and inviting for student exploration.
Search for number, geometry, probability etc. word problems and challenges. Includes solutions.
Noetic Learning Challenge Math
This is a program with weekly assignemts, designed to hone young students' mathematical problem solving skills and logical reasoning skills. The problems are non-routine problem solving questions
that are adapted to many math competitions. Price is about $20 for a semester.
Archive for the Handley Math Page Problems of the Week
From 1998 till 2005 - lots of good problems with solutions.
Best Articles about Solving Word Problems from Let's Play Math
A collection of the best problem solving articles from Let's Play Math blog. Many of these explain and use the bar diagram method also found in Singapore Math books.
Smart Skies - Distance Rate Time problems
Six Air Traffic Control (ATC) Problems, with downloadable curriculum materials and teacher guide. For grades 5-9.
Ms. Lindquist: the Tutor
An intelligent tutoring system for algebra for tutoring students in writing expressions for algebra word problems.
Primary Grade Challenge Math by Edward Zaccaro
A very good book on problem solving with very varied word problems and strategies on how to solve problems. Includes chapters on: Sequences, Problem-solving, Money, Percents, Algebraic Thinking,
Negative Numbers, Logic, Ratios, Probability, Measurements, Fractions, Division. Each chapter’s questions are broken down into four levels: easy, somewhat challenging, challenging, and very challenging.
Challenge Math For the Elementary and Middle School Student by Edward Zaccaro
Another problem-solving book by Zaccaro, for middle school levels. It contains both lessons and exercises for problem solving with three levels of questions.
Challenging Word Problems for Primary Mathematics
From Singapore Math, meant to supplement their curriculum, but can be used to supplement any math curriculum. They show worked examples for each topic and have lots of exercises to solve.
Ray's Arithmetic (free download)
This old arithmetic book bases its instruction on word problems, starting with 1st grade addition and subtraction problems, and advancing through the topics until percentage. One could use it to help
children solve word problems, since it starts from the simplest kind of word problems, and gradually advances. Read online or download for free; the links are in the left sidebar.
A Middle School Math Workbook: Pre-Algebra, Algebra I, and Geometry
This book contains challenging problems that will help your child gain math skills at an internationally competitive level.
A High School Math Workbook: Algebra, Geometry & Precalculus by Qishen Huang
This book contains challenging problems that will help your child gain math skills at an internationally competitive level.
Contests and similar
"Problem of the Week" (POWs)
Problem of the week contests are excellent for finding challenging problems and for motivation. There exist several:
Math League's Homeschool Contests
Challenge your children with the same interesting math contests used in schools. Contests for grades 4, 5, 6, 7, 8, Algebra Course 1, and High School are available in a non-competitive format for the
homeschoolers. The goal is to encourage student interest and confidence in mathematics through solving worthwhile problems and build important critical thinking skills. By subscription only.
National Math Bee
Online math tournament in which students in grades one through six compete in any or all of the four basic operations: addition, subtraction, multiplication, and division.
Math projects or investigations
Mixing in Math
A list of activities that mix math in to sports, snack time, arts and crafts, playground games etc. Easy to prepare and lead, and free.
Yummy Math
Quality math activities from real life, organized into categories such as math and social studies, geometry, data and probability, holidays and annual events, food, sports, number sense, etc.
Available as PDF and DOC files. The activities are free but answers are available only to subscribers.
Make It Real Learning
Math projects or activitity worksheets of real-life applications, focused on answering the question, "When am I ever going to use this?" Free samples available.
A blog with quality math investigations and projects for middle and high school level.
Math Projects.com (MPJ)
Powerful and innovative math projects, lesson plans, and thought-provoking articles, from pre-algebra through geometry.
Middle School Math Projects
Quality math projects, spanning from number sense, geometry, statistics, probability, algebra, to special math events. Pricing $7 on up.
the Math Academy
Downloadable booklets that include hands-on activities about real-life applications of mathematics. Includes probability, fractions & percents, statistics, combinatorics, and patterns & functions.
Real-Life Math Vol. 1 PDF download
A collection of activities for grades 6-8 of how math is used in real life. All twelve lessons include a teacher's guide that explains how to conduct the activity and a student worksheet that guides
the student and poses problems. Price: $9.99. Free example pages included.
Math Activities and Games at Education.com
A huge list of free math activities, organized by grade level from kindergarten through high school.
Innovative and creative board games, activities and worksheets for elementary and middle school math topics.
Hands-on Manipulative Instructions
Print, cut out, and color & glue - various blocks, tiles, graph paper.
Posts by Seth
Total # Posts: 153
F(t) = 30000/(1 + 20e^(-1.5t)) describes the number of people, F(t), who have become ill with influenza t weeks after the initial outbreak in a town with 30,000 inhabitants. a.) How many people became ill with flu when the epidemic began? b.) How many people were ill by the end of th...
Medical research indicates that the risk of having a car accident increases exponentially as the concentration of alcohol in the blood increases. The risk is modeled by R = 6e^(12.77x), where x is the
blood alcohol concentration and R, given as a percent, is the risk of having a ca...
Solve the following: 3ln(4x)=17
solve: log base of 7 (3x+2)=5
How does 26.666 equal 26.2? Could you show me a model of how to do this? Thank You
A shark swam 30 meters to a boat in 16 seconds. How long did it take to swim each meter? 16/30 or 8/15 seconds?
Can you show me the formula used for the following question - how can 7 kids share 3 liters of soda equally?
I need help solving 8 = 3x + 6y for y. Thank you!
How do I find the perimeter of a square with one side listed as 4x + 3 and one more side listed as 3(2x - 5)? Thank you
On a 9-hour trip home from winter break, Mary was forced to drive on a stretch of highway that had a lot of construction. At first she had to go 55 miles per hour for a certain amount of time, but
then she was able to go 65 miles per hour for the rest of the trip and in total ...
what are the simplest methods to understand C++ programming language?
The area of a trapezium is 360 in2. If the ratio of the bases to the height is 6:10:5, find the dimensions of the trapezium.
A 20 foot ladder is leaning against a building. If the ratio of the base of the ladder's distance from the building to its top's height from the ground id 3:4, find how high the top of the ladder is
from the ground.
The area of a yard is 15000. If the ratio of the length to the width to the depth is 4:2:1, what are the dimensions of the pool?
A yo-yo with an axle diameter of 2.00 cm has an 80.0 cm length of string wrapped around it many times in such a way that the string completely covers the surface of its axle, but there are no double
layers of string. The outermost portion of the yo-yo is 4.00 cm from the ce...
How much mechanical energy was lost during the ball's fall?
Number Five is three!!
John cut an apple pie into 6 equal slices. His friends ate some of the pie and now 3/5 is left. John wants to put each slice of the leftover pie on its own plate. What part of the pie will he put on
each plate?
Math Help please
You complain that the hot tub in your hotel suite is not hot enough. The hotel tells you they will increase the temperature by 10% each hour. If the current temperature of the hot tub is 75 degrees
Fahrenheit, what will be the temperature of the hot tub after three hours to the neares...
A heat engine does 650 J of work and dumps 1270 J of heat to its cold reservoir. What is its efficiency?
Solve for the value of x: 6log(x^2+1)-x=0
Please help! Maths
For what value of K will the equation 3x^2- 4x-(1+K)= 0 have equal roots?
Please Reiny Solve it For me Again! please......
Solve for the value of x: log5(x-2) + log8(x-4) = log6(x-1). For your information, 5, 8 and 6 are bases, not just multipliers. Thank you.
Please solve
Solve and graph: y>2x-1
Please Help work it out! LOGARITHM
log4(3x-7)^2=10. I have no idea in working it out.
Please Help!!!! Maths
Please Help work it out! Maths
Please Help work it out! Maths
(2x-1)/(x+1)=2X/(X-1)+ (5/X)
Find the value of x: (2x-1)/(x+1)=2x/(x-1)-(5/x)
Please solve it for me!! Please...Maths
Please solve it for me!! Please...Maths
Solve for the value of x: 6log(x^2+1)-x=0
Please help me work it out!!! Maths
solve for x: ln(x^2-6x-16)=5
Please help me here!!! Maths
Solve for the value of x: ln(x^2-6x-16)=5. Please help me work it out because I do not have an idea in work it out.
Please help me here!!! Maths
Isolve for x: ln(x^2-6x-16)= 0 . help me beacuse I do not have the idea to work it out.
Please help me!!!! maths
Solve for x: log5(x-2)+log8(x-4) = log6(x-1). I do not have any idea on this topic.
Help!!!! math
An hiking group wanted to travel a long chapter of the Appalachian Trail for 4 days. They planned to hike 6 hours per day and wanted to complete a trail chapter that was 60 miles long. what rate of
speed would they have to average to complete the trail chapter as planned?
Please solve it.Logarithm
Please help me because I do not any idea in Logarithm topic.Solve for the value of X: 6log(x^2+1)=5
For the following graph: a. Find the domain of f. b. Find the range of f. c. Find the x-intercepts. d. Find the y-intercept. e. Find the intervals over which f is increasing. f. Find the intervals
over which f is decreasing. g. Find the intervals over which f is constant. h. F...
The country of Arkanslavia had money supplies of 183 million arkollars in 2010 and 644 million in 2011. They have real output of 791 arkollars in both years. In Arkanslavia velocity was 7 in 2010 but
changed to 5 in 2011. What is the difference in the price level between 2011 ...
Physics: Buoyancy
A crane lifts the 23000 kg steel hull of a ship out of the water. Determine the following. (a) the tension in the crane's cable when the hull is submerged in the water
LM has the endpoints L(-1,1) and M(-5,-3) find the coordinates of the midpoint of LM
Suppose that $200 was deposited on 1st Jan 2000 into an account that earned 5% interest compounded semiannually. Suppose further that $200 was deposited on 1st Jan 2001 into a different account that
earned 6% interest compounded semiannually. In what month of what year will th...
Since it didnt ask for an improper fraction or mixed number and that is what we are studying I came up with 40/8 as the improper and 5 for the mixed?
A shipment of boxes weighs 40 pounds. There are 8 boxes and each weighs the same number of pounds. How much does each box weigh?
Explain what happens during each of the three steps of cellular respiration and explain why a lack of oxygen is such a severe problem to the process.
Using the fundamentals of thermodynamics, explain why we must continue to eat in order to maintain homeostasis?
A 400-g block of copper at a temperature of 85°C is dropped into 300 g of water at 33°C. The water is contained in a 250-g glass container. What is the final temperature of the mixture?
A force of 1300 lb compresses a spring from its natural length of 16 in to a length of 10 in. How much work is done in compressing it from 10 in to 6 in?
find the work done in winding up a 175 ft cable that weighs 4.00 lb/ft?
a disk has a mass of 2kg and a length of 2 meters and it oscillates about an axis that is 1.5 meters from the center of the disk. It is initially displaced at an angular displacement of 0.2 radians
and released. A) What is the equation of motion? B)What is the kinetic energy a...
What is the gravitational force on a bar that is 3 meters long and has a density of 3x kg/m and lies 3 meters from a 2 kg point mass
A 2 kg steel ball strikes a wall with a speed of 8.15 m/s at an angle of 54.4◦ with the normal to the wall. It bounces off with the same speed and angle, as shown in the figure. If the ball is in
contact with the wall for 0.243 s, what is the magnitude of the average for...
A 0.30 kg puck, initially at rest on a frictionless horizontal surface, is struck by a 0.20 kg puck that is initially moving along the x axis with a velocity of 2.8 m/s. After the collision, the 0.20
kg puck has a speed of 0.5 m/s at an angle of θ = 53° to the positiv...
How far to the nearest tenth of a meter can a runner running at 15 m/s run in the time it takes a rock to fall from rest 68 meters?
A box of books weighing 290 N is shoved across the floor of an apartment by a force of 400 N exerted downward at an angle of 35.0° below the horizontal. If the coefficient of kinetic friction between
box and floor is 0.57, how long does it take to move the box 3.80 m, star...
A train has a mass of 5.22 multiplied by 106 kg and is moving at 70.0 km/h. The engineer applies the brakes, which results in a net backward force of 1.87 multiplied by 106 N on the train. The brakes
are held on for 37.0 s. (a) What is the new speed of the train? (b) How far d...
a mineral is organic, which means that it contains what?
sam mowed half of the yard on friday and half of the remaining grass on saturday what fractional part of yard still needs to be mowed
how do you find the square root of 57 and then round to the nearest tenth
In the following reaction, if 13 g of sodium are reacted, how many grams of copper (solid) can be obtained? 2Na(s) + Cu(NO3)2 --> 2NaNO3 + Cu(s)
If the product of two numbers is 65 and one of the numbers is 8 more than the other, what is the remaining number?
3rd grade math
my answer is 37
How many mL of orange juice would be required to get the recommended daily allowance of 60 mg of vitamin C? (Concentration of vitamin C is .011 moles/L in 5.0 mL of juice).
world history
is pennsylvania's government a monotheistic or not?
The useful power output of Bryan Allen, who flew a human-powered airplane across the English Channel on June 12th, 1979, was about 350 W. The propeller of the airplane was driven by the pilot's legs,
using a bicycle-type mechanism. Using the efficiency for legs (20%), cal...
"By means of a rope, whose mass is negligible, two blocks (mass: 11kg and 44 kg) are suspended over a pulley. The pulley can be treated as a uniform solid cylindrical disk. The downward acceleration
of the 44 kg block is observed to be exactly one half the acceleration du...
"A solid cylinder and a thin-walled hollow cylinder have the same mass and radius. They are rolling horizontally along the ground toward the bottom of an incline. They center of mass of each cylinder
has the same translational speed. The cylinders roll up the incline and ...
"A 1220-N uniform beam is attached to a vertical wall at one end and is supported by a cable at the other end. A 1960-N crate hangs from the far end of the beam. Find a) the magnitude of the tension
in the cable and b) the magnitude of the horizontal and vertical componen...
"A small 0.5 kg object moves on a frictionless horizontal table in a circular path of radius 1 meter. The angular speed is 6.28 rad/s. The object is attached to a string of negligible mass that
passes through a small hole in the table at the center of the circle. Someone ...
"A small 0.5 kg object moves on a frictionless horizontal table in a circular path of radius 1 meter. The angular speed is 6.28 rad/s. The object is attached to a string of negligible mass that
passes through a small hole in the table at the center of the circle. Someone...
"A 1220-N uniform beam is attached to a vertical wall at one end and is supported by a cable at the other end. A 1960-N crate hangs from the far end of the beam. Find a) the magnitude of the tension
in the cable and b) the magnitude of the horizontal and vertical compone...
"A solid cylinder and a thin-walled hollow cylinder have the same mass and radius. They are rolling horizontally along the ground toward the bottom of an incline. They center of mass of each cylinder
has the same translational speed. The cylinders roll up the incline and...
"By means of a rope, whose mass is negligible, two blocks (mass: 11kg and 44 kg) are suspended over a pulley. The pulley can be treated as a uniform solid cylindrical disk. The downward acceleration
of the 44 kg block is observed to be exactly one half the acceleration d...
Since there are no non-conservative forces involved in this scenario, you can use the principle of the conservation of momentum. Set up the momentum of the baseball player equal to the momentum of
the baseball: (75kg)(vB)=(0.145kg)(40m/s). The velocity of the baseball player ...
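Evaluating that momentum balance numerically (a tiny sketch; the 75 kg, 0.145 kg and 40 m/s figures are the ones quoted in the post):

```python
# (75 kg) * v_player = (0.145 kg) * (40 m/s)
v_player = 0.145 * 40.0 / 75.0
print(round(v_player, 4))    # about 0.0773 m/s
```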
separatio of variables
solve the initial value problem by separation of variables dy/dx=6x^2y and y(0)=4
separation of variables
solve the initial value problem by separation of variables dy/dx = x^2/y given y = -5 when x = 3
What does it mean when the scale factor of the actual object to its model is less than one?
if the sum of the diameters of two circles is 28 inches one circle has a diameter that is 8 inches longer than the diameter of the other what are the diameters of the two circles
Explain why arcsin 3 is undefined.
Is this a buffer system? NH3/(NH4)2SO4? It is formed by mixing NH3 (weak base) and a strong acid (H2SO4), but according to my worksheet it is not a buffer system. Why is this so? Isnt it a buffer as
long as it consists of a weak base and its salt of a strong acid? Please do no...
how does movement differ among bacteria, grasshoppers, fish, and bears
1) DETERMINE pH of each of the following 2-component solutions. 0.08M RbOH & 0.130M NaHCO3
Suppose a fuel-cell generator was used to produce electricity for a house. If each H2 molecule produces 2e- , how many kilograms of hydrogen would be required to generate the electricity needed for a
typical house? Assume the home uses about 850 kWh of electricity per month, w...
carts of a roller coaster are travelling 5m/s over the top of the first hill. If there is no energy loss due to friction and the hill is 50m high,what speed will the carts have at the bottom
How much limestone (CaCO3) in kilograms would be required to completely neutralize a lake with a volume of 5.2 x 10^9L . The lake contains 5.0 x 10^-3 moles of H2SO4 per liter?
Math: Is my answer correct?
it is a
what is a mode
41.7 g KBr
9th grade
I need a good thesis statement for a research paper about agriculture as a career.
Yes, and the sentence is exactly as written. I'm just supposed to choose. That is why it is confusing.
Money might be sewn into a woman's or women's coat linings. Worry haunted the refugee children's or childrens' eyes
The Philosophy of Real Mathematics
What is the Philosophy of Real Mathematics?
I have characterised the Philosophy of Real Mathematics in my book as the study of "what leading mathematicians of their day have achieved, how their styles of reasoning evolve, how they justify the
course along which they steer their programmes, what constitute obstacles to these programmes, how they come to view a domain as worthy of study and how their ideas shape and are shaped by the
concerns of physicists and other scientists".
My latest thinking is contained in these papers: How Mathematicians May Fail to be Fully Rational, now called 'Narrative and the Rationality of Mathematical Practice'; Reflections on Michael
Friedman's Dynamics of Reason; Categorification as a Heuristic Device; Mathematical Kinds, or Being Kind to Mathematics; Review of Omnes' 'Converging Realities'; Smoke Rings, a history of early knot
theory (large files of figures, here and here); Blending Philosophy of Mathematics and Cognitive Science (slides); Why and How to Write a History of Higher-Dimensional Algebra (notes).
The time is right for the philosophy of mathematics to reconnect with mathematics since a revolution is in the air. The French mathematician Pierre Cartier remarks in an interview:
When I began in mathematics the main task of a mathematician was to bring order and make a synthesis of existing material, to create what Thomas Kuhn called normal science. Mathematics, in the
forties and fifties, was undergoing what Kuhn calls a solidification period. In a given science there are times when you have to take all the existing material and create a unified terminology,
unified standards, and train people in a unified style. The purpose of mathematics, in the fifties and sixties, was that, to create a new era of normal science. Now we are again at the beginning
of a new revolution. Mathematics is undergoing major changes. We don't know exactly where it will go. It is not yet time to make a synthesis of all these things—maybe in twenty or thirty years it
will be time for a new Bourbaki. I consider myself very fortunate to have had two lives, a life of normal science and a life of scientific revolution.
An important factor behind the timing of this revolution has been the flood of Russian mathematicians coming to the West after the collapse of the Soviet Union. Certainly, the revolution is young,
but this fact should not be used to argue that no philosophical resources be devoted to the cause. Russell wasn't just tidying up after the dust had settled during the last revolution.
Part of the job of the philosopher of X, and a considerable part of it at that, is to understand how the values of practitioners of X operate in their discipline. In Lecture 2 of The Empirical Stance
(Yale University Press) Bas van Fraassen claims this for the natural sciences. I do so for mathematics. This is partly a descriptive task and partly an evaluative one. So much effort has been devoted
to a thin notion of truth, so little to the thicker notion of significance. To say that scientists and mathematicians aim merely for the truth is a gross distortion. They aim for significant truths.
What we can do as philosophers is to assist in the articulation of a "strongly evaluative language", to use a term of Charles Taylor (Philosophical Papers I, CUP 1985: 25), one which will be
qualitative and contrastive. In the case of mathematics, for example, we need to articulate the notions of "significance" and "importance" with a view to influencing the ways language is used in the
decision-making of the mathematical community.
This kind of philosophical work is far removed from much contemporary philosophy of mathematics, and here lies a source of our present troubles. Much of the training required to become a philosopher
of mathematics concerns its representation in logical languages which do not run along its grain, but which are supposed to reveal its 'ontological commitments'. The value of truth as applied to a
statement is the only one possible to treat in this way, and it sits at the core of the discipline. The rub is that this tends to switch off mathematicians' interest. After all, they do not gain
their doctorate, promotion or respect from their peers for producing merely impeccably logically correct mathematics. The values at stake for them at the beginning of this century are ones connected
with the conceptual organisation of the discipline. Now, I don't discount the possibility that in the future some of these other values may be fruitfully treated as searches for timeless solutions to
problems, but I believe I may be allowed my doubts on this score. Jamie Tappenden has shown us that Frege viewed his Begriffsschrift as a means not only to secure the truth of propositions, but also
to "carve out" concepts correctly. It is very clear a century later that this hope has not been realised. The only plausible candidate to achieve a similar task at present is category theory, a
language I'd be only too happy for some more philosophers to learn, but going beyond a judgement of category theory's present power to get oneself straightened out, conceptually speaking, to a
timeless conclusion would seem to me foolhardy.
Of course, we could rejoice in the fact that our philosophy of mathematics does not speak to mathematicians. Have we recovered yet from our discipline's dogmatic advocacy of Euclidean geometry? Safer
then to talk amongst ourselves. But we are permitted to aspire to more subtle forms of influence. About values in general, Charles Taylor has written:
Our attempts to formulate what we hold important must, like descriptions, strive to be faithful to something. But what they strive to be faithful to is not an independent object with a fixed
degree and manner of evidence, but rather a largely inarticulate sense of what is of decisive importance. An articulation of this 'object' tends to make it something different from what it was
before. (Philosophical Papers I: 38)
That we can find resonances in the writings of philosophers who have never been close to mathematics may appear surprising at first. But consider how a similar trend may be detected in contemporary
philosophy of science when, for example, in The Empirical Stance Bas van Fraassen uses Sartre to help him think through revolutionary change in science. I'll end this introduction, then, with a
quotation from the mathematician Hermann Weyl, an avid reader of Heidegger and interlocutor of Jaspers, which suggests why we might follow van Fraassen's example:
Mathematics is not the rigid and petrifying schema, as the layman so much likes to view it; with it, we rather stand precisely at the point of intersection of restraint and freedom that makes up
the essence of man itself.
(p. 136, 'The Current Epistemological Situation in Mathematics' in Paolo Mancosu (ed.) From Brouwer to Hilbert. The Debate on the Foundations of Mathematics in the 1920s, Oxford University Press, 1998,
pp. 123-142).
Taking a Look
Ian Hacking in his Historical Ontology (Harvard University Press 2002: 63) discusses and recommends a philosophical approach to the study of science, including mathematics, which involves "taking a
look" at the practice of science. Why bother? Philosophers seem to divide fairly sharply on this issue. Those with a strong historical sense tend to see the point straight away, and would agree with
R.G.Collingwood when he claims:
"There are two questions to be asked whenever anyone inquires into the nature of any science: what is it like? and what is it about? of these two questions the one I have put first must necessarily
be asked before the one I have put second, but when in due course we come to answer the second we can only answer it by a fresh and closer consideration of the first." (The Principles of History)
Collingwood intends 'science' to include any viable knowledge-acquiring practice. Clearly, then, to gain a sense of what mathematics is like, one needs to make a study of what mathematicians do. But
we may perhaps arrive at the same point starting out from a squarely analytic orientation. A theme emerging in recent philosophy of mathematics is the idea that two dimensions may have been conflated
in debates concerning realism. One dimension relates to the mode of existence of mathematical entities (concrete like a chair, abstract like democracy, fictional like Oliver Twist, etc.). The other
dimension concerns what it is that constrains the mathematician in her choice of research. Is there something more restricting than logic, other than mere fashionability, which dictates that some
concepts just fly after they've been introduced (e.g., quantum groups) while others never really make it, and which gives many mathematicians the sense that they are sculpting from hard stone rather
than from butter?
We can detect something of the splitting of these dimensions in an article by Mark Balaguer ('A Theory of Mathematical Truth', Pacific Philosophical Quarterly 82 (2001), 87-114), where he discusses a
notion of "objective correctness" to which one may ascribe whether realist or anti-realist:
"A mathematical sentence is objectively correct just in case it is "built into", or follows from, the notions, conceptions, intuitions, and so on that we have in connection with the given branch of
The task of the philosopher of real mathematics is to determine how these "notions, conceptions, intuitions, and so on" are developed, and what constitute good reasons for this process.
Doesn't arithmetic suffice? Already there we have a rich mix of symbolism, concepts, intuition, applications, judgements, necessity, the infinite, pertinent formal results, and plenty of first rate
philosophical spadework. In that mathematical activity presents to some degree the "link between experience, language, thought, and the world, which is at the very centre of what it is to be human"
(Michael Potter Reason's Nearest Kin: 18), surely all the clues are there to be found in arithmetic. Well, I have wagered against this claim. I hold that not all of the philosophical juice to be
squeezed out of mathematics will be found by considering this theory and its applications. Some aspects do not feature in arithmetic, some are only partially realised there, and some are fully
realised but very unlikely to be noticed without a detour. Set theory takes us only a little further.
In my struggle as a philosopher to make my work responsive to mathematics as actually and historically practised, I have generally found it illuminating to search for similarities between mathematics
and the natural sciences. Now, this strikes some people as wrong-headed. Even if there are similarities between these two knowledge-acquiring practices, why focus on these and not what makes
mathematics unique, say, its distinctive use of proof. But what if this choice between searching for similarities and searching for differences in the case of mathematics and science resembled that
facing the zoologist wondering about that large sea creature - the whale? It seems clear to us now that we can be led to make important discoveries about the whale from previously acquiring knowledge
about the elephant (anatomy, physiology, genetics, behaviour). After locating commonalities we can start to think about which features are unique to whales, and who knows an apparently unique feature
there might find an analogue back amongst the elephants. You are probably thinking that I am likening the less accessible species - the whale - to mathematics, and the more accessible one - the
elephant - to science. But consider the possibility that certain features of knowledge-acquisition are easier to detect in the case of mathematics. This crazy idea was one that the Hungarian
mathematician George Polya adhered to. And in fact as early as 1941 he had worked out most of the principles of the probabilistic approach to epistemology known as Bayesianism. Much effort could have
been saved if philosophers of science had listened to him. Well I believe there are many other reasons we might to look to mathematics: as an example of an extraordinarily intricate web of coherence
meeting hierarchical foundations; clearer examples of the heuristic effects of thinking in different ways about the same object; the relationship between rationality and aesthetics; and, the tireless
working over of ideas after publication.
This having been said, enormously more work has been carried out on the natural sciences, so it is natural to look to studies of science for inspiration. In the introduction to my book I raise five
debates, contributions to which should illuminate the nature of mathematics. The first three are brought over from Ian Hacking's discussion of the natural sciences in 'The Social Construction of
What?' (Harvard University Press, 1999).
1. Inherent structurism/nominalism: Can we make any sense of the idea of carving the definition of a concept correctly? Can we agree with Lakatos here?
"As far as naïve classification is concerned, nominalists are close to the truth when claiming that the only thing that polyhedra have in common is their name. But after a few centuries of proofs and
refutations, as the theory of polyhedra develops, and theoretical classification replaces naïve classification, the balance changes in favour of the realist." (Lakatos Proofs and Refutations: 92n)
Why does Frege describe the qualities of good mathematical concepts in the same terms as modern exponents of the theory of natural kinds?
[Kant] seems to think of concepts as defined by giving a simple list of characteristics in no special order; but of all ways of forming concepts, that is one of the least fruitful. If we look through
the definitions given in the course of this book, we shall scarcely find one that is of this description. The same is true of the really fruitful definitions in mathematics, such as that of the
continuity of a function. What we find in these is not a simple list of characteristics; every element is intimately, I might almost say organically, connected with others. (Frege, Foundations of
Arithmetic: 100)
2. Inevitability: given a mathematics as sophisticated as our own, how probable was it that concepts such as natural numbers, groups, groupoids would be devised? Do any concepts 'force' themselves
upon our attention, and do they dictate to us the way they are used?
3. Reasons for stability: Why do we persist in teaching certain ways of thinking about particular concepts - social inertia or because that's what they are like?
4. Connectivity of mathematics: How should we represent the connectivity of mathematics on a scale running from thoroughly fragmented to very unified? Why are there so many apparently surprising connections between seemingly distant branches?
5. Miraculousness of applicability: Choose a position between the amazement of Eugene Wigner and its deflation by those who take empirical research as the source of much mathematics.
Last revised: September 23, 2005.
|
Normal Distribution Question
October 19th 2008, 11:30 PM #1
Hi, i have this question
The weight of a box of cereal has a normal distribution with mean = 340g and standard deviation = 5g.
(c) 30 boxes of this cereal are selected at random for weighing. Find the probability that the sample variance is more than 36.69.
how do i solve this? thank you.
You should know that when samples of size n are taken from a normal population with variance $\sigma^2$ then
$\frac{n-1}{\sigma^2} S^2 \sim \chi^2_{n-1}$
where $S^2$ is the random variable variance of sample.
So for your problem $\frac{29}{25} S^2 \sim \chi^2_{29}$.
$\Pr\left( S^2 > 36.69\right) = \Pr\left( \frac{29}{25} S^2 > \frac{29}{25} \times 36.69 \right) = \Pr(\chi^2_{29} > 42.5604) = 0.05$, correct to two decimal places.
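A quick numerical check (Python with SciPy; not part of the original thread) reproduces that tail probability:

# P(S^2 > 36.69) = P(chi-square with 29 df > (29/25)*36.69)
from scipy.stats import chi2

n, sigma2 = 30, 5**2                    # sample size and population variance
threshold = (n - 1) / sigma2 * 36.69    # 29/25 * 36.69 = 42.5604
print(chi2.sf(threshold, df=n - 1))     # upper-tail probability, approximately 0.05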
|
Law of Sines and Law of Cosines
ratios to the nearest hundredth and angle measures to the nearest degree. ... Example 1: Finding Trigonometric Ratios for Obtuse Angles ... – PowerPoint PPT presentation
|
A new computer algorithm to solve previously unsolvable counting problems
Posted: February 12, 2009
(Nanowerk News) How many different sudokus are there? How many different ways are there to color in the countries on a map? And how do atoms behave in a solid? Researchers at the Max Planck Institute
for Dynamics and Self-Organization in Göttingen and at Cornell University (Ithaca, USA) have now developed a new method that quickly provides an answer to these questions.
In principle, there has always been a way to solve them. However, computers were unable to find the solution as the calculations took too long. With the new method, the scientists look at separate
sections of the problem and work through them one at a time. Up to now, each stage of the calculation has involved the whole map or the whole sudoku. The answers to many problems in physics,
mathematics and computer science can be provided in this way for the first time (Counting complex disordered states by efficient pattern matching: chromatic polynomials and Potts partition functions).
Whether sudoku, a map of Germany or solid bodies - in all of these cases, it’s all about counting possibilities. In the sudoku, it is the permitted solutions; in the solid body, it is the possible
arrangements of atoms. In the map, the question is how many ways the map can be colored so that adjacent countries are always shown in a different color. Scientists depict these counting problems as
a network of lines and nodes. Consequently, they need to answer just one question: How many different ways are there to color in the nodes with a certain number of colors? The only condition: nodes
joined by a line may not have the same color. Depending on the application, the color of a node is given a completely new significance. In the case of the map, "color" actually means color; with
sudoku the "colors" represent different figures.
Two of 1152 different ways to color the nodes of the same lattice. Nodes connected by a line may not have the same color. (Image: Max Planck Institute for Dynamics and Self-Organization)
"The existing algorithm copies the whole network for each stage of the calculation and only changes one aspect of it each time," explains Frank van Bussel of the Max Planck Institute for Dynamics and
Self-Organization (MPIDS). Increasing the number of nodes dramatically increases the calculation time. For a square lattice the size of a chess board, this is estimated to be many billions of years.
The new algorithm developed by the Göttingen-based scientists is significantly faster. "Our calculation for the chess board lattice only takes seven seconds," explains Denny Fliegner from MPIDS.
This is how it’s done: With the new method, the researchers move through the network node by node. As if the computer program were short-sighted, it only ever looks at the next node point and not at
the whole network. At the first node point, it cannot finalize the color selection as it would have to know how all the other nodes are connected to each other. However, instead of answering this
question, the program notes down a formula for the first lattice point which contains this uncertainty as an unknown quantity. As it progresses through the network, all the connections become visible
and the unknown quantities are eliminated. Having arrived at the final node point, the program’s knowledge of the network is complete.
This new method can be used on much more complicated cases than the existing standard algorithm. "We can now answer many questions in physics, graph theory and computer science that have hitherto
been practically unsolvable," says Marc Timme from MPIDS. "For example, our method can be applied to antiferromagnetic solids," he adds. In these solid bodies, every atom has an internal rotational
pulse, called spin, which can have different values. Usually, adjacent atoms exhibit different spins. It is now possible to calculate the number of possible spin arrangements, which will allow
physicists to draw conclusions about the fundamental characteristics of the thermodynamics of solid bodies.
|
Difficult Math Problems
Posted on 04/07/2011 by Anthony Purcell
I went searching for a fun math video on YouTube. The top one that showed up was the Ma and Pa Kettle Math. First, it’s a great video. It cracks me up every time. It is very true though. If you look
at how they are doing the math, it all makes sense.
As you can see, when you look at how the Kettle’s are doing the math, it makes complete sense. This is how kids can look at math. When a student gives you an answer you cannot just say it’s wrong.
Ask them how they got the answer. They may have a correct answer for the way that they did the problem.
This next video is of a celebrity on "Who Wants to be a Millionaire." If you go and read the comments on the video, people are just rude. First, comments on any type of media just get trashy. I wish people would grow up. (Alright, off my soap box.) I want to give her the benefit of the doubt. The question IS easy, but the way it's approached is not. Here is a screen shot of the question and answers.
Can I tell you that $1.50 multiplied by 5 is $7.50? Of course! Like I said, the problem is easy. However, when you look at the answers you have to think. I’m a math teacher and I don’t have these 4
answers on the top of my head. So when asking people about this, it is difficult.
As you can see, she can do the multiplication rather easily. Students are the same way. However, when you show them the question and answers in this form, many students will freak out. There are so
many factors to this question that it gets scary when you first see it. The main key is to keep students calm and just work through the problem step by step.
There are many times that I share the problem with students to have them solve it. Then I show their answers to choose from and they understand how to do it. However, on state assessments, students
see the whole thing at once and they get scared and are unsure of themselves. Why do states set up problems in a way for students to fail? Are we not here to help them?
My job is to help students solve math problems, NOT to teach them how the state will try to trick them on the assessments. This makes it seem as if people are out to get them. Students get scared.
This does not make any sense to me. Here is the video of Patricia Heaton with Regis on the show.
You can see that Regis leads her through the answers. Students need assistance at times just like adults. I wish that the state would stop trying to trick students in the wording of questions and
answers. Questions like the one in this video are what make people scared of math. It's a simple problem made difficult.
*I would like to apologize to Patricia Heaton for the screen shot that I captured. I enjoyed her on Everybody Loves Raymond and enjoy her new show The Middle.
About Anthony Purcell
I am Anthony Purcell and am teaching Science for the first time in the 2011-12 school year. I have also taught middle school math and fourth grade. I am looking at education and trying to figure out
what it means to me and how I want to teach.
|
MathGroup Archive: August 2006 [00520]
Re: Rapid execution of gaussian integrals
• To: mathgroup at smc.vnet.net
• Subject: [mg68812] Re: Rapid execution of gaussian integrals
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Sat, 19 Aug 2006 00:41:28 -0400 (EDT)
• Organization: The University of Western Australia
• References: <ec1b5t$p0b$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
In article <ec1b5t$p0b$1 at smc.vnet.net>, AES <siegman at stanford.edu>
> I recently posted about very slow execution of infinite symbolic
> integrals of the general form
> Integrate[const * Exp[-A x^2 + 2 B x], {x, -Infinity, Infinity},]
> when A and B are constant-valued expressions of even modest complexity,
> even when one inserts the necessary Assumptions to make the integral
> convergent.
> Daniel Lichtbau replied with some helpful suggestions on how to speed
> things up a bit; but to really speed things up I've since been using the
> brute force approach
> gaussianIntegral[func_, x_] : = Module[ {exp, A, B, C, const},
> exp = Exponent[func, E];
> A = Coefficient[-exp ,x, 2];
> B = Coefficient[ exp, x, 1]/2;
> C = Coefficient[ exp, x, 0];
> const=func/Exp[exp];
> const Sqrt[Pi/A] Exp[ (B^2/A) + C]]
> Crude, but seems to work OK for everything I've thrown at it so far. If
> anyone has any criticism or warnings, I'll be glad to learn from them.
This approach should work fine as long as your integrals are convergent.
Another approach is to use pattern-matching. Don't work with integrals,
just with integrands (as you are doing with gaussianIntegral):
E^(a_ x^2 + b_ x + c_) :> (E^(c - b^2/(4 a)) Sqrt[Pi])/Sqrt[-a]
To get exponentials such as Exp[a (x+1)^2] into this form, one can use
E^(z_) :> E^Collect[z, x]
Paul Abbott Phone: 61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
AUSTRALIA http://physics.uwa.edu.au/~paul
|
Is an exponential solution the correct thing to try here?
September 6th 2010, 02:19 PM #1
Hi all,
I have an ODE of the form X" + aX' + (bc)^2 X = 0.
Is the correct way of solving this to try an exponential solution of the form Ae^rx?
Assuming a, b, and c are constants, it could be of this form. Have you solved your characteristic equation yet? What did you get?
Yes, a,b and c are constants, sorry I forgot to put that in.
It becomes a quadratic equation:
r^2 + ar + (bc)^2 = 0.
The quadratic formula gives
r = [-a + sqrt(a^2 - 4(bc)^2)]/2 and
r = [-a - sqrt(a^2 - 4(bc)^2)]/2.
I note that there will be two distinct real exponential solutions when the discriminant is positive, that is, when
a^2 - 4(bc)^2 > 0, or
a^2 > 4(bc)^2, or
|a| > |2bc|
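As a sketch of the same algebra (Python with SymPy; the symbols follow the thread, and this is just a check rather than part of the original posts):

import sympy as sp

a, b, c, r = sp.symbols('a b c r')
# characteristic equation of X'' + a X' + (b c)^2 X = 0
roots = sp.solve(sp.Eq(r**2 + a*r + (b*c)**2, 0), r)
print(roots)  # the two roots (-a +/- sqrt(a**2 - 4*(b*c)**2))/2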
|
F Statistic Table: What is it Used For?
The F Statistic Table is actually a collection of tables, for four alpha levels: .10, .05, .025 and .01.
What is an F Distribution?
The f value, or f statistic, is a random variable that can be used in an f test to compare two variances. The f value has an f distribution. The f distribution is used most often in Analysis of Variance (ANOVA/MANOVA) and is a ratio of two Chi-square variables, each divided by its degrees of freedom. Each F distribution has specific degrees of freedom in the numerator and in the denominator. The numerator degrees of freedom are always listed first, as changing the order of degrees of freedom changes the distribution; for example, F(9,11) does not equal F(11,9).
What is the F Statistic Table Used for?
When you have found the F value, you can compare it with an f critical value in the table. If you use alpha=.05 (5%), that means you are looking at the 95 percent confidence level. If your observed value of F is larger than the value in the F table, then you can reject the null hypothesis with 95 percent confidence that the difference in variance between your two populations isn't due to random chance.
How to use the F Statistic Table
The name of the table is the right tail area. The rows in the F Distribution Table represent denominator degrees of freedom and the columns represent numerator degrees of freedom.
For example, to determine the .10 critical value for an F distribution with 6 and 6 degrees of freedom, look in the 6 column (numerator) and the 6 row (denominator) of the F Table for alpha=.10: F(.10, 6, 6) ≈ 3.055.
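The same critical value can be checked programmatically; the short Python/SciPy snippet below is an illustration and is not part of the original article:

from scipy.stats import f

alpha, dfn, dfd = 0.10, 6, 6
# critical value with right-tail area alpha = the 90th percentile of F(6, 6)
f_crit = f.ppf(1 - alpha, dfn, dfd)
print(round(f_crit, 3))  # 3.055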
Why Use the F Statistic Table? Why not just use a calculator?
A calculator will certainly give you a fast answer. But with many scenarios in statistics, you will look at a range of possibilities and a table is much better for visualizing a large number of
probabilities at the same time.
|
What is the diameter of the largest pipe which will fit between the board, the fence, and the ground?
A board leaning against a fence makes an angle of 30 degrees with the horizontal. If the board is 4 feet long, what is the diameter of the largest pipe which will fit between the board, the fence,
and the ground?
The board, fence and ground form a right special triangle. The angles are 30, 60 and 90. Since the board is 4 feet, it forms the hypotenuse of the triangle. The side opposite the 30 degrees (fence
side) is 2 feet, or half of the hypotenuse. The side opposite the 60 degrees (ground side) is 2 times square root 3, which is approximately 3.464 feet.
The inscribed circle, or incircle, of the triangle is the largest diameter pipe that will fit. The equation for the radius of the circle is 2 times the area of the triangle divided by the perimeter
of the triangle (2a/p).
In this problem, the area of the triangle is .5bh = .5(2)(2sqrt(3)) = 2sqrt(3).
The perimeter is 2+2sqrt(3)+4 = 6+2sqrt(3)
radius = 2(2sqrt(3))/(6+2sqrt(3)) = 6.928/9.464 = .732
The radius of the pipe is approximately .732 feet.
The diameter of the largest pipe is 2(.732) = 1.464 feet.
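A short script (Python; simply a sketch of the arithmetic above, not part of the original answer) reproduces these numbers:

import math

board = 4.0                                    # hypotenuse: the 4-foot board
fence = board * math.sin(math.radians(30))     # side opposite 30 degrees = 2 ft
ground = board * math.cos(math.radians(30))    # side opposite 60 degrees = 2*sqrt(3) ft

area = 0.5 * fence * ground                    # area of the right triangle
radius = 2 * area / (fence + ground + board)   # incircle radius r = 2A/p
print(round(radius, 3), round(2 * radius, 3))  # 0.732 1.464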
|
program for Fibonacci number in Python
Although the recursive implementation given above is elegant and close to the mathematical definition, it is not very practical. Calculating the nth Fibonacci number requires calculating two smaller Fibonacci numbers, which in turn require two additional recursive calls each, and so on until all branches reach 1. The iterative solution is faster, but still repeats a lot of calculations when computing successive Fibonacci numbers. To remedy this, we can employ memoization to cache previous computations.
We first establish a memoization "cache", which stores previously computed Fibonacci numbers. In this case we use an ArrayList, initialized with the first two Fibonacci numbers, as the cache. Note that we have also moved from using ints to using Java's BigInteger class, which provides arbitrary-precision integers. As a result, the memoized implementation can also compute much larger Fibonacci numbers than the previous solutions (although they too could have been implemented using BigIntegers).
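The passage above describes a Java implementation; as a rough Python sketch of the same memoization idea (Python integers are already arbitrary-precision, so no BigInteger class is needed), one might write:

# Cache seeded with the first two Fibonacci numbers (using the F(0)=0, F(1)=1 convention).
fib_cache = [0, 1]

def fib(n):
    """Return the n-th Fibonacci number, reusing previously computed values."""
    while len(fib_cache) <= n:
        fib_cache.append(fib_cache[-1] + fib_cache[-2])
    return fib_cache[n]

print(fib(10))   # 55
print(fib(100))  # 354224848179261915075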
|
Latent Variable Mixture Modeling: When Heterogeneity Requires Both Categories and Dimensions
Dichotomies come easily to us, especially when they are caricatures as shown in this cartoon. These personality types do seem real, and without much difficulty, we can anticipate how they might react
in different situations. For example, if we were to give our Type A and Type B vacationers a checklist to indicate what activities they would like to do on their next trip, we would expect to observe
two different patterns. Type A would select the more adventurous and challenging activities, while Type B would pick the opposite. That is, if we were to array the activities from relaxing to active,
our Type A would be marking the more active items with the relaxing portion of the scale being checked by our Type B respondent. Although our example is hypothetical, market segmentation in tourism
is an ongoing research area, as you will see if you follow the link to an article by Sara Dolnicar, whose name is associated with several R packages.
Yet, personality type does not explain all the heterogeneity we observe. We would expect a different pattern of check marks for Type A and B, but we would not be surprised if "type" were also a
matter of degree with respondents more or less reflecting their type. The more "devout" Type A checks only the most active items and rejects the less active along with all the passive activities.
Similarly, the more "pure" Type B is likely to want only the most relaxing activities. Thus, we might need both personality type (categorical) and intensity of respective type (continuous) in order
to explain all the observed heterogeneity.
Should we think of this dimension as graded membership in a personality type so that we need to represent personality type by a probability rather than all or none types? I would argue that vacation
preference can be described more productively by two latent variables: a categorical framing or orientation (Type A vs. Type B) and a continuous latent trait controlling the expression of the type
(perhaps a combination of prior experience plus risk aversion and public visibility). It's a two-step process. One picks a theme and then decides how far to go.
Of course, some customers might be "compromisers" and be searching for the ideal vacation that would balance active and relaxing, that is, the "just right" vacation. In such a case we would need an ideal point item response model (e.g., 2013 US Senate ideal points, with R code for the analysis). However, to keep the presentation simple, we will assume that our vacationers want only a short trip with a single theme: either a relaxing break or an active getaway. To clarify,
voting for a U.S. Senator is a compromise because I select (vote for) a single individual who is closest to my positions on a large assortment of issues. Alternatively, a focused purchase such as a
short vacation can seek to accomplish only one goal (e.g., most exciting, most relaxing, most educational, or most romantic).
In a previous post, I showed how brand perceptions could be modeled using item response theory. Individuals do see brands differently, but those differences raise or lower all the attributes together rather than
changing the rank ordering of the items. For instance, regardless of your like or dislike for BMWs, everyone would tend to see the attribute "well-engineered" as more associated with the car maker
than "reasonably priced." Brand perceptions are constrained by an associative network connecting the attributes so that "a rising tide lifts all boats". As we have seen in our Type A-Type B example,
this is not the case with preferences, which can often be ordered in reverse directions.
Where's the Latent Variable Mixture?
Heterogeneity among our respondents is explained by two latent variables. We cannot observe personality type, but we believe that it takes one of two possible values: Type A or Type B. If I were to
select a respondent at random, they would be either Type A or Type B. In the case of our vacationers, being Type A or Type B would mean that they would see their vacation as an opportunity for
something challenging or as a chance to relax. Given their personality type frame, our respondents need to decide next the intensity of their commitment. Because intensity is a continuous latent
variable, we have a latent variable mixture.
Let's look at some R code and see if some concrete data will help clarify the discussion. We can start with a checklist containing 8 items ranging from relaxing to active, and we will need two groups
of respondents for our Type A and Type B personalities. The sim.rasch() function from the psych package will work.
library(psych)   # provides sim.rasch() for simulating Rasch-model item responses
TypeA<-sim.rasch(nvar=8, n=100,
d=c(+2.0, +1.5, +1.0, +0.5, -0.5, -1.0, -1.5, -2.0))
TypeB<-sim.rasch(nvar=8, n=100,
d=c(-2.0, -1.5, -1.0, -0.5, +0.5, +1.0, +1.5, +2.0))
apply(TypeA$items, 2, table)
apply(TypeB$items, 2, table)
The sim.rasch() function begins with a series of default values. By default, our n=100 Type A and Type B vacationers come from a latent trait that is normally distributed with mean 0 and standard
deviation 1. This latent trait can be thought of intensity, as we will soon see. So far the two types are the same, that is, two random samples from the same normal latent distribution. Their only
difference is in d, which stands for difficulty. The term "difficulty" comes to us from educational testing where the latent trait is ability. A student has a certain amount of latent ability, and
each item has a difficulty that "tests" the student's latent ability. Because latent ability and item difficulty are measured on the same scale, a student with average ability (mean=0) has a 50-50
chance of answering correctly an item of average difficulty (d=0). If d is a negative value, then the item is easier and our average student has a better than 50-50 chance of getting it right. On the
other hand, items with positive d are more difficult and pose a greater challenge for our average student.
In our example, the eight activities flow from more relaxing to more active. Let's take a look at how an average Type A and Type B would respond to the checklist. Our average Type A has a latent
intensity of 0, so the first item is a severe test with d=+2, and they are not likely to check it. The opposite is true for our average Type B respondent since d=-2 for their personality type.
Checking relaxing items is easy for Type B and hard for Type A. And this pattern continues with difficulty moving in opposite directions for our two types until we reach item 8, which you will recall
is the most active activity. It is an easy item for Type A (d=-2) because they like active. It is a difficult item for Type B (d=+2) because they dislike active. As a reminder, if our vacationers
were seeking balance or if our items were too extreme (i.e., more challenging than Type As wanted or more relaxing than sought by our Type Bs), we would be fitting an ideal point model.
The sim.rasch() function stores its results in a list so that you need to access $items in order to retrieve the response patterns of zeros and ones for the two groups. If you run the apply functions
to get your tables (see below), you will see that the frequency of checks (response=1) increases across the 8 items for Type A and decreases for Type B, as one might have expected. Clearly, with
real data we know none of this and all we have is a mixture of unobserved types of unobserved intensity.
Clustering and Other Separation Mechanisms
Unaware that our sample is a mixture of two separate personality types, we would be misled looking at the aggregate descriptive statistics. The total column suggests that all the activities are
almost equally appealing when clearly that is not the case.
Number Checking Each Item
Item Total Type A Type B
V1 100 14 86
V2 103 22 81
V3 96 28 68
V4 104 43 61
V5 94 60 34
V6 114 79 35
V7 94 75 19
V8 105 87 18
n 200 100 100
To get a better picture of the mixture of these two groups, we can look at the plot of all the respondents in the two-dimensional space formed by the first two principal components. This is fairly
easy to do in R using prcomp() to calculate the principal component scores and plotting the unobserved personality type (which we only know because the data are simulated) along with arrows
representing the projection of the 8 items onto this space.
arrows(0,0,pc$rotation[,1],pc$rotation[,2], length=.1)
The resulting plot shows the separation between our two personality types (labeled 1 and 2 for A and B, respectively) and the distortion in the covariance structure that splits the 8 items into two
factors (the first 4 items vs. the last 4 items).
Obviously, we need to "unmix" our data, and as you might have guessed from the well-defined separation in the above plot, any cluster analysis ought to be able to successfully recover the two
segments (had we known that the number of clusters was two). K-means works fine, correctly classifying 94% of the respondents.
Had we stopped with a single categorical latent variable, however, we would have lost the ordering of our items. This is the essence of item response theory. Saying "It's the ordering of the items,
stupid" might be a bit strong but may be required to focus attention on the need for an item response model. In addition to a categorical type, our data require a dimension or continuous latent
variable that uses the same scale to differentiate simultaneously among items and individuals. Categories alone are not sufficient to describe fully the heterogeneity in our data.
The R package psychomix
The R package
provides an accessible introduction to the topic of latent variable mixture modeling without needing to understand all the details. However,
provides a readable overview for those wanting a more comprehensive summary. Searching his chapter for "IRT" should help one see where psychomix fits into the larger framework of latent variable
mixture modeling.
We will be using the raschmix() function to test for 1 to 3 mixtures of different difficulty parameters. Obviously, we never know the respondents personality type with real data. In fact, we may not
know if we have a mixture of different types at all. All we have is the response patterns of check marks across the 8 items. The function raschmix() must help us decide how many, if any, latent
categories and the item difficulty parameters in each category. Fortunately, it all becomes clear with an example, so here is the R code to run the analysis.
library(psychomix)                  # provides raschmix() and getModel()
mixture<-raschmix(more, k=1:3)      # 'more' is the 200 x 8 item response matrix
## inspect results
mixture
## select best BIC model
best <- getModel(mixture, which = "BIC")
summary(best)
## segment holds the class memberships from clusters(best); person counts the
## items each respondent checked, so 0 and 8 (none/all) are excluded
table(group,segment[(person>0 & person<8)])
At a minimum, the function raschmix() needs a data matrix [not a data frame, so use as.matrix()] and the number of mixtures to be tested. We have set k=1:3, so that we can compare the BIC for 1, 2,
and 3 mixtures. The results have been stored in a list called mixture, and one extracts information from the list using methods. For example, typing "mixture" (the name of the list object holding the
results) will produce the summary fit statistics.
iter converged k k0 logLik AIC BIC ICL
1 2 TRUE 1 1 -1096 2223 2272 2222
2 10 TRUE 2 2 -949 1956 2051 2011
3 76 TRUE 3 3 -938 1963 2103 2083
Although one should use indexes such as the BIC cautiously, these statistics suggest that there are two mixtures. Because raschmix() relies on an expectation-maximum (EM) algorithm, you ought not be
surprised if you get somewhat different results when you run this code. In fact, the solution for the 3-mixture model may not converge under the default 100 iteration limit. We use the getModel()
method to extract the two mixture model with the highest BIC and print out the solution with summary().
prior size post>0 ratio
Comp.1 0.503 97 178 0.545
Comp.2 0.497 97 169 0.574
Item Parameters:
Comp.1 Comp.2
V1 2.35 -2.24
V2 1.64 -1.72
V3 1.21 -0.83
V4 0.38 -0.41
V5 -0.35 0.79
V6 -1.58 0.80
V7 -1.17 1.65
V8 -2.47 1.97
We started with 200 respondents, but six respondents were removed because three checked none of the items and three checked all of the items. That is why the sizes for the two mixture components do
not sum to 200. The item parameters are the item difficulties that we specified with our d argument in sim.rasch() when we randomly generated the data. The first component looks like our Type A
personality with the easiest to check activities toward the end of the list with the negative difficulty parameters. Type B is the opposite with the more passive activities at the beginning of the
list being the easiest to check because they are the most preferred by this segment.
Finally, the last three lines of R code first identifies the cluster membership for every respondent using the psychomix method clusters() and then verifies its accuracy with tables(). As we saw with
k-means earlier, we are able to correctly identify almost all the personality types when the two segments are well-separated by the reverse ordering of their difficulty parameters.
Of course, real data can be considerably messier than our simulation with sim.rasch(), requiring us to think hard before we start the analysis. In particular, items must be carefully selected since
we are attempting to separate respondents using different response generation processes based solely on their pattern of checked boxes. Fortunately, markets have an underlying structure that helps us
understand how consumer heterogeneity is formed and maintained.
|
Harrisburg, TX Trigonometry Tutor
Find a Harrisburg, TX Trigonometry Tutor
...I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of
your own home at a schedule convenient to you. I operate my business with the highest ethical standar...
35 Subjects: including trigonometry, chemistry, calculus, physics
...My name is Eric, and my love of teaching comes from the happiness that I get when I am able to push myself to improve and achieve my full potential. This is not solely a personal pleasure as I
also love to see my students growing and thriving! I am currently a senior at Rice University, majoring in Chemical Engineering.
22 Subjects: including trigonometry, chemistry, geometry, physics
I have been a private math tutor for over ten(10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including trigonometry, calculus, geometry, algebra 1
Your progress is what is most important in all this. I am just here to help. I can help in the math related fields.
17 Subjects: including trigonometry, calculus, physics, geometry
...As a tutor first I will try to identify the real problems of the student by asking good questions about the subject matter they are interested. I will try to help my students to reinforce
basic ideas, build connections among concepts, and push them to apply their knowledge to solve problems. I am interested in students who can try to do what they should do.
11 Subjects: including trigonometry, calculus, geometry, algebra 1
Related Harrisburg, TX Tutors
Harrisburg, TX Accounting Tutors
Harrisburg, TX ACT Tutors
Harrisburg, TX Algebra Tutors
Harrisburg, TX Algebra 2 Tutors
Harrisburg, TX Calculus Tutors
Harrisburg, TX Geometry Tutors
Harrisburg, TX Math Tutors
Harrisburg, TX Prealgebra Tutors
Harrisburg, TX Precalculus Tutors
Harrisburg, TX SAT Tutors
Harrisburg, TX SAT Math Tutors
Harrisburg, TX Science Tutors
Harrisburg, TX Statistics Tutors
Harrisburg, TX Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Alta Loma, TX trigonometry Tutors
Bordersville, TX trigonometry Tutors
El Jardin, TX trigonometry Tutors
Galena Park trigonometry Tutors
Greenway Plaza, TX trigonometry Tutors
Howellville, TX trigonometry Tutors
Inks Lake Village, TX trigonometry Tutors
Lomax, TX trigonometry Tutors
Oak Forest, TX trigonometry Tutors
Pine Valley, TX trigonometry Tutors
Satsuma, TX trigonometry Tutors
Sunny Side, TX trigonometry Tutors
Sylvan Beach, TX trigonometry Tutors
Timber Cove, TX trigonometry Tutors
Tod, TX trigonometry Tutors
|
Integral Help
1. October 8th 2012, 10:08 PM #1
2. October 8th 2012, 10:21 PM #2
Re: Integral Help
Hey ineedhelplz.
The factor is taken into account within the limits of the new integral. Let z*5 = x. 5*dz = dx/5. If the old limits for z were [0,a] then when z = 0, x = 0 and when z = a, x = 5a. This is
just an integral substitution that retains the value of the integral, but compensates for the change in variable by changing the limits.
3. October 8th 2012, 10:31 PM #3
Re: Integral Help
Where is the z coming from?
Sorry I'm a bit confused
4. October 8th 2012, 10:38 PM #4
Re: Integral Help
It's just a dummy variable. where we define 5*z = x or z = x/5 and look at the integral in relation to x/5 instead of x. So if you think of the first integral in terms of z = x and the next
integral in terms of x = 5*z you go from a normal f(z) integral to a f(5*z) or f(x/5) which is what is in the integral on the RHS.\
You can go from f(x) to f(x/5) but I've just introduced a dummy variable so that you don't confuse the two x's as being the same.
The integration substitution formula is that if we have an integral of Integral [a,b] f(g(x))g'(x)dx then we make the substitution u = g(x) and this gives us the new integral [g(a),g(b)] f(u)
du and both integrals have the same value.
5. October 8th 2012, 10:45 PM #5
Re: Integral Help
We are asked to evaluate:
$2\int_0^{5a}\left[f\left(\frac{x}{5} \right)+3\right]dx=2\int_0^{5a}f\left(\frac{x}{5} \right)\,dx+6(5a-0)$
We may make a substitution:
$u=\frac{x}{5}\,\therefore\,dx=5\,du$ and we have:
$2\int_0^{5a}\left[f\left(\frac{x}{5} \right)+3\right]dx=10\int_0^{a}f(u)\,du+30a=10a+30a=40a$
As mentioned, the variable of integration is a "dummy" variable, as it gets integrated out in the evaluation of the definite integral.
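A numerical sanity check is easy (Python; it assumes the unquoted original problem stated that the integral of f from 0 to a equals a, which is what makes 10 times that integral equal 10a):

from scipy.integrate import quad

a = 2.0
f = lambda x: 1.0   # any f whose integral over [0, a] equals a; the constant 1 works

value, _ = quad(lambda x: 2 * (f(x / 5) + 3), 0, 5 * a)
print(round(value, 6), 40 * a)   # both 80.0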
|
Overview - BASIC MATH ASSESSMENTS - NUMBER OPERATIONS
Basic Math Assessments reinforce critical basic math skills with these supplements in test-prep format. Six reproducible books offer pre- and post-assessments focused on testing the skills covered in
the Basic Math Practice Binders and Basic Math Warm-Ups. Included are correlations and progress charts.
Two leveled pre-assessments and two leveled post-assessments are provided for specific skills. Level A and Level B offer a variety of problems on the same skill. Each book also contains benchmark
assessments to allow the teacher to assess students in previously learned skills.
Number Concepts
Features a unit for each of the following: shapes, patterns, counting, greater than and less than, one-to-one correspondence, place value, and ordinal numbers.
Number Operations
Features a unit for each basic number operation. Units for addition and subtraction range from basic facts to regrouping three-digit numbers; multiplication and division contains problems that teach
basic facts.
Measurement
Covers customary linear measurement, weight, capacity, time, and temperature. Activities with many visual aids address nonstandard and standard units of measure and address relationships between units.
Tables, Charts, and Graphs
Covers sorting; tables; bar, line, and circle graphs; pictographs; and charts.
Rounding, Reasonableness, and Estimation
Covers rounding whole numbers and money; reasonableness to find an appropriate solution, unit of measure, or monetary amount; and estimation of solutions, measurements, and monetary amounts.
Fractions, Decimals, and Percents
Covers adding, subtracting, multiplying, and dividing fractions (simple, mixed, and improper), decimals (numbers and monetary amounts), and percents (0-100%). Most activity sheets include rules,
explanations, and examples to aid in students' understanding.
Reading Level: 2-4
Interest Level: 1-10
|
Another Look at Launch Speed in Angry Birds | Science Blogs | WIRED
By Rhett Allain | 01.09.12 | 8:30 am
The last time I looked at the launch speed in Angry Birds, there was a problem. The problem was that it wasn’t trivial to get the position-time data of the flung birds. But that was quite some time
ago. That was before the Google Chrome version of Angry Birds. With this, I can use screen capture software with my computer.
There is another reason to revisit the launch speed in Angry Birds. The result from my last attempt wasn’t as clear as I had hoped. If the birds were shot from a sling shot that acted like a real
spring, higher launch angles should have lower launch speeds (since the bird must move up vertically during the launch). I won’t re-derive this, but if the sling shot is indeed a spring, the
following relationship should be true.
I guess I should say that s is the distance the sling shot is pulled back and k is the spring constant. But the point is that if I make a plot of launch velocity squared versus the sine of the launch
angle, it should be a linear function. Here is the plot I first created.
My conclusion was that the launch speed was constant and independent of the angle even though there was one data point quite out of line.
Second Try
How about more data and better data? I want to look at that same plot, but what do I need to collect from each shot? I need:
• The x-velocity of the bird. This is pretty easy to get since this should be constant. The slope of the x-t plot will be the x-velocity.
• The y-velocity of the bird at launch. This isn’t as easy. I can do a couple of things: I could look at the maximum height of the bird or find the velocity from a quadratic fit to the data. Both
of these will take some time. A third way would be to just look at the first few data points and use change in y position over change in time.
• The launch angle. If I have both the horizontal and vertical velocities — this is pretty straightforward.
Let me test the vertical velocity measurement. Here is a plot of the vertical position for a particular shot:
Tracker Video can fit a quadratic function y(t) = a t^2 + b t + c to the data. The velocity would just be the first derivative of this function with respect to time, so I get:
v_y(t) = 2 a t + b
CAUTION. The variable a is NOT the acceleration but rather the coefficient in front of the t^2 term in the fit. But moving on. Looking back at the data, I see that the bird was launched at a time of 57.87 seconds. So, putting in this time and the values of the fitting coefficients, I get an initial y-velocity of 20.76 m/s.
What about another method? What if I just fit a linear function to the first two data points? Like this:
This gives an initial y-velocity of 20.65 m/s. Not too bad (and much quicker).
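For reference, here is how the two velocity components turn into a launch speed and angle (Python; the numbers below are made up for illustration - only the y-velocity is meant to echo the value quoted above):

import math

vx = 10.3                      # m/s, slope of the x-t plot (assumed value)
t1, y1 = 57.87, 0.00           # first two tracked points (assumed values)
t2, y2 = 57.90, 0.62
vy = (y2 - y1) / (t2 - t1)     # about 20.7 m/s, close to the fit above

speed = math.hypot(vx, vy)                 # launch speed
angle = math.degrees(math.atan2(vy, vx))   # launch angle
print(round(vy, 2), round(speed, 1), round(angle, 1))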
More Data
OK, I have more data. Now for the plot. This is the launch velocity squared versus the sine of the launch angle. Remember, if the sling shot acts like a real sling shot, this should be linear.
Curses! Foiled again! It is that one dumb data point that is off. You know why? It is because I try to be cool. I think, “Hey, how about a shoot an angry bird down?” This is what happens. But I have
one more trick. Let me show a distribution of the starting velocities for these shots.
From this data, I get an average launch speed of 23.1 m/s with a standard deviation of 2.4 m/s (even with that crazy data point). So, I am sticking with my original post. The launch speed in Angry
Birds is constant. Maybe for homework, you can compare this to the launch speed for the other birds. (This data just looked at the red bird.) I suspect they are all the same.
Oh, one final tip. If you want to collect data from Angry Birds in Chrome, zoom the screen all the way out before you shoot the bird. This way, the background in the game will stay in the same place
and you won’t have to move the coordinate system.
Top image: j_10suited/Flickr/CC-licensed
|
Ballistic Pendulum
The Ballistic Pendulum
Objective: After you finish this lab you should know:
1. The difference between elastic and inelastic collisions.
2. Which conservation laws apply to each type of collision.
3. Two ways to find the initial velocity of a projectile.
A collision between two bodies can be elastic or inelastic. In an elastic collision the two bodies rebound with no loss of kinetic energy. In an inelastic collision one or both bodies can be
deformed, or they can stick together. Either of these processes absorbs some of the systems kinetic energy. The system can also dissipate kinetic energy in producing sound or heat. Since there are no
external forces acting on the system at the time of the collision, linear momentum must be conserved in both cases.
In this lab you will study an inelastic collision using a Blackwood ballistic pendulum. The colliding bodies are a small metal ball, which is fired from a spring loaded gun, and a metal receptacle,
or catcher. The receptacle is also the bob of a simple pendulum. Initially the pendulum is at rest. When the gun fires, the ball collides with the pendulum and is trapped in the catcher which then
starts to swing. A ratchet and pawl system catches the pendulum at the height of its swing.
The best way to understand this experiment is to divide it into three separate events. First, the gun fires and the ball of mass m travels horizontally with initial velocity U. In the absence of external forces, the horizontal component of its velocity will not change. The horizontal component of the ball's initial linear momentum is:
p = mU     (1)
In the second event, the ball collides with the "catcher" of mass M and is trapped by the spring. The system loses kinetic energy in the deformation of the spring and the creation of sound. Linear momentum, however, must be conserved. The pendulum, of mass (M + m), moves with a new horizontal velocity, V. The momentum of the system is now:
p = (M + m)V     (2)
Since the two momenta are equal, we can solve for U:
U = ((M + m)/m)V     (3)
Finally, the system acts like a simple pendulum. It moves upward and is caught by the ratchet at the highest part of its swing. The ratchet and pawl are designed to exert negligible force on the pendulum while it is moving upward, so mechanical energy is conserved. This means that the pendulum's kinetic energy at the bottom of its swing must equal its potential energy at the top of its swing:
(1/2)(M + m)V^2 = (M + m)gh     (4)
The change in height, h, of the center of mass can be easily measured. We can then solve for V:
V = sqrt(2gh)     (5)
Substitute this value for V into equation 3:
U = ((M + m)/m) sqrt(2gh)     (6)
There is a different, more direct way to measure the initial velocity of the ball. We can fire the gun horizontally from a known height and measure its range, R. The velocity, U, is given by:
U = R/T     (7)
where T is the time of flight. We can solve for the time of flight because the ball's horizontal and vertical motion are independent. As it moves horizontally, the ball also falls because of the influence of gravity. If the ball drops a distance Y:
Y = (1/2)gT^2     (8)
Solve this for T and substitute into equation 7 to solve for U:
U = R sqrt(g/(2Y))     (9)
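As a worked sketch of both calculations, here is a short Python script; the measurement values are hypothetical (chosen only to show the arithmetic of equations 6 and 9), not data from this lab.

import math

g = 9.8                       # m/s^2

# Part I: pendulum method, U = ((M + m)/m) * sqrt(2 g h)   -- equation 6
m = 0.066                     # ball mass in kg (hypothetical)
M = 0.180                     # pendulum (catcher) mass in kg (hypothetical)
h = 0.085                     # rise of the center of mass in m (hypothetical)
U_pendulum = ((M + m) / m) * math.sqrt(2 * g * h)

# Part II: projectile method, U = R * sqrt(g / (2 Y))      -- equation 9
Y = 0.95                      # drop height in m (hypothetical)
R = 2.30                      # horizontal range in m (hypothetical)
U_projectile = R * math.sqrt(g / (2 * Y))

percent_diff = abs(U_pendulum - U_projectile) / ((U_pendulum + U_projectile) / 2) * 100
print(round(U_pendulum, 2), round(U_projectile, 2), round(percent_diff, 1))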
Use MKS units. Measure masses to 0.001 kg. Measure lengths to 0.001 m.
NEVER POINT THE GUN AT ANYONE !!!
Your instructor will show you how to load and fire the gun. Occasionally the ball will miss the catcher; therefore do not fire it if anyone is within range of the ball.
Be careful when you remove the ball from the catcher; push the spring inward to release the ball. The spring will break if you pull it backwards.
Part I
1. Unscrew the set screw that holds the pendulum at the top of the apparatus. Measure the mass of the ball (m) and of the ball and pendulum together (M + m). Reassemble the apparatus. Make sure that
the pendulum can swing freely with a minimum of side-to-side motion.
2. Fire the gun and record the number of the notch in which the pendulum comes to rest. Repeat twice. If the notch numbers vary greatly, or if you have other problems, consult your instructor.
3. While the pendulum is hanging freely, measure the height of the center of mass marker from the base of the apparatus.
4. Average the notch numbers and raise the pendulum to that notch. Measure the height of the center of mass marker from the base of the apparatus. The difference between the two heights is h.
5. Use equation 6 to find the initial velocity of the ball.
Part II
6. Leave the pendulum raised. Position the apparatus near the end of the table so that it is aimed out across the floor with a range of about three meters. Shoot the ball and note where it hits the
floor. Make sure that the gun does not move. Use masking tape to secure a piece of cork pad with a carbon set on it to the floor where the ball landed.
7. Fire the ball three more times or until you have three marks on the carbon set.
8. Put the gun back on the gun rod but do not cock it. Measure the distance from the bottom of the ball to the floor. This is Y. Also, measure the horizontal distance from the point on the floor
directly below the ball to the center of the marks on the carbon set. This distance is R.
9. Use equation 9 to find U. Find the percent difference between the two values for U.
1. Only a fraction, F, of the ball's kinetic energy is transferred to the ball and catcher during the collision:
F = [(1/2)(M + m)V^2] / [(1/2)mU^2]     (10)
With a bit of algebra you can show that:
F = m/(M + m)     (11)
What was the fraction of kinetic energy transferred during the collision?
2. Why was the distance Y measured from the bottom of the ball rather than the center of mass?
3. In part II, if the velocity of the ball had been doubled due to a stronger gun spring, how would the ball's time of flight have changed?
4. Which method of finding the velocity of the ball do you think was more accurate? Explain why.
The Ballistic Pendulum
Part I
mass of ball (m) __________
mass of ball and catcher (m + M) __________
notch 1 _____ notch 2 _____ notch 3 _____
average notch number _____
resting center of mass height _______________
raised center of mass height _______________
change in height __________
U = _______________
Part II
Y = _______________
R = _______________
U = _______________
% difference = _______________
|
Mplus Discussion >> SEM interaction analyses and centering of indicators
Anonymous posted on Tuesday, July 19, 2005 - 1:55 pm
I want to investigate potential interaction effects between two latent continuous variables on a dependent latent continuous variable in a SEM model (similar to textbook example 5.13). Do you recommend centering (= subtracting the mean of) the observed indicators of the latent constructs before running the model, as is often recommended in textbooks for conventional moderator analyses?
bmuthen posted on Tuesday, July 19, 2005 - 2:18 pm
No, that is not necessary when using the Mplus approach (I think Klein advocated this but that was using a slightly different parameterization) - the intercept parameters in the model will capture
the indicator means properly even without centering them.
Elmar posted on Friday, September 02, 2005 - 2:54 am
may I ask a follow up:
The setting:
I have hierarchically structured data (280 persons in 90 organizations). I now need to model interaction effects between two observed continuous indicators, that is, one individual- and one organisational variable.
My question:
Does the Mplus-approach in the Complex-procedure imply that I do not need to center the variables?
Many thanks,
bmuthen posted on Friday, September 02, 2005 - 8:59 am
You can handle this in two ways, using Type=Twolevel or Type = Complex. With two-level analysis the interaction is captured by having a level 1 random slope for your individual-variable relationship
which on level 2 is regressed on the between-level (organizational) variable. With complex analysis you simply create a product variable from the individual and organizational variables using Define;
no centering needed, although that can help the interpretation.
sammy posted on Tuesday, February 12, 2008 - 1:16 am
is it also possible to analyze the effect of interaction terms in a path analysis when the data-file consists of a correlation matrix?
Is it correct to calculate the correlation between the interaction terms and the rest of the variables and insert this new correlation matrix?
Thank you for your help!
Linda K. Muthen posted on Tuesday, February 12, 2008 - 9:37 am
In Mplus you need raw data for models with interactions.
Elizabeth Penela posted on Thursday, June 07, 2012 - 5:57 am
I am creating an interaction term between a latent factor and an observed variable using the xwith command. From reading posts above, it seems that it is not necessary to center the indicators of the
latent variable.
However - is it necessary to center the observed variable?
Thanks for your help.
Linda K. Muthen posted on Thursday, June 07, 2012 - 7:28 am
Right. Center only the observed variable.
Elizabeth Penela posted on Thursday, June 07, 2012 - 8:17 am
OK- great. Thanks!
Stefan Schneider posted on Wednesday, October 16, 2013 - 1:27 pm
I am estimating a model with two latent variables and their interaction ("xwith") predicting an observed variable. Both latent variables are standardized with a mean of zero and variance of one.
Based on the short course topic 3 handouts (page 168), I am assuming that the estimated intercept of the observed outcome variable should represent its mean in this case (given that the mean of the
latent variables is zero). Is this correct? I'm unsure because I noticed that adding the latent variable interaction to the model slightly changes the estimated intercept of the outcome variable,
whereas the intercept is virtually identical to the observed mean of the outcome in a model without the interaction.
Thanks in advance for your input!
Bengt O. Muthen posted on Thursday, October 17, 2013 - 6:17 am
The mean of an interaction is not zero even though each of the two variables has mean zero. This is described in the FAQ "Latent variable interactions." The mean is a function of the covariance of
the two variables.
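(Not part of the original thread: a quick numpy simulation, with an assumed correlation of 0.5, illustrating the point that for two zero-mean variables the mean of their product equals their covariance, so the interaction term is not mean-zero.)

import numpy as np

# Illustration only (not Mplus output): E[x*y] = Cov(x, y) when both means are zero.
rng = np.random.default_rng(0)
cov = [[1.0, 0.5], [0.5, 1.0]]              # assumed covariance matrix
x, y = rng.multivariate_normal([0, 0], cov, size=200_000).T
print(x.mean(), y.mean())                   # both approximately 0
print((x * y).mean(), np.cov(x, y)[0, 1])   # both approximately 0.5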
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=11&page=740","timestamp":"2014-04-18T00:32:10Z","content_type":null,"content_length":"29660","record_id":"<urn:uuid:f452bf4d-4627-4be9-a1e3-32f03f607577>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometric Programming
- In Proceedings of the 1998 IEEE/ACM international conference on Computer-aided design , 1997
"... This paper considers simultaneous gate and wire sizing for general VLSI circuits under the Elmore delay model. We present a fast and exact algorithm which can minimize total area subject to
maximum delay bound. The algorithm can be easily modified to give exact algorithms for optimizing several othe ..."
Cited by 80 (8 self)
This paper considers simultaneous gate and wire sizing for general VLSI circuits under the Elmore delay model. We present a fast and exact algorithm which can minimize total area subject to maximum
delay bound. The algorithm can be easily modified to give exact algorithms for optimizing several other objectives (e.g. minimizing maximum delay or minimizing total area subject to arrival time
specifications at all inputs and outputs). No previous algorithm for simultaneous gate and wire sizing can guarantee exact solutions for general circuits. Our algorithm is an iterative one with a
guarantee on convergence to global optimal solutions. It is based on Lagrangian relaxation and "one-gate/wire-at-a-time" local optimizations, and is extremely economical and fast. For example, we can
optimize a circuit with 13824 gates and wires in about 13 minutes using under 12 MB memory on an IBM RS/6000 workstation. 1 Introduction Since the invention of integrated circuits almost 40 years
ago, gate si...
- IEEE Transactions on Computer-Aided Design , 2001
"... We describe a new method for determining component values and transistor dimensions for CMOS operational ampli ers (op-amps). We observe that a wide variety of design objectives and constraints
have a special form, i.e., they are posynomial functions of the design variables. As a result the ampli er ..."
Cited by 51 (10 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have
a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric
programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal trade-offs among
competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from
specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute
globally optimal trade-off curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs,
i.e., designs guaranteed to meet the specifications for a
- IEEE Journal of Solid-State Circuits , 2000
"... We present a technique for enhancing the bandwidth of gigahertz broad-band circuitry by using optimized on-chip spiral inductors as shunt-peaking elements. The series resistance of the on-chip
inductor is incorporated as part of the load resistance to permit a large inductance to be realized with mi ..."
Cited by 12 (3 self)
We present a technique for enhancing the bandwidth of gigahertz broad-band circuitry by using optimized on-chip spiral inductors as shunt-peaking elements. The series resistance of the on-chip
inductor is incorporated as part of the load resistance to permit a large inductance to be realized with minimum area and capacitance. Simple, accurate inductance expressions are used in a lumped
circuit inductor model to allow the passive and active components in the circuit to be simultaneously optimized. A quick and efficient global optimization method, based on geometric programming, is
discussed. The bandwidth extension technique is applied in the implementation of a 2.125-Gbaud preamplifier that employs a common-gate input stage followed by a cascoded common-source stage. On-chip
shunt peaking is introduced at the dominant pole to improve the overall system performance, including a 40% increase in the transimpedance. This implementation achieves a 1.6-kΩ transimpedance
and a 0.6- A i...
- IEEE Trans. Computer-Aided Design , 1999
"... Abstract—In this paper, we consider the problem of interconnect delay minimization by simultaneous buffer and wire sizing under the Elmore delay model. We first present a polynomial time
algorithm SBWS to minimize the delay of an interconnect wire. Previously, no polynomial time algorithm for the pr ..."
Cited by 11 (0 self)
Abstract—In this paper, we consider the problem of interconnect delay minimization by simultaneous buffer and wire sizing under the Elmore delay model. We first present a polynomial time algorithm
SBWS to minimize the delay of an interconnect wire. Previously, no polynomial time algorithm for the problem has been reported in the literature. SBWS is an iterative algorithm with guaranteed
convergence to the optimal solution. It runs in quadratic time and uses constant memory for computation. Experimental results show that SBWS is extremely efficient in practice. For example, for an
interconnect of 10 000 segments and buffers, the CPU time is only 0.255 s. We then extend our result to handle interconnect trees. We present an algorithm SBWS-T which always gives the optimal
solution. Experimental results show that SBWS-T is faster than the greedy wire sizing algorithm [2] in practice. Index Terms — Buffer sizing, interconnect, performance optimization, physical design,
wire sizing.
- IN PROCEEDINGS OF THE TWELFTH ANNUAL CONFERENCE ON COMPUTATIONAL LEARNING THEORY , 1997
"... Measures for sparse best–basis selection are analyzed and shown to fit into a general framework based on majorization, Schur-concavity, and concavity. This framework facilitates the analysis of
algorithm performance and clarifies the relationships between existing proposed concentration measures use ..."
Cited by 6 (3 self)
Measures for sparse best–basis selection are analyzed and shown to fit into a general framework based on majorization, Schur-concavity, and concavity. This framework facilitates the analysis of
algorithm performance and clarifies the relationships between existing proposed concentration measures useful for sparse basis selection. It also allows one to define new concentration measures, and
several general classes of measures are proposed and analyzed in this paper. Admissible measures are given by the Schur-concave functions, which are the class of functions consistent with the
so-called Lorentz ordering (a partial ordering on vectors also known as majorization). In particular, concave functions form an important subclass of the Schur-concave functions which attain their
minima at sparse solutions to the best basis selection problem. A general affine scaling optimization algorithm obtained from a special factorization of the gradient function is developed and proved
to converge to a sparse solution for measures chosen from within this subclass.
"... Abstract: In this paper we first considered a maximum likelihood estimation of trip distribution problem and next use primal-dual geometric programming method the said trip distribution problem
converted into an entropy maximization trip distribution problem. Here the generalized cost function is as ..."
Abstract: In this paper we first consider a maximum likelihood estimation of the trip distribution problem, and then use the primal-dual geometric programming method to convert the said trip distribution problem
into an entropy maximization trip distribution problem. Here the generalized cost function is assumed in different forms, and then the said formulation is equivalent to a single- or
multi-objective entropy maximization trip distribution problem. We use a fuzzy mathematical programming method to show this equivalent problem formulation. In the present article we use the concept of
the multi-objective trip distribution problem.
- In Proc. IEEE ISCAS , 1996
"... In this paper, we present a fast algorithm for continuous wire-sizing under the distributed Elmore delay model. Our algorithm GWSA-C is an extension of the GWSA algorithm (for discrete
wire-sizing) in [CL93a]. GWSA-C is an iterative algorithm with guaranteed convergence to optimal wire-sizing soluti ..."
In this paper, we present a fast algorithm for continuous wire-sizing under the distributed Elmore delay model. Our algorithm GWSA-C is an extension of the GWSA algorithm (for discrete wire-sizing)
in [CL93a]. GWSA-C is an iterative algorithm with guaranteed convergence to optimal wire-sizing solutions. When specialized to discrete wiresizing problems, GWSA-C is an improved implementation of
GWSA, both in terms of runtime and memory usage. GWSA-C is extremely fast even on very large problems. For example, we can solve a 1,000,000 wire-segment problem in less than 3 minutes runtime on an
IBM RS/6000 workstation. This kind of efficiency in wire-sizing has never been reported in the literature. 1 Introduction Since interconnect delay has become the dominant factor of circuit delay in
deep sub-micron fabrication technology, it is clear that sizing the gates alone is not enough to achieve desirable circuit performance. As a result, wire-sizing for performance optimization is
currently a v...
"... Abstract: In this work an approach is proposed to solve geometric programming problems under uncertainty. The proposed approach derives the lower and upper bounds of the objective of geometric
programming problems with fuzzy parameters. A pair of two-level mathematical programs is formulated to calc ..."
Abstract: In this work an approach is proposed to solve geometric programming problems under uncertainty. The proposed approach derives the lower and upper bounds of the objective of geometric
programming problems with fuzzy parameters. A pair of two-level mathematical programs is formulated to calculate the lower and upper bounds of the objective value. The solution is in a range. Two
illustrative examples are presented to clarify the proposed approach. The problem of suppressing the crane load sway has been also considered as a practical application to illustrate the
effectiveness of the proposed approach. Key words: Optimization • geometric programming • fuzzy parameters • duality theorem • crane load sway
"... Abstract—We propose a new method of power control for interference-limited wireless networks with Rayleigh fading of both the desired and interference signals. Our method explictly takes into
account the statistical variation of both the received signal and interference power and optimally allocates ..."
Abstract—We propose a new method of power control for interference-limited wireless networks with Rayleigh fading of both the desired and interference signals. Our method explicitly takes into account
the statistical variation of both the received signal and interference power and optimally allocates power subject to constraints on the probability of fading induced outage for each transmitter/
receiver pair. We establish several results for this type of problem. We establish tight bounds that relate the outage probability caused by channel fading to the signal-to-interference margin
calculated when the statistical variation of the signal and intereference powers is ignored. This allows us to show that well-known methods for allocating power, based on Perron–Frobenius eigenvalue
theory, can be used to determine power allocations that are provably close to achieving optimal (i.e., minimal) outage probability. We show that the problems of minimizing transmitter power subject
to constraints on outage probability and minimizing outage probability subject to power constraints can be posed as a geometric program (GP). A GP is a special type of optimization problem that can
be transformed to a nonlinear convex optimization problem by a change of variables and therefore solved globally and efficiently by recently developed interior-point methods. We also give a fast
iterative method for finding the optimal power allocation to minimize outage probability. Index Terms—Fading channels, personal communication networks, power control, radio communication. I.
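As a toy illustration of the GP-to-convex transformation mentioned in that abstract (my own sketch with an assumed two-variable problem, not taken from any of the cited papers): minimize x1 + x2 subject to x1*x2 >= 1. Substituting x_i = exp(y_i) turns the posynomial objective into a convex log-sum-exp and the monomial constraint into an affine one.

import numpy as np
from scipy.optimize import minimize

# Sketch only: GP  minimize x1 + x2  s.t.  x1*x2 >= 1,  x1, x2 > 0.
# In log variables y_i = log(x_i), minimizing log(exp(y1) + exp(y2)) is convex
# and the constraint becomes y1 + y2 >= 0.
def log_objective(y):
    return np.logaddexp(y[0], y[1])        # log(exp(y1) + exp(y2))

cons = ({'type': 'ineq', 'fun': lambda y: y[0] + y[1]},)
res = minimize(log_objective, x0=[1.0, -0.5], constraints=cons)
print(np.exp(res.x), np.exp(res.fun))      # roughly [1, 1] and 2, the global optimum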
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=252941","timestamp":"2014-04-16T05:45:59Z","content_type":null,"content_length":"37067","record_id":"<urn:uuid:4e1f2564-6761-420c-ac71-ee115fecef66>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with Finding the Bounds of a Triple Integral
December 1st 2012, 11:41 PM #1
Junior Member
Aug 2012
Help with Finding the Bounds of a Triple Integral
Hey everyone,
I am given that a solid is bounded by: y= x^2-1, y+z=0, z=0.
I am asked to find the bounds of a triple integral with respect to dz, dy, dx. So, I basically traced each graph with respect to the xy, xz, and yz planes. The bounds that I got are: (x = -1 to x = 1), (y = x^2 - 1 to y = 0) and (z = 0 to z = -y). Can anyone tell me if I am approaching this problem correctly? I am not exactly sure if my bounds are correct.
Any help and feedback appreciated, thanks.
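(Not part of the original post: the proposed bounds can be sanity-checked numerically; they give a finite positive volume of 8/15.)

from scipy.integrate import tplquad

# Numerical check of the bounds above:
# V = int_{x=-1}^{1} int_{y=x^2-1}^{0} int_{z=0}^{-y} dz dy dx
vol, err = tplquad(lambda z, y, x: 1.0,
                   -1, 1,                              # x limits
                   lambda x: x**2 - 1, lambda x: 0.0,  # y limits
                   lambda x, y: 0.0, lambda x, y: -y)  # z limits
print(vol)   # about 0.5333 = 8/15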
|
{"url":"http://mathhelpforum.com/calculus/208864-help-finding-bounds-triple-integral.html","timestamp":"2014-04-18T17:00:38Z","content_type":null,"content_length":"29422","record_id":"<urn:uuid:911c0fe9-0946-4ffb-8745-d7153953f812>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Holbrook, MA Trigonometry Tutor
Find a Holbrook, MA Trigonometry Tutor
...I will travel throughout the area to meet in your home, library, or wherever is comfortable for you.Materials Physics Research Associate, Harvard, current Geophysics postdoctoral fellow, MIT,
2010-2012 Physics PhD, Brandeis University, 2010 -Includes experience teaching and lecturing Physics...
16 Subjects: including trigonometry, calculus, physics, geometry
...For ACT Math, I review with students the concepts they need to know for the test. We also discuss how to approach questions that they don't know how to do, using "backwards" strategies,
learning how to make good estimations and eliminating answer choices. Students often worry about the ACT Science section, but truth is, they don't need to study any science at all.
26 Subjects: including trigonometry, English, linear algebra, algebra 1
...I have taught a course involving statistics and concentrated in several stats courses at the PhD level. Statistics offers many new concepts which, depending how it's taught, can be overwhelming
at times. I have experience taking topics in statistics which students find challenging or intimidating and placing them in an easier to understand context.
24 Subjects: including trigonometry, chemistry, calculus, physics
...My name's Isaac, and I am a student in the School of Arts and Sciences at Boston College. I am majoring in psychology (BA) and minoring in Hispanic studies, but as a student at a liberal arts
college, I am continuing to explore different interests as much as my schedule allows me to in academic ...
30 Subjects: including trigonometry, English, reading, elementary (k-6th)
...If you or your children are interested in the language learning or culture learning, I hope I could be your or your children's assistant.Chinese is my mother language. I am fluent in Mandarin
and Cantonese. I took Chinese classes and obtained fairly good grades throughout elementary school and high school.
16 Subjects: including trigonometry, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/holbrook_ma_trigonometry_tutors.php","timestamp":"2014-04-18T00:53:05Z","content_type":null,"content_length":"24503","record_id":"<urn:uuid:0017b53f-f76d-46bd-a740-e8a925a5a1fc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can't make simple iteration work
August 3rd 2010, 11:06 PM #1
Can't make simple iteration work
I let $f(x)=e^x-3x-1=0$ for $x\in [1,2]$ and rewrite the equation such that $x=ln(3x+1)$.
I then rewrite $f(x)=0$ to the equivalent form $x-g(x)=0$ where $g(x)=ln(3x+1)$.
Since $g'(x)=\frac{3}{3x+1}$ I have that $g'(1)=\frac{3}{4}$ and $g'(2)=\frac{3}{7}$.
By the mean value theorem I know that there exists a number $\eta\in [1,2]$ such that $g(x)-g(y)=g'(\eta)(x-y)$ for any $x,y\in [1,2]$.
Since $g''(x)=-\frac{9}{(3x+1)^2}$, $g$ is monotonic decreasing and so
$g'(1)\geq g'(\eta) \geq g'(2)$.
$g$ is then a contraction with $L=g'(1)=\frac{3}{4}$.
I then look at the sequence $x_{k+1}=g(x_k)$ and write a small piece of code that prints out the first 20 iterations.
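(The code itself isn't shown; a minimal Python sketch that produces a table like the one below might be:)

from math import log

g = lambda x: log(3 * x + 1)   # fixed-point map for e^x - 3x - 1 = 0 on [1, 2]
x = 1                          # starting guess x_0
for k in range(21):
    print((k, x))
    x = g(x)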
(0, 1)
(1, 1.3862943611198906)
(2, 1.6407200993500939)
(3, 1.778701297541748)
(4, 1.846264051572333)
(5, 1.8777524625894917)
(6, 1.8920959939166664)
(7, 1.8985621417401013)
(8, 1.9014635019263759)
(9, 1.9027626111520601)
(10, 1.9033437520231324)
(11, 1.9036036091052895)
(12, 1.9037197823293694)
(13, 1.9037717150438356)
(14, 1.9037949295626633)
(15, 1.9038053065444669)
(16, 1.9038099450615822)
(17, 1.9038120184745679)
(18, 1.9038129452869204)
(19, 1.9038133595703104)
(20, 1.9038135447541562)
The maximum number of iterations I need to get an accuracy of $|x_k-\xi|\leq \epsilon$, where $x_k$ is the kth iteration and $\xi$ is the fixed point of $g$, is given by:
$k_0(\epsilon) \leq \left[ \frac{ln|x_1-x_0| - ln(\epsilon(1-L))}{ln(1/L)}\right] +1$
If I calculate this with $x_1 = 1.386294$, $x_0=1$, $L=3/4$ and $\epsilon=0.5*10^{-4}$ (to get an accuracy of 4 decimal digits) I get approx. 41 iterations.
But from my calculation of the sequence it seems as if the number of iterations needed for an accuracy of 4 decimal digits is not more than about 15.
Funny thing is, in the book I am reading they use $g(x)=ln(2x+1)$ and that seems to work..
Sorry for the long post, but I hope someone can help me out.
A few comments:
Why are you using the second derivative of $g$ to check the monotonicity of $g$? That information is contained in the first derivative. The function $g$ is actually monotone increasing. I think
you meant to say that $g'$ is monotone decreasing, which it is.
So you have proven that $g$ is a contraction mapping.
At the end, I think you're wondering why the actual number of iterations needed to converge is smaller than the theoretical number of iterations needed to converge. Is that correct? If so, I
would say that it's simply a fact that iteration schemes sometimes converge faster than theory says they will. All your theory tells you is that you won't need to go over a certain number of
iterations, which you didn't. So your theoretical bound on the number of iterations isn't very sharp. That's all. It doesn't strike me as a shortcoming. If you want your theoretical bound to be a
little sharper, try a few different starting points, and see how that affects iteration numbers. Sometimes if you start farther away from the fixed point, the number of iterations will go up
(though I'm guessing not by a whole lot!).
Does this make sense?
thanks for replying.
Yes, I meant that g'(x) is monotonically decreasing.
I did not know that the max theoretical number of iterations should deviate this much from the actual one.
I mean if I want an accuracy of 6 decimal digits I need about 20 iterations. The theory gives me 50-something. Can this really be so?
I mean if I want an accuracy of 6 decimal digits I need about 20 iterations. The theory gives me 50-something. Can this really be so?
Absolutely. For my Ph.D. dissertation, I proved a ridiculous bound on what's called the "chirp" in a fiber optic cable. I showed that solitons cease to exist when you chirp a signal enough. This
result was known experimentally for a long time, but I proved the first theoretical bound on it. The bound I proved was extremely non-sharp. We're talking orders of magnitude here. So, in
reality, solitons disappeared long before I said they would.
But this is the way math goes. Someone will hopefully come along and prove a better bound, maybe: like the better mousetrap.
Kind of surprising that such an un-sharp bound is in use. I guess it's better than no bound at all.
Thank you.
I guess it's better than no bound at all.
Exactly. Any bound at all can limit your search space and save time. Of course better bounds are, well, better.
You're welcome, and have a good one!
|
{"url":"http://mathhelpforum.com/calculus/152748-can-t-make-simple-iteration-work.html","timestamp":"2014-04-16T04:45:29Z","content_type":null,"content_length":"56589","record_id":"<urn:uuid:92eb6c55-d4a6-496b-8801-9fdb3a8c3b05>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Colmar Manor, MD Geometry Tutor
Find a Colmar Manor, MD Geometry Tutor
...F. Computations of Derivatives. III.
21 Subjects: including geometry, calculus, statistics, algebra 1
...In preparation for my own academic degrees, I have completed math classes up through calculus. I have taught algebra via chemistry and physics classes. Most recently, I have tutored algebra.
5 Subjects: including geometry, chemistry, algebra 1, algebra 2
I am a hydraulic/civil engineer by profession. For my undergraduate program, I was the recipient of the prize for an outstanding student. I have both classroom teaching and private tutoring
14 Subjects: including geometry, chemistry, calculus, physics
...I currently teach a web design class where I teach my students how to build websites, from scratch, without the use of a web editor. Although I am a secondary teacher, I do have experience
teaching elementary students as well, particularly with the older ones. For 3 years I taught summer enrichment classes to elementary students, ranging from grades 1 through 5.
29 Subjects: including geometry, reading, writing, algebra 1
...My 30 years of services in the system of education includes teaching high school and college math, developing elementary and middle school grade texts, and tutoring students of different
grades including my own daughter and son. My philosophy of teaching is that every student can be successful i...
7 Subjects: including geometry, algebra 1, algebra 2, SAT math
|
{"url":"http://www.purplemath.com/colmar_manor_md_geometry_tutors.php","timestamp":"2014-04-18T18:40:55Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:353eff34-f79e-49d6-b24b-f532a7f9e174>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comment on
A version that returns the English names of the input 1-2 digit number.
Update: Hmmm. I just noticed on the stackoverflow page that they clarified that the numbers should be spelled out in both the left AND right columns. Phooey.
Another update: And now there's some further stipulation that there be a separator(space, hyphen, whatever) between spelled words and the input number must come in through STDIN, not as a passed
parameter, so no pop @ARGV.
Update: 283
+,@p;$n[8].=t;sub n{$n=shift;$n?$n<20?$n[$n]:"$m[$n/10-2]-$n[$n%10]":Z
+ero}$p+=<>;warn$m=n($p)," is ",$_=$p-4?n$p=()=$m=~/\w/g:magic,".\n"un
Zero is Four.
Four is magic.
Forty is Five.
Five is Four.
Four is magic.
Sixty-Seven is Ten.
Ten is Three.
Three is Five.
Five is Four.
Four is magic.
Eighty-Three is Eleven.
Eleven is Six.
Six is Three.
Three is Five.
Five is Four.
Four is magic.
##### Versions that don't spell out right column. #####
Update:Sigh. Apparently I'm blind. Using duelafns suggestion, modifying the end conditional and some other minor tweaks.
+Zero," is ",$_=$_-4?length$z:magic,".\n"until/a/
perl -E'@p=(Thir,Four,Fif,Six,Seven,Eigh,Nine);@n=("",One,Two,Three,Fo
+_%10]:Zero," is ",$_=$_-4?length$z:magic,"."until/a/'
Previous versions
Update: 314 Reformatted a bit to make it easier to run as a one liner. Capitalized names... because I could. Knocked another 2 strokes off. It bothers me that I can't figure out how to get rid of the
duplicated Four without making it longer. I guess Four IS magic...
@u="0335443554366887798866555766"=~/./g;sub n{@p=(Thir,Four,Fif,Six,Se
+$_<20?$n[$_]:$n[$_/10+18].$n[$_%10]:Zero}$_=pop;print$z=n($_)," is ",
Update: 316
@u='0335443554366887798866555766'=~/./g;sub n{@p=(thir,four,fif,six,se
+)?$_<20?$n[$_]:$n[$_/10+18].$n[$_%10]:zero}$_=pop;print$z=n($_)," is
Update: 337
@u='0335443554366887798866555766'=~/./g;sub n{@p=(qw/thir four fif six
+ seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],qw/ten
+ eleven twelve/,map($_.'teen',@p),'twenty',map$_.'ty',@p);$n[22]=~s/u
+>;print$z=n($_)," is ",$_=$_-4?length$z:magic,".\n"while/\d/
Update: 341
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/,map($_.'teen',@p),'twenty',map$_.'ty',@p);$n[22
+$_=<>;print$z=n($_)," is ",$_=$_-4?length$z:magic,".\n"while/\d/
Update: 343
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/,map($_.'teen',@p),'twenty',map$_.'ty',@p);$n[22
+$_=<>;print n($_)," is ",$_=$_-4?length n($_):magic,".\n"while/\d/
Update: 344
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/,map($_.'teen',@p),'twenty',map$_.'ty',@p);$n[22
+$_=pop;print n($_)," is ",$_=$_-4?length n($_):magic,".\n"while/\d/
Update: 345
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/,map($_.'teen',@p),'twenty',map($_.'ty',@p);$n[2
+}$_=pop;print n($_)," is ",$_=$_-4?length n($_):magic,".\n"while/\d/
Update: 361
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/);push@n,$_.'teen'for@p;$p[1]=~s/u//;push@n,'twe
+$n[$_%10]:'zero'}$_=pop;print n($_)," is ",$_=$_-4?length n($_):magic
Update: Save a stroke: 383.
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/);push@n,$_.'teen'for@p;$p[1]=~s/u//;push@n,'twe
+$n[$_%10]:'zero'}$_=pop;print n($_)," is ",$_=$_-4?$_<20?$u[$_]||4:$u
Proof of concept. Should be lots of room for improvement.
Update: whoops. Fixed for fourteen/forty. Curse you, irregular number names!
384 strokes.
@u='0335443554366887798866555766'=~/./g;sub n{shift;@p=(qw/thir four f
+if six seven eigh nine/);@n=('',qw/one two three four five/,@p[3..6],
+qw/ten eleven twelve/);push@n,$_.'teen'for@p;push@n,'twenty',;push@n,
+.$n[$_%10]:'zero'}$_=pop;print n($_)," is ",$_=$_-4?$_<20?$u[$_]||4:$
C:\test>magic.pl 0
zero is 4.
four is magic.
C:\test>magic.pl 1
one is 3.
three is 5.
five is 4.
four is magic.
C:\test>magic.pl 14
fourteen is 8.
eight is 5.
five is 4.
four is magic.
C:\test>magic.pl 15
fifteen is 7.
seven is 5.
five is 4.
four is magic.
C:\test>magic.pl 18
eighteen is 8.
eight is 5.
five is 4.
four is magic.
C:\test>magic.pl 44
fortyfour is 9.
nine is 4.
four is magic.
C:\test>magic.pl 77
seventyseven is 12.
twelve is 6.
six is 3.
three is 5.
five is 4.
four is magic.
C:\test>magic.pl 80
eighty is 6.
six is 3.
three is 5.
five is 4.
four is magic.
C:\test>magic.pl 99
ninetynine is 10.
ten is 3.
three is 5.
five is 4.
four is magic.
|
{"url":"http://www.perlmonks.org/?parent=849776;node_id=3333","timestamp":"2014-04-17T16:13:12Z","content_type":null,"content_length":"35896","record_id":"<urn:uuid:9c3eeacd-0b61-4141-9e8c-e4d4bbffb568>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Another Look at Launch Speed in Angry Birds | Science Blogs | WIRED
• By Rhett Allain
• 01.09.12 |
• 8:30 am |
The last time I looked at the launch speed in Angry Birds, there was a problem. The problem was that it wasn’t trivial to get the position-time data of the flung birds. But that was quite some time
ago. That was before the Google Chrome version of Angry Birds. With this, I can use screen capture software with my computer.
There is another reason to revisit the launch speed in Angry Birds. The result from my last attempt wasn’t as clear as I had hoped. If the birds were shot from a sling shot that acted like a real
spring, higher launch angles should have lower launch speeds (since the bird must move up vertically during the launch). I won’t re-derive this, but if the sling shot is indeed a spring, the
following relationship should be true.
The relationship, from energy conservation during the launch, is (1/2)k s^2 = (1/2)m v^2 + m g s sin(θ), so v^2 = (k/m)s^2 - 2 g s sin(θ). I guess I should say that s is the distance the sling shot is pulled back and k is the spring constant. But the point is that if I make a plot of launch velocity squared versus the sine of the launch
angle, it should be a linear function. Here is the plot I first created.
My conclusion was that the launch speed was constant and independent of the angle even though there was one data point quite out of line.
Second Try
How about more data and better data? I want to look at that same plot, but what do I need to collect from each shot? I need:
• The x-velocity of the bird. This is pretty easy to get since this should be constant. The slope of the x-t plot will be the x-velocity.
• The y-velocity of the bird at launch. This isn’t as easy. I can do a couple of things: I could look at the maximum height of the bird or find the velocity from a quadratic fit to the data. Both
of these will take some time. A third way would be to just look at the first few data points and use change in y position over change in time.
• The launch angle. If I have both the horizontal and vertical velocities — this is pretty straightforward.
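A minimal sketch of those three calculations (mine, not the article's actual analysis script; the sample numbers are made up) might look like:

import numpy as np

t = np.array([0.000, 0.033, 0.067, 0.100])   # assumed frame times (s)
x = np.array([0.00, 0.40, 0.81, 1.21])       # assumed x positions (m)
y = np.array([0.00, 0.68, 1.33, 1.95])       # assumed y positions (m)

vx = np.polyfit(t, x, 1)[0]              # slope of x(t) gives the constant x-velocity
vy = (y[1] - y[0]) / (t[1] - t[0])       # "first two data points" estimate of the y-velocity
v = np.hypot(vx, vy)                     # launch speed
theta = np.degrees(np.arctan2(vy, vx))   # launch angle
print(vx, vy, v, theta)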
Let me test the vertical velocity measurement. Here is a plot of the vertical position for a particular shot:
Tracker Video can fit a quadratic function to the data. The velocity would just be the first derivative of this function with respect to time, so I get:
CAUTION. The variable a is NOT the acceleration but rather the coefficient in front of the t term. But moving on. Looking back at the data, I see that the bird was launched at a time of 57.87
seconds. So, putting in this time and the values of the fitting coefficients I get an initial y-velocity of 20.76 m/s.
What about another method? What if I just fit a linear function to the first two data points? Like this:
This gives an initial y-velocity of 20.65 m/s. Not too bad (and much quicker).
More Data
OK, I have more data. Now for the plot. This is the launch velocity squared versus the sine of the launch angle. Remember, if the sling shot acts like a real sling shot, this should be linear.
Curses! Foiled again! It is that one dumb data point that is off. You know why? It is because I try to be cool. I think, “Hey, how about a shoot an angry bird down?” This is what happens. But I have
one more trick. Let me show a distribution of the starting velocities for these shots.
From this data, I get an average launch speed of 23.1 m/s with a standard deviation of 2.4 m/s (even with that crazy data point). So, I am sticking with my original post. The launch speed in Angry
Birds is constant. Maybe for homework, you can compare this to the launch speed for the other birds. (This data just looked at the red bird.) I suspect they are all the same.
Oh, one final tip. If you want to collect data from Angry Birds in Chrome, zoom the screen all the way out before you shoot the bird. This way, the background in the game will stay in the same place
and you won’t have to move the coordinate system.
Top image: j_10suited/Flickr/CC-licensed
|
{"url":"http://www.wired.com/2012/01/another-look-at-launch-speed-in-angry-birds/","timestamp":"2014-04-21T13:03:21Z","content_type":null,"content_length":"106420","record_id":"<urn:uuid:d8d91cc2-21c7-4a78-8cc1-1778839dcaa4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numerical Analysis of Partial Differential Equations
ISBN: 978-0-470-64728-8
512 pages
August 2011
Read an Excerpt
A balanced guide to the essential techniques for solving elliptic partial differential equations
Numerical Analysis of Partial Differential Equations provides a comprehensive, self-contained treatment of the quantitative methods used to solve elliptic partial differential equations (PDEs), with
a focus on the efficiency as well as the error of the presented methods. The author utilizes coverage of theoretical PDEs, along with the nu merical solution of linear systems and various examples
and exercises, to supply readers with an introduction to the essential concepts in the numerical analysis of PDEs.
The book presents the three main discretization methods of elliptic PDEs: finite difference, finite elements, and spectral methods. Each topic has its own devoted chapters and is discussed alongside
additional key topics, including:
• The mathematical theory of elliptic PDEs
• Numerical linear algebra
• Time-dependent PDEs
• Multigrid and domain decomposition
• PDEs posed on infinite domains
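(A minimal sketch of the first of those discretization methods, not taken from the book: a second-order finite-difference solve of -u'' = f on (0,1) with homogeneous Dirichlet boundary conditions, using a manufactured solution to check the error.)

import numpy as np

n = 50                            # interior grid points (arbitrary choice)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)  # right-hand side chosen so the exact solution is u = sin(pi x)

# Standard tridiagonal matrix for -u'' ~ (-u_{i-1} + 2u_i - u_{i+1}) / h^2
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u = np.linalg.solve(A, f)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # error shrinks like O(h^2) as n grows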
The book concludes with a discussion of the methods for nonlinear problems, such as Newton's method, and addresses the importance of hands-on work to facilitate learning. Each chapter concludes with
a set of exercises, including theoretical and programming problems, that allows readers to test their understanding of the presented theories and techniques. In addition, the book discusses important
nonlinear problems in many fields of science and engineering, providing information as to how they can serve as computing projects across various disciplines.
Requiring only a preliminary understanding of analysis, Numerical Analysis of Partial Differential Equations is suitable for courses on numerical PDEs at the upper-undergraduate and graduate levels.
The book is also appropriate for students majoring in the mathematical sciences and engineering.
See More
1. Finite Difference.
1.1 Second-Order Approximation for Δ.
1.2 Fourth-Order Approximation for Δ.
1.3 Neumann Boundary Condition.
1.4 Polar Coordinates.
1.5 Curved Boundary.
1.6 Difference Approximation for Δ^2.
1.7 A Convection-Diffusion Equation.
1.8 Appendix: Analysis of Discrete Operators.
1.9 Summary and Exercises.
2. Mathematical Theory of Elliptic PDEs.
2.1 Function Spaces.
2.2 Derivatives.
2.3 Sobolev Spaces.
2.4 Sobolev Embedding Theory.
2.5 Traces.
2.6 Negative Sobolev Spaces.
2.7 Some Inequalities and Identities.
2.8 Weak Solutions.
2.9 Linear Elliptic PDEs.
2.10 Appendix: Some Definitions and Theorems.
2.11 Summary and Exercises.
3. Finite Elements.
3.1 Approximate Methods of Solution.
3.2 Finite Elements in 1D.
3.3 Finite Elements in 2D.
3.4 Inverse Estimate.
3.5 L^2 and Negative-Norm Estimates.
3.6 A Posteriori Estimate.
3.7 Higher-Order Elements.
3.8 Quadrilateral Elements.
3.9 Numerical Integration.
3.10 Stokes Problem.
3.11 Linear Elasticity.
3.12 Summary and Exercises.
4. Numerical Linear Algebra.
4.1 Condition Numbers.
4.2 Classical Iterative Methods.
4.3 Krylov Subspace Methods.
4.4 Preconditioning.
4.5 Direct Methods.
4.6 Appendix: Chebyshev Polynomials.
4.7 Summary and Exercises.
5. Spectral Methods.
5.1 Trigonometric Polynomials.
5.2 Fourier Spectral Method.
5.3 Orthogonal Polynomials.
5.4 Spectral Galerkin and Spectral Tau Methods.
5.5 Spectral Collocation.
5.6 Polar Coordinates.
5.7 Neumann Problems
5.8 Fourth-Order PDEs.
5.9 Summary and Exercises.
6. Evolutionary PDEs.
6.1 Finite Difference Schemes for Heat Equation.
6.2 Other Time Discretization Schemes.
6.3 Convection-Dominated equations.
6.4 Finite Element Scheme for Heat Equation.
6.5 Spectral Collocation for Heat Equation.
6.6 Finite Different Scheme for Wave Equation.
6.7 Dispersion.
6.8 Summary and Exercises.
7. Multigrid.
7.1 Introduction.
7.2 Two-Grid Method.
7.3 Practical Multigrid Algorithms.
7.4 Finite Element Multigrid.
7.5 Summary and Exercises.
8. Domain Decomposition.
8.1 Overlapping Schwarz Methods.
8.2 Projections.
8.3 Non-overlapping Schwarz Method.
8.4 Substructuring Methods.
8.5 Optimal Substructuring Methods.
8.6 Summary and Exercises.
9. Infinite Domains.
9.1 Absorbing Boundary Conditions.
9.2 Dirichlet-Neumann Map.
9.3 Perfectly Matched Layer.
9.4 Boundary Integral Methods.
9.5 Fast Multipole Method.
9.6 Summary and Exercises.
10. Nonlinear Problems.
10.1 Newton’s Method.
10.2 Other Methods.
10.3 Some Nonlinear Problems.
10.4 Software.
10.5 Program Verification.
10.6 Summary and Exercises.
Answers to Selected Exercises.
S. H. Lui, PhD, is Associate Professor of Mathematics in the Department of Mathematics at the University of Manitoba, Canada.
“Requiring only a preliminary understanding of analysis, Numerical Analysis of Partial Differential Equations is suitable for courses on numerical PDEs at the upper-undergraduate and graduate levels.
The book is also appropriate for students majoring in the mathematical sciences and engineering.” (Zentralblatt MATH, 1 December 2012)
“Recommended. Upper-division undergraduates, graduate students, and researchers/faculty.” (Choice, 1 May 2012)
|
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470647280,subjectCd-EE33.html","timestamp":"2014-04-21T10:58:21Z","content_type":null,"content_length":"57231","record_id":"<urn:uuid:8b52aeeb-e0d4-481b-94a3-dd9f0ffce81c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how to reduce?
Yes, I know I have to differentiate, but if I differentiate it with respect to t, what will happen to the second term on the left-hand side?
Use the Leibniz Integral Rule.
[tex]\frac{d}{dt}\int_{a(t)}^{b(t)} f(x,t)\,dx=\int_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x,t)\,dx+\left.f(x,t)\frac{\partial x}{\partial t}\right|_{x=a(t)}^{x=b(t)}[/tex]
for the initial condition you should know that
[tex]\int_0^0 f(x) dx=0[/tex]
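(Not from the thread: a quick symbolic check of the rule with an arbitrarily chosen integrand f = t*x^2 and limits a(t) = t, b(t) = t^2, using sympy.)

import sympy as sp

x, t = sp.symbols('x t')
f = t * x**2            # example integrand (arbitrary choice)
a, b = t, t**2          # example variable limits (arbitrary choice)

lhs = sp.diff(sp.integrate(f, (x, a, b)), t)
rhs = (sp.integrate(sp.diff(f, t), (x, a, b))
       + f.subs(x, b) * sp.diff(b, t)
       - f.subs(x, a) * sp.diff(a, t))
print(sp.simplify(lhs - rhs))   # prints 0, as the Leibniz rule predicts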
|
{"url":"http://www.physicsforums.com/showthread.php?t=86040","timestamp":"2014-04-20T20:55:16Z","content_type":null,"content_length":"28580","record_id":"<urn:uuid:5fc98bf3-af95-4c64-8d5a-960892dfe0a1>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
University of Pittsburgh Bradford
Financial Aid Calculators
Cost of Attendance
How much will college cost? Use the College Cost Projector to estimate how much college will cost when you are ready to enroll.
Savings Plan Designer
The Savings Plan Designer shows you how much money you must contribute each month to an interest-bearing bank account or investment fund in order to reach your savings goal for a college education.
Student Loan Repayment
Loan Payment Calculator computes an estimated monthly loan payment in equal installments using a standard repayment schedule.
Graduated Repayment Calculator starts payments low and gradually increases the payments until the balance is paid.
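For reference only (this is my own sketch with made-up example numbers, not FinAid's actual calculator): the "standard repayment schedule" mentioned above is ordinarily a fixed-payment amortization, which can be computed as follows.

def monthly_payment(principal, annual_rate, years):
    # Fixed monthly payment for a fully amortizing loan (standard schedule).
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # number of payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r)**n / ((1 + r)**n - 1)

print(round(monthly_payment(20000, 0.05, 10), 2))   # about 212 per month for a 20,000 loan at 5% over 10 years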
Trust Funds
Trust Fund Calculator This calculator performs a net present value calculation for the most common types of trust funds.
Calculators courtesy of FinAid.org
|
{"url":"http://www.upb.pitt.edu/templates/AdmissionsBlank.aspx?menu_id=1265&id=278","timestamp":"2014-04-18T18:28:50Z","content_type":null,"content_length":"5741","record_id":"<urn:uuid:9bc7f0ee-89d5-4b78-afe8-442b561ac736>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reviews of books, websites, poster sets, movies, and other resources for learning and teaching the history of mathematics.
A collection of articles dealing with mathematical models and objects and how they can be used in teaching.
A history of the development of Pascal's triangle in its various manifestations
A study of the nature of architecture in ancient Egypt and its relationship to Egyptian mathematics.
This website contains a complete version of Euclid's Elements, with all the proofs.
Review of Bertram's text on the history of Greek science.
A five-volume set of biographies of mathematicians from ancient times to the twentieth century, aimed at secondary students.
A small selection of Euler's works, explained by a master expositor.
The history of the Poincare conjecture up to its recent proof by Grigori Perelman.
|
{"url":"http://www.maa.org/publications/periodicals/convergence/critics-corner?page=6&device=mobile","timestamp":"2014-04-18T12:10:42Z","content_type":null,"content_length":"25420","record_id":"<urn:uuid:51ec8c95-6cd2-4c79-a552-8d6b18605b5b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Middle City East, PA SAT Math Tutor
Find a Middle City East, PA SAT Math Tutor
...At the University of Oregon, I tutored in the undergraduate writing center, served as a teaching assistant in the literature department, and taught my own writing courses. Prior to my teaching
experience at the University of Oregon, I tutored AP English and SAT prep for high school students in t...
17 Subjects: including SAT math, English, grammar, literature
...I don't just teach my students what's "right" and "wrong" in regard to grammar. Instead, we study how the language works. My goal is to help my students become lifelong writers (and readers),
not only to master a test.
47 Subjects: including SAT math, chemistry, reading, English
...I also hold New Jersey and Pennsylvania teaching certificates from Kindergarten through high school. My family of now five and I reside in Mullica Hill. My husband and I have a five year old
(going on 20), a three year old and a one year old, along with our first born, our dog.
12 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I have prepared high school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of the GRE, and have helped many other students with math skills
ranging from basic arithmetic all the way up to Calculus 3 and basic linear algebra. In my free time, I en...
22 Subjects: including SAT math, calculus, geometry, statistics
...So, before every issue, I would work one-on-one with each of my writers to introduce them to new writing techniques and work to rewrite their articles to prepare for print. I'm generally a
math and writing nerd. When I teach, I find it most important to -Give perspective about the field of study we're covering.
25 Subjects: including SAT math, chemistry, calculus, writing
|
{"url":"http://www.purplemath.com/Middle_City_East_PA_SAT_math_tutors.php","timestamp":"2014-04-17T21:27:49Z","content_type":null,"content_length":"24638","record_id":"<urn:uuid:ba59d60b-f972-4070-9ec6-135190145dcc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
basic trigonometry
But why? Is there some explanation behind them, or are they just defined by scientists?
What do you mean by "explanations"?
To answer the question, they are the definitions used in planar geometry. There are other (equivalent) ways of defining them, but you'll get the same properties nonetheless.
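(A small numerical illustration of that last point, mine rather than the thread's: the built-in unit-circle sine and a truncated Taylor-series definition agree.)

import math

def sin_series(x, terms=20):
    # Taylor series definition: sum_k (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

for x in (0.1, 1.0, math.pi / 3, 2.5):
    print(x, math.sin(x), sin_series(x))   # the two values agree closely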
|
{"url":"http://www.physicsforums.com/showthread.php?p=4208025","timestamp":"2014-04-20T16:12:11Z","content_type":null,"content_length":"35551","record_id":"<urn:uuid:65ad2976-adb8-42c2-a4a5-2aee9629a365>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Braingle: 'Random Numbers With Constraints' Brain Teaser
Random Numbers With Constraints
Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.
Puzzle ID: #14822
Category: Probability
Submitted By: j72883
Corrected By: Sunrose
One of ten different furbies was randomly placed into each box of some type of cereal. If a mother decided to buy this cereal for her children until she obtained at least one of each of the ten
different furbies, what is the number of boxes of cereal that she is expected to purchase?
Icewolf I thought this was a great teaser !!!
Sep 23,
krishnan Great teaser. That is a nice way of calculating the answer.
Sep 23,
rose_rox This probability one is great.
Sep 23,
j72883 Thanks guys!
Sep 25,
Poker 30 boxes?!!
Aug 01,
tsimkin You say "She will probably need to buy 30 boxes before she gets one of each toy." That's not quite true. The mean number she will need to purchase is indeed 29.29. The median, though, which is how many she will "probably" have to purchase (i.e., has a 50% chance of having the number she needs) is actually 27. This difference comes from the fact that the distribution is right-skewed (there are times when you need 400 boxes, but never a case when you can do it in less than 10.) Interestingly, the number of boxes she will buy most often (i.e., the mode) is only about 23 (again, a byproduct of the skew in this distribution.) Great question, though. I had to run a 16000-trial Monte Carlo to come up with the right answers for mean and mode. (I am sure this could be determined with calculus, but I do so love pushing Excel to its limits.) Another interesting follow up is how often she can buy just ten boxes. The answer here, of course, is 10!/10^10 = 0.0363% -- not very likely!
Nov 23, 2004
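(For reference, not part of the original comment thread: the numbers quoted above can be checked with a few lines of Python, both exactly and by simulation.)

import random
from fractions import Fraction

# Exact expected number of boxes: 10 * (1/1 + 1/2 + ... + 1/10)
exact = 10 * sum(Fraction(1, k) for k in range(1, 11))
print(float(exact))                     # 29.2896...

def boxes_needed(kinds=10):
    # Buy boxes until all 'kinds' furbies have been seen.
    seen, n = set(), 0
    while len(seen) < kinds:
        seen.add(random.randrange(kinds))
        n += 1
    return n

trials = sorted(boxes_needed() for _ in range(100_000))
print(sum(trials) / len(trials))        # about 29.3 (mean)
print(trials[len(trials) // 2])         # about 27   (median)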
Jimbo I think the answer is correct according to the way the question was worded. I know the answer was given "probably have to buy" but the question actually asked how many she "expected" to buy. So if you have a 90% chance of getting one furbie in a box, that means if you bought 10 boxes you would expect to get a furbie in 9 of them, i.e. you expect to get 9 furbies. So on the average you would expect to buy 10/9 = 1.1111.. boxes in order to get 1 furbie. And so on. I liked the teaser!
Mar 05, 2006
brainjuice i dont understand although i've seen the solution..
Mar 27,
Quax Did an Excel spreadsheet on it, came up with the same as tsimkin. At 44 boxes, you're 90% sure. At 51 boxes, you're 95% sure.
Jul 26,
javaguru Nice puzzle, elegant solution.
Dec 13, 2008
You don't need to run Monte Carlo or use calculus to exactly calculate the probabilities, although it will be useful in finding limits such as which box you are most likely to find the
10th furby in.
The probability of finding the 10th furby in the nth box (where n >= 10) is
P = (9! * S(n-1, 9)) / 10^(n-1)
where S(n, 9) is the Stirling number of the second kind, which in this case is the number of ways to partition n elements into 9 non-empty subsets. These subsets represent all the
possible combinations of the boxes that contained each of the first 9 furbys. The 10th furby is always only in the last box.
The first four Stirling numbers in the sequence for 9 subsets are:
S(9,9) = 1 since the subset for each furby must contain a single box;
S(10,9) = 45 since each combination includes one subset with two of the 10 boxes and there are C(10,2) = 45 ways to make that subset;
S(11,9) = 1155 since each combination includes either one subset with three boxes (C(11,3) = 165) or two subsets with two boxes ((C(11,2) * C(9,2)) / 2! = 990) and there are 165 + 990 =
1155 ways to make those subsets;
S(12,9) = 22,275 since each combination includes either one subset with four boxes (C(12,4) = 495) or one subset with three boxes and one subset with two boxes (C(12,3) * C(9,2) = 7,920)
or three subsets with two boxes ((C(12,2)*C(10,2)*C(8,2))/3! = 13,860) and there are 495 + 7920 + 13860 = 22275 ways to make those subsets.
Note that since the formula for S(n,9) is:
(1/9!) * Sum[i=1to9: (-1)^(9-i)*C(9,i)*i^n]
the 9! cancels out of 9! * S(n-1, 9), so the equation for the probability of finding the last furby in the nth box can be simplified to
P = Sum(i=1to9: (-1)^(9-i)*C(9,i)*i^(n-1)) / 10^(n-1)
The other question is "what is the probability that I will find the 10 furbies after opening n boxes?" Now the 10th furby is no longer constrained to being in the last box opened, so the
probability is
P = (10! * S(n, 10)) / 10^n
This implies that
(10! * S(n, 10)) / 10^n = Sum(b=10 to n: (9! * S(b-1, 9)) / 10^(b-1))
opqpop I don't like this question because I couldn't simplify 1/1 + 1/2 + ... + 1/10.
Sep 27,
|
{"url":"http://www.braingle.com/brainteasers/teaser.php?id=14822&op=0&comm=1","timestamp":"2014-04-16T10:27:34Z","content_type":null,"content_length":"34416","record_id":"<urn:uuid:b12b48aa-8375-44b7-a377-ce2ecb2d425e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|