| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
Problèmes de satisfaction de contraintes flexibles: une approche égalitariste. Revue d'Intelligence Artificielle
- Applied Intelligence , 1996
"... In classical Constraint Satisfaction Problems (CSPs) knowledge is embedded in a set of hard constraints, each one restricting the possible values of a set of variables. However constraints in
real world problems are seldom hard, and CSP's are often idealizations that do not account for the preferenc ..."
Cited by 74 (13 self)
In classical Constraint Satisfaction Problems (CSPs) knowledge is embedded in a set of hard constraints, each one restricting the possible values of a set of variables. However constraints in real
world problems are seldom hard, and CSPs are often idealizations that do not account for the preference among feasible solutions. Moreover, some constraints may have priority over others. Lastly,
constraints may involve uncertain parameters. This paper advocates the use of fuzzy sets and possibility theory as a realistic approach for the representation of these three aspects. Fuzzy
constraints encompass both preference relations among possible instantiations and priorities among constraints. In a Fuzzy Constraint Satisfaction Problem (FCSP), a constraint is satisfied to a
degree (rather than satisfied or not satisfied) and the acceptability of a potential solution becomes a gradual notion. Even if the FCSP is partially inconsistent, best instantiations are provided
owing to the relaxation of ...
- Journal of Intelligent Manufacturing , 1995
"... : This paper proposes an extension of the constraint-based approach to job-shop scheduling, that accounts for the flexibility of temporal constraints and the uncertainty of operation durations.
The set of solutions to a problem is viewed as a fuzzy set whose membership function reflects preference. ..."
Cited by 53 (9 self)
: This paper proposes an extension of the constraint-based approach to job-shop scheduling, that accounts for the flexibility of temporal constraints and the uncertainty of operation durations. The
set of solutions to a problem is viewed as a fuzzy set whose membership function reflects preference. This membership function is obtained by an egalitarist aggregation of local
constraint-satisfaction levels. Uncertainty is qualitatively described in terms of possibility distributions. The paper formulates a simple mathematical model of jobshop scheduling under preference
and uncertainty, relating it to the formal framework of constraint-satisfaction problems in Artificial Intelligence. A combinatorial search method that solves the problem is outlined, including fuzzy
extensions of well-known look-ahead schemes. 1. Introduction There are traditionally three kinds of approaches to jobshop scheduling problems: priority rules, combinatorial optimization and
constraint analysis. The first kind ...
- In Proc. 4th Int. Conf. on Principles and Practice of Constraint Programming (CP98). Springer-Verlag, LNCS 1520 , 1998
"... Constraint Violation Minimization Problems arise when dealing with over-constrained CSPs. Unfortunately, experiments and practice show that they quickly become too large and too difficult to be
optimally solved. In this context, multiple methods (limited tree search, heuristic or stochastic local ..."
Cited by 6 (0 self)
Constraint Violation Minimization Problems arise when dealing with over-constrained CSPs. Unfortunately, experiments and practice show that they quickly become too large and too difficult to be
optimally solved. In this context, multiple methods (limited tree search, heuristic or stochastic local search) are available to produce non-optimal, but good quality solutions, and thus to provide
the user with anytime upper bounds of the problem optimum. On the other hand, few methods are available to produce anytime lower bounds of this optimum.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1469869","timestamp":"2014-04-20T21:18:58Z","content_type":null,"content_length":"19228","record_id":"<urn:uuid:3961ec86-e71a-4d26-ae41-33ef1b80c8af>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
La Mirada SAT Math Tutor
As an educator and private tutor, I have been very involved in all levels of education from grade K through high school. My love of teaching children and helping students through private tutoring
has become a passion for me. My patience and unique approach to solving problems and explaining difficult subjects to students have proven to be successful and very rewarding.
22 Subjects: including SAT math, English, reading, writing
...I studied psychology at Westmont and have had a lot of personal interactions with people who are considered special needs ever since I was a child. Over the years, I have worked with people who
have learning disabilities, mental disorders, and physical limitations. I consider it a privilege to ...
69 Subjects: including SAT math, chemistry, Spanish, reading
I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always
work with students to overcome obstacles that they might have.
37 Subjects: including SAT math, chemistry, statistics, English
...I am an approved tutor in SAT preparation. I have been working with the teacher training program at UCLA giving future teachers techniques and methods of teaching elementary mathematics. I work
well with K-6th children.
72 Subjects: including SAT math, English, reading, writing
...I am looking to earn some extra money in order to pay for textbooks and classes. I prefer to tutor in math. I took up to AP-Calculus BC in high school and scored a four out of five on the
AP-Calculus AB test.
22 Subjects: including SAT math, reading, English, writing
|
{"url":"http://www.purplemath.com/la_mirada_sat_math_tutors.php","timestamp":"2014-04-21T04:46:39Z","content_type":null,"content_length":"23860","record_id":"<urn:uuid:fbb0d0d3-4f18-48a7-9d22-3033e471da3d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
for a 2x2 matrix A, how do i classify the system given by X_n+1 = AX_n
May 12th 2012, 03:03 AM
for a 2x2 matrix A, how do i classify the system given by X_n+1 = AX_n
for a 2x2 matrix how do i decide if the origin is: (attractor, repellor, saddle, stable focus, unstable focus, or a center)?
can you give examples?
May 12th 2012, 12:30 PM
Re: for a 2x2 matrix A, how do i classify the system given by X_n+1 = AX_n
Look at the eigenvalues of A. Are they real or non-real complex? Are the real parts positive or negative?
May 12th 2012, 05:10 PM
Re: for a 2x2 matrix A, how do i classify the system given by X_n+1 = AX_n
i know how to find real/complex eigenvalues and determine if the real part is +ve or -ve, but "which condition tells me what type of origin it is?"
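For the discrete iteration X_n+1 = A X_n, the usual rule of thumb compares the moduli of the eigenvalues of A with 1 (complex eigenvalues give a focus or a center, real ones a node or saddle). A rough sketch of how one might check this for a 2x2 matrix using only the trace and determinant is below; the class name and the sample matrices are just for illustration:

    public class DiscreteClassifier {
        // Classify the origin of X_{n+1} = A X_n for A = [[a, b], [c, d]].
        // The eigenvalues are the roots of t^2 - (trace) t + det = 0.
        static String classify(double a, double b, double c, double d) {
            double tr = a + d, det = a * d - b * c;
            double disc = tr * tr - 4 * det;
            if (disc >= 0) {                              // two real eigenvalues
                double l1 = (tr + Math.sqrt(disc)) / 2;
                double l2 = (tr - Math.sqrt(disc)) / 2;
                double m1 = Math.abs(l1), m2 = Math.abs(l2);
                if (m1 < 1 && m2 < 1) return "attractor (stable node)";
                if (m1 > 1 && m2 > 1) return "repellor (unstable node)";
                if ((m1 - 1) * (m2 - 1) < 0) return "saddle";
                return "borderline: an eigenvalue has modulus 1";
            } else {                                      // complex pair, modulus = sqrt(det)
                double mod = Math.sqrt(det);
                if (mod < 1) return "stable focus";
                if (mod > 1) return "unstable focus";
                return "center";
            }
        }

        public static void main(String[] args) {
            System.out.println(classify(0.5, -0.3, 0.3, 0.5)); // complex pair, modulus < 1: stable focus
            System.out.println(classify(2.0, 0.0, 0.0, 0.5));  // real eigenvalues 2 and 0.5: saddle
        }
    }

(For the continuous-time system x' = Ax, the corresponding criterion uses the sign of the real parts of the eigenvalues instead.)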
|
{"url":"http://mathhelpforum.com/advanced-algebra/198704-2x2-matrix-how-do-i-classify-system-given-x_n-1-ax_n-print.html","timestamp":"2014-04-19T08:58:47Z","content_type":null,"content_length":"4239","record_id":"<urn:uuid:f7e51def-dd86-4396-add4-80b6fbfe6356>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How can I implement a heap where each node is a class?
I want to create a Heap structure where each node has 2 data fields: 1) string and 2) int, so I think each node must be a class named "heapNode", but I have trouble with the swap method, please help
import java.util.ArrayList;

public class MainHeap {

    ArrayList<heapNode> heap;

    MainHeap() {
        new ArrayList<heapNode>();
    }

    public int getMin() {
        return heap.get(0).data;
    }

    private int parent(int pos) {
        return pos / 2;
    }

    private void swap(int pos1, int pos2) {
        heapNode temp = new heapNode();
        temp = heap.get(pos1);
        heap.get(pos1) = heap.get(pos2);
        heap.get(pos2) = temp;
    }

    public void insert(int elem) {
        int max = heap.size();
        heap.get(max).data = elem;
        int current = heap.size();
        while (heap.get(current).data < heap.get(parent(current)).data) {
            swap(current, parent(current));
        }
    }
}
and this is my heapNode class
public class heapNode {
    int data;
    String fileName;
}
The swap method has errors but I can't solve them.
java data-structures heap
2 Answers
Your swap code actually makes the objects point to different objects. It does not modify the positions in the arraylist itself. If using an arraylist, you will have to remove an object from an index and set that object at a new index to swap, or else you can use another data structure.
Thanks a lot. Can you see my following code? I wonder if it is right or not: private void swap(int pos1, int pos2) { heapNode temp = new heapNode(); temp = heap.get(pos1); heap.remove(pos1); heap.add(pos1, heap.get(pos2)); heap.remove(pos2); heap.add(pos2, temp); } – RoBay Oct 20 '11 at 6:36
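A minimal sketch of the swap the answer describes, written with ArrayList.set so no elements shift around (this assumes heap is a properly initialized ArrayList<heapNode> field, as in the question; the remove/add version in the comment above changes indices in between, which is easy to get wrong):

    private void swap(int pos1, int pos2) {
        // set() overwrites the element at an index and leaves every other index alone
        heapNode temp = heap.get(pos1);
        heap.set(pos1, heap.get(pos2));
        heap.set(pos2, temp);
    }

    // java.util.Collections.swap(heap, pos1, pos2) is an equivalent one-liner.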
|
{"url":"http://stackoverflow.com/questions/7831905/how-can-implementing-a-heap-that-each-node-is-a-class","timestamp":"2014-04-18T07:10:29Z","content_type":null,"content_length":"70457","record_id":"<urn:uuid:9d7cffe4-9fe7-4037-8f5e-ea4bd5a32655>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multimedia Archives - Page 2 of 3 - Webmonkey
Bit Depth
Bit depth describes how many bits are used to store each pixel of an image, which in turn largely determines the image's file size.
When wrangling with file size versus image quality, it’s often important to minimize the bit depth of an image while maximizing the number of colors. To calculate the maximum number of colors for an
image of a particular bit depth, remember that the number of colors is equal to two to the power of what the bit depth is. For example, a GIF can support up to eight bits per pixel, and therefore can
have as many as 256 colors, since two to the power of eight equals 256.
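As a quick check of that rule, here is a small sketch (the class and method names are just for illustration):

    public class BitDepth {
        // Maximum number of distinct colors at a given bit depth: two to that power.
        static long maxColors(int bitDepth) {
            return 1L << bitDepth;
        }

        public static void main(String[] args) {
            System.out.println(maxColors(8));   // 256, the GIF limit mentioned above
            System.out.println(maxColors(24));  // 16777216, i.e. "millions of colors"
        }
    }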
The cell is nature's building block, and the pixel is the web designer's. Pixel is one of those half-baked half-acronyms: PICture ELement. It refers to how monitors divide the display screen into thousands or millions of individual dots. A pixel is one of those dots. An 8-bit color monitor can display 256 colors, while a 24-bit color monitor can display more than 16 million. If you design a web graphic on a 24-bit monitor, there's an excellent chance that many of your 16 million colors won't be seen by visitors to your site. Since the agreed-upon lowest common denominator palette for
the web has 216 colors, you should design your graphics using 8-bit color. (see Bit Depth)
CMYK stands for cyan magenta yellow and blacK and is a color system used in the offset printing of full-color documents.
Offset uses cyan, magenta, yellow, and black inks and is often referred to as “four-color” printing. Monitors use red, green, and blue light instead, so they display images using a different color
system called RGB. One of the great problems of the digital age has been matching colors between these two systems; i.e., taking a digital RGB image and making it look the same in print using CMYK.
These problems are addressed by applications such as the Pantone Matching System.
To crop means to cut the pieces of an image that you don’t need.
Cropping differs from resizing because when you crop an image, the part you keep stays at its original scale; resizing an image actually shrinks the whole image into smaller dimensions.
DeCSS is a software program that allows decryption of a CSS-encrypted movie and copying of the files to a hard disc (CSS stands for content scrambling system, and it’s used to protect the content of
a DVD disc.) The DeCSS utility made online trading of DVD movies possible, although the interactive elements and outstanding audio/visual quality of DVD are compromised in the process.
|
{"url":"http://www.webmonkey.com/tag/multimedia/page/2/","timestamp":"2014-04-17T12:51:04Z","content_type":null,"content_length":"64935","record_id":"<urn:uuid:6aa45e92-6657-4226-94db-ec162c21aeae>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Types of grammars.
hunk@alpha1.csd.uwm.edu (Mark William Hopkins)
2 Jun 1999 01:38:39 -0400
From comp.compilers
From: hunk@alpha1.csd.uwm.edu (Mark William Hopkins)
Newsgroups: comp.compilers
Date: 2 Jun 1999 01:38:39 -0400
Organization: University of Wisconsin - Milwaukee, Computing Services Division
References: 98-08-134 99-05-080
Keywords: parse, LL(1), LR(1)
sergenth@tin.it writes:
>A grammar may be LL(k) if it satisfies some condition, other condition are
>needed for a grammar to be LR(k).
>It turns out that a grammar that is LL(k) also satisfies the conditions to
>be LR(k).
>For the languages defined by the grammars, you have that every language
>defined by a grammar LL(k) may also be defined by a grammar LR(1).
Actually, a context-free grammar is LL(k) (or LR(k)) if its
'canonical' extension to a simple syntax directed transduction (SSDT)
is LL(k) (or LR(k)). The canonical extension is the one that is
defined by an SSDT grammar which is formed by suffixing each rule of
the original context-free grammar by a distinct output symbol.
Note that the attributes LL(k) and LR(k) are actually attributes of
SSDT's, not of context-free languages (or CFL's). Two grammars
defining the same CFL may have completely different canonical
extensions to SSDT's. Indeed, the same grammar could have different
extensions to SSDT's if output symbols are placed at different points
within the rules, other than at the end. So the attributes are really
not even meaningful for context-free grammars either.
This is both important and relevant because a parser is actually a
processor for an SSDT, not a processor for a CFL. The formalism,
methodology and theory of parsing are based on the former, not the latter.
The more general theorems would be that:
(A) Every LL(k) SSDT is also LR(k)
(B) Every LL(k) SSDT is LR(1) (which I'm not sure is actually true)
(C) Every LR(k) SSDT reduces to an equivalent LR(1) SSDT (almost certainly true).
A 'look-ahead' in an SSDT corresponds to commuting an input symbol with an
output symbol (yx -> xy, x is an input symbol, y an output symbol). This
corresponds to 'deferring the action y until after looking ahead to x'.
An SSDT is deterministic if it can be defined by a set of rules, all headed
by input symbols, with no two rules for a given non-terminal being headed
by the same input symbol. The k in LR(k) is probably directly related to
the maximum depth of transpositions that have to be made to render an SSDT deterministic.
It'll get more complex than this, since you'll have to make algebraic
substitutions for those rules that are left-recursive in order to
eliminate all the left-recursions. This is what's done effectively and
systematically by the LR(0) construction with an end-marker.
If you're working directly with SSDT's (retaining the output symbols
explicitly in the rules) then all you need to do is the LR(0) construction.
From this, by making yx->xy transpositions, you can systematically
increment the parser up to LR(k).
For example, the grammar
S -> A a | b A c | B c | b B a
A -> d
B -> d
has the following canonical extension, which is strictly LR(1) (not even
S -> A a y1 | b A c y2 | B c y3 | b B a y4
A -> d y5
B -> d y6
The example is easy enough to illustrate the transformation without
explicitly resorting to LR(0) construction. The grammar is equivalent to
S -> A a y1 | B c y3 | b T
T -> A c y2 | B a y4
A -> d y5
B -> d y6
If A and B are substituted for, the result is this:
S -> d y5 a y1 | d y6 c y3 | b T
T -> d y5 c y2 | d y6 a y4
A -> d y5
B -> d y6
which is equivalent to:
S -> d U | b T
T -> d V
U -> y5 a y1 | y6 c y3
V -> y5 c y2 | y6 a y4
This is basically what the LR(0) construction would give you (actually,
an order of magnitude simpler, but equal in 'determinicity')
One level of transpositions is required to make this deterministic:
S -> d U | b T
T -> d V
U -> a y5 y1 | c y6 y3
V -> c y5 y2 | a y6 y4
so the SSDT is LR(1). The significance is that you have to look ahead to
the symbol a or c to determine whether you need to perform (for rule U)
actions y5-y1 (corresponding to the reductions A->d and S -> Aa) or
actions y6-y3 (corresponding to the reductions B->d and S -> Bc).
|
{"url":"http://compilers.iecc.com/comparch/article/99-06-011","timestamp":"2014-04-16T19:01:08Z","content_type":null,"content_length":"9047","record_id":"<urn:uuid:46232399-6c05-4a3c-bf0e-ee69342061e8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
First Order Peano Arithmetic (FOPA) question-PLEASE HELP
May 26th 2008, 09:16 PM #1
Mar 2008
Can someone please please please help me on this question, I totally can't do it
Let A be a sentence of FOPA (i.e., a formula with no free variables). Consider the following three assertions:
FOPA |- A (FOPA proves A), FOPA |- ~A (FOPA proves ~A), A is true.
On the face of it, there are 8 possibilities for these assertions, namely
TTT, TTF, TFT, TFF, . . . , FFF, where, for example, TFF means it is true that FOPA |- A, it is false that FOPA |- ~A, and A is not a true statement in number theory.
For each of the 8 possibilities, is there such a sentence A?
If so, find one. If not, why not?
I know the first possibility, TTT, is not possible: it would mean FOPA proves A, FOPA proves ~A, and A is true, and FOPA cannot prove both a statement and its negation. So TTT is not possible, but I am not sure if this is the correct reason for it. I can't do the rest and I don't know the correct reasons to back them up. Can someone please please please help me? I would really appreciate your help.
|
{"url":"http://mathhelpforum.com/number-theory/39735-first-order-peano-arithmetic-fopa-question-please-help.html","timestamp":"2014-04-20T17:33:39Z","content_type":null,"content_length":"30258","record_id":"<urn:uuid:50215832-498b-4d11-b4bc-71559960b67e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Taylor Polynomial
March 11th 2013, 08:02 PM #1
May 2012
A place
Taylor Polynomial
Spivak 4th Edition: Chapter 20, Problem 13
f(x) = ((e^x) - 1)/x , x not equal to 0
f(x) = 1, x = 0
a) Find the Taylor Polynomial of degree n for f at 0, compute f^(k)(0) [f differentiated k times at 0], and give an estimate for the remainder term Rn,0,f.
b) compute integral from 0 to 1 of f with error of less than 10^-4.
I haven't worked with any questions like this before so I'm unable to get started. Seeing this example worked fully would be enough to understand the topic.
~Many Thanks
Re: Taylor Polynomial
Any help would be appreciated.
Re: Taylor Polynomial
Look up the MacLaurin Series (Taylor Series centred at x=0) for $e^x$, use this to get a series for $\frac{e^x - 1}{x}$.
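A sketch of where that leads, using only the standard expansion of $e^x$:
$e^x = \sum_{k=0}^{\infty}\frac{x^k}{k!}$, so $\frac{e^x-1}{x} = \sum_{n=0}^{\infty}\frac{x^n}{(n+1)!}$.
Hence the degree-$n$ Taylor polynomial of $f$ at $0$ is $P_n(x)=\sum_{k=0}^{n}\frac{x^k}{(k+1)!}$, which gives $f^{(k)}(0)=\frac{k!}{(k+1)!}=\frac{1}{k+1}$; integrating term by term, $\int_0^1 f \approx \sum_{n=0}^{N}\frac{1}{(n+1)\,(n+1)!}$, and the tail of this series is already below $10^{-4}$ after only a handful of terms.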
|
{"url":"http://mathhelpforum.com/calculus/214622-taylor-polynomial.html","timestamp":"2014-04-20T05:51:13Z","content_type":null,"content_length":"35116","record_id":"<urn:uuid:30f9de7a-7201-455f-978a-dc401650e48f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Quantum Numbers Arise from the Schrodinger Equation
Quantum numbers arise in the process of solving the Schrodinger equation by constraints or boundary conditions which must be applied to get the solution to fit the physical situation. The case of a
particle confined in a three-dimensional box can be used to show how quantum numbers arise.
Foundational to this process is the nature of the quantum mechanical wavefunction and its root in probability. For a particle in space, the wavefunction can be written ψ = ψ(x, y, z).
This wavefunction is subject to the usual constraints on a physically acceptable wavefunction: it must be single-valued, finite, continuous, and normalizable (the total probability of finding the particle somewhere is one).
For a free particle, the x, y and z directions can be considered to be governed by independent probabilities, and the nature of probability suggests the possibility of writing the wavefunction as the
product of probability densities for the three coordinates.
This anticipation comes from the nature of probability, and can be illustrated by the probabilities of two random events such as the throw of a die.
If the probability of a random event is known, then the probability of two such random events is the product of the two probabilities. In the wavefunction above, the wavefunction is written as the
product of the probability densities for the x, y, and z directions.
For a particle in a 3-D box, assume that we can write the wavefunction as the product ψ(x, y, z) = F(x)G(y)H(z),
so that inside the box (where the potential energy is zero) the Schrodinger equation becomes
-(ħ²/2m)(F''GH + FG''H + FGH'') = E FGH
By dividing through by the wavefunction FGH, this can be put in the form
-(ħ²/2m)(F''/F + G''/G + H''/H) = E
Since the equation must hold for all values of x, y and z, and since these coordinates can vary independently of each other, it follows that each term on the left must equal a constant. For example,
the x part of the equation can be separated out and set equal to a constant.
This is the form of the 1-dimensional particle in a box and has the solution F(x) = A sin(n₁πx/L₁), where L₁ is the length of the box in the x direction and n₁ is a positive integer.
In this process, we see that the quantum number n₁ arises from the nature of the wavefunction and its solution when it is forced to fit the boundary conditions imposed by the potential. Applying the same procedure to the other two dimensions leads to a solution for the wavefunction
ψ = A sin(n₁πx/L₁) sin(n₂πy/L₂) sin(n₃πz/L₃)
and yields energy eigenvalues
E = (π²ħ²/2m)(n₁²/L₁² + n₂²/L₂² + n₃²/L₃²)
which are determined by three quantum numbers.
The solution to the Schrodinger equation in three dimensions led to three quantum numbers. In the case of the hydrogen atom, the boundary conditions are much different, but they also lead to three
spatial quantum numbers.
Suppose the three dimensions of our box are the same, say of length L. The ground state and first excited state energies are then E₁₁₁ = 3π²ħ²/(2mL²) and E₂₁₁ = E₁₂₁ = E₁₁₂ = 6π²ħ²/(2mL²).
We say that the excited state is "degenerate", i.e., there are three sets of quantum numbers which give the same energy. Degeneracy in energy is generally associated with some kind of symmetry, in
this case the obvious one of having cubic symmetry. In the case of the hydrogen atom, the energy level of the n=2 state depends only upon the principal quantum number. This means that it is
degenerate with respect to the orbital and spin quantum numbers, an 8-fold degeneracy because of the 8 possible quantum number combinations. But this degeneracy is not exact - upon closer examination
we find the hydrogen fine structure which implies a slight asymmetry introduced by the spin-orbit effect. And if we apply an external magnetic field, the Zeeman effect introduces further asymmetry
and breaks up the degenerate energy levels into discrete splittings.
|
{"url":"http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/qnsch.html","timestamp":"2014-04-19T01:49:59Z","content_type":null,"content_length":"5324","record_id":"<urn:uuid:43cfa898-f602-440d-b66f-a60e09d4bd4f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra: Exponents and Roots Help and Practice Problems
Find study help on exponents and roots for algebra. Use the links below to select the specific area of exponents and roots you're looking for help with. Each guide comes complete with an explanation,
example problems, and practice problems with solutions to help you learn exponents and roots for algebra.
|
{"url":"http://www.education.com/study-help/study-help-algebra-exponents-and-roots/","timestamp":"2014-04-17T11:06:27Z","content_type":null,"content_length":"95296","record_id":"<urn:uuid:3836c447-07b7-477e-a339-c9ec584a7fae>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Skewness estimates with svyset data
Re: st: Skewness estimates with svyset data
From "Richard Palmer-Jones" <richard.palmerjones@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Skewness estimates with svyset data
Date Tue, 4 Nov 2008 13:00:40 +0000
Thanks for all your contributions.
I worked out the missing "^3" last night (and the x^3 = (_b[ht])^3 -
good old Yule and Kendall), i.e.:
nlcom ((_b[ht])^3 - 3*_b[ht2] * _b[ht] + 2*(_b[ht])^3)/(_b[ht2] -
_b[ht] * _b[ht])^3/2
but I am not convinced it gives sensible results - but then how to judge?
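For reference, the raw-moment form of skewness that an expression like this is aiming at, writing mk for the (weighted) mean of x^k, is

skewness = (m3 - 3*m1*m2 + 2*m1^3) / (m2 - m1^2)^(3/2)

so the numerator uses the third raw moment and the exponent 3/2 should apply to the whole variance term (i.e. ^(3/2) rather than ^3/2).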
In this dataset (NHANES3) using summarize with weights shows heights
are not greatly skewed at any age; but weights are clearly negatively
skewed up to the age of 5, and positively skewed thereafter (ditto for
BMI). The nlcom calculation is quite close to the estimated skewness
for height but for weight, although Pearson r = .5, the absolute
sizes are not that close (skewness = 0.51 * nlcom - 0.079, r2 = .29,
N=49, both coefs p< .000). The nlcom estimate seriously underestimates
skewness after age 5 compared to the summarize estimate (with weights).
I actually want to compare adult heights, weights, and BMIs in a
situation where nutritional status has apparently been improving quite
rapidly. Heights, weights & BMIs for 25 year olds are greater than
those of 45 year olds (assuming no differential mortalities, which I
doubt). Most programs which compute anthropometry z-scores (zanthro, or
WHO's Anthro macros) are for children or adolescents, so I wanted
something like a zanthro for adults. One might set the standards using
USA or UK health surveys which give heights and weights of adults, but
then one might want to compute skewness both to test for normality
(they are not) and to use the LMS method (Cole et al. 2008) to develop
the standards. Height at each age group for both sexes may not be
normal, but as noted above weights are generally not (different tests
give different results, but omninorm suggests weight is seriously not
normal, and height slightly (p between 0.05 & 0.01) not).
I suspect that pro tem I am better off using summarize, and smoothing
the skewness estimates (and median and cv), but any further advice is welcome.
Thanks again
On Mon, Nov 3, 2008 at 10:11 PM, Maarten buis <maartenbuis@yahoo.co.uk> wrote:
> --- Richard Palmer-Jones <richard.palmerjones@gmail.com> wrote:
>> Thanks - I did check using summarize with weights, and other tests
>> (sktest), and qnorm/pnorm, and generally skewness is no problem, but
>> for some subsamples it may be. I am concerned that stratification
>> is lost by these views.
> It looks like you are worried about normality assumptions. This worries
> me for two reasons: First, these assumptions typically refer to the
> residuals, and not the dependent variable (or in other words the
> dependent variable is normally distributed conditional on the
> explanatory variables). Your reference to subsamples suggests that you
> are not looking at the residuals. Second, when you are doing survey
> methods, you are automatically using robust/Huber/White/sandwich
> estimators, so you are effectively bypassing many if not all the
> normality assumptions.
> Hope this helps,
> Maarten
> -----------------------------------------
> Maarten L. Buis
> Department of Social Research Methodology
> Vrije Universiteit Amsterdam
> Boelelaan 1081
> 1081 HV Amsterdam
> The Netherlands
> visiting address:
> Buitenveldertselaan 3 (Metropolitan), room N515
> +31 20 5986715
> http://home.fsw.vu.nl/m.buis/
> -----------------------------------------
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-11/msg00098.html","timestamp":"2014-04-19T04:37:30Z","content_type":null,"content_length":"9915","record_id":"<urn:uuid:c4e2ca57-b20c-4db1-be35-ee7608808be7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: February 2004 [00683]
Factoring two-dimensional series expansions? (Ince polynomials again)
• To: mathgroup at smc.vnet.net
• Subject: [mg46684] Factoring two-dimensional series expansions? (Ince polynomials again)
• From: AES/newspost <siegman at stanford.edu>
• Date: Sun, 29 Feb 2004 03:16:32 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
This is a math question rather than a Mathematica question, but anyway:
Suppose I have a function f[x,y] that's a power series expansion in
factors x^m y^n , that is,
(1) f[x, y] = Sum[ a[m,n] x^m y^n, {m, 0, mm}, {n, 0, mm} ]
with known a[m,n] coefficients
Are there algorithmic procedures for factoring this function
(analytically or numerically) into a simple product of power series or
simple polynomials in x and y separately, i.e.,
(2) f[x, y] = fx[x] fy[y]
or maybe
(3) f[z1, z2] = fz1[z1] fz2[z2]
where z1 and z2 are linear combinations of x and y ?
Or more realistically, are there tests for *when* or whether the original
function can be so factored?
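One elementary test for the simple product form (2): writing the hoped-for factors as fx[x] = Sum[b[m] x^m] and fy[y] = Sum[c[n] y^n], the product form holds exactly when every coefficient factors as a[m,n] = b[m] c[n], i.e. when the matrix of coefficients (a[m,n]) has rank one. Equivalently, all 2x2 minors vanish:

a[m,n] a[m',n'] == a[m,n'] a[m',n]   for all m, m', n, n'

Numerically, one dominant singular value of the truncated coefficient matrix signals that form (2) is available (up to truncation error); form (3) additionally requires searching over the linear change of variables from (x, y) to (z1, z2).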
The question is motivated by some recent work in paraxial beam
propagation in which the function f[x,y] is actually the sum of
Hermite polynomials, call 'em h[m,x] and h[n,y] for brevity, with
expansion coefficients b[m,n], i.e.
(4) f[x, y] = Sum[ b[m,n] h[m,x] h[n,y], {m, 0, mm}, {n, 0, mm} ]
where the coefficients b[m,n] can be arbitrary but there is a special
constraint that m + n = a constant integer p .
Apparently this expansion can be factored into a product like (3) where
the functions fz1[z1] and fz2[z2] are both some kind of mysterious
"Ince polynomials" and the variables z1 and z2 are elliptical
coordinates in the x,y plane, with the elliptical coordinate system
varying with the choice of the coefficients b[m,n] .
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Feb/msg00683.html","timestamp":"2014-04-18T03:10:06Z","content_type":null,"content_length":"35708","record_id":"<urn:uuid:036e7d85-80f5-4178-adb9-8808b17ae50f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Model Not Showing up on screen...
Model Not Showing up on screen...
Okay, so here is my new update function...
void Update()
{
    this->GetFinalMatrix() = ParentNode->GetFinalMatrix() * LocalMatrix;
}
This updates the transformation to be relative to the parent's final transformation (kinda like car wheels, or the flame on a torch, etc.)...
Everything is multiplying correctly, all the matrices (4x4 matrices) are initialized as identity matrices, I haven't done any rotation or translation yet, this->GetFinalMatrix() should look like
this when we send it to the glLoadMatrixf function:
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
This should be the same as calling glTranslatef(0.0, 0.0, 0.0) in place of the glLoadMatrix function...
Unfortunately this isn't working, the model isn't showing up on the screen, I've debugged and made sure that the final matrix being send to the gl pipeline IS a 4x4 identity matrix..
Now I'm getting to the point where I believe that this is an opengl issue I'm having, maybe misunderstanding the functions?
What am I doing wrong to not have my model show up on my screen?
EDIT: Could it be that the final matrix is an identity matrix, but is in the form of FinalMatrix[16] and not FinalMatrix[4][4] ?
Sometimes matrices are stored vertically instead of horizontally, if you know what I mean. And what library are you using to load the model in the first place? If you wrote it yourself, maybe you
should try using a pre-built library, and once you get that working, rewrite your existing library.
Also, (I'm trying to load models myself) when I tried it, it didn't work at first because my hand-made model was way too small! It looked fine in the modelling program, but when I compared it to other models that I had downloaded from the internet, I realized that my model had to be much larger. After I scaled it up a bit, it worked. Now all I have to do is texture map this thing.....
It isn't the model loading I'm having trouble with, I know that works, it's tried and tested :), it is the translating through matrices that is giving me trouble.
I know OpenGL uses a left-handed coordinate system, I think, but I didn't think I would really have to worry about that....
What to do what to do..
|
{"url":"http://cboard.cprogramming.com/game-programming/77749-model-not-showing-up-screen-printable-thread.html","timestamp":"2014-04-20T03:46:16Z","content_type":null,"content_length":"9459","record_id":"<urn:uuid:fe07d07b-09df-4baf-bda4-1862cb0064a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are bicategories of lax functors also bicategories of pseudofunctors?
Let A be a bicategory. Can I construct in some "natural" way a bicategory L(A) such that for every bicategory B, the bicategory of lax functors Lax(A, B) is (bi)equivalent to the bicategory of
pseudofunctors Pseudo(L(A), B)? (Choose the morphisms in these functor categories as you wish. Ideally the same construction L would work for any choice.)
For example, when A = •, I believe we may take L(A) = BΔ[+], the delooping of the augmented simplex category with monoidal structure given by ordinal sum. In general I imagine L(A) as being formed as
something like the free category on the objects, 1-morphisms and 2-morphisms of A--denote the generator corresponding to a 1-morphism f of A by [f]--as well as 2-morphisms id → [id] and [f] o [g] →
[f o g] for every composable f and g in A.
(By Chris's question here, L(A) cannot be an equivalence invariant of A.)
ct.category-theory higher-category-theory
Reid, I think there's a typo. You should have "Pseudo(L(A), B)" in the second sentence. – Tom Leinster Dec 4 '09 at 12:44
Yup, thanks. Fixed. – Reid Barton Dec 4 '09 at 15:47
2 Answers
Yes, there is. A relevant general framework is the following: for any 2-monad T, we can define notions of pseudo and lax morphism between T-algebras, and there is a forgetful functor from
the 2-category of T-algebras and pseudo morphisms to the 2-category of T-algebras and lax morphisms. If T is well-behaved, this forgetful functor has a left adjoint; see for instance this
There is a 2-monad on the 2-category of Cat-graphs whose algebras are bicategories, whose pseudo morphisms are pseudofunctors, and whose lax morphisms are lax functors. Therefore, the above
applies to bicategories. If you trace through the construction, you'll see that it is given essentially by the recipe you proposed. (This case of the construction can probably be found
elsewhere in the literature as well, in more explicit form, but this is the way I prefer to think about it.)
The caveat is that the 2-cells in the 2-categories defined above are not any of the usual sort of transformations between bicategories, only the icons. (This is what allows you to have a
2-category containing lax functors.) However, the usual sorts of transformations are "corepresentable," that is, for any bicategory D there is a bicategory Cyl(D) such that pseudo or lax
functors into Cyl(D) are the same as pairs of pseudo or lax functors into D and a pseudo (or lax, with a different definition of Cyl) natural transformation between them, and likewise we
have 2Cyl(D) for modifications. I believe one can use this to show that in this case, the construction coming from 2-monad theory does have the property you want.
Of course, by Chris' question, it seems that this version of L cannot itself be described as a left adjoint, since there is no 2- or 3-category containing lax functors and arbitrary pseudo/
lax transformations.
For 2-categories there is a proof in "Formal Category Theory I" by J. W. Gray, LNM 391 (see I,4.23, p. 92). It is mentioned there that J. Bénabou gave a general proof for bicategories.
|
{"url":"http://mathoverflow.net/questions/7762/are-bicategories-of-lax-functors-also-bicategories-of-of-pseudofunctors","timestamp":"2014-04-21T05:15:29Z","content_type":null,"content_length":"57291","record_id":"<urn:uuid:85d3f8d1-df75-4c13-9db3-1d1df355c184>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Unit Conversions
(dimensional analysis)
Converting one unit to another is something we do all the time. For example, we convert quarters to dollars, cups to liters, or days to weeks. Many of these conversions we can do in our heads, but sometimes the problems are a little more difficult - a technique becomes necessary. This technique of converting from one unit to another is often called dimensional analysis.
In dimensional analysis we use conversion factors to successively convert from the initial unit to the desired unit. An example of a conversion factor is 4 quarters = 1 dollar. Equivalently we can write the unit ratios (4 quarters/1 dollar or 1 dollar/4 quarters). If we want to convert $120 to quarters we proceed as follows:

$120 x (4 quarters / 1 dollar) = 480 quarters

Note how the dollar is placed in the denominator to cancel the dollar in the numerator. The unit "quarters", which is what we want, is all that is left.
A more difficult example might be the following: How many meters are there in a 100 yard football field? The plan of attack would be yards -> feet -> inches -> cm -> m:
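Written out with the familiar factors 3 ft = 1 yd, 12 in = 1 ft and 2.54 cm = 1 in, the chain is:

100 yd x (3 ft / 1 yd) x (12 in / 1 ft) x (2.54 cm / 1 in) x (1 m / 100 cm) = 91.44 m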
Derived units are converted in a similar way. Suppose we want to convert 60 mph to meters/second. The plan might be miles -> feet -> inches -> cm -> m and hour -> minutes -> seconds:
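With 5280 ft = 1 mi, 60 min = 1 hr and 60 s = 1 min added to the factors above, the chain works out to:

60 mi/hr x (5280 ft / 1 mi) x (12 in / 1 ft) x (2.54 cm / 1 in) x (1 m / 100 cm) x (1 hr / 60 min) x (1 min / 60 s) ≈ 26.8 m/s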
Interestingly, this result states that a car traveling at 60 mph will travel almost 27 meters every second!
If we want to convert areas or volumes we must be careful keeping proper track of the squares and cubes. Suppose we want to convert a 10 cm x 12 cm x 15 cm rectangular box to cubic inches. We need to
use the conversion factor 2.54 cm = 1 in in the following way:
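Keeping the cube on the conversion factor, the 10 cm x 12 cm x 15 cm = 1800 cm^3 box converts as:

1800 cm^3 x (1 in / 2.54 cm)^3 = 1800/16.387 in^3 ≈ 110 in^3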
Note that the cubic centimeters cancel only because the conversion factor is cubed.
Sometimes it is possible to convert between different kinds of units, e.g., mass to volume. This kind of conversion will be discussed later.
|
{"url":"http://www.iun.edu/~cpanhd/C101webnotes/measurements/unit-conversions.html","timestamp":"2014-04-18T08:45:28Z","content_type":null,"content_length":"4840","record_id":"<urn:uuid:046dbb86-95b9-42c9-8c8d-31d4b4b41da2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
September 17th 2007, 10:59 PM
Which degenerate form or forms of the parabola cannot be obtained from the intersection of a plane and a double-napped cone? Describe how to obtain this (or these) form(s).
I will greatly appreciate your help.
September 18th 2007, 04:48 AM
Given this definition of a degenerate conic section, which I think is standard, I can't see how to make a parabola from a degenerate form. Unless the answer is supposed to be a pair of
intersecting lines.
Edit: No, upon more reflection, the degenerate parabola should just be a single line passing through the apex.
September 18th 2007, 05:02 AM
Hello, Ivan!
Which degenerate form or forms of the parabola cannot be obtained
from the intersection of a plane and a double-napped cone?
Describe how to obtain this (these) form(s).
A parabola is formed by the intersection of a double-napped cone
. . and a plane parallel to its "slant".
One degenerate parabola, a line, is obtained by passing the plane
. . through the vertex of the cone.
The other degenerate parabola, a point, cannot be obtained.
It can be formed with a plane through the vertex
. . which is not parallel to the cone's slant.
|
{"url":"http://mathhelpforum.com/pre-calculus/19113-conics-print.html","timestamp":"2014-04-23T17:45:11Z","content_type":null,"content_length":"5653","record_id":"<urn:uuid:38d46e4c-ab57-4f81-bf12-74874f3d60e8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weighted Average
The weighted average formula is used to calculate the average value of a particular set of numbers with different levels of relevance. The relevance of each number is called its weight. The weights
should be represented as a percentage of the total relevancy. Therefore, the weights should add up to 100%, or 1.
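In symbols, with values x1, x2, ..., xn and weights w1, w2, ..., wn, the weighted average is

weighted average = w1x1 + w2x2 + ... + wnxn, where w1 + w2 + ... + wn = 1.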
The most common formula used to determine an average is the arithmetic mean formula. This formula adds all of the numbers and divides by the amount of numbers. An example would be the average of 1,2,
and 3 would be the sum of 1 + 2 + 3 divided by 3, which would return 2. However, the weighted average formula looks at how relevant each number is. Say that 1 only happens 10% of the time while 2 and
3 each happen 45% of the time. The percentages in this example would be the weights. The weighted average would be 2.35.
The weighted average formula is a general mathematical formula, but the following information will focus on how it applies to finance.
Use of Weighted Average Formula
The concept of weighted average is used in various financial formulas. Weighted average cost of capital (WACC) and weighted average beta are two examples that use this formula.
Another example of using the weighted average formula is when a company has a wide fluctuation in sales, perhaps due to producing a seasonal product. If the company would like to calculate the
average of one of their variable expenses, the company could use the weighted average formula with sales as the weight to gain a better understanding of their expenses compared to how much they
produce or sell.
Example of Weighted Average Formula
A basic example of the weighted average formula would be an investor who would like to determine his rate of return on three investments. Assume the investments are proportioned accordingly: 25% in
investment A, 25% in investment B, and 50% in investment C. The rate of return is 5% for investment A, 6% for investment B, and 2% for investment C. Putting these variables into the formula gives (0.25)(5%) + (0.25)(6%) + (0.50)(2%) = 3.75%,
which is the weighted average return on the total amount invested. If the investor had made the mistake of using the arithmetic mean, the incorrect return on investment calculated
would have been 4.33%. This considerable difference between the calculations shows how important it is to use the appropriate formula to have an accurate analysis on how profitable a company is or
how well an investment is doing.
Formulas related to Weighted Average: Geometric Mean Return
|
{"url":"http://www.financeformulas.net/Weighted_Average.html","timestamp":"2014-04-16T04:23:44Z","content_type":null,"content_length":"13789","record_id":"<urn:uuid:6c32e14b-30a0-4223-bf08-eea16797918b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How should I have solved this?
January 19th 2011, 07:45 AM #1
Sep 2009
North West England
How should I have solved this?
So just got my exam and there was this question on it that I just didn't know where to start, wondered if any of you could explain to me how it should have been done.
Had to show that $(cosecx)/(cosecx+1)+(cosecx)/(cosecx-1)=50$ could be written as $sec^2x=25$
I'm pretty sure that was the question; if it's not possible then the + is probably a - after the first fraction.
Also the next part was to solve $sec^2x = 25$ and all my friends got 2 answers and I got 4, does that sound right?
Only identities we can use in this exam are
and all the 1 overs for tan sin and cos.
Throughout I will use $\csc(x) = cosec(x)$ because I'm lazy
A good start will be to get a common denominator which will be $(\csc(x)+1)(\csc(x)-1)$.
$\dfrac{\csc(x)(\csc(x)-1)}{(\csc(x)+1)(\csc(x)-1)} + \dfrac{\csc(x)(\csc(x)+1)}{(\csc(x)-1)(\csc(x)+1)} = \dfrac{\csc(x)(\csc(x)-1) + \csc(x)(\csc(x)+1)}{(\csc(x)-1)(\csc(x)+1)}$
When you expand you get (note that the denominator is difference of two squares)
$\dfrac{\csc^2(x)-\csc(x) + \csc^2(x)+\csc(x)}{\csc^2(x)-1} = \dfrac{2\csc^2(x)}{\csc^2(x)-1}$
edit: don't want to give the whole working away so put some in a spoiler. Look if you want, tis no water off my back
|
{"url":"http://mathhelpforum.com/trigonometry/168764-how-should-i-have-solved.html","timestamp":"2014-04-18T18:17:16Z","content_type":null,"content_length":"33024","record_id":"<urn:uuid:4ec33b05-756f-430e-8657-6e5d4fe2487e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finite Element Methods for Biharmonic Problem
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 863125, 19 pages
Research Article
Constrained Finite Element Methods for Biharmonic Problem
College of Mathematics and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035, China
Received 12 September 2012; Revised 29 November 2012; Accepted 29 November 2012
Academic Editor: Allan Peterson
Copyright © 2012 Rong An and Xuehai Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
This paper presents some constrained finite element approximation methods for the biharmonic problem, which include the symmetric interior penalty method, the nonsymmetric interior penalty method,
and the nonsymmetric superpenalty method. In the finite element spaces, the continuity across the interelement boundaries is obtained weakly by the constrained condition. For the symmetric interior
penalty method, the optimal error estimates in the broken norm and in the norm are derived. However, for the nonsymmetric interior penalty method, the error estimate in the broken norm is optimal and
the error estimate in the norm is suboptimal because of the lack of adjoint consistency. To obtain the optimal error estimate, the nonsymmetric superpenalty method is introduced and the optimal error
estimate is derived.
1. Introduction
The discontinuous Galerkin methods (DGMs) have become a popular method to deal with partial differential equations, especially for the nonlinear hyperbolic problem, in which discontinuous solutions can arise even when the data are smooth, and for the convection-dominated diffusion problem and the advection-diffusion problem. For the second-order elliptic problem, according to the different
numerical fluxes, there exist different discontinuous Galerkin methods, such as the interior penalty method (IP), the nonsymmetric interior penalty method (NIPG), and local discontinuous Galerkin
method (LDG). A unified analysis of discontinuous Galerkin methods for the second-order elliptic problem is studied by Arnold et al. in [1].
The DGM for the fourth-order elliptic problem can be traced back to 1970s. Baker in [2] used the IP method to study the biharmonic problem and obtained the optimal error estimates. Moreover, for IP
method, the C^0 and C^1 continuity can be achieved weakly by the interior penalty. Recently, using the IP method and the NIPG method, Süli and Mozolevski in [3–5] studied the hp-version DGM for the biharmonic problem,
where the error estimates are optimal with respect to the mesh size h and are suboptimal with respect to the degree p of the piecewise polynomial approximation. However, we observe that the bilinear
forms and the norms corresponding to the IP method in [3–5] are rather complicated. One way to simplify the bilinear forms and the norms is to use a C^0 interior penalty method. The C^0 interior penalty method for
the biharmonic problem was introduced by Babuška and Zlámal in [6], where they used the nonconforming element and considered the inconsistent formulation and obtained a suboptimal error estimate.
Motivated by the work of Engel and his collaborators [7], Brenner and Sung in [8] studied the C^0 interior penalty method for fourth-order problems on polygonal domains. They used the finite element solution to approximate the solution by a postprocessing procedure, and the C^1 continuity can be achieved weakly by the penalty on the jump of the normal derivatives on the interelement boundaries.
In this paper, thanks to Rivière et al.'s idea in [9], we will study some constrained finite element approximation methods for the biharmonic problem. The C^1 continuity can be weakly achieved by a constrained condition that the integral of the jump of the normal derivatives over the inter-element boundaries vanishes. Under this constrained condition, we discuss three finite element methods which
include the symmetric interior penalty method based on the symmetric bilinear form, the nonsymmetric interior penalty method, and nonsymmetric superpenalty method based on the nonsymmetric bilinear
forms. First, we study the symmetric interior penalty method and obtain the optimal error estimates in the broken norm and in norm. However, for the nonsymmetric interior penalty method, the norm is
suboptimal because of the lack of adjoint consistency. Finally, in order to improve the order of the error estimate, we give the nonsymmetric superpenalty method and show the optimal error estimates.
2. Finite Element Approximation
Let Ω be a bounded and convex domain with boundary ∂Ω. Consider the following biharmonic problem (2.1): Δ²u = f in Ω, with u = ∂u/∂n = 0 on ∂Ω, where n denotes the unit external normal vector to ∂Ω. We assume that f is sufficiently smooth such that the
problem (2.1) admits a unique solution u.
Let be a family of nondegenerate triangular partition of into triangles. The corresponding ordered triangles are denoted by . Let , and . The nondegenerate requirement is that there exists such that
contains a ball of radius in its interior. Conventionally, the boundary of is denoted by . We denote
Assume that the partition is quasiuniform; that is, there exists a positive constant such that Let and be the set of interior edges and boundary edges of , respectively. Let . Denote by the
restriction of to . Let with . Then we denote the jump and the average of on by If , we denote and of on by
Define by with broken norm where is the standard Sobolev norm in . Define the broken norm by where is the seminorm in .
For every and any , we apply the integration by parts formula to obtain Summing all , we have Since and , then on . Thus, the previous identity can be simplified as follows:
Now, we introduce the following two bilinear forms: It is clear that is a symmetric bilinear form and is a nonsymmetric bilinear form. In terms of (2.11) and the solution to problem (2.1) satisfies
the following variational problems:
Let denote the space of the polynomials on of degree at most . Define the following constrained finite element space: from which we note that the continuity of can be weakly achieved by the
constrained condition for all . Next, we define the degrees of freedom for this finite element space. To this end, for any , denote by the three vertices of . Recall that the degrees of freedom of
Lagrange element on are , for all with (cf. [10]) Then we modify the degrees of freedom of Lagrange element to suit the constraint of normal derivatives over the edges in . Specifically speaking, the
degrees of freedom of are given by
Based on the symmetric bilinear form , the symmetric interior penalty finite element approximation of (2.1) is Based on the nonsymmetric bilinear form , the nonsymmetric interior penalty finite
element approximation of (2.1) is Moreover, the following orthogonal equations hold:
In order to introduce a global interpolation operator, we first define for and according to the degrees of freedom of by Due to standard scaling argument and Sobolev embedding theorem (cf. [10]), we
have that for every with , , where and is independent of . We also suppose that the following inverse inequalities hold: where is independent of . Then for every , we define the global interpolation
operator by . Moreover, from (2.22) there holds where is independent of .
The following lemma is useful to establish the existence and uniqueness of the finite element approximation solution.
Lemma 2.1. There exists some constant independent of such that
Proof. Introduce a conforming finite element space thanks to Guzmán and Neilan [11]. The advantage of is that the degrees of freedom depend only on the values of functions and their first
derivatives. Denote by the interpolation operator from to . Then there holds where is independent of . Thus, we have which completes the proof of (2.25) with .
3. Symmetric Interior Penalty Method
In this section, we will show the optimal error estimates in the broken norm and in the norm between the solution to problem (2.1) and the solution to the problem (2.17). First, concerning the
symmetric , we have the following coercive property in .
Lemma 3.1. For sufficiently large , there exists some constant such that
Proof. According to the definition of , we have Using the Hölder’s inequality and the Young’s inequality, we have where is independent of and is a sufficiently small constant. Thus Taking , then, for
sufficiently large such that , using (2.25) we have with .
A direct result of Lemma 3.1 is that the discretized problem (2.17) admits a unique solution for sufficiently large .
Lemma 3.2. For all , there holds where and is independent of .
Proof. For all and , we have where is independent of . Note that for . Thus, for some constant depending on there holds That is From [9], we have where is independent of . Thus Substituting (3.11)
into (3.7) and using (2.22)-(2.23) give
Theorem 3.3. Suppose that and are the solutions to problems and (2.17), respectively; then the following optimal broken error estimate holds: where and is independent of .
Proof. According to Lemma 3.1, we have where we use the orthogonal equation (2.19) and Lemma 3.2. The previous estimate implies Finally, the triangular inequality and (2.24) yield
Next, we will show the optimal error estimate in terms of the duality technique. Suppose and consider the following biharmonic problem: Suppose that problem (3.17) admits a unique solution such that
where denotes the norm in and denotes the norm in and is independent of .
Denote by the continuous interpolate operator from to , and satisfies the approximation property (2.22). Then for the solution to problem (3.17), there hold where is independent of .
Theorem 3.4. Suppose that and are the solutions to problems and (2.17), respectively; then the following optimal error estimate holds: where and is independent of .
Proof. Setting in (3.17), multiplying (3.17) by , and integrating over , we have where we use the orthogonal equation (2.19). We estimate two terms in the right-hand side of (3.21) as follows: where
we use the estimate (3.15). In terms of the inequalities (2.22)-(2.23), we have Substituting the estimates (3.22)-(3.23) into (3.21) yields
4. Nonsymmetric Interior Penalty Method
In this section, we will show the error estimates in the broken norm and in norm between the solution to problem (2.1) and the solution to the problem (2.18). The optimal broken error estimate is
derived. However, the error estimate is suboptimal because of the lack of adjoint consistency. According to Lemma 2.1, we have Moreover, for the nonsymmetric bilinear form , proceeding as in the
proof of Lemma 3.2, we have the following lemma.
Lemma 4.1. For all , there holds where and is independent of .
Theorem 4.2. Suppose that and are the solutions to problems and (2.18), respectively; then there holds where and is independent of .
Proof. According to (2.20), (4.1), and Lemma 4.1, we have where is independent of . That is Using the triangle inequality yields
Theorem 4.3. Suppose that and are the solutions to problems and (2.18), respectively; then there holds where and is independent of .
Proof. Setting in (3.17), multiplying (3.17) by , and integrating over , we have where we use . We estimate as follows: We estimate as follows: We estimate as follows: where is some positive
constant. Substituting the following estimate into (4.11) gives Finally, substituting (4.9)–(4.13) into (4.8), we obtain
5. Superpenalty Nonsymmetric Method
In order to obtain the optimal error estimate for the nonsymmetric method, in this section we will consider the superpenalty nonsymmetric method. First, we introduce a new nonsymmetric bilinear form:
The broken norm is modified to From Lemma 2.1, it is easy to show that there exists some constant such that In fact, we have for . Since the solution to problem (2.1) belongs to , then it satisfies
In this case, the superpenalty nonsymmetric finite element approximation of (2.1) is Then, we have the following orthogonal equation:
Let be the continuous interpolation operator defined in Section 3. Observe that on every , and proceeding as in the proof of Lemmas 3.2 and 4.1, we have the following.
Lemma 5.1. For all , there holds where and is independent of .
Using Lemma 5.1, the following optimal broken error estimate holds.
Theorem 5.2. Suppose that and are the solutions to problems and (5.5), respectively; then there holds where and is independent of .
The main result in this section is the following optimal error estimate.
Theorem 5.3. Suppose that and are the solutions to problems and (5.5), respectively; then there holds where and is independent of .
Proof. Let . Multiplying (3.17) by and integrating over , we have where we use . Proceeding as in the proof of Theorem 4.2, we can estimate and as follows: The different estimate compared to Theorem
4.3 is the estimate of . Under the new norm , we have Substituting (5.11)–(5.13) into (5.10) yields the following optimal error estimate:
The authors would like to thank the anonymous reviewers for their careful reviews and comments to improve this paper. This material is based upon work funded by the National Natural Science
Foundation of China under Grants no. 10901122, no. 11001205, and no. 11126226 and by Zhejiang Provincial Natural Science Foundation of China under Grants no. LY12A01015 and no. Y6110240.
1. D. N. Arnold, F. Brezzi, B. Cockburn, and L. D. Marini, “Unified analysis of discontinuous Galerkin methods for elliptic problems,” SIAM Journal on Numerical Analysis, vol. 39, no. 5, pp. 1749–1779, 2002.
2. G. A. Baker, “Finite element methods for elliptic equations using nonconforming elements,” Mathematics of Computation, vol. 31, no. 137, pp. 45–59, 1977.
3. I. Mozolevski and E. Süli, “A priori error analysis for the hp-version of the discontinuous Galerkin finite element method for the biharmonic equation,” Computational Methods in Applied Mathematics, vol. 3, no. 4, pp. 596–607, 2003.
4. I. Mozolevski, E. Süli, and P. R. Bösing, “hp-version a priori error analysis of interior penalty discontinuous Galerkin finite element approximations to the biharmonic equation,” Journal of Scientific Computing, vol. 30, no. 3, pp. 465–491, 2007.
5. E. Süli and I. Mozolevski, “hp-version interior penalty DGFEMs for the biharmonic equation,” Computer Methods in Applied Mechanics and Engineering, vol. 196, no. 13–16, pp. 1851–1863, 2007.
6. I. Babuška and M. Zlámal, “Nonconforming elements in the finite element method with penalty,” SIAM Journal on Numerical Analysis, vol. 10, pp. 863–875, 1973.
7. G. Engel, K. Garikipati, T. J. R. Hughes, M. G. Larson, L. Mazzei, and R. L. Taylor, “Continuous/discontinuous finite element approximations of fourth-order elliptic problems in structural and continuum mechanics with applications to thin beams and plates, and strain gradient elasticity,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 34, pp. 3669–3750, 2002.
8. S. C. Brenner and L.-Y. Sung, “$C^0$ interior penalty methods for fourth order elliptic boundary value problems on polygonal domains,” Journal of Scientific Computing, vol. 22-23, pp. 83–118, 2005.
9. B. Rivière, M. F. Wheeler, and V. Girault, “Improved energy estimates for interior penalty, constrained and discontinuous Galerkin methods for elliptic problems. I,” Computational Geosciences, vol. 3, no. 3-4, pp. 337–360, 1999.
10. P. G. Ciarlet, The Finite Element Method for Elliptic Problems, Studies in Mathematics and Its Applications, North-Holland Publishing, Amsterdam, The Netherlands, 1978.
11. J. Guzmán and M. Neilan, “Conforming and divergence-free Stokes elements on general triangular meshes,” Mathematics of Computation, in press.
Fourier series to calculate an infinite series
August 4th 2010, 01:36 PM
Fourier series to calculate an infinite series
I'm not asking the solution, but rather the main idea of how to solve the part 2:
1)Find the Fourier development of $f(t)=|t|$ with $-\pi \leq t \leq \pi$.
2)Show using the previous result that $\sum _{k=0}^{\infty} \frac{1}{(2k+1)^2}=\frac{\pi ^2}{8}$.
Attempt: Is it just by using Parseval's theorem?
August 4th 2010, 01:49 PM
I'm not asking the solution, but rather the main idea of how to solve the part 2:
1)Find the Fourier development of $f(t)=|t|$ with $-\pi \leq t \leq \pi$.
2)Show using the previous result that $\sum _{k=0}^{\infty} \frac{1}{(2k+1)^2}=\frac{\pi ^2}{8}$.
Attempt: Is it just by using Parseval's theorem?
I do believe you could use Parseval's theorem for part 2.
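For what it's worth, here is a sketch of the more direct route for part 2: evaluate the series from part 1 at $t=0$. (Parseval's theorem applied to the same series actually produces $\sum_{k=0}^{\infty}\frac{1}{(2k+1)^4}=\frac{\pi^4}{96}$, so it gives a different sum.) Since $|t|$ is even, $b_n=0$, $a_0=\frac{1}{\pi}\int_{-\pi}^{\pi}|t|\,dt=\pi$, and for $n\ge 1$, $a_n=\frac{2}{\pi}\int_0^{\pi}t\cos(nt)\,dt=\frac{2((-1)^n-1)}{\pi n^2}$, which is $0$ for even $n$ and $-\frac{4}{\pi n^2}$ for odd $n$. Hence $|t|=\frac{\pi}{2}-\frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\cos((2k+1)t)}{(2k+1)^2}$, and putting $t=0$ gives $0=\frac{\pi}{2}-\frac{4}{\pi}\sum_{k=0}^{\infty}\frac{1}{(2k+1)^2}$, i.e. $\sum_{k=0}^{\infty}\frac{1}{(2k+1)^2}=\frac{\pi^2}{8}$. (These coefficients are what part 1 should produce; check them against your own answer.)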
learning teen numbers in kindergarten - KindergartenWorks
Are you beginning to get your school “groove” for the year? Right around day 20-25 is when I think that students really begin to hear your voice. Clarification, they have heard your voice enough over
that time period that it can begin to pierce their little current train of thought and you can more easily and quickly gain their attention. It took me a couple of years to learn this, but it's almost
like magic when it takes hold! We are in our fifties at this point in the school year and have come such a long way!
In math, I really feel like we are making great strides and have found the routine and groove for making it all happen! The last three times I met with my math groups our main focus was on answering
the question, “Why do teens have a 1?”
“Why do teens have a 1?”
The first day was almost brutal as we started with a discussion and didn’t get very far. We used some manipulatives and made some teen numbers to try and find out why, but nothing really seemed to
stick by the end of our exploration. We used a decomposing statement to write down what we made and boy did I make a big deal out of this number one that we saw!
We’ve seen this statement every day since the beginning of school as part of our calendar routine, but now we took our fingers and rainbow slid the number from the statement to the correct place
value in the first box.
I don’t want to say that I expected it, but I figured we’d need a few times of hitting this to really get the concept. Oh boy. Some of them may have appeared more confused than anything. Would we
really be able to learn our teens better by understanding why 13 looks the way it does? This is my hope.
Would we really be able to learn our teens better by understanding why 13 looks the way it does?
The second time we met, they looked like deer in the headlights as I re-asked the question, “Why do teens have a 1?” But would you know it that almost one in each group was able to communicate enough
to share that they had gotten it… just when I thought no one had connected the dots. But, that’s just 1 out of 6 in a group! So, we began exploring again…
Using ten frame foam mats and bingo chips as counters, we made teen numbers and explored numbers larger (depending upon the group) to see why they don’t have a one. Well, this began to grow some
excitement. Their eyes lit up in three out of four groups to begin to see that they could form larger numbers in counting by tens and then switching to ones. Using our clapping and snapping they
wanted to put their counters and a partner’s counters together. Then they wanted to put them together with another, and then combine them all!
Over the duration of one additional lesson, we had enough time and practice to start small and work up to big numbers (up to 91), all the while writing our decomposing statement as we went that many
of them are really figuring it out!
things are really beginning to click
Things are really beginning to click and they are seeing the why behind this portion of our calendar time, more than just the routine of doing it everyday. So now, they smile when I ask, “Why do
teens have a one?” because we are proud to feel like we’ve cracked the code as we are learning more about these numbers!
I am looking forward to the next weeks climbing into December to work on solidifying, working through misconceptions as they come and continuing to excite them about those big numbers and using what
they are learning about counting by tens and ones to put it all together!
I share this since it's something different I am trying this year: instead of first learning what each numeral looks like (I know how much we kinder teachers love 11, 12, 13 and 15 {wink}) and then
learning how to make it, we are relying on our rote counting and 1:1 counting skills to make it first and then determine why it looks the way it does. By trying it this way we are hitting all of
these Common Core Standards:
• K.NBT.1.a Compose numbers from 11-19 from a group of ten ones and additional ones using objects.
• K.NBT.1.b Decompose numbers from 11-19 into a group of 10 ones and additional ones using objects.
• K.CC.3.c. Print numbers from 0-20 when prompted
• K.CC.3.d. Recognize numbers from 11-20 out of sequence.
• K.CC.5.a. Count up to 20 objects that are in an order by answering the question “how many”.
More Math
More K.NBT (Composing/Decomposing)
Let me remember this the next time I want to try something new and it feels like I’m trying to herd a group of cats up Mount Sinai… that with a question and guided exploration we can make it click!
If you like what I do here on KindergartenWorks, then be sure to subscribe today. I look forward to sharing ideas with you weekly.
- Leslie
This is awesome! Thanks for sharing!!!
This looks amazing and I would love to use it in my classroom, however the downloads don’t seem to work. Any suggestions?
Thank you!
• http://www.kindergartenworks.com Leslie @KindergartenWorks
Hi Alyssa,
Perhaps try a different device or web browser. Those tricks might just make downloading the freebies a bit easier.
- Leslie
Reading dependencies from polytree-like Bayesian networks
"... We present a sound and complete graphical criterion for reading dependencies from the minimal undirected independence map G of a graphoid M that satisfies weak transitivity. Here, complete means
that it is able to read all the dependencies in M that can be derived by applying the graphoid properties ..."
Add to MetaCart
We present a sound and complete graphical criterion for reading dependencies from the minimal undirected independence map G of a graphoid M that satisfies weak transitivity. Here, complete means that
it is able to read all the dependencies in M that can be derived by applying the graphoid properties and weak transitivity to the dependencies used in the construction of G and the independencies
obtained from G by vertex separation. We argue that assuming weak transitivity is not too restrictive. As an intermediate step in the derivation of the graphical criterion, we prove that for any
undirected graph G there exists a strictly positive discrete probability distribution with the prescribed sample spaces that is faithful to G. We also report an algorithm that implements the
graphical criterion and whose running time is considered to be at most O(n^2(e+n)) for n nodes and e edges. Finally, we illustrate how the graphical criterion can be used within bioinformatics to
identify biologically meaningful gene dependencies.
Re: st: Re: xtabond2 - what does large N mean?
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: Re: xtabond2 - what does large N mean?
From "tutor um" <tutor2005um@googlemail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Re: xtabond2 - what does large N mean?
Date Thu, 10 May 2007 17:58:57 +0200
Thank you Rodrigo for your quick answer! That helped.
Another question concerning the size of N. If N=100 and xtabond uses
e.g. 100 instruments, would this still yield ok results? where would
be the limit?
Any help is appreciated.
Thank you.
On 5/9/07, Rodrigo A. Alfaro <raalfaroa@gmail.com> wrote:
N means number of groups, countries in your case. Your N (160) is fine for
few periods (T), maybe less than 8. You could check that the AB estimator is
not appropriate for T large. See 2 econometrica papers: Hahn-Kuersteiner
(2002) and Alvarez-Arellano (2003). Likely to find the working papers on the
----- Original Message -----
From: "tutor um" <tutor2005um@googlemail.com>
To: <statalist@hsphsun2.harvard.edu>
Sent: Wednesday, May 09, 2007 2:03 PM
Subject: st: xtabond2 - what does large N mean?
> Hello everybody,
> I am trying to use xtabond2 for a panel dataset of around 160 countries.
> In the paper "How To Do xtabond2"
> (http://repec.org/nasug2006/howtodoxtabond2.cgdev.pdf) Roodman on page
> 1 (line 2) refers ro large N panels.
> Quote: "Both are general estimators designed for situations with 1)
> "small T, large N" panels,
> meaning few time periods and many individuals; ..."
> I am wondering what exactly is meant by N?
> Is it the number of observations in the panel (in my case around 800)?
> Or is it the number of Groups (in my case the countries) in the panel?
> Also in what respect is N meant to be compared?
> xtabond2 warns if the number of instruments is too big. Does that in
> reverse mean everything is fine if it doesn´t?
> Are the results not suitable for interpretation at all if a warning
> has been displayed while estimating? Or is it a hint for extra caution
> when interpreting results since they are displayed anyways.
> I hope someone can help me with my questions.
> Thank you so much for your help!
> Andreas Marns
> *
> * For searches and help try:
> * http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
RE: st: Does Blasnik's Law apply to -use-?
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
RE: st: Does Blasnik's Law apply to -use-?
From "Newson, Roger B" <r.newson@imperial.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Does Blasnik's Law apply to -use-?
Date Sun, 16 Sep 2007 15:19:09 +0100
Thanks to David Elliot, Mike Blasnik and David Airey for their very
helpful and detailed replies to my query. These shall be used to inform
the first Stata 10 update to -parmby-, when I have Stata 10.
And thanks also to Vince Wiggins, who warned me (during the 13th UK
Stata User Meeting last week) of the dangers of ordinary users trying to
get too deep into the undocumented _prefix suite of commands, used
internally by StataCorp for -statsby- and other prefixes. (In Stata,
whelp _prefix
to find out more about these.)
Best wishes
Roger Newson
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: r.newson@imperial.ac.uk
Web page: www.imperial.ac.uk/nhli/r.newson/
Departmental Web page:
Opinions expressed are those of the author, not of the institution.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of David Elliott
Sent: 14 September 2007 15:07
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Does Blasnik's Law apply to -use-?
Being Stata users, we should approach this in a rigorous scientific manner:
program define intest
version 9.0
*! version 1.0.0 2007.09.13
*! Simulate using part of file with in #/##
*! by David C. Elliott
*! using name of trial dataset
*! postname specifies filename of postfile
*! numblocks is number of file blocks to create
syntax using/ ,POSTname(string) NUMblocks(int)
local more `c(more)'
set more off
use `using', clear //Load first to eliminate any first pass caching
local recblock = round(`c(N)'/`numblocks',1)
tempname post
postfile `post' double block float timein timeif using `postname', every(10) replace
timer clear 1
n di _n(2) "{txt}{col 11}{center 10:-- IF --}{center 10:-- IN --}" _n ///
    "{center 10:Block}{center 10:Time}{center 10:Time}" _n ///
    "{hline 30}"
local lastblock = `c(N)' - `recblock'
forvalues i=1(`recblock')`lastblock' {
    local block = `i'
    foreach I in if in {
        if "`I'" == "in" {
            local ifin in `i'/`=`i'+`recblock''
        }
        else {
            local ifin if inrange(_n, `i', `=`i'+`recblock'')
        }
        timer on 1
        use `using' `ifin', clear
        timer off 1
        qui timer list 1
        local time`I' :display %5.2f round(`r(t1)',.01)
        timer clear 1
    }
    post `post' (`block') (`timein') (`timeif')
    n di "{res}{ralign 10:`block'}{ralign 10:`timeif'}{ralign 10:`timein'}"
}
postclose `post'
set more `more'
use `postname', clear
lab var block "Record Block"
lab var timein "Load Time using IN"
lab var timeif "Load Time using IF"
tw line timein block || line timeif block
end
. intest using dss_data_06_07.dta , postname(intest.dta) numblocks(100)
-- IN -- -- IF --
Block Time Time
1 0.64 0.88
17278 0.47 0.77
34555 0.47 0.77
51832 0.47 0.78
69109 0.45 0.78
86386 0.45 0.78
103663 0.47 0.78
120940 0.47 0.77
This adofile will run an -if- versus -in- simulation and graph the
results. From my findings I can confirm a speed advantage of about
50% using -in- on a dataset with obs:1,727,673 vars:28 size:266,061,642.
However, things get murkier. Run a simulation, then max out Stata's
memory setting with as much memory as the system will give you and run
the simulation again. When you do this, you eliminate the system's
ability to cache the file. Ordinarily, subject to filesize and
available memory, Stata may be reading the file from cache. If this
is the case, one will see an advantage to using -in-. However, if the
caching advantage is eliminated by increasing Stata memory, my
simulations show the speed reduction using -in- is negated. I also
tested this on large network databases and was unable to demonstrate
any advantage to -in-.
So back to Roger's initial question. It would appear that for
cacheable filesizes and large numbers of bygroups a strategy using
-in- might be feasible. There is an overhead penalty of setting up
the bygroups to make them selectable using -in- involving sorts and
the like. For a small number of bygroups the speed advantages might
be lost, but for many levels and a large number of iterations there
would be an advantage.
DC Elliott
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Sur la notion de l’ordre dans la théorie des ensembles. Fundamenta Mathematicae 2, 161–171
- The Bulletin of Symbolic Logic , 1996
"... This article is dedicated to Professor Burton Dreben on his coming of age. I owe him particular thanks for his careful reading and numerous suggestions for improvement. My thanks go also to Jose
Ruiz and the referee for their helpful comments. Parts of this account were given at the 1995 summer meet ..."
Cited by 8 (2 self)
Add to MetaCart
This article is dedicated to Professor Burton Dreben on his coming of age. I owe him particular thanks for his careful reading and numerous suggestions for improvement. My thanks go also to Jose Ruiz
and the referee for their helpful comments. Parts of this account were given at the 1995 summer meeting of the Association for Symbolic Logic at Haifa, in the Massachusetts Institute of Technology
logic seminar, and to the Paris Logic Group. The author would like to express his thanks to the various organizers, as well as his gratitude to the Hebrew University of Jerusalem for its hospitality
during the preparation of this article in the autumn of 1995.
- Bull. Symbolic Logic , 1997
"... this paper, the seminal results of set theory are woven together in terms of a unifying mathematical motif, one whose transmutations serve to illuminate the historical development of the
subject. The motif is foreshadowed in Cantor's diagonal proof, and emerges in the interstices of the inclusion vs ..."
Cited by 5 (1 self)
Add to MetaCart
this paper, the seminal results of set theory are woven together in terms of a unifying mathematical motif, one whose transmutations serve to illuminate the historical development of the subject. The
motif is foreshadowed in Cantor's diagonal proof, and emerges in the interstices of the inclusion vs. membership distinction, a distinction only clarified at the turn of this century, remarkable
though this may seem. Russell runs with this distinction, but is quickly caught on the horns of his well-known paradox, an early expression of our motif. The motif becomes fully manifest through the
study of functions f :
, 2008
"... Schumm, Timothy Smiley and Matthias Wille. Comments by two anonymous referees have also led to significant improvements. The aim here is to describe how to complete the constructive logicist
program, in the author’s book Anti-Realism and Logic, of deriving all the Peano-Dedekind postulates for arith ..."
Cited by 2 (2 self)
Add to MetaCart
Schumm, Timothy Smiley and Matthias Wille. Comments by two anonymous referees have also led to significant improvements. The aim here is to describe how to complete the constructive logicist program,
in the author’s book Anti-Realism and Logic, of deriving all the Peano-Dedekind postulates for arithmetic within a theory of natural numbers that also accounts for their applicability in counting
finite collections of objects. The axioms still to be derived are those for addition and multiplication. Frege did not derive them in a fully explicit, conceptually illuminating way. Nor has any
neo-Fregean done so. These outstanding axioms need to be derived in a way fully in keeping with the spirit and the letter of Frege’s logicism and his doctrine of definition. To that end this study
develops a logic, in the Gentzen-Prawitz style of natural deduction, for the operation of orderly pairing. The logic is an extension of free first-order logic with identity. Orderly pairing is
treated as a primitive. No notion of set is presupposed, nor any set-theoretic notion of membership. The formation of ordered pairs, and the two projection operations yielding their left and right
coordinates, form a coeval family of logical notions. The challenge is to furnish them with introduction and elimination rules that capture their exact meanings, and no more. Orderly pairing as a
logical primitive is then used in order to introduce addition and multiplication in a conceptually satisfying way within a constructive logicist theory of the natural numbers. Because of its
reliance, throughout, on sense-constituting rules of natural deduction, the completed account can be described as ‘natural logicism’. 2 1 Introduction: historical
"... Abstract. We discuss the work of Paul Bernays in set theory, mainly his axiomatization and his use of classes but also his higher-order reflection principles. Paul Isaak Bernays (1888–1977) is
an important figure in the development of mathematical logic, being the main bridge between Hilbert and Göd ..."
Add to MetaCart
Abstract. We discuss the work of Paul Bernays in set theory, mainly his axiomatization and his use of classes but also his higher-order reflection principles. Paul Isaak Bernays (1888–1977) is an
important figure in the development of mathematical logic, being the main bridge between Hilbert and Gödel in the intermediate generation and making contributions in proof theory, set theory, and the
philosophy of mathematics. Bernays is best known for the two-volume 1934,1939 Grundlagen der Mathematik [39, 40], written solely by him though Hilbert was retained as first author. Going into many
reprintings and an eventual second edition thirty years later, this monumental work provided a magisterial exposition of the work of the Hilbert school in the formalization of first-order logic and
in proof theory and the work of Gödel on incompleteness and its surround, including the first complete proof of the Second Incompleteness Theorem. 1 Recent re-evaluation of Bernays ’ role actually
places him at the center of the development of mathematical logic and Hilbert’s program. 2 But starting in his forties, Bernays did his most individuated, distinctive mathematical work in set theory,
providing a timely axiomatization and later applying higher-order reflection principles, and produced a stream of
, 2006
"... By “alternative set theories ” we mean systems of set theory differing significantly from the dominant ZF (Zermelo-Frankel set theory) and its close relatives (though we will review these
systems in the article). Among the systems we will review are typed theories of sets, Zermelo set theory and its ..."
Add to MetaCart
By “alternative set theories ” we mean systems of set theory differing significantly from the dominant ZF (Zermelo-Frankel set theory) and its close relatives (though we will review these systems in
the article). Among the systems we will review are typed theories of sets, Zermelo set theory and its variations, New Foundations and related systems, positive set theories, and constructive set
theories. An interest in the range of alternative set theories does not presuppose an interest in replacing the dominant set theory with one of the alternatives; acquainting ourselves with
foundations of mathematics formulated in terms of an alternative system can be instructive as showing us what any set theory (including the usual one) is supposed to do for us. The study of
alternative set theories can dispel a facile identification of "set theory" with "Zermelo-Fraenkel set theory"; they are not the same thing. Contents: 1 Why set theory? 1.1 The Dedekind construction of the reals. 1.2 The Frege-Russell definition of the natural numbers.
"... In this note we define a machine that generates nests. The basic relations commonly attributed to linguistic expressions in configurational syntactic models as well as the device of chains
postulated in current transformational grammar to represent distance relations can be naturally derived from th ..."
Add to MetaCart
In this note we define a machine that generates nests. The basic relations commonly attributed to linguistic expressions in configurational syntactic models as well as the device of chains postulated
in current transformational grammar to represent distance relations can be naturally derived from the assumption that the combinatorial syntactic procedure is a nesting machine. Accordingly, the core
of the transformational generative syntactic theory of language can be solidly constructed on the basis of nests, in the same terms as the general theory of order, an important methodological step
that provides a rigorization of Chomsky’s minimalist intuition that the simplest way to generate hierarchically organized linguistic expressions is by postulating a combinatorial operation called
Merge, which can be internal or external. Importantly, there is reason to think that nests are a useful representative tool in other domains besides language where either some recursive algorithm or
evolutionary process is at work, which suggests the unifying force of the mathematical abstraction this note is based on.
"... This article centers around two questions: What is the relation between movement and structure sharing, and how can complex syntactic structures be linearized? It is shown that regular movement
involves internal remerge, and sharing or ‘sideward movement ’ external remerge. Without ad hoc restrictio ..."
Add to MetaCart
This article centers around two questions: What is the relation between movement and structure sharing, and how can complex syntactic structures be linearized? It is shown that regular movement
involves internal remerge, and sharing or ‘sideward movement ’ external remerge. Without ad hoc restrictions on the input, both options follow from Merge. They can be represented in terms of
multidominance. Although more structural freedom ensues than standardly thought, the grammar is not completely unconstrained: Arguably, proliferation of roots is prohibited. Furthermore, it is
explained why external remerge has somewhat different consequences than internal remerge. For instance, apparent non-local behavior is attested. At the PF interface, the linearization of structures
involving remerge is non-trivial. A central problem is identified, apart from the general issue why remerged material is only pronounced once: There are seemingly contradictory linearization demands
for internal and external remerge. This can be resolved by taking into account the different structural configurations. It is argued that the linearization is a PF procedure involving a recursive
structure scanning algorithm that makes use of the inherent asymmetry between sister nodes imposed by the operation of Merge. Keywords: linearization; movement; multidominance; PF interface; (re-)
"... Abstract: It is argued that there are at least three distinct kinds of meaning that have wide currency across many different kinds of language use. The first kind consists of formal definitions
of terms in mathematics and science. These definitions are usually clearly distinguished, as such, in the ..."
Add to MetaCart
Abstract: It is argued that there are at least three distinct kinds of meaning that have wide currency across many different kinds of language use. The first kind consists of formal definitions of
terms in mathematics and science. These definitions are usually clearly distinguished, as such, in the discourse context in which they occur. The second kind consists of dictionary definitions,
familiar to all of us. The third kind, that of associative meanings, is not as widely recognized as the first two, but associative meanings are at the center of our cognitive and emotional
experience. Baldly stated, the thesis defended is that associations provide the computational method of computing meaning as we speak, listen, read or write about'iour thoughts and feelings. This
claim is supported by a variety of research in psychology and neuroscience. For much of the use of this third kind of meaning, the familiar analytic-synthetic philosophical distinction is artificial
and awkward. 1. Meaning given by formal definition I first consider definitions formalized within a theory in the ordinary
"... Abstract. This article centers around two questions: (i) what is the relation between movement and structure sharing?, and (ii) how can complex syntactic structures be linearized? It is shown
that regular movement involves internal remerge, and sharing or ‘sideward movement ’ external remerge. Witho ..."
Add to MetaCart
Abstract. This article centers around two questions: (i) what is the relation between movement and structure sharing?, and (ii) how can complex syntactic structures be linearized? It is shown that
regular movement involves internal remerge, and sharing or ‘sideward movement ’ external remerge. Without ad hoc restrictions on the input, both options follow from Merge. They can be represented in
terms of multidominance. Although more structural freedom ensues than standardly thought, the grammar is not completely unconstrained: arguably, proliferation of roots is prohibited. Furthermore, it
is explained why external remerge has somewhat different consequences than internal remerge. For instance, apparent non-local behavior is attested. At the PF interface, the linearization of
structures involving remerge is non-trivial. A central problem is identified, apart from the general issue why remerged material is only pronounced once: there are seemingly contradictory
linearization demands for internal and external remerge. This can be resolved by taking into account the different structural configurations. It is argued that the linearization is a PF procedure
involving a recursive structure scanning algorithm that makes use of the inherent asymmetry between sister nodes imposed by the operation of Merge.
Sterling, VA Algebra Tutor
Find a Sterling, VA Algebra Tutor
...I graduated in 2003. I have been working as a systems engineer/project manager since 2005 in an engineering company in Beltsville, MD that contracts with NASA and NOAA in the field of
satellite aided search and rescue. I have developed and integrated a digital receiver currently used for search and rescue operations by NOAA.
9 Subjects: including algebra 1, algebra 2, French, calculus
I am a current student at George Mason University studying Biology which allows me to connect to other students struggling with certain subjects. I tutor students in reading, chemistry, anatomy,
and math on a high school level and lower. I hope to help students understand the subject they are working with by repetition, memorization, and individualized instruction.
9 Subjects: including algebra 2, algebra 1, reading, English
I recently graduated with a master's degree in chemistry, all the while tutoring extensively in math and science courses throughout my studies. I am well versed in efficient studying techniques,
and am confident that I will be able to make the most use of both your time and mine! I have taken the ...
17 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I have experience working with children ages 10 and up, so I can work with those who are younger or those who are older and need assistance with more advanced coursework. I do work full time,
but would be available during evening hours and sometimes on the weekends on a case by case basis. I'd ...
25 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I have helped children as young as 6 with both reading comprehension and mathematical concepts. Young learners are more eager to learn new concepts but often have not learned logic or working
with multiple steps to answer problems. I like to help this age group prepare for what they will study in high school.
31 Subjects: including algebra 1, algebra 2, chemistry, reading
Related Sterling, VA Tutors
Sterling, VA Accounting Tutors
Sterling, VA ACT Tutors
Sterling, VA Algebra Tutors
Sterling, VA Algebra 2 Tutors
Sterling, VA Calculus Tutors
Sterling, VA Geometry Tutors
Sterling, VA Math Tutors
Sterling, VA Prealgebra Tutors
Sterling, VA Precalculus Tutors
Sterling, VA SAT Tutors
Sterling, VA SAT Math Tutors
Sterling, VA Science Tutors
Sterling, VA Statistics Tutors
Sterling, VA Trigonometry Tutors
Nearby Cities With algebra Tutor
Ashburn, VA algebra Tutors
Centreville, VA algebra Tutors
Chantilly algebra Tutors
Dulles algebra Tutors
Fairfax, VA algebra Tutors
Falls Church algebra Tutors
Gaithersburg algebra Tutors
Germantown, MD algebra Tutors
Herndon, VA algebra Tutors
Mc Lean, VA algebra Tutors
Montgomery Village, MD algebra Tutors
Potomac Falls, VA algebra Tutors
Reston algebra Tutors
Rockville, MD algebra Tutors
Vienna, VA algebra Tutors
Global Defensive Alliances in Graphs
A defensive alliance in a graph $G = (V,E)$ is a set of vertices $S \subseteq V$ satisfying the condition that for every vertex $v \in S$, the number of neighbors $v$ has in $S$ plus one (counting
$v$) is at least as large as the number of neighbors it has in $V-S$. Because of such an alliance, the vertices in $S$, agreeing to mutually support each other, have the strength of numbers to be
able to defend themselves from the vertices in $V-S$. A defensive alliance $S$ is called global if it affects every vertex in $V-S$, that is, every vertex in $V-S$ is adjacent to at least one member
of the alliance $S$. Note that a global defensive alliance is a dominating set. We study global defensive alliances in graphs.
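To make the definition concrete, here is a small illustrative sketch in C (not taken from the paper; the graph and the set $S$ below are made up) that checks the two conditions directly on an adjacency matrix: the defensive condition for every vertex inside $S$, and the domination ("global") condition for every vertex outside $S$.

    #include <stdbool.h>
    #include <stdio.h>

    #define NV 5   /* number of vertices in the toy example */

    /* Check whether S is a global defensive alliance in the graph given by the
     * adjacency matrix adj: every v in S has at least as many defenders
     * (neighbours in S, plus v itself) as attackers (neighbours outside S),
     * and every vertex outside S has at least one neighbour in S.            */
    static bool is_global_defensive_alliance(const bool adj[NV][NV], const bool inS[NV])
    {
        for (int v = 0; v < NV; v++) {
            if (inS[v]) {
                int in_count = 0, out_count = 0;
                for (int u = 0; u < NV; u++) {
                    if (!adj[v][u]) continue;
                    if (inS[u]) in_count++; else out_count++;
                }
                if (in_count + 1 < out_count) return false;   /* defensive condition fails */
            } else {
                bool dominated = false;
                for (int u = 0; u < NV && !dominated; u++)
                    dominated = adj[v][u] && inS[u];
                if (!dominated) return false;                  /* global (domination) condition fails */
            }
        }
        return true;
    }

    int main(void)
    {
        /* 5-cycle 0-1-2-3-4-0; S = {0,1,2} is a global defensive alliance. */
        bool adj[NV][NV] = {{false}};
        for (int i = 0; i < NV; i++) {
            adj[i][(i + 1) % NV] = true;
            adj[(i + 1) % NV][i] = true;
        }
        bool S[NV] = {true, true, true, false, false};
        printf("%s\n", is_global_defensive_alliance(adj, S) ? "alliance" : "not an alliance");
        return 0;
    }

For the 5-cycle in the example, every vertex of $S=\{0,1,2\}$ has at least one neighbour inside $S$ and both outside vertices have a neighbour in $S$, so the check succeeds.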
Here's the question you clicked on:
if i match something with its inverse does that mean im matching it with its opposite
Illuminated UTM, 30" by 44", 1995
John Sloan, McSorley's Bar, 1912
Oil on canvas
26 x 32 in. (66 x 81.3 cm)
Detroit Institute of Arts
Manchester Illuminated UTM, 1998
What is a Universal Turing Machine?
The gating logic for circuit boards in all general computers descends from a logical procedure known as a Universal Turing Machine (UTM) . The logic underlying this algorithm was outlined by Alan
Turing (1912-54) in 1936 and published in 1937 in the Proceedings of the London Mathematical Society. His paper, “On computable numbers”, planted the seminal idea, the meme, for all general
As an artist, working with algorithmic art, I became aware of the beauty of the Universal Turing Machine procedure when I came upon a binary version presented by Roger Penrose in The Emperor's New
Mind (Chapter II). The code for a UTM, the meta-algorithm of algorithms, seized my imagination and would not let go. To me it symbolized a historical turning point in the human ability to manage
extensive rational procedure. Many are not aware of the time when "computers" referred to humans who did the computing. Businesses required teams of "computers", namely human workers, to do laborious
computation that is now done with machines that are, in essence, Universal Turing Machines. Those who recognize the power of algorithmic procedure implemented with computers will appreciate the
beauty and power of Alan Turing's contribution and recognize its special place in the history of ideas.
Some years ago I learned that McSorley's Bar was one of the favored places for the human "computers" who did the book-keeping work in lower Manhattan's Wall Street sector. This was, of course, well
before the time of Alan Turing and the information age revolution. McSorley's had a sawdust floor and was restricted to men only. This restriction was lifted during the 1960's. In the 19th and early
20th Centuries only men were employed to do the manual book keeping work on Wall Street. Many, who lived in upper Manhattan, would stop at McSorley's for an ale on the way home.
I recall visiting McSorley's in the 1960's as a tribute to John Sloan who was one of my favorite American Painters. And I admired this painting. From my perspective now, knowing that many of these
clients at the end of the working day were the "computers" of that time, I view this work with a special appreciation. Comparing these human computers to a hardwired Universal Turing Machine makes me
Rationale for this work.
For me, the binary code of a UTM algorithm, like a biblical text in medieval times, radiates an aura of authority even though it is difficult for most of us to comprehend. In the tradition of
medieval manuscript illumination I have written algorithms to illuminate the code for a Universal Turing Machine and to celebrate its impact on our culture. These illuminations are works of art and
not exercises in computer science. They are intended to celebrate the value and significance of the UTM concept in shaping cultural change in the late 20th century. Like medieval Latin that
transcended the vernacular this code speaks a universal tongue. To celebrate it more broadly I have also mounted several UTM versions, with documentation, on my web site as cyberspace illuminations.
One version is presented as a “Self Portrait” of the computer with which it is viewed. See: http://www.verostko.com/u.html.
Art & Algorithms.
Detail, illuminated UTM text.
Similar to composers of musical scores, as an algorist, I create “scores” for drawing. The engine for executing my drawing scores, in the most radical sense, is driven by the logic for a UTM. Whence,
with a certain wonder and awe, I treasure the UTM texts that I have illuminated to celebrate the treasure we have inherited from those giants who have preceded us.
Roman Verostko, Minneapolis, c.1999
See also my 1998 notes: The 'Cloud of Unknowing' revisited: Notes on a Universal Turing Machine (UTM) and the Undecidable
Some documentation and binary versions of UTM's are located on my web site at: http://www.verostko.com/turing-doc.html
For a collection of essays and further reference both general and technical see The Universal Turing Machine: A Half-Century Survey, Edited by Rolf Herken. Springer Verlag 1995, Wien, NY.
Roger Penrose, THE EMPEROR'S NEW MIND: concerning computers, minds and the laws of physics (Oxford University Press, 1989). Chapter II, "Algorithms and Turing Machines " discusses Turing machine
Monomials, Polynomials
Date: 03/02/2001 at 13:31:35
From: Chris Heller
Subject: Monomials
Can you explain monomials and polynomials to me?
Date: 03/02/2001 at 17:57:10
From: Doctor Jordi
Subject: Re: Monomials
Hello, Chris - thanks for writing to Dr. Math.
I am not sure exactly what you would like to know about monomials and
polynomials. There is not much to say about monomials, except that
they are the building blocks of polynomials. Polynomials, on the other
hand, are very interesting mathematical structures, and a whole lot of
things can be said about them. I will spare you for the moment saying
every interesting fact about polynomials that I can think of, since I
am imagining that for now you are only interested in knowing the
definitions regarding the two.
A monomial is a product of as many factors as you like, each raised to
a POSITIVE power. By definition, negative exponents are not allowed.
The following are examples of monomials:
x^2 (x squared)
3(x^26)*(y^(pi)) (3 times x to the 26th power times y to the pi power)
37*a*b*c*d* ... *z*alpha*beta*gamma* ... *omega*aleph*(a partridge in
a pear tree)
(The product of 37 times the variables represented by all the letters
of the English alphabet times the variables represented by all the
letters of the Greek alphabet times the first letter of the Hebrew
alphabet, times a partridge in a pear tree. It is still a monomial, no
matter how many numbers we are multiplying together. Don't worry too
much about this goofy example; I am just trying to point out that a
monomial is a very general concept.)
A polynomial is nothing more than the sum of two or more monomials.
The following are examples of polynomials:
x (yes, a monomial is also a polynomial)
x + y
ax^2 + bx + c (the very famous quadratic polynomial)
(x^2)*(y^2) + xy + x
x^34 + r^(3.535) + c
abc + rst + xyz^2 + abcdefghijklmnopqrstuvwxyz
Again, don't worry too much about that last goofy example. I'm sure
you will probably never encounter a polynomial anytime soon that uses
all the letters of the English alphabet.
Usually, however, when we say "polynomial," we are talking about a
very specific kind of polynomial that occurs very often. This is the
polynomial where there is only one variable, all the powers of this
variable are nonnegative integers, and each power of this variable may
be multiplied by a constant. The way mathematicians usually write this
polynomial is
A_n * x^n + A_(n-1) * x^(n-1) + ... + A_2 * x^2 + A_1 * x + A_0
Where the A_i symbols (A_i is intended to represent A with a subscript
i, the letter A indexed by i) represent constants (i.e. numbers like
0, 1, -3, 7/4, 34.334 and pi) and n is a positive integer. The famous
quadratic polynomial I mentioned above in my examples is a special
case of these very common polynomials when n = 2.
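As a side note (this is a supplementary sketch, not part of the original answer): if you ever need to evaluate this general form on a computer, the standard trick is Horner's rule, which repeatedly factors out x so that only n multiplications are needed. A minimal sketch in C, with a made-up quadratic as the test case:

    #include <stdio.h>

    /* Evaluate A[n]*x^n + ... + A[1]*x + A[0] by Horner's rule, i.e. by
     * rewriting it as (...((A[n]*x + A[n-1])*x + A[n-2])*x + ...) + A[0]. */
    static double poly_eval(const double A[], int degree, double x)
    {
        double value = A[degree];
        for (int k = degree - 1; k >= 0; k--)
            value = value * x + A[k];
        return value;
    }

    int main(void)
    {
        /* The quadratic 1*x^2 - 3*x + 2 evaluated at x = 5: 25 - 15 + 2 = 12. */
        double A[] = {2.0, -3.0, 1.0};   /* A[0] = constant term, A[2] = leading coefficient */
        printf("%g\n", poly_eval(A, 2, 5.0));
        return 0;
    }

Writing the quadratic as (1*x - 3)*x + 2 is exactly the regrouping the loop performs.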
In these polynomials, by the way, n is called the "degree" of the
polynomial. Thus, a quadratic polynomial has degree 2. Polynomials of
degree 3 are called "cubic polynomials," of degree 1 are called
"linear expressions" (they are technically also polynomials, but
nobody calls them that) and of degree 0 (i.e., there are no variables,
only the A_0) are called "constants." Yes, constants are also a
special case of polynomials. In fact, a constant by itself is also a
monomial, but we usually never call them that, in order to avoid
Summarizing, a monomial is a single mathematical expression that
contains only multiplication and exponentiation. A polynomial is a sum
of monomials, so you can think of it as a mathematical expression that
only contains the operations of addition, multiplication, and exponentiation.
I hope this explanation helped. Please write back if you have more
questions, or if you would like to talk about this some more.
- Doctor Jordi, The Math Forum
Natural Logarithm Regression Equation
August 10th 2009, 12:52 AM #1
Jul 2009
Natural Logarithm Regression Equation
The table gives Sweden's nuclear power generation data in billions of kilowatt-hours. Let x = 5 represent 1980, x = 10 represent 1985, and so on.
Energy produced |25.3|55.8|65.2|66.5|
(a) Find a natural logarithm regression equation for the data.
(b) Predict when Sweden's nuclear power generation will reach 85 billion kilowatt-hours.
Ok, so in my calculator i have (I think) found the ln regression equation, which is Y= a+blnx
a= -20.9
b= 30.8
r^2= .93
r= .97
So the equation should be -20.9 + 30.8lnx correct?
and to predict for (b), I need to make a graph of the data, but I am unsure on how to do that, could someone help please?
Thanks a bunch!
The table gives Sweden's nuclear power generation data in billions of kilowatt-hours. Let x = 5 represent 1980, x = 10 represent 1985, and so on.
Energy produced |25.3|55.8|65.2|66.5|
(a) Find a natural logarithm regression equation for the data.
(b) Predict when Sweden's nuclear power generation will reach 85 billion kilowatt-hours.
Ok, so in my calculator i have (I think) found the ln regression equation, which is Y= a+blnx
a= -20.9
b= 30.8
r^2= .93
r= .97
So the equation should be -20.9 + 30.8lnx correct?
and to predict for (b), I need to make a graph of the data, but I am unsure on how to do that, could someone help please?
Thanks a bunch!
see the following thread ...
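A note on part (b), in case it helps: no graph is strictly needed, since you can invert the fitted equation directly. Using the rounded estimates above ($a \approx -20.9$, $b \approx 30.8$, so the numbers are only approximate), setting $85 = -20.9 + 30.8\ln x$ gives $\ln x = \frac{105.9}{30.8} \approx 3.44$, so $x = e^{3.44} \approx 31$. Since $x = 5$ corresponds to 1980 (i.e. the year is $1975 + x$), this predicts roughly the year 2006. Plotting $y = -20.9 + 30.8\ln x$ against the data points is still a good visual check of the fit.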
August 10th 2009, 08:03 AM #2
Determining the satellite's altitude above the surface of the Earth
1. The problem statement, all variables and given/known data
A satellite moves in a circular orbit around the Earth at a speed of 5.1km/s.
Determine the satellite's altitude above the surface of the Earth. Assume the Earth is a homogeneous sphere of radius Rearth= 6370km, and mass Mearth=5.98x10^24 kg. You will need G=6.67259x10^(-11) Nm^
2/kg^2. Answer in units of km
2. Relevant equations
v^2=(G*Mass of earth)/(radius)
3. The attempt at a solution
(5100 km/s)^2 = (6.67259x10^(-11))(5.98x10^(24))/r
r= 15340985.32 - 6370000= 8970985.321m = 8970.985321km
i think its wrong, but i dont know where i made the mistake..
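For what it's worth, the arithmetic itself checks out; the only slip is the unit label in the first line ($5.1\ \mathrm{km/s} = 5100\ \mathrm{m/s}$, not $5100\ \mathrm{km/s}$). With SI units throughout: $r = \frac{GM_{earth}}{v^2} = \frac{(6.67259\times 10^{-11})(5.98\times 10^{24})}{(5100)^2} \approx 1.534\times 10^{7}\ \mathrm{m} \approx 15341\ \mathrm{km}$, so the altitude is $r - R_{earth} \approx 15341 - 6370 \approx 8971\ \mathrm{km}$, which agrees with the value above. Note that $15340985\ \mathrm{m}$ is $r$, the orbital radius, and subtracting $6370\ \mathrm{km}$ is what converts it to altitude.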
Re: Frequency Content (Paul Boersma )
Subject: Re: Frequency Content
From: Paul Boersma <paul.boersma(at)HUM.UVA.NL>
Date: Wed, 31 Jul 2002 17:32:17 +0200
Dear Marius,
very compact C code for Fourier transforms is available in a book called Numerical Recipes by Press et al., which is widely available. Copy your 512 chars to an array of 512 floats, then use the function "realft" to convert this inline to a spectral representation (you also have to type in the function "four1").
As for the result, data[1]*data[1] is the direct-current power (f = 0), and data[2]*data[2] is the power in the highest frequency, which is half of the sample rate.
The first genuine frequency power (f = sample rate divided by 512) is data[3]*data[3]+data[4]*data[4], the second frequency power (f = 2 * samplerate/512) is data[5]*data[5]+data[6]*data[6], and so on, up to data[511]*data[511]+data[512]*data[512], which is the power at f = 255 * samplerate/512.
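(As an illustrative aside, not from Numerical Recipes and not using its calling convention: the following self-contained C sketch computes the same per-bin powers by a direct, slow DFT of the 512 samples. It can serve as a cross-check of the fast routine's output, up to the usual unnormalised-transform convention; the test signal below is made up.)

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N 512   /* number of samples, as in the question */

    /* Power at frequency bin k (k = 0 .. N/2, i.e. f = k * samplerate / N),
     * computed by a direct, unnormalised DFT of the real samples x[0..N-1]. */
    static double bin_power(const double x[N], int k)
    {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double phase = 2.0 * M_PI * (double)k * n / N;
            re += x[n] * cos(phase);
            im -= x[n] * sin(phase);
        }
        return re * re + im * im;
    }

    int main(void)
    {
        double x[N];
        /* Hypothetical input: a test tone sitting exactly in bin 3.
         * In practice, copy your 512 chars into this array instead.  */
        for (int n = 0; n < N; n++)
            x[n] = cos(2.0 * M_PI * 3.0 * n / N);

        for (int k = 0; k <= N / 2; k++) {
            double p = bin_power(x, k);
            if (p > 1e-6)
                printf("bin %d (f = %d*samplerate/%d): power %g\n", k, k, N, p);
        }
        return 0;   /* compile with the math library, e.g. cc dftpower.c -lm */
    }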
Best wishes,
Paul Boersma
Institute of Phonetic Sciences, University of Amsterdam
Herengracht 338, 1016CG Amsterdam, The Netherlands
phone +31-20-5252385
This message came from the mail archive
maintained by: DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University
Probabilistic Models in the Study of
Probabilistic Models in the Study of Language
I'm in the process of writing a textbook on the topic of using probabilistic models in scientific work on language ranging from experimental data analysis to corpus work to cognitive modeling. The
intended audience is graduate students in linguistics, psychology, cognitive science, and computer science who are interested in using probabilistic models to study language. Feedback (both comments
on existing drafts, and expressed desires for additional material to include!) is more than welcome -- send it to rlevy@ucsd.edu.
Note that if you access these chapters repeatedly, you may need to clear the cache of your web browser to ensure that you're getting the latest version.
A current (partial) draft of the complete text is available here.
Here are drafts of those individual chapters that are already available:
1. Introduction
2. Univariate Probability, with R code
3. Parameter Estimation, with R code
4. Interlude chapter (contents TBD)
5. Hierarchical Models (a.k.a. multi-level, mixed-effects models), with R code
6. Latent-Variable Models (partial draft), with R code
7. Nonparametric Models
8. Probabilistic Grammars
1. A brief introduction to sampling techniques
Last modified: Wed Oct 3 12:03:18 PDT 2012
|
{"url":"http://idiom.ucsd.edu/~rlevy/pmsl_textbook/text.html","timestamp":"2014-04-20T15:52:33Z","content_type":null,"content_length":"3076","record_id":"<urn:uuid:0d1cde92-7992-4cb3-91c1-8bda566aeff7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A New-Keynesian model of the yield curve with learning dynamics: A Bayesian evaluation
Dewachter, Hans and Iania, Leonardo and Lyrio, Marco (2011): A New-Keynesian model of the yield curve with learning dynamics: A Bayesian evaluation.
Download (489Kb) | Preview
We estimate a New-Keynesian macro-finance model of the yield curve incorporating learning by private agents with respect to the long-run expectation of inflation and the equilibrium real interest
rate. A preliminary analysis shows that some liquidity premia, expressed as some degree of mispricing relative to no-arbitrage restrictions, and time variation in the prices of risk are important
features of the data. These features are, therefore, included in our learning model. The model is estimated on U.S. data using Bayesian techniques. The learning model succeeds in explaining the yield
curve movements in terms of macroeconomic shocks. The results also show that the introduction of a learning dynamics is not sufficient to explain the rejection of the extended expectations
hypothesis. The learning mechanism, however, reveals some interesting points. We observe an important difference between the estimated inflation target of the central bank and the perceived long-run
inflation expectation of private agents, implying the latter were weakly anchored. This is especially the case for the period from mid-1970s to mid-1990s. The learning model also allows a new
interpretation of the standard level, slope, and curvature factors based on macroeconomic variables. In line with standard macro-finance models, the slope and curvature factors are mainly driven by
exogenous monetary policy shocks. Most of the variation in the level factor, however, is due to shocks to the output-neutral real rate, in contrast to the mentioned literature which attributes most
of its variation to long-run inflation expectations.
Item Type: MPRA Paper
Original A New-Keynesian model of the yield curve with learning dynamics: A Bayesian evaluation
Language: English
Keywords: New-Keynesian model; Affine yield curve model; Learning; Bayesian estimation
E - Macroeconomics and Monetary Economics > E4 - Money and Interest Rates > E43 - Interest Rates: Determination, Term Structure, and Effects
Subjects: E - Macroeconomics and Monetary Economics > E5 - Monetary Policy, Central Banking, and the Supply of Money and Credit > E52 - Monetary Policy
E - Macroeconomics and Monetary Economics > E4 - Money and Interest Rates > E44 - Financial Markets and the Macroeconomy
Item ID: 34461
Depositing Marco Lyrio
Date 02. Nov 2011 14:15
Last 12. Feb 2013 21:10
URI: http://mpra.ub.uni-muenchen.de/id/eprint/34461
|
{"url":"http://mpra.ub.uni-muenchen.de/34461/","timestamp":"2014-04-18T00:32:13Z","content_type":null,"content_length":"34643","record_id":"<urn:uuid:5d105a9a-9e04-4053-8b6d-76d5810a8076>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to generate random numbers in Java?
October 6th, 2013, 08:52 PM
How to generate random numbers in Java?
I know that you have to import Random from the Java utilities and I know how to generate the number when you have certain parameters like generating a number between 5 and 15, but how can you do
it if the number has to be one of the following: 0, 0.25, 0.5, 1, 2, or 4. Is there a way to set the parameters to this? Any help is appreciated!! Thank you!!
October 6th, 2013, 09:25 PM
Re: How to generate random numbers in Java?
I have never done that with randoms. Perhaps you are complicating it more than it should be? I see a solution using an array and a random index bounded by the array's size.
October 7th, 2013, 01:31 AM
Re: How to generate random numbers in Java?
Put those numbers in an array and randomly choose them from their indices, 0 through 5.
|
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/32707-how-generate-random-numbers-java-printingthethread.html","timestamp":"2014-04-18T03:09:05Z","content_type":null,"content_length":"4596","record_id":"<urn:uuid:84dc47c3-dedb-4da5-89c4-81bbd90b69e8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computer Science Major and Minor
Major Requirements
These requirements are effective beginning in the Fall 2010 semester. Continuing students may choose to satisfy these requirements or those of any previous year they were a student at Butler.
All of the Following Courses
• MA 106 - Calculus & Analytic Geometry 1
The beginning calculus course for properly prepared students. Topics include differentiation, integration, elementary differential equations, and exponential, logarithmic, and trigonometric
functions. Applications are emphasized. The Analytic Reasoning core course is waived for students who successfully complete this course. Prerequisite: Placement, or C- in MA 102. (U)(5)
• MA 107 - Calculus & Analytic Geometry 2
Continuation of MA 106. Topics include methods of integration, improper integrals, infinite series, conic sections and polar coordinates. Prerequisite: MA 106. (U)(4)
• MA 215 - Linear Algebra
Systems of linear equations, matrices, determinants, vector spaces, linear transformations and the eigenvalue problem. Prerequisite: MA 107. (U)(3)
• CS 151 - Foundations of Computer Science
Introduction to mathematical problem solving, with emphasis on techniques for designing computer-based solutions. Concepts include problem-solving principles, logic, proof techniques, sets,
sequences, functions, relations, and inductive and recursive thinking. Prerequisites: MA 101 or 102 or equivalent. (U)(3)
• CS 252 - Foundations of Computing 2
As a continuation of CS151, concepts include mathematical logic, formal grammars, algebraic structures, finite state machines and automata, graph theory, and combinatorics. Prerequisite: CS151
(U) (3)
• CS 248 - Object-Oriented Programming and Data Structures
This course is an introduction to object-oriented programming using Java. Topics include algorithm analysis, recursion, the stack, queue, tree, and heap data structures, sorting algorithms, and
GUI programming. A brief survey of computer science is also included: history, software engineering, computer organization, operating systems, networks, programming languages, databases,
artificial intelligence, and theory. Prerequisites: CS 142 or equivalent and CS 151. (U)(5)
• CS 282
• CS 321
Principles of computer architecture are introduced from a layered point of view, beginning at the level of gates and digital logic, and progressing through micro-programming, the machine language
execution cycle, addressing modes, symbolic assembly language, and the fundamentals of operating systems. Advanced topics including pipelined and parallel architectures are also covered.
Corequisite: CS 248. (U) (3)
• CS 351 - Algorithms
A systematic study of data structures and algorithms with an introduction to theoretical computer science. Topics include lists, stacks, queues, trees, and graph structure, searching and sorting
algorithms, mathematical algorithms, time and space complexity, an introduction to the theory of NP-completeness, and an introduction to computability theory. Prerequisite: 248. (U)(3)
• CS 433 - Database Systems
An introduction to the theory, design and use of modern database management systems. Topics include the relational, entity-relationship, and object-oriented data models, query languages such as
SQL, file systems, concurrency and deadlock, reliability, security, and query optimization. Prerequisites: CS 248, CS 252, and CS 321. (U-G) (3)
• CS 452
A study of theoretical and practical paradigms of parallel algorithm design. Topics include model costs, lower bounds, architecture and topology, data-parallelism, synchronization, transactional
memory, message passing, and parallel design for sorting, graphs, string processing, and dynamic programming. (U)(3)
• CS 473 - Topics in Computer Science: Advanced User Interfaces
In-depth study of special topics not covered in regular courses. Prerequisite: permission of department. (U-G) (3)
• SE 473
In-depth study of special topics not covered in regular courses. Prerequisite: Permission of the department. (U-G)
• CS 485 - Computer Ethics
Ethical and social issues in computing with emphasis on professional responsibilities, risks and liabilities, and intellectual property. Prerequisite: CS 142 and sophomore standing. (U-G)(1)
• SE 361 - Object-Oriented Design
This course uses the Unified Modeling Language (UML) as a vehicle to introduce the basic principles of object-oriented methodology and design, covering classes, objects, data abstraction,
polymorphism, information hiding and relationships among classes such as inheritance, association, aggregation and composition. Specific design techniques are covered for object-oriented
programming languages such as Java and C++. The course also provides a first exposure to the software development lifecycle of object-oriented software applications. A small team design project
is required. Prerequisite: CS 248. (U)(3)
Theory Courses
• CS 441 - Organization of Programming Languages
Emphasizes the principles and programming paradigms that govern the design and implementation of contemporary programming languages. Includes the study of language syntax, processors,
representations, and paradigms. Prerequisites: CS 252, CS 321, and SE 361. (U-G) (3)
• CS 451
Basic theoretical principles of computer science that are embodied in formal languages, automata, computability and computational complexity. Includes regular expressions, context-free grammars,
Turing machines, Church's thesis, and unsolvability. Prerequisites: CS 252, CS 321 and CS 351. (U-G)(3)
• CS 455
Solutions of equations and systems, error analysis, numerical differentiation and integration, interpolation, least squares approximation, numerical solution of ordinary differential equations.
Prerequisites: MA 107 and CS 142 or equivalent. (U/G)(3)
Systems Course
• CS 431
Introduces the major concept areas of operating systems principles, including the study of process, storage, and processor management; performance issues; distributed systems; and protection and
security. Prerequisites: CS 248, CS 252, and CS 321. (U-G) (3)
• CS 435 - Computer Networks
An introduction to computer networks from a layered point of view beginning with the physical and data link layers, and progressing through the medium access layer, the network layer, the
transport layer, and the applications layer. Specific content includes Ethernet, TCP/IP, and the Web. Students will write client/server programs that communicate across a network. Prerequisite:
CS 321. (U-G) (3)
• SE 461 - Managing Software Development
Techniques, principles, and processes for developing large, complex software systems: Systems analysis and specification, modeling, design patterns, implementation, validation and verification,
quality assurance and project management. A team-based software project is required. Prerequisite: SE361. (U-G)(3)
• SE 462
Fundamental concepts, principles, techniques and tools for the maintenance and evolution of legacy software systems. Software maintenance and evolution process models, reengineering, reverse
engineering, and program comprehension tools. A modernization project is required. Prerequisite: SE361. (U-G)(3)
• SE 463 - Testing & Quality Assurance
Basic concepts, systematic techniques and tools involved in testing and QA of software systems. Some topics to be covered include black and white box testing techniques, object-oriented testing,
regression testing, system integration testing, planning and reporting of testing activities. Prereq: SE361
Total Credits: 38 computer science and 12 mathematics.
Minor Courses
These requirements are effective beginning in the Fall 2010 semester.
Both of the Following
• CS 151 - Foundations of Computer Science
Introduction to mathematical problem solving, with emphasis on techniques for designing computer-based solutions. Concepts include problem-solving principles, logic, proof techniques, sets,
sequences, functions, relations, and inductive and recursive thinking. Prerequisites: MA 101 or 102 or equivalent. (U)(3)
• CS 248 - Object-Oriented Programming and Data Structures
This course is an introduction to object-oriented programming using Java. Topics include algorithm analysis, recursion, the stack, queue, tree, and heap data structures, sorting algorithms, and
GUI programming. A brief survey of computer science is also included: history, software engineering, computer organization, operating systems, networks, programming languages, databases,
artificial intelligence, and theory. Prerequisites: CS 142 or equivalent and CS 151. (U)(5)
Twelve additional credit hours of CS or SE
Total credits: 20 computer science.
We often accept MA205, 206 in place of CS151, 252 for satisfying the major or minor requirements upon petition to the department head. Note that doing this, together with using MA341/CS451 or MA365/
CS455 as CS electives, makes the CS minor relatively easy for mathematics majors to obtain.
|
{"url":"http://www.butler.edu/computer-science/programs-courses/cs-major-and-minor/","timestamp":"2014-04-18T06:08:40Z","content_type":null,"content_length":"30763","record_id":"<urn:uuid:bc51be1c-5c96-49e5-b215-50a713a6ef24>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Centennial, CO Precalculus Tutor
Find a Centennial, CO Precalculus Tutor
Hi, I'm Leslie! I am a mathematics and writing tutor with three years of experience teaching and tutoring. I spent two years teaching mathematics and physics at a secondary school in rural
27 Subjects: including precalculus, reading, writing, geometry
...I have seen this as a very natural outgrowth of the academic support I have provided. Study skills are somewhat subject specific and I would plan to present guidelines based on the particular
academic area. I have taught Sunday School for more than 30 years.
43 Subjects: including precalculus, reading, chemistry, Spanish
...I appreciate your time and consideration. Best, KevinI received a Bachelor of Science degree from the University of Southern California and was on the Dean's List. I currently teach all levels
of math and science, ACT, and study skills at a learning center.
21 Subjects: including precalculus, chemistry, calculus, geometry
...Another descriptive aspect of statistics might be used to describe the relationship between a given sample and the population that the sample was drawn from. You might compare information
about the population mean and standard deviation with a given sample value, and use the combination to calcu...
18 Subjects: including precalculus, calculus, geometry, statistics
Hey! I grew up in the Bay Area and graduated from College Park High School in 2012. This fall, I will be in my sophomore year at the University of Denver, working on degrees in Biology,
International Studies, and Mathematics.
6 Subjects: including precalculus, calculus, algebra 2, SAT math
Related Centennial, CO Tutors
Centennial, CO Accounting Tutors
Centennial, CO ACT Tutors
Centennial, CO Algebra Tutors
Centennial, CO Algebra 2 Tutors
Centennial, CO Calculus Tutors
Centennial, CO Geometry Tutors
Centennial, CO Math Tutors
Centennial, CO Prealgebra Tutors
Centennial, CO Precalculus Tutors
Centennial, CO SAT Tutors
Centennial, CO SAT Math Tutors
Centennial, CO Science Tutors
Centennial, CO Statistics Tutors
Centennial, CO Trigonometry Tutors
Nearby Cities With precalculus Tutor
Arvada, CO precalculus Tutors
Aurora, CO precalculus Tutors
Cherry Hills Village, CO precalculus Tutors
Denver precalculus Tutors
Englewood, CO precalculus Tutors
Greenwood Village, CO precalculus Tutors
Highlands Ranch, CO precalculus Tutors
Lakewood, CO precalculus Tutors
Littleton, CO precalculus Tutors
Lone Tree, CO precalculus Tutors
Lonetree, CO precalculus Tutors
Parker, CO precalculus Tutors
Thornton, CO precalculus Tutors
Westminster, CO precalculus Tutors
Wheat Ridge precalculus Tutors
|
{"url":"http://www.purplemath.com/Centennial_CO_precalculus_tutors.php","timestamp":"2014-04-18T18:42:55Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:30711503-a9f7-45dd-945a-ff5fe25962a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: July 2002 [00070]
Re: Examples using Trig option
• To: mathgroup at smc.vnet.net
• Subject: [mg35274] Re: [mg35237] Examples using Trig option
• From: BobHanlon at aol.com
• Date: Fri, 5 Jul 2002 02:21:48 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
In a message dated 7/3/02 7:18:56 AM, ErsekTR at navair.navy.mil writes:
>The usage message for Trig says:
>Trig is an option for algebraic manipulation functions which specifies
>whether trigonometric functions should be treated as rational functions
>The built-in functions that have the Trig option are:
>Apart, ApartSquareFree, Cancel, Coefficient, CoefficientList, Collect,
>Denominator, Expand, ExpandAll, ExpandDenominator, ExpandNumerator,
>Exponent, Factor, FactorList, FactorSquareFree, FactorSquareFreeList,
>FactorTerms, FactorTermsList, FullSimplify, Numerator, PolynomialGCD,
>PolynomialLCM, PolynomialMod, Resultant, Simplify, Together.
>After searching all available documentation I can't find a single example where
>f[expr, Trig->True] would give a different result than
>f[expr, Trig->False], and I can't come up with one on my own. Can you give
>some examples. An example for each function above is not necessary.
expr = Sin[x + y] - Cos[x - y]
Apart[expr, Trig->#]& /@ {True, False}
{-Cos[y] (Cos[x]-Sin[x])+(Cos[x]-Sin[x]) Sin[y],-Cos[x-y]+Sin[x+y]}
Expand[expr, Trig->#]& /@ {True, False}
{-Cos[x] Cos[y]+Cos[y] Sin[x]+Cos[x] Sin[y]-Sin[x] Sin[y],-Cos[x-y]+Sin[x+y]}
Factor[expr, Trig->#]& /@ {True, False}
{-(Cos[x]-Sin[x]) (Cos[y]-Sin[y]),-Cos[x-y]+Sin[x+y]}
Bob Hanlon
Chantilly, VA USA
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2002/Jul/msg00070.html","timestamp":"2014-04-20T20:55:26Z","content_type":null,"content_length":"35448","record_id":"<urn:uuid:b6f5aff3-33da-48ae-ba3d-5ade83d619ad>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
APPENDIX E Probability and Reliability Analysis
Probability is a number between 0 and 1 that expresses a degree of uncertainty about whether an event, such as an accident, will occur. A logically impossible event is assigned the number 0, and a
logically certain event is assigned the number 1. The axioms of probability tell us how to combine various uncertainties.
Interpretations of Probability
There are at least four interpretations of probability:
1. classical (equally likely)
2. logical (the "necessarist" position)
3. relative frequency (objectivistic)
4. personalistic (subjectivistic)
The classical interpretation is based on the "principle of insufficient reason" and was advocated by the determinists Bernoulli, Laplace, De Moivre, and Bayes. This interpretation has limited
applicability and is now subsumed under the personalistic interpretation.
The logical interpretation was favored by logicians, such as Keynes, Reichenbach, and Carnap, and is currently out of vogue.
The relative frequency interpretation is used by many statisticians and is currently the most favored. This interpretation requires the conceptualization of an infinite collective and is not
applicable in one-of-a-kind situations.
The personalistic interpretation is more universal and incorporates engineering and other knowledge. This interpretation is popular in many applications, including risk analysis and safety analysis.
Axioms of Probability: Dependence and Independence
All the interpretations of probability have a common set of axioms that tell us how to combine probabilities of different events. But why should risk analysts be interested in such mathematical
details? Because one of the axioms pertains to the notion of dependence (and independence), a matter that is not carefully addressed by either the FAA or industry.
Consider two events ε1 and ε2. Then, the axioms are:
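The displayed equations are not reproduced in this copy; for reference, a standard statement of the combination rules being described is
$$P(\varepsilon_1 \cup \varepsilon_2) = P(\varepsilon_1) + P(\varepsilon_2) - P(\varepsilon_1 \cap \varepsilon_2), \qquad P(\varepsilon_1 \cap \varepsilon_2) = P(\varepsilon_1)\,P(\varepsilon_2 \mid \varepsilon_1),$$
and the two events are independent exactly when $P(\varepsilon_1 \cap \varepsilon_2) = P(\varepsilon_1)\,P(\varepsilon_2)$.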
Fault tree analysis is an engineering tool that, among other things, can help assess probabilities of the occurrence of undesirable events. The undesirable event is called the "top event."
The "and" and "or" gates of a fault tree correspond to the ''and" and the "or" functions in the axioms (or the calculus) of probability. At the very bottom of the tree are "basic events,'' which
usually correspond to equipment failures. Fault trees are similar to block diagrams of a system. Examples are illustrated in Figures E-1 through E-4.
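As a small added illustration (the figures themselves are not reproduced here): if two independent basic events have probabilities $p_1$ and $p_2$, the gates combine them as
$$P_{\text{AND}} = p_1 p_2, \qquad P_{\text{OR}} = p_1 + p_2 - p_1 p_2 = 1 - (1 - p_1)(1 - p_2).$$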
|
{"url":"http://www.nap.edu/openbook.php?record_id=6265&page=69","timestamp":"2014-04-16T14:26:15Z","content_type":null,"content_length":"45402","record_id":"<urn:uuid:2c33eda8-5da6-418c-a18e-062aedb37ba8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Speex-dev] question about spx_fft
Jean-Marc Valin jean-marc.valin at usherbrooke.ca
Sun Apr 15 01:41:49 PDT 2007
> Is the spx_fft function in fftwrap.c a standard fft function?
> void spx_fft(void *table, spx_word16_t *in, spx_word16_t *out)
> When I say standard, I mean the input "in" is 128 point short data for
> example and
> the output "out" is 128 short complex value which is stored in 256
> short array with real and
> image part. Looks like the function did some manipulation on the
> output from the kiss_fft. I am
> still reading the codes and am not completely understand the code yet.
> I am trying to replace
> the fft module with some hardware logic. I would greatly appreciate it
> if you could clarify my
> doubts.
spx_fft() computes a real-data FFT. For a size N, it takes N real values
as input and outputs N/2+1 real values and N/2-1 imaginary values.
The packing format it uses is:
r(0), r(1), i(1), r(2), i(2), ..., r(N/2-1), i(N/2-1), r(N/2)
(note that it starts with two real values and ends with one).
Also, unlike some floating-point conventions, the forward FFT normalises
by 1/N (to avoid overflows), while the inverse FFT doesn't do any
normalisation (because the output is assumed to have proper scaling).
As for the mangling in fftwrap.c, that was just to change the ordering
of kiss-fft to fit the one Speex wants. I removed it recently because I
directly modified kiss_fft to give me the results with the right ordering.
|
{"url":"http://lists.xiph.org/pipermail/speex-dev/2007-April/005481.html","timestamp":"2014-04-20T09:10:25Z","content_type":null,"content_length":"3949","record_id":"<urn:uuid:2f8d4b84-1946-4888-bc73-5cdccfdf66c6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
is this a quadratic or not?
May 25th 2008, 10:01 AM
is this a quadratic or not?
I'm doing this math course through correspondence and really the only indication they've given to determine a quadratic function is that the first difference numbers are not constant, while the
second difference numbers are constant (same #). Well I'm stuck on this question because it doesn't follow the above rule, plus aren't the points always at the same distance on both sides of the
parabola (mirrored)?
Question A says 'confirm' that it is quadratic though, so I'm wondering if it really is, so could someone explain WHY it is, if it is?
May 25th 2008, 10:14 AM
Using Excel, I get a quadratic of $y=-0.3167x^{2}+2.3286x+1.5024$
This has an R^2 of .9994. Which is pretty dang good.
Plug in x=7 to find the height at 7 seconds. I get 2.3
It hits the ground when y=0. That occurs when x=7.96 seconds.
May 25th 2008, 11:11 AM
Well, nowhere in the explanations or examples did they show how to form an equation from the chart or turn it into a graph... this is the first unit, so maybe I'm not there yet. Like I said in the
original post, it just covers the first and second differences. In all four examples given, the second differences were the same number... that's why I'm confused.
May 25th 2008, 11:52 AM
Well, nowhere in the explanations or examples did they show how to form an equation from the chart or turn it into a graph... this is the first unit, so maybe I'm not there yet. Like I said in the
original post, it just covers the first and second differences. In all four examples given, the second differences were the same number... that's why I'm confused.
I don't know much about this subject, but from what I understand the second derivative of a quadratic is constant, while the method of second differences is a numerical approximation of the derivative
(so there are bound to be variations). I think what you're looking for is a fairly constant second difference and a roughly linear relationship in the first differences.
Out of interest, have you been taught to use technology to solve these problems, or have you been taught to do them using a graphical method? I plotted a graph of the first differences to get an
approximation of the linear equation that relates them, then integrated, and I got an answer similar to what galactus posted.
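A short sketch of the difference check described above, in Python (added to this thread; the course's data table is not reproduced here, so the numbers are purely illustrative and were chosen to roughly follow the quadratic galactus fitted):

```python
import numpy as np

# Hypothetical (time, height) readings -- only for illustration, since the
# original table from the course is not reproduced in this thread.
t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
h = np.array([1.5, 3.5, 4.9, 5.6, 5.7, 5.2, 4.0])

first_diff = np.diff(h)        # roughly linear for quadratic data
second_diff = np.diff(h, n=2)  # roughly constant for quadratic data
print(first_diff)              # e.g. [ 2.   1.4  0.7  0.1 -0.5 -1.2]
print(second_diff)             # e.g. [-0.6 -0.7 -0.6 -0.6 -0.7]

# A least-squares quadratic fit (what Excel's order-2 trendline does):
a, b, c = np.polyfit(t, h, 2)
print(a, b, c)
```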
May 25th 2008, 11:59 AM
Go here and download the program and install it.
1) Click F4 to insert a point series, and insert all these points.
2) Right click on the left where it says "Series 1" and select "Insert Trendline".
3) Click on polynomial and set the order to be 2, then click okay.
|
{"url":"http://mathhelpforum.com/math-topics/39571-quadratic-not-print.html","timestamp":"2014-04-20T10:28:16Z","content_type":null,"content_length":"7400","record_id":"<urn:uuid:e76af8d0-e95a-4415-a603-e50373bea187>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the Oxidation number of Sulfur (S)?
Zero, because sulfur is an element.
The oxidation number of all elements in the elemental state is 0. So typically you need to know the compound in which the element is before you can calculate the oxidation number. S is typically
-2 in sulfides (FeS).
Preetha is right and made a fuller answer than mine. In compounds, sulfur has one of the greatest varieties of oxidation states, including some that lead to uncommon results (the peroxodisulfate ion, for instance), where you must draw the ion's structure to understand what happens. S is also -2 in another famous sulfide, H2S, but it can be +4, as in SO2, for instance.
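As a quick bookkeeping example (added here, not from the thread): in SO2 each oxygen is -2, so sulfur must be +4 for the molecule to sum to zero; in H2S each hydrogen is +1, so sulfur is -2.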
{"url":"http://openstudy.com/updates/510d86fae4b0d9aa3c477459","timestamp":"2014-04-21T02:22:12Z","content_type":null,"content_length":"32914","record_id":"<urn:uuid:626f6844-a6dd-4470-8acc-d0d4917c7888>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: statistics
Hi cassrl;
What have you done with this problem? What is the null hypothesis and the alternative one? Do you know how to start the problem? How to get z here? Is it a two-tailed test or a one-tailed one? Can I see a
little of your work?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=18196","timestamp":"2014-04-18T13:25:57Z","content_type":null,"content_length":"9693","record_id":"<urn:uuid:0ecf95ca-5d78-493f-9fbf-3af229a7c939>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
magnetism (physics) :: Induced and permanent atomic magnetic dipoles
Induced and permanent atomic magnetic dipoles
Whether a substance is paramagnetic or diamagnetic is determined primarily by the presence or absence of free magnetic dipole moments (i.e., those free to rotate) in its constituent atoms. When there
are no free moments, the magnetization is produced by currents of the electrons in their atomic orbits. The substance is then diamagnetic, with a negative susceptibility independent of both field
strength and temperature.
In matter with free magnetic dipole moments, the orientation of the moments is normally random and, as a result, the substance has no net magnetization. When a magnetic field is applied, the dipoles
are no longer completely randomly oriented; more dipoles point with the field than against the field. When this results in a net positive magnetization in the direction of the field, the substance
has a positive susceptibility and is classified as paramagnetic.
The forces opposing alignment of the dipoles with the external magnetic field are thermal in origin and thus weaker at low temperatures. The excess number of dipoles pointing with the field is
determined by (mB/kT), where mB represents the magnetic energy and kT the thermal energy. When the magnetic energy is small compared to the thermal energy, the excess number of dipoles pointing with
the field is proportional to the field and inversely proportional to the absolute temperature, corresponding to Curie’s law. When the value of (mB/kT) is large enough to align nearly all the dipoles
with the field, the magnetization approaches a saturation value.
There is a third category of matter in which intrinsic moments are not normally present but appear under the influence of an external magnetic field. The intrinsic moments of conduction electrons in
metals behave this way. One finds a small positive susceptibility independent of temperature comparable with the diamagnetic contribution, so that the overall susceptibility of a metal may be
positive or negative. The molar susceptibility of elements is shown in Figure 11.
In addition to the forces exerted on atomic dipoles by an external magnetic field, mutual forces exist between the dipoles. Such forces vary widely for different substances. Below a certain
transition temperature depending on the substance, they produce an ordered arrangement of the orientations of the atomic dipoles even in the absence of an external field. The mutual forces tend to
align neighbouring dipoles either parallel or antiparallel to one another. Parallel alignment of atomic dipoles throughout large volumes of the substance results in ferromagnetism, with a permanent
magnetization on a macroscopic scale. On the other hand, if equal numbers of atomic dipoles are aligned in opposite directions and the dipoles are of the same size, there is no permanent macroscopic
magnetization, and this is known as antiferromagnetism. If the atomic dipoles are of different magnitudes and those pointing in one direction are all different in size from those pointing in the
opposite direction, there exists permanent magnetization on a macroscopic scale in an effect known as ferrimagnetism. A simple schematic representation of these different possibilities is shown in
Figure 12.
In all cases, the material behaves as a paramagnet above the characteristic transition temperature; it acquires a macroscopic magnetic moment only when an external field is applied.
When an electron moving in an atomic orbit is in a magnetic field B, the force exerted on the electron produces a small change in the orbital motion; the electron orbit precesses about the direction
of B. As a result, each electron acquires an additional angular momentum that contributes to the magnetization of the sample. The susceptibility χ is given by
where Σ < r^2 > is the sum of the mean square radii of all electron orbits in each atom, e and m are the charge and mass of the electron, and N is the number of atoms per unit volume. The negative
sign of this susceptibility is a direct consequence of Lenz’s law (see above). When B is switched on, the change in motion of each orbit is equivalent to an induced circulating electric current in
such a direction that its own magnetic flux opposes the change in magnetic flux through the orbit; i.e., the induced magnetic moment is directed opposite to B.
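The displayed formula is not reproduced in this copy; the standard Langevin diamagnetism result being described, written here in SI units (an assumption), is
$$\chi = -\frac{\mu_0 N e^2}{6m} \sum \langle r^2 \rangle,$$
with $N$, $e$, $m$, and $\sum \langle r^2 \rangle$ as defined in the surrounding text.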
Since the magnetization M is proportional to the number N of atoms per unit volume, it is sometimes useful to give the susceptibility per mole, χ[mole]. For a kilogram mole (the molecular weight in
kilograms), the numerical value of the molar susceptibility is
For an atom, the mean value of Σ < r^2 > is about 10^−21 square metre and χ[mole] has values of 10^−9 to 10^−10; the atomic number Z equals the number of electrons in each atom. The quantity Σ < r^2
> for each atom, and therefore the diamagnetic susceptibility, is essentially independent of temperature. It is also not affected by the surroundings of the atom.
A different kind of diamagnetism occurs in superconductors. The conduction electrons are spread out over the entire metal, and so the induced magnetic moment is governed by the size of the
superconducting sample rather than by the size of the individual constituent atoms (a very large effective < r^2 >). The diamagnetism is so strong that the magnetic field is kept out of the superconductor.
|
{"url":"http://www.britannica.com/EBchecked/topic/357334/magnetism/71543/Induced-and-permanent-atomic-magnetic-dipoles","timestamp":"2014-04-16T16:27:13Z","content_type":null,"content_length":"104519","record_id":"<urn:uuid:303e3bf6-8290-42fa-a0f5-1f4909a61092>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-User] Unit testing of Bayesian estimator
Anne Archibald peridot.faceted@gmail....
Sun Nov 8 16:51:08 CST 2009
2009/11/8 <josef.pktd@gmail.com>:
> When I do a Monte Carlo for point estimates, I usually check bias,
> variance, mean squared error,
> and mean absolute and median absolute error (which is a more
> robust to outliers, e.g. because for some cases the estimator produces
> numerical nonsense because of non-convergence or other numerical
> problems). MSE captures better cases of biased estimators that are
> better in MSE sense.
I can certainly compute all these quantities from a collection of
Monte Carlo runs, but I don't have any idea what values would indicate
correctness, apart from "not too big".
> I ran your test, test_bayes.py for M = 50, 500 and 1000 adding "return
> in_interval_f"
> and inside = test_credible_interval()
> If my reading is correct inside should be 80% of M, and you are pretty close.
> (M=1000 is pretty slow on my notebook)
Yeah, that's the problem with using the world's simplest numerical
integration scheme.
>>>> inside
> 39
>>>> 39/50.
> 0.78000000000000003
>>>> inside
> 410
>>>> inside/500.
> 0.81999999999999995
>>>> inside/1000.
> 0.81499999999999995
> I haven't looked enough on the details yet, but I think this way you could
> test more quantiles of the distribution, to see whether the posterior
> distribution is roughly the same as the sampling distribution in the
> MonteCarlo.
I could test more quantiles, but I'm very distrustful of testing more
than one quantile per randomly-generated sample: they should be
covariant (if the 90% mark is too high, the 95% mark will almost
certainly be too high as well) and I don't know how to take that into
account. And running the test is currently so slow I'm inclined to
spend my CPU time on a stricter test of a single quantile. Though
unfortunately to increase the strictness I also need to improve the
sampling in phase and fraction.
> In each iteration of the Monte Carlo you get a full posterior distribution,
> after a large number of iterations you have a sampling distribution,
> and it should be possible to compare this distribution with the
> posterior distributions. I'm still not sure how.
I don't understand what you mean here. I do get a full posterior
distribution out of every simulation. But how would I combine these
different distributions, and what would the combined distribution
> two questions to your algorithm
> Isn't np.random.shuffle(r) redundant?
> I didn't see anywhere were the sequence of observation in r would matter.
It is technically redundant. But since the point of all this is that I
don't trust my code to be right, I want to make sure there's no way it
can "cheat" by taking advantage of the order. And in any case, the
slow part is my far-too-simple numerical integration scheme. I'm
pretty sure the phase integration, at least, could be done
> Why do you subtract mx in the loglikelihood function?
> mx = np.amax(lpdf)
> p = np.exp(lpdf - mx)/np.average(np.exp(lpdf-mx))
This is to avoid overflows. I could just use logsumexp/logaddexp, but
that's not yet in numpy on any of the machines I regularly use. It has
no effect on the value, since it's subtracted from top and bottom
both, but it ensures that the largest value exponentiated is exactly
>>>> I can even generate models and then data
>>>> sets that are drawn from the prior distribution, but what should I
>>>> expect from the code output on such a data set?
>>> If you ignore the Bayesian interpretation, then this is just a
>>> standard sampling problem, you draw prior parameters and observations,
>>> the rest is just finding the conditional and marginal probabilities. I
>>> think the posterior odds ratio should converge in a large Monte Carlo
>>> to the true one, and the significance levels should correspond to the
>>> one that has been set for the test (5%).
>>> (In simplest case of conjugate priors, you can just interpret the
>>> prior as a previous sample and you are back to a frequentist
>>> explanation.)
>> This sounds like what I was trying for - draw a model according to the
>> priors, then generate a data set according to the model. I then get
>> some numbers out: the simplest is a probability that the model was
>> pulsed, but I can also get a credible interval or an estimated CDF for
>> the model parameters. But I'm trying to figure out what test I should
>> apply to those values to see if they make sense.
>> For a credible interval, I suppose I could take (say) a 95% credible
>> interval, then 95 times out of a hundred the model parameters I used
>> to generate the data set should be in the credible interval. And I
>> should be able to use the binomial distribution to put limits on how
>> close to 95% I should get in M trials. This seems to work, but I'm not
>> sure I understand why. The credible region is obtained from a
>> probability distribution for the model parameters, but I am turning
>> things around and testing the distribution of credible regions.
> If you ignore the Bayesian belief interpretation, then it's just a
> problem of Probability Theory, and you are just checking the
> small and large sample behavior of an estimator and a test,
> whether it has a Bayesian origin or not.
Indeed. But with frequentist tests, I have a clear statement of what
they're telling me that I can test against: "If you feed this test
pure noise you'll get a result this high with probability p". I
haven't figured out how to turn the p-value returned by this test into
something I can test against.
>> In any case, that seems to work, so now I just need to figure out a
>> similar test for the probability of being pulsed.
> "probability of being pulsed"
> I'm not sure what test you have in mind.
> There are two interpretations:
> In your current example, fraction is the fraction of observations that
> are pulsed and fraction=0 is a zero probability event. So you cannot
> really test fraction==0 versus fraction >0.
> In the other interpretation you would have a prior probability (mass)
> that your star is a pulsar with fraction >0 or a non-pulsing unit
> with fraction=0.
This is what the code currently implements: I begin with a 50% chance
the signal is unpulsed and a 50% chance the signal is pulsed with some
fraction >= 0.
> The probabilities in both cases would be similar, but the interpretation
> of the test would be different, and differ between frequentists and
> Bayesians.
> Overall your results look almost too "nice", with 8000 observations
> you get a very narrow posterior in the plot.
If you supply a fairly high pulsed fraction, it's indeed easy to tell
that it's pulsed with 8000 photons; the difficulty comes when you're
looking for a 10% pulsed fraction; it's much harder than 800 photons
with a 100% pulsed fraction. If I were really interested in the
many-photons case I'd want to think about a prior that made more sense
for really small fractions. But I'm keeping things simple for now.
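A minimal sketch of the coverage check discussed in this thread (this is not Anne's test_bayes.py; the conjugate beta-binomial model below is just a stand-in chosen so the credible interval has a closed form):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
M, n, level = 500, 100, 0.80
hits = 0
for _ in range(M):
    p = rng.uniform()                  # parameter drawn from its (flat) prior
    k = rng.binomial(n, p)             # data simulated from that parameter
    # With a flat prior the posterior for p is Beta(k+1, n-k+1); take the
    # central 80% credible interval and check whether it covers the truth.
    lo, hi = stats.beta.ppf([0.10, 0.90], k + 1, n - k + 1)
    hits += lo <= p <= hi

print(hits / M)                              # should be close to 0.80
print(stats.binom.interval(0.95, M, level))  # tolerance band for M trials
```

Across many draws from the prior, the fraction of hits should match the credible level, and the binomial interval gives the tolerance for M trials, which is the same logic behind the 39/50 and 410/500 counts quoted earlier.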
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-November/023182.html","timestamp":"2014-04-20T10:54:29Z","content_type":null,"content_length":"10842","record_id":"<urn:uuid:d692edaa-e0a4-4c28-ad65-ca665d2dc816>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve (x+1)(x+2)(x+3)(x+4)=1 by hand
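No answer appears in the captured thread; one standard by-hand approach (added here, not from the original page) is to pair the factors symmetrically:
$$(x+1)(x+4)\cdot(x+2)(x+3) = (x^2+5x+4)(x^2+5x+6) = 1.$$
Setting $u = x^2+5x+5$ gives $(u-1)(u+1) = u^2 - 1 = 1$, so $u = \pm\sqrt{2}$. The case $u = -\sqrt{2}$ leads to a negative discriminant, so the real solutions come from $x^2 + 5x + 5 = \sqrt{2}$:
$$x = \frac{-5 \pm \sqrt{5 + 4\sqrt{2}}}{2} \approx -0.87 \ \text{or} \ -4.13.$$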
|
{"url":"http://openstudy.com/updates/4f5bcae6e4b0602be4380125","timestamp":"2014-04-20T16:25:38Z","content_type":null,"content_length":"70765","record_id":"<urn:uuid:aff8819e-f685-4704-a080-2f848fc39241>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fractions, Equivalent Fractions and Decimals.
GRADE LEVEL: 7-8
MATERIALS: Pen, Pencil, Paper, Calculator, Worksheet, Paper strips, charts on the blackboard, Thermometer, Micrometer and Gauge and overhead projector.
OBJECTIVE: The student, after a demonstration and explanation on number sense for fractions and decimals, will be able to relate fractions to decimals and to find and compare equivalent fractions.
-Revision on the definition of fractions and equivalent fractions.
-Concept of fraction, and decimals.
-Number sense for fractions and decimals.
-How to relate fractions to decimals to find equivalent fractions.
NCTM STANDARDS: STANDARD # 12: Fractions and decimals. -Develop concepts of fractions, mixed numbers, and decimals. -Use models to explore operations on fractions and decimals. -Apply fractions and
decimals to problem situations.
PROCEDURE: The teacher will do a quick review on the concept of fractions: "What are the numerator and the denominator?"
The teacher will say that this is an exciting world we are living in, and we want and need to know exact information about things.
Things usually come out to so many units and a little bit more; this little bit more is a fraction or a decimal part, and fractions and decimals are the same thing. They both are an accurate measure for something that is smaller than one complete unit. Micrometers, thermometers, and gauges measure decimal and fractional parts. (The teacher will bring these three instruments and show them to the class, passing them around.)
An example will be given to the class of how we use the thermometer to measure temperature. The teacher will take one student's temperature; the thermometer might register 97.5 °F, and the teacher will demonstrate in this way that normal average temperature is expressed as a decimal in real life.
The teacher will mention that in order to measure fractions and decimals we have to divide the unit of measurement into many equal parts. i.e.:
1/4 1/4 1/4 1/4
3 out of 4 equal parts is equal to 3/4
So, in any fraction, the lower number "DENOMINATOR" shows the total number of parts in the unit. The top number "NUMERATOR" shows how many of these parts we are using.
Usually, we can choose the number of total parts into which a unit is divided. 32 PARTS TO ONE INCH, 16 PARTS TO ONE POUND.
We can expect to find fractions with any numbers of equal parts. Fractions written in this manner are called decimals: 0.5, 0.75, 0.25.
Fractions are easier to visualize: if we have a pizza divided into 8 equal slices and we remove two slices, what fraction of the pizza have we removed? Answer: 2/8 or 1/4.
Decimals are easier to add, subtract, multiply and divide.
Fractions are changed to decimals by dividing the numerator by the denominator. Students should be able to recognize that ½ is the same as 0.5 and that 0.4 and 0.45 are a little less than ½; the teacher will also explain, using a simple chart, that 0.6 and 0.57 are a little more than ½.
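For instance (an added illustration, not from the original lesson plan): 3/8 = 3 ÷ 8 = 0.375, which is a little less than ½, while 4/7 = 4 ÷ 7 ≈ 0.571, which is a little more than ½.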
The teacher will show in the overhead projector two graphs as follows:
FIRST GRAPH
one fifth two fifths three fifths four fifths five fifths
six fifths seven fifths eight fifths nine fifths ten fifths
SECOND GRAPH
one-fifth two-fifths three-fifths four-fifths one
one & one one & two one & three one & four two
fifth fifths fifths fifths
The teacher will explain that five-fifths is the same as one and that six-fifths is the same as one and one-fifth, introducing in this way the concepts of equivalent fractions and mixed numbers.
Children will be given physical material to explore and compare equivalent fractions such as: PAPER STRIPS.
½ SAME AS 2/4
2/3 IS SMALLER THAN 3/4
As part of Closure, the teacher will also deliver a chart per group to sort the following fractions:
Each group will have to check the correct box, and they will do this by drawing each fraction, or with paper strips divided into equal parts as required per fraction, or by dividing the numerator by the denominator.
After the group activity has finished, the teacher will go over the results and correct them if wrong using the overhead projector.
ASSESSMENT: As part of the assessment, the students will watch 2 of their favorite TV programs and find out using a clock, how much time is devoted to commercials. Each student should time in minutes
the whole program as well as each commercial from beginning to end. The student will compare number of minutes spent on commercials versus the time of the whole program and express the results in
fraction and decimal form and compare the fraction and decimal to determine which program devotes more time to commercials.
REFERENCE: Ajose, S.A. (1994). "Problems, Patterns and Recreations", The Mathematics Teacher , 87(7), 516-19
Lola May, (May 1994). "Teaching Pre k-8". Teaching Math, 25(5), 24-25.
CONTRIBUTORS: Major: Maritza Simauchi.
Minor: Carol A. Marinas.
FRACTION ABOUT 0 ABOUT ½ ABOUT 1
|
{"url":"http://euclid.barry.edu/~marinas/mat476/sflattr/sim319a.html","timestamp":"2014-04-19T20:08:18Z","content_type":null,"content_length":"7244","record_id":"<urn:uuid:aa39e73c-eff4-4960-a602-61bfab931e5a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: FW: st: Query..
From "Roger B. Newson" <r.newson@imperial.ac.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: FW: st: Query..
Date Wed, 17 Apr 2013 11:18:00 +0100
A more rigorous demonstration of this point, and other points about the 2-sample t-test, using numerical integration, is given in Moser, Stevens and Watts (1989) and in Moser and Stevens (1992).
Best wishes
Moser, B.K., Stevens, G.R., and Watts, C.L. 1989. The two-sample t-test
versus Satterthwaite’s approximate F-test. Communications in Statistics - Theory and Methods 18, 3963-3975.
Moser, B.K. and Stevens, G.R. 1992. Homogeneity of variance in the two-sample means test. The American Statistician 46, 19-21.
Roger B Newson BSc MSc DPhil
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton Campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: r.newson@imperial.ac.uk
Web page: http://www.imperial.ac.uk/nhli/r.newson/
Departmental Web page:
Opinions expressed are those of the author, not of the institution.
On 17/04/2013 04:26, Lachenbruch, Peter wrote:
Rich Goldstein sent this to me
Peter A. Lachenbruch,
Professor (retired)
From: Richard Goldstein [richgold@ix.netcom.com]
Sent: Tuesday, April 16, 2013 6:00 PM
To: statalist@hsphsun2.harvard.edu
Cc: Lachenbruch, Peter
Subject: Re: st: Query..
Tony, et al.
the quote is "To make the preliminary test on variances is rather like
putting to sea in a rowing boat to find out whether conditions are
sufficiently calm for an ocean liner to leave port!"
this is on p. 333 of Box, GEP (1953), "Non-normality and tests on
Variances," _Biometrika_, 40 (3/4): 318-335
On 4/16/13 7:01 PM, Lachenbruch, Peter wrote:
The context i was referring to was an old article by George Box in Biometrika,
about 1953, in which he commented that testing for heteroskedasticity was
like setting to sea in a rowboat to see if it was safe for the Queen Mary to sail.
Sorry i don't have the quote, and my books are all bundled up due to a flood in my ...
Peter A. Lachenbruch,
Professor (retired)
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] on behalf of John Antonakis [John.Antonakis@unil.ch]
Sent: Tuesday, April 16, 2013 1:47 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Query..
Hello Peter:
Can you please elaborate? The chi-square test of fit--or the likelihood
ratio test comparing the saturated to the target model--is pretty
robust, though as I indicated, it does not behave as expected at small
samples, when data are not multivariate normal, when the model is
complex (and the n to parameters estimated ratio is low). However, as I
mentioned there are remedies to the problem. More specifically see:
Bollen, K. A., & Stine, R. A. (1992). Bootstrapping goodness-of-fit
measures in structural equation models. Sociological Methods & Research,
21(2), 205-229.
Herzog, W., & Boomsma, W. (2009). Small-sample robust estimators of
noncentrality-based and incremental model fit. Structural Equation
Modeling, 16(1), 1–27.
Swain, A. J. (1975). Analysis of parametric structures for variance
matrices (doctoral thesis). University of Adelaide, Adelaide.
Yuan, K. H., & Bentler, P. M. (2000). Three likelihood-based methods for
mean and covariance structure analysis with nonnormal missing data. In
M. E. Sobel & M. P. Becker (Eds.), Sociological Methodology (pp.
165-200). Washington, D.C: ASA.
In addition to elaborating, better yet, if you have a moment give us
some syntax for a dataset that you can create where there are
simultaneous equations with observed variables, an omitted cause, and
instruments. Let's see how the Hansen-J test (estimated with reg3, with
2sls and 3sls) and the normal theory chi-square statistic (estimated
with sem) behave (with and without robust corrections).
John Antonakis
Professor of Organizational Behavior
Director, Ph.D. Program in Management
Faculty of Business and Economics
University of Lausanne
Internef #618
CH-1015 Lausanne-Dorigny
Tel ++41 (0)21 692-3438
Fax ++41 (0)21 692-3305
Associate Editor
The Leadership Quarterly
On 16.04.2013 22:04, Lachenbruch, Peter wrote:
I would be rather cautious about relying on tests of variances. These are notoriously non-robust. Unless new theory has shown this not to be the case, i'd not regard this as a major issue.
Peter A. Lachenbruch,
Professor (retired)
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] on behalf of John Antonakis [John.Antonakis@unil.ch]
Sent: Tuesday, April 16, 2013 10:51 AM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Query..
In general I find Acock's books helpful and I have bought two of them.
The latest one he has on SEM gives a very nice overview of the SEM
module in Stata. However, it is disappointing on some of the statistical
theory, in particular with respect to the fact that he gave too much
coverage to "approximate" indexes of overidentification, which are not
tests, and did not explain enough what the chi-square statistic of
overidentification is.
The Stata people are usually very good about strictly following
statistical theory, as do all econometricians, and do not promote too
much these approximate indexes. So, I was a bit annoyed to see how much
airtime was given to rule-of-thumb indexes that have no known
distributions and are not tests. The only serious test of
overidentification, analogous to the Hansen-Sargan statistic, is the
chi-square test of fit. So, my suggestion to Alan is that he spend some
time covering that in the updated edition and not suggest that
models that fail the chi-square test are "approximately good."
For those who do not know what this statistic does, it basically
compares the observed variance-covariance (S) matrix to the fitted
variance-covariance matrix (Sigma) to see if the differences (residuals)
are simultaneously different from zero. The fitting function that is
minimized is:
Fml = ln|Sigma| - ln|S| + trace[S.Sigma^-1] - p
As Sigma approaches S, the log of the determinant of Sigma less the log
of the determinant of S approach zero; as concerns the two last terms,
as Sigma approaches S, the inverse of Sigma premultiplied by S makes an
identity matrix, whose trace will equal the number of observed variables
p (thus, those two terms also approach zero). The chi-square statistic
is simply Fml*N, at the relevant DF (which is elements in the
variance-covariance matrix less parameters estimated).
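Written out in standard notation (a restatement of the paragraph above, not from the original email; Sigma is the model-implied covariance matrix, q the number of free parameters, N the sample size, and the "elements" are the p(p+1)/2 unique entries of the covariance matrix):

$$F_{ML} = \ln|\Sigma| - \ln|S| + \operatorname{tr}\!\left(S\,\Sigma^{-1}\right) - p, \qquad T = N\,F_{ML} \sim \chi^{2}_{df}, \qquad df = \tfrac{p(p+1)}{2} - q.$$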
This chi-square test will not reject a correctly specified model;
however, it does not behave as expected at small samples, when data are
not multivariate normal, when the model is complex (and the n to
parameters estimated ratio is low), which is why several corrections
have been shown to better approximate the true chi-square distribution
(e.g., Swain correction, Yuan-Bentler correction, Bollen-Stine bootstrap).
In all, I am thankful to Alan for his nice "how-to" guides which are
very helpful to students who do not know Stata and need a "gentle
introduction"--so I recommend them to my students, that is for sure.
But, I would appreciate a bit more beef from him for the SEM book in
updated versions.
John Antonakis
Professor of Organizational Behavior
Director, Ph.D. Program in Management
Faculty of Business and Economics
University of Lausanne
Internef #618
CH-1015 Lausanne-Dorigny
Tel ++41 (0)21 692-3438
Fax ++41 (0)21 692-3305
Associate Editor
The Leadership Quarterly
On 16.04.2013 17:45, Lachenbruch, Peter wrote:
> David -
> It would be good for you to specify what you find problematic with
Acock's book. I've used it and not had any problems - but maybe i'm
just ancient and not seeing issues
> Peter A. Lachenbruch,
> Professor (retired)
> ________________________________________
> From: owner-statalist@hsphsun2.harvard.edu
[owner-statalist@hsphsun2.harvard.edu] on behalf of Hutagalung, Robert
> Sent: Monday, April 15, 2013 2:06 AM
> To: statalist@hsphsun2.harvard.edu
> Subject: AW: st: Query..
> Hi David,
> Thanks, though I find the book very useful.
> Best, Rob
> -----Ursprüngliche Nachricht-----
> Von: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von David Hoaglin
> Gesendet: Samstag, 13. April 2013 16:11
> An: statalist@hsphsun2.harvard.edu
> Betreff: Re: st: Query..
> Hi, Rob.
> I am not able to suggest a book on pharmacokinetics/pharmacodynamics,
> but I do have a comment on A Gentle Introduction to Stata. As a
statistician, I found it helpful in learning to use Stata, but a number
of its explanations of statistics are very worrisome.
> David Hoaglin
> On Fri, Apr 12, 2013 at 9:01 AM, Hutagalung, Robert
<Robert.Hutagalung@med.uni-jena.de> wrote:
>> Hi everyone, I am a new fellow here..
>> I am wondering if somebody could suggest a book (or books) on Stata dealing
with pharmacokinetics/pharmacodynamics - both analyses and graphs.
>> I already have: A Visual Guide to Stata Graphics, 2nd Edition, A
Gentle Introduction to Stata, Third Edition, An Introduction to Stata
for Health Researchers, Third Edition.
>> Thanks in advance, Rob.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
Random Thoughts on a Trivial Lab Measurement
A while back in the lab we were conducting an experiment that involved passing a laser beam through a narrow iris, using a beamsplitter to take half the intensity of the beam and send it one way,
with the remaining half passing through undisturbed. Then we arranged our mirrors in such a way as to bring the beams together so they were propagating in the same direction parallel to one another
before doing some jazz that will hopefully make an interesting journal article.
To get a good handle on calibration, we wanted to characterize the two parallel beams. Laser beams, despite how they might look to the naked eye, are not pencils with zero light outside the beam and
uniform intensity within. Instead, they often (but not always) follow a Gaussian profile. Their peak intensity is at the center of the beam and falls off smoothly away from the center. However, in
this experiment we can’t guarantee that we split the intensity exactly 50:50, or that aberrations in our equipment haven’t made one of the beams a little wider than the other. So while in a perfect
world the two identical beams might look like this projected on a wall…
…in practice the two non-identical beams might look something more like this:
The definite way to characterize something like this is with a device called a beam profiler, which is essentially a specialized camera. But they’re expensive, fragile, and finicky. We took a
shortcut. We put a razor blade in on a translation stage so that we could gradually move it to the right and cut off first one beam, then both beams. While doing this, we measured the total power as
a function of how far over we had moved the razor. A picture might help. With the razor at (say) the position x = -2, the beams as they hit the power meter looked something like this:
So everything to the left of the razor edge is cut off. By assuming that the beams are in fact Gaussian (with possibly different amplitudes and widths), we can calculate that the power not blocked by
the razor and thus hitting the meter is:
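(The equation itself is not reproduced here; the standard knife-edge form consistent with the description that follows, up to sign and normalization conventions, is:)

$$P(x) = c + \frac{a_1}{2}\,\operatorname{erfc}\!\left(\frac{x-\mu_1}{\sqrt{2}\,\sigma_1}\right) + \frac{a_2}{2}\,\operatorname{erfc}\!\left(\frac{x-\mu_2}{\sqrt{2}\,\sigma_2}\right)$$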
Where erf is the error function, the “a” coefficients are the amplitudes, the μ are the beam positions, the σ are the beam widths, and c is the background signal of the detector. For the values I
picked in the uneven-beams example image above, the graph of this function looks like this:
In our experiment, we have to do that backwards – extract the various constants from the power vs. razor position measurements. Here’s the data we actually had, with the y-axis in mW and the
x-axis in microns:
From that graph, it’s possible to eyeball things and make guesses for what the parameters might roughly be. Just a few decades ago that would have been just about the only possibility. But today,
computer algebra systems (Mathematica in this case) can take a starting guess for the parameters, calculate how far off that guess is, create a new guess, and iterate until a numerical best-fit is
found. This would take a human probably weeks with pen and paper, but a computer (my netbook even) can do it in about a second. The best fit in this case is:
a1 = 6.45147, a2 = 6.97507, μ1 = 1575.7, μ2 = 3656.74, σ1 = 370.544, σ2 = 294.698, c = 0.438761
So we know the relative powers of each beam, and how far apart their centers are (about 2.08 mm). Here’s the fit, superimposed on the points:
It’s not perfect, but it’s plenty good enough for what we needed.
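A minimal sketch of this kind of fit in Python with scipy.optimize.curve_fit, rather than the Mathematica actually used; the model is the knife-edge form assumed above, and the "true" parameters are just the fitted values quoted earlier, used here to generate stand-in data:

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Power still reaching the meter with the razor edge at position x,
# for two Gaussian beams plus a constant detector background c.
def power(x, a1, mu1, s1, a2, mu2, s2, c):
    return (0.5 * a1 * erfc((x - mu1) / (np.sqrt(2) * s1))
            + 0.5 * a2 * erfc((x - mu2) / (np.sqrt(2) * s2)) + c)

# Synthetic stand-in for the razor-position (microns) / power (mW) data.
true_p = [6.45, 1576.0, 371.0, 6.98, 3657.0, 295.0, 0.44]
x = np.linspace(0.0, 5000.0, 60)
y = power(x, *true_p) + 0.05 * np.random.default_rng(1).standard_normal(x.size)

# The starting guess is eyeballed; curve_fit then iterates to the optimum.
popt, pcov = curve_fit(power, x, y, p0=[6, 1500, 400, 7, 3700, 300, 0.5])
print(popt)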
As something of a philosophical aside, scientists (physicists especially, and I include myself) like to say falsifiability is one of the major defining parts of the scientific enterprise. If a theory
is at variance with what is actually observed in reality, so much the worse for the theory. But here I have presented a theory in the form of that P(x) equation, and in fact it fails to fit the
points perfectly. Is the theory wrong? Well, yes. The beams may not be perfectly Gaussian. Even if they are, the simple model treating the razor as a diffraction-free cutoff is at variance with the
full Maxwellian treatment. Even if it weren’t, the theory I’m testing also implicitly includes the “theory” of imperfect lab equipment. When falsifying a theory, it’s often not easy to say what
exactly you’ve falsified. And so philosophers of science write books on this sort of thing, and physicists devise ever more careful experiments to disentangle theory from approximation from
equipment, and everyone stays pretty busy.
Rather a lot of words from a 5-minute “we’d better double check our beam quality” measurement, but what else is the internet for?
1. #1 Eric Lund March 4, 2011
One obvious check on the quality of the model would be to repeat the test in the vertical direction. You will get different results for the μ’s, but the a’s and the σ’s should be very nearly the
same as before if the azimuthally symmetric Gaussian model is a good approximation. For that matter, μ1 and μ2 should come out nearly equal, if you did your alignment right.
2. #2 ObsessiveMathsFreak March 4, 2011
Not sure how your optimisation code is converging on the fit parameters, but if you have some control over things, I would suggest you try using l1 rather than l2 norms in your code. That is,
instead of minimising the RMS differences between the curve and fit points, you could try simply minimising the sum of the absolute differences.
The reason this sometimes gives an improvement is because of the nature of the l2 norm when the differences become small. Small differences become even smaller when squared, so the l2 norm
tends to leave a lot of small but noticeable errors around. By contrast, the l1 norm (abs(d)) tries to minimise errors in a more evenhanded way once the fit closes in on a good solution and the
differences all become small.
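A toy illustration of that suggestion in scipy (assumed data and names; soft_l1 is one l1-like robust loss option, not necessarily what the commenter had in mind):

import numpy as np
from scipy.optimize import least_squares

# Fit a line to data with one outlier, using the plain l2 objective and an
# l1-style robust loss, then compare the recovered slope and intercept.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(20)
y[5] += 5.0  # one bad point

resid = lambda p: p[0] * x + p[1] - y
fit_l2 = least_squares(resid, [1.0, 0.0])                  # ordinary least squares
fit_l1 = least_squares(resid, [1.0, 0.0], loss="soft_l1")  # l1-like robust fit
print(fit_l2.x, fit_l1.x)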
3. #3 Torbjörn Larsson, OM March 4, 2011
Well, one or more obvious hypotheses likely falsified is that we observe separated beams in quantities of one, three, four, …
Or conversely, Springer’s razor tells us that we see two beams.
4. #4 arnoques March 5, 2011
We find it useful to take a numerical derivative of the data before fitting. This should give you the Gaussians directly, which are much easier to check visually and to treat mathematically.
On the other hand, I completely agree on the falsifiability paragraph. It’s oh so hard to be sure that the theory is what’s wrong, and not any of the very very long list of things that could go
wrong in an experiment. But, hey! That’s one of the reasons science is interesting!
The algorithm of the Gods
Does the world function like a computer programme? Are the forms and states of nature created from a basic formula? Stephen Wolfram thinks so, and even claims to have discovered the source code
underpinning the complexity of the universe.
By day, 50-year-old Wolfram is the head of a company called Wolfram Research, which owns the well-known calculation software Mathematica and the new search engine Wolfram Alpha. By night, he is a
researcher, a brilliant scientist with a reputation second to none. Mathematician, computer scientist and particle physicist, his research focuses on cellular automata, mathematical models that,
according to Stephen Wolfram, explain how the complexity of the world is constructed. In his book A New Kind of Science, published in 2002, Wolfram challenges the very foundations of science in all
its fields. So, is he an arrogant megalomaniac or a misunderstood genius?
Prodigal son
Stephen Wolfram was born in London in 1959. At a very early age he showed signs of remarkable intelligence. At the age of 13 he was granted a study bursary for Eton College, a prestigious secondary
school where he rubbed shoulders with the cream of the British elite. One year later, Wolfram wrote an essay on particle physics. His first scientific article appeared in Nuclear Physics in 1975,
when he was only 15 years old. “At that time physics was one of the most innovative fields of research. Many advances were made, especially in particle physics, which attracted me”, he explains^(1).
The young genius pursued his career at Oxford University (UK) before crossing the Atlantic to work at the California Institute of Technology – Caltech (US), where he gained a doctorate in theoretical
physics at the age of 20. It was here that he began to forge his reputation. During this period he published more than 25 scientific articles. He dreamed up the Fox-Wolfram variables and discovered
the Politzer-Wolfram upper bound on the mass of quarks. In 1981, at the ripe old age of 22, he became the youngest ever winner of the MacArthur ‘Genius’ Fellowship, which offers a bursary to the most
talented researchers each year.
Wolfram left Caltech in 1982 for the Institute for Advanced Study at Princeton (US), an establishment devoted exclusively to scientific research. It was here that cellular automata first attracted
his interest. His goal was to understand the complexity of the world, a question that no mathematical equation or physical theory had ever succeeded in resolving. The origin of the complexity of the
Universe is a subject that had fascinated him since childhood. “This question arose not only when I was studying cosmology, but also neuroscience and artificial intelligence. Although I was working
on the development of what later became the Mathematica software and creating primitive operations on which to construct a whole series of more complex operations, I had an intuition that there was a
similar general principle on which the full complexity of nature was based, from the structure of the galaxies down to that of neurons. For this reason I set out to test very simple operations that
could lead to the formation of complex structures, which attracted me to cellular automata.”
So what exactly is a cellular automaton? Take a series of black and white cells. Then imagine that a new row is generated according to a series of rudimentary rules. For example, a white cell can
never be above another white cell unless it forms a diagonal of 10 white cells. The matrix resulting from this process randomly produces structures that can be extremely complex.
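A minimal sketch of the kind of one-dimensional cellular automaton described above (this implements elementary Rule 30, the rule mentioned in the next paragraph; the periodic boundary and grid size are arbitrary choices made for the illustration):

# Each new cell is looked up from its 3-cell neighborhood (left, self, right),
# read as a 3-bit number, in the binary expansion of the rule number.
def step(cells, rule=30):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1  # start from a single black cell
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)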
During the 1980s Wolfram discovered Rule 30, a cellular automaton that can generate forms similar to the patterns found on the shell of a snail, Conus textile. This convinced him that he had lifted a
corner of the veil of the universal code that he thought must exist.
This prompted him to publish a series of articles on the topic and to create a new discipline, the science of complex systems. He founded the Center for Complex Systems Research at the University of
Illinois (US), helped to set up a think tank at the Santa Fe Institute (US) and created the scientific review entitled Complex Systems Journal. “By putting these different elements in place, I hoped
to encourage other researchers to follow this avenue of research. But the response from the scientific community was too slow to satisfy my curiosity.”
The frustrated Wolfram left the academic world to devote himself fully to computer programming. “The aim was to build research infrastructure and a tool that would allow me to continue my work on
complex systems alone.” In 1987, Wolfram founded the firm Wolfram Research. One year later he brought to market the Mathematica software, which was capable of carrying out a wide range of
mathematical operations.
The company was a huge success. Today Mathematica has more than two million users in 90 countries, Wolfram Research generates annual turnover of USD 50 million and has more than 300 employees on its
payroll. In changing hats from researcher to businessman, Stephen Wolfram became a millionaire. Initially this was poorly received by the academic world. “Twenty years ago the software used in
laboratories was free. It was therefore seen as deeply shocking to ask for money for an application used for advanced R&D, an activity largely confined to universities. Nowadays attitudes have
changed radically.”
Hacker of the universal code?
During the 1990s the world of research forgot about Stephen Wolfram, but he had not given up on his research. At night he shut himself up in his laboratory to pursue his research into complex
systems. Armed with a computer, he tirelessly tested different cellular automata in order to identify those that best reproduced the structures found in nature. This led to the discovery of cellular
automata capable of generating the structure of ice and of certain leaves. Ten years later he published A New Kind of Science, in which he describes his research results and then, chapter by chapter,
demonstrates to what extent his theory challenges the bases of the various scientific disciplines.
He deliberately chose not to follow the traditional path of publishing his theory in a scientific review. “I wanted my research to be accessible to the widest possible audience. This approach also
allowed me to test my theory. Explaining an idea, then polishing it to make it as clear as possible, is an excellent path to better understanding.”
After applying his efforts to computer modelling of mathematics and the world, Wolfram took on the sphere of knowledge. In 2009, he launched Wolfram Alpha, a search engine capable of providing all
the information concerning a given subject in response to a written request. “It has always been important for me not to hide the fundamental research questions raised during the development of a
technology; in other words to adopt an integrated approach. Wolfram Alpha is a project that is particularly close to my heart precisely because it reflects this integrated vision of knowledge.”
As a businessman and researcher, the two hats he wears make Wolfram both fascinating and disconcerting. “Most people with the financial resources fund research indirectly, through a foundation for
example. Some people are frightened at the idea of financing fundamental research while at the same time contributing personally… although they are unable to say why!”
Julie Van Rossom
1. All quotes are from Stephen Wolfram.
Binomial Pricing Trees in R
Binomial Tree Simulation
The binomial model is a discrete grid generation method from \(t=0\) to \(T\). At each point in time (\(t+\Delta t\)) we can move up with probability \(p\) and down with probability \((1-p)\). As the probabilities of an up and a down movement remain constant throughout the generation process, we end up with a recombining binary tree, or binary lattice. Whereas a balanced binomial tree with height \(h\) has \(2^{h+1}-1\) nodes, a binomial lattice of height \(h\) has only \(\sum_{i=1}^{h+1}i\) nodes.
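As a quick check of this count (it matches the 21 values printed in the sample run below): for \(h=5\), \(\sum_{i=1}^{6} i = 21\) lattice nodes, versus \(2^{6}-1 = 63\) nodes in the corresponding full binary tree.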
The algorithm to generate a binomial lattice of \(M\) steps (i.e. of height \(M\)) given a starting value \(S_0\), an up movement \(u\), and down movement \(d\), is:
FOR i=1 to M
FOR j=0 to i
STATE S(j,i) = S(0)*u^j*d^(i-j)
We can write this function in R and generate a graph of the lattice. A simple lattice generation function is below:
# Generate a binomial lattice
# for a given up, down, start value and number of steps
genlattice <- function(X0=100, u=1.1, d=.75, N=5) {
  X <- c()
  X[1] <- X0
  count <- 2
  for (i in 1:N) {
    for (j in 0:i) {
      X[count] <- X0 * u^j * d^(i-j)
      count <- count + 1
    }
  }
  X
}
We can generate a sample lattice of 5 steps using symmetric up-and-down values:
> genlattice(N=5, u=1.1, d=.9)
[1] 100.000 90.000 110.000 81.000 99.000 121.000 72.900 89.100 108.900 133.100 65.610
[12] 80.190 98.010 119.790 146.410 59.049 72.171 88.209 107.811 131.769 161.051
In this case, the output is a vector of alternate up and down state values.
We can nicely graph a binomial lattice given a tool like graphviz, and we can easily create an R function to generate a graph specification that we can feed into graphviz:
dotlattice <- function(S, labels=FALSE) {
  shape <- ifelse(labels == TRUE, "plaintext", "point")

  cat("digraph G {", "\n", sep="")
  cat("node[shape=",shape,", samehead, sametail];","\n", sep="")

  # Create a dot node for each element in the lattice
  for (i in 1:length(S)) {
    cat("node", i, "[label=\"", S[i], "\"];", "\n", sep="")
  }

  # The number of levels in a binomial lattice of length N
  # is `$\frac{\sqrt{8N+1}-1}{2}$`
  L <- ((sqrt(8*length(S)+1)-1)/2 - 1)

  # Write the edges: node k in one level links to nodes k+i and k+i+1 in the
  # next level (these edge lines are a reconstruction, matching the node
  # numbering produced by genlattice above)
  k <- 0
  for (i in 1:L) {
    tabs <- rep("\t",i-1)
    j <- i
    while(j>0) {
      k <- k + 1
      cat(tabs, "node", k, "->", "node", (k+i), ";\n", sep="")
      cat(tabs, "node", k, "->", "node", (k+i+1), ";\n", sep="")
      j <- j - 1
    }
  }

  cat("}", sep="")
}
This will simply output a dot script to the screen. We can capture this script and save it to a file by invoking:
> x<-capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9)))
> cat(x, file="/tmp/lattice1.dot")
We can then invoke dot from the command-line on the generated file:
$ dot -Tpng -o lattice.png -v lattice1.dot
The resulting graph looks like the following:
If we want to add labels to the lattice vertices, we can add the labels attribute:
> x<-capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9), labels=TRUE))
> cat(x, file="/tmp/lattice1.dot")
Physics Forums - Maths prep for Electromagnetism unit
Re: Maths prep for Electromagnetism unit
Vector calculus. Spend time studying cylindrical and spherical coordinates, the divergence and Stokes' theorems, line integrals, and surface integrals.
Look up cylindrical and spherical coordinates; they are NOT the ones you studied in your Calculus III multi-variable course. I know those talk about cylindrical and spherical stuff, but they are really
still rectangular coordinates, like:
[tex] \vec F=\hat x r\cos\theta+\hat y r \sin\theta +\hat z z \;\hbox { for cylindrical and }[/tex]
[tex] \hat x R \cos \phi \sin \theta +\hat y R \sin \phi \sin \theta + \hat z R \cos \theta\;\hbox { for spherical}[/tex]
These are not cylindrical and spherical coordinates by any stretch; they are just xyz coordinates with the amplitudes of x, y and z represented in radial and angular components. I have a few EM books, and they are not very detailed in explaining these coordinates. If you can study this, you'll be ahead of the game; these are very, very important.
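For reference (standard physics convention matching the formulas above; this is an addition, not from the original post), the spherical unit vectors in terms of the Cartesian ones are:
[tex] \hat R = \hat x \sin\theta\cos\phi + \hat y \sin\theta\sin\phi + \hat z \cos\theta, \quad \hat \theta = \hat x \cos\theta\cos\phi + \hat y \cos\theta\sin\phi - \hat z \sin\theta, \quad \hat \phi = -\hat x \sin\phi + \hat y \cos\phi [/tex]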
Make sure you review and UNDERSTAND vector fields, line integrals, and the divergence and Stokes' theorems, and get good at them. EM is a very hard subject; you need to get these out of the way and concentrate on the EM part without having to struggle with the math. You are wise to get a 3 months head start......You really need it to get the most out of the class. If I am scaring you............Be scared.
I took a look at your book; it is an engineering EM book. If you are interested in EM theory, also buy "Introduction to Electrodynamics" by David Griffiths. I have 5 other engineering EM books, and they are not very detailed in a lot of things; I studied two of the books and still found I missed the picture. Then I studied a third time using Griffiths, and it was like a light bulb just lit up. Griffiths doesn't get too much into transmission lines, waveguides and antennas, but it makes up for that with a lot more detail on the rest.
More Efficient Prime Searching Algorithm?
I started working on some Project Euler problems today, and I'm having problems with the one that seeks the sum of all prime number less than 2000000.
It's running very slowly, so I was wondering if there existed a faster prime searching algorithm than the one I have now. (It's currently some sort of inefficient prime sieve.)
Here is the code:
Java Code:
public class Problem10 {
    public static void main(String args[]) {
        long sum = 0; // long: the sum of the primes below 2,000,000 overflows an int
        boolean[] a = primesList();
        for (int i = 0; i < 1999999; i++) {
            if (!a[i]) sum += i + 2; // a[i] == false means i+2 was never marked composite
        }
        System.out.println(sum);
    }

    // a[n-2] is set to true when n is found to be divisible by some smaller number
    public static boolean[] primesList() {
        boolean[] a = new boolean[1999999];
        for (int i = 0; i < 1999999; i++) {
            int j = i + 2;
            for (int k = i + 3; k < 2000001; k++) {
                if (divTest(k, j)) a[k - 2] = true;
            }
            if (i % 10000 == 0) System.out.println(i);
        }
        return a;
    }

    public static boolean divTest(int i, int j) {
        boolean b = false;
        if (i / j == ((double) i) / j) b = true;
        return b;
    }
}
For a sieve you don't need to divide anything; for all multiples of 2 (except 2 itself) clear the corresponding flag. Check the first next number with its flag set, say x; clear all multiples
of x (except x itself) and repeat the process until done. While checking for those numbers with their flag set in the outer iteration, you might as well add them to a total value; that way,
when you're done with your sieve you're done with your entire algorithm.
kind regards,
cenosillicaphobia: the fear for an empty beer glass
Yes, there are some more efficient ways.
Did you search for it on the wiki? PRIME NUMBER
for (int i=1; i < sqrt(2000000); i++)
for (int j=1; j <= 2000000; j++)
if a[j] = multiple of i then mark a[j] missing
That isn't very efficient; better try an ordinary sieve:
Java Code:
boolean[] b= new boolean[100000];
b[0]= b[1]= true;
for (int x= 2; x < b.length; x++)
if (!b[x])
for (int y= x+x; y < b.length; y+= x)
b[y]= true;
if b[x] is true x is not a prime number.
kind regards,
cenosillicaphobia: the fear for an empty beer glass
Wouldn't it be more efficient to only check for values of x up to b.length/2? It seems a bit ridiculous to be incrementing x when x+x will always be greater than or equal to b.length, so the
inner for loop will never execute once x > b.length/2.
So instead of what josAH posted, unless I'm missing something, this would end faster, if not by much for tests up to less than a million or so:
Java Code:
boolean[] b= new boolean[100000];
b[0]= b[1]= true;
for (int x= 2; x < b.length/2; x++)
if (!b[x])
for (int y= x+x; y < b.length; y+= x)
b[y]= true;
If the above doesn't make sense to you, ignore it, but remember it - might be useful!
And if you just randomly taught yourself to program, well... you're just like me!
It is, indeed, a bit more efficient, i.e. for the last half (here 500,000) nothing needs to be done and the inner loop doesn't execute; but for the first half many more iterations have been
made; 1e6/2+1e6/3+1e6/4 ...
kind regards,
cenosillicaphobia: the fear for an empty beer glass
Series and Sequences help
April 9th 2010, 03:47 PM
Series and Sequences help
My book already tells me the answer to a problem and does it step by step, but I have a question about one of the steps.
The original question is here:
Sigma from 1 to infinity of 3^(2k)*5^(1-k)
After that, the book says to simplify it to this:
9^k / 5^(k-1)
After that, this is where I have my problem. It says to simplify it once more to 9*(9/5)^(k-1), only this does not make sense to me. How do you break up a 9^k into a 9 by itself? Furthermore, why is a 9 put into the 1/5? Wouldn't it just be 1/5^(k-1) times 9^k? I really do not understand simplifying these types of equations, so if anyone could help me with this one or link me to a site that shows how to simplify these things, I would greatly appreciate it. The only rule I know about any number ^k is that anything^(k+1) is the same thing as anything times anything^k. Is there a list of these rules somewhere dealing with numbers to the power of a variable?
April 9th 2010, 03:57 PM
${9^k \over 5^{k-1}} = {9^{1+k-1} \over 5^{k-1}} = {9^1\cdot 9^{k-1}\over 5^{k-1}} = 9 \cdot(\frac95)^{k-1}$
edit: $x^{a+b} = x^a x^b$ and $x^0=1$ are the defining properties of exponents. Letting a=-b you get $1 = x^0=x^{a+(-a)} = x^a x^{-a}$ so $x^{-a} = 1/x^a$. Also, $x^1 = x^{\frac1n+\cdots+\frac1n} =
(x^{1/n})\cdots(x^{1/n}) = (x^{1/n})^n$ so $x^{1/n} = \sqrt[n]{x^1}$. You can look the others up on wikipedia I think.
April 9th 2010, 03:59 PM
April 9th 2010, 04:04 PM
My book already tells me the answer to a problem and does it step by step, but I have a question b/w a step.
The original question is here:
Sigma from 1 to infinity of 3^(2k)*5^(1-k)
After that, the book says to simplify it to this:
(9^k)/(5^k-1) $\textcolor{red}{= \frac{5}{5} \cdot \frac{9^k}{5^{k-1}} = 5 \cdot \left(\frac{9}{5}\right)^k}$
$\textcolor{red}{5 \sum_{k=1}^\infty \left(\frac{9}{5}\right)^k}$
... which diverges.
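For completeness (a standard fact, not part of the replies above): a geometric series $\sum_{k} r^k$ converges only when the common ratio satisfies $|r| < 1$. Here $r = \tfrac{9}{5} > 1$, so both equivalent forms, $9\sum_{k=1}^{\infty}\left(\tfrac{9}{5}\right)^{k-1}$ and $5\sum_{k=1}^{\infty}\left(\tfrac{9}{5}\right)^{k}$, diverge.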
Debt Ratio
Debt-to-assets ratio or simply debt ratio is the ratio of total liabilities of a business to its total assets. It is a solvency ratio and it measures the portion of the assets of a business which are
financed through debt.
The formula to calculate the debt ratio is:
Debt Ratio = Total Liabilities / Total Assets
Total liabilities include both the current and non-current liabilities.
Debt ratio typically ranges from 0.00 to 1.00. A lower value of the debt ratio is favorable, while a higher value indicates that a larger portion of the company's assets is claimed by its creditors, which means higher risk in operation, since the business would find it difficult to obtain loans for new projects. A debt ratio of 0.5 means that half of the company's assets are financed through debt.
In order to calculate debt ratio from the balance sheet, divide total liabilities by total assets, for example:
Example 1: Total liabilities of a company are $267,330 and total assets are $680,400. Calculate debt ratio.
Debt ratio = $267,330/$680,400 = 0.393 or 39.3%
Example 2: Current liabilities are $34,600; Non-current liabilities are $200,000; and Total assets are $504,100. Calculate debt ratio.
Since total liabilities are equal to sum of current and non-current liabilities therefore,
Debt Ratio = ($34,600 + $200,000) / $504,100 = 0.465 or 46.5%.
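A minimal sketch of the same calculation in code, using the figures from the two examples above:

def debt_ratio(total_liabilities, total_assets):
    return total_liabilities / total_assets

print(debt_ratio(267330, 680400))          # Example 1 -> 0.393, i.e. 39.3%
print(debt_ratio(34600 + 200000, 504100))  # Example 2 -> 0.465, i.e. 46.5%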
Written by Irfanullah Jan
Berkeley, CA Geometry Tutor
Find a Berkeley, CA Geometry Tutor
...I bring a background of strong ability and skill in math, and even, years ago, won the "math award" in my high school for receiving the top scores in math testing, and scored 770 in math on the
SAT at that time. I have experience in private tutoring in math for students struggling not only with ...
37 Subjects: including geometry, reading, Spanish, chemistry
...I am patient, organized, and I break down problems in fun and memorable ways. I taught Physics and Calculus to undergrads for 5 semesters and I absolutely loved it! Seeing them understand hard
concepts, succeed, and thanking me at the end of the semester was a very rewarding feeling.
26 Subjects: including geometry, physics, calculus, French
...Difficulties and bad experiences can slow down or off put a student. It is better to catch and address any issues as soon as possible. With over a decade of experience teaching and tutoring, I
have worked with the bright and gifted, the learning disabled, the gifted underachievers, and just plain folk.
49 Subjects: including geometry, reading, writing, algebra 1
When I retired from the United States Air Force I swore I would never get up early again. But I still wanted to do something to continue making the world a better place. So I turned to something I
had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING!
10 Subjects: including geometry, calculus, precalculus, algebra 1
...Learning science and math can be difficult at times but with a little help anyone can master the principles and discover a vast, exciting, and ever expanding body of knowledge! I would be
honored to help you in your quest for this knowledge. I have a bachelor's degree in Physics from U.C.
12 Subjects: including geometry, chemistry, physics, calculus
[SciPy-User] ndimage masked array handling
David Shean dshean@gmail....
Mon Nov 19 02:28:06 CST 2012
Hi all,
I'm running into issues similar to those described in this ticket (opened 3 years ago):
My workflows typically involve loading a geotif as a masked array:
#Given input dataset, return a masked array for the input band
def fn_getma(src_fn, bnum=1, ndv=0):
src_ds = gdal.Open(src_fn, gdal.GA_ReadOnly)
b = src_ds.GetRasterBand(bnum)
b_ndv = b.GetNoDataValue()
if (b_ndv is not None):
ndv = b_ndv
bma = numpy.ma.masked_equal(b.ReadAsArray(), ndv)
return bma
The scipy.ndimage package is great, and I use it often. Unfortunately, the filters and interpolation routines can't handle masked arrays.
I've had some success using the following hack, passing a masked array filled with numpy.nan to the desired scipy.ndimage function f_a:
def nanfill(a, f_a, *args, **kwargs):
    ndv = a.fill_value
    b = f_a(a.filled(numpy.nan), *args, **kwargs)
    out = numpy.ma.fix_invalid(b, copy=False)
    return out
For example:
b = fn_getma('img.tif')
nanfill(b, scipy.ndimage.gaussian_filter, 2)
This works, but the nan regions are improperly expanded. The desired behavior would be to ignore any nans within the filter window and return an output value using the remaining valid input values. Instead, nan is returned whenever a single nan is encountered within the filter window.
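One common workaround for that behaviour (not from the original post, just a sketch) is to filter a copy with the NaNs zeroed, filter the validity mask, and divide, so that invalid pixels are ignored rather than propagated:

import numpy as np
import scipy.ndimage

def nan_gaussian_filter(a, sigma):
    # filtered data with NaNs set to 0, normalized by the filtered valid mask
    valid = ~np.isnan(a)
    num = scipy.ndimage.gaussian_filter(np.where(valid, a, 0.0), sigma)
    den = scipy.ndimage.gaussian_filter(valid.astype(float), sigma)
    with np.errstate(invalid="ignore"):
        out = num / den
    out[den == 0] = np.nan
    return out

a = np.array([[1.0, 2.0, np.nan], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
print(nan_gaussian_filter(a, sigma=1))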
My current issue is with scipy.ndimage.interpolation.map_coordinates. I need to interpolate values for non-integer indices over a masked array containing "holes":
#Arrays containing pixel coordinates
pX = array([ 521.61974178, 520.18007917, 531.9847472 , ..., 382.70468842, 382.04165718, 381.38110078])
pY = array([ 1551.08334089, 1540.31571092, 1626.53056875, ..., 2556.5575993 , 2562.73110509, 2568.90483958])
#Rounding to nearest int
b[numpy.around(pY).astype(int), numpy.around(pX).astype(int)]
masked_array(data = [194.732543945 187.755401611 192.118453979 ..., 308.895629883 311.699554443 306.658691406], mask = [False False False ..., False False False], fill_value = 0.0)
#Interpolating with map_coordinates, uses default b.fill_value, which will be considered valid during interpolation
scipy.ndimage.interpolation.map_coordinates(b, [pY, pX])
array([ 194.61734009, 187.88977051, 192.112854 , ..., 309.16894531, 312.19412231, 306.87319946], dtype=float32)
#Fill holes with nan. Returns all nans, even for regions with sufficient valid values
scipy.ndimage.interpolation.map_coordinates(b.filled(numpy.nan), [pY, pX])
array([ nan, nan, nan, ..., nan, nan, nan], dtype=float32)
An alternative approach would be to fill all holes up front with inpainting, but I would prefer to limit my analysis to valid values.
Hopefully this is all clear. Any other ideas for workarounds? Perhaps the ticket should be updated, or priority bumped? Thanks.
Measuring Distances in Space
Date: 06/02/2008 at 09:53:28
From: Mairead
Subject: How can we measure the distance from Earth to Mars accurately?
How can we measure the distance from Earth to Mars accurately? Space
is infinite--how can you measure something that is always moving?
Date: 06/05/2008 at 10:09:17
From: Doctor Ian
Subject: Re: How can we measure the distance from Earth to Mars accurately?
Hi Mairead,
It's a good question, and the answer is that we set up models for how
things move, and that allows us to use measurements to find things
like position and speed indirectly.
For example, suppose you're driving down the street, and a policeman a
block away uses his radar gun to determine that you're speeding. How
can he do that, from a distance?
The answer is: He can bounce a light wave off your car (radar is one
kind of light), and the frequency of the wave will change depending on
how fast you're going. If he sends out one frequency and gets another
back, from the amount of change he can calculate your speed. (Well,
HE probably can't, but the people who made the radar gun could, and
they included the necessary information in the device.)
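The relationship being used there (a standard result, added only for reference) is the Doppler shift of a wave reflected from a moving target,

$$\Delta f \approx \frac{2v}{c}\, f_0,$$

so the measured change in frequency gives the speed v once the emitted frequency f_0 and the wave speed c are known.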
Here's another example. Suppose you know that a certain tower is 200
feet tall, and you'd like to know how far away it is. You can measure
the angle that it makes above the ground (using a protractor, for
example), and then you can use trigonometry to relate the height, and
the angle, and the distance from you. Since you know two of those,
you can get the third. So you can tell how far away the tower is if
you already know the height; or you can tell how tall the tower is if
you already know your distance from it.
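A worked instance of that tower example (numbers chosen just for illustration): if the 200-foot tower appears 5 degrees above the ground, then

$$d = \frac{h}{\tan\theta} = \frac{200\ \text{ft}}{\tan 5^\circ} \approx 2286\ \text{ft}.$$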
Can you see how we might be able to use this kind of reasoning to
figure out how far Mars is from Earth, even though it's far away and
moving quickly and we can't make any direct measurements? (What if we
know about how big it is, and about what angle it subtends in the sky?
Then it's a lot like the tower example, isn't it?)
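For reference (a standard approximation, not part of the original reply): when the angle is small, the same idea reduces to

$$\text{distance} \approx \frac{\text{true diameter}}{\text{angular size (in radians)}}.$$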
When you're dealing with bigger distances and speeds, the math gets a
little more complicated, and it requires a lot more imagination, but
the ideas are the same. Take a look at this to see what I mean:
Size of the Universe
In the case of something like figuring out where Mars is, what happens
in practice is this: We have a theory about what happens when two
bodies with known masses orbit each other. So we can say: If the sun
has such-and-such a mass, and the Earth has such-and-such a mass, then
the Earth should move around the sun on this curve, described by this
equation. And we do the same thing for Mars. Then we pick a time,
and we can calculate where Earth is, and where Mars is, and to get the
difference we subtract.
Now, how do we know that these equations are right? Well, they're
never perfect, but we're always checking them using a process called
"least squares estimation". That is, we may calculate that on some
particular day, at some particular time, Mars should appear in the sky
next to a particular star. We go out and look, and it's close, but
not exactly where we thought it would be. So we look for ways to
change the numbers in our model (how much mass everything has, the
sizes of the orbits, the locations of the observatories, and lots of
other stuff) and then keep running the changed model until we get the
smallest possible difference between what we predict and what we see.
Then we have a better model, and we can use that to get a new distance
from Earth to Mars... or from anything to anything else.
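In symbols (added for reference, not part of the original answer), "least squares" means choosing the model numbers $\theta$ that minimize the total squared mismatch between what is observed and what the model predicts:

$$\hat\theta = \arg\min_{\theta} \sum_i \left(y_i^{\text{observed}} - y_i^{\text{predicted}}(\theta)\right)^2.$$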
This is how we can do things like predict when an eclipse of the moon
will happen, even though it's years away; or determine the best time
to launch a rocket so it can get to Pluto with the least amount of
fuel; or predict when a comet is going to collide with Jupiter, so we
can make sure we get pictures of it.
It's also a great illustration of why scientists and engineers like
algebra and calculus so much. Once you get some good equations that
describe something accurately, you can basically use it to predict the
future, whether that's where some planet is going to be at some
particular time, or how much weight it will take to make a bridge
collapse. It's not QUITE as good as time travel, but it's probably
the closest thing we have to it.
Does this help? If there's anything I said there that you don't
understand, let me know and I'd be happy to talk with you more about it.
- Doctor Ian, The Math Forum
Date: 06/05/2008 at 11:47:01
From: Mairead
Subject: Thank you (How can we measure the distance from Earth to Mars accurately?)
Wow, I never really expected an answer and that's a really good
answer. I liked the least squares estimation explanation. I never
heard about that before. It sounds like a lot of work though to
figure out the answer. I really appreciate you taking the time to
explain this to me.
SCaVis can be used to evaluate, plot, and simplify functions. There are several approaches to working with parametric or non-parametric functions using the SCaVis Java API:
• Use the 2D/3D function plotter from the SCaVis IDE (see the “Tools” menu)
• Simplify and plot functions using either Octave or Matlab syntax. See the section Symbolic calculation for details. Plotting functions in this approach is rather simplistic and there is no good integration with the Java API and data structures
• Use the full-blown SCaVis Java libraries to create parametric, non-parametric, and special functions as provided by the Java API. One can also integrate, differentiate and simplify such functions. In this case one can use the Java language, or any Java scripting language (such as Jython or BeanShell). There is complete integration with the data structures API, and functions can be plotted in 2D/3D using several powerful interactive canvases.
In this section we will discuss the third option, as it is the most flexible for integration with Java libraries. We will start with simple analytical functions of one variable (“x”) and then consider more complicated cases with several variables (“y”, “z”). Finally, we will show how to make functions of arbitrary complexity with any number of variables.
Functions with one variable
Functions can be defined using either strings or programically. First we will discuss how to show functions using strings and then we will show other, more flexible methods.
We will start this tutorial with a simple example of how to create and display a function with one and only one variable (“x”). First, let us create an object representing a function:
Such functions are represented by the Java class F1D (1D means one-dimensional function).
>>> from jhplot import *
>>> f1 = F1D("2*exp(-x*x/50)+sin(pi*x)/x")
Here the input function is given by java.lang.String.
One can evaluate this function at a certain point, say at 10, as:
>>> print f1.eval(10)
To draw the function, one should specify limits during the function initialization (or later, using the methods of the Java object F1D). Then one should build a canvas and use the method “draw” to plot the function. In the example below, we define the limits for this function (1, 10) and then we plot it:
# Function | P | 1.7 | S.Chekanov | Several 1D functions on 2 pads
from jhplot import *
f1 = F1D("2*exp(-x*x/50)+sin(pi*x)/x", 1.0, 10.0)
c1 = HPlot("Canvas")
c1.visible()
c1.setAutoRange()
c1.draw(f1)
Here we drop the Python prompt “>>>” since we assume that this code snippet is executed as a file. Insert these lines into the SCaVis editor and hit the key [F8].
You will see a pop-up window with the output as shown below:
To learn about the method of SCaVis or any Jython or Java objects (like f1 and c1 in the above example):
• Hit <Ctrl>+<Space> after the dot when using the Jython shell prompt (bottom part of the SCaVis IDE).
• When using the SCaVis editor, use the key [F4] (after the dot) which brings up a table with all available Java methods.
• Use Eclipse or NetBean IDE when developing the code using Java. They have their own help system.
• Read the Java API of these classes.
Information about the syntax
Here is a short description about how to define functions. The following table shows the main mathematical operations:
() parenthesis
+ plus
- minus
* times
/ divide
^ raise to a power
** raise to a power
String Definition
abs absolute value
log(x) Natural Logarithm
exp(x) Exp
ceil nearest upper integer
floor nearest lower integer
cbrt cubic root
sin(x) Sine
cos(x) Cosine
tan(x) Tangent
asin(x) ArcSine
acos(x) ArcCosine
atan(x) ArcTangent
sinh(x) Hyperbolic Sine
cosh(x) Hyperbolic Cosine
tanh(x) Hyperbolic Tangent
One can also use some predefined constants, such as pi or Pi (the pi value, 3.14..). It is possible to use scientific notation for numbers:
Snippet from Wikipedia: Scientific notation
Scientific notation (commonly referred to as "standard form" or "standard index form") is a way of writing numbers that are too big or too small to be conveniently written in decimal form.
The number is split into a significand / mantissa (y) and exponent (x) of the form 'yEx' which is evaluated as 'y * 10^x'.
Let us give the examples:
from jhplot import *
f1=F1D("1+x+(x^2)-(x^3)") # correct answer -1
print f1.eval(2)
f1=F1D("1+x+(x*x)-(x*x*x)") # correct answer -1
print f1.eval(2)
f1=F1D("1+x+x^2-x^3") # correct answer -1
print f1.eval(2)
f1=F1D("1+x+x**2-x**3") # correct answer -1
print f1.eval(2)
Non-parametric functions
The most flexible way to draw functions is to use coding with objects and to call third-party Java libraries directly, instead of using strings with function definitions. This topic will be discussed in the Section Non-parametric functions.
Symbolic manipulations
Symbolic manipulation with functions will be discussed in more detail in the Section symbolic. Still, one should note that symbolic manipulations can also be done with the F1D functions using Java (or Jython) coding, without using any other specialized language (such as Matlab or Octave). One can simplify and expand F1D functions. Functions can also be rewritten in terms of elementary functions (log, sqrt, exp). Finally, one can perform some elementary substitutions before attempting to plot an F1D function.
Integration and differentiation
Functions can be numerically integrated. The program supports 5 methods of integration, which vary in evaluation time and accuracy. Below we will integrate the function “cos(x*x)+sin(1/x)*x^2”.
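A minimal sketch of such an integration, using only the F1D constructor and the integral(points, min, max) call that appears later in this section (the five named methods in the output below presumably use their own dedicated calls, not shown here):

from jhplot import F1D
f1 = F1D("cos(x*x)+sin(1/x)*x^2", 10, 100)
print f1.integral(5000, 10, 100)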
This integrates the function between 10 and 100 using 5000 integration points (the larger the number, the more accurate the result). The full example also benchmarks the five integration methods, which gives you an idea of the accuracy and speed of the calculations:
trapezium = 4949.64217622 time (ms)= 31.654937
gauss4 = 4949.64028115 time (ms)= 43.84014
gauss8 = 4949.64028111 time (ms)= 65.27855
richardson = 4949.64028393 time (ms)= 56.030938
simpson = 4949.64014049 time (ms)= 17.634015
Displaying functions
Here is a more detailed example showing how to plot several functions using different colors:
from java.awt import Color
from jhplot import *

# first function
f1 = F1D("2*exp(-x*x/50)+sin(pi*x)/x", 1.0, 10.0)
f1.setPenDash(4)

# second function
f2 = F1D("exp(-x*x/50)+pi*x", 1.0, 10.0)
f2.setColor(Color.green)
f2.setPenWidth(1)

# build a canvas with X and Y titles
c1 = HPlot("Canvas")
c1.setGTitle("2 functions", Color.red)
c1.setNameX("X")
c1.setNameY("Y")
c1.visible()
c1.setAutoRange()

c1.draw(f1)
c1.draw(f2)
Note that we have imported the Java class java.awt.Color. The output of this script is shown here
You can also plot objects on different pads as shown in the Section graphics.
Exporting functions
As with any Java object, F1D functions can be saved into files. Read input_output for a detailed discussion. In addition, one can convert a function into MathML or Java formats, or just display it as a table of values after the function has been evaluated.
>>> from jhplot import F1D
>>> f=F1D("10*sin(x)")
>>> print f.toMathML() # convert to MathML
>>> print f.toJava() # convert to Java code
>>> f.toTable() # show a table with X-Y values (the function should be plotted or evaluated before)
2D Functions
Functions in 2 dimensions can be built analogously using the Java class jhplot.F2D. The example below shows how to construct and evaluate a 2D function:
>>> from jhplot import *
>>> f1 = F2D("2*exp(-x*y/50)+sin(pi*x)/y")
>>> print f1.eval(10,20)
The code prints the answer: approximately 0.0366, i.e. 2*exp(-10*20/50); the sine term is essentially zero at x=10.
Such functions can be displayed as will be shown later. Also, look at the Sections Graphics and 3D Graphics
3D Functions
Functions in 3 dimensions can be built analogously using the Java class F3D. The example below shows how to construct and evaluate a 3D function:
>>> from jhplot import *
>>> f1 = F3D("2*exp(-x*y/50)+sin(pi*x)/z")
>>> print f1.eval(10,20,30)
The code prints the answer: 0.0366312777775.
Converting functions to histograms
Histograms can be created from 1D and 2D functions, with an arbitrary number of bins. This can often be used to show selected regions of a function in a different color.
Consider the example in which a function is used to create a histogram with fine bins. We use this histogram to highlight a range of our original function.
from java.awt import Color
from jhplot import *

c1 = HPlot("Canvas")
c1.setAutoRange()
c1.setNameX("X")
c1.setNameY("Y")
c1.visible()

f1 = F1D("2*exp(-x*x/50)+sin(pi*x)/x", -2.0, 5.0)
c1.draw(f1)

h = f1.getH1D("Histogram", 500, 1, 5)  # histogram with 500 bins between 1 and 5
h.setColor(Color.red)
h.setFill(1)
h.setFillColor(Color.red)
c1.draw(h)
Histograms can have an arbitrary number of bins, but if the bin number is close to 500 (a typical evaluation step for functions), the difference between the function and the histogram will be almost
impossible to see.
Plotting in 3D
F2D functions can be shown using 3D canvases. Two functions can be overlaid and shown on a single plot.
Here is a small example of showing 2 functions in 3D:
from jhplot import *
c1 = HPlot3D("Canvas")
c1.visible()
c1.setNameX("X")
c1.setNameY("Y")
f1 = F2D("2*exp(-x*y/20)+10*sin(pi*x)/y", 1.0, 10.0, 1.0, 10.0)
f2 = F2D("4*x*y", 1.0, 10.0, 1.0, 10.0)
c1.draw(f1, f2)
The output of this code is a 3D plot showing the two surfaces overlaid.
One can also overlay several different objects, such as functions and histograms shown together in 3D.
More examples are given in Sections Graphics and 3D Graphics
Multidimensional functions
So far we have learned how to build functions in 1D (F1D class), 2D (F2D class) and 3D (F3D class). In addition to these “fixed dimension” classes, there is an FND class which can be used to build a
function with an arbitrary number of parameters and variables. Building such a function is easy and similar to the styles above, only now one should be more explicit about the names of the variables. As an example,
below we make a function with 4 variables, called xx, yy, bb, kk:
from jhplot import *
print ans
Such functions can be evaluated at any variable values and plotted.
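The construction step of the snippet above is missing from this copy of the manual. Purely as an illustration — the FND constructor and evaluation call below are assumptions about the API, not taken from the text — the idea looks like this:

from jhplot import *

# hypothetical usage: assumed signature FND(title, expression with named variables)
f = FND("test", "xx^2+yy*bb-kk")
# assumed evaluation call: variable values supplied as a "name=value" list
ans = f.eval("xx=1,yy=2,bb=3,kk=4")
print ans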
Syntax overview
Numeric integration of functions is discussed in the section Integration. Symbolic integration is discussed in the Symbolic calculation section.
Let us give a small example showing how to integrate a function numerically:
from jhplot import F1D
f1 = F1D("10*sin(x)")   # the function from the earlier example; any F1D will do
print f1.integral(10000,1,10)
More examples are given in the section Integration
Expression Builder
Multidimensional functions can also be built using the package jhplot.math.exp4j.
In the example below we build a function of 4 variables x,y,z,u and evaluate it:
from de.congrace.exp4j import *
calc = ExpressionBuilder("f(x,y,z,u)=x * y *cos()- 2+z-u^2").build()
print calc.calculate()
The same can be done in a somewhat different way:
from de.congrace.exp4j import *
calc = ExpressionBuilder("x * y *cos(x)- 2+z-u^2")
print calc.calculate()
You can evaluate the function for arrays or even make an X-Y array to draw the function. For example, let us evaluate this function for an array:
print calc.eval("x",[1,2,3,4,5,6,7,9])
The output will be shown as :
array('d', [-384.1705803038421, -364.6281029269727, -392.532799516408, -461.54419962463425, -496.89242746631385, -434.5298597838711, -309.02187617936954, -326.81867265648384])
Next, let us draw this function as symbols with some step.
from jhplot import *
from de.congrace.exp4j import *
c1 = HPlot("Canvas",600,400)
calc = ExpressionBuilder("x * y*sin(x) - 2+z-u^2")
This will draw the function as symbols:
You can build an arbitrary function using Python and plot it, mixing Java and Jython libraries in one code. You can use arbitrarily complex expressions and “if” statements. In the example below we
make a custom function using a Jython class and build an “ExpressionBuilder” object. Then we evaluate this function for an array of “X” values and plot it.
Using special functions
You can integrate special functions from third-party Java libraries into your code and show them in 2D and 3D.
Parametric functions
Information for advanced users
|
Basin of attraction
From Scholarpedia
Roughly speaking, an attractor of a dynamical system is a subset of the state space to which orbits originating from typical initial conditions tend as time increases. It is very common for dynamical
systems to have more than one attractor. For each such attractor, its basin of attraction is the set of initial conditions leading to long-time behavior that approaches that attractor. Thus the
qualitative behavior of the long-time motion of a given system can be fundamentally different depending on which basin of attraction the initial condition lies in (e.g., attractors can correspond to
periodic, quasiperiodic or chaotic behaviors of different types). Regarding a basin of attraction as a region in the state space, it has been found that the basic topological structure of such
regions can vary greatly from system to system. In what follows we give examples and discuss several qualitatively different kinds of basins of attraction and their practical implications.
A simple example is that of a point particle moving in a two-well potential with friction, as in Figure 1(a). Due to the friction, all initial conditions, except those at \( x=dx/dt=0 \) or on its
stable manifold, eventually come to rest at either \( x=x_0 \) or \( x=-x_0 \ ,\) which are the two attractors of the system. A point initially placed on the unstable equilibrium point, \( x=0, \)
will stay there forever; this state has a one-dimensional stable manifold. Figure 1(b) shows the basins of attraction of the two stable equilibrium points, \( x=\pm x_0, \) where the crosshatched
region is the basin for the attractor at \( x=x_0 \) and the blank region is the basin for the attractor at \( x=-x_0 \ .\) The boundary separating these two basins is the stable manifold of the
unstable equilibrium \( x=0.\)
Fractal basin boundaries
In the above example, the basin boundary was a smooth curve. However, other possibilities exist. An example of this occurs for the map \[ x_{n+1}=(3x_n)\mod 1\ ,\] \[ y_{n+1}=1.5 y_n+\cos 2\pi x_n\ .\]
For almost any initial condition (except for those precisely on the boundary between the basins of attraction), \( \lim _{n\rightarrow \infty} y_n \) is either \( y=+\infty \) or \( y=-\infty \ ,\)
which we may regard as the two attractors of the system. Figure 2 shows the basin structure for this map, with the basin for the \( y=-\infty \) attractor black and the basin of the \( y=+\infty \)
attractor blank. In contrast to the previous example, the basin boundary is no longer a smooth curve. In fact, it is a fractal curve with a box-counting dimension 1.62.... We emphasize that, although
fractal, this basin boundary is still a simple curve (it can be written as a continuous parametric functional relationship \( x=x(s), y=y(s) \) for \( 1>s>0 \) such that \( (x(s_1), y(s_1))\neq (x
(s_2), y(s_2)) \) if \( s_1\neq s_2\ .\))
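As an illustration (not part of the original article), the basin structure of this map can be approximated numerically by iterating a grid of initial conditions and recording the sign toward which \( y_n \) diverges; the grid resolution, iteration count and cutoff below are arbitrary choices.

import numpy as np

def basin_of_attraction(nx=400, ny=400, n_iter=50, cutoff=100.0):
    """Label a grid of initial conditions: +1 if y_n -> +infinity, -1 if y_n -> -infinity."""
    x0 = np.linspace(0.0, 1.0, nx)
    y0 = np.linspace(-2.0, 2.0, ny)
    x, y = np.meshgrid(x0, y0)
    label = np.zeros_like(y)
    for _ in range(n_iter):
        # the map: x_{n+1} = (3 x_n) mod 1,  y_{n+1} = 1.5 y_n + cos(2 pi x_n)
        x, y = (3.0 * x) % 1.0, 1.5 * y + np.cos(2.0 * np.pi * x)
        # once |y| exceeds the cutoff it keeps growing, so the sign can be recorded
        label = np.where((label == 0) & (y > cutoff), 1, label)
        label = np.where((label == 0) & (y < -cutoff), -1, label)
    return label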
Another example of a system with a fractal basin boundary is the forced damped pendulum equation, \[d^2\theta /dt^2+0.1 d\theta /dt+\sin \theta =2.1 \cos t\ .\] For these parameters, there are two
attractors which are both periodic orbits (Grebogi, Ott and Yorke, 1987). Figure 3 shows the basins of attraction of these two attractors with initial \(\theta\) values plotted horizontally and
initial values of \(d\theta /dt\) plotted vertically. The figure was made by initializing many initial conditions on a fine rectangular grid. Each initial condition was then integrated forward to see
which attractor its orbit approached. If the orbit approached a particular one of the two attractors, a black dot was plotted on the grid. If it approached the other attractor, no dot was plotted.
The dots are dense enough that they fill in a solid black region except near the basin boundary. The speckled appearance of much of this figure is a consequence of the intricate, fine-scaled structure
of the basin boundary. In this case the basin boundary is again a fractal set (its box-counting dimension is about 1.8), but its topology is more complicated than that of the basin boundary of Figure
2 in that the Figure 3 basin boundary is not a simple curve. In both of the above examples in which fractal basin boundaries occur, the fractality is a result of chaotic motion (see transient chaos)
of orbits on the boundary, and this is generally the case for fractal basin boundaries (McDonald et al., 1985).
Basin Boundary Metamorphoses
We have seen so far that there can be basin boundaries of qualitatively different types. As in the case of attractors, bifurcations can occur in which basin boundaries undergo qualitative changes as
a system parameter passes through a critical bifurcation value. For example, for a system parameter \( p<p_c \ ,\) the basin boundary might be a simple smooth curve, while for \( p>p_c \) it might be
fractal. Such basin boundary bifurcations have been called metamorphoses (Grebogi, et al., 1987).
The Uncertainty Exponent
Fractal basin boundaries, like those illustrated above, are extremely common and have potentially important practical consequences. In particular, they may make it more difficult to identify the
attractor corresponding to a given initial condition, if that initial condition has some uncertainty. This aspect is already implied by the speckled appearance of Figure 3. A quantitative measure of
this is provided by the uncertainty exponent (McDonald et al., 1985). For definiteness, suppose we randomly choose an initial condition with uniform probability density in the area of initial
condition space corresponding to the plot in Figure 3. Then, with probability one, that initial condition will lie in one of the basins of the two attractors [the basin boundary has zero Lebesgue
measure (i.e., 'zero area') and so there is zero probability that a random initial condition is on the boundary]. Now assume that we are also told that the initial condition has some given
uncertainty, \( \epsilon \ ,\) and, for the sake of illustration, assume that this uncertainty can be represented by saying that the real initial condition lies within a circle of radius \( \epsilon
\) centered at the coordinates \( (x_0,y_0) \) that were randomly chosen. We ask what is the probability that the \( (x_0,y_0) \) could lie in a basin that is different from that of the true initial
condition, i.e., what is the probability, \( \rho (\epsilon ) \ ,\) that the uncertainty \( \epsilon \) could cause us to make a mistake in a determination of the attractor that the orbit goes to.
Geometrically, this is the same as asking what fraction of the area of Figure 3 is within a distance \( \epsilon \) of the basin boundary. This fraction scales as \[\rho (\epsilon )\sim \epsilon ^\
alpha\ ,\] where \( \alpha \) is the uncertainty exponent (McDonald et al., 1985) and is given by \( \alpha =D-D_0 \) where \( D \) is the dimension of the initial condition space (\( D=2 \) for
Figure 3) and \( D_0 \) is the box-counting dimension of the basin boundary. For the example of Figure 3, since \( D_0\cong 1.8 \ ,\) we have \( \alpha \cong 0.2 \ .\) For small \( \alpha \) it
becomes very difficult to improve predictive capacity (i.e., to predict the attractor from the initial condition) by reducing the uncertainty. For example, if \( \alpha =0.2 \ ,\) to reduce \( \rho
(\epsilon )\) by a factor of 10, the uncertainty \( \epsilon \) would have to be reduced by a factor of \( 10^5 \ .\) Thus, fractal basin boundaries (analogous to the butterfly effect of chaotic
attractors) pose a barrier to prediction, and this barrier is related to the presence of chaos.
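A rough numerical recipe for estimating the uncertainty exponent of a two-attractor system (an illustration only, not the procedure used in the cited papers): sample random initial conditions, perturb each one by \( \epsilon \), count the fraction whose perturbed copy is classified into the other basin, and fit the slope of \( \log \rho \) against \( \log \epsilon \). Here classify is a placeholder for whatever attractor-classification routine is available, and the \( \epsilon \) values must be large enough that some mismatches actually occur.

import numpy as np

def uncertainty_exponent(classify, xlim, ylim, epsilons, n_samples=20000, seed=0):
    """Estimate alpha in rho(eps) ~ eps**alpha; classify(x, y) must return an
    array of attractor labels for arrays of initial conditions x, y."""
    rng = np.random.default_rng(seed)
    rho = []
    for eps in epsilons:
        x = rng.uniform(xlim[0], xlim[1], n_samples)
        y = rng.uniform(ylim[0], ylim[1], n_samples)
        dx = rng.uniform(-eps, eps, n_samples)
        dy = rng.uniform(-eps, eps, n_samples)
        # fraction of points whose eps-perturbed copy lands in a different basin
        rho.append(np.mean(classify(x, y) != classify(x + dx, y + dy)))
    # slope of the log-log fit is the uncertainty exponent alpha
    return np.polyfit(np.log(epsilons), np.log(rho), 1)[0]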
Riddled Basins of Attraction
We now discuss a type of basin topology that may occur in certain special systems; namely, systems that, through a symmetry or some other constraint, have a smooth invariant manifold. That is, there
exists a smooth surface or hypersurface in the phase space, such that any initial condition in the surface generates an orbit that remains in the surface. These systems can have a particularly
bizarre type of basin structure called a riddled basin of attraction (Alexander et al., 1992; Ott et al., 1994). In order to discuss what this means, we first have to clearly state what we mean by an
"attractor". For the purposes of this discussion, we use the definition of Milnor (1985): a set in state space is an attractor if it is the limit set of orbits originating from a set of initial
conditions of positive Lebesgue measure. That is, if we randomly choose an initial condition with uniform probability density in a suitable sphere of initial condition space, there is a non-zero
probability that the orbit from the chosen initial condition goes to the attractor. This definition differs from another common definition of an attractor which requires that there exists some
neighborhood of an attractor such that all initial conditions in this neighborhood generate orbits that limit on the attractor. As we shall see, an "attractor" with a riddled basin conforms with the
first definition, but not the second definition. The failure to satisfy the second definition is because there are points arbitrarily close to an attractor with a riddled basin, such that these
points generate orbits that go to another attractor (hence the neighborhood mentioned above does not exist.)
We are now ready to say what we mean by a riddled basin. Suppose our system has two attractors which we denote \(A\) and \(C\) with basins \( \hat A \) and \( \hat C \ .\) We say that the basin \( \
hat A \) is riddled, if, for every point \(p\) in \( \hat A \ ,\) an \( \epsilon \)-radius ball, \( B_\epsilon (p)\) centered at \( p \) contains a positive Lebesgue measure of points in \( \hat C\)
for any \( \epsilon >0\ .\) This circumstance has the following surprising implication. Say we initialize a state at \(p\) and find that the resulting orbit goes to \(\hat A\ .\) Now say that we
attempt to repeat this experiment. If there is any error in our resetting of the initial condition, we cannot be sure that the orbit will go to \(A\) (rather than \(C\)), and this is the case no
matter how small our error is. Put another way, even though the basin \(\hat A\) has positive Lebesgue measure (non-zero volume), the set \(\hat A\) and its boundary set are the same. Thus the
existence of riddled basins calls into question the repeatability of experiments in such situations. Figure 4 illustrates the situation we have been discussing. As shown in Figure 4, the attractor
with a riddled basin lies on a smooth invariant surface (or manifold) \(S\ ,\) and this is general for attractors with riddled basins. Typical systems do not admit smooth invariant manifolds, and
this is why riddled basins (fortunately?) do not occur in generic cases. Examples, where a dynamical system has a smooth invariant surface are a system with reflection symmetry of some coordinate \(x
\) about \(x=0\ ,\) in which case \(x=0\) would be an invariant manifold, and a predator-prey model in population dynamics, in which case one of the populations being zero (extinction) is an
invariant manifold of the model.
• Alexander, J., Yorke, J.A., You, Z., and Kan, I. Riddled Basins (1992) Int. J. Bif. Chaos 2:795.
• Grebogi, C., Ott, E. and Yorke, J.A. (1987) Basin Boundary Metamorphoses: Changes in Accessible Boundary Orbits, Physica D 24:243.
• McDonald, S.W., Grebogi, C., Ott, E., and Yorke, J.A. (1985) Fractal Basin Boundaries, Physica D 17:125.
• Milnor, J. (1985) On the Concept of an Attractor, Comm. Math. Phys. 99:177.
• Ott, E., Chapter 5 in Chaos in Dynamical Systems, Cambridge University Press, second edition 2003.
• Ott, E., Sommerer, J.C., Alexander, J.C., Kan, I., and Yorke, J.A. (1994) The Transition to Chaotic Attractors with Riddled Basins, Physica D 76:384.
See Also
Attractor Dimension, Bubbling Transition, Chaos, Crises, Controlling Chaos, Dynamical Systems, Invariant Manifolds, Periodic Orbit, Stability, Transient Chaos, Unstable Periodic Orbits
|
MATH M120 ALL Brief Survey of Calculus II
Mathematics | Brief Survey of Calculus II
M120 | ALL | All
P: M119. A continuation of M119 covering topics in elementary
differential equations, calculus of functions of several variables and
infinite series. Intended for non-physical science students. Credit
not given for both M212 and M120. I Sem., II Sem., SS.
|
What's all this E8 stuff about then? Part 1.
With all the buzz about E8 recently I want to give an introduction to it that's pitched slightly higher than pop science books, but that doesn't assume more than some basic knowledge of geometry and
vectors and a vague idea of what calculus is. There's a lot of ground to cover and so I can't possibly tell the truth exactly as it is without some simplification. But I'm going to try to be guilty
of sins of omission rather than telling outright lies.
Discrete symmetry groups.
Consider a perfect cube. If you turn it upside-down it still looks just like a cube in exactly the same configuration. Rotate it through 90 degrees about any axis going through the centre of a face
and it looks the same. Rotate it through 120 degrees around a diagonal axis and it also looks the same. In fact there are 24 different rotations, including simply doing nothing, that you can perform
on a cube, that leave it looking how you started. (Exercise: show it's 24.)
These 24 operations form the symmetries of the cube. Note how you can take any pair of symmetries and combine them to make another symmetry simply by applying one and then another. For example, if A
represents rotation by 90° about the x-axis and B represents a rotation by 180° about the same axis then we can use shorthand to write AB to mean a rotation by 270°. The convention is that we read
from right to left and so AB means "do B first, then A". If we define C=AB think about what AC means. It's a 270°+90° rotation, ie. a 360° rotation which does nothing. The convention is to call that
1. So C is the opposite or inverse of A because AC=CA=1. When you have a bunch of operations that you can combine like this, and every operation has an inverse, then the operations form what
mathematicians call a group. The 24 symmetries of the cube form a group of size 24. Although you can think of a group as a bunch of operations on something, you can also think of a group as a thing in its own right,
independent of the thing it acts on. For example, we can write down some rules about A, B and C above: AC=CA=1, C=AB, AAB=1, BB=1, C1=1C=C and so on. In principle we can give a single-letter
name to each of the 24 symmetries and write out a complete set of rules about how to combine them. At that point we don't need the cube any more, we can just consider our 24 letters, with their
rules, to be an object of interest in its own right.
We can think of the group as acting on the cube, but if we specified some suitable rules, we can make this group act on other things too. For example, we can use the coordinates (x,y,z) to specify
any point in space. We used B to mean a 180 degree rotation around the x-axis. If we apply that rotation to any point in space (x,y,z) it gets mapped to (x,-y,-z). Similarly A maps the point (x,y,z)
to (x,z,-y). So the 24-element cube group also acts on 3D space. This is an example of a
group representation
, and I'll get onto that later.
(So far I've been talking about rotations we can perform on the cube. But imagine you could also reflect the cube - as if looking at the cube in a mirror and then pulling the reflection back out into
the real world. You can't do this with a real cube but you could with an imaginary one. How many symmetries do you think you get now?)
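To make this concrete, here is a small sketch (my addition, not from the original post) that builds the full rotation group of the cube by repeatedly composing two generators represented as 3x3 integer matrices. Note that the post's A and B are both rotations about the x-axis, so on their own they only generate 4 rotations; the sketch therefore uses a second generator about the z-axis to reach all 24.

def mat_mult(a, b):
    """3x3 integer matrix product; a*b means 'do b first, then a', as in the post."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

RX = ((1, 0, 0), (0, 0, 1), (0, -1, 0))   # 90 degrees about x: (x,y,z) -> (x,z,-y), the post's A
RZ = ((0, 1, 0), (-1, 0, 0), (0, 0, 1))   # 90 degrees about z: (x,y,z) -> (y,-x,z), a second generator

identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
group = {identity}
frontier = [identity]
while frontier:
    g = frontier.pop()
    for gen in (RX, RZ):
        h = mat_mult(gen, g)
        if h not in group:
            group.add(h)
            frontier.append(h)

print(len(group))   # 24 -- the rotation group of the cube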
SO(3) and Lie groups
With the cube example above, all of our rotations had to be multiples of certain angles like 90° and 120°. Any kind of rotation through an angle like 37° leaves the cube in a weird orientation that
doesn't look like the original. So the symmetries of the cube form a discrete group because there's only a discrete set of possibilities. But there are also continuous groups where you can smoothly
slide from one symmetry to another and everything in-between is also a symmetry. For example consider a perfect 3D sphere at the origin. You can rotate it through any angle and it still looks like
the same sphere. Any rotation about the origin is a symmetry of the sphere, and given two symmetries you can slide from one to the other. At every stage, the sphere will still look like the original
sphere. Like in the discrete case, this set of symmetries forms a group, because you can combine and invert operations. But there are now infinitely many operations. We can rotate around any axis by
any angle we choose and it's still a sphere. But even though there is an infinity of such rotations, it's still manageable. In fact, any rotation can be described by three parameters, for example the
three that aeronautical engineers might use called
pitch, roll and yaw
. Mathematicians call the group of 3D rotations SO(3). That 3 refers to the fact that we're talking about rotations of 3-dimensional space. But because it takes three parameters to describe a
rotation, SO(3) is itself 3-dimensional. This double appearance of the number 3 is a 'coincidence', as we'll see in the next section.
There's one more thing I need to add about continuous groups. Some are also differentiable. You don't need to worry in too much detail about what this means but it ultimately says that we can do
calculus with these groups. For example, we can ask questions not only about the orientation of a sphere, but also about the rate of change of its orientation. So SO(3) is a differentiable group. The
usual name for these is
Lie groups
, after the mathematician
Sophus Lie
Subgroups and SO(2)
SO(3) is the group of 3D rotations. But all of the 24 symmetries of the cube we considered are also rotations. So the cube group is actually a subset of SO(3). And as it's a subset of a group that is itself a
group, we call it a subgroup.
Now consider the set of all rotations we could perform on a 2D space. These also form a group. In fact, given an element of this group we can apply it to some text to rotate it in 2D. As above, use 1
to mean "no rotation", use A to mean "rotate 45° anticlockwise" and B to mean "rotate 45° clockwise". We can illustrate how these act on Hello:
If you've ever played with a drawing package, like the one I just used, this will all be very familiar. And it should also be obvious that 1=11111=AB=BA=AAAAAAAA=BBBBBBBB. (Note how I'm only talking
about the orientation of the word. If we were taking into account the position of the word on the page it'd be more complex. But there would still be a group describing it.)
Although the screen the text Hello is on is flat and two-dimensional, it can be considered to be embedded in the 3D space around it. And all of the 2D rotations can be considered to be 3D rotations
also, just 3D rotations that happen to be oriented along the axis pointing straight into the screen.
So 2D rotations can be considered to be particular 3D rotations, and SO(2) is a subgroup of SO(3).
Note that 2D rotations can be described by just a single number, the rotation angle. So SO(2) is 1-dimensional, even though it's defined through its effect on a 2D space. The two threes that appear
in connection with SO(3) above really are a coincidence. In 4-dimensions we have SO(4) which is actually a 6-dimensional group. In 4D you need 6 numbers like pitch, roll and yaw to define your
orientation. But I'm not asking you to try to visualise that now.
Rates of change and Lie algebras.
Imagine a sphere tumbling around. At any moment its orientation in space can be described by an element of SO(3). So we might naturally be led to ask the question "what is its rate of change of
orientation?". What would such an answer look like?
Let's review basic calculus first. Consider position instead of orientation. Suppose a vehicle is driving along a road. At time t it's at distance A and at time t+1s it's at distance B. Then its
average velocity over the 1s is (B-A)/1s. But that's an average over a period. What's its velocity exactly at time t? We could compute its distance, say C, at time t+0.0001s, and compute (C-A)/
0.0001s. But that's still an average over a short period of time. The tool for telling us about instantaneous rates of change is calculus, and here it tells us that if the motion is continuous we can
take a limit. So if at time t+d, its position is given by D, then the velocity is the limit, as d tends to zero, of (D-A)/d.
Back to the tumbling sphere. We could say "at time t it has orientation A and time t+1s it has orientation B". But as with velocity, that tells you what happened between t and t+1s, not what it was
doing at any instant. We haven't said what the rate of change is precisely at time t. Imagine we take some kind of limit of the above statement as we consider the change from time t to time t+d where
d becomes arbitrarily small. One thing we can say is that if the motion is continuous then the start and finish orientations, say A and D, will become arbitrarily close. The difference between them
will be an infinitesimally small rotation. You can describe an infinitesimally small rotation by an infinitesimally small vector: let the direction of the vector be the axis of rotation and its length
be the angle of rotation. Suppose this vector is V. Then the rate of change of orientation is V/d. This is commonly known as
angular velocity
and it's a vector. Note that it's not an infinitesimally small vector because the small V and small d will usually cancel to give a nice finite size angular velocity, just as the ordinary velocity
calculation above gives a finite answer. In fact, we can easily interpret the angular velocity vector. Its direction is the axis of rotation and its length is the rate of change of angle around that
axis (in degrees per second for example).
At this point you might say "why not represent any angle by a vector, not just an infinitesimal one". For example, you might think v=(180°,0,0) could be used to mean a rotation through 180° around
the x-axis. You can do that if you like, but this doesn't respect the additive properties of vectors. For example, it'd be useful if v+v=(360°,0,0) then represented "no rotation" because a 360°
rotation is no rotation at all. But "no rotation" should be represented by (0,0,0). So this isn't a very convenient way to talk about rotations. But rates of change of orientation are different. If
we define w=(180°/s,0,0) then w represents rotation around the x-axis at one revolution every two seconds. w+w=(360°/s,0,0) is also a perfectly good angular velocity, one that's twice as fast. Unlike
angles, angular velocities don't wrap around when you get to 360°. You can crank angular velocities up as high as you like. So vectors are a good way to represent rates of change of orientation, even
if they're not good at representing orientations.
Let's do this in 2D. It's much easier. All rotations are in the plane and so we don't need to specify an axis. We just need to specify a single number to define an orientation, and a rate of change
of orientation is just given by a single number also. That one number can be thought of as a 1-dimensional vector.
So what kind of thing is this rate of change object? In each case it can be thought of as a vector (in the case of SO(2) it's a 1-dimensional vector), but it's not an element of the group because an
angular velocity is not the same thing as an orientation. Mathematically it's called an element of a Lie algebra. The
Lie algebra
of a Lie group is the set of vectors that describe rates of change of elements of the group. I mentioned above that SO(4) is 6-dimensional. This means that the Lie algebra is made of 6-dimensional
vectors. You can think of the 6 components of these vectors as describing how the 6 generalised pitch, roll and yaw parameters vary over time.
There's a convention that we use lower case names for Lie algebras. So the Lie algebra of SO(3) is called so(3). That makes a sort of intuitive sense, so(3) describes really small rotations and SO(3)
describes full-sized ones.
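A small numerical illustration of this relationship (my addition, not part of the post): an angular-velocity vector, i.e. an element of so(3), can be written as a skew-symmetric matrix, and its matrix exponential is the finite rotation in SO(3) obtained by spinning about that vector for one unit of time.

import numpy as np
from scipy.linalg import expm

def skew(w):
    """Map an angular-velocity vector w = (wx, wy, wz) to a skew-symmetric matrix."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w = np.array([np.pi / 2, 0.0, 0.0])   # rotate about the x-axis at 90 degrees per unit time
R = expm(skew(w))                     # the corresponding element of SO(3)

print(np.round(R @ np.array([0.0, 1.0, 0.0]), 6))   # the y-axis is carried to the z-axis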
Closing words
That brings me close to the end of Part 1. But I can now say one very important thing. There are a number of things that mathematicians call E8, but one of them is a Lie algebra. And that's the thing
that Garrett Lisi is
talking about
PS I welcome feedback from anyone. I want to make this as understandable to as many people as possible.
33 comments:
Great intro. Do the members of the Lie algebra also form a group? I'm just thinking about the SO(2) example here, but it seems to me that the members of the Lie algebra are just like the members
of the Lie group, except we no longer do our addition modulo 360 degrees.
The members of a Lie algebra form a group in a trivial way. You can add and subtract the elements as they form vectors in a vector space. But this what is called a commutative or abelian group
where x+y is the same as y+x. Consider "full-size" rotations". For example let A by a 90 degree rotation about x and B a 90 degree rotation about y. With your hand you can easily show that AB is
not the same as BA. So infinitesimal rotations "commute" with each other and full size ones don't. However, there's a vestige of this non-commutativity still left in the Lie algebra. There's
another operation you can perform on the elements of the Lie algebra that I haven't mentioned: the "Lie bracket" operation. I probably won't get around to talking about this though. But very
roughly, it measures the amount by which Lie algebra elements fail to commute as you start growing them to be full-sized. This is all very vague but I hope it gives some intuition.
There's lots more here: http://en.wikipedia.org/wiki/Lie_algebra
Please continue this good work.
On the other hand, I am very curious if there is any mathematical relation between SU(3)xSU(2)xU(1) and E8.
Thanks for this. It was exactly at the right level for this almost-but-not-quite-math-major who took and liked Abstract, but doesn't remember much of it.
Challenging, but not so that I gave up.
typo at the end of the third paragraph of "Rates of change and Lie algebras" section:
Its length is the direction of rotation and its length is the rate of change of angle around that axis (in degrees per second for example).
This is commonly known as angular velocity and it's a vector.
Hey, no outright lies! It's a bivector, and you know it!
Seriously, it's a great article.
A couple of nitpicking points:
"Imagine a sphere tumbling around. At any moment its orientation in space can be described by an element of SO(3)."
That's true only after we've picked a 'standard' orientation. Then the orientation of the sphere at a particular moment is given by some SO(3) rotation from the fixed 'standard' one. But without
that fixed reference, there's no natural correspondence between orientations and SO(3) elements.
"So what kind of thing is this rate of change object? [...] Mathematically it's called a Lie algebra."
Don't you mean "an element of a Lie algebra"?
Fantastically clear! Well done :)
I'm really curious where all this is leading to, as I haven't heard about this E8 buzz before.
The cube rotations are not a dihedral group.
The third paragraph of the section "Rates of change and Lie algebras" ends with a sentence with a typo: "Its length is ... and its length is ..."
Thanks for the corrections, most of which I've applied.
Pseudonym, the bivector thing is one of my favourite nitpicks, but today I'm using the word 'vector' to mean 'an element of a vector space', so it doesn't apply. The word vector is a little
overloaded isn't it.
Thanks for writing this I am thoroughly enjoying it.
"If we apply that rotation to any point in space (x,y,z) it gets mapped to (x,-y,-z). Similary A maps the point (x,y,z) to (x,z,-y)."
Should it be (x,-y,z) at the end?
Thanks for a beautifully clear article. I'm looking forward to the next parts.
This has been posted at http://www.physicsforums.com/showthread.php?p=1510223#post1510223 Layman's explanation wanted.
I can see that you are doing a much better job than what I was attempting.
Time for the pros to take over.
You didn't correct the thing about the cube group being the dihedral group of order 24. It isn't. It's the symmetric group of order 24.
To see this, consider the action of the group on the four diagonals of the cube. Any of the 24 possible permutations of the diagonals can be effected by a single rotation of the cube. Thus the
group is S4.
Thank you for putting the time in to write this explanation! I have read a little bit of group theory and played around with a book/software package called "Exploring Abstract Algebra with
Mathematica," but did not get that far. Anyway, I am referring my friends to your blog for a basic introduction to E8. Most of them will probably get through the first few paragraphs and kind of
fade out, but that's OK... it's better than no understanding!
Is your browser acting up? I'm pretty sure I removed all occurrences of the word 'dihedral' as soon as you pointed out my error. I decided not to give this group any name (other than cube group)
because I don't really want to spend much time on finite groups. But thanks for pointing the issue.
So far I've only said roughly what a Lie group and Lie algebra are, and that E8 is an example of these things. But there are infinitely many Lie groups and Lie algebras. In part II, besides
talking more about Lie groups, I hope to zoom in a bit and single out E8 from all of the others and say something about what makes it special. In part III I'll then try to show why group theory
is so important to physics and how properties of Lie groups can be (or might be) interpreted as properties of the real world.
William, this is great! Nice job.
William? Who's William?
We used B to mean a 180 degree rotation around the x-axis. If we apply that rotation to any point in space (x,y,z) it gets mapped to (x,-y,-z). Similary A maps the point (x,y,z) to (x,z,-y).
I don't understand. Wouldn't 180 degrees (B) be (x,y,-z) or (x,y,+z), and wouldn't 90 degrees (A) be (x,-y,-z) or (x,+y,+z) or (x,-y,+z) or (x,+y,-z)?
So the cube is actually a subset of SO(3).
A bit unclear: you should probably say "the symmetry group of the cube" or something similar. The cube itself isn't a subset of SO(3)!
But generally a really nice article. Look forward to Part 2.
Since this post is still a lot about groups, I think I can advise to your readers this cool freeware : GroupExplorer
Unfortunately, I don't know any good software to help you draw your ADE diagrams for your next post :-)
Hi sigfpe,
This is an impressive post.
Is there any way to do this relatively easily in 3D rather than in 2D?
For example, some engineers appear to use 3D helical functions rather than 2D elliptical functions.
Oops, sorry Dan. I thought you were someone else. I'll blame it on my inbox trying to kill me.
When's part 2 coming out?
The next part comes out when I manage to come up with some kind of informal layman's description of what the weights of a representation are. I have some ideas. Probably Friday as I'll have the
day off.
Great post. Thoroughly understandable and fun. Seems this is a typo: "Its length is the direction of rotation and its length is the rate of change of angle around that axis"
Great job: my niece is going to use it, here in Italy, for a small work at school.
Looking forward for the second part.
I went back through some of your archives, so forgive the untimeliness of this post. Hopefully that won't be too hard, since most of the mathematics was timely a century ago ;-)
First, this article is great. It explains in comparably lay terms about groups and symmetry. (I always wondered what a Lie group was before this post, and now I at least have an idea of what
flavor they are).
My only confusion came about at the very end, where you began to talk about differentiability. (Calculus was always my weakness!) Correct me if I'm wrong, but when we talk about a group G being
lie, we're really saying that a function f:R->G is differentiable (in some manner or another)?
Also, took me a second to grok the idea at the beginning of when two transformations are different. In some hypothetical future revision of this blog post, it might make it a little easier to
explain that the equivalence between transformations is the same as equivalence of functions: that f = g iff for all points p, p transformed under f is the same as p transformed under g.
Dear Sigfpe,
As a non-mathematician it's pretty obvious to me that if you can use an angle to describe a 2D rotation - then you can use two angles to describe a 3D rotation - so I'm not sure your explanation
leading to 6 numbers needed to describe 4 dimensions is very convincing.
Otherwise many thanks for the article.
Dear Sigfpe,
Many thanks for a great article. However to a non-mathematician it's pretty obvious that if you can define a 2D rotation using an angle, then you can define a 3D rotation using two angles. So
your explanation leading to 6 numbers for 4 dimensional space is not very convincing. Have I missed something ?
2 angles certainly won't do for 3D rotations even if it might seem obvious to you. You can read about it (with animations) here.
In n dimensions you need n(n-1)/2 angles.
|
Proving that a variety is not (isomorphic to) a toric variety
Is there an algorithmic (or other) way to prove that a (projective) variety is not isomorphic to a toric variety?
I'd be happy with an algebraic answer (for affine or projective varieties), using the fact that toric ideals are binomial prime ideals. There one could use that the coordinate rings are characterized
as those admitting a fine grading by an affine semigroup, i.e. presented by a binomial prime ideal (Prop. 1.11 in Eisenbud/Sturmfels "Binomial ideals").
This question resulted from an example that I discussed with Mateusz Michalek. The example is: let $V$ be the Zariski closure of the image of the parameterization: $$(p_1,p_2,a_1,a_2,a_3,b_1,b_2,b_3)
\to \begin{pmatrix} p_1a_1a_2a_3+p_2b_1b_2b_3 \\ p_1a_1a_2b_3+p_2b_1b_2a_3 \\ p_1a_1b_2a_3+p_2b_1a_2b_3 \\ p_1a_1b_2b_3+p_2b_1a_2a_3 \\ p_1b_1a_2a_3+p_2a_1b_2b_3 \\ p_1b_1a_2b_3+p_2a_1b_2a_3 \\
p_1b_1b_2a_3+p_2a_1a_2b_3 \\ p_1b_1b_2b_3+p_2a_1a_2a_3 \\ \end{pmatrix}$$
Implicitization using Macaulay2 is quick and yields a complete intersection: $$\langle et-ry-qu+wo, wt-qy-ru+eo, we-qr-yu+to \rangle \subset k[q,w,e,r,t,y,u,o]$$
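As a quick sanity check in Python (my addition; the computation in the question was done in Macaulay2), one can substitute the parameterization into these three generators and confirm that they vanish identically:

from sympy import symbols, expand

p1, p2, a1, a2, a3, b1, b2, b3 = symbols('p1 p2 a1 a2 a3 b1 b2 b3')

# coordinates (q, w, e, r, t, y, u, o) in the order of the parameterization above
q = p1*a1*a2*a3 + p2*b1*b2*b3
w = p1*a1*a2*b3 + p2*b1*b2*a3
e = p1*a1*b2*a3 + p2*b1*a2*b3
r = p1*a1*b2*b3 + p2*b1*a2*a3
t = p1*b1*a2*a3 + p2*a1*b2*b3
y = p1*b1*a2*b3 + p2*a1*b2*a3
u = p1*b1*b2*a3 + p2*a1*a2*b3
o = p1*b1*b2*b3 + p2*a1*a2*a3

gens = [e*t - r*y - q*u + w*o,
        w*t - q*y - r*u + e*o,
        w*e - q*r - y*u + t*o]
print([expand(g) for g in gens])   # prints [0, 0, 0]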
How to prove that $V$ is not toric?
toric-varieties ac.commutative-algebra
|
Partial Derivative question.
September 17th 2009, 01:21 AM #1
Jul 2009
Partial Derivative question.
Find the partial derivative of z with respect to x if
$u = x^4y^3$
$y = 7+4x^4$
Would the answer be
$f_x= 24x^3y^3+16x^3$
or have i done it wrong?
Whole question please!
The total derivative of u with respect to x is the bottom row of the balloon diagram (images not reproduced here), bearing some resemblance to your version. As always, one part of the diagram is the
chain rule and another the product rule. Straight continuous lines differentiate downwards (integrate up) with respect to x, and the straight dashed line similarly but with respect to the dashed
balloon expression (which is the inner function of the composite and hence subject to the chain rule).
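Since the balloon diagrams are not reproduced here, a quick symbolic check of the total derivative being described (my addition, not part of the original thread):

from sympy import symbols, diff, expand

x = symbols('x')
y = 7 + 4*x**4
u = x**4 * y**3

print(expand(diff(u, x)))
# equals 4*x**3*y**3 + 48*x**7*y**2 with y = 7 + 4*x**4,
# i.e. 4*x**3*(7 + 4*x**4)**2*(16*x**4 + 7) after factoring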
Don't integrate - balloontegrate!
Balloon Calculus Forum
Balloon Calculus Drawing with LaTeX and Asymptote!
|
Sieve of Eratosthenes
The Sieve of Eratosthenes
To generate all prime numbers, i.e. primes, in a given range, the sieve of Eratosthenes is an old, but nevertheless still the most efficiently known algorithm. It works as follows: Put into an array
all natural numbers up to a given limit size. Set the first sieve prime = 2. Then cross out all multiples of the current sieve prime. Next, look for the next larger, not crossed-out number. It will
become the new sieve prime. Repeat this process with the next not crossed out number until all numbers are worked off. Here this process is illustrated for the odd numbers < 360:
Obviously, the crossing out of multiples needs to start at the square of the newly chosen sieve prime. So the total effort is about size * SUM[p] 1/p, where the sum is taken over all primes less than
sqrt(size). The sum can be estimated rather well to be about ln(ln(size)/2)+0.261497. Hence, for large sieve sizes, the time of the algorithm should be roughly proportional to the sieve size, because
the function ln(ln(x)) increases so slowly.
After seeing some "fast" implementations of this algorithm by other people, I decided to write my own, really fast and less memory-consuming computer program. There are four main improvement points for
The Art of Prime Sieving
Dense bit packing for the crossing-out flags
To use the memory efficiently, it is necessary to mark the numbers not in a byte or even a word of the computer, but in a bit only. This must be done very cleverly, because bit access is much more
expensive in CPU time than byte access or the CPU-preferred word access!
Only presieved numbers in the sieve
With the exception of 2, all other primes are odd, so we only need to store the flags for the odd numbers in the sieve. Furthermore, only the numbers 6k+1 and 6k+5 can be primes (except 2 and 3).
So we can reduce the total amount of sieve memory by a factor of 3 by storing only the flags for these numbers. If we also exclude all multiples of 5, resulting in a factor of 3.75, we need only 8
flags for every 30 numbers. This is really nice, because 1 byte has 8 bits!
Don't bother with multiples of the smallest primes
We know that the primes, except 2 and 3, can occur only at those numbers which have remainder 1 or 5 modulo 6. So we can avoid crossing out the multiples of 2 and 3, saving a factor of 3 in
sieving speed. What is more, this list of smallest primes can be extended, e.g. to include 5 and 7, so that we only need to consider 48 numbers out of 210, achieving a speed-up factor of 4.375.
Each further small prime p will decrease the run time by a factor 1-1/p, but increase the code of the program by a factor p-1.
Choose an appropriate sieve size
Typically, the sieve should fit into the computer's main memory, and even better into its cache on modern high-speed computers. Therefore, one normally can't use a sieve as large as the sieve limit, but
must use far smaller sieve sizes. So one has to choose an appropriate fixed sieve size and sieve the total range in many parts sequentially. And hence, the dense bit packing in the
sieve pays very well!
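The author's optimized C source is linked further below; purely as a language-neutral illustration of the segmentation idea (not the author's code, and without the dense 8-flags-per-30-numbers packing or the wheel for 2, 3, 5), a simple segmented odd-only sieve might look like this:

def segmented_sieve(limit, segment_size=32768):
    """Yield all primes <= limit, sieving odd numbers one segment at a time."""
    if limit >= 2:
        yield 2
    # small primes up to sqrt(limit), found with a plain sieve
    root = int(limit ** 0.5) + 1
    is_prime = bytearray([1]) * (root + 1)
    for p in range(2, root + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
    small_primes = [p for p in range(3, root + 1) if is_prime[p]]

    low = 3
    while low <= limit:
        high = min(low + 2 * segment_size - 1, limit)
        # flags for the odd numbers low, low+2, ..., <= high
        flags = bytearray([1]) * ((high - low) // 2 + 1)
        for p in small_primes:
            start = max(p * p, ((low + p - 1) // p) * p)
            if start % 2 == 0:          # only odd multiples live in this segment
                start += p
            for m in range(start, high + 1, 2 * p):
                flags[(m - low) // 2] = 0
        for i, flag in enumerate(flags):
            if flag:
                yield low + 2 * i
        low = high + 1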
With these tricks in mind (and a lot of optimization), I wrote a C program which was the fastest sieve-of-Eratosthenes implementation I ever became aware of. In May 1998 I further refined the
algorithm with an even denser sieve, resulting in access to fixed bit positions, and a quicker presieving. These improvements gain at least a 15% speed-up over the old version. To give you a feeling of
its speed: it generates all primes less than 10^9 (one billion) in less than 1 minute and all primes up to 2^32 in less than 3.5 minutes on a 133 MHz Pentium CPU (I used sieve size = 8000 bytes (the processor
has 8 KB data cache), smallest primes = 2,3,5 and gcc-2.7.2!).
Thanks to Thomas Fiedler, University of Jena, who in May 2003 discovered an important bug in the segmented sieving (a 1, not a 0, should have been there), I polished the source a bit and thus you can
fetch the latest prime_sieve.c version 2.0c. And here is the corresponding README.
• source code available
• very fast
• interval sieving possible
• adjustable to CPU cache size
• works up to numbers 2^64
• user definable macro called for each found prime number
│Limit│ Prime Count│ CPU1│ CPU2│ CPU3│ CPU4│ CPU5│ CPU6│ CPU7│ CPU8│ CPU9│Factor│
│ 10^2│ 25│ 0.0 s│ 0.0 s│ 0.00 s│ 0.0 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│1.0955│
│ 10^3│ 168│ 0.0 s│ 0.0 s│ 0.00 s│ 0.0 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│1.5010│
│ 10^4│ 1229│ 0.0 s│ 0.1 s│ 0.00 s│ 0.0 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│1.7887│
│ 2^16│ 6542│ 0.0 s│ 0.1 s│ 0.01 s / 0.01 s│ 0.0 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│1.9744│
│ 10^5│ 9592│ 0.1 s│ 0.1 s│ 0.01 s / 0.01 s│ 0.0 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│ 0.00 s│2.0118│
│ 10^6│ 78498│ 0.1 s│ 0.2 s│ 0.02 s / 0.03 s│ 0.1 s│ 0.01 s│ 0.01 s│ 0.01 s/ 0.01 s│ 0.00 s│ 0.00 s│2.1941│
│ 10^7│ 664579│ 0.2 s│ 1.6 s│ 0.14 s / 0.20 s│ 0.5 s│ 0.13 s│ 0.08 s│ 0.04 s/ 0.04 s│ 0.02 s│ 0.01 s│2.3483│
│ 10^8│ 5761455│ 1.2 s│ 17.0 s│ 1.42 s / 2.02 s│ 4.9 s│ 1.36 s│ 0.80 s│ 0.36 s/ 0.37 s│ 0.18 s│ 0.11 s│2.4818│
│ 10^9│ 50847534│ 11.3 s│187.8 s│ 16.2 s / 21.7 s│ 51.3 s│ 15.30 s│ 8.84 s│ 3.62 s/ 3.68 s│ 1.55 s│ 1.16 s│2.5996│
│ 2^32│ 203280221│ 50.7 s│889.5 s│79.6 s / 104.7 s│249.5 s│ 73.85 s│43.01 s│15.97 s/ 15.98 s│ 7.55 s│ 5.14 s│2.6676│
│10^10│ 455052511│ 120.4 s│ │ 268.9 s│ │191.35 s│ │ 38.49 s│ 16.72 s│ 12.34 s│2.7050│
│10^11│ 4118054813│ 1268.6 s│ │ 4122.7 s│ │ │ │ 482.77 s│ 214.51 s│ 143.71 s│2.8003│
│10^12│ 37607912018│ 15207.7 s│ │ │ │ │ │ 7466.49 s│ 3071.43 s│ 2074.49 s│2.8873│
│10^13│346065536839│249032.9 s│ │ │ │ │ │ │32327.34 s│30955.69 s│2.9673│
CPU1: HP PA-8000 180MHz with 400 MB RAM (256 KB Data-Cache) running HP-UX 10.20 (sieve size=200KB).
CPU2: MIPS 3000 33MHz with 32 MB RAM (no Cache) running Ultrix V4.4 (sieve size=15KB).
CPU3: AMD K6 233MHz with 64 MB RAM (32 KB Data-Cache) running Linux 2.032 (sieve size=22KB). The C-source was compiled using gcc 2.7.2.1 for i486-linux one time for 32bit LONG and the other time for
64 bit LONG. Further, for limit > 2^32 one should increase the sieve size to get shorter running times.
CPU4: Intel Pentium 133MHz with 64 MB RAM (8KB Data-Cache) running Windows 95 (sieve size=8000B). Compiler: Visual C++ (max speed)
CPU5: DEC Alpha 21164a 400 MHz with 64 MB RAM (8 KB Data-Cache) running OSF1 V4.0 (sieve size=8000)
CPU6: Intel Pentium III 450 MHz with 128 MB RAM (16 KB Data-Cache) running Linux 2.2.14 using gcc 2.7.2.3 (i386 Linux/ELF) (sieve size=16384).
As you see, it is very important how well the code-optimizer and the caching logic of the cpu does. The sieve size are nearly optimal chosen for limits < 10^10.
CPU7: AMD Thunderbird 900 MHz with 256 MB RAM (64 KB 1st level Data-Cache) running Linux 2.2.19 using gcc 2.95.2 (i386 Linux/ELF) (sieve size=64000/64KB).
CPU8: PowerPC 970FX 2.5 GHz with 1.5 GB RAM running Mac OSX 10.3.8 using compiler IBM XLF 6.0 Advanced Edition (sieve size=32000 for limit <= 10^11 else the minimal necessary sieve size).
CPU9: AMD Athlon64 Winchester 2 GHz with 3 GB RAM (64 KB 1st level Data-Cache) running Linux 2.6.9 using gcc 3.4.2 (x86-64 Linux/ELF) (sieve size=65000).
Factor = ln(1/2 ln(Limit))+0.2615
The average number of accesses for each bit in the sieve is PROD(1-1/p) * (Factor - SUM 1/p), where the primes p in the sum and the product run over the smallest "don't bother" primes --- here 2, 3, 5 ---
resulting in 8/30 * (Factor - 31/30).
BTW: Because the gaps between successive primes are <= 250 up to p=436273009 and <= 500 up to 304599508537, to hold a list of primes only their differences need to be stored in a byte.
For interested persons: the frequency of primes and a little problem for the mathematicians: estimate the magnitude of
SUM[p<=n] 1/p - ln(ln(n))-0.2614972128....
Hint: Hardy & Wright tell us O(1/ln(n)), but this seems to be too pessimisticly.
For intervals larger than about 10^9, and surely for those > 10^10, the sieve of Eratosthenes is outperformed by the sieve of Atkin and Bernstein, which uses irreducible binary quadratic forms. See their
paper for background information as well as paragraph 5 of W. Galway's Ph.D. thesis. Achim Flammenkamp
created: 1998-06-02 17:30 UTC+2
updated: 2005-05-19 17:10 UTC+2
|
Conversion of Customary Units of Measurement ( Read ) | Measurement
Remember the tables that Mr. Potter suggested? Take a look at this dilemma.
Tyrone has his first measurement done when he meets Mr. Potter in the auditorium. He has written the measurement of the first table on the paper.
The first table is $8' \times 4'$
When Tyrone arrives at the auditorium, Mr. Potter has another table all clean and set up for Tyrone to check out.
“I think this one is larger than the other one,” Mr. Potter says. “It measures $96'' \times 30''$.”
Tyrone looks at the table. He doesn’t think this one looks larger, but he can’t be sure.
On his paper he writes.
Table $2 = 96'' \times 30''$
To figure out which table is larger, Tyrone will need to convert customary units of measure. Then he will need to compare them.
This Concept will teach you all that you need to know about how to do this. When finished, you will know which table is larger and so will Tyrone.
Imagine that you are cooking with a recipe that calls for 13 tablespoons of whipping cream. Since you are cooking for a large banquet you need to make 4 times what the recipe makes. So you are
multiplying all of the ingredient quantities by 4 and combining them in a very large bowl. You realize that this requires you to use 52 tablespoons of whipping cream. To measure out 1 tablespoon 52
times will take forever!
Don’t worry, though.
You can convert tablespoons to a larger unit of measurement like cups and be able to measure out the whipping cream in larger quantities.
Look at the chart of customary units again.
Customary Units of Length
$& \text{inch} \ (in)\\& \text{foot} \ (ft) && 12 \ in.\\& \text{yard} \ (yd) && 3 \ ft.\\& \text{Mile} \ (mi) && 5,280 \ ft.$
Customary Units of Mass
$& \text{ounce} \ (oz)\\& \text{pound} \ (lb) && 16 \ oz.$
Customary Units of Volume
$& \text{ounce} \ (oz)\\& \text{cup} \ (c) && 8 \ oz.\\& \text{pint} \ (pt) && 16 \ oz.\\& \text{quart} \ (qt) && 32 \ oz.\\& \text{gallon} \ (gal) && 4 \ qt.$
Customary Units of Volume Used in Cooking
$& \text{teaspoon} \ (tsp)\\& \text{tablespoon} \ (tbsp) && 3 \ tsp.\\& \text{cup} \ (c) && 16 \ tbsp.$
Do you notice a relationship between the various units of volume? If we have 8 ounces of a liquid, we have 1 cup of it. If we have 16 ounces of a liquid, we have 1 pint of it, or 2 cups of it. If we
have 2 pints of a liquid, we have 1 quart, or 4 cups, or 32 ounces of it.
We just looked at volume relationships by going from ounces to quarts, let’s look at a larger quantity and break it down to smaller parts. If we have 1 cup, how many teaspoons do we have? 1 cup is 16
tablespoons and 1 tablespoon is 3 teaspoons, so 1 cup is $16 \cdot 3 = 48$ teaspoons.
Once we know how the different units of measurement relate to each other it is easy to convert among them. Don’t forget that 12 inches is equal to 1 foot. As you work more with measurements, it will
be helpful for you to learn many of these relationships by heart.
Convert 374 inches into feet.
There are 12 inches in 1 foot, so to go from inches to feet we divide 374 by 12.
Our answer is $31 \frac{1}{6}$
Notice this rule again.
If you go from a smaller unit to a larger unit, we divide. If we go from a larger unit to a smaller unit, we multiply.
Dividing and multiplying any units always has to do with factors and multiples of different numbers.
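For readers who like to check conversions programmatically, here is a tiny helper (my addition, not part of the lesson) that encodes a few of the customary relationships from the tables above and applies the multiply/divide rule:

# how many of the smaller unit make one of the larger unit
CUSTOMARY = {
    ("foot", "inch"): 12,
    ("yard", "foot"): 3,
    ("pound", "ounce"): 16,
    ("cup", "tablespoon"): 16,
    ("tablespoon", "teaspoon"): 3,
    ("quart", "pint"): 2,
    ("gallon", "quart"): 4,
}

def convert(value, from_unit, to_unit):
    """Convert between two directly related customary units."""
    if (to_unit, from_unit) in CUSTOMARY:       # smaller unit -> larger unit: divide
        return value / CUSTOMARY[(to_unit, from_unit)]
    if (from_unit, to_unit) in CUSTOMARY:       # larger unit -> smaller unit: multiply
        return value * CUSTOMARY[(from_unit, to_unit)]
    raise ValueError("units are not directly related in this small table")

print(convert(374, "inch", "foot"))    # 31.166..., as in the example above
print(convert(4.5, "pound", "ounce"))  # 72.0 ounces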
Studying measurement is particularly helpful because you learn to estimate more accurately what things around you measure. You catch a bass that weighs about 10 pounds, or your book bag filled with
books weighs about 25 pounds. Maybe your good friend is about $5 \frac{1}{2}$ feet tall.
Once you start measuring things, you can compare different quantities. Is the distance from your house to school farther than the distance from your house to the grocery store? Does your math book
weigh more than your history book?
To be accurate in comparing and ordering with measurements, it is essential that you are comparing using the same unit. So, if you compare pounds and ounces, you should convert ounces to pounds and
then compare pounds to pounds. Or, you can convert pounds to ounces and compare ounces to ounces.
Compare $4 \frac{1}{2}$ lbs. ___ 74 oz.
First, notice that we have two different units of measurement. To accurately compare these two quantities, we need to make them both into the same unit. We can do this by multiplying. Multiply the
pounds by 16 to get ounces.
$4 \frac{1}{2} \cdot 16 = 72 \ oz.$
Our answer is that $4 \frac{1}{2} \ lbs < 74 \ oz$
Compare 62 ft. ___ 744 in.
First, we need to convert the units so that they are both the same. We can do this by converting inches to feet. The inches measurement is so large, that it is difficult to get an idea the exact
size. Divide inches by 12 to get feet.
$744 \div 12 = 62 \ feet$
Our answer is that 62 ft. = 744 inches.
You can also use this information when ordering units of measurement from least to greatest and from greatest to least.
Now it's time for you to practice. Convert these measurements into quarts.
Example A
82 pints
Solution: 41 quarts
Example B
80 ounces
Solution:2.5 quarts
Example C
$8 \frac{1}{2}$ gallons
Solution: 34 quarts
Here is the original problem once again.
Tyrone has his first measurement done when he meets Mr. Potter in the auditorium. He has written the measurement of the first table on the paper.
The first table is $8' \times 4'$
When Tyrone arrives at the auditorium, Mr. Potter has another table all clean and set up for Tyrone to check out.
“I think this one is larger than the other one,” Mr. Potter says. “It measures $96'' \times 30''$.”
Tyrone looks at the table. He doesn’t think this one looks larger, but he can’t be sure.
On his paper he writes.
Table $2 = 96'' \times 30''$
To figure out which table is larger, Tyrone will need to convert customary units of measure. Then he will need to compare them.
To figure out which table is larger, Tyrone needs to convert both tables to the same unit of measure. One has been measured in inches and one has been measured in feet. Tyrone will convert both
tables to inches.
He takes his measurements from table one.
$8' \times 4'$
There are 12 inches in 1 foot, so if he multiplies each by 12 they will be converted to inches.
$8 \times 12 = 96''$
The length of both tables is the same. Let’s check out the width.
$4 \times 12 = 48''$
The first table is $96'' \times 48''$
The second table is $96'' \times 30''$
Tyrone shows his math to Mr. Potter. The first table that the students already have is the larger of the two tables. Tyrone thanks Mr. Potter, but decides to stick with the first table.
Customary System
a system of measurement common in the United States. It involves units of measurement such as inches, feet, miles.
Metric System
a system of measurement developed by the French and common in Europe. It involves meters, grams, liters.
Guided Practice
Here is one for you to try on your own.
Henrietta is having her 8 best friends over for a luncheon. She wants to prepare salads on which she uses exactly 7 tablespoons of Romano cheese. If she is preparing 8 salads, how many cups of Romano
cheese does Henrietta require?
We know that each salad requires 7 tablespoons and that Henrietta is making 8 salads. To find out the total amount of Romano cheese she needs in tablespoons, we multiply 7 by 8 to get 56 tablespoons.
Now we need to convert 56 tablespoons to cups. There are 16 tablespoons in a cup, so we need to divide 56 by 16.
$56 \div 16 = 3 \frac{1}{2}$
Henrietta will need $3 \frac{1}{2}$ cups of Romano cheese.
Video Review
- This is a James Sousa video on equivalent customary units of length.
- This is a James Sousa video on equivalent customary units of capacity.
- This is a James Sousa video on equivalent customary units of mass.
Directions: Convert the following measurements into yards.
1. 195 inches
2. 0.2 miles
3. 88 feet
4. 90 feet
5. 900 feet
Directions: Convert the following measurements into pounds.
5. 2,104 ounces
6. 96 ounces
7. 3 tons
8. 15 tons
Directions: Convert the following measurements into pints.
9. 102 quarts
10. 57 ounces
11. 9.5 gallons
12. 4 quarts
13. 18 quarts
14. 67 gallons
15. 500 gallons
Directions: Compare the following measurements. Write <, >, or = for each ___.
16. 41 ounces ___ 2.5 quarts
17. 89 feet ___ 31 yards
18. 79 inches ___ 6 feet
19. 47 tablespoons ___ 144 teaspoons
20. Order the following measurements from least to greatest: 0.25 mi., 1525 ft., 18,750 in., 492 yd.
21. Order the following measurements from least to greatest: 42 pts, 282 oz., 24 gal., 64 qt.
|
{"url":"http://www.ck12.org/measurement/Conversion-of-Customary-Units-of-Measurement/lesson/Conversion-of-Customary-Units-of-Measurement/","timestamp":"2014-04-21T05:07:03Z","content_type":null,"content_length":"122316","record_id":"<urn:uuid:da3b3634-0f13-47c5-a041-2040916ca7a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pattern of Remainders
Date: 7/10/96 at 18:14:30
From: Anonymous
Subject: Pattern of Remainders
A number divided by 2 gives a remainder of 1,
and when divided by 3 gives a remainder 2.
Answer: 5
This number when divided by 4 gives a remainder 3
" " 3 gives a remainder 2
" " 2 gives a remainder 1
Answer: 11
The problem continues with the remainder being one less than the
number it was divided by. I know there is a pattern and I've tried
adding, multiplying and dividing the numbers. Help would be very much appreciated.
Date: 7/10/96 at 21:32:7
From: Doctor Pete
Subject: Re: Pattern of Remainders
Think of the related question, "What is the smallest number n(k) for
which 2, 3, 4, ..., k divides n(k)?" For example, 60 is the smallest
number for which 2, 3, 4, and 5 are divisors. Note also that 6 also
divides 60. So here is a short table n(k) for different values of k:
k: 2 3 4 5 6 7 8 9 10
n(k): 2 6 12 60 60 420 840 2520 2520
Notice that n doesn't follow a simple rule.
Now, what does this have to do with your original question? Well,
since 2, 3, 4, ..., k all divide n(k), n(k)-1 will leave a remainder of j-1 when
divided by each j from 2 to k. Since we asked that n(k) be the smallest such
number with the above property, it follows that n(k)-1 will be the
smallest number which, upon dividing by 2, 3, 4, etc., will leave
remainders of 1, 2, 3, etc.
So the final question to be asked is, "How do you calculate n(k)?"
Well, look at the case where k=4. Note it's not 24, but 12; this is
because we already had 2 as a factor in n(3), so we only needed to
multiply n(3) by 2 to get a factor of 4 in n(4). Similarly, n(5)=
n(6)=60, since 6=2*3. So intuitively we will see that the prime
factorization of n(k-1) and k will play an important role. But off
the top of my head, I don't see an immediate formula to calculate
these. I'll follow this up if I find one.
-Doctor Pete, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
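An observation not made in the original exchange: the smallest number divisible by every integer from 2 through k is, by definition, the least common multiple of those integers, so n(k) can be computed directly. A short Python sketch (ours, not Doctor Pete's) that reproduces the table above:

```python
from math import gcd

def n(k):
    """Least common multiple of 2, 3, ..., k: the smallest number they all divide."""
    result = 1
    for m in range(2, k + 1):
        result = result * m // gcd(result, m)
    return result

print([n(k) for k in range(2, 11)])       # [2, 6, 12, 60, 60, 420, 840, 2520, 2520]
print([n(k) - 1 for k in range(2, 11)])   # 1, 5, 11, 59, ... (remainder j-1 when divided by each j <= k)
```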
|
{"url":"http://mathforum.org/library/drmath/view/56885.html","timestamp":"2014-04-21T10:25:32Z","content_type":null,"content_length":"6937","record_id":"<urn:uuid:803b3810-a97c-4cb1-af77-4dd0f8ba263e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Orthogonal group over finite field
Let O(n,F_q) be the orthogonal group over finite field F_q. The question is how to calculate the order of the group.
The answer is given in
. This seems to be a standard result, but I could not find a proof for this in the basic representation theory books that I have. Neither could I solve it myself from the (direct sum) construction
they have given.
Can someone please help me? It is enough if you give some references (books/papers) where it is solved.
|
{"url":"http://www.physicsforums.com/showthread.php?p=2097041","timestamp":"2014-04-18T08:29:49Z","content_type":null,"content_length":"16985","record_id":"<urn:uuid:09a7690e-70e6-445f-b3a6-e462e9de8d99>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
|
prove that sin54-sin18=1/2
0.80902-0.30902=0.5 or 1/2
or should i say 0.5=1/2
@GoldRush18 u have to prove it using trigonometric functions and rules , not using calculator :D
oh ok
Use cofunction identity.
well i got cos36-sin18 and dont know what to do , can u prove it with explanation pls :)
Idk if this helps, but Sin(3theta)=3Sin(theta)-4(Sin(theta))^3 I tried using it and I got - 4 sin³ 18 + 2 sin 18 - 1/2 = 0
Use this identity: \[\sin A-\sin B=2 \cos{\frac{A+B}{2}}\sin{\frac{A-B}{2}}\]
thanks will do that :D @blockcolder
thanks i got it now, it should be like this: sin54 - sin18 = 2cos(36)sin(18). Multiply by cos(18)/cos(18), which equals 1; since 2sin(18)cos(18) = sin(36), this becomes cos(36)sin(36)/cos(18). Write that as (1/2)·2sin(36)cos(36)/cos(18) = (1/2)sin(72)/cos(18). Then use the cofunction identity sin(72) = cos(18), so it equals (1/2)cos(18)/cos(18) = 1/2
Good job! :)
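As a quick numerical sanity check (not a proof, and not part of the original thread), the identity can be confirmed to floating-point precision:

```python
import math
print(math.sin(math.radians(54)) - math.sin(math.radians(18)))  # approximately 0.5
```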
|
a simple overview of how the prover works
Major Section: INTRODUCTION-TO-THE-THEOREM-PROVER
Six built-in proof techniques are used by ACL2 to decompose the goal formula into subgoals.
simplification -- decision procedures and rewriting with previously proved rules, but actually including a host of other techniques under your control. Simplification is the only proof technique that
can reduce a formula to 0 subgoals (i.e., prove it) rather than just transform it to other formulas. The predominant activity in most proofs is simplification. There are many ways you can affect what
the simplifier does to your formulas. Good users spend most of their time thinking about how to control the simplifier.
destructor elimination -- getting rid of ``destructor terms'' like (CAR X) and (CDR X) by replacing a variable, e.g., X, by a ``constructor'' term, e.g., (CONS A B). But you can tell ACL2 about new
destructor/constructor combinations.
cross-fertilization -- using an equivalence hypothesis by substituting one side for the other into the conclusion and then throwing the hypothesis away. This is a heuristic that helps use an
inductive hypothesis and prepare for another induction.
generalization -- replacing a term by a new variable and restricting the new variable to have some of the properties of the term. You can control the restrictions imposed on the new variable. This is
a heuristic that prepares the goal for another induction.
elimination of irrelevance -- throwing away unnecessary hypotheses. This is a heuristic that prepares the goal for another induction.
induction -- selecting an induction scheme to prove a formula. Inductions are ``suggested'' by the recursive functions appearing in the formula. But you can control what inductions are suggested by
But you can add additional techniques, called clause processors.
The various techniques are tried in turn, with simplification first and induction last. Each technique reports one of three outcomes: it found nothing to change (i.e., the technique doesn't apply to
that subgoal), it decided to abort the proof attempt (typically because there is reason to believe the proof is failing), or it decomposed the goal into k subgoals.
The last outcome has a special case: if k is 0 then the technique proved the goal. Whenever k is non-0, the process starts over again with simplification on each of the k subgoals. However, it saves
up all the subgoals for which induction is the only proof technique left to try. That way you see how it performs on every base case and induction step of one induction before it launches into
another induction.
It runs until you or one of the proof techniques aborts the proof attempt or until all subgoals have been proved.
Note that if simplification produces a subgoal, that subgoal is re-simplified. This process continues until the subgoal cannot be simplified further. Only then is the next proof technique tried.
Such subgoals are said to be stable under simplification.
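The control flow just described can be sketched roughly as follows. This is a loose abstraction written in Python for illustration, not ACL2 code, and it glosses over many details; each technique is modeled as a function that returns None (it does not apply), the string "ABORT", or a list of subgoals (an empty list meaning the goal was proved).

```python
def prove(goal, techniques, induct):
    """techniques: simplification ... elimination of irrelevance, in order; induct: induction."""
    pending = [goal]   # subgoals still being worked on
    pool = []          # subgoals for which induction is the only technique left to try
    while pending or pool:
        if not pending:                    # everything else is done: induct on ONE pooled goal
            subgoals = induct(pool.pop(0))
            if subgoals == "ABORT":
                return "proof attempt aborted"
            pending.extend(subgoals)       # its base cases and induction steps re-enter the loop
            continue
        g = pending.pop(0)
        for technique in techniques:       # tried in turn, simplification first
            result = technique(g)
            if result is None:
                continue                   # technique found nothing to change; try the next one
            if result == "ABORT":
                return "proof attempt aborted"
            pending.extend(result)         # k subgoals; k == 0 means g was proved
            break
        else:
            pool.append(g)                 # only induction remains for this subgoal
    return "all subgoals proved"
```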
While this is happening, the prover prints an English narrative describing the process. Basically, after each goal is printed, the system prints an English paragraph that names the next applicable
proof technique, gives a brief description of what that technique does to the subgoal, and says how many new subgoals are produced. Then each subgoal is dealt with in turn.
If the proof is successful, you could read this log as a proof of the conjecture. But output from successful proofs is generally never read because it is not important to The Method described in
The output of an unsuccessful proof attempt concludes with some key checkpoints which usually bear looking at.
|
{"url":"http://www.cs.utexas.edu/users/moore/acl2/seminar/2010.02-17-moore/HTML/ARCHITECTURE-OF-THE-PROVER.html","timestamp":"2014-04-19T10:48:17Z","content_type":null,"content_length":"4673","record_id":"<urn:uuid:4ac623c8-9c0d-4080-94f8-6989ce494a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Instructor Class Description
Time Schedule:
Barry Minai
B CUSP 122
Bothell Campus
Introduction to Elementary Functions
Covers college algebra with an emphasis on polynomial, rational, logarithmic, exponential, and trigonometric functions. Prerequisite: either a minimum grade of 2.5 in B CUSP 121 or a score of 147-150
on the MPT-GSA assessment test. Offered: AWSp.
Class description
The goal of this course is to show students that math can be understood and enjoyed. The emphasis of the course is to learn and improve the algebraic skills necessary to go on taking more math courses. Lots of fractions, exponents, radicals, factorization, linear functions, quadratic functions and some discussion of polynomials and rational functions. Graphing of various functions.
Student learning goals
*Improve algebraic skills
*Learn functional notation, algebraic manipulation of functions, and composition of functions.
*Recognize and be comfortable using polynomial, exponential, and rational functions.
* Able to graph and manipulate functions symbolically.
*Apply functions and concepts to solve real world problems.
* Learn to become problem solvers.
General method of instruction
Group discussion and discovery. Lots of additional worksheets to supplement the material from the text.
Recommended preparation
The Placement test and the desire to learn.
Class assignments and grading
Attendance Policy: Since participation is vital for a successful experience, please arrive on time for class. Late arrivals interrupt our in-progress activities and discussions. If you must miss a
class session, let me know as soon as possible so that you can make up the work that you miss.
Grading: HW and quizzes – 50 pts; HW/Attendance/Quizzes – 5 points for each completed HW; Participation and attendance – 10 pts; 1st Midterm – 40 pts; 2nd Midterm – 40 pts; Final Exams – 60 pts. The course is not graded on a curve. Following is a rough grading scale:
- <104 points (52%) – 0.0
- 104 points (52%) – 0.7
- 110 points (55%) – 1.0
- 130 points (65%) – 2.0
- 160 points (80%) – 3.0
- 180 points (90%) – 4.0
I reserve the right to change this scale, most likely in your favor.
Homework Details: There will be homework nearly every night – about two to three hours each night. Occasionally, there may be worksheets to turn in. Homework should be turned in with the following:
- Your name
- Date due
- Assignment #, section number, and the problem # written neatly on the outside.
- Homework must be neat.
- Space the problems far enough apart so I can find them easily.
- Homework must be done on graph paper, as much as possible. No tear-offs!
- Much of the assignments require lots of accurate graphing, so this is a must for those assignments.
Homework does count toward your course grade, but almost entirely on a basis of how seriously you tried. Homework will be graded as follows:
- If every problem is given a strong effort, you will receive 5 points.
- If the effort on the assignment is considerably lacking, you will receive 0 points.
Homework is generally due on Wednesdays before the lecture. I will collect one problem at RANDOM. No late HW. Please circle the problem that is being collected.
Quizzes: Expect pop quizzes! Stay on top of things! NO make up quizzes.
The median score of each exam is calculated and used to determine the grade. Improvement is heavily considered when determining the grade.
The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without
notice. In most cases, the official course syllabus will be distributed on the first day of class. Last Update by Barry Minai
Date: 04/23/2012
|
{"url":"http://www.washington.edu/students/icd/B/bcusp/122bminai.html","timestamp":"2014-04-19T22:09:28Z","content_type":null,"content_length":"6868","record_id":"<urn:uuid:618d7ca8-0704-4856-9f74-093e6c573035>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pseudo-slant submanifolds of a Sasakian manifold.
(English) Zbl 1117.53043
Let $\bar{M}$ be a Riemannian manifold equipped with an almost contact metric structure $(\varphi, \xi, \eta, g)$. A submanifold $M$ of $\bar{M}$ is said to be pseudo-slant if the structure vector field $\xi$ is tangent to $M$ everywhere, and if there exist two subbundles $D_1$ and $D_2$ of the tangent bundle $TM$ such that $TM$ decomposes orthogonally into $TM = D_1 \oplus D_2 \oplus \mathbb{R}\xi$, $\varphi D_1$ is a subbundle of the normal bundle of $M$, and there exists a real number $\theta$ with $0 \le \theta < \pi/2$ such that for each nonzero vector $X \in D_2$ the angle between $\varphi X$ and $D_2$ is equal to $\theta$. The authors derive some equations for certain tensor fields and investigate the integrability of some distributions on pseudo-slant submanifolds for the special case that the almost contact metric structure is Sasakian.
53C40 Global submanifolds (differential geometry)
53C25 Special Riemannian manifolds (Einstein, Sasakian, etc.)
|
{"url":"http://zbmath.org/?q=an:1117.53043","timestamp":"2014-04-20T08:53:18Z","content_type":null,"content_length":"23461","record_id":"<urn:uuid:6ac1ee9e-f7e1-4567-9956-fb589096b054>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Uniformly distribute a population in a given search space
I am trying to uniformly distribute a finite number of particles into a 2D search space to get me started with an optimization problem, but I am having a hard time doing it. I am thinking that it
might have something to do with convex sets, but I might as well be totally off, so I am asking you guys for a proper way to do it.
Edit: Ok, so I have to implement the Particle Swarm Optimization algorithm in order to get the polynomial input for Baker's algorithm and to get started with PSO, I have to uniformly distribute the
particles in the search space (the initial example I got was of the distribution of particles inside of a cube, but that's kind of vague for me). What does it mean to uniformly distribute in the
search space?
What's wrong with choosing one coordinate uniformly and then another? – Qiaochu Yuan Jan 13 '10 at 18:15
If your space is irregularly shaped, use rejection sampling (en.wikipedia.org/wiki/Rejection_sampling). See also mathoverflow.net/questions/9854/… – Steve Huntsman Jan 13 '10 at 18:19
Well, what if the search space is a rectangle, say with a 2^16 size ? I guess your solution will still work. – user984 Jan 13 '10 at 20:30
@Leonid: yes you are right ... I am not talking about the probability density function, but rather how to uniformly distribute those points in the space. – user984 Jan 14 '10 at 17:57
@Hyperboreean -- you should try to indicate what you mean by "uniformly distribute", perhaps with an example of the sort of thing you're looking for. I suspect you're not getting any answers because no one is sure what you want. You should also explicitly ask a question --- our experience is generally that the effort of putting a problem into the explicit form of a question really pays off! – Scott Morrison♦ Jan 15 '10 at 21:14
3 Answers
Despite the lack of formalization in your question I'm going to take a guess that you don't really want points that are distributed uniformly at random, as that tends to result in clusters and voids that you probably want to avoid. Rather you may want to be using something like Lloyd's algorithm: start with randomly generated points but then repeatedly move each point to the centroid of its Voronoi cell, resulting in a set of points that are nearly equally spaced across the domain and that, within the domain, are spaced in a pattern approximating a hexagonal close-packing.
Since you are working in 2D, how about choosing the center of the 2D space as the center of an imaginary circle and the distance between the center and one of the sides as the radius. For each quadrant of the circle generate some random number of individuals, at a random distance from the center. That way, you have a fairly good representation of the search space, I think.
It sounds to me like this problem is a candidate for quasi-random sequences: use, for example, Sobol, to distribute the points very evenly over the search space (avoiding the clustering that would occur from using randomly chosen points).
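For the common case where the search space is a box (an interval in each dimension), the simplest reading of "uniformly distribute" is the one from the comments above: draw each coordinate independently and uniformly within its bounds; for irregular regions, rejection sampling or a low-discrepancy (e.g. Sobol) sequence can be substituted. A minimal sketch of our own, with made-up bounds:

```python
import random

def init_particles(n, bounds):
    """bounds: list of (low, high) pairs, one per dimension; returns n points uniform in the box."""
    return [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]

# e.g. 30 PSO particles in the 2-D box [0, 100] x [-5, 5]
swarm = init_particles(30, [(0, 100), (-5, 5)])
print(swarm[0])
```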
|
{"url":"http://mathoverflow.net/questions/11676/uniformly-distribute-a-population-in-a-given-search-space","timestamp":"2014-04-18T03:48:41Z","content_type":null,"content_length":"62552","record_id":"<urn:uuid:7bea05a3-9783-4436-b3e3-4876bdd8603a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: iweights and ROC curves
Re: st: iweights and ROC curves
From Steven Samuels <sjhsamuels@earthlink.net>
To statalist@hsphsun2.harvard.edu
Subject Re: st: iweights and ROC curves
Date Thu, 11 Dec 2008 20:04:10 -0500
I should have made clear: to get the ROC curve, use ordinary -logistic-, not -svy: logistic-. Mimic the -subpop- option with an -if- statement.
On Dec 11, 2008, at 7:30 PM, Steven Samuels wrote:
Do not use iweights. -lroc- after -logistic- with frequency weights works fine.
Austin Nichols once pointed out that you can compute fweights that give the same results as pweights to any number of decimal places:
For k decimal places, use
new frequency weight = round(10^k * probability weight,1)
So for 2 decimal places:
233.43 -> 23,343 a whole number.
On Dec 11, 2008, at 1:30 PM, Patrick McCabe wrote:
Does any one know if the ROC curves produced by a logistic regression model using svy subpopulation and importance weights are 'legit'? The resulting model is the same as the model produced
with pweights but if you use pweights lroc does not work.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-12/msg00620.html","timestamp":"2014-04-18T14:24:32Z","content_type":null,"content_length":"7107","record_id":"<urn:uuid:916d4c1e-b307-41e5-81c9-ac0c8195509f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum mechanics
Quantum mechanics: symmetries
Dec 31, 1989
368 pages
"Quantum Dynamics" is a major survey of quantum theory based on Walter Greiner's long-running and highly successful courses at the University of Frankfurt. The key to understanding in quantum theory
is to reinforce lecture attendance and textual study by working through plenty of representative and detailed examples. Firm belief in this principle led Greiner to develop his unique course and to
transform it into a remarkable and comprehensive text. The text features a large number of examples and exercises involving many of the most advanced topics in quantum theory. These examples give
practical and precise demonstrations of how to use the often subtle mathematics behind quantum theory. The text is divided into five volumes: Quantum Mechanics I - An Introduction, Quantum Mechanics
II - Symmetries, Relativistic Quantum Mechanics, Quantum Electrodynamics, Gauge Theory of Weak Interactions. These five volumes take the reader from the fundamental postulates of quantum mechanics up
to the latest research in particle physics. Volume 2 presents a particularly appealing and successful theme in advanced quantum mechanics - symmetries. After a brief introduction to symmetries in
classical mechanics, the text turns to their relevance in quantum mechanics, the consequences of rotation symmetry and the general theory of Lie groups. The Isospin group, hypercharge, SU (3) and
their applications are all dealt with in depth before a chapter on charm and SU (3) leads to the frontiers of research in particle physics. Almost a hundred detailed, worked examples and problems
make this a truly unique text on a fascinating side of modern physics.
Symmetries in Quantum Mechanics 1
Contents of Examples and Exercises 4
Angular Momentum Algebra Representation of Angular Momentum 37
Bibliographic information
|
{"url":"http://books.google.com/books?id=8vNAAQAAIAAJ&q=construct&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-19T20:13:47Z","content_type":null,"content_length":"115198","record_id":"<urn:uuid:a30326bb-653d-4049-b55c-f175cd2a008a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
binary operation on a set
May 27th 2010, 04:48 PM #1
binary operation on a set
Given a set S, a function f: S X S $\longrightarrow$ S is called a binary operation on S. If S is a finite set, then how many different binary operations on S are possible?
I have no clue on this one? Could someone point me in the right direction?
I realize that if S has n elements then SxS would have $n^2$ possible ordered pairs. However, I'm not sure what is meant by how many different binary operations. Would we have addition,
subtraction, multiplication, division, exponentiation, etc. times $n^2$ binary operations?
It's asking how many functions there are from a set with $n^2$ elements to a set with $n$ elements.
With the help of Drexel28, we have a set S with n elements which gives $n^2$ mapped to n. This should give us $n^3$ binary operations, correct?
Re: binary operation on a set
Yes I believe that (n)^(n^2) is correct
Re: binary operation on a set
in general, the number of functions f:A→B is $|B|^{|A|}$:
it is easiest to see this when |B| = 2, such as when B = {0,1}, so that the functions f:A→{0,1} can be put in a 1-1 correspondence with the subsets of A:
given a subset S of A, we define:
f(a) = 1, if a is in S
f(a) = 0, if a is not in S.
thus the number of functions f:A→{0,1} is the same number as 2^|A|, the cardinality of the power set of A.
so, yes, the correct answer is n^(n^2).
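For a very small set the count can also be confirmed by brute force; the sketch below (not from the thread) enumerates one output choice for each of the $n^2$ ordered pairs, giving $n^{n^2}$ functions.

```python
from itertools import product

def count_binary_operations(S):
    """Count functions f : S x S -> S by listing every assignment of an output to each ordered pair."""
    pairs = list(product(S, repeat=2))                      # the n^2 ordered pairs
    return sum(1 for _ in product(S, repeat=len(pairs)))    # one output chosen per pair

print(count_binary_operations([0, 1]))       # 16 = 2^(2^2)
print(count_binary_operations([0, 1, 2]))    # 19683 = 3^(3^2)
```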
|
{"url":"http://mathhelpforum.com/discrete-math/146688-binary-operation-set.html","timestamp":"2014-04-23T11:07:33Z","content_type":null,"content_length":"48454","record_id":"<urn:uuid:97c5e5cf-54f3-4b3a-8a4d-0fff27f2cf90>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PROOF by counterexample
October 4th 2010, 04:08 PM #1
PROOF by counterexample
give a counterexample for each of the two
directions of:
The product xy of a rational number x and a real number y is irrational if
and only if y is irrational.
cant figure out the first direction
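A hint, not part of the original thread: for the direction "if y is irrational then xy is irrational", look for a rational x that wipes out the product; for example, x = 0 and y = $\sqrt{2}$ give xy = 0, which is rational even though y is irrational (any irrational y works once x = 0). For the other direction, "if xy is irrational then y is irrational", there is no counterexample, because the statement can be proved: if y were rational, then xy, a product of two rationals, would be rational, contradicting the assumption that xy is irrational.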
Last edited by mremwo; October 4th 2010 at 04:12 PM. Reason: actually i want to know how to EITHER give a counterexample OR prove one/both directions
|
{"url":"http://mathhelpforum.com/discrete-math/158426-proof-counterexample.html","timestamp":"2014-04-17T04:37:14Z","content_type":null,"content_length":"29030","record_id":"<urn:uuid:ba52b565-ca64-4caf-be29-1398f73225c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: question regd wmpstata
Re: st: question regd wmpstata
From Nick Cox <njcoxstata@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: question regd wmpstata
Date Sun, 30 Sep 2012 10:02:28 +0100
Tashi Lama gave an example in the thread cited earlier.
On Sun, Sep 30, 2012 at 5:45 AM, Pradipto Banerjee
<pradipto.banerjee@adainvestments.com> wrote:
> Thanks, Nick. Out of curiosity, would it be possible for you to share how to do it?
Nick Cox
> My impression was that you can do it awkwardly, but it's not a good
> idea. Same implication: no need to bother.
On Fri, Sep 28, 2012 at 6:08 PM, Pradipto Banerjee
> <pradipto.banerjee@adainvestments.com> wrote:
>> Nick,
>> Thanks for the link.
>> You made some valid points
>> 1. I was planning to supply the name of the database to the program
>> 3. You are right, but I wanted to explore if wmpstata has the capability to run an -ado- file. Seems like it can't.
>> Thanks
>> -----Original Message-----
>> From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Nick Cox
>> Sent: Friday, September 28, 2012 12:58 PM
>> To: statalist@hsphsun2.harvard.edu
>> Subject: Re: st: question regd wmpstata
>> I see. This is, more or less, a question raised by Tashi Lama a few days ago
>> http://www.stata.com/statalist/archive/2012-09/msg00598.html
>> and you might want to follow that thread. I presume you are interested
>> in programs defined by .ado files.
>> Speculating wildly, but building on that discussion, here are some reflections:
>> 1. To do very much useful the program would usually have to read in
>> some data, and how would that be done? (There are exceptions, such as
>> doing simulations.)
>> 2. Passing arguments to the program sounds as if it could be messy and
>> you would have to ensure that your Stata syntax was not misinterpreted
>> by the operating system.
>> 3. The do-file method lets you run one or more programs and so appears
>> to be general enough already to meet foreseeable needs. Conversely, if
>> Stata is running a program that you name, then at a guess everything
>> that you want to do has to be defined by that program, which by many
>> tastes is likely to seem poor programming style, but the choice is
>> yours.
>> Nick
>> On Fri, Sep 28, 2012 at 5:38 PM, Pradipto Banerjee
>> <pradipto.banerjee@adainvestments.com> wrote:
>>> Hi Nick,
>>> Historically, I have used "wmpstata /e do mydofile.do dofileoption1 dofileoption2" where the -do- is run with the options -dofileoption1-, -dofileoption2- etc. However, I was thinking to do the same with an -ado- file instead of a -do- and wanted to explore if there are any differences - and also what other options might be available.
>> Nick Cox
>>> My answer is "the manuals" or http://www.stata.com/statamp/ depending
>>> on whether you have it installed, but I suspect that I don't
>>> understand what is being sought here.
>> Pradipto Banerjee
>>>> Is there any place where I can get information on wmpstata.exe, i.e. what the options etc?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-09/msg01194.html","timestamp":"2014-04-16T10:18:46Z","content_type":null,"content_length":"12497","record_id":"<urn:uuid:3b793a4f-e2d2-4141-8f93-776e89c09b9b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A heuristic approach to the formulas for population attributable fraction
1. Dr Hanley (James.Hanley{at}McGill.CA)
BACKGROUND As the definitional formula for population attributable fraction is not usually directly usable in applications, separate estimation formulas are required. However, most epidemiology
textbooks limit their coverage to Levin's formula, based on the (dichotomous) distribution of the exposure of interest in the population. Few present or explain Miettinen's formula, based on the
distribution of the exposure in the cases; and even fewer present the corresponding formulas for situations with more than two levels of exposure. Thus, many health researchers and public health
practitioners are unaware of, or are not confident in their use of, these formulas, particularly when they involve several exposure levels, or confounding factors.
METHODS/RESULTS A heuristic approach, coupled with pictorial representations, is offered to help understand and interconnect the structures behind the Levin and Miettinen formulas. The pictorial
representation shows how to deal correctly with several exposure levels, and why a commonly used approach is incorrect. Correct and incorrect approaches are also presented for situations where
estimates must be aggregated over strata of a confounding factor.
The population attributable fraction (AF[p]) is defined1 (page 295) as “the fraction of all cases (exposed and unexposed) that would not have occurred if exposure had not occurred.” This fraction can
be estimated using two equivalent formulas, based on the distribution of exposure in the population,2 or in the cases.3 Most textbooks deal only with the exposure in the population and do not
consider confounding factors. Many of them focus on deriving the formula algebraically or minimising the number of computational steps, thereby providing limited insight into the structure of the
formula. Only a few texts elaborate on the “case-based” formula. As a result, although it is increasingly used to derive estimates of AF[p]s from complex data,4-8 the case-based formula is less
widely known and less well understood.
Likewise, despite the long existence3 9 of the corresponding AF[p] formulas for more than two levels of an exposure of interest, and despite the fact that three advanced textbooks10-12 do present and
even illustrate them, many authors seem to be unaware of them. This author has recently encountered three pre-publication examples where, with multiple exposure levels, the AF[p] was calculated
incorrectly. Table 1 shows a published example13 of this same error.
Stratification and—increasingly—regression models are used to provide confounder adjusted rate ratio (RR) estimates as inputs to the calculation of AF[p]s. As textbooks do not discuss such
situations, and understanding of first principles is limited, the AF[p] is often miscalculated in such instances too.14 Indeed, in addition to mishandling a trichotomous exposure, the above cited
report13 also fails to correctly incorporate the adjusted RR into the AF[p] calculation.
The primary aim of this article is to promote understanding of the AF[p] formulas for a polytomous exposure. To do so, the article begins with the more familiar all or none exposure. A numerical
example and a diagram allow the Levin and the Miettinen formulas to be understood directly from first principles, without algebra. This heuristic approach provides a foundation from which to extend
the AF[p] formulas correctly to polytomous exposure data, and to data stratified on a confounding variable.
Population (or population time) at risk and cases will be denoted by the letters P and C respectively. The fractions of the population (or population time) in the various exposure categories are
denoted by “population fractions” (PFs), while the distribution of exposure in the cases is denoted by “case fractions” (CFs).3 The terms overall and population attributable fraction are used
interchangeably. Given the difficulties15 of interpreting it as a true “aetiological” fraction, particularly when a long time span and competing risks can substantially change denominators, the AF[p]
is simply regarded as an “excess” fraction.15
All or none exposure
The exposed and unexposed categories are denoted by 1 and 0 and the ratio of the event rates in these two categories as RR: 1.
Classic (Levin) structure for AF[p], based on distribution of exposure in population
Denote by PF[1] the proportion (or fraction) of the total population time in the exposed category, and by PF[0]the proportion in the unexposed category. The most popular1 11 16-18 formula for AF[p]
is Levin's original version. Levin began by defining the AF[p]: its denominator is the rate (or number of cases) in the overall population, and its numerator is the difference between this and the
one that would prevail if all of the person time were in the unexposed category. From this, he algebraically derived the estimating formula
AF[p] = {PF[1] × (RR − 1)} / {1 + PF[1] × (RR − 1)} [1P]
Attributable fractions for specific exposure categories.
The case-based version uses as one of its inputs the “attributable fraction in the exposed”, namely
(RR − 1)/RR
This is a specific AF, as it restricts attention to exposed cases. To emphasise this specificity, we label it AF[1]. The under-appreciated fact that the “attributable fraction in theunexposed” is 0
becomes important later, and so we label the AF specific to cases in that category as AF[0] = 0.
(Miettinen) structure for AF[p], based on distribution of exposure in cases
The case-based version uses as its other input the number of exposed cases, expressed as a fraction of the overall number of cases. Denote this fraction as the “case fraction”,3CF[1]. Then the
case-based formula for AF[p] is
AF[p] = CF[1] × (RR − 1)/RR
or in the notation used here,
AF[p] = AF[1] × CF[1]. [1C]
Suppose, as is depicted in figure 1, that PF[1] = 2/5th of the population time (PT) is in the exposed category. Although the AF[p] involves relative rather than absolute rates, suppose—for
concreteness—that the event rates in the exposed and unexposed categories are 1.5 and 1.0 cases per 10^4 PT units, so that the RR = 1.5, and the rate difference = 0.5 cases per 10^4 PT units. Suppose
further that the total population time is 10^6 PT units.
Substitution of PF[1] = 2/5 and RR − 1 = 0.5 into formula [1P] yields {2/5 × 0.5} / {1 + 2/5 × 0.5} = 0.2/1.2 = 1/6.
Conceptually, the Levin formula directly divides the total number of cases into “expected” cases—those that would occur even if all of the PT were in the unexposed category—and “excess” cases. With a
total of 10^6 units of PT, there are 10^6 × (1.0 × 10^–4) = 100 “expected” cases. Some 2/5th of the overall 10^6 PT units are in the exposed category, where the excess rate is 0.5 per 10^4 PT units.
The product of this PT and the excess rate in this category is 20 “excess” cases. These 20 represent 1/6th of the overall total of 100 + 20 = 120 cases. Note that the 20 can also be represented as
the “observed−expected” number, while the 120 represent the “observed” number, in keeping with the structure in Miettinen's 1985 text (page 254–256),3 and Levin's2original conceptualisation.
The case-based formula begins with the same total of 120 cases and immediately “rules out” all unexposed cases, as, by definition, none of them are “excess” cases. Based on the amount of unexposed
PT, they number 60, or 3/5th of the 100 “expected” cases discussed above. This leaves 60 exposed cases (1/2 of the overall total); if RR > 1 and finite, then only a fraction of these 60 exposed cases
(or of the 1/2) are excess cases (that is, the maximum possible AF[p] is 1/2). What fraction of these 60 represent excess cases? The RR=1.5 implies that of every 1.5 exposed cases, 1 is “expected”,
while 0.5 of the 1.5, or 1/3rd, are excess. Thus, of the subtotal of exposed 60 cases, 20 are “excess” cases. As the subtotal of 60 exposed cases constitute 1/2 of all cases, and as only 1/3rd of the
60 are excess, then 1/3rd of 1/2, that is, 1/6 of all cases are excess cases. This 1/6th is thus a “fraction of a fraction”. The one fraction (1/2) is simply what fraction of all cases are exposed
cases—which we have denoted by CF[1]. The other, RR − 1 as a fraction of RR, that is, 1/3, is the AF specific to the exposed category, namely AF[1].
Figure 1 begins “at the base” with the denominators—the PT distribution—that generated the cases. Some 3/5th (= PF[0]) of the PT are in the unexposed category (empty area) and 2/5th (= PF[1]) are in
the exposed category (shaded area). The cases that arise from these are represented by empty or shaded circles respectively; “excess” cases are marked with an “X” (for “excess”), while “expected”
cases are not.
The number of excess cases as a fraction of all cases can be seen from two different views. In the first (Levin), the total number of cases is directly subdivided into two arrays—the (bottom) square
array of expected cases, and the (upper) rectangular array of excess cases. With its height (representing the rate in category 0) arbitrarily scaled to 1, and with its entire base of 1, the square
array of “expected” cases has an area of 1, representing one expected case. The width of the rectangle of excess cases is PF[1] = 2/5 and its height is RR − 1 = 1.5 − 1 = 0.5, so that its area (the
number of “excess cases per one expected case”) is (RR − 1) × PF[1] = 0.5 × 2/5 = 0.2, yielding AF[p] = 0.2/(1 + 0.2) = 0.2/1.2 = 1/6.
In the other view (Miettinen), the total number of cases is first subdivided into two arrays (and thus “case-fractions”) on the basis of exposure. Only the exposed (shaded) cases (a fraction CF[1] of
all cases) are “eligible” to be excess cases. These exposed cases are then further subdivided into subarrays of excess and expected cases, yielding the attributable fraction AF[1] specific to the
exposed cases. An attraction of this “fraction of a fraction” structure of the overall AF[p]is that it does not explicitly involve the 3/5 : 2/5 exposure distribution in the source PT—a distribution
that is not always easy to estimate—but rather the 60:60 split of the cases themselves.
For those who prefer algebra to pictures, an algebraic derivation of the case-based formula is given in the appendix.
Unfortunately, immediately “eliminating” the unexposed cases, and focusing on the exposed ones, distracts from the fact that the case-based AF[p] can also be viewed as a weighted average of the two
category specific attributable fractions AF[0](= 0) and AF[1]. Naturally, as the focus is on all cases, the weights are given by the relative numbers of cases in exposure categories 0 and 1—that is,
by the proportions CF[0] and CF[1]. Thus the weighted average of the two category specific fractions AF[0] and AF[1] across both categories of cases is 0 × CF[0] + AF[1] × CF[1] = AF[1] × CF[1]. This
representation of AF[p] as a weighted average3 is key to understanding the case-based formula for the polytomous exposure situation considered next.
Polytomous exposure
Figure 2 depicts the data, and illustrates the correct AF[p] calculations, for the “three exposure levels” example given in table 1.
Again, the expected cases are shown in the square array of unmarked circles. Now, there are two sets of excess cases, denoted by the lightly shaded and more heavily shaded rectangular arrays of cases
marked with an “X”. The scaled heights, {RR[1] − 1} and {RR[2] − 1}, of these rectangles, multiplied by their widths, PF[1] and PF[2], yield excess “areas” of {RR[1] − 1} × PF[1] and {RR[2] − 1} × PF[2]
respectively. These products represent the number of excess cases for every one “expected”. This “expected” versus “excess” partition of the cases leads immediately to the formula
AF[p] = {PF[1] × (RR[1] − 1) + PF[2] × (RR[2] − 1)} / {1 + PF[1] × (RR[1] − 1) + PF[2] × (RR[2] − 1)} [2P]
For every one expected case, there are 0.12 + 0.14 = 0.26 “excess” cases, yielding an AF[p] of 0.26/1.26 = 20.6% (last column of table 1). Note that applying formula 1P [all or none exposure] twice13
(second last column of table 1), overestimates the overall fraction of excess cases.
In this view, the AF[p] is a sum of 2 “fractions of fractions”, that is,
AF[p] = AF[1] × CF[1] + AF[2] × CF[2] [2C]
as originally given in reference 3. As AF[0]=0, the AF[p] can also be seen as a weighted average of the category specific AFs over all 3 levels 0, 1 and 2
AF[p] = CF[0] × AF[0] + CF[1]× AF[1] + CF[2] × AF[2][2C']
For the data in table 1 and figure 2, the appropriate calculation is
(50/126) × 0 + (42/126) × (0.4/1.4) + (34/126) × (0.7/1.7),
yielding the “CF weighted” average, AF[p] = 20.6%.
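The agreement between the population-based and case-based calculations is easy to verify numerically. The short Python sketch below is ours, not the article's; the case counts and rate ratios are those quoted above from table 1, while the population-time fractions (0.5, 0.3, 0.2) are assumptions chosen to be consistent with the excess products 0.12 and 0.14 used in the text.

```python
# Sketch (not from the article): checking both AF[p] formulas on the table 1 numbers.
PF = [0.5, 0.3, 0.2]      # population-time fractions for exposure levels 0, 1, 2 (assumed)
RR = [1.0, 1.4, 1.7]      # rate ratios relative to level 0
cases = [50, 42, 34]      # cases by exposure level; case fractions CF[i] = cases[i] / 126

# Levin / population-based route, formula [2P]: excess cases per one expected case
excess = sum(pf * (rr - 1) for pf, rr in zip(PF, RR))
af_population = excess / (1 + excess)

# Miettinen / case-based route, formula [2C']: CF-weighted average of the level-specific AFs
total = sum(cases)
af_cases = sum((c / total) * (rr - 1) / rr for c, rr in zip(cases, RR))

print(round(af_population, 3), round(af_cases, 3))   # both print 0.206, i.e. 20.6%
```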
The appendix explains a version that is useful when there are several strata or covariate patterns.
Several authors have noted11 19 20 or shown7 that the AF[p] involving an exposure with levels 0, 1, ..., k is the same as if one first combined categories 1 to k and used the formula for the “all or
none” situation. This is easy to see from figure 2, where the RR for the “moderate or high” category, relative to low, is 1.52, and PF[moderate/high] = 0.5. Thus, for every one expected case, there
are 0.5 × 0.52 = 0.26 excess cases, yielding AF[p] = 0.26/1.26 = 20.6%.
The “distributive”21 property of the AF[p] is useful in multiple regression if, instead of aggregating exposure categories, one subdivides them, to the point that each case defines its own exposure
category. Details are given in the appendix.
Stratified data
To correctly understand how to aggregate stratum specific AF[p]s, first write
AF[p] = (number of excess cases) / (number of all cases)
Then, with Σ denoting summation over the strata that form the aggregate, dis-aggregate the numbers of cases so that
AF[p] = Σ(excess cases in stratum) / Σ(cases in stratum)
Finally, rewrite this as
AF[p] = Σ {(cases in stratum / all cases) × (excess cases in stratum / cases in stratum)} = Σ {(cases in stratum / all cases) × stratum specific AF[p]}
that is, as a weighted average of stratum specific AF[p]s, with the numbers of cases in each stratum as weights.
Figure 3 illustrates the correct calculation. Whether one arrives at the stratum specific AF[p]s “by P or by C”, one must average them using the stratum specific numbers (or proportions) of cases as
weights. The figure also illustrates the commonly used, but incorrect practice of coupling adjusted (RR-1)s with the marginal distribution of exposure in the overall source.
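A small numerical sketch of the aggregation rule, with invented figures rather than the data of figure 3: the overall AF[p] is the average of the stratum specific AF[p]s weighted by each stratum's share of the cases, not formula [1P] applied once with adjusted rate ratios and the marginal exposure distribution.

```python
# Invented two-stratum example (not the article's figure 3 data), just to show the arithmetic.
# Each entry: (cases in the stratum, stratum-specific AF[p] already obtained "by P or by C").
strata = [(80, 0.30), (40, 0.10)]

total_cases = sum(c for c, _ in strata)
af_overall = sum((c / total_cases) * af for c, af in strata)
print(af_overall)   # (80/120)*0.30 + (40/120)*0.10 = 0.2333...
```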
The primary aim of this article is to promote understanding of the AF[p] formulas for a polytomous exposure, and for stratified data. To this end, you must begin with the simpler and more familiar,
but not fully understood, representations for an all or none exposure. The article also shows how the two seemingly very different representations can both be derived—without algebraic
manipulations—directly from the same diagram. A third aim is to increase awareness of the case-based formulas.
There are a number of possible explanations for the limited awareness and understanding of the case-based formulas. Some textbooks focus on the overall AF before (or without ever) dealing with the
specific AFs that are aggregated to create the overall AF. Also, the case-based formula is given in fewer textbooks, usually without a complete derivation. The usually cited source3 does not
explicitly derive it; instead it cites another source,22 which in turn cites an unpublished source. It was acknowledged (page 331)3 that the basis for the formula “may not be immediately obvious” and
a cryptic explanation was offered. The “fraction of a fraction” formula is derived 11 years later (equation A.2.17, page 256)10 but in a seemingly different context, and using a purely algebraic
manoeuvre that does not reveal the logic behind it (see ). The lengthy way in which the formula continues to be algebraically derived in subsequent articles and textbooks4 5 7 11 23 suggests that the
simplicity and “immediate obviousness” of its structure have not been fully or widely understood.
The most important practical benefit of the case-based version is AF[p] estimation from stratified, or individually matched, case-control studies22 where the classic formula is inappropriate.19 24 25
Variations on this version (see) are also increasingly used to derive—and quantify the sampling variability of—estimators of AF[p] from stratified data or multiple logistic regression.4-8
The case-based structure also has conceptual benefits. Firstly, it emphasises that AFs refer to cases, and that the observed numbers of cases are the denominators of these AFs. This is in contrast
with most epidemiological calculations, where numbers of cases serve as the numerators of statistics. As exemplified in figure 3, this difference has important implications for how to correctly
aggregate stratum specific AF[p]s—no matter which version of the formula (classic or case-based) is used to calculate the stratum specific AF[p]s. Failure to appreciate this focus on cases may
explain why authors, such as those of reference 13, incorrectly couple adjusted (RR-1)s with the marginal distribution of exposure in the source via formulas [1P] and [2P]. This is a common mistake.
14 It is of note, and testimony to the naturalness of the case-based representation, that in the example in figure 3, the weighted average of the stratum specific AF[p]s (the AF[p]s having been
derived from adjusted RRs), using the stratum specific numbers (or proportions) of cases as weights, yields the correct AF[p]for the aggregate.
Secondly, although originally derived for empirical estimates from case-control studies, the versatility of the case-based representation can be used in a broader context—for example, to structure
the very AF[p] parameter itself.3 8 (section 2, page 866). For these practical and conceptual reasons, the case-based representation needs to be better understood, and not presented in textbooks and
articles simply as an algebraic fact.
Greater awareness and understanding of the formulas for polytomous exposure should also decrease computational errors. Even in the absence of confounding, the repeated application of formula [1P],
once for each exposure category separately13 yields an overestimate of AF[p]. As is made obvious by figure 2, a single application of formula [2P] yields the correct estimate.
Although its purpose was “multivariate” from the outset, the paper by Eide and Gefeller26 uses a graphical depiction similar to that presented here. It is helpful to think of all of the covariate
patterns shown in figure 1 of the Eide and Gefeller article as different levels of a single composite factor, in the spirit of the single polytomous factor in figure 2 of the present article.
The heuristic approach also gives insights into more realistic, and more complex, scenarios than are discussed in introductory textbooks. Indeed, it was questions from a colleague, in a study
involving four levels of risk, and the consequences of switching, not to the lowest risk category, but to lower risk categories, that prompted the author to produce diagrams similar to figure 2.
Readers are referred elsewhere10 (appendix 2.3, page 254–6)12 27 28 for more on this topic.
Technical details on estimating AFp from regression models can be found in papers by Benichou7 and Greenland and Dresher (page 1763).8 Benichou warns that his method for calculating the precision of
the estimates is “complex”. The methods used by Greenland and Dresher are more tractable, but the matrix notation and associated calculations may still require the help of a statistician. The portion
of the article by Oja et al 29 that deals with conventional logistic regression modelling, and in particular the hand-workable example in the appendix, is a useful point of departure before tackling either of
these papers. If you wish to avoid matrix calculations, then bootstrap confidence intervals are an attractive alternative.30
The author is grateful to Drs Robert Allard, Jean-François Boivin, Michael Kramer and Olli Miettinen for comments and advice on the various versions of this manuscript.
Even for those who prefer algebra to pictures, the majority of the published derivations of formula 1C are much more tedious than they need be. The simplest algebraic derivation is found in
Miettinen's text (page 256).10 It uses the same “fraction of a fraction” logic used to determine that the percentage of eligible subjects who respond to a survey is the percentage of eligible
subjects contacted × the percentage of contacted subjects who respond.
The version of the “case-based” structure that has become popular as a point of departure for multivariate applications in the past 15 years is, for the three exposure levels example (equation 12,
page 327).3
AF[p] = 1 − {CF[0]/RR[0] + CF[1]/RR[1] + CF[2]/RR[2]} [2C*]
where RR[0] = 1. One can algebraically derive formula 2C* from formula 2C, by rewriting each specific AF in terms of the corresponding RR, and simplifying terms. However, it is more instructive to
view the process as taking the complement of the “expected” fraction. In figure 2, the overall “expected” fraction is the sum of three fractions: Of the (50) cases in exposure category 0, the
fraction of “expected” cases is 1; of the (42 and 34) cases in categories 1 and 2, the corresponding fractions are 1/RR[1] = 10/14th, and 1/RR[2] = 10/17th. Thus, the fraction of the overall cases
that are “expected” is the weighted average of the fractions 1, 10/14, and 10/17, with weights given by the case fractions CF[0] = 50/126, CF[1] = 42/126, and CF[2] = 34/126. Thus, the complement of AF[p] is
(50/126) × 1 + (42/126) × 10/14 + (34/126) × 10/17 = 100/126,
leading immediately, by subtraction, to formula 2C*. Note, however, that unlike formula 2C, this “complement” method requires summation over all levels of the exposure.
Imagine that, instead of aggregating exposure categories, you continue to subdivide them, to the point that each case defines its own exposure category (this would happen if the exposure takes on
values on a continuum, or is a multivariate “x” vector in a multiple regression). Then by the “distributive” property21 of the AF[p],
AF[p] = (1/n) × Σ {(RR[i] − 1) / RR[i]} (with n the total number of cases)
where RR[i] is the (unconfounded) RR for the covariate pattern of the i-th case, and where the summation is over all of the individual cases. This structure is useful in complex designs30 and when
constructing AF[p] from a logistic regression in which each case has a unique covariate pattern.
• Funding: this work was supported by an operating grant from the Natural Sciences and Engineering Research Council of Canada.
• Conflicts of interest: none.
|
{"url":"http://jech.bmj.com/content/55/7/508.long","timestamp":"2014-04-18T22:17:21Z","content_type":null,"content_length":"217476","record_id":"<urn:uuid:3d953d14-106c-4816-82f5-612201ae4bec>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Write the negation of the statement, "you are not 16 years old." I'm not really sure how to do this. Can anyone help?
"Not 16 years old" I think
negation just means opposite, so the negation of the statement would be "you are 16 years old"
@zonazoo gave the correct answer. Negation just means find the opposite
|
{"url":"http://openstudy.com/updates/5093353be4b0b86a5e530380","timestamp":"2014-04-17T18:54:37Z","content_type":null,"content_length":"32530","record_id":"<urn:uuid:6e53e9b7-c149-4ab1-a2d8-6eac85ea8a9a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Is there a compendium of examples of the "power" of deductive logic?
Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Sat Dec 10 10:49:43 EST 2005
On Fri, 9 Dec 2005, Richard Haney wrote:
> So I am wondering about the possibility of stunning examples that can
> serve as paradigms for the power of logic in empirical applications.
> In other words I am looking for empirical evidence that demonstrates
> unequivocally that deductive logic is indeed worthy of study.
How about Newton's derivation of Kepler's laws of planetary motion, using
as premises the law of gravitation, the second law of motion, and whatever
mathematical axioms are needed for the calculus involved?
Another example, drawing on less technically demanding mathematics (if any
at all) would be the deduction underlying the central Darwinian insight.
This is the insight that whenever the three conditions
Variation
Heritability
Differential reproduction
hold for a population of self-reproducing entities, there will be
Adaptive evolution.
(Note that this is the case even if the individuals never die.)
Neil Tennant
Posts by tara
Total # Posts: 311
social studies
I need information on the process charles kettering went through to invent the electric cash register or the car self starter and did he have problems inventing them. I can only find out that he did
invent them. Here are some sites that will help you. http://en.wikipedia.org/w...
After doing an experiment you learn that 70 percent of the carrot seeds from Company A germinated within 10 days, while only 50 percent of the seeds from company B germinated within 10 days. What was
the comparative germination ratio? looks like a simple 7:5 ratio to me That...
if i have to do reserch about french guiana and i cant find information on the internet or books. what should i do Thank you for using the Jiskha Homework Help Board. Not to worry on French Guiana! I
have more if you need it, but here's a start for you: http://en.wikipedia...
math check please
1.Write 4x4x4x4x4 as a power of 4. Answer: 4^5 2. Evaluate. 72 divide 8-9 divide 3 Answer: 0 3. Give the place value for the indicated digit 8 in the number 138,350 Answer: Thousands 4. Fencing a
rectangular field that measures 37 ft by 27 ft How much fence should i buy? Answe...
math help please
I am having a hard time understanding reciprocals with fractions. I would like to find a web site to help me understand this better. I have a problem that I need to know if I have done right. The
question is: find the reciprocal of (-1/7)(14/3)(9). This is how I did the problem i...
math check
1.A caterer made 150 sandwiches for 120 people. How many sandwiches are there for each person? A 1.25 or 1 1/4 sandwich/person 2.On the blueprint of Lauren's new office building, the scale is 3 in.
equals 8 ft. What will be the actual length of Lauren's office if it me...
math check
Would some please check these and see if I am doing them right thank you 1. Gasoline is about $2 per gallon. You use about 1 1/2 gallons each day driving to and from work. How much should you budget
for gasoline over the next month (about 20 workdays)? Answer 1 1/2 x $2.00 = $...
math check
Would some please check these and see if I am doing them right thank you 1. Gasoline is about $2 per gallon. You use about 1 1/2 gallons each day driving to and from work. How much should you budget
for gasoline over the next month (about 20 workdays)? Answer 1 1/2 x $2.00 = $...
Julio works as a quality control expert in a beverage factory. The assembly line that he monitors produces about 20,000 bottles in a 24-hour period. Julio samples about 120 bottles an hour and
rejects the line ifhe finds more than 1/50 of the sample to be defective. About how ...
idk PLEASE HELP ME!!!!!!!!!!
orientalism of muslim and arab american
I am also a student at UOP and I will say that this is not the first class I have had problems with questions that they don't provide the answer to in any of the readings. I have a 4.0 and I am still
having problems with this question!
Field of fractions of an integral domain
Domain Creation
Dom::Fraction(R) creates a domain which represents the field of fractions of the integral domain R.
An element of the domain Dom::Fraction(R) has two operands, the numerator and denominator.
If Dom::Fraction(R) has the axiom Ax::canonicalRep (see below), the denominators have unit normal form and the gcds of numerators and denominators cancel.
The domain Dom::Fraction(Dom::Integer) represents the field of rational numbers. But the created domain is not the domain Dom::Rational, because it uses a different representation of its elements.
Arithmetic in Dom::Rational is much more efficient than it is in Dom::Fraction(Dom::Integer).
Element Creation
If r is a rational expression, then an element of the field of fractions Dom::Fraction(R) is created by going through the operands of r and converting each operand into an element of R. The result of
this process is r in the form x/y, where x and y are elements of R. If R has Cat::GcdDomain, then x and y are coprime.
If one of the operands can not be converted into the domain R, an error message is issued.
Example 1
We define the field of rational functions over the rationals:
F := Dom::Fraction(Dom::Polynomial(Dom::Rational))
and create an element of F:
a := F(y/(x - 1) + 1/(x + 1))
To calculate with such elements, use the standard arithmetical operators +, -, *, / and ^.
Some system functions are overloaded for elements of domains generated by Dom::Fraction, such as diff, numer or denom (see the description of the corresponding methods "diff", "numer" and "denom" below).
For example, to differentiate the fraction a with respect to x, enter diff(a, x).
If one knows the variables in advance, then using the domain Dom::DistributedPolynomial yields a more efficient arithmetic of rational functions:
Fxy := Dom::Fraction(
  Dom::DistributedPolynomial([x, y], Dom::Rational)
):
b := Fxy(y/(x - 1) + 1/(x + 1)):
Example 2
We create the field of rational numbers ℚ as the field of fractions of the integers:
Q := Dom::Fraction(Dom::Integer):
Another representation of ℚ in MuPAD® is the domain Dom::Rational, where the rationals are elements of the kernel domains DOM_INT and DOM_RAT. Therefore it is much more efficient to work with Dom::Rational
than with Dom::Fraction(Dom::Integer).
Parameters:
R: an integral domain, i.e., a domain of category Cat::IntegralDomain
r: a rational expression, or an element of R
┃ "characteristic" │ is the characteristic of R. ┃
┃ "coeffRing" │ is the integral domain R. ┃
┃ "one" │ is the one of the field of fractions of R, i.e., the fraction 1. ┃
┃ "zero" │ is the zero of the field of fractions of R, i.e., the fraction 0. ┃
Mathematical Methods
_divide(x, y)
This method overloads the function _divide for fractions, i.e., one may use it in the form x / y or in functional notation: _divide(x, y).
This method overloads the function _invert for fractions, i.e., one may use it in the form 1/r or r^(-1), or in functional notation: _invert(r).
_less(q, r)
An implementation is provided only if R is an ordered set, i.e., a domain of category Cat::OrderedSet.
This method overloads the function _less for fractions, i.e., one may use it in the form q < r, or in functional notation: _less(q, r).
_mult(q, r)
If q is not of the domain type Dom::Fraction(R), it is considered as a rational expression which is converted into a fraction over R and multiplied with q. If the conversion fails, FAIL is returned.
The same applies to r.
This method also handles more than two arguments. In this case, the argument list is split into two parts of the same length, each of which is multiplied with the function _mult. The two results are
then multiplied again with _mult and the result is returned.
This method overloads the function _mult for fractions, i.e., one may use it in the form q * r or in functional notation: _mult(q, r).
This method overloads the function _negate for fractions, i.e., one may use it in the form -r or in functional notation: _negate(r).
_power(r, n)
This method overloads the function _power for fractions, i.e., one may use it in the form r^n or in functional notation: _power(r, n).
_plus(q, r, …)
If one of the arguments is not of the domain type Dom::Fraction(R), then FAIL is returned.
This method overloads the function _plus for fractions, i.e., one may use it in the form q + r or in functional notation: _plus(q, r).
An implementation is provided only if R is a partial differential ring, i.e., a domain of category Cat::PartialDifferentialRing.
This method overloads the operator D for fractions, i.e., one may use it in the form D(r).
This method overloads the function denom for fractions, i.e., one may use it in the form denom(r).
diff(r, u)
This method overloads the function diff for fractions, i.e., one may use it in the form diff(r, u).
An implementation is provided only if R is a partial differential ring, i.e., a domain of category Cat::PartialDifferentialRing.
The factors u, r[1], …, r[n] are fractions of type Dom::Fraction(R), the exponents e[1], …, e[n] are integers.
The system function factor is used to perform the factorization of the numerator and denominator of r.
This method overloads the function factor for fractions, i.e., one may use it in the form factor(r).
An element of the field Dom::Fraction(R) is zero if its numerator is the zero element of R. Note that there may be more than one representation of the zero element if R does not have Ax::canonicalRep
This method overloads the function iszero for fractions, i.e., one may use it in the form iszero(r).
This method overloads the function numer for fractions, i.e., one may use it in the form numer(r).
The returned fraction is normalized (see the methods "normalize" and "normalizePrime").
Conversion Methods
convert_to(r, T)
If the conversion fails, FAIL is returned.
The conversion succeeds if T is one of the following domains: Dom::Expression or Dom::ArithmeticalExpression.
Use the function expr to convert r into an object of a kernel domain (see below).
The result is an object of a kernel domain (e.g., DOM_RAT or DOM_EXPR).
This method overloads the function expr for fractions, i.e., one may use it in the form expr(r).
The method TeX of the component ring R is used to get the TeX-representations of the numerator and denominator of r, respectively.
Technical Methods
normalize(x, y)
Normalization means to remove the gcd of x and y. Hence, R needs to be of category Cat::GcdDomain. Otherwise, normalization cannot be performed and the result of this method is the fraction x/y.
normalizePrime(x, y)
In rings of category Cat::GcdDomain, the elements x and y are assumed to be relatively prime. Hence, there is no need to normalize the fraction x/y.
In rings not of category Cat::GcdDomain, normalization of elements cannot be performed and the result of this method is the fraction x/y.
See Also
MuPAD Domains
Glenarden, MD Math Tutor
Find a Glenarden, MD Math Tutor
...Instead, test-takers earn points for properly analyzing the text they are given. I achieved a perfect 800 score for Reading when I was 16, and began teaching others when I was an
undergraduate. With six years, and a lot of intuitive insight, under my belt, I can help your student maximize his or her potential, and get the right start to college applications.
37 Subjects: including algebra 1, algebra 2, study skills, biology
...Between 2006 and 2011 I was a research assistant at the University of Wyoming and I used to cover my advisor’s graduate level classes from time to time. And, since August 2012 I have tutored
math (from prealgebra to calculus II), chemistry and physics for mid- and high-school students here in th...
14 Subjects: including statistics, GED, algebra 1, algebra 2
I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a
single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi...
15 Subjects: including prealgebra, probability, algebra 1, algebra 2
I am a graduate student currently pursuing a degree in nutritional sciences I have my undergraduate degree in biochemistry. I also have an extensive experience as a tutor at both the elementary
and tertiary level of education. I am very comfortable with using information technology in teaching and learning.
11 Subjects: including calculus, geometry, biochemistry, algebra 1
...In my high school, which was for the gifted students, we were focused more on studying Math and Sciences than the other subjects, and it helped me to be a good math tutor when I was a college
student. While I was studying economics in the master's program, I was a teaching assistant to Principle...
14 Subjects: including prealgebra, Korean, probability, algebra 1
How can I really motivate the Zariski topology on a scheme?
First of all, I am aware of the questions about the Zariski topology asked here and I am also aware of the discussion at the Secret Blogging Seminar. But I could not find an answer to a question that
bugged me right from my first steps in algebraic geometry: how can I really motivate the Zariski topology on a scheme?
For example in classical algebraic geometry over an algebraically closed field I can define the Zariski topology as the coarsest $T_1$-topology such that all polynomial functions are continuous. I
think that this is a great definition when I say that I am working with polynomials and want to make my algebraic set into a local ringed space. But what can I say in the general case of an affine
scheme?
Of course I can say that I want to have a fully faithful functor from rings into local ringed spaces and this construction works, but this is not a motivation.
For example for the prime spectrum itself, all motivations I came across so far are as follows: well, over an algebraically closed field we can identify the points with maximal ideals, but in general
inverse images of maximal ideals are not maximal ideals, so let's just take prime ideals and...wow, it works. But now that I know that one gets the prime spectrum from the corresponding functor (one
can of course also start with a functor) by imposing an equivalence relation on geometric points (which I find very geometric!), I finally found a great motivation for this. What is left is the
Zariski topology, and so far I just came across similar strange motivations as above...
ag.algebraic-geometry soft-question
Can you explain what you mean by imposing an equivalence relation on geometric points? My understanding is that this only makes sense for algebras over a field and presumably you want to talk about
commutative rings in general. – Qiaochu Yuan Dec 8 '09 at 18:18
Let k be a commutative ring and A a k-algebra. Let t:A -> K, t':A -> K' be two geometric points (i.e. K,K' are fields). Say that t and t' are equivalent if and only if there exists a third
geometric point s:A -> L and k-algebra morphisms f:K -> L, f':K' -> L such that s = ft = f't'. By taking kernels of the morphisms t you get a bijection between the equivalence classes of this
relation and the prime ideals of A. And this is my first motivation, why I take the spectrum of A! (I think of this as compressing the corresponding zero set functor into something which still
contains all I want). – user717 Dec 8 '09 at 18:48
I'm not sure I understand. If t, t' are not required to be surjective then their kernels are already prime ideals and not maximal ideals. If t, t' are required to be surjective then it is not
possible to recover the prime ideals of A from its maximal ideals; consider the case A = k[[x, y]], k a field. Am I misunderstanding something? – Qiaochu Yuan Dec 8 '09 at 19:08
Yes, I think you misunderstand something. It's precisely my point that the kernels are prime ideals, because this is what gives you the bijection between equivalence classes (wrt the relation
described above) of geometric points and prime ideals! you can read about this (probably in exactly the same words) in the 1971 edition of EGA, introduction, nr. 13.! – user717 Dec 8 '09 at 23:33
I just realized that normally 'geometric point' means that the field K is algebraically closed. I used the terminology of the introduction to EGA and there 'geometric point' is what I was talking
about above...Perhaps this was the source of your confusion. Sorry. – user717 Dec 9 '09 at 14:24
10 Answers
Here is what Eisenbud and Harris ('The Geometry of Schemes') have to say on this (page 10):
[comments by myself are in square brackets]
"By using regular functions, we make Spec R [R being a arbitrary comm. ring with 1] into a topological space; the topology is called the Zariski topology. The closed sets are defined as
For each subset S ⊂ R, let
V (S) = {x ∈ Spec R | f (x) = 0 for all f ∈ S} = {[p] ∈ Spec R | p ⊃ S}.
The impulse behind this definition is to make each f ∈ R behave as much like a continuous function as possible. Of course the fields κ(x) [the residual fields at x ∈ Spec R] have no
topology, and since they vary with x the usual notion of continuity makes no sense. But at least they all contain an element called zero, so one can speak of the locus of points in Spec R on
which f is zero; and if f is to be like a continuous function, this locus should be closed. Since intersections of closed sets must be closed, we are led immediately to the definition above:
V (S) is just the intersection of the loci where the elements of S vanish."
In the category of sets there is no such thing as the initial local ring into which R maps, i.e. a local ring L and a map f:R-->L such that any map from R into a local ring factors through it.
But a ring R is a ring object in the topos of Sets. Now if you are willing to let the topos vary in which it should live, such a "free local ring on R" does exist: It is the ring object in
the topos of sheaves on Spec(R) which is given by the structure sheaf of Spec(R). So the space you were wondering about is part of the solution of forming a free local ring over a given
ring (you can reconstruct the space from the sheaf topos, so you could really say that it "is" the space).
Edit: I rephrase that less sloppily, in response to Lars' comment. So the universal property is about pairs (Topos, ring object in it). A map (T,R)-->(T',R') between such is a pair
(adjunction $f_*:T \leftrightarrow T':f^* $ , morphism of ring objects $f^*R'\rightarrow R$).
Note that by convention the direction of the map is the geometric direction, the one corresponding to the direction of a map topological spaces - in my "universal local ring" picture I was
stressing the algebraic direction, which is given by $f^*$.
Now for a ring R there is a map $(Sh(Spec(R)), O_{Spec(R)})\rightarrow(Set,R)$: $f^* R$ is the constant sheaf with value R on Spec(R), the map $f^* R \rightarrow O_{Spec(R)}$ is given by
the inclusion of R into its localisations which occur in the definition of $O_{Spec(R)}$. This is the terminal map (T,L)-->(Set,R) from pairs with L a local ring. For a simple example you
might want to work out how such a map factors, if the domain pair happens to be of the form (Set,L).
This universal property of course determines the pair up to equivalence. It thus also determines the topos half of the pair up to equivalence, and thus also the space Spec(R) up to
homeomorphism.(end of edit)
An even nicer reformulation of this is the following (even more high brow, but to me it gives the true and most illuminating justification for the Zariski topology, since it singles out
just the space Spec(R)):
A ring R, i.e. a ring in the topos of sets, is the same as a topos morphism from the topos of sets into the classifying topos T of rings (by definition of classifying topos). There also is
a classifying topos of local rings with a map to T (which is given by forgetting that the universal local ring is local). If you form the pullback (in an appropriate topos sense) of these
two maps you get the topos of sheaves on Spec(R) (i.e. morally the space Spec(R)). The map from this into the classifying topos of local rings is what corresponds to the structure sheaf.
Isn't that nice? See Monique Hakim's "Schemas relatifs et Topos anelles" for all this (the original reference, free of logic), or alternatively Moerdijk/MacLane's "Sheaves in Geometry and
Logic" (with logic and formal languages).
This is a cool answer, but it seems completely contrary to what the OP was asking for. – Harry Gindi Feb 6 '10 at 0:41
In your first construction, would the "free local ring on R" also exist in the topos of sheaves on wrt the, say, fppf or etale topology on Spec R. If so, then this argument would not
"motivate" the Zariski topology, since there is still an (a priori) arbitrary choice involved. Now if the Zariski sheaves on Spec R were in some sense a minimal topos with the property
that the "free local ring on R" existed... – Lars Feb 6 '10 at 8:49
Ok, I was sloppy. Zariski sheaves are the "minimal" topos in the sense now explained above. Intuitively you give the ring R just enough space to spread out into a collection of local
rings. – Peter Arndt Feb 6 '10 at 14:13
Thanks for the edit...this is pretty neat. – Lars Feb 6 '10 at 16:50
3 And by the way, in the same fashion the etale topos gives you a universal Henselian ring. I haven't heard of any such thing for fppf, though. – Peter Arndt Feb 6 '10 at 17:27
If you buy into the idea that you want a topological model for your ring, then right away it becomes sensible to ask that any map Ring -> Top be functorial. Of course, m-Spec -- which is
already classically motivated -- doesn't lend itself to this, simply because there isn't an obvious way to use a ring homomorphism $f : A \to B$ to move maximal ideals around.
Such a map can move around ideals, both by extension and contraction, and this is a good first thing to investigate. Your choice of whether or not you want to push ideals forward or pull
them back will determine if your functor should be co- or contravariant.
To decide between these two, look at the initial and terminal objects in Ring, as well as the initial and terminal objects in Top. The ring {0,1} has a single ideal, hence its topological
space has (at most) a single point, hence should probably be sent to the final object in Top. The 0 ring has no ideals, hence its topological space has no points, hence should be sent to
the initial object in Top. Your hand has thus been forced: you need a contravariant functor, hence contraction is the thing to look at.
Now observe that $f^{-1}(\mathfrak m)$ need not be maximal, even if $\mathfrak m$ is, but it will be prime. You're thus immediately led to seeing if you can put a topology on Spec the same
way you did for m-Spec. It works, and you move on.
1 The way I think about contravariance is that one should always associate with an ideal I the homomorphism A \to A/I. Then it's the most natural thing in the world to take a homomorphism
B \to A and compose it with a homomorphism A \to A/I. The reason prime ideals pull back to prime ideals is that if A/I is an integral domain then B \to A/I is still a map with image an
integral domain; however, a subring of a field need not be a field, so maximal ideals don't pull back. – Qiaochu Yuan Dec 8 '09 at 18:10
ring {0,1} = k? – user2035 Dec 8 '09 at 18:14
Contravariance is also sensible in light of the ideal correspondence for quotients and localization, which suggest that the topological spaces associated to A/p and A_p should both be
subspaces of the space for A, a fact which runs in the opposite direction of the canonical maps A->A/p and A->A_p. – Tim Carstens Dec 8 '09 at 18:15
Here is my favourite way to motivate the Zariski topology: it is the coarsest topology which makes the functions defined (below) by ring elements "continuous" in the following sense:
Given a classical variety $V$ over $\mathbb{C}$ and a "regular function" $f:V\to\mathbb{C}$, one can identify the value
$f(x)$ with the the image of $f$ in $A_V/m_x$, where $A_V$ is the coordinate ring of $V$ and $m_x$ is the maximal ideal at $x$. This perpsective has the advantage of generalizing to any
ring, if you allow the target field to vary from point to point:
First, motivate working with primes instead of maximal ideals because primes pull back under ring maps, and because non-maximal primes act like "generic points" in classical algebraic
Next, at each prime ideal of a ring $p\triangleleft A$, you get a domain $A/p$ (which people often like to think of as living inside a residue field, $k(p):=Frac(A/p)$). Then an element
of the ring $a\in A$ defines a function $f_a$ on $Spec(A)$ taking values in various domains (or fields): $f_a(p):=image_{A/p}(a)$.
All domains/fields have the element $0$ in common, so it makes sense to talk about the vanishing set
$f_a^{-1}(0)=V(a):=$ {$p\in Spec(A) | a\in p$}, and these sets form a base for the closed sets of the Zariski topology.
Moreover, the finite unions and arbitrary intersections we need turn out to be extremely manageable because of the definition of primes, in a way that is intuitively meaningful in the
context above: For any collection of basic closed sets $V(a)$ with $a$ ranging over a set $E\subset Spec(A)$, we get
• $\bigcap_{a\in E} V(a) = V(E) := $ {$p\in Spec(A) | E \subset P $}. These are the primes where
"all of $E$ vanishes" in the residue domain/field.
• $V(a)\cup V(b) = V(ab)$, the primes where $a$ and $b$ "both vanish".
This is nice. Besides, this viewpoint allows to say every element of my ring is giving rise to a continuous function from Spec(A) to the corresponding domain/field. – Csar Lozano
Huerta Dec 10 '09 at 8:43
This looks like the easiest and most natural way to motivate Zariski topology. – Fei YE Feb 6 '10 at 19:11
EDIT: I'm rewriting this answer.
So, let's accept for now that considering the points of an affine scheme as a set with no topological structure makes sense.
Then each element of your ring either vanishes at a point (i.e. lies in the ideal) or doesn't. In any topology on this set that's compatible with the idea that ring elements are sections of
some bundle on it, this zero set had better be closed.
By general topology nonsense, there is a unique coarsest topology where these are closed sets, the one that uses their complements as a basis (this exists because they cover, and the
intersection of the sets for a and b is the set for ab). This is the Zariski topology.
1 Hm, does this make sense? Do you mean morphisms into arbitrary local ringed spaces (whether this makes sense or not)? – user717 Dec 8 '09 at 17:40
So for example, in my motivation for the Zariski topology on varieties over an algebraically closed field k, you first have to equip the field k with the Zariski topology to make this
definition well-defined. So, you need some "initial data". – user717 Dec 8 '09 at 17:45
Sorry if I am getting on your nerves :) But I don't see why this is well-defined. Because "scheme" already carries a topology (the Zariski topology) and for "coarsest" to make sense you
somehow need a (non trivial) set of topologies on your scheme. "My" case above makes sense because you just consider morphisms into one fixed space already carrying a topology and so you
consider all topologies on the domain satisfying a condition and then you take the coarsest one. – user717 Dec 8 '09 at 18:53
1 Note to passersby: comments above apply to an earlier version of the answer. – Ben Webster♦ Dec 8 '09 at 19:24
I don't think that you have to motivate the Zariski topology as anything other than a correct description of something that can be defined without it.
Suppose from the beginning that you are interested in commutative rings $R$, in general. Suppose that you would like to interpret the reverse category of ring homomorphisms as "geometry".
After all, many other kinds of geometry are reverse categories of ring homomorphisms, for certain kinds of rings and homomorphisms. For example, any smooth map $M \to N$ between smooth
manifolds is equivalent to an algebra homomorphism $C^\infty(N) \to C^\infty(M)$ with certain favorable properties. To keep things as general as possible, and in a strictly algebraic setting,
let's call any ring homomorphism $R \to S$ a geometric map in the opposite direction. Let's call the map $\text{Spec}(S) \to \text{Spec}(R)$, where for the moment "Spec" doesn't mean anything
other than reversing arrows. Then this is the category of affine schemes with scheme morphisms as the morphisms.
Having taken the plunge to call this an abstract "geometry", we can try to fill in a tangible geometry. Certainly maximal ideals should be called points in this "geometry", given the
motivating example of polynomial rings and their ring homomorphisms. (Proposition: A homomorphism between polynomial rings is equivalent to a polynomial map between affine spaces.) Should we
perhaps stop at the maximal ideals? That would be nice, if it were consistent. However, having committed ourselves to all ring homomorphisms between all rings as "geometry", it isn't
consistent. For example, $\mathbb{Z} \to \mathbb{Q}$ is an important ring homomorphism. However, the inverse image of the maximal ideal $\{0\}$ in the target is a prime ideal which is not maximal. As
this example suggests, prime ideals are the smallest viable extension of the maximal ideals in the contravariant geometry of ring homomorphisms.
They aren't the only viable extension. We could have taken all ideals instead of just the prime ideals. As far as I know, another Grothendieck and another Zariski would have defined the
points of an affine scheme using all ideals instead of just the prime ideals. In any case, the prime ideals work; the maximal ideals don't.
Okay, what about topology. I think that the Zariski topology is still, as you say, the coarsest $T_1$ topology available to be able to call regular maps, by definition the maps on prime
ideals induced by ring homomorphisms, continuous.
To summarize, the framework is a ruse to study all rings and ring homomorphisms in a geometric language.
You define points to be prime ideals and not just any ideals because you want to be able to localize at points, i.e. if $R$ is supposed to be the set of functions over the space of points
$SpecR$, you want to be allowed to speak of the "local functions" around each point. – Qfwfq Mar 8 '11 at 14:19
Fields are characterized in the category of rings by the property that an epimorphism whose source is a field must be an isomorphism. The prime ideals of a ring can be identified with the
epimorphisms from that ring to fields. From a categorical perspective, the prime spectrum therefore seems a more natural object than the max. spectrum.
I don't have a good explanation for the Zariski topology. It is the coarsest topology in which the maps induced by homomorphisms of rings are continuous and the origin in $\mathbf{A}^1$ is
closed. This is not very satisfying, though.
The real reason the Zariski topology is important is because descent works for lots of things. For example, a module defined Zariski locally over a ring can be glued to give a module over
the original ring. I wonder if the Zariski topology is the finest topology (i.e., finest Grothendieck topology defined by covers by [S:subfunctors:S] subobjects) in which descent works for
quasi-coherent sheaves. Does anyone know?
Isn't there faithfully flat descent for quasi-coherent sheaves? – Dinakar Muthiah Dec 9 '09 at 7:47
Maybe you should ask your last question as a new question. It is very interesting, but Grothendieck topologies may not be part of the original question here. – Konrad Voelkel Dec 9 '09
at 15:03
Dinakar: yes, but the fppf topology is not defined by subobjects (I changed the wording above to emphasize this). The induced topology defined by subobjects is the Zariski topology
(since an fppf embedding is an open embedding in the Zariski topology). – Jonathan Wise Dec 9 '09 at 18:03
Ah, I see your distinction. – Dinakar Muthiah Dec 10 '09 at 0:00
I think there are two questions here: (1) why study the prime spectrum, and (2) why think of it in terms of the axioms for a topology. (1) has been pretty well handled by other commenters.
And a number of them point out that (2) isn't really especially useful.
Part of the problem, according to a very interesting article I read by Grothendieck (maybe where he introduces dessins d'enfants?), is that the axioms for topological spaces are "wrong".
Alas, he doesn't know what the right axioms are; he's just sure that the field of general topology should never have existed. From that point of view, discovering that the prime spectrum
has a topology automatically isn't that interesting. (This guy has a contrary viewpoint on the field of general topology, but unsurprisingly I find Grothendieck more convincing.)
If you (or somebody else) can find that article, that would be awesome. – Kevin H. Lin Feb 6 '10 at 7:50
to Kevin: I heard from my advisor, he said the definition of grothendieck topology(not pretopology) "is equivalent to" all the possible existing topology in various branches of
mathematics – Shizhuo Zhang Feb 6 '10 at 12:41
Steven Gubkin found the article; see mathoverflow.net/questions/14634/what-is-a-topological-space – Allen Knutson Feb 8 '10 at 15:07
This article, and all of Grothendeick's writing, have just recently been removed from the Grothendieck Circle's webpage, apparently on Grothendieck's request (Wikipedia says the request
was made in January 2010 in a letter from Grothendieck to Illusie). Too bad. – Dan Ramras Jun 17 '10 at 5:00
2 Grothendieck's article can be found here: dl.dropbox.com/u/1963234/EsquisseEng.pdf – Gunnar Magnusson Nov 14 '10 at 1:50
This is probably a standard question, so allow me write down (what I think) the standard answer. Viewing any ring $R$ as a ring of functions (allowing nilpotents and all that) on the prime
spectrum $Spec R$, you naturally want all such functions (elements of $R$) to be continuous, thus it needs that, for any $f \in R$, $V(f) = [p \in Spec\; R: f(p) = 0 ] = [p \in Spec \; R: f
\; mod \; p = 0] = [p \in Spec \; R: f \in p]$ (I used [ ] to denote a set...don't know why { } doesn't work) being a closed set, where $p$ is a prime ideal and the 'value' of a function $f \
in R$ at a point $p$ is the image of the residual of $f$ mod $p$ in the field of fractions of $R/p$ (which is an integral domain). I think there's no difficulty in showing that the field of
fractions of $R/p$ is isomorphic to the residual field $k(p)$ of the local ring $O_p$, coinciding with the other definition of the 'value' of function $f \in R$.
Now any ideal $I$ of $R$ is generated by its elements, and so in order for a bunch of functions to be continuous, it needs to have the closed set $V(I)=[p \in Spec \; R: I \subset p]$.
Hm, what do you really mean by continuity of f? I mean, f is in general not a proper function into some topological space!? – user717 Dec 8 '09 at 16:50
1 It's really a motivation for Zariski topology on an affine scheme. If you look at affine varieties over a algebraically closed field $k$, then as you say you can see why Zariski is
motivated; similar here, although the only difference is that the residual fields are different (and can be of different characteristics), unlike a unique field $k$ for the "functions" to
take values. These are heuristics anyway, you may not need to take the motivations too seriously - otherwise why bother a formal theory. – user2148 Dec 8 '09 at 16:57
But when your "functions" aren't functions, then this motivation is just wrong, or am I wrong? Of course, I know about this intuition, but for exactly this reason I don't know what I
should do with it... – user717 Dec 8 '09 at 17:42
I don't think so. If you want to think of topological spaces in terms of their ring of functions, then for continuous real-valued functions into R a topological space has the initial
topology if and only if it's completely regular, and this is equivalent to the zero sets of continuous functions being a basis for the closed sets. So the Zariski topology makes Spec R
behave as if it were completely regular in the sense that elements of R separate points and closed sets. – Qiaochu Yuan Dec 8 '09 at 17:49
You can take a look at James Milne's notes on AG. It uses the ringed space to define an affine variety over an algebraically closed field, and the structure sheaf indeed consists of
regular functions (from the variety to the filed) on Zariski open subsets. The motivation from this example is clear enough, and the functions here are indeed functions. – user2148 Dec 8
'09 at 17:52
Here's an idea related to Tim Carstens' answer. As in Ben's answer we start from the point of view that it makes sense to think of $\text{Spec } R$ as a set. Given an ideal $I$ we have a
homomorphism $R \to R/I$ and by the correspondence theorem the prime ideals of $R/I$ are precisely the prime ideals of $R$ containing $I$, so we get an injection $\text{Spec } R/I \to \text
{Spec } R$. In any reasonable assignment of a topology to the spectrum this injection should be an embedding.
The Zariski topology accomplishes this by the simple requirement that the above map be both closed and continuous. Why is being closed a reasonable requirement? Well, an embedding of a
compact Hausdorff space into another is always closed, so if we think of the relationship between $\text{Spec } R$ and $R$ as analogous to the relationship between a compact Hausdorff
space $X$ and its C*-algebra then this is a natural requirement. (This is similar to the comment I made about completely regular spaces, so if you don't like that reasoning you probably
won't like this argument either.)
This is perhaps a little unsatisfying until it's shown that there are no other reasonable ways to turn the map $\text{Spec } R/I \to \text{Spec } R$ into an embedding (certainly the
discrete topology works, but I wouldn't call that reasonable), but hopefully somebody else has some insight here.
Course Number: MATH 1112
Course Title: Trigonometry and Analytic geometry
Hours Credit: 3 hours
Prerequisites: Math 1111 or consent of the department
Courses Description: Topics in analytic trigonometry and analytic geometry.
Text: Precalculus (2nd edition) by Robert Blitzer, Prentice-Hall
Learning Outcomes: Students should be able to demonstrate:
1. An understanding of how to find the values of the trigonometric functions from right triangles and circles
2. An understanding of how to graph the trigonometric functions
3. An understanding of how to prove trigonometric identities
4. An understanding of how to use the sum, difference, double-angle and half-angle formulas for sine and cosine
5. An understanding of how to solve triangle using the law of sines and law of cosines
6. An understanding of polar coordinates and graphs
7. An understanding of conic sections and the graphs
8. An understanding of how to analyze and solve applied problems
Topics: The following sections of Blitzer’s book will be covered:
4.1 Angles and Their Measures
4.2 Trigonometric Functions: The Unit Circle
4.3 Right Triangle Trigonometry
4.4 Trigonometric Functions for any Angle
4.5 Graphs of Sine and Cosine Functions
4.6 Graphs of Other Trigonometric Functions
4.7 Inverse Trigonometric Functions
4.8 Applications of Trigonometric Functions
5.1 Verifying Trigonometric Identities
5.2 Sum and Difference Formulas
5.3 Double-Angle and Half-Angle Formulas
5.4 Product to Sum and Sum to Product Formulas
5.5 Trigonometric Equations
6.1 Law of Sines
6.2 Law of Cosines
6.3 Polar Equations
6.4 Graphs of Polar Equations
6.5 Complex Numbers in Polar Form; DeMoivre’s Theorem
6.6 Vectors
6.7 The Dot Product
9.1 The Ellipse
9.2 The Hyperbola
9.3 The Parabola
9.5 Parametric Equations
9.6 Conic Sections in Polar Equations
Grading Method: To be determined by instructor.
[FOM] CfP Symposium Mathematics and Computation: Historical and epistemological issues [from Liesbeth De Mol]
Liesbeth De Mol martin at eipye.com
Fri Jan 11 13:11:14 EST 2013
The Centre for Logic and Philosophy of Science of
Ghent University was founded in 1993. On the
occasion of its 20th anniversary the Centre
organises an international Conference on Logic
and Philosophy of Science (CLPS13) on the themes
that are central to its research:
- Logical analysis of scientific reasoning processes
- Methodological and epistemological analysis of scientific reasoning processes
Conference dates: 16-18 September 2013
Keynote talks will be given by Diderik Batens
(the founder of the Centre), three logicians
(Natacha Alechina, Graham Priest and Stephen
Read) and three philosophers of science (Hanne
Andersen, Hasok Chang, and Jim Woodward).
We will also schedule parallel sessions with
contributed papers and special symposia with a
limited number of papers. I organise the symposium (#2) on
Mathematics and Computation: Historical and epistemological issues
Traditionally, mathematics is the home of
computation. This is one of the reasons why ``eo
ipso computers are mathematical machines''
(Dijkstra, 1985). Therefore, it is not surprising
that when the first electronic computers were
being developed it was to study and solve
mathematical problems. It was partly by way of
(applied) mathematics, viz. through the
simulation of mathematical models, that the other
sciences like biology, physics, etc started to feel the impact of the computer.
While several mathematicians have, in the
meantime, embraced massive computation, this
almost natural relation between computation and
mathematics is not always evaluated positively,
as witnessed, for instance, by some of the
commotion that still surrounds computer-assisted
proofs like the four-color theorem. Such
commotion lays bare some fundamental issues
within (the philosophy of) mathematics and
challenges our understanding of notions such as
proof, mathematical understanding, abstraction,
etc. Because of this natural and problematic
relation between computation, computers and
mathematics, the impact of computation and
computers on mathematics, and vice versa, is far from trivial.
The aim of this special session is to bring
together researchers to reflect on this relation
by way of a historical and/or epistemological
analysis. We welcome contributions from
mathematicians, computer scientists, historians
and philosophers with a strong interest in
history and epistemology. Topics include but are not restricted to:
discrete vs. continuous mathematics
time and processes in mathematics
mathematical software systems (e.g. Mathematica, Maple, etc)
computer-assisted proofs (e.g. Hales' proof)
"experimental" mathematics
computation before or without the electronic computer
numerical tables
role of programs in mathematics
on-line mathematics (e.g. Polymath or Sloane's encyclopedia)
mathematical style(s)
If you want to present a paper at this symposium,
please upload an abstract in PDF format (between 500 and 1000 words) to:
before 1 April 2013.
You will be asked to choose between one of the following submission categories:
- Logical analysis of scientific reasoning processes
- Methodological and epistemological analysis of scientific reasoning processes
- Symposium submission
Select the last option and mention the symposium
number in the title of your abstract.
If you do not have an EasyChair account you can create one here:
Unfortunately, we cannot offer any financial
support for symposium speakers. Neither can we waive the registration fee.
All abstracts for symposia will be refereed by
the organisers and other members of the programme
committee. Notification of acceptance will be given by 15 May 2013.
All further information (e.g. accommodation,
registration, maps) can be found at the
conference website: <http://www.clps13.ugent.be/>http://www.clps13.ugent.be/.
The programme will be available on the website by 1 July 2013.
Patent US6707790 - Enforceable and efficient service provisioning
1. Field of the Invention
This invention relates to allocation of network resources and Quality of Service (QoS) management for regulated traffic flows.
2. Description of the Related Art
A key challenge for future packet networks is to efficiently multiplex bursty traffic flows while simultaneously supporting Quality of Service (QoS) objectives in terms of throughput, loss
probability, and end-to-end delay. At one extreme, performance can be assured even in the worst case via deterministic service. In addition to its absolute guarantee, deterministic service also has
the advantage of enforceability: when the network guarantees QoS based on the client's worst-case descriptions of their traffic, the network can easily verify that these traffic specifications are
satisfied. On the other hand, the most important drawback of a deterministic service is that, by its very nature, it must reserve resources according to a worst-case scenario, and hence has
fundamental limits in its achievable utilization.
To overcome the utilization limits of deterministic service, statistical multiplexing is introduced to exploit the fact that the worst-case scenario will occur quite rarely. To account for such
statistical resource sharing, the traffic flows' rate fluctuations and temporal correlation must be characterized. In the literature, such properties are often represented via stochastic traffic
models, including Markov Modulated, Self-Similar, and others. However, in a shared public network with misbehaving or malfunctioning users, provisioning resources according to such stochastic source
characterizations incurs a significant risk, as the underlying assumptions of the model are inherently difficult for the network to enforce or police.
Various disadvantageous attempts to address the fundamental conflicting requirement for deterministic traffic models to isolate and police users, and statistical multiplexing to efficiently utilize
network resources, have considered network services only for single time scale flows. When traffic flows have rate variations over even two time scales, significant inaccuracies are encountered when
applying a single time scale solution. Consequently, for traffic flows more complex than periodic on-off, new techniques are needed for enforcing network services.
Providing statistical services for deterministically policed traffic flows encounters the problem of statistical characterization based on deterministic policing parameters. Specifically, one usually
needs to compute the marginal distributions of the flows' rate in different time scales in order to perform the call admission control.
There are competing considerations in a network. The network needs to carry as much traffic as possible, and it must support the QoS requirements of each individual flow. In other words, while as
much traffic as possible is multiplexed together, the traffic is policed or regulated such that the traffic flows do not interfere with each other. As such, there continues to be a need for a
solution to the aforementioned problems.
The invention provides a solution to the problem of allocating network resources and maintaining a level of Quality of Service (QoS) such that traffic flows in the network do not interfere with each
other, and the network can provide an efficient statistical multiplexing operation. According to the principles of the invention, an admission control operation characterizes the statistical behavior
of every flow using a maximum entropy function. If the traffic envelope, or the upper bound of the traffic rates of all time scales, are known, but nothing else about the traffic is known, the
distribution of the traffic flow can be approximated by maximizing the entropy. An upper bound of the mean rate, which can be derived from the traffic envelope, and a lower bound of the traffic
rates, which is zero, are always known. The entropy of the distribution is maximized subject to the considerations of mean rate, minimum rate, and maximum rate.
In accordance with an illustrative embodiment of the invention, a network element is deployed in a network of interconnected elements, such as the Internet. The network element includes a shared
buffer multiplexer connected to a link, a set of regulators connected to the shared buffer multiplexer, and an admission control unit communicatively coupled to the set of regulators and the shared
buffer multiplexer. Each of the set of regulators has an input traffic flow policed by the regulator. Each traffic flow to a regulator has a varying rate of flow at different time scales.
In order to control admission of the traffic flow through the regulator to the link in the network element, the admission control unit evaluates each traffic flow. For each traffic flow, the
admission control unit performs a series of process steps. Each traffic flow signals its traffic envelope to its regulator and to the admission control unit. The traffic envelope varies over
different time scales. The admission control unit determines the maximum-entropy distribution for the flow at different time scales by maximizing the entropy of the distribution of the rate of the
flow at each time scale. The admission control unit then approximates the rate variance of the maximum-entropy distribution for a given maximum rate. The admission control unit decides whether the
flow should be admitted into the network using the approximated rate variance.
Other aspects and advantages of the invention will become apparent from the following detailed description and accompanying drawings, illustrating by way of example the features of the invention.
FIG. 1 is a block diagram illustrating a network element in accordance with the principles of the invention.
FIG. 2 is a process flow diagram illustrating a call admission process in accordance with the principles of the invention.
FIG. 3 is a block diagram of an example computer system for implementing the invention.
Network elements, such as routers or switches, on the Internet or other networks encounter conflicting requirements for both deterministic traffic models (i.e., worst case traffic parameters) in
order to effectively isolate and police users, and statistical resource allocation to efficiently share and utilize network resources, such as bandwidth and buffer capacity. In accordance with the
principles of the invention, a maximum-entropy method is utilized to characterize each traffic flow in a network element. Since user behavior is unknown except for a set of policing parameters,
nothing is assumed about the rate's distribution besides those parameters. Consequently, the uncertainty of the rate distribution, i.e., the distribution's entropy, is maximized in accordance with
the principles of the invention. As shown in the drawings for purposes of illustration, the rate distribution of the traffic flow is approximated by maximizing the entropy of the source. A Gaussian
approximation is used to characterize the aggregate traffic and calculate the loss probability using admission control techniques. An admission control decision is made by determining if loss
probability requirements can be met by admitting a new flow while ensuring that all calls in progress and the new one to be admitted will be guaranteed the requested Quality of Service (QoS). The
invention can be applied in the call admission control within such network elements as well as in network planning and dimensioning.
The preferred embodiment is discussed in detail below. While specific steps, configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A
person skilled in the relevant art will recognize that other steps, configurations and arrangements can be used without departing from the spirit and scope of the invention.
FIG. 1 is a generalized block diagram illustrating a network element 100. The network element includes a shared buffer multiplexer 102 connected to a link 104. The link has a bandwidth associated
therewith. The network element includes a set of j regulators, R[1 ]to R[j], where j is an integer. A respective flow, F[1 ]to F[j], is received by each of the set of regulators, R[1 ]to R[j]. The
regulators police the flows and make sure that the flows conform to their own specified parameters. The network element includes an admission control unit 106. The admission control unit 106 is
communicatively coupled to the set of regulators R[1 ]to R[j ]and the shared buffer multiplexer 102.
Signaling and connection management software resides in the admission control unit 106. The admission control unit 106 functions as the on-board processor, provides real-time control, and monitors
the status of the network element. Requests for set-up are accepted and calls are admitted based on the bandwidth available in the link 104, the available buffer, and the delay and loss probability
requirements of the flow.
The admission control unit includes a processor, which is subsequently described with respect to FIG. 3. The processor causes the admission control unit to regulate traffic flow through the set of
regulators into the link.
Each traffic flow has a statistical envelope. The statistical envelopes of the traffic flows are approximated using their policing parameters. The only information available is the deterministic
envelope (and the mean rate derived from it). Thus, the problem is to approximate the rate variance, RV[j](t), or more generally the distribution of B[j](t), given b[j](t) (which is the upper bound of the maximum rate) and φ[j] (which is the upper bound of the mean rate). To approximate the rate variance, the entropy of the probability density function is maximized. As a measure of uncertainty or randomness, entropy is defined in Eq. 1 as: $h(f) = -\int_S f(x)\,\ln f(x)\,dx$ (Eq. 1)
for a continuous random variable with probability density function f(x). The maximum entropy principle is that among the many possible distributions satisfying the known constraints, the one that
maximizes the entropy should be chosen.
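The excerpt does not spell out why this maximization yields the particular form used in Eqs. 2-4 below. The following standard Lagrange-multiplier sketch (an illustration, not part of the original text, and assuming the mean-rate bound φ[j] is active) fills in that step: maximize $h(f) = -\int_0^{v_j(t)} f(x)\,\ln f(x)\,dx$ subject to $\int_0^{v_j(t)} f(x)\,dx = 1$ and $\int_0^{v_j(t)} x\,f(x)\,dx = \varphi_j$. Setting the functional derivative of the Lagrangian to zero gives $-\ln f(x) - 1 + \lambda_0 + \lambda_1 x = 0$, i.e. $f(x) = A\,e^{\lambda_1 x}$ on $[0, v_j(t)]$, an exponential density truncated to the admissible rate range; the two constraints then fix $A$ and $\lambda_1$, which is exactly what Eqs. 4 and 3 below express.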
The maximum entropy principle is applied to approximate the distribution (and therefore the variance) based on the peak-rate envelope and information about the mean rate. The only information known
about the distribution is its range (from 0 to rate envelope v[j](t)=b[j](t)/t) and the upper bound of mean rate φ[j]=lim b[j](t)/t as t →∞. Based on the maximum entropy principle taught herein, we
have the following result: given a flow j's deterministic traffic rate envelope v[j](t) and traffic mean rate upper bound φ[j], the maximum entropy estimate of rate variance RV[j](t) is shown in Eq. 2: $\widehat{RV}_j(t) = \frac{A_j}{\lambda_{j,1}^3}\left[\left((\lambda_{j,1} v_j(t) - 1)^2 + 1\right) e^{\lambda_{j,1} v_j(t)} - 2\right] - \varphi_j^2$ (Eq. 2)
where λ[j,1] is the non-zero solution of: $e^{\lambda_{j,1} v_j(t)} + \frac{1 + \varphi_j \lambda_{j,1}}{(v_j(t) - \varphi_j)\lambda_{j,1} - 1} = 0$ (Eq. 3)
and A[j] is: $A_j = \frac{\lambda_{j,1}}{e^{\lambda_{j,1} v_j(t)} - 1}$. (Eq. 4)
Thus, according to the principles of the invention, each traffic envelope is represented as a set of policing parameters. The admission control unit applies Equations 3 and 4 to obtain the parameters
of the maximum-entropy distribution for each traffic flow. Then, the admission control unit approximates the rate variance for the traffic flow using Eq. 2. The admission control unit uses the rate
variance in the connection admission control decision. If the network element decides to admit the new flow, the flow is set up and the transmission begins. The regulator begins to police the flow so
that the flow conforms to its original parameters.
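For illustration only, here is a minimal numerical sketch of this procedure (not taken from the patent): it finds the non-zero λ[j,1] by matching the mean of the truncated exponential to φ[j], which is algebraically equivalent to Eq. 3, then evaluates Eqs. 4 and 2. The bisection bracket, tolerance, and the Python implementation are assumptions made for the example.

import math

def _mean_rate(lam, v):
    # Mean of the density A*exp(lam*x) on [0, v]; continuous at lam = 0.
    if abs(lam * v) < 1e-9:
        return v / 2.0
    e = math.exp(lam * v)
    return ((lam * v - 1.0) * e + 1.0) / (lam * (e - 1.0))

def solve_lambda(v, phi, iters=200):
    # Non-zero solution of Eq. 3, found by bisection on the (monotone) mean of the
    # truncated exponential; assumes 0 < phi < v and phi != v/2.
    lo, hi = -50.0 / v, 50.0 / v
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if _mean_rate(mid, v) < phi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rate_variance(v, phi):
    # Maximum-entropy rate-variance estimate of Eq. 2 for one flow at one time scale t.
    lam = solve_lambda(v, phi)
    e = math.exp(lam * v)
    A = lam / (e - 1.0)                                        # Eq. 4
    second_moment = (A / lam ** 3) * (((lam * v - 1.0) ** 2 + 1.0) * e - 2.0)
    return second_moment - phi ** 2                            # Eq. 2

# Example: peak-rate envelope v_j(t) = 10 Mb/s, mean-rate bound phi_j = 3 Mb/s.
rv = rate_variance(v=10.0, phi=3.0)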
FIG. 2 is a process flow diagram depicting the call admission process of the present invention according to a specific embodiment. In the specific embodiment, this process is executed by admission
control unit 106, which is attached to the network element 100. An example of the structure of admission control unit 106 is discussed in detail below.
Referring to FIG. 2, in step 200, the admission control unit receives a traffic envelope b(t) of a flow having a varying rate of flow. If a flow has a traffic envelope, b(t), then the maximum rate
over any interval length t is v(t)=b(t)/t.
In step 202, the admission control unit maximizes the entropy of the distribution of the rate of flow over all time scales. The entropy of the distribution is presented in Eq. 1. It can be shown that the maximum-entropy distribution is a truncated exponential distribution from 0 to v(t). The admission control unit obtains the parameters of the distribution by solving Equations 3 and 4. The admission
control unit therefore determines the maximum-entropy distribution for the flow.
In step 204, the admission control unit approximates the rate variance of the maximum-entropy distribution for the flow for a given maximum rate, b(t)/t, using the set of policing parameters, which
set contains the mean rate upper bound, the maximum rate upper bound, and the minimum rate bound, in accordance with Eq. 2. In step 206, the admission control unit uses the approximated rate variance
to determine whether the loss probability requirement can be met and control admission of the flow to the regulator. If the network element decides to admit the flow, the flow is set up and the
transmission begins in step 208. The regulator begins to police the flow so that the flow conforms to its original parameters.
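Again purely as an illustration (the excerpt does not reproduce the patent's exact loss-probability formula), step 206 can be sketched with a common Gaussian overflow approximation: fit a Gaussian to the aggregate of the admitted flows plus the candidate flow and compare the tail probability beyond the link capacity with the loss target. The function names and the Q-function form below are assumptions.

import math

def gaussian_loss_probability(means, variances, capacity):
    # Approximate P(aggregate rate > capacity) by fitting a Gaussian to the sum of
    # the flows (mean = sum of mean rates, variance = sum of rate variances).
    m = sum(means)
    sigma = math.sqrt(sum(variances))
    if sigma == 0.0:
        return 0.0 if m <= capacity else 1.0
    z = (capacity - m) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # Q(z)

def admit(existing, candidate, capacity, loss_target):
    # existing: list of (mean_rate, rate_variance); candidate: (mean, variance).
    means = [m for m, _ in existing] + [candidate[0]]
    variances = [v for _, v in existing] + [candidate[1]]
    return gaussian_loss_probability(means, variances, capacity) <= loss_target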
The admission control unit 106 of the present invention may be implemented using hardware, software or a combination thereof and may be implemented in a computer system or other processing system. In
an illustrative embodiment, the invention is directed toward one or more computer systems capable of carrying out the functionality described herein. An example computer system 300 is shown in FIG. 3. The computer system 300 includes one or more processors, such as processor 304. The processor 304 is connected to a communication bus 306. Various software embodiments are described in terms of
this example computer system. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or
computer architectures.
Computer system 300 also includes a main memory 308, preferably random access memory (RAM), and can also include a secondary memory 310. The secondary memory 310 can include, for example, a hard disk
drive 312 and/or a removable storage drive 314, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive 314 reads from and/or writes to a
removable storage unit 318 in a well-known manner. Removable storage unit 318 represents a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive
314. As will be appreciated, the removable storage unit 318 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative embodiments, secondary memory 310 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 300. Such means can include,
for example, a removable storage unit 322 and an interface 320. Examples of such include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory
chip (such as an EPROM, or PROM) and associated socket, and other removable storage units and interfaces which allow software and data to be transferred from the removable storage units to computer
system 300.
Computer system 300 can also include a communications interface 324. Communications interface 324 allows software and data to be transferred between computer system 300 and external devices. Examples
of communications interface 324 can include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via
communications interface 324 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface 324. These signals are
provided to communications interface 324 via a communications path 326. This communications path 326 carries signals and can be implemented using wire or cable, fiber optics, a telephone line, a
cellular phone link, an RF link and other communications channels.
In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as removable storage device 318, a hard disk installed in hard disk drive
312, and communications path 326. These computer program products are means for providing software to computer system 300.
Computer programs (also called computer control logic) are stored in main memory 308 and/or secondary memory 310. Computer programs can also be received via communications interface 324. Such
computer programs, when executed, enable the computer system 300 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the
processor 304 to perform the features of the present invention. Accordingly, such computer programs represent controllers of the computer system 300.
In an embodiment where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 300 using removable storage drive 314,
hard drive 312 or communications interface 324. The control logic (software), when executed by the processor 304, causes the processor 304 to perform the functions of the invention as described herein.
In another embodiment, the invention is implemented primarily in hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of the
hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s). In yet another embodiment, the invention is implemented using a
combination of both hardware and software.
While there have been illustrated and described what are considered to be example embodiments of the present invention, it will be understood by those skilled in the art and as technology develops
that various changes and modifications may be made, and equivalents may be substituted for elements thereof without departing from the true scope of the present invention. For example, the present
invention is applicable to all types of data networks, including, but not limited to, a local area network (LAN), a wide area network (WAN), a campus area network (CAN), a metropolitan area
network (MAN), a global area network (GAN), a system area network (SAN), and the Internet. Further, many other modifications may be made to adapt the teachings of the present invention to a
particular situation without departing from the scope thereof. Therefore, it is intended that the present invention not be limited to the various example embodiments disclosed, but that the present
invention includes all embodiments falling within the scope of the appended claims.
|
{"url":"http://www.google.com/patents/US6707790?dq=7,496,943","timestamp":"2014-04-17T06:54:04Z","content_type":null,"content_length":"83361","record_id":"<urn:uuid:ccdf3215-82c0-4b78-85b0-6f8cc9f79143>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about PIE on The Math Less Traveled
Tag Archives: PIE
[This is part six in an ongoing series; previous posts can be found here: Differences of powers of consecutive integers, Differences of powers of consecutive integers, part II, Combinatorial proofs,
Making our equation count, How to explain the principle of … Continue reading
|
{"url":"http://mathlesstraveled.com/tag/pie/","timestamp":"2014-04-20T00:38:53Z","content_type":null,"content_length":"50185","record_id":"<urn:uuid:47352e8e-f5ef-4d36-ad08-1ee73293d568>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
matching family
Locality and descent
A matching family of elements is an explicit component-wise characterizaton of a morphism from a sieve into a general presheaf.
Since such morphisms govern the sheaf property and the operation of sheafification, these can be discussed in terms of matching families.
Let $(C,\tau)$ be a site and $P:C^{\mathrm{op}}\to\mathrm{Set}$ a presheaf on $C$. Let $S\in \tau(c)$ be a covering sieve on an object $c\in C$ (in particular a subobject of the representable presheaf $h_c$).
A matching family for $S$ of elements in $P$ is a rule assigning to each $f:d\to c$ in $S$ an element $x_f\in P(d)$, such that for all $g:e\to d$
$P(g)(x_f) = x_{f\circ g}.$
Notice that $f\circ g\in S$ because $S$ is a sieve, so that the condition makes sense; furthermore the order of composition and the contravariant nature of $P$ agree. If we view the sieve $S$ as a
subobject of the representable $h_c$, then a matching family $(x_f)_{f\in S}$ is precisely a natural transformation $x:S\to P$, $x: f\mapsto x_f$.
An amalgamation of the matching family $(x_f)_{f\in S}$ for $S$ is an element $x\in P(c)$ such that $P(f)(x) = x_f$ for all $f\in S$.
Characterization of sheaves
$P$ is a sheaf for the Grothendieck topology $\tau$ iff for all $c$, for all $S\in\tau(c)$ and every matching family $(x_f)_{f\in S}$ for $S$, there is a unique amalgamation. Equivalently, $P$ is a sheaf iff any natural transformation $x:S\to P$ has a unique extension to $h_c\to P$ (along the inclusion $S\hookrightarrow h_c$); or, to phrase it differently, $P$ is a sheaf (resp. separated presheaf) iff precomposition with the inclusion $i_S : S\hookrightarrow h_c$ is an isomorphism (resp. monomorphism) $i_S^*:\mathrm{Nat}(h_c,P)\to \mathrm{Nat}(S,P)$.
Suppose now that $C$ has all pullbacks. Let $R = (f_i:c_i\to c)_{i\in I}$ be any cover of $c$ (i.e., the smallest sieve containing $R$ is a covering sieve in $\tau$) and let $p_{ij}:c_i\times_c c_j\
to c_i$, $q_{ij}:c_i\times_c c_j\to c_j$ be the two projections of the pullback of $f_j$ along $f_i$. A matching family for $R$ of elements in a presheaf $P$ is by definition a family $(x_i)_{i\in I}
$ of elements $x_i\in P(c_i)$, such that for all $i,j\in I$, $P(p_{ij})(x_i) = P(q_{ij})(x_j)$.
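A standard special case (not spelled out on this page) may help fix ideas: take $C = \mathcal{O}(X)$, the poset of open subsets of a topological space $X$ with the open-cover topology, and let $P$ be a presheaf of functions on $X$. For a cover $(U_i \subseteq U)_{i\in I}$ one has $U_i\times_U U_j = U_i\cap U_j$, so a matching family is a family of elements $s_i \in P(U_i)$ with $s_i|_{U_i\cap U_j} = s_j|_{U_i\cap U_j}$ for all $i,j$, and an amalgamation is an $s\in P(U)$ with $s|_{U_i} = s_i$ for all $i$, i.e. a gluing of the compatible local sections.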
Let $\mathrm{Match}(R,P)$ be the set of matching families for $R$ of elements in $P$. Covering sieves over $c$ form a filtered category, where the partial ordering is by reverse inclusion (refinement of sieves). There is an endofunctor $()^+ : PShv(C,\tau)\to PShv(C,\tau)$ given by
$P^+(c) := \mathrm{colim}_{R\in\tau(c)} \mathrm{Match}(R,P)$
In other words, elements in $P^+(c)$ are matching families $(x^R_f)_{f\in R}$ for all covering sieves modulo the equivalence given by agreement $x^R_f = x^{R'}_f$, for all $f\in R''$, where $R''\
subset R\cap R'$ is a common refinement of $R$ and $R'$. This is called the plus construction.
The assignment $c\mapsto P^+(c)$ extends to a presheaf on $C$ by $P^+(g:d\to c) : (x_f)_{f\in R}\mapsto (x_{g\circ h})_{h\in g^*R}$, where $g^* R = \{h:e\to d \mid e\in C, g\circ h\in R\}$ (recall that, by the stability axiom of Grothendieck topologies, $g^* R\in \tau(d)$ is a covering sieve over $d$).
The presheaf $P^+$ comes equipped with a canonical natural transformation $\eta:P\to P^+$ which to an element $x\in P(c)$ assigns the equivalence class of the matching family $(P(f)(x))_{f\in Ob(C/
c)}$ where the maximal sieve $Ob(C/c)$ is the class of objects of the slice category $C/c$.
$\eta$ is a monomorphism (resp. isomorphism) of presheaves iff the presheaf $P$ is a separated presheaf (resp. sheaf); moreover any morphism $P\to F$ of presheaves, where $F$ is a sheaf, factors
uniquely through $\eta:P\to P^+$. For any presheaf $P$, $P^+$ is a separated presheaf, and if $P$ is already separated then $P^+$ is a sheaf. In particular, for any presheaf $P$, $P^{++}$ is a sheaf. Consequently, the composite $P\to P^+\to P^{++}$ of the two unit maps realizes sheafification.
A standard reference is
|
{"url":"http://ncatlab.org/nlab/show/matching+family","timestamp":"2014-04-16T19:00:40Z","content_type":null,"content_length":"47009","record_id":"<urn:uuid:94709b0a-90c2-4425-af92-d51bf73e1515>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math organizations, math societies and math associations.
Organizations, Societies, and Assoc.
Math organizations, societies, associations and groups. (International)
NCTM - National Council of Teachers of Mathematics
American Mathematical Society
Founded in 1888, offers programs that promote mathematical research, increase the awareness of the value of mathematics to society, and foster excellence in mathematics education.
ASA - American Statistical Association
ASA was founded in 1839 to foster excellence in the use and application of statistics.
Association for Women in Mathematics
Founded in 1971. This non-profit organization continues to encourage women in the mathematical sciences.
Australian Mathematical Society
Information about Mathematics in Australia. Includes publications, events, careers etc.
Canadian Mathematical Society
Founded in 1945. Focusing on the future and to form new partnerships with the users of mathematics in business, governments and universities, educators in the school and college systems as well as
other mathematical associations
Consortium for Mathematics
A non-profit organization whose mission is to improve mathematics education for students at the elementary, high school, and college levels.
European Mathematical Society
Founded in 1990 to further the development of all aspects of mathematics in the countries of Europe.
MAA Online
The Mathematical Association of America.
National Council of Teachers of Mathematics NCTM
Founded in 1920. NCTM is a non-profit, professional, educational association. It is the largest organization dedicated to the improvement of mathematics education and to meeting the needs of
teachers of mathematics
NCSM - National Council of Supervisors of Mathematics
An excellent site for those interested in leadership in mathematics education.
SIAM Society
Society for Industrial and Applied Mathematics. SIAM provides information on mathematics related books, journals, and conferences.
|
{"url":"http://math.about.com/od/organizations/","timestamp":"2014-04-18T20:48:22Z","content_type":null,"content_length":"39052","record_id":"<urn:uuid:ddea1d33-3d1b-4dec-b5fb-dadd27994ebd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
18 October 1999 Vol. 4, No. 42
THE MATH FORUM INTERNET NEWS
AIMS Education Foundation | Math Webquests - Mcoy |
NRICH Maths - Cambridge
AIMS EDUCATION FOUNDATION
A non-profit, independent organization established to develop
integrated math/science materials for grades K-9, offering:
- the AIMS Puzzle corner, offering a new puzzle each month
- the AIMS@Kids Interactive Area, with facts and trivia,
contests, sights and sounds, interactive puzzles, kid
links, and stories
- a Sample Activities Archive
- Sample Math History Activities
Also see three articles on AIMS' pattern-based curriculum:
MATH WEBQUESTS - Leah P. Mcoy
Math webquests are projects that use Internet resources to
obtain data to analyze and use in various mathematical
exercises. Most math webquests involve cooperative groups
where students work together in a constructivist setting to
explore and understand mathematics.
Sample lesson plans on this site include:
- Best Weather City
- National Park Vacation
- Most Thrilling Roller Coaster
- World Shopping Spree
- Baseball Prediction
For more, see The Webquest Page:
NRICH MATHS, THE ONLINE MATHS CLUB - Univ. of Cambridge, UK
A permanent national centre for curriculum enrichment and
mathematical learning support for very able children of all
ages. The learning and enjoyment of mathematics are promoted
through an Internet newsletter and the participation of
university students as peer teachers for an electronic
answering service, Ask a Mathematician (AskNRICH):
NRICH Maths provides support, advice, and inservice training
to teachers. There are also resources for mathematics clubs,
and Interact Magazine offers mathematical challenges,
articles, and news. In addition, a Resource Bank of
competitions, stored articles, problems, solutions,
inspirations, and discussions can be found at the site.
CHECK OUT OUR WEB SITE:
The Math Forum http://mathforum.org/
Ask Dr. Math http://mathforum.org/dr.math/
Problems of the Week http://mathforum.org/pow/
Mathematics Library http://mathforum.org/library/
Teacher2Teacher http://mathforum.org/t2t/
Discussion Groups http://mathforum.org/discussions/
Join the Math Forum http://mathforum.org/join.forum.html
Send comments to the Math Forum Internet Newsletter editors
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__/ /\ /| |
\ \ / \ / \ /o\ / \ / \ / | / \ / \
|
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews4.42.html","timestamp":"2014-04-20T22:41:07Z","content_type":null,"content_length":"7797","record_id":"<urn:uuid:3cf02e4d-c861-4731-a9e1-8d5596a2bdfe>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ImageAnalyst <imageanalyst@mailinator.com> wrote in message <dba81928-387f-46d6-a867-e5d56468d3c8@g23g2000yqh.googlegroups.com>...
> On Nov 18, 7:29 pm, "Eli " <elech...@ryerson.ca> wrote:
> > Dear Matlab users,
> > I am trying to write a program that could efficiently determine which voxel a point in space lies inside.
> > Supposing I have a 3 dimensional cube in space, that is segmented into n^3 smaller voxels. Each of these voxels will have an index ranging from 1:n^3 and each voxel centre can be determined
> > Now suppose I take p number of random points with coordinates (Xp,Yp,Zp) located somewhere inside the big cube.
> > I would really appreciate suggestions to writing a very efficient code to determine which voxel index each of these points lie inside.
> > At this point I have an array Mapper, with dimensions (n^3,3) indicating the centre of each small voxel.
> > Thanks very much,
> > -Eli
> ---------------------------------------------------------------------------------------
> You need to use the sub2ind() function. For example:
> workspace; % Show the Workspace panel.
> voxels = zeros(3,4,5) % Create a 3D array - I don't care what values it has.
> % Find index of voxel at (x,y,z) = (1,1,1).
> index1 = sub2ind(size(voxels), 1,1,1)
> % Find index of voxel at (x,y,z) = (1,2,4).
> index2 = sub2ind(size(voxels), 1,2,4)
> % Find index of voxel at (x,y,z) = (3,4,5).
> index3 = sub2ind(size(voxels), 3,4,5)
thanks for the help, but I think you misunderstood my problem.
I am able to index the voxels. But suppose I choose a random point (x,y,z), which will not in general have integer values, and will not in general lie at the centre of a voxel. I wish to find which voxel this point falls in. What I am doing now is calculating the difference between my point and the centres of all voxels, and then finding the minimum distance; however, this is quite computationally heavy. Is there a faster way to determine in which voxel index my point falls?
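[Not part of the original thread.] For a uniform axis-aligned grid, the voxel index can be computed directly from the coordinates instead of searching over all voxel centres, replacing the per-point scan over n^3 centres with a constant-time computation. A minimal Python sketch of that idea, assuming the cube spans [0, L]^3 with n voxels per axis (the layout of the linear index is a convention and would need to match the Mapper array):

import numpy as np

def voxel_indices(points, L, n):
    # points: (p, 3) array of (x, y, z) coordinates inside the cube [0, L]^3.
    # Returns a 1-based linear voxel index in 1..n**3 for each point.
    width = L / n
    ijk = np.floor(points / width).astype(int)
    ijk = np.clip(ijk, 0, n - 1)       # points on the upper faces fall into the last voxel
    i, j, k = ijk[:, 0], ijk[:, 1], ijk[:, 2]
    return 1 + i + n * j + n * n * k   # column-major style linear index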
|
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/266264","timestamp":"2014-04-21T07:41:39Z","content_type":null,"content_length":"40254","record_id":"<urn:uuid:e21b960e-f26f-4b2c-b41b-0bdb9a33c803>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 2 Summarizing Data.
Problem I message: SPSS will do an array. We will skip part 2 and let SPSS set the class width.
Those using Quick's data files should load spss006 by choosing File, Open and Data. Use the scroll bar next to the Drives box to locate and click on a:. Double click on Dataspss. Load the page 6 data
by double clicking on spss006.
Others must create a data file for CD sales. You may want to review creating a data file on page 1 of Quick Start for SPSS. I named my data file spss006 for the software being used and the problem's
page number. I named my variable "cdsales."
All SPSS users should choose Data, Sort Cases, double click on cdsales, and accept the default Ascending by choosing OK. See page PS 6 and 7 for the answer.
Problem II message: SPSS will determine a 3 class frequency distribution for this data. SPSS procedures to change the number of classes will not be explored. Practice set graphs for the actual data
and not the frequency distribution may be done using SPSS. As a result, SPSS answers to Problem I cannot be checked.
All SPSS users should choose Statistics, Summarize, and Explore. Click on the right button to load cdsales into the Dependent List. Click on the Statistics rectangle, click on the Grouped frequency
tables bull's-eye, and choose Continue. Choose Plots, make sure Stem-and-leaf is checked, choose Continue, and choose OK for a 3 class frequency table.
All SPSS users should choose Graphs from the main menu. Choose Bar, Simple, and Define. Click on the right button to load the variable (cdsales) into the Category Axis box. Accept the default N of
cases and choose OK. After viewing this ungrouped vertical bar chart choose discard. Choose Graphs, Bar, Simple, and Define. This time choose Cum n of cases and choose OK. After viewing this
cumulative bar chart try the 2 percentage graphs. Repeat this process for line, area, and histogram graphs.
Saving Charts is easy. To save the current graph choose File and Save As. Type spss006 in the File name box and SPSS will add cht as an extension. The file spss006.cht will be stored on the a: drive.
This file can be used by choosing File, Open, and Charts. If necessary, use the scroll bar to select the a: drive and then double click on the desired chart.
Chapters 3 and 4 Measuring Central Tendency and Dispersion of Ungrouped Data
Problems message: SPSS will calculate many of these statistics. Those using Quick's data files should load spss006 by choosing File, Open and Data. Use the scroll arrow next to the Drives box to
highlight the a: drive. Load by double clicking on spss006. Others should load their page 6 data file.
All SPSS users should choose Statistics, Summarize, and Frequencies from the main menu. Use the right button to load cdsales into the variable column. Select the Statistics button at the bottom of
the screen. Click on every empty box so all available statistics will be calculated. Quick Notes Statistics has or will explore many of these statistics. After you check the Percentiles box, it will
be necessary to type a percentage in the space provided. Type in 60 because you are asked to calculate the 6th decile on page 13 of Quick Notes Statistics. Click the Add button. Check remaining
squares, Continue, and OK.
Practice Set 3 answers: 1A) 17 4) 16 5) 16 7A) 14 7B) 21 7D) 17 Do other problems by hand.
Practice Set 4 answers: 1A) 21 1C) 30.6 1D) 5.532 Do other problems by hand.
Print the statistics by choosing the print icon below the SPSS main menu. Printer Setup is located under File of the main menu.
Saving output is easy. Whenever you want to save output such as the above statistics, choose File and Save SPSS Output. Type spss006 in the File name: box and press OK. SPSS will add the extension
lst and spss006.lst will be stored on the a drive. This file can be used by choosing File, Open, and SPSS Output. Select the a: drive and double click on the desired output file.
Chapters 5 and 6 Measuring Central Tendency and Dispersion of Grouped Data
Chapters 5 and 6 problems are very similar to those of chapters 3 and 4. These SPSS Practice Set Instructions and Answers will explore only the ungrouped Practice Sets of chapters 3 and 4.
Part II Practice Sets on Probability
Part II answers begin on page PS 42 and 43 of Quick.
Expect minor rounding differences between Quick answers and SPSS answers.
Chapter 7 and 8 on Understanding Probability
These problems should be done by hand.
|
{"url":"http://www.businessbookmall.com/Instspss.htm","timestamp":"2014-04-16T10:14:05Z","content_type":null,"content_length":"118616","record_id":"<urn:uuid:187a74f4-cc5f-40d0-9992-c5b676ec188e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grading and Report Cards | Print
Grading and Report Cards
Figuring your grades
One of the skills that is usually NOT taught in teacher education programs is "how to grade." Unfortunately, new teachers are expected to pick this up on their own.
However, it is also an area where new teachers are closely scrutinized by administrators and parents. The following are some basic tips that can be used by new teachers to guide them as they begin to
develop their own grading systems.
When in doubt check with veteran teachers in your school, including your local MEA-MFT president.
Computer grading
Check with your school to find out what computer grading program is used. Get very comfortable with the program and back up, back up, back up! Nothing is more stressful than a crashed computer. Back
it up daily! (Electronically or with a hard copy.)
Grading systems
Check with your school policy to find out what grading system is used in your school. There is no common grading system used throughout Montana.
If your school does not have a standard grading system
Here are some suggestions in case your school does not have a standard grading system:
100% (percentage) system: Convert all grades and numbers to a system of 100. It will not only be easier for you to figure out overall; it also gives your students an easily understood index to
evaluate their own performance.
Convert letter grades to numbers: It is always easier to average numbers; it is always more understandable for other adults to see a percentage/number total.
Percentage system: All letter grades are converted to a numerical equivalent, based on a 100-point system. Use the following example, or consult with other teachers in your building or community to
get a sense of what is typical.
A++ = 100 (perfect paper w/ extra-credit)
A+ = 98, A = 95, A- = 92
B+ = 88, B = 85, B- = 82
C+ = 78, C = 75, C- = 72
D+ = 68, D = 65, D- = 62
F = <60
Grade point system: In this system, all letter grades are converted to a grade equivalent, based on the 4.0 system. You can use the following example, or consult with other teachers in your building
or community to get a sense of what is typical.
A+ = 4.3, A = 4.0, A- = 3.7
B+ = 3.3, B = 3.0, B- = 2.7
C+ = 2.3, C = 2.0, C- = 1.7
D+ = 1.3, D = 1.0, D- = 0.7
F = 0.0
After the point values are averaged, they are converted back into a letter grade. Here is a chart you can use ("borderline" grades are of course up to the discretion of the teacher):
4.0-4.3 = A+
3.7-4.0 = A
3.5-3.7 = A-
3.3-3.5 = B+
2.7-3.3 = B
2.5-2.7 = B-
2.3-2.5 = C+
1.7-2.3 = C
1.5-1.7 = C-
1.3-1.5 = D+
0.7-1.3 = D
0.5-0.7 = D-
0.0-0.5 = F
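If you keep grades in a small script or spreadsheet, the conversion back to a letter is a simple lookup. A short Python sketch of the chart above (the cutoffs and the handling of borderline values are whatever your own system specifies):

def gpa_to_letter(gpa):
    # Convert an averaged grade-point value to a letter using the chart above.
    cutoffs = [
        (4.0, "A+"), (3.7, "A"), (3.5, "A-"), (3.3, "B+"), (2.7, "B"),
        (2.5, "B-"), (2.3, "C+"), (1.7, "C"), (1.5, "C-"), (1.3, "D+"),
        (0.7, "D"), (0.5, "D-"), (0.0, "F"),
    ]
    for lowest, letter in cutoffs:
        if gpa >= lowest:
            return letter
    return "F"

# Example: letter grades A-, B+, B average to (3.7 + 3.3 + 3.0) / 3 = 3.33 -> "B+"
letter = gpa_to_letter((3.7 + 3.3 + 3.0) / 3)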
Establish your final grading formula numerically: Determine ahead of time the weight given to each of the sections of your grade book. Explain to the students your grading system - let them know your
expectations! Below are some common numerical weights for various classroom activities, but you will need to develop your own system based on the types of assignments and assessments you use.
TESTS 50%, QUIZZES 25%, PROJECTS 25%; or
TESTS 45%, HOMEWORK 10%, QUIZZES 25%, PROJECTS 20%
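As a concrete illustration of the weighting (a hypothetical Python sketch with made-up category averages; your own weights go in the dictionary):

def final_percentage(averages, weights):
    # averages and weights are dicts keyed by category; weights sum to 1.
    return sum(weights[c] * averages[c] for c in weights)

# Example with the first formula above: TESTS 50%, QUIZZES 25%, PROJECTS 25%.
grade = final_percentage(
    {"tests": 82.5, "quizzes": 90.0, "projects": 75.0},
    {"tests": 0.50, "quizzes": 0.25, "projects": 0.25},
)  # -> 82.5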
Always be objective when dealing with negative areas. Probably the one area that gets teachers into the most report card trouble is subjective negative comments. You need to figure that any time you
give an opinion, a protective parent could have an opposite one. One way to get around this problem is to use a calculator and hard numbers. Here are some examples of subjective and objective statements:
SUBJECTIVE: "He rarely does his homework"
OBJECTIVE: "He has missed 12/15 (80%) of the homework assignments this quarter."
SUBJECTIVE: "She has failed most of her tests."
OBJECTIVE: "Her percentage on our tests is 46%, which is equal to an F."
SUBJECTIVE: "He is constantly talking out of turn."
OBJECTIVE: "He talks out of turn between 5-8 times every day."
Always give at least one positive statement at the beginning. Say SOMETHING nice about the student. For example:
"He's a great student, however..."
"I really enjoy having her in my class. And, she needs to work on her..."
"He's always enthusiastic. However..."
"She always tries to do well. However..."
|
{"url":"http://mea-mft.org/Print/192.aspx","timestamp":"2014-04-18T05:30:44Z","content_type":null,"content_length":"8607","record_id":"<urn:uuid:b67dbcd4-46b3-41de-bf79-e5107f9cbeaf>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
d-records in geometrically distributed random variables
Helmut Prodinger
We study d–records in sequences generated by independent geometric random variables and derive explicit and asymptotic formulæ for expectation and variance. Informally speaking, a d–record occurs when one computes the d–largest values, and the variable maintaining it changes its value while the sequence is scanned from left to right. This is done for the “strict model,” but a “weak model” is
also briefly investigated. We also discuss the limit q → 1 (q the parameter of the geometric distribution), which leads to the model of random permutations.
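A small simulation sketch (not from the paper, and using one plausible reading of the informal definition above — the count increments whenever the d-th largest value seen so far changes; the paper's precise strict/weak conventions may differ) can be used to sanity-check the expectation numerically:

import math
import random

def count_d_records(seq, d):
    # Scan left to right and count changes of the d-th largest value seen so far.
    top = []                      # the d largest values seen so far, ascending
    records = 0
    for x in seq:
        old = top[0] if len(top) == d else None
        top.append(x)
        top.sort()
        if len(top) > d:
            top.pop(0)
        new = top[0] if len(top) == d else None
        if new is not None and new != old:
            records += 1
    return records

def geometric_sample(n, q, rng):
    # P(X = k) = (1 - q) * q**(k - 1) for k = 1, 2, ...
    return [1 + int(math.log(1.0 - rng.random()) / math.log(q)) for _ in range(n)]

rng = random.Random(0)
n, q, d, trials = 200, 0.5, 2, 2000
estimate = sum(count_d_records(geometric_sample(n, q, rng), d)
               for _ in range(trials)) / trials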
Full Text:
PDF PostScript
|
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/543","timestamp":"2014-04-19T13:13:13Z","content_type":null,"content_length":"11172","record_id":"<urn:uuid:760ca404-572d-4c8c-8f95-e5ff3671b895>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shrewsbury, MA
Find a Shrewsbury, MA Math Tutor
...I use several manipulatives, math games, and elementary resources in my quest for all students to master the basic middle school skills so they can move onto 7th grade and be as successful a
student that they can be. I work with a broad range of 6th grade students, from high achieving, independe...
17 Subjects: including geometry, logic, reading, chemistry
...I can teach reading and writing skills to students at all education levels who want to develop better writing skills. Doing well on the SAT, and any other national test, in part depends on
knowing the tricks of the test. My test prep courses include not only the math skills to do the problems, ...
27 Subjects: including algebra 1, algebra 2, ACT Math, calculus
...I am a professional programmer. I have been steadily employed in this field for over 25 years. I have used these languages in high tech companies (Robotics, Electrical Engineering, Computer
vision). I desire to teach C#, C++, and C.
17 Subjects: including algebra 1, algebra 2, discrete math, prealgebra
...Understanding the periodic table is critical. I can help with understanding geometric relationships and therefore solving problems using relationships. Algebra is still important here.
14 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...I have been tutoring for four years, beginning with the years when I was an NHS volunteer. I explain concepts being learned, and look for gaps that students have in learning Math facts and
concepts in order to do work at their current level. As a Physics and Mathematics double major and Astrono...
17 Subjects: including discrete math, linear algebra, algebra 1, algebra 2
Related Shrewsbury, MA Tutors
Shrewsbury, MA Accounting Tutors
Shrewsbury, MA ACT Tutors
Shrewsbury, MA Algebra Tutors
Shrewsbury, MA Algebra 2 Tutors
Shrewsbury, MA Calculus Tutors
Shrewsbury, MA Geometry Tutors
Shrewsbury, MA Math Tutors
Shrewsbury, MA Prealgebra Tutors
Shrewsbury, MA Precalculus Tutors
Shrewsbury, MA SAT Tutors
Shrewsbury, MA SAT Math Tutors
Shrewsbury, MA Science Tutors
Shrewsbury, MA Statistics Tutors
Shrewsbury, MA Trigonometry Tutors
Nearby Cities With Math Tutor
Auburn, MA Math Tutors
Berlin, MA Math Tutors
Boylston Math Tutors
Franklin, MA Math Tutors
Holden, MA Math Tutors
Hudson, MA Math Tutors
Marlborough, MA Math Tutors
Milford, MA Math Tutors
Millbury, MA Math Tutors
Natick Math Tutors
Northborough Math Tutors
West Boylston Math Tutors
Westboro, MA Math Tutors
Westborough Math Tutors
Worcester, MA Math Tutors
|
{"url":"http://www.purplemath.com/Shrewsbury_MA_Math_tutors.php","timestamp":"2014-04-19T23:15:09Z","content_type":null,"content_length":"23776","record_id":"<urn:uuid:a52db012-3835-4fcc-a0f1-f34bcce398dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
|