angle addition postulate examples
Best Results From Wikipedia Yahoo Answers Youtube
From Wikipedia
In traditional logic, an axiom or postulate is a proposition that is not proved or demonstrated but considered to be either self-evident, or subject to necessary decision. Therefore, its truth is
taken for granted, and serves as a starting point for deducing and inferring other (theory dependent) truths.
In mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". In both senses, an axiom is any mathematical statement that serves as a
starting point from which other statements are logically derived. Unlike theorems, axioms (unless redundant) cannot be derived by principles of deduction, nor are they demonstrable by mathematical
proofs, simply because they are starting points; there is nothing else from which they logically follow (otherwise they would be classified as theorems).
Logical axioms are usually statements that are taken to be universally true (e.g., A and B implies A), while non-logical axioms (e.g., a + b = b + a) are actually defining properties for the domain of a specific
mathematical theory (such as arithmetic). When used in the latter sense, "axiom," "postulate", and "assumption" may be used interchangeably. In general, a non-logical axiom is not a self-evident
truth, but rather a formal logical expression used in deduction to build a mathematical theory. To axiomatize a system of knowledge is to show that its claims can be derived from a small,
well-understood set of sentences (the axioms). There are typically multiple ways to axiomatize a given mathematical domain.
Outside logic and mathematics, the term "axiom" is used loosely for any established principle of some field.
The word "axiom" comes from the Greek word ἀξίωμα (axioma), a verbal noun from the verb ἀξιόειν (axioein), meaning "to deem worthy", but also "to require", which in turn comes from
ἄξιος (axios), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers an axiom was a claim which could be seen to be true
without any need for proof.
The root meaning of the word 'postulate' is to 'demand'; for instance, Euclid demands of us that we agree that some things can be done, e.g. any two points can be joined by a straight line, etc.
Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a
postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property". Boethius translated 'postulate' as
petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept.
Historical development
Early Greeks
The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference), was developed by the
ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are the basic assumptions underlying
a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, if we are talking about mathematics) must be proven with the aid of these basic
assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for
the present day mathematician, than they did for Aristotle and Euclid.
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as
a means of avoiding error, and for structuring and communicating knowledge. Aristotle's Posterior Analytics is a definitive exposition of the classical view.
An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that When an equal amount is taken from
equals, an equal amount results.
At the foundation of the various sciences lay certain additional hypotheses which were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences,
the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Indeed, Aristotle warns that the content of a science cannot be
successfully communicated, if the learner is in doubt about the truth of the postulates.
The classical approach is well illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common
notions" (very basic, self-evident assertions).
1. It is possible to draw a straight line from any point to any other point.
2. It is possible to extend a line segment continuously in a straight line.
3. It is possible to describe a circle with any center and any radius.
4. It is true that all right angles are equal to one another.
5. ("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than the two right angles.
From Yahoo Answers
Question:I would like to know an alternate way to illustrate the Angle Addition Postulate. I've already made a typical acute angle with points ABC, and with D sticking out diagonally from point
B, the vertex. But I would like another one or two samples of different kinds of angles (not acute) Thank you.
Answers: When two angles share a common ray, say AB, where A is the vertex of both angles, and the angles are on opposite sides of AB (for example angle CAB is opening on the left side of AB and angle DAB opening on the right side of AB), then angle CAD = angle CAB + angle DAB. Similarly, if B is on segment AC and between A and C, then the sum of the lengths AB and BC is equal to the length of segment AC: AB + BC = AC. Both are really very obvious results, similar to saying that if I split a pile of rocks into two piles of rocks, then the number of rocks in the original pile is equal to the sum of the numbers of rocks in the two piles created from the first pile.
Question:What is the definition of an addition postulate? I don't need examples of problems because I understand that part.
Answers: Please clarify the type of addition postulate. For example, the Angle Addition Postulate states that if a point S lies in the interior of ∠PQR, then m∠PQS + m∠SQR = m∠PQR. The Segment Addition Postulate states that if B is between A and C, then AB + BC = AC. There are several others.
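An added illustration (not part of the original answers): suppose S lies in the interior of ∠PQR with m∠PQS = (2x + 10)°, m∠SQR = 40°, and m∠PQR = 80°. The postulate gives (2x + 10) + 40 = 80, so 2x = 30 and x = 15, i.e., m∠PQS = 40°.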
Answers: Simple trig? I barely remember back that far, but here's your answer, enjoy~
From Youtube
How to Use the Arc Addition Postulate to Find Arc Lengths :This video math lesson teaches the arc addition postulate. The arc addition postulate states the measures of two adjacent arcs can be
added. This is similar to the segment and angle addition postulates. The example in this geometry video lesson involves finding the measures of two arcs in a diagram. One central angle
measure is given and one arc measure is given.
Angle Addition Postulate - YourTeacher.com - Geometry Help :For a complete lesson on angle addition postulate and angle bisector, go to www.yourteacher.com - 1000+ online math lessons featuring a
personal math teacher inside every lesson! In this lesson, students learn the angle addition postulate and the definition of an angle bisector. Students also learn the definitions of congruent
angles and adjacent angles. Students then use algebra to find missing angle measures and answer various other questions related to angle bisectors, congruent angles, adjacent angles, and segment
Source: http://www.edurite.com/kbase/angle-addition-postulate-examples (crawled 2014-04-17)
Here's the question you clicked on:
Find the area of a parallelogram if a base and corresponding altitude have the indicated lengths. Write your answer as a mixed number. Base 3 1/2 feet, altitude 3/4 feet. PLEASE HELPP!!!!!! THANKS :)
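A worked solution (added for reference; the scraped thread contains no answer): the area of a parallelogram is base × altitude, so A = (7/2)(3/4) = 21/8 = 2 5/8 square feet.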
Source: http://openstudy.com/updates/4fac107ee4b059b524f7cef0 (crawled 2014-04-18)
graph6 and sparse6 graph formats
graph6 and sparse6 are formats for storing undirected graphs in a compact manner, using only printable ASCII characters. Files in these formats have text type and contain one line per graph.
graph6 is suitable for small graphs, or large dense graphs. sparse6 is more space-efficient for large sparse graphs.
If you really want the formal definition of these formats, the details are available here.
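To make the encoding concrete, here is a minimal Python sketch of a graph6 decoder, written from the published format definition rather than taken from nauty; it handles only the single-byte header case (at most 62 vertices) and is illustrative, not a replacement for the gtools readers.

def parse_graph6(s):
    """Decode a graph6 string (order n <= 62) into an adjacency matrix."""
    data = [ord(c) - 63 for c in s.strip()]
    n = data[0]                        # first byte encodes n as chr(n + 63)
    bits = []
    for byte in data[1:]:              # each later byte packs 6 bits, high bit first
        for k in range(5, -1, -1):
            bits.append((byte >> k) & 1)
    adj = [[0] * n for _ in range(n)]
    idx = 0
    for j in range(1, n):              # upper triangle in column order:
        for i in range(j):             # (0,1), (0,2), (1,2), (0,3), ...
            adj[i][j] = adj[j][i] = bits[idx]
            idx += 1
    return adj

# Hand-worked example: "GhCGKC" should decode to an 8-cycle on vertices
# 0-1-2-3-4-5-6-7-0 (our own encoding by hand; verify with showg).
adj = parse_graph6("GhCGKC")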
Using files in graph6 or sparse6 format
The gtools package that comes with nauty contains many programs for manipulating graphs in these formats. However, there is also a stand-alone reader showg which can convert them into many other
formats, including some good for human consumption.
showg is available as a program written in a portable subset of C: showg.c.
We also provide some ready-made executables. Note that a command line interface is required for showg to be used properly.
Linux X86, libc6
Sun SPARC, Solaris 2
Alpha, OSF
If you can help to make more of these, please contact me.
Using showg
Brief instructions for using showg can be obtained by entering showg -help:
Usage: showg [-p#:#l#o#Ftq] [-a|-A|-c|-d|-e] [infile [outfile]]
Write graphs in human-readable format.
infile is the input file in graph6 or sparse6 format
outfile is the output file
Defaults are standard input and standard output.
-p#, -p#:#, -p#-# : only display one graph or a sequence of
graphs. The first graph is number 1. A second number
which is empty or zero means infinity.
-a : write the adjacency matrix
-A : same as -a with a space between entries
-d : write output to satisfy dreadnaut
-c : write compact dreadnaut form with minimal line-breaks
-e : write a list of edges, preceded by the order and the
number of edges
-o# : specify number of first vertex (default is 0)
-t : write upper triangle only (affects -a, -A, -d and default)
-F : write a form-feed after each graph except the last
-l# : specify screen width limit (default 78, 0 means no limit)
This is not currently implemented with -a or -A.
-q : suppress auxiliary output
-a, -A, -c, -d and -e are incompatible.
We will give an example. The regular graphs of order 8 and degree 2 look like this in graph6 format:
Suppose you have them in a file called reg82.g6. Here are some of your options:
List of neighbours: showg reg82.g6 output
List of edges: showg -e reg82.g6 output
The same starting the numbering at 1: showg -eo1 reg82.g6 output
Adjacency matrix: showg -a reg82.g6 output
Only the upper triangle: showg -ta reg82.g6 output
The same, less verbosely: showg -taq reg82.g6 output
The same, just the first two graphs: showg -tap1:2 reg82.g6 output
The options can be concatenated as in the examples, or given separately.
"-tap1:2" is the same as "-t -a -p1:2".
Page Master: Brendan McKay, bdm@cs.anu.edu.au and http://cs.anu.edu.au/~bdm.
Thanks to Gordon Royle for ideas on the design of this page.
Source: http://cs.anu.edu.au/~bdm/data/formats.html (crawled 2014-04-21)
Working with expressions.
March 11th 2011, 09:44 AM #1
Please, if anyone could solve and explain to me the process of solving these two problems I would be extremely grateful. (It's not about the result, I really need to know HOW to do this, so if I can see a process of solving these problems I hope I will be able to understand it.)
1) IF (b-2a)/(4a+3b)=2 (a,b!=0, 4a+3b!=0), THEN (2a^2-3ab+5b^2)/(4a^2+3b^2) EQUALS?
2) Value of the expression (25^(0,3)*5^(0,4))/12^(-1/3) belongs to the interval (which)?
Last edited by mr fantastic; March 11th 2011 at 12:16 PM. Reason: Re-titled.
Since $b \neq 0$ you can divide by b.
Let $q = \frac{a}{b}$
from which you can get the value for q that you can then plug into the second expression, rewritten in terms of q.
Since I have started writing,
For 1), the advice above is equivalent to expressing b through a from the first equation and substituting it into the second expression.
For 2), does 0,3 mean 3/10 or an interval from 1 to 3? In the English-speaking world, period is most commonly used as a decimal mark. If it means 3/10, then the value of this expression belongs
to many intervals. For example, $1\le 25^{0.3}\cdot5^{0.4}/12^{-1/3}=25^{0.3}\cdot5^{0.4}\cdot12^{1/3}<25\cdot5\cdot12$. To get a tighter upper bound, compare each factor with a nearby rational power, e.g. $25^{0.3}<25^{1/3}$.
Ok, I understand the principle... thank you very much.
But I don't understand the following: if I divide the second expression by b, how do I get from -3ab to -3q? Wouldn't it be -3a?
You should divide the second expression by b^2.
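For completeness, here is a worked computation along the lines suggested above (added; not part of the original thread). For 1), dividing the given equation through by b gives $(1-2q)/(4q+3)=2$ with $q=a/b$, so $1-2q=8q+6$ and $q=-\frac{1}{2}$; dividing the target expression by $b^2$ gives $\frac{2q^2-3q+5}{4q^2+3}=\frac{1/2+3/2+5}{1+3}=\frac{7}{4}$. For 2), note that $25^{0.3}\cdot5^{0.4}=5^{0.6}\cdot5^{0.4}=5$, so the expression equals $5\cdot12^{1/3}\approx 11.45$, which pins down the interval exactly.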
Source: http://mathhelpforum.com/algebra/174257-working-expressions.html (crawled 2014-04-16)
Teaching Math in America
Teaching Math in 1950:
A logger sells a truckload of lumber for $100.
His cost of production is 4/5 of the price. What is his profit?
Teaching Math in 1960:
A logger sells a truckload of lumber for $100.
His cost of production is 4/5 of the price, or $80. What is his profit?
Teaching Math in 1970:
A logger exchanges a set "L" of lumber for a set "M" of money. The
cardinality of set "M" is 100. Each element is
worth one dollar. Make 100 dots representing the elements of the set
"M." The set "C," the cost of production, contains 20 fewer points than set
"M." Represent the set "C" as a subset of set "M" and answer the following
question: What is the cardinality of the set "P" of profits?
Teaching Math in 1980:
A logger sells a truckload of lumber for $100.
His cost of production is $80 and his profit is $20. Your assignment:
Underline the number 20.
Teaching Math in 1990:
By cutting down beautiful forest trees, the logger makes $20. What do
you think of this way of making a living? What's wrong about it?
Topic for class participation after answering the question: How did
the forest birds and squirrels feel as the logger cut down the trees?
(There are no wrong answers.)
Teaching Math in 2000:
A logger sells a truckload of lumber for $100.
His cost of production is $120. How does Arthur Andersen determine
that his profit margin is $60? And how many documents were shredded to achieve
this number?
Teaching Math in 2010:
El Loggero se habla with the truckero y se ponen de acuerdo con otro
driver de la competencia y etc...
It would be funnier if it weren't true!
Source: http://glocktalk.com/forums/showthread.php?t=182224 (crawled 2014-04-16)
Hardy-Weinberg Equilibrium Testing of Biological Ascertainment for Mendelian Randomization Studies
Am J Epidemiol. Feb 15, 2009; 169(4): 505–514.
Hardy-Weinberg Equilibrium Testing of Biological Ascertainment for Mendelian Randomization Studies
Mendelian randomization (MR) permits causal inference between exposures and a disease. It can be compared with randomized controlled trials. Whereas in a randomized controlled trial the randomization
occurs at entry into the trial, in MR the randomization occurs during gamete formation and conception. Several factors, including time since conception and sampling variation, are relevant to the
interpretation of an MR test. Particularly important is consideration of the “missingness” of genotypes that can be originated by chance, genotyping errors, or clinical ascertainment. Testing for
Hardy-Weinberg equilibrium (HWE) is a genetic approach that permits evaluation of missingness. In this paper, the authors demonstrate evidence of nonconformity with HWE in real data. They also
perform simulations to characterize the sensitivity of HWE tests to missingness. Unresolved missingness could lead to a false rejection of causality in an MR investigation of trait-disease
association. These results indicate that large-scale studies, very high quality genotyping data, and detailed knowledge of the life-course genetics of the alleles/genotypes studied will largely
mitigate this risk. The authors also present a Web program (http://www.oege.org/software/hwe-mr-calc.shtml) for estimating possible missingness and an approach to evaluating missingness under
different genetic models.
Keywords: epidemiologic methods, genetics, Hardy-Weinberg equilibrium, random allocation, research design
Epidemiologic association studies are susceptible to unresolved confounding, reverse causation, and selection bias (1). Mendelian randomization (MR) is an epidemiologic method that, through the use
of informative genotypes, permits the testing of causal relations between exposures and diseases. MR is based on the assumption that the association between a disease and a genetic polymorphism of
known function (that mimics the biologic link between a proposed exposure and disease) is not generally susceptible to reverse causation or confounding (1). Thus, for example, if it is known that
there is an association between a plasma protein and a disease, the direction of that association can be tested if genotypes of the gene encoding the protein are associated with the level of the
protein. Because the quantity of the protein cannot cause the genotype but the genotype can influence the quantity of the protein, if the genotype is also associated with the disease it may be
inferred that the level of the plasma protein is causally influencing the disease.
The MR approach is conceptually analogous to a randomized controlled trial (see Figure 1). Randomization in randomized controlled trials is undertaken at entry into the trial, and since many trials
concern major diseases arising in later life, many studies are of older persons. The same applies to many cohort studies upon which MR will be based. Randomization in MR (random assignment to
genotype) occurs during gamete formation and conception (1). This means that the principle of MR has the potential to strengthen inference from an observational study (such as a cohort study) toward
the inferential robustness of a randomized controlled trial. In randomized controlled trials, one analyzes the data on the basis of “intention to treat” in most cases—this ensures that all possible
differences between drug and placebo groups are taken into account in the final analysis. For MR to match this, there is a need to analyze the data on “intention to MR.” It follows that the randomly
breeding population as a whole suits an MR test. However, when a random sample is taken from a population, especially at a time point likely to be far from the original fertilization (i.e.,
randomization) events, the nature of the sample needs to be carefully considered with respect to “intention to MR.”
Figure 1. Mendelian randomization and randomized controlled trials. Adapted from the paper by Hingorani and Humphries (12). HWE, Hardy-Weinberg equilibrium.
A particular genetic feature of randomly breeding populations is that of Hardy-Weinberg equilibrium (HWE) (2, 3). Since alleles of diploid loci are randomly reassorted at each new conception, it
follows that for allele frequencies p and q (where p + q = 1) the expected genotype proportions are p^2, 2pq, and q^2.
In genetics, the test of whether the proportions of genotypes observed in a population sample are consistent with the prediction (p^2, 2pq, and q^2) offers a fundamental test of biologic
ascertainment for the genotypes. A sample from a homogeneous randomly mating population should only deviate from perfect HWE by small chance amounts, conforming to parametric sampling statistics.
Large, statistically significant deviations are often caused by quality issues with laboratory typing data. However, given high-quality genotyping data, the HWE feature offers the opportunity to look
for possible biologic ascertainment biases for the population sample relative to the genotype of interest.
In this paper, we consider the relations between sample HWE testing, genotyping error, chance deviations from exact HWE in the sample, and the use of genotype as an instrumental variable for MR
analyses. We also present a Web tool for estimating possible missingness and a scheme for examining missingness under different genetic models (additive, threshold, heterosis). This approach more
securely equates the principle of “intention to MR” with the “intention to treat” analysis principle of randomized controlled trials.
We consider studies in which either there is an additive effect per allele in subjects or 1 homozygous group would be considered the phenotypically different group for trait and disease. For a locus
with 2 alleles P and Q and overall population frequencies p and q, and taking a sample of size N, the genotypes in the possible resultant sample can be parameterized. When considering the sampling of
P homozygotes, in a sample of N subjects, the variance of the observed number μ relative to the expected number Np^2 will be that of a binomial distribution—that is, Np^2(1 − p^2). These values were
used to determine the standard deviation σ of the observed number in the sample relative to that expected from the population if chance variation did not occur. In turn, this permitted modeling for
chosen values of p (0 < p < 1), sample sizes N, and a proportional multiplier k, arbitrarily varied from a 20% loss to a 20% gain (from ×0.8 to ×1.2).
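As a sketch of this dispersion calculation (added; the article describes but does not reproduce the authors' Python programs, so the function name and the 1.96 normal-approximation multiplier here are this illustration's assumptions):

import math

def homozygote_count_dispersion(p, n):
    """Expected count of P homozygotes in a sample of size n, with its
    binomial standard deviation and an approximate 95% interval."""
    mu = n * p * p                              # expected number, N p^2
    sd = math.sqrt(n * p * p * (1 - p * p))     # binomial SD, sqrt(N p^2 (1 - p^2))
    return mu, sd, (mu - 1.96 * sd, mu + 1.96 * sd)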
We started the simulations with allelic frequencies p[true] and q[true] and computed the expected true values for the 2 homozygote groups and for the heterozygotes assuming HWE. We then computed the
number of persons in the no-call group who were lost/gained. These numbers were $2(1-k)Np_{true}q_{true}$ for heterozygotes and $(1-k)Np_{true}^2$ for homozygotes. For the computation of heterozygote losses/
gains, we computed p[false] and q[false]—that is, the erroneous values of p and q that are obtained according to the loss or gain (chance or genotype assay-related or genotype-dependent
ascertainment-related) of a genotype group dependent on k. Calculations were performed using programs written in Python (http://www.python.org).
Here $q_{false} = 1 - p_{false}$.
Using both values, we computed the expected numbers of both homozygote groups and the heterozygotes. The observed number of heterozygotes was computed by subtracting/adding the number of the no-call
group to the expected true values for the heterozygotes. We then computed the HWE Pearson χ^2 value for expected genotype counts versus observed genotype counts.
For the computation of homozygote losses or gains, we followed an analogous procedure, but with the homozygote count, rather than the heterozygote count, adjusted by the factor k.
To approach the typical study size generally anticipated for MR, we considered a range of N's from 1,000 (expected to be sufficient for major genotype effects) to 100,000 (believed to be reasonably
powered for modest complex trait effects (4)).
We considered a range of cases with perfect HWE, for given combinations of allele frequencies p and sample sizes N, estimating 95% confidence intervals for the allele frequencies (5).
We estimated expected genotypic counts from these estimates and computed the HWE χ^2 value. Analogous analyses were carried out for gain/loss of homozygotes and heterozygotes.
We evaluated the HWE χ^2 distributions for the single nucleotide polymorphisms (SNPs) analyzed in 2 genome-wide association studies, the study by the Wellcome Trust Case Control Consortium (WTCCC) (6
) and the Framingham Study (7). For the WTCCC study, we computed the HWE χ^2 values for more than 400,000 SNPs from the observed genotype counts for each SNP in more than 2,000 controls. For the
Framingham Study, we computed the HWE χ^2 values from the P values available for nearly 100,000 SNPs analyzed for association with prostate cancer in more than 600 unselected men. We also analyzed
the HWE χ^2 value from publications in the literature that reported the HWE χ^2, the P value for the HWE χ^2 test, or the genotypic counts for the apolipoprotein E (APOE) polymorphism. We considered
the first 30 studies (see Web Table 1, which is posted on the Journal’s Web site (http://aje.oxfordjournals.org/)) observed with the scientific research tool “scirus” (http://www.scirus.com/) using
the search terms “APOE” and “Hardy-Weinberg” on April 11, 2008.
We created Q-Q plots in SPSS, version 15.0 (SPSS, Inc., Chicago, Illinois), in order to compare the observed and expected χ^2 values (with 1 df) for WTCCC controls and for cases with the 7 diseases
analyzed in the WTCCC study. We also analyzed the Q-Q plots for the Framingham prostate cancer cases and for the 30 studies of APOE.
Here Np^2, 2Npq, and Nq^2 represent the 3 genotype counts in a sample in perfect HWE (χ^2 = 0). We developed a Web program (http://www.oege.org/software/hwe-mr-calc.shtml) both to conduct a standard HWE test (Pearson χ^2) based on the 3 groups and to calculate counts (changing any 1 group) to create perfect HWE.
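A minimal sketch of the core calculation such a tool performs (added illustration; the function and variable names are ours, not the tool's):

def hwe_chi2(n_pp, n_pq, n_qq):
    """Pearson chi-square statistic for Hardy-Weinberg proportions (1 df)."""
    n = n_pp + n_pq + n_qq
    p = (2 * n_pp + n_pq) / (2.0 * n)           # allele frequency of P
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_pp, n_pq, n_qq)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))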
We applied the effect of missingness and deviations from HWE to an MR example consisting of a sample of 10,000 persons with genotypic, intermediate trait, and outcome information. We added persons
who were missing based on HWE and computed the significance of the new MR studies. MR analyses were carried out in Stata/IC 10.0 for Windows (Stata Corporation, College Station, Texas) using the
command “ivregress.”
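For readers without Stata, the point estimate that ivregress produces for a single genetic instrument can be sketched in a few lines of Python (an added illustration under simplifying assumptions: one instrument, no covariates, and none of the standard-error corrections a real instrumental-variable routine supplies):

import numpy as np

def two_stage_least_squares(g, x, y):
    """2SLS point estimate with genotype g instrumenting exposure x for outcome y."""
    G = np.column_stack([np.ones(len(g)), g])
    b1, *_ = np.linalg.lstsq(G, x, rcond=None)      # stage 1: exposure on instrument
    x_hat = G @ b1                                  # fitted exposure values
    X = np.column_stack([np.ones(len(x_hat)), x_hat])
    b2, *_ = np.linalg.lstsq(X, y, rcond=None)      # stage 2: outcome on fitted exposure
    return b2[1]                                    # estimated causal effect of x on y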
In Web Table 2 (http://aje.oxfordjournals.org/), for representative values of p (0.05 ≤ p ≤ 0.95) and N (1,000 ≤ N ≤ 100,000), we present σ/μ and 95% confidence intervals for expected numbers of P homozygotes.
Estimated σ/μ ranged from 0.10% to 3.16% and was inversely related to both p and N. Table 1 shows a subset of these results.
Table 1. Measures of Dispersion for Sample Sizes of 5,000, 20,000, and 50,000 and for Allele Frequencies (p and q) Ranging From 0.05 to 0.95
Table 2 gives values of N and p with 1 homozygote group adjusted to the lower 95% confidence limit. Table 3 applies the same approach to heterozygotes. Results for a larger range of conditions can be
seen in Web Tables 3 and 4 (http://aje.oxfordjournals.org/). In each instance, these demonstrate a loss of persons in 1 genotype group, which could frequently occur by chance, through genotyping
failure, or through biologic/clinical ascertainment bias. The tables show that in most instances, the χ^2 values would generate a firm conclusion: “no significant deviation from HWE.” The HWE test is
insensitive to heterozygote loss (for detection of the 95% confidence interval of group count), whatever the value of p, and is only sensitive to homozygote loss where p has a low value. As an
example, for one such combination of N and p the χ^2 value was 2.48, well below the 5% limit (χ^2 ≥ 3.84) typically used for samples in which all 3 genotype groups would be free to vary by chance. There is, nonetheless, deviation from perfect HWE (χ^2 > 0).
Table 2. Deviation From Hardy-Weinberg Equilibrium for Sample Sizes of 5,000, 20,000, and 50,000 and for Allele Frequencies Ranging From 0.05 to 0.95, After Subtracting From the Homozygote 1 Group a Number Equivalent to 1.96 Standard Deviations
Table 3. Deviation From Hardy-Weinberg Equilibrium for Sample Sizes of 5,000, 20,000, and 50,000 and for Allele Frequencies Ranging From 0.05 to 0.95, After Subtracting From the Heterozygote Group a Number Equivalent to 1.96 Standard Deviations
Tables 4 and 5 present combinations of allele frequency p, sample size N, and proportional multiplier k (0.8 ≤ k ≤ 1.2), modifying the observed number of 1 genotype group such that HWE testing would suggest no significant deviation from HWE. Although a significant deviation from HWE is more likely with large values of N and k (Web Tables 5 and 6 (http://aje.oxfordjournals.org/)), there is a wide range of observed population samples in which potential ascertainment bias would not be recognized under a usual statistical test. These combinations (for HWE χ^2 < 3.84) are shown in Web Figure 1 (http://aje.oxfordjournals.org/) as 3-dimensional plots with axes for N, p, and |1 − k|. Web Figure 1 shows that a greater variation of the parameter |1 − k| in homozygotes results in more nonsignificant deviations from HWE for homozygotes than for heterozygotes.
Table 4. Hardy-Weinberg χ^2 Values (in Parentheses) for Particular Combinations of Allele Frequency, Gain/Loss of Homozygotes, and Sample Size
Table 5. Hardy-Weinberg χ^2 Values (in Parentheses) for Particular Combinations of Allele Frequency, Gain/Loss of Heterozygotes, and Sample Size
Web Table 7 (http://aje.oxfordjournals.org/) shows the boundaries of allele frequencies and the associated χ^2 values observed for a combination of allele frequencies and sample sizes in situations
of perfect HWE from observed allele frequencies. The maximum values of HWE χ^2 computed from the 95% confidence interval allele frequencies range from 3.56 to 4.18.
For gain/loss of heterozygotes (Web Table 8 (http://aje.oxfordjournals.org/)), the maximum difference between the χ^2 computed from the 95% confidence interval of allele frequencies and the χ^2 computed from the observed allele frequencies is 1,132 (corresponding to particular combinations of |1 − k|, p, and N). The analogous results for gain/loss of homozygotes (http://aje.oxfordjournals.org/) were comparable: a maximum of 3,686, a minimum of 0.19, and a median of 23.87.
The observed Q-Q plots in WTCCC controls are not dissimilar to those observed for the 7 WTCCC case collections (Web Figure 2 (http://aje.oxfordjournals.org/)). Figure 2 shows results for WTCCC
controls and cases with bipolar disorder. In all instances, there are considerably more high χ^2 values than expected, a situation that is more extreme in the 7 WTCCC case collections. A similar
result is observed for the 30 studies of APOE (Figure 2). For Framingham Study prostate cancer cases, the Q-Q plot is different, with an inflection around χ^2 values of 11.5 (Figure 2).
Figure 2. Q-Q plots comparing the expected and observed χ^2 values for controls from the Wellcome Trust Case Control Consortium (WTCCC) (6) and one of the 7 case collections (cases of bipolar disorder). The
Q-Q plots (which are similar) for all of the other ...
Table 6 shows a nonsignificant MR association observed in the original study. This association becomes significant after the addition of missing persons (to give perfect HWE) with intermediate
phenotypes following 3 different criteria. Note that deviations from HWE are not significant in the original study.
Table 6. Effect on Mendelian Randomization of Both Deviations From Hardy-Weinberg Equilibrium and Additions of Missing Persons to Conform to Perfect Hardy-Weinberg Equilibrium^a
In Figure 3 we show, with an explanatory legend, the Web tool written both to perform a Pearson χ^2 HWE test and, using 2 genotype groups, to estimate what the third genotype would be under perfect HWE (χ^2 = 0).
Figure 3. Illustration of output from the Web tool (see Materials and Methods section and http://www.oege.org/software/hwe-mr-calc.shtml) developed for Hardy-Weinberg equilibrium (HWE) analysis with
estimations of possible ascertainment bias. In this example, ...
Missingness of subjects of a particular genotype group from a population sample may be due to chance, genotyping assay errors, or clinical ascertainment bias related to that genotype. We have
examined chance variation of the observed count for a particular genotype group (common or rare homozygotes, or heterozygotes) across the ranges of p and N considered above.
We demonstrate, in both genome-wide and specific published data, despite high-quality data from the use of modern methods, a clear excess of inflated (though not necessarily statistically
significant) χ^2 values, pointing either to residual erroneous typing or to sample ascertainment biases. We also present an approach and a Web tool for estimating possible missingness by genotype
within HWE analyses and a flow chart showing how, for different genetic models, to estimate the bounds of effect of possible genotype-dependent missingness in MR analyses.
The results from the analyses of the Q-Q plots in WTCCC controls and cases, in Framingham prostate cancer cases, and in APOE studies published in the literature suggest that there might be
missingness in all of these studies, since the observed distribution of χ^2 values does not fit well the expected distribution for 1 degree of freedom. Other possible reasons for the deviations from
HWE (particularly in controls but also in cases) may include residual technical issues with SNP assays and SNP calling, ascertainment bias, sample subdivision (stratification or admixture), and other
unknown or unrecognized sequence variation confounding SNP assays. Overall, these points put into perspective the difference between the often-used results statement “in HWE” and the reality “not
statistically significantly deviant from HWE.” They also draw attention to the general excess of statistically significant deviations from HWE still evident in both focused and genome-wide studies,
and to the general considerations necessary to be able to deploy HWE testing to be maximally informative regarding sample collection.
These points are relevant to MR studies. In a very large (outbred) population, there should be exact HWE at the point of conception. That is the moment at which the intention to randomize takes
place, making MR directly comparable with a model randomized controlled trial. However, taking a population sample in later life and entering the successfully genotyped set into an MR analysis has
the potential to violate the principle of “intention to analyze.”
It is outside the scope of this discussion to consider in detail the issue of biases in genotyping, except to note that even minor biases ranging from technical issues to covert null alleles (8), if
unrecognized, may defeat the value of HWE assessment of the sample. A large sample size, which is necessary anyway in genetic epidemiologic studies of markers of small effect, is also important in
increasing the power of the HWE test of the sample. In addition to being a test of genotype-dependent clinical ascertainment bias, the HWE test, in population genetics terms, assumes the absence of
migration, mutation, natural selection, and assortative mating. Under these conditions, genotype frequencies at any locus are a function of the allele frequencies (p^2, 2pq, and q^2), forming the
basis of the standard HWE test (2, 3). Where genome-wide data are available, ancestral outliers may be inferred (and excluded) based on their constellation of SNP alleles (6). Where single markers
are studied, reliance remains on good clinical data about ancestry.
We have created (Figure 3) a new Web program (http://www.oege.org/software/hwe-mr-calc.shtml) which undertakes a standard Pearson χ^2 test following Hardy (2) and Weinberg (3)—that is, a parsimonious
test based on observed allele counts. In addition, noting that (2Npq)^2 = 4(Np^2)(Nq^2), it is possible to use the ratio of 2 genotype counts, assumed unbiased, to estimate what should be the third genotype count under perfect HWE (χ^2 = 0) (Figure 4).
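Spelled out (an added note): under perfect HWE this identity yields a heterozygote count of 2·sqrt(n_PP · n_QQ) from the two homozygote counts, and symmetrically n_PP = n_PQ^2 / (4·n_QQ). In the same illustrative Python style as above:

def het_count_under_hwe(n_pp, n_qq):
    # Heterozygote count giving chi-square = 0, from (2Npq)^2 = 4(Np^2)(Nq^2).
    return 2 * (n_pp * n_qq) ** 0.5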
1. For the threshold model, it may be possible to specify which homozygote group should be checked for missingness. If A were observed whereas the other 2 genotype groups gave the prediction that B
was expected, then B minus A dummy subjects can be introduced into an MR analysis, assigned a distribution of the intermediate trait values that is characteristic of their genotype group, and all
assigned either disease-positive or disease-negative. This allows reasonable bounds of inference to be characterized in relation to possible genotype-related missingness. If it is unknown in
advance which homozygous group might be in deficit, then both models should be examined.
2. For heterosis, the heterozygous excess or deficiency can be considered along the same lines as in point 1 above.
3. For the additive model, HWE testing may not be sensitive to ascertainment biases. For the codominant (additive) model, consider that each allele q is subject to an ascertainment bias k (0 < k < 1; k will be quite near 1 for complex traits). Thus, the ascertained heterozygote frequency will be 2kpq and the homozygote frequency will be k^2q^2. However, since k^2q^2 is (kq)^2, this will retain perfect HWE with new apparent q′ = kq and p′ = 1 − q′. Where k is near 1, the loss of subjects could be stated as loss m, where k = 1 − m and k^2 is approximately 1 − 2m (the m^2 term being negligible); so then, even under an additive model of loss (twice as much—that is, 2m loss of a homozygote group vs. the m loss of heterozygotes), relative genotype counts will remain very near HWE. This means that, under the additive model, the sample may show little or no deviation from HWE. However, life-course information on allele frequencies will be useful to confirm that effect sizes in genotype-trait-disease have not been underestimated, and this emphasizes the value of having data on allele frequencies in a large-scale early-life cohort that is matched to the general population.
Figure 4. Steps needed to utilize Hardy-Weinberg equilibrium (HWE) testing for possible biologic ascertainment in genotype-phenotype or Mendelian randomization (MR) studies. HWD, Hardy-Weinberg disequilibrium.
It is impossible to prove that possible missing subjects were not unusually extreme for either an intermediate trait or disease status (e.g., numerous genotypes predispose to more than 1 disease),
although sometimes reductio ad absurdum may suffice—for example, in a study of 100,000 persons and a possible ascertainment-driven missingness of 10 homozygotes, the missingness might be incapable of
having any bearing on genotype-intermediate trait-clinical outcome associations even if those 10 subjects had an “absurd” value for an intermediate trait or clinical outcome. Even if this is not
possible, it may be considered unlikely (though not proven) that a genotype could have driven missingness. For a gene believed only to be expressing a highly specific function in later life, this may
be reasonable, but for genes with pleiotropy and particularly with expression in early fetal development, there might be more concern. Multifunctionality of the many forms recognized from detailed
molecular studies of specific regions of the genome as in the ENCODE (Encyclopedia of DNA Elements) Project (9), microRNAs (10) with other regulatory functions, etc., arising from many genes, and
other genes in the same linkage disequilibrium block (11) should be considered in drawing a reasonable conclusion.
The effects of missingness (chance vs. genotype-dependent clinical ascertainment) are no different in an MR study than in any other genotype-phenotype (genetic association) study. However, the
consequences of misinference would be different. In principle, a false-negative result may be obtained by chance in any genotype-phenotype study, although one hopes that genotype-dependent clinical
ascertainment bias might be evident with high-quality genotyping data combined with attention to HWE testing. Indeed, this is why case-control studies use HWE of controls (not cases) as a primary
exploration of genotyping quality. However, MR studies are undertaken because there is already a focus based on knowledge of a potentially important association (albeit of unknown direction) between
an intermediate trait (e.g., dietary factor, blood measure) and an outcome (e.g., disease event). In this case, if the genotype-outcome association draws a false-negative inference (e.g., because
persons with the genotype-dependent outcome had failed to enter the study), the finding of important causality will have been falsely excluded. In this paper, we have illustrated the effects of
“Hardy-Weinberg uncertainty” for “intention to MR”; have identified general strategies for gaining maximal information from the HWE test; and have developed a Web tool and scheme which will allow
possible genotype-dependent missingness to be factored into MR analyses.
The main limitation of our study is that the phenotypes of any subjects missing or potentially missing in a real MR study are unknown. Potentially missing subjects may have had atypical intermediate
trait values or disease status (or other diseases related to the same genotype). While the analytical scheme presented here takes account of missingness based on “reasonable” trait values, one cannot
predict extremes; this would require tracking of a complete cohort from conception throughout the life course.
Supplementary Material
Author affiliations: Bristol Genetic Epidemiology Laboratories and MRC Centre for Causal Analyses in Translational Epidemiology, Department of Social Medicine, University of Bristol, Bristol, United
Kingdom (Santiago Rodriguez, Tom R. Gaunt, Ian N. M. Day).
T. R. G. is the recipient of an Intermediate Fellowship (grant FS/05/065/19497) from the British Heart Foundation.
The authors thank Dr. Nikolas Maniatis and Professor Vilmundur Gudnason for helpful comments on the manuscript.
S. R. and T. R. G. should be regarded as joint first authors of this article.
Conflict of interest: none declared.
Abbreviations: APOE, apolipoprotein E; HWE, Hardy-Weinberg equilibrium; MR, Mendelian randomization; SNP, single nucleotide polymorphism; WTCCC, Wellcome Trust Case Control Consortium.
1. Davey-Smith G, Ebrahim S. ‘Mendelian randomization’: can genetic epidemiology contribute to understanding environmental determinants of disease? Int J Epidemiol. 2003;32(1):1–22. [PubMed]
2. Hardy GH. Mendelian proportions in a mixed population. Science. 1908;28(706):49–50. [PubMed]
3. Weinberg W. Über den Nachweis der Vererbung beim Menschen. Jahresh Wuertt Ver vaterl Natkd. 1908;64:368–382.
4. Kavvoura FK, Ioannidis JP. Methods for meta-analysis in genetic association studies: a review of their potential and pitfalls. Hum Genet. 2008;123(1):1–14. [PubMed]
5. Hedrick PW. Genetics of Populations. Sudbury, MA: Jones and Bartlett Publishers; 2005.
6. The Wellcome Trust Case Control Consortium. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature. 2007;447(7145):661–678. [PMC free article] [PubMed]
7. Cupples LA, Arruda HT, Benjamin EJ, et al. The Framingham Heart Study 100K SNP genome-wide association study resource: overview of 17 phenotype working group reports. BMC Med Genet. 2007;8(suppl 1):S1. [PMC free article] [PubMed]
8. Gu DF, Hinks LJ, Morton NE, et al. The use of long PCR to confirm three common alleles at the CYP2A6 locus and the relationship between genotype and smoking habit. Ann Hum Genet. 2000;64(pt 5):383–390. [PubMed]
9. The ENCODE Project Consortium. The ENCODE (ENCyclopedia Of DNA Elements) Project. Science. 2004;306(5696):636–640. [PubMed]
10. Guarnieri DJ, DiLeone RJ. MicroRNAs: a new class of gene regulators. Ann Med. 2008;40(3):197–208. [PubMed]
11. The International HapMap Consortium. A haplotype map of the human genome. Nature. 2005;437(7063):1299–1320. [PMC free article] [PubMed]
12. Hingorani A, Humphries S. Nature's randomised trials. Lancet. 2005;366(9501):1906–1908. [PubMed]
Articles from American Journal of Epidemiology are provided here courtesy of Oxford University Press
Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2640163/?tool=pubmed (crawled 2014-04-20)
Spring 1999 LACC Math Contest - Problem 5
Problem 5.
A right cylindrical tank standing on end holds 120 gallons of water when filled to capacity, and is 24 inches high. The tank is filled to the 21 inch mark, and 60 gallons then drained out.
Find the height of the remaining water, i.e., find x.
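A worked solution (added; the original page shows only the problem and a figure): the tank holds 120 gallons over 24 inches, i.e., 5 gallons per inch of height. Filled to 21 inches it contains 21 × 5 = 105 gallons; draining 60 gallons leaves 45 gallons, so x = 45/5 = 9 inches.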
Source: http://www.lacitycollege.edu/academic/departments/mathdept/samplequestions/1999prob5.html (crawled 2014-04-17)
Differentiable function germs on differentiable manifolds
Hello everyone. I was wondering if anyone knew how to prove that the map from $C^{\infty}(M)$ to $\xi (p)$, that is, from the infinitely differentiable functions on a manifold M to the space of (once)-differentiable function germs, is onto, where the map associates to each f in $C^{\infty}(M)$ its class in $\xi (p)$. By the way, since you ask, the reason I'm interested in this is that it's a question that WAS on my final for differential topology. I've tried to work it out since then but no luck so far, so it's not homework, just curiosity now. I hope it's OK; I'll have to check the posting regulations. Sorry, if not, just tell me and I'll delete the question.
1. Your question's well-formed, but nearly impossible to parse. Edit? 2. I don't know any differential geometry, but is this homework? It reads kind of like it might be, and it's gotten downvoted. If it's homework, don't bother editing. – Harrison Brown Dec 6 '09 at 0:37
Assuming you mean $C^{\infty}$ germs, then this is basically the existence of a cutoff function. – Akhil Mathew Dec 6 '09 at 2:12
Upvoted since reasonableness should be encouraged. :) – Harrison Brown Dec 6 '09 at 2:21
I think your question is mis-stated, because this map is not onto: there is no smooth function on the reals whose germ at $0$ is the germ of $x|x|$, a once-differentiable function germ. Assuming this is not what you wanted, please restate the question, and include an explanation of why you are interested in it ;)
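(An added verification of the counterexample: $f(x)=x|x|$ has $f'(x)=2|x|$, which is continuous but not differentiable at $0$, so $f$ is $C^1$ but not $C^2$; any smooth function with the same germ at $0$ would agree with $f$ on a neighborhood of $0$ and hence would itself fail to be twice differentiable there, a contradiction.)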
Thanks, I'll look into your counterexample! – user2333 Dec 6 '09 at 1:17
Source: http://mathoverflow.net/questions/7933/differentiable-function-germs-on-differentiable-manifolds?sort=votes (crawled 2014-04-21)
Useful Tips on StatCrunch for Math 201:
Summary Statistics
This is where you can find the mean, median, mode, Q1, Q3, and the standard error.
1. Input the data
2. Go to Stat -> Summary Stats -> Columns
3. Click on the variable that you want the summary stats. (You should always name the variable)
4. Click on Calculate
The summary stats should appear.
To save your results, click on Options-> Export/Save/Print
This will also put the results at the bottom of your spread sheet. You can then copy and paste it into a Word document. Make sure that when you copy your results, you also highlight the title along
with the graphics. Otherwise, it will not paste correctly.
Confidence Intervals
To construct a confidence interval for a mean.
1. Enter the data
2. Go to Stat -> T-Statistics -> One Sample
3. Click on the variable that you want for the confidence interval.
4. Click on Next
5. Click on the Confidence Interval Radio Button and enter your confidence level if it is not 95%.
6. Click on Calculate
The Confidence interval should appear. You can save and export as in the Summary Statistics
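If you want to check StatCrunch's interval outside the program, here is a small Python sketch using SciPy (an added illustration; it assumes a reasonably recent SciPy):

import numpy as np
from scipy import stats

def t_confidence_interval(data, level=0.95):
    """One-sample t confidence interval for the mean."""
    data = np.asarray(data, dtype=float)
    return stats.t.interval(level, df=len(data) - 1,
                            loc=data.mean(), scale=stats.sem(data))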
Hypothesis Testing
To construct a Hypothesis Test for a mean.
1. Enter the data
2. Go to Stat -> T-Statistics -> One Sample
3. Click on the variable that you want for the hypothesis test.
4. Click on Next
5. Type in the right hand side of the hypotheses. For example, if H0: μ = 10, then type in 10.
6. Click on the alternative hypothesis arrow and select according to whether it is a left, right or two tailed test.
7. Click on Calculate
The results of the hypothesis test should appear. You can save and export as in the Summary Statistics
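The same test can be reproduced outside StatCrunch with a short Python sketch (added illustration; the alternative argument requires SciPy 1.6 or later):

from scipy import stats

def one_sample_t_test(data, mu0, alternative="two-sided"):
    """Test H0: mu = mu0; alternative is 'two-sided', 'less', or 'greater'."""
    return stats.ttest_1samp(data, popmean=mu0, alternative=alternative)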
Regression Analysis
To find the equation of the least squares regression line, the correlation, a confidence interval for y given x, the P-Value for the slope and correlation, and to plot the scatter plot and regression line:
1. Enter the data in two columns
2. Go to Stat -> Regression -> Simple Linear
3. Select your X-Variable (the independent variable or the predictor)
4. Select your Y-Variable (the dependent variable or the variable you want to make a prediction of.
5. Click Next
6. Click on Predict Y for X = and fill in the value of x that you want a confidence interval for.
7. Click Next
8. Click on Plot the fitted line
9. Click Calculate
The readout should appear.
The fourth line shows the equation of the regression line. Note that is will not have x and y shown, but rather the names that you have given for x and y. For example:
HoursWorked = 18.3345123 - 0.2513213UnitsTaken
The sixth and seventh lines are the correlation and the coefficient of determination.
The P-Value for both the slope and the correlation are given in the first table in the cell with row Slope and column P-Value.
In the third table you will see a confidence interval (CI) and a prediction interval (PI). The confidence interval is the confidence interval for the average of y values that have the specified value
of x. The prediction interval is the confidence interval for a specific value if y given a value of x.
If you click on the next button, you will find the scatter plot and regression line.
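For cross-checking the regression readout, a short Python sketch (added illustration):

from scipy import stats

def simple_linear_regression(x, y):
    """Slope, intercept, correlation r, and the slope's two-sided P-value."""
    result = stats.linregress(x, y)
    return result.slope, result.intercept, result.rvalue, result.pvalue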
The Difference Between With Data and With Summary
There are two ways of making charts. One With Data and the other With Summary. For example if you want to make a pie chart of data that has five greens, ten reds and 19 blues, you can create two
columns. Label the first column "color" and the second column "count". The rows of the first column should contain the words: green, red, and blue. The second column should contain the numbers 5, 10,
and 19. Then when you go to Graphics -> Pie Chart -> With Summary. For Categories In select color and for Counts In select count. The rest should be self explanatory.
If instead the data is given by listing green five time, red ten times blue 19 times all in one column, then go to
Graphics -> Pie Chart -> With Data
And the rest should be self explanatory.
Source: http://www.ltcconline.net/greenl/courses/201/projects/StatCrunchTips.htm (crawled 2014-04-19)
solving equations in the braid group
Is there a systematic way to solve equations in the braid groups? In particular, if $B_3$ is the braid group on three strands with the presentation $\{ a,b\ | \ aba = bab \}$, how do I find $x$ so
that $xaxbx^{-1}b^{-1}x^{-1} = bax^{-1}b^{-1}x$ (if such an $x$ exists)?
Some terminology: if there is an algorithm to solve systems of equations and inequations in a group G, then the universal theory of G is said to be decidable. – HJRW Mar 13 '10 at 21:23
This group is a central extension of $PSL_2\mathbb{Z}$, which is a virtually free group. There is an algorithm to solve equations in such groups, and parameterize the solutions. Since your equation is degree zero in $a,b,x$, if the lift of the solution in $PSL_2\mathbb{Z}$ to $B_3$ solves the equation for one lift, it should work for any other lift. I'm not quite sure though how to determine this uniformly over all lifts of the solution. The solutions are given by Makanin-Razborov diagrams, and they are parameterized by various automorphisms. So I think you just need to check one solution in each equivalence class coming from each orbit.
Technical quibble: I believe Dahmani and Guirardel don't do enough to compute the full Makanin--Razborov diagram. Their algorithm tells you whether or not there is a solution, but doesn't
compute all solutions. There are very few concrete examples in which one actually knows the full Makanin--Razborov diagram. – HJRW Mar 13 '10 at 18:22
I see, thanks for the correction. I saw Gurardel talk about this a couple of years ago, but I haven't actually read the paper. – Ian Agol Mar 14 '10 at 15:57
The braid group is automatic, so has a solvable word problem. You might find this helpful. However, it's not clear to me that this means that equations can actually be solved.
You're quite right. Having solvable word problem is much weaker than having decidable universal theory. For instance, solving the word problem in free groups is very easy, and proving that the universal theory of a free group is decidable is quite hard (see Agol's answer for a link with some references). – HJRW Mar 13 '10 at 21:28
More specifically: here's an example of a fg nilpotent group with undecidable universal theory. ams.org/mathscinet/search/… – HJRW Mar 13 '10 at 21:31
Your equation is $xaxbXBXXbxAB = 1$ (capital letters denoting inverses); thus the substitution $x = b$ answers your particular question. May I ask where this equation comes from?
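(Added check: with $x = b$ the left-hand side of the original equation is $b\,a\,b\,b\,b^{-1}b^{-1}b^{-1} = bab^{-1}$ and the right-hand side is $b\,a\,b^{-1}b^{-1}b = bab^{-1}$, so the two sides agree.)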
Source: http://mathoverflow.net/questions/18009/solving-equations-in-the-braid-group/18077 (crawled 2014-04-17)
Square roots of complex numbers.
November 7th 2010, 12:01 PM
As part of a question I have to find the square root of -24 + 10i. I know that 1 + 5i is one square root, but I have no idea how to find this other than guess and check. I tried converting to
polar form, but yet again it's the argument that's getting me. I know that x = -24, y = 10, and r = 26. But there doesn't seem to be a "nice" argument based on those values. arctan(10/-24) =
157.380... degrees, arccos(-24/26) = 157.380... degrees, and arcsin(10/26) = 157.380... degrees. So it's consistent, but it seems like there's no way it's going to give me an exact value for the
square root, even though I know that there's a 'nice' root (1 + 5i). I thought about maybe just leaving the argument as (arccos(-24/26)) and then manipulating double angle identities, but I
wasn't sure how to properly do this. Any help?
November 7th 2010, 01:00 PM
In this case the answer is simple: $(-1-5i)$.
The two square roots of a complex number are negatives of each other.
November 7th 2010, 02:00 PM
I guess my question wasn't clear enough. I understand that (-1-5i) would also be a square root, I just don't understand how to find +/-(1+5i) in the first place other than guess and check. The
method I was taught was to convert everything into polar form, but (-24+10i) doesn't seem to convert nicely, and decimal answers won't do, I need exact answers.
November 7th 2010, 02:20 PM
mr fantastic
I guess my question wasn't clear enough. I understand that (-1-5i) would also be a square root, I just don't understand how to find +/-(1+5i) in the first place other than guess and check. The
method I was taught was to convert everything into polar form, but (-24+10i) doesn't seem to convert nicely, and decimal answers won't do, I need exact answers.
The easiest approach here is to let the square root be $a + ib$ where a and b are real. Then:
$-24 + 10 i = (a + ib)^2 = (a^2 - b^2) + i(2ab)$.
Equate real and imaginary parts:
$-24 = a^2 - b^2$ .... (1)
$10 = 2ab \Rightarrow 5 = ab \Rightarrow a = \frac{5}{b}$ .... (2)
Substitute (2) into (1): $-24 = \frac{25}{b^2} - b^2 \Rightarrow b^4 - 24b^2 - 25 = 0 \Rightarrow (b^2 - 25)(b^2 + 1) = 0 \Rightarrow b^2 = 25$ (the other solution is rejected - why?).
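For readers who want a computational cross-check of this working, here is a minimal sketch in Python (an editorial addition, assuming sympy is available):

from sympy import symbols, I, solve, expand

# Equate real and imaginary parts of (a + ib)^2 = -24 + 10i.
a, b = symbols('a b', real=True)
print(solve([a**2 - b**2 + 24, 2*a*b - 10], [a, b]))  # expect (1, 5) and (-1, -5)
print(expand((1 + 5*I)**2))                           # -24 + 10*I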
November 7th 2010, 05:56 PM
Prove It
Another approach is to remember that there are 2 square roots and they are evenly spaced around a circle. So the other square root has the same length and differs from the first by an angle of $\displaystyle \pi$.
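In symbols: if $z = re^{i\theta}$, the square roots are $\sqrt{r}\,e^{i\theta/2}$ and $\sqrt{r}\,e^{i(\theta/2+\pi)} = -\sqrt{r}\,e^{i\theta/2}$, which is why the two roots are negatives of each other.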
November 7th 2010, 08:20 PM
The easiest approach here is to let the square root be $a + ib$ where a and b are real. Then:
$-24 + 10 i = (a + ib)^2 = (a^2 - b^2) + i(2ab)$.
Equate real and imaginary parts:
$-24 = a^2 - b^2$ .... (1)
$10 = 2ab \Rightarrow 5 = ab \Rightarrow a = \frac{5}{b}$ .... (2)
Substitute (2) into (1): $-24 = \frac{25}{b^2} - b^2 \Rightarrow b^4 - 24b^2 - 25 = 0 \Rightarrow (b^2 - 25)(b^2 + 1) = 0 \Rightarrow b^2 = 25$ (the other solution is rejected - why?).
So why is the one solution rejected?
$(b^2 + 1) = 0 \Rightarrow b^2 = -1 \Rightarrow b = i, -i$
I've substituted these values back into equation (2) to get a value for a, $a = \frac{5}{b} = \frac{5}{i} = \frac{5i}{i^2} = \frac{5i}{-1} = -5i$ and then used that value to form the complex
number $a +bi = -5i + (i)i = -5i -1 = -1 - 5i$ which is one of the roots I'm looking for. Using $b = -i$ just yields $1 + 5i$ as a root. Is it thrown out because a is supposed to represent the
real part, but in that case it becomes the imaginary part? Or is it maybe because it represents the exact same roots as the other?
November 7th 2010, 10:34 PM
mr fantastic
So why is the one solution rejected?
$(b^2 + 1) = 0 \Rightarrow b^2 = -1 \Rightarrow b = i, -i$
I've substituted these values back into equation (2) to get a value for a, $a = \frac{5}{b} = \frac{5}{i} = \frac{5i}{i^2} = \frac{5i}{-1} = -5i$ and then used that value to form the complex
number $a +bi = -5i + (i)i = -5i -1 = -1 - 5i$ which is one of the roots I'm looking for. Using $b = -i$ just yields $1 + 5i$ as a root. Is it thrown out because a is supposed to represent the
real part, but in that case it becomes the imaginary part? Or is it maybe because it represents the exact same roots as the other?
Re-read my previous reply:
$a + ib$ where a and b are real.
I will let you think about why a and b are required to be real.
$b = 5 \Rightarrow a = 1$, $b = -5 \Rightarrow a = -1$. Therefore the two roots are 1 + 5i and -1 - 5i.
Sturtevant Precalculus Tutor
Find a Sturtevant Precalculus Tutor
...Organic chemistry is specifically difficult because it is brute memorization as well as understanding the types of reactions and mechanisms that you need to know. I can help you to learn why
things happen on a mechanism and reaction level. A basic understanding of general chemistry is also impo...
46 Subjects: including precalculus, chemistry, geometry, statistics
I have been a math teacher for 25 years, having taught first at the middle school level for five years, then the last 20 at the high school level. I have been teaching mainly
Algebra 2/Trigonometry, CP Geometry, Statistics and Algebra, and I would definitely be able to help with your...
12 Subjects: including precalculus, calculus, ESL/ESOL, statistics
...Students who have trouble with basic concepts need lots of enrichment activities with hands on manipulates and examples from real life. An experienced math teacher teaching students with
different learning styles can help. I have a Certification in Math by the State of Wisconsin to teach grades 6 through 12.
9 Subjects: including precalculus, calculus, geometry, algebra 1
...Math subjects I can assist with include basic math, algebra, statistics, geometry, trigonometry, and pre-calculus. Finally, I also have an extensive background in Spanish ranging from Spanish
classes in high school to Spanish language, grammar, and linguistics classes in college. My tutoring radius is 20 miles.
15 Subjects: including precalculus, chemistry, Spanish, statistics
...I will be graduating this coming May (2014) with a degree in Elementary Education and a minor in Mathematics! I also have experience with music since I played the violin for 9 years. My primary
goal is to work with middle school math students, but I have experience with students of all ages and can help with a variety of subjects!
25 Subjects: including precalculus, reading, geometry, algebra 1
[Haskell-cafe] I Need a Better Functional Language!
Ryan Ingram ryani.spam at gmail.com
Tue Apr 10 00:00:57 CEST 2012
A concurring opinion here, and an example.
iff :: Bool -> a -> a -> a
iff True x _ = x
iff False _ x = x
f, g :: Bool -> Bool
f x = x
g x = iff x True False
Are these two functions equal? I would say yes, they are. Yet once you
can pattern match on functions, you can easily tell these functions apart,
and create a function
h :: (Bool -> Bool) -> Bool
such that h f => True but h g => False.
-- ryan
On Thu, Apr 5, 2012 at 8:52 AM, Dan Doel <dan.doel at gmail.com> wrote:
> On Thu, Apr 5, 2012 at 10:14 AM, Grigory Sarnitskiy <sargrigory at ya.ru>
> wrote:
> > First, what are 'functions' we are interested at? It can't be the usual
> set-theoretic definition, since it is not constructive. The constructive
> definition should imply functions that can be constructed, computed. Thus
> these are computable functions that should be of our concern. But
> computable functions in essence are just a synonym for programs.
> This is a flawed premise. The point of working with functions is
> abstraction, and that abstraction is given by extensional equality of
> functions:
> f = g iff forall x. f x = g x
> So functions are not synonymous with programs or algorithms, they
> correspond to an equivalence class of algorithms that produce the same
> results from the same starting points. If you can access the source of
> functions within the language, this abstraction has been broken.
> And this abstraction is useful, because it allows you to switch freely
> between programs that do the same thing without having to worry that
> someone somewhere is relying on the particular algorithm or source.
> This is the heart of optimization and refactoring and the like.
> There are places for metaprogramming, or perhaps even a type of
> algorithms that can be distinguished by means other than the functions
> they compute. But to say that functions are that type is false, and
> that functions should mean that is, I think, wrong headed.
> -- Dan
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
SPSSX-L archives -- February 1999 (#310), LISTSERV at the University of Georgia
Date: Tue, 23 Feb 1999 18:11:29 -0300
Reply-To: "Hector E. Maletta" <hmaletta@OVERNET.COM.AR>
Sender: "SPSSX(r) Discussion" <SPSSX-L@UGA.CC.UGA.EDU>
From: "Hector E. Maletta" <hmaletta@OVERNET.COM.AR>
Subject: Re: Coefficient of Variation
Comments: To: shailendra@CANADA.COM
Content-Type: text/plain; charset=us-ascii
It's the ratio of standard deviation to the mean. More than 'precision'
it reflects the relative dispersion of data around the mean. Besides, in
a normal distribution 95% of the cases lie within 1.96 standard
deviations around the mean, thus if the coefficient is, say, 0.15, and
the distribution is normal, then about 95% of the cases will lie in the
interval delimited by (1-0.30)*mean and (1+0.30)*mean. Many
psychological, medical and biological variables are known or expected to
have a nearly normal distribution of frequencies, and thus this concept
may be handy. Sociological and economic variables, though, have usually
non-normal distributions.
All this, of course, is not to be confused with the distribution of
sampling errors, which is always expected to be normal if the samples
are randomly selected and of sufficient size. The standard deviation of
the sampling distribution (variability among many hypothetical samples
of the same size) is usually estimated as the standard deviation
observed in your sample, divided by the square root of the sample size.
But in this context the notion of a coefficient of variation, though
feasible, is not commonly used.
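As a concrete illustration of the computation (an editorial addition; the sample values below are invented, and Python's standard library is assumed):

import statistics

data = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]  # illustrative sample
mean = statistics.mean(data)
sd = statistics.stdev(data)                  # sample standard deviation
cv = sd / mean                               # the coefficient of variation
print(round(cv, 3))
# For roughly normal data, about 95% of cases lie between
# mean*(1 - 1.96*cv) and mean*(1 + 1.96*cv), as described above.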
Hector Maletta
Universidad del Salvador
Buenos Aires, Argentina
shailendra@CANADA.COM wrote:
> Recently I came across a term called coefficient of variation, which measures the precision of the estimated data.
> I am curious how important it is to know the coefficient of variation in survey analysis, how it is calculated at the sample level, and whether there is any percentage benchmark for accepting data for analysis.
> Any reference or comments are thankfully welcome.
differentiating exponential functions
October 26th 2009, 03:23 PM #1
It is true that $e^x$ is the only exponential function whose graph is also the graph of its derivative. But how should I go about finding the derivative of $e^{-x}$ or $e^{x/10}$?
Actually, any function of the form $ae^x$ has that property. To solve your problems, use the chain rule.
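For the two functions asked about, the chain rule gives $\frac{d}{dx}e^{-x} = -e^{-x}$ and $\frac{d}{dx}e^{x/10} = \frac{1}{10}e^{x/10}$; in general, $\frac{d}{dx}e^{g(x)} = g'(x)\,e^{g(x)}$.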
What is the current status for Lusztig's positivity conjecture for symmetric Cartan datum?
This is related to the earlier question here
In Conjecture 25.4.2 in his "Introduction to Quantum Groups," Lusztig conjectures that "If the Cartan datum is symmetric, then the structure constants $m_{a,b}^c$, $\hat m_c^{a,b}$ are in $N[v,v^{-1}]$."
In what cases is this proven? I only care about classical groups, specifically split reductive groups over $Z$ as constructed via Lusztig's canonical bases (specialize at $v=1$). I would love it if
the structure constants were non-negative, so that there were no troublesome signs floating around in the antipode. When is this known to be the case? My googling leads me to suspect something is
known in the simply-laced case. Anything more general yet? References? I'm not interested in the non-symmetric case.
Marty- this question would be a lot better if you provided a little more reference for those of us who don't have Lusztig's book at hand. Perhaps you could mention what object you're looking at the
canonical basis of? There are quite a few. – Ben Webster♦ Jun 27 '11 at 4:55
I'm sorry - part of my trouble is that I'm not familiar with other literature on canonical bases. I'm talking about the canonical basis of Lusztig's modification of the universal enveloping algebra
-- the basis $\dot B$ of $\dot U$. This is the modification in which there is a basis element $1_\lambda$ for each $\lambda$ in the cocharacter lattice $Y$. This is a system of idempotents for the
nonunital algebra $\dot U$ (making it an algebra with approximate identity). – Marty Jun 27 '11 at 12:01
1 Answer
You should take this with a grain of salt, but I would guess that this is stated in the literature in type A and no other types. It follows in type A from the Beilinson-Lusztig-MacPherson construction, I believe. This is discussed a bit in this paper of Yiqiang Li.
For ADE type, a clever person can derive this from Theorem 1.19 in my paper KI-HRT II; it isn't stated explicitly (I'm writing up a separate canonical basis paper at the moment), but it only requires a small twist on the current arguments. (Essentially, one must show that Theorem 1.19 implies that the orthodox basis of $\dot U$ is canonical, and orthodox bases always have positive structure coefficients.)
For arbitrary symmetric types, this is trickier (the discussion above uses very strongly that highest and lowest weight reps coincide in finite type). At the moment, I think the main roadblock is proving 4.13 of Li's paper; I think I know how to do this, but it's not written up yet, and hasn't been completely vetted for dumb mistakes. That would establish this for all symmetric types.
Thanks Ben! Looking at two papers (Li's and yours) is much better than my previous hunting around. Fortunately for me, I realized yesterday that I can work around the positivity
conjecture, so the stakes aren't so high. But it also seems like an incredibly interesting problem, and I'm glad progress is being made. – Marty Jun 30 '11 at 11:32
Task Five at Um Habiba Primary school
TASK FIVE: What is Teaching all About?
A. description of class activities
With the help of the mentor teacher, choose two of the classrooms and complete your observations.
Grade: 1 Subject: Arabic Duration of the class period: 2
· What was the main topic of the lesson?
The letter L
· How did the teacher begin or introduce the topic to the students?
She starts by asking them to pay attention to the words that she will say. Then they tell her the letter that is repeated in all the words.
· What kinds of activities did the teacher do? And what did the students do?
The teacher and the students read the lesson together. She asks them some questions about it. Then each student circles the letter (L) in the lesson. They also point to the pictures that include the letter (L).
· Closing Activities (How did the lesson end? For example; did the teacher review, make comments, play a game, give a quiz, give students an assignment, homework, etc.)
She gives them worksheets that include a sentence which the students have to rewrite, writing the letter (L) in a different color.
Grade: 1 Subject: Math Duration of the class period: 3
· What was the main topic of the lesson?
Column graphing.
· How did the teacher begin or introduce the topic to the students?
She asks them to count to 9. Then she asks them whether they want to know which sweet is the favorite of most of the girls in the class, and tells them that if they want to know, she will teach them how to find out.
· What kinds of activities did the teacher do? And what did the students do?
She plays a game with them on the computer. Through that game they understand the lesson, so she gives them a worksheet to complete.
· Closing Activities (How did the lesson end? For example; did the teacher review, make comments, play a game, give a quiz, give students an assignment, homework, etc.)
She gave them a worksheet with some questions as homework. She also tells them that if anyone does it correctly and colors it beautifully, she will hang the sheet on the class board.
b. reflections and comments
a. What did you learn about teaching today?
· How to control the lesson.
· How to keep students interested in the lesson.
· How to encourage students to do their homework.
Lauderhill, FL Algebra Tutor
Find a Lauderhill, FL Algebra Tutor
While I was a high school student I began tutoring younger students in math. Over the years I have continued tutoring on and off while increasing the variety of subjects I teach. Now that I have
completed my BA in Biology from Brandeis University and my Master's in Biomedical Science from FAU, I hope to use my science knowledge to teach and inspire students.
10 Subjects: including algebra 1, algebra 2, chemistry, biology
...Specific subjects in physics that I am comfortable teaching are mechanics, kinematics, work and energy, momentum, vectors, angular quantities, fluids, thermodynamics, optics, electronics,
magnetism, quantum physics, and more... I have had success tutoring several students in algebra. My methods ...
27 Subjects: including algebra 1, algebra 2, physics, chemistry
...I firmly believe that with a proper structure and customized guideline every student has the ability to succeed in their class. My love for teaching stems from my passion to learn. I was a
Biology major in college and hope to one day go to medical school.
15 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I graduated from Nova Southeastern with my PharmD in 2007 and my Bachelor's in Biology in 2001. If I learned anything, it's that study skills are important for success. I would love to teach
your students everything I've learned.
8 Subjects: including algebra 1, algebra 2, Spanish, elementary math
...Lots of topics and concepts, lots of methods, lots of tough problem solving. Any weakness that a student has in Algebra will be exposed here and must be addressed to succeed in this course.
Success with this course will make Calculus a lot easier.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
Geometry & Topology
Beautiful Geometry
Lowest new price: $17.38
Lowest used price: $19.31
List price: $27.95
Author: Eli Maor
If you've ever thought that mathematics and art don't mix, this stunning visual history of geometry will change your mind. As much a work of art as a book about mathematics, Beautiful Geometry
presents more than sixty exquisite color plates illustrating a wide range of geometric patterns and theorems, accompanied by brief accounts of the fascinating history and people behind each. With
artwork by Swiss artist Eugen Jost and text by acclaimed math historian Eli Maor, this unique celebration of geometry covers numerous subjects, from straightedge-and-compass constructions to
intriguing configurations involving infinity. The result is a delightful and informative illustrated tour through the 2,500-year-old history of one of the most important and beautiful branches of mathematics.
Sacred Geometry (Wooden Books)
Lowest new price: $6.94
Lowest used price: $4.99
List price: $12.00
Author: Miranda Lundy
Geometry is one of a group of special sciences - Number, Music and Cosmology are the others - found identically in nearly every culture on earth. In this small volume, Miranda Lundy presents a unique
introduction to this most ancient and timeless of universal sciences.
Sacred Geometry demonstrates what happens to space in two dimensions - a subject last flowering in the art, science and architecture of the Renaissance and seen in the designs of Stonehenge, mosque
decorations and church windows. With exquisite hand-drawn images throughout showing the relationship between shapes, the patterns of coin circles, and the definition of the golden section, it will
forever alter the way in which you look at a triangle, hexagon, arch, or spiral.
Geometry (Barron's Regents Exams and Answers)
Lowest new price: $2.50
Lowest used price: $0.01
List price: $7.99
Author: Lawrence S. Leff M.S.
This edition includes the most recent Geometry Regents tests through August 2013. These ever popular guides contain study tips, test-taking strategies, score analysis charts, and other valuable
features. They are an ideal source of practice and test preparation. The detailed answer explanations make each exam a practical learning experience. Topics reviewed include the language of geometry;
parallel lines and quadrilaterals and coordinates; similarity; right triangles and trigonometry; circles and angle measurement; transformation geometry; locus and coordinates; and an introduction to
solid geometry.
The Fractal Geometry of Nature
Lowest new price: $35.83
Lowest used price: $18.34
List price: $60.00
Author: Benoit B. Mandelbrot
Brand: Baker and Taylor
Clouds are not spheres, mountains are not cones, and lightning does not travel in a straight line. The complexity of nature's shapes differs in kind, not merely degree, from that of the shapes of
ordinary geometry, the geometry of fractal shapes.
Now that the field has expanded greatly with many active researchers, Mandelbrot presents the definitive overview of the origins of his ideas and their new applications. The Fractal Geometry of
Nature is based on his highly acclaimed earlier work, but has much broader and deeper coverage and more extensive illustrations.
Imagine an equilateral triangle. Now, imagine smaller equilateral triangles perched in the center of each side of the original triangle--you have a Star of David. Now, place still smaller equilateral
triangles in the center of each of the star's 12 sides. Repeat this process infinitely and you have a Koch snowflake, a mind-bending geometric figure with an infinitely large perimeter, yet with a
finite area. This is an example of the kind of mathematical puzzles that this book addresses.
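The perimeter and area claims are easy to verify numerically; here is a minimal sketch in Python (an editorial illustration, starting from an equilateral triangle of side 1):

# Each Koch step replaces every segment by 4 segments of 1/3 the length,
# gluing one new small triangle onto each old segment.
perimeter, area, side, new_triangles = 3.0, 3**0.5 / 4, 1.0, 3
for _ in range(10):
    side /= 3
    perimeter *= 4 / 3
    area += new_triangles * (3**0.5 / 4) * side**2
    new_triangles *= 4
print(perimeter, area)  # perimeter grows without bound; area -> 2*sqrt(3)/5, about 0.6928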
The Fractal Geometry of Nature is a mathematics text. But buried in the deltas and lambdas and integrals, even a layperson can pick out and appreciate Mandelbrot's point: that somewhere in
mathematics, there is an explanation for nature. It is not a coincidence that fractal math is so good at generating images of cliffs and shorelines and capillary beds.
Topology (2nd Edition)
Lowest new price: $143.12
Lowest used price: $37.39
List price: $183.33
Author: James Munkres
This introduction to topology provides separate, in-depth coverage of both general topology and algebraic topology. Includes many examples and figures. GENERAL TOPOLOGY. Set Theory and Logic.
Topological Spaces and Continuous Functions. Connectedness and Compactness. Countability and Separation Axioms. The Tychonoff Theorem. Metrization Theorems and paracompactness. Complete Metric Spaces
and Function Spaces. Baire Spaces and Dimension Theory. ALGEBRAIC TOPOLOGY. The Fundamental Group. Separation Theorems. The Seifert-van Kampen Theorem. Classification of Surfaces. Classification of
Covering Spaces. Applications to Group Theory. For anyone needing a basic, thorough, introduction to general and algebraic topology and its applications.
Common Core Math Workouts, Grade 8
Lowest new price: $5.24
Lowest used price: $5.56
List price: $9.99
Author: Karice Mace
Brand: Carson-Dellosa
Each page in Common Core Math Workouts for grade 8 contains two workouts--one for skills practice and one for applying those skills to solve a problem. These workouts make great warm-up or
assessment exercises. They can be used to set the stage and teach the content covered by the standards. They can also be used to assess what students have learned after the content has been taught.
Content is aligned with the Common Core State Standards for Mathematics and includes Geometry, Ratio and Proportional Relationships, The Number System, Expressions and Equations, and Statistics and
Probability. The workbooks in the Common Core Math Workouts series are designed to help teachers and parents meet the challenges set forth by the Common Core State Standards. They are filled with
skills practice and problem-solving practice exercises that correspond to each standard. With a little time each day, your students will become better problem solvers and will acquire the skills they
need to meet the mathematical expectations for their grade level.
The Humongous Book of Geometry Problems
Lowest new price: $11.32
Lowest used price: $5.99
List price: $21.95
Author: W. Michael Kelley
An ingenious problem-solving solution for befuddled math students.
A bestselling math book author takes what appears to be a typical geometry workbook, full of solved problems, and makes notes in the margins adding missing steps and simplifying concepts so that
otherwise baffling solutions are made perfectly clear. By learning how to interpret and solve problems as they are presented in courses, students become fully prepared to solve any obscure problem.
No more solving by trial and error!
• Includes 1000 problems and solutions
• Annotations throughout the text clarify each problem and fill in missing steps needed to reach the solution, making this book like no other geometry workbook on the market
• The previous two books in the series on calculus and algebra sell very well
Let's Review: Geometry (Barron's Review Course)
Lowest new price: $4.98
Lowest used price: $0.01
List price: $13.99
Author: Lawrence S. Leff M.S.
This classroom text presents a detailed review of all topics prescribed as part of the high school curriculum. Separate chapters analyze and explain: the language of geometry; parallel lines and
polygons; congruent triangles and inequalities; special quadrilaterals and coordinates; similarity (including ratio and proportion, and proving products equal); right triangles and trigonometry;
circles and angle measurement; transformation geometry; locus and coordinates; and working in space (an introduction to solid geometry). Each chapter includes practice exercises with answers provided
at the back of the book.
Proofs from THE BOOK
Lowest new price: $32.50
Lowest used price: $36.97
List price: $49.95
Author: Martin Aigner
Brand: Springer
This revised and enlarged fourth edition features five new chapters, which treat classical results such as the "Fundamental Theorem of Algebra", problems about tilings, but also quite recent proofs,
for example of the Kneser conjecture in graph theory. The new edition also presents further improvements and surprises, among them a new proof for "Hilbert's Third Problem".
From the Reviews:
"... Inside [this book] is indeed a glimpse of mathematical heaven, where clever insights and beautiful ideas combine in astonishing and glorious ways. There is vast wealth within its pages, one gem
after another. ..., but many [proofs] are new and brilliant proofs of classical results. ...Aigner and Ziegler... write: "... all we offer is the examples that we have selected, hoping that our
readers will share our enthusiasm about brilliant ideas, clever insights and wonderful observations." I do. ... " AMS Notices 1999
"... the level is close to elementary ... the proofs are brilliant. ..." LMS Newsletter 1999
Sacred Geometry: Philosophy & Practice (Art and Imagination)
Lowest new price: $10.52
Lowest used price: $7.01
List price: $19.95
Author: Robert Lawlor
An introduction to the geometry which, as modern science now confirms, underlies the structure of the universe.
The thinkers of ancient Egypt, Greece and India recognized that numbers governed much of what they saw in their world and hence provided an approach to its divine creator. Robert Lawlor sets out the
system that determines the dimension and the form of both man-made and natural structures, from Gothic cathedrals to flowers, from music to the human body. By also involving the reader in practical
experiments, he leads with ease from simple principles to a grasp of the logarithmic spiral, the Golden Proportion, the squaring of the circle and other ubiquitous ratios and proportions.
Art and Imagination: These large-format, gloriously-illustrated paperbacks cover Eastern and Western religion and philosophy, including myth and magic, alchemy and astrology. The distinguished
authors bring a wealth of knowledge, visionary thinking and accessible writing to each intriguing subject. 202 illustrations and diagrams, 56 in two colors
Charles Émile Picard
Born: 24 July 1856 in Paris, France
Died: 11 December 1941 in Paris, France
Émile Picard's father was the manager of a silk factory who died during the siege of Paris in 1870. The siege was a consequence of the Franco-Prussian War which began on 19 July 1870. It went badly
for France and on 19 September 1870 the Germans began a siege of Paris. This was a desperate time for the inhabitants of the town who killed their horses, cats and dogs for food. It was during this
siege that Émile's father died. Paris surrendered on 28 January 1871 and The Treaty of Frankfurt, signed on 10 May 1871, was a humiliation for France.
Picard's mother, the daughter of a medical doctor, was put in an extremely difficult position when her husband died. As well as Émile, she had a second young son, and in order to support them through
their education she had to find employment. Only her determination to give her sons a good start, despite the tragedy, allowed Émile to receive the education which gave him the chance to achieve the
highest international stading in mathematics. Picard's secondary education was at the Lycée Napoléon, later called the Lycée Henri IV. Strangely he was a brilliant pupil at almost all his subjects,
particularly in translating Greek and Latin poetry, but he disliked mathematics. He himself wrote that he hated geometry but he:-
... learned it by heart in order to avoid being punished.
It was only during the vacation after completing his secondary studies that Picard read an algebra book and suddenly he became fascinated in mathematics. He took the entrance examinations for École
Polytechnique and École Normale Supérieure; he was placed second and first respectively in the two examinations. Hadamard wrote in [8]:-
As any young Frenchman of our time who was gifted in science, he was obliged to choose between the École Polytechnique which, in principle, prepared one to be an engineer, and the École Normale,
with its pure scientific orientation. He was ranked first and chose the latter. It is said that he made this decision after an exciting visit to Pasteur, during which the father of bacteriology
spoke about pure science in such lofty terms that the young man was completely persuaded.
Picard received his agrégation in 1877, being placed first. He remained at the École Normale Supérieure for a year where he was employed as an assistant. He was appointed lecturer at the University
of Paris in 1878 and then professor at Toulouse in 1879. In 1881 he returned to Paris when appointed maître de conférence in mechanics and astronomy at the École Normale.
In 1881 Picard was nominated for membership of the mathematics section of the Académie des Sciences. It says much of the extraordinary ability that he was showing at such a young age that he was
nominated. He had already proved two important theorems which are both today known under Picard's name, yet it was still a little soon to gain admission to the prestigious academy and he would have
to wait a few more years. In this year of his first nomination he married Hermite's daughter. Picard and his wife had three children, a daughter and two sons, who were all killed in World War I. His
grandsons were wounded and captured in World War II.
In 1885 Picard was appointed to the chair of differential calculus at the Sorbonne in Paris when the chair fell vacant on the death of Claude Bouquet. However a university regulation prevented anyone
below the age of thirty holding a chair. The regulations were circumvented by making Picard his own suppléant until he reached the age of thirty which was in the following year. He requested
exchanging his chair for that of analysis and higher algebra in 1897 so that he was able to train research students.
Picard made his most important contributions in the fields of analysis, function theory, differential equations, and analytic geometry. He used methods of successive approximation to show the
existence of solutions of ordinary differential equations solving the Cauchy problem for these differential equations. Starting in 1890, he extended properties of the Laplace equation to more general
elliptic equations. Picard's solution was represented in the form of a convergent series.
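This method of successive approximation is now usually called Picard iteration: $y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\,dt$. A minimal symbolic sketch in Python (an editorial illustration, assuming sympy; the model problem $y' = y$, $y(0) = 1$ is chosen purely for simplicity):

from sympy import symbols, integrate, sympify

x, t = symbols('x t')

def picard(f, y0, x0, steps):
    # Iterate y <- y0 + integral of f(t, y(t)) dt from x0 to x.
    y = sympify(y0)
    for _ in range(steps):
        y = y0 + integrate(f(t, y.subs(x, t)), (t, x0, x))
    return y

# For y' = y, y(0) = 1 the iterates are the partial sums of e^x:
print(picard(lambda t, y: y, 1, 0, 4))  # x**4/24 + x**3/6 + x**2/2 + x + 1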
In 1879 he proved what is now known as Picard's theorem: a non-constant entire function takes every complex value, with at most one exception (and a transcendental entire function takes every such value infinitely often). Picard used the theory of Hermite's modular functions in the proof of this important result.
Building on work by Abel and Riemann, Picard's study of the integrals attached to algebraic surfaces and related topological questions developed into an important part of algebraic geometry. On this
topic he published, with Georges Simart, Théorie des fonctions algébriques de deux variables indépendantes which was a two volume work, the first volume appearing in 1897 and the second in 1906.
Picard also discovered a group, now called the Picard group, which acts as a group of transformations on a linear differential equation.
His three volume masterpiece Traité d'analyse was published between 1891 and 1896. The treatise [1]:-
... immediately became a classic and was revised with each subsequent edition. The work was accessible to many students through its range of subjects, clear exposition, and lucid style. Picard
examined several specific cases before discussion his general theory.
Picard also applied analysis to the study of elasticity, heat and electricity. He studied the transmission of electrical pulses along wires finding a beautiful solution to the problem. As can be seen
his contributions were both wide ranging and important.
Among the honours given to Picard was his election to the Académie des Sciences in 1889, eight years after he was first unsuccessfully nominated. He later served the Academy as its permanent
secretary from 1917 until his death in 1941. In this role [1]:-
... he wrote an annual notice on either a scientist or a subject of current interest. He also wrote many prefaces to mathematical books and participated in the publication of works of C Hermite
and G H Halphen.
Picard was awarded the Poncelet Prize in 1886 and the Grand Prix des Sciences Mathématiques in 1888. In addition to honorary doctorates from five universities and honorary membership of thirty-seven
learned societies he received the Grande Croix de la Légion d'Honneur in 1932 and the Mittag-Leffler Gold Medal in 1937. He became a member of the Académie Française in 1924. Another honour was given
to him was making him President of the International Congress of Mathematicians at Strasbourg in September 1920.
Hadamard had this to say of Picard as a teacher when he addressed him in 1937:-
You were able to make [mechanics] almost interesting; I have always wondered how you went about this, because I was never able to do it when it was my turn. But you also escaped, you introduced
us not only to hydrodynamics and turbulence, but to many other theories of mathematical physics and even of infinitesimal geometry; all this in lectures, the most masterly I have heard in my
opinion, where there was not one word too many nor one word too little, and where the essence of the problem and the means used to overcome it appeared crystal clear, with all secondary details
treated thoroughly and at the same time consigned to their right place.
Hadamard wrote in [8]:-
A striking feature of Picard's scientific personality was the perfection of his teaching, one of the most marvellous, if not the most marvellous, that I have ever known.
It is a remarkable fact that between 1894 and 1937 he trained over 10000 engineers who were studying at the École Centrale des Arts et Manufactures.
Article by: J J O'Connor and E F Robertson
List of References (13 books/articles)
Honours awarded to Émile Picard
LMS Honorary Member 1898
Speaker at International Congress 1908
Fellow of the Royal Society 1909
JOC/EFR © March 2001 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
SBEC Competencies
DOMAIN I - NUMBER CONCEPTS (Module One)
Competency 001
The teacher understands the structure of number systems, the development of a sense of quantity, and the relationship between quantity and symbolic representations.
Analyzes the structure of numeration systems and the roles of place value and zero in the base ten system.
Understands the relative magnitude of whole numbers, integers, rational numbers, and real numbers.
Demonstrates an understanding of a variety of models for representing numbers (e.g., fraction strips, diagrams, patterns, shaded regions, number lines).
Demonstrates an understanding of equivalency among different representations of rational numbers.
Selects appropriate representations of real numbers (e.g., fractions, decimals, percents, roots, exponents, scientific notation) for particular situations.
Understands the characteristics of the set of whole numbers, integers, rational numbers, real numbers, and complex numbers (e.g., commutativity, order, closure, identity elements, inverse elements,
Demonstrates an understanding of how some situations that have no solution in one number system (e.g., whole numbers, integers, rational numbers) have solutions in another number system (e.g., real
numbers, complex numbers).
Competency 002
The teacher understands number operations and computational algorithms.
Works proficiently with real and complex numbers and their operations.
Analyzes and describes relationships between number properties, operations, and algorithms for the four basic operations involving integers, rational numbers, and real numbers.
Uses a variety of concrete and visual representations to demonstrate the connections between operations and algorithms.
Justifies procedures used in algorithms for the four basic operations with integers, rational numbers, and real numbers, and analyzes error patterns that may occur in their application.
Relates operations and algorithms involving numbers to algebraic procedures (e.g., adding fractions to adding rational expressions, division of integers to division of polynomials).
Extends and generalizes the operations on rationals and integers to include exponents, their properties, and their applications to the real numbers.
Competency 003
The teacher understands ideas of number theory and uses numbers to model and solve problems within and outside of mathematics.
Demonstrates an understanding of ideas from number theory (e.g., prime factorization, greatest common divisor) as they apply to whole numbers, integers, and rational numbers, and uses these ideas in
problem situations.
Uses integers, rational numbers, and real numbers to describe and quantify phenomena such as money, length, area, volume, and density.
Applies knowledge of place value and other number properties to develop techniques of mental mathematics and computational estimation.
Applies knowledge of counting techniques such as permutations and combinations to quantify situations and solve problems.
Applies properties of the real numbers to solve a variety of theoretical and applied problems.
DOMAIN II - PATTERNS AND ALGEBRA (Module Two)
Competency 004
The teacher understands and uses mathematical reasoning to identify, extend, and analyze patterns and understands the relationships among variables, expressions, equations, inequalities, relations,
and functions.
Uses inductive reasoning to identify, extend, and create patterns using concrete models, figures, numbers, and algebraic expressions.
Formulates implicit and explicit rules to describe and construct sequences verbally, numerically, graphically, and symbolically.
Makes, tests, validates, and uses conjectures about patterns and relationships in data presented in tables, sequences, or graphs.
Gives appropriate justification of the manipulation of algebraic expressions.
Illustrates the concept of a function using concrete models, tables, graphs, and symbolic and verbal representations.
Uses transformations to illustrate properties of functions and relations and to solve problems.
Competency 005
The teacher understands and uses linear functions to model and solve problems.
Demonstrates an understanding of the concept of linear function using concrete models, tables, graphs, and symbolic and verbal representations.
Demonstrates an understanding of the connections among linear functions, proportions, and direct variation.
Determines the linear function that best models a set of data.
Analyzes the relationship between a linear equation and its graph.
Uses linear functions, inequalities, and systems to model problems.
Uses a variety of representations and methods (e.g., numerical methods, tables, graphs, algebraic techniques) to solve systems of linear equations and inequalities.
Demonstrates an understanding of the characteristics of linear models and the advantages and disadvantages of using a linear model in a given situation.
Competency 006
The teacher understands and uses nonlinear functions and relations to model and solve problems.
Uses a variety of methods to investigate the roots (real and complex), vertex, and symmetry of a quadratic function or relation.
Demonstrates an understanding of the connections among geometric, graphic, numeric, and symbolic representations of quadratic functions.
Analyzes data and represents and solves problems involving exponential growth and decay.
Demonstrates an understanding of the connections among proportions, inverse variation, and rational functions.
Understands the effects of transformations such as f(x ± c) on the graph of a nonlinear function f(x).
Applies properties, graphs, and applications of nonlinear functions to analyze, model, and solve problems.
Uses a variety of representations and methods (e.g., numerical methods, tables, graphs, algebraic techniques) to solve systems of quadratic equations and inequalities.
Understands how to use properties, graphs, and applications of nonlinear relations including polynomial, rational, radical, absolute value, exponential, logarithmic, trigonometric, and piecewise
functions and relations to analyze, model, and solve problems.
Competency 007
The teacher uses and understands the conceptual foundations of calculus related to topics in middle school mathematics.
Relates topics in middle school mathematics to the concept of limit in sequences and series.
Relates the concept of average rate of change to the slope of the secant line and instantaneous rate of change to the slope of the tangent line.
Relates topics in middle school mathematics to the area under a curve.
Demonstrates an understanding of the use of calculus concepts to answer questions about rates of change, areas, volumes, and properties of functions and their graphs.
DOMAIN III - GEOMETRY AND MEASUREMENT (Modules Three and Four)
Competency 008
The teacher understands measurement as a process.
Selects and uses appropriate units of measurement (e.g., temperature, money, mass, weight, area, capacity, density, percents, speed, acceleration) to quantify, compare, and communicate information.
Develops, justifies, and uses conversions within measurement systems.
Applies dimensional analysis to derive units and formulas in a variety of situations (e.g., rates of change of one variable with respect to another) and to find and evaluate solutions to problems.
Describes the precision of measurement and the effects of error on measurement.
Applies the Pythagorean theorem, proportional reasoning, and right triangle trigonometry to solve measurement problems.
Competency 009
The teacher understands the geometric relationships and axiomatic structure of Euclidean geometry.
Understands concepts and properties of points, lines, planes, angles, lengths, and distances.
Analyzes and applies the properties of parallel and perpendicular lines.
Uses the properties of congruent triangles to explore geometric relationships and prove theorems.
Describes and justifies geometric constructions made using a compass and straight edge and other appropriate technologies.
Applies knowledge of the axiomatic structure of Euclidean geometry to justify and prove theorems.
Competency 010
The teacher analyzes the properties of two- and three-dimensional figures.
Uses and understands the development of formulas to find lengths, perimeters, areas, and volumes of basic geometric figures.
Applies relationships among similar figures, scale, and proportion and analyzes how changes in scale affect area and volume measurements.
Uses a variety of representations (e.g., numeric, verbal, graphic, symbolic) to analyze and solve problems involving two- and three-dimensional figures such as circles, triangles, polygons,
cylinders, prisms, and spheres.
Analyzes the relationship among three-dimensional figures and related two-dimensional representations (e.g., projections, cross-sections, nets) and uses these representations to solve problems.
Competency 011
The teacher understands transformational geometry and relates algebra to geometry and trigonometry using the Cartesian coordinate system.
Describes and justifies geometric constructions made using a reflection device and other appropriate technologies.
Uses translations, reflections, glide-reflections, and rotations to demonstrate congruence and to explore the symmetries of figures.
Uses dilations (expansions and contractions) to illustrate similar figures and proportionality.
Uses symmetry to describe tessellations and shows how they can be used to illustrate geometric concepts, properties, and relationships.
Applies concepts and properties of slope, midpoint, parallelism, and distance in the coordinate plane to explore properties of geometric figures and solve problems.
Applies transformations in the coordinate plane.
Uses the unit circle in the coordinate plane to explore properties of trigonometric functions.
DOMAIN IV - PROBABILITY AND STATISTICS (Module Five)
Competency 012
The teacher understands how to use graphical and numerical techniques to explore data, characterize patterns, and describe departures from patterns.
Organizes and displays data in a variety of formats (e.g., tables, frequency distributions, stem-and-leaf plots, box-and-whisker plots, histograms, pie charts).
Applies concepts of center, spread, shape, and skewness to describe a data distribution.
Supports arguments, makes predictions, and draws conclusions using summary statistics and graphs to analyze and interpret one-variable data.
Demonstrates an understanding of measures of central tendency (e.g., mean, median, mode) and dispersion (e.g., range, interquartile range, variance, standard deviation).
Analyzes connections among concepts of center and spread, data clusters and gaps, data outliers, and measures of central tendency and dispersion.
Calculates and interprets percentiles and quartiles.
Competency 013
The teacher understands the theory of probability.
Explores concepts of probability through data collection, experiments, and simulations.
Uses the concepts and principles of probability to describe the outcome of simple and compound events.
Generates, simulates, and uses probability models to represent a situation.
Determines probabilities by constructing sample spaces to model situations.
Solves a variety of probability problems using combinations, permutations, and geometric probability (i.e., probability as the ratio of two areas).
Uses the binomial, geometric, and normal distributions to solve problems.
Competency 014
The teacher understands the relationship among probability theory, sampling, and statistical inference, and how statistical inference is used in making and evaluating predictions.
Applies knowledge of designing, conducting, analyzing, and interpreting statistical experiments to investigate real-world problems.
Demonstrates an understanding of random samples, sample statistics, and the relationship between sample size and confidence intervals.
Applies knowledge of the use of probability to make observations and draw conclusions from single variable data and to describe the level of confidence in the conclusion.
Makes inferences about a population using binomial, normal, and geometric distributions.
Demonstrates an understanding of the use of techniques such as scatter plots, regression lines, correlation coefficients, and residual analysis to explore bivariate data and to make and evaluate predictions.
moment generating functions for independent var.
November 14th 2009, 01:46 PM
let X~N(mu,sigma^2),Y~Gamma(alpha,Beta) . X and Y are independent , the moment generating function is e^(ut + sigma^2*t^2)/2 for X and (1-Beta)^-alpha for Y. Find mgf of X+Y....I know if you have
two indp. var. you multiply them to get the answer, but what form do you leave it in. My answer is e^(ut + sigma^2*t^2)/2 ((1-Beta)^-alpha . Can this answer be broken down anymore?
2) Also how do you compute the mgf for 3X? Is it just 3(e^(ut + sigma^2*t^2)/2) or do you differentiate 3 times?
3) last question computing mgf for these types of problems 2+ Y or 2 + 2X + 3Y. Is it just 2 + (1-Beta)^ -alpha and 2 + (3X)*(3Y)?
I just need to make sure I understand how to compute the mgf.
November 15th 2009, 12:22 AM
let X~N(mu,sigma^2),Y~Gamma(alpha,Beta) . X and Y are independent , the moment generating function is e^(ut + sigma^2*t^2)/2 for X and (1-Beta)^-alpha for Y. Find mgf of X+Y....I know if you have
two indp. var. you multiply them to get the answer, but what form do you leave it in. My answer is e^(ut + sigma^2*t^2)/2 ((1-Beta)^-alpha . Can this answer be broken down anymore?
It's rather
the moment generating function is e^(ut + sigma^2*t^2/2) for X
(and note that the Gamma mgf should carry the t as well: (1-Beta*t)^-alpha).
2) Also how do you compute the mgf for 3X? Is it just 3(e^(ut + sigma^2*t^2)/2) or do you differentiate 3 times?
The definition of the mgf is $E(e^{Xt})$
So the mgf of 3X is $E(e^{3Xt})=E(e^{X(3t)})$, which is the mgf of X taken at the point 3t.
3) last question computing mgf for these types of problems 2+ Y or 2 + 2X + 3Y. Is it just 2 + (1-Beta)^ -alpha and 2 + (3X)*(3Y)?
I just need to make sure I understand how to compute the mgf.
Same method as question 2) : write it in the form of the exponential.
Try it (Wink)
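A quick numerical sanity check of the product rule discussed above (not from the original thread; NumPy assumed, arbitrary parameter values, and note that NumPy's gamma sampler uses the scale parameterization, matching the mgf (1 - Beta*t)^-alpha for t < 1/Beta):

# Monte Carlo check that E[e^{t(X+Y)}] = M_X(t) * M_Y(t) for independent
# X ~ N(mu, sigma^2) and Y ~ Gamma(alpha, Beta).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
alpha, beta = 3.0, 0.5
t = 0.4                            # must satisfy t < 1/beta for the Gamma mgf

x = rng.normal(mu, sigma, 1_000_000)
y = rng.gamma(alpha, beta, 1_000_000)

mc = np.mean(np.exp(t * (x + y)))  # simulated mgf of X + Y at the point t
closed = np.exp(mu * t + sigma**2 * t**2 / 2) * (1 - beta * t) ** (-alpha)
print(mc, closed)                  # the two values should agree closely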
Lambda Calculus Models and Extensionality
Results 1 - 10 of 27
- Theoretical Computer Science , 1997
"... . The variety (equational class) of lambda abstraction algebras was introduced to algebraize the untyped lambda calculus in the same way Boolean algebras algebraize the classical propositional
calculus. The equational theory of lambda abstraction algebras is intended as an alternative to combinatory ..."
Cited by 20 (11 self)
. The variety (equational class) of lambda abstraction algebras was introduced to algebraize the untyped lambda calculus in the same way Boolean algebras algebraize the classical propositional
calculus. The equational theory of lambda abstraction algebras is intended as an alternative to combinatory logic in this regard since it is a first-order algebraic description of lambda calculus,
which allows to keep the lambda notation and hence all the functional intuitions. In this paper we show that the lattice of the subvarieties of lambda abstraction algebras is isomorphic to the
lattice of lambda theories of the lambda calculus; for every variety of lambda abstraction algebras there exists exactly one lambda theory whose term algebra generates the variety. For example, the
variety generated by the term algebra of the minimal lambda theory is the variety of all lambda abstraction algebras. This result is applied to obtain a generalization of the genericity lemma of
finitary lambda calculus...
- IN: COMPUTER SCIENCE LOGIC. VOLUME 4646 OF LECTURE NOTES IN COMPUTER SCIENCE , 2007
"... Models of the untyped λ-calculus may be defined either as applicative structures satisfying a bunch of first order axioms, known as “λ-models”, or as (structures arising from) any reflexive
object in a cartesian closed category (ccc, for brevity). These notions are tightly linked in the sense that: ..."
Cited by 19 (9 self)
Models of the untyped λ-calculus may be defined either as applicative structures satisfying a bunch of first order axioms, known as "λ-models", or as (structures arising from) any reflexive object in a cartesian closed category (ccc, for brevity). These notions are tightly linked in the sense that: given a λ-model A, one may define a ccc in which A (the carrier set) is a reflexive object; conversely, if U is a reflexive object in a ccc C, having enough points, then C(1, U) may be turned into a λ-model. It is well known that, if C does not have enough points, then the applicative structure C(1, U) is not a λ-model in general. This paper: (i) shows that this mismatch can be avoided by choosing appropriately the carrier set of the λ-model associated with U; (ii) provides an example of an extensional reflexive object D in a ccc without enough points: the Kleisli-category of the comonad "finite multisets" on Rel; (iii) presents some algebraic properties of the λ-model associated with D by (i) which make it suitable for dealing with non-deterministic extensions of the untyped λ-calculus.
- Journal of Logic and Computation , 2004
"... Lambda theories are equational extensions of the untyped lambda calculus that are closed under derivation. The set of lambda theories is naturally equipped with a structure of complete lattice,
where the meet of a family of lambda theories is their intersection, and the join is the least lambda theo ..."
Cited by 19 (11 self)
Lambda theories are equational extensions of the untyped lambda calculus that are closed under derivation. The set of lambda theories is naturally equipped with a structure of complete lattice, where
the meet of a family of lambda theories is their intersection, and the join is the least lambda theory containing their union. In this paper we study the structure of the lattice of lambda theories
by universal algebraic methods. We show that nontrivial quasi-identities in the language of lattices hold in the lattice of lambda theories, while every nontrivial lattice identity fails in the
lattice of lambda theories if the language of lambda calculus is enriched by a suitable finite number of constants. We also show that there exists a sublattice of the lattice of lambda theories which
satisfies: (i) a restricted form of distributivity, called meet semidistributivity; and (ii) a nontrivial identity in the language of lattices enriched by the relative product of binary relations.
- In Proc. of the 18 th International Symposium on Mathematical Foundations of Computer Science, MFCS'93, Springer LNCS , 1993
"... ion Problem for Restricted Lambda Calculi Furio Honsell and Marina Lenisa Dipartimento di Matematica e Informatica, Universit`a di Udine, via Zanon,6 - Italy
E-mail:fhonsell,lenisag@udmi5400.cineca.it dedicated to Corrado Bohm on the occasion of his 70 th birthday Abstract. Issues in the mathemat ..."
Cited by 14 (1 self)
Abstraction Problem for Restricted Lambda Calculi. Furio Honsell and Marina Lenisa, Dipartimento di Matematica e Informatica, Università di Udine, via Zanon 6, Italy. E-mail: {honsell,lenisa}@udmi5400.cineca.it. Dedicated to Corrado Böhm on the occasion of his 70th birthday. Abstract. Issues in the mathematical semantics of two restrictions of the λ-calculus, i.e. the λI-calculus and the λV-calculus, are discussed. A fully abstract model for the natural evaluation of the former is defined using complete partial orders and strict Scott-continuous functions. A correct, albeit non-fully abstract, model for the SECD evaluation of the latter is defined using Girard's coherence spaces and stable functions. These results are used to illustrate the interest of the analysis of the fine structure of mathematical models of programming languages. 1 Introduction. D. Scott, in the late sixties, discovered a truly mathematical semantics for λ-calculus using continuous lattices. His D∞ model was the first example of ...
- Intersection Types and Related Systems, volume 70 of Electronic Notes in Computer Science , 2002
"... Dipartimento di Informatica Universit`a di Venezia ..."
, 1996
"... The distinction between the conjunctive nature of non-determinism as opposed to the disjunctive character of parallelism constitutes the motivation and the starting point of the present work.
λ-calculus is extended with both a non-deterministic choice and a parallel operator; a notion of reduction i ..."
Cited by 12 (6 self)
The distinction between the conjunctive nature of non-determinism as opposed to the disjunctive character of parallelism constitutes the motivation and the starting point of the present work.
λ-calculus is extended with both a non-deterministic choice and a parallel operator; a notion of reduction is introduced, extending β-reduction of the classical calculus. We study type assignment
systems for this calculus, together with a denotational semantics which is initially defined constructing a set semimodel via simple types. We enrich the type system with intersection and union
types, dually reflecting the disjunctive and conjunctive behaviour of the operators, and we build a filter model. The theory of this model is compared both with a Morris-style operational semantics
and with a semantics based on a notion of capabilities.
- ACM TOCL , 2000
"... M. DEZANI-CIANCAGLINI Universita di Torino, Italy F. HONSELL Universita di Udine, Italy F. ALESSI Universita di Udine, Italy Abstract We characterize those intersection-type theories which yield
complete intersection-type assignment systems for l-calculi, with respect to the three canonical ..."
Cited by 12 (5 self)
M. DEZANI-CIANCAGLINI, Università di Torino, Italy; F. HONSELL, Università di Udine, Italy; F. ALESSI, Università di Udine, Italy. Abstract: We characterize those intersection-type theories which yield complete intersection-type assignment systems for λ-calculi, with respect to the three canonical set-theoretical semantics for intersection-types: the inference semantics, the simple semantics and the F-semantics. Keywords: Lambda Calculus, Intersection Types, Semantic Completeness, Filter Structures. 1 Introduction. Intersection-types disciplines originated in [6] to overcome the limitations of Curry's type assignment system and to provide a characterization of strongly normalizing terms of the λ-calculus. But very early on, the issue of completeness became crucial. Intersection-type theories and filter λ-models have been introduced, in [5], precisely to achieve the completeness for the type assignment system λ∩^BCD_Ω, with respect to Scott's simple semantics. And this result,
- In Proc. Ninth International Conference on the Mathematical Foundations of Programming Semantics (MFPS'93 , 1993
"... . In [Mil90] Milner examines the encoding of the -calculus into the ß-calculus [MPW92]. The former is the universally accepted basis for computations with functions, the latter aims at being its
counterpart for computations with processes. The primary goal of this paper is to continue the study of M ..."
Cited by 11 (1 self)
. In [Mil90] Milner examines the encoding of the λ-calculus into the π-calculus [MPW92]. The former is the universally accepted basis for computations with functions, the latter aims at being its counterpart for computations with processes. The primary goal of this paper is to continue the study of Milner's encodings. We focus mainly on the lazy λ-calculus [Abr87]. We show that its encoding gives rise to a λ-model, in which a weak form of extensionality holds. However the model is not fully abstract: To obtain full abstraction, we examine both the restrictive approach, in which the semantic domain of processes is cut down, and the expansive approach, in which the λ-calculus is enriched with constants to obtain a direct characterisation of the equivalence on λ-terms induced, via the encoding, by the behavioural equivalence adopted on the processes. Our results are derived exploiting an intermediate representation of Milner's encodings into the Higher-Order π-calculus, an ω-order extension of ...
, 2005
"... Invariance of interpretation by #-conversion is one of the minimal requirements for any standard model for the #-calculus. With the intersection type systems being a general framework for the
study of semantic domains for the #-calculus, the present paper provides a (syntactic) characterisation of t ..."
Cited by 11 (1 self)
Invariance of interpretation by β-conversion is one of the minimal requirements for any standard model for the λ-calculus. With the intersection type systems being a general framework for the study of semantic domains for the λ-calculus, the present paper provides a (syntactic) characterisation of the above mentioned requirement in terms of characterisation results for intersection type assignment systems.
- IN CAAP '92, VOLUME 581 OF LNCS , 1992
"... This paper studies the interplay between functional application and nondeterministic choice in the context of untyped λ-calculus. We introduce an operational semantics which is based on the idea
of must preorder, coming from the theory of process algebras. To characterize this relation, we build a ..."
Cited by 11 (1 self)
This paper studies the interplay between functional application and nondeterministic choice in the context of untyped λ-calculus. We introduce an operational semantics which is based on the idea of
must preorder, coming from the theory of process algebras. To characterize this relation, we build a model using the classical inverse limit construction, and we prove it fully abstract using a
generalization of Böhm trees.
Is maths REALLY required for programming?
07-09-2003 #1
Is maths REALLY required for programming?
Do programmers need to be excellent at maths?
I dont really enjoy maths, but I think I like programming, tho I've only been exposed to it for a few weeks.
Is there hope for me?
Last edited by FloatingPoint; 07-09-2003 at 10:11 AM.
You don't have to be excellent. Hell, for some of it, you don't have to be good at all. There are portions of programming where math is totally required, though. It just depends on what you're
trying to program.
Just what I wanted to hear!
It helps if you can count, you know: 1,2,3,4,5,6,7,8,9,10. Then you have to relearn how to count: 0,1,2,3,4,5,6,7,8,9. All set after that.
You don't need to be excellent at mathematics, but it does help a bit if you understand mathematics a bit. Then it will probably be a bit easier understand specific programming things. Algorithms
for processing sound or graphics, for example, often require understanding of the underlying math to make optimal use of those algorithms. It also depends on what you want to program, which areas
of mathematics you will need.
Originally posted by Casey
It helps if you can count, you know: 1,2,3,4,5,6,7,8,9,10. Then you have to relearn how to count: 0,1,2,3,4,5,6,7,8,9. All set after that.
then you have to relearn to count
all set after that...
You forgot 1, 10, 11, 100, 101...
"Think not but that I know these things; or think
I know them not: not therefore am I short
Of knowing what I ought."
-John Milton, Paradise Regained (1671)
"Work hard and it might happen."
or maybe 0 1 2 3 4 5 6 7 10
Originally posted by Shiro
You don't need to be excellent at mathematics, but it does help a bit if you understand mathematics a bit. Then it will probably be a bit easier understand specific programming things. Algorithms
for processing sound or graphics, for example, often require understanding of the underlying math to make optimal use of those algorithms. It also depends on what you want to program, which areas
of mathematics you will need.
There's always the text book for reference if I dont wanna have to remember all those equations, and here's hoping I wont have to memorize them, just like in physics, chemistry etc., hell how one
is supposed to remember everything?
Well, I guess I shouldnt be too worried abt it now unless somebody there wants me to count in hex!
Do programmers need to be excellent at maths?
I think, they should......
It's sucks... when you write code and you u don't understand mathematics and physics......
you work in a company.... and u have to write an algorithm for your prog.... and u get a mathematical formula.... and you dunno anything....
So, it's better to know the most things, you need in mathematics and in physics...
But, for a job, like webdesigner you need no mathematics......
It's sucks... when you write code and you u don't understand mathematics and physics......
I guess here arises the slight difference b/ween merely understanding and memorizing those formulas.
For anyone who'd been to college and at least achieved a credit for any maths paper, understanding a revisited formula wouldnt much pose a problem.
It's the "remembering most if not all of those formulas" that's been bothering me, suffice it to say that I'm not that "excellent" at maths. But I'm certain that I can get at least a C+ for my
math papers, given a lil extra push
Maths is not really about memorising, IMO its about comprehension and application.
I think you'll do fine. Just determine which of the following 10 categories you fit into:
- Those who understand binary
- Those who don't
Yea, I got your point
Actually that was just a joke I saw inside the Tanner Building at BYU. But seriously, if you can do simple arithmetic, and do the same in other numerical systems, you'll probably be better than
most programmers. My book for BASIC (unnecessary, I know) uses a whole bunch of trig, but I've never found a practical use for it.
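As a small aside on the counting jokes in this thread (not from the original posts), a few lines of Python render the same integers in the bases mentioned (binary, octal and hexadecimal):

# Print 0..10 in the bases joked about above: binary, octal and hexadecimal.
for n in range(11):
    print(f"{n:>2}  bin={n:b}  oct={n:o}  hex={n:x}")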
A redshift is a shift in the frequency of a photon toward lower energy, or longer wavelength. The redshift is defined as the change in the wavelength of the light divided by the rest wavelength of
the light, as
z = (Observed wavelength - Rest wavelength)/(Rest wavelength)
Note that positive values of z correspond to increased wavelengths (redshifts).
Different types of redshifts have different causes.
The Doppler Redshift results from the relative motion of the light emitting object and the observer. If the source of light is moving away from you then the wavelength of the light is stretched out,
i.e., the light is shifted towards the red. These effects, individually called the blueshift and the redshift, are together known as Doppler shifts. The shift in the wavelength is given by a simple formula:
(Observed wavelength - Rest wavelength)/(Rest wavelength) = (v/c)
so long as the velocity v is much less than the speed of light. A relativistic doppler formula is required when velocity is comparable to the speed of light.
The Cosmological Redshift is a redshift caused by the expansion of space. The wavelength of light increases as it traverses the expanding universe between its point of emission and its point of
detection by the same amount that space has expanded during the crossing time.
The Gravitational Redshift is a shift in the frequency of a photon to lower energy as it climbs out of a gravitational field.
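As a small illustration of these definitions (not part of the original glossary), the redshift and the classical Doppler velocity it implies can be computed directly; the observed wavelength below is a made-up value for the H-alpha line:

# Redshift z from observed and rest wavelengths, and the classical (v << c)
# Doppler velocity implied by it. The observed wavelength is illustrative.
C = 299_792_458.0  # speed of light in m/s

def redshift(observed_nm, rest_nm):
    return (observed_nm - rest_nm) / rest_nm

z = redshift(observed_nm=660.0, rest_nm=656.3)  # H-alpha rest wavelength in nm
v = z * C                                       # valid only when v << c
print(f"z = {z:.5f}, v = {v / 1000:.0f} km/s")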
Hippopede Coffman Deposit #12
1. The implicit equation in the xy-plane is
(x^2 + y^2)^2 = 4ab x^2 + 4b(a - b) y^2,
where a and b are positive constants. This is a curve with reflectional symmetry about both the horizontal and vertical axes.
2. Any Hippopede is the intersection of a torus with one of its tangent planes that is parallel to its axis of rotational symmetry, as demonstrated in the animation.
3. Converting the implicit equation to polar coordinates gives
r^2 (r^2 - 4b(a - b sin^2 θ)) = 0,
so the origin at r = 0 is a solution, and the remainder of the curve is given by
r^2 = 4b(a - b sin^2 θ).
4. If 0 < b< a, the point at the origin is an isolated node and the balance of the solution set is a simple closed curve, also called an Elliptic Lemniscate of Booth.
The b < a special case admits a rational parametrization. Each of these curves is the image of an ellipse under an inversion in a circle; the circle and the ellipse must have the same center. The curve is non-convex for b < a < 2b, and convex for a > 2b, when the shape is an oval but not a true ellipse.
5. If a = b, the quartic implicit equation factors into two quadratics; thus, the curve is the union of two circles, centered at ( - b, 0 ) and ( b, 0 ), each of radius b, and mutually tangent at the origin.
6. If 0 < a < b, the curve intersects itself at the origin, and it is also called a Hyperbolic Lemniscate of Booth. This case also admits a rational parametrization.
Each of these curves is the image of a hyperbola under an inversion in a circle. The circle and the hyperbola must have the same center.
The animation visually "slices" the torus, showing the plane lemniscate evolve into a pair of circles and then into the oval shape.
La Canada, CA Math Tutor
Find a La Canada, CA Math Tutor
I am a certified teacher who now works exclusively as a private tutor. I have taught Biology, Chemistry, and Physics (including Honors and A.P. classes). I have been working professionally as a
teacher and tutor for over ten years. In the past year I have tutored students from the following school...
24 Subjects: including prealgebra, ACT Math, algebra 1, algebra 2
...I'm a senior at Caltech, where I study mathematics. My interests range from the sciences to humanities, focusing especially on math, physics, and art history. I am currently focusing on
research, and planning to attend a graduate program for a PhD in art history.
51 Subjects: including logic, English, trigonometry, prealgebra
...This includes genres such as, symphonies, operas, musicals, computer, monastic and chamber music. As a recent graduate from Pepperdine University with a degree in Applied Music, I feel that I
have gained necessary skills to teach flute. I have been playing since I was in fifth grade and since then have won numerous competitions and been in youth orchestras across Southern California.
8 Subjects: including prealgebra, algebra 1, elementary (k-6th), general music
I am a PhD in physics tutoring in physics, chemistry and mathematics at all schooling levels from grade school through grad school. My technical expertise is in the fields of optical physics,
laser science and chemical physics. I can also tutor college and grad school application writing and preparation.
22 Subjects: including calculus, chemistry, linear algebra, Macintosh
Dear Student, I consider myself a "student of life." I am excited about the possibilities of helping you teach yourself more about the subjects you are interested in. It is my belief that we all teach ourselves, and sometimes that process takes longer than we would like. As a teacher, I hope to present some ideas and concepts for you to consider that might help enhance your learning.
40 Subjects: including calculus, chemistry, piano, probability
Related La Canada, CA Tutors
La Canada, CA Accounting Tutors
La Canada, CA ACT Tutors
La Canada, CA Algebra Tutors
La Canada, CA Algebra 2 Tutors
La Canada, CA Calculus Tutors
La Canada, CA Geometry Tutors
La Canada, CA Math Tutors
La Canada, CA Prealgebra Tutors
La Canada, CA Precalculus Tutors
La Canada, CA SAT Tutors
La Canada, CA SAT Math Tutors
La Canada, CA Science Tutors
La Canada, CA Statistics Tutors
La Canada, CA Trigonometry Tutors
Nearby Cities With Math Tutor
Century City, CA Math Tutors
Flint, CA Math Tutors
Flintridge, CA Math Tutors
Glendale Galleria, CA Math Tutors
Hansen Hills, CA Math Tutors
La Canada Flintridge Math Tutors
La Tuna Canyon, CA Math Tutors
Magnolia Park, CA Math Tutors
Montrose, CA Math Tutors
Playa, CA Math Tutors
Rancho La Tuna Canyon, CA Math Tutors
Santa Western, CA Math Tutors
Sherman Village, CA Math Tutors
West Toluca Lake, CA Math Tutors
Westwood, LA Math Tutors
Lecture to Accompany Tax Experiment
The lecture material is developed on the assumption that students have previously been introduced to step function market supply and demand functions that correspond to goods traded in discrete units
(as in everyday life).
Textbook derivations of key results on the theory of taxation are developed using linear supply and demand graphs. Parallel derivations of some of the results are developed using demand and supply
step functions.
• The liability to pay a tax falls on the economic agents who are legally liable to pay the tax to the government.
• The incidence of a tax falls on the economic agents whose real incomes are reduced by the tax.
• The deadweight loss or excess burden of a tax is the amount by which the economic agents' loss in real income due to the tax exceeds the tax revenue.
Central Results in the Theory of Taxes
Given the simplifying assumption of zero transactions costs, one can show theoretically that:
• Except for limiting special cases, a tax has a non-zero deadweight loss.
• The incidence (on buyers and sellers) of a tax is independent of whether the liability to pay the tax is on buyers or sellers.
• The relative incidence (on buyers and sellers) of a tax is determined by relative price elasticities of demand and supply.
Market Equilibrium with No Tax
Define the market demand function as
Qd(P) = a - bP for a > 0 and b > 0,
and the inverse demand function as
Pd(Q) = (a - Q)/b.
Define the market supply function as
Qs(P) = c + dP for c >= 0 and d > 0,
and the inverse supply function as
Ps(Q) = (Q - c)/d.
In the absence of taxes and subsidies, the equilibrium (market clearing) condition can be written as
Qd(P) = Qs(P),
or as
Pd(Q) = Ps(Q).
Figure 1. Equilibrium with Linear Demand and Supply and No Tax
Figure 2. Equilibrium with Step Function Demand and Supply and No Tax
Market Equilibrium with a Per Unit Tax
Now assume that a tax of t per unit is imposed on the commodity. In the presence of the tax, the total amount per unit that buyers must pay (the demand price) exceeds the total amount per unit that sellers get to keep (the supply price) by the amount of the tax per unit:
Pd = Ps + t.
With tax per unit of t, the market clearing condition can be written as
Qd(Ps + t) = Qs(Ps),
or as
Pd(Q) = Ps(Q) + t,
which can be rewritten as
a - b(Ps + t) = c + dPs,
or as
(a - Q)/b = (Q - c)/d + t.
Figure 3. Equilibrium with Linear Demand and Supply and a Per Unit Tax
Figure 4. Equilibrium with Step Function Demand and Supply and a Per Unit Tax
Deadweight Loss of a Tax
Except in limiting special cases, a tax imposes a deadweight loss or excess burden on buyers and sellers. The deadweight loss is the amount by which the reduction in buyers' surplus and sellers'
surplus exceeds the tax revenue.
Figure 5. Deadweight Loss with Linear Demand and Supply and a Per Unit Tax
Figure 6. Deadweight Loss with Step Function Demand and Supply and a Per Unit Tax
Equal Incidence of a Tax with Symmetric Demand and Supply
Regardless of whether the liability to pay a tax falls on buyers or on sellers, the incidence of the tax falls on both sides of the market.
The most direct way to see this central result in tax theory is to observe that the following diagram is the same whether the liability to pay the tax falls on buyers or sellers.
The only thing that is relevant is that the tax drives a wedge between the total amount per unit that buyers must pay (the demand price) and the total amount per unit that sellers get to keep (the
supply price) in the amount of the tax per unit:
Figure 7. BL is the Buyers' Loss from the tax and SL is the Sellers' Loss. In this case of symmetric demand and supply, BL = SL regardless of whether the tax liability falls on buyers or sellers.
Unequal Incidence of a Tax with Asymmetric Demand and Supply
We have observed that the incidence of a tax is independent of whether the liability to pay the tax is on buyers or sellers.
If tax liability doesn't determine tax incidence then what does? Relative price elasticities of demand and supply determine tax incidence, as we shall now explain.
Figure 8. If demand is less elastic than supply then tax incidence falls more on buyers than on sellers.
Figure 9. If supply is less elastic than demand then tax incidence falls more on sellers than on buyers.
Figure 10. If supply is perfectly inelastic then tax incidence falls entirely on sellers.
Figure 11. If supply is perfectly elastic then tax incidence falls entirely on buyers.
Figure 12. If demand is perfectly inelastic then tax incidence falls entirely on buyers.
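A small numerical companion to the linear case (not part of the original lecture; the coefficient values are arbitrary, and the functional forms follow the linear demand and supply defined earlier):

# Equilibrium, tax incidence, and deadweight loss for linear demand and supply,
# Qd(P) = a - bP and Qs(P) = c + dP, with a per-unit tax t.
a, b = 100.0, 2.0                 # demand intercept and slope (illustrative)
c, d = 10.0, 1.0                  # supply intercept and slope (illustrative)
t = 6.0                           # per-unit tax

p0 = (a - c) / (b + d)            # no-tax price solving a - bP = c + dP
q0 = a - b * p0                   # no-tax quantity

ps = (a - c - b * t) / (b + d)    # supply price with the tax
pd = ps + t                       # demand price with the tax
q1 = c + d * ps                   # quantity traded with the tax

buyers_share = (pd - p0) / t      # = d/(b+d): the less elastic side bears more
sellers_share = (p0 - ps) / t     # = b/(b+d)
revenue = t * q1
dwl = 0.5 * t * (q0 - q1)         # triangle between the demand and supply curves

print(p0, q0, pd, ps, q1, buyers_share, sellers_share, revenue, dwl)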
Mission Viejo SAT Math Tutor
Find a Mission Viejo SAT Math Tutor
...The first reason is that they lack the skills required to proceed to the next topic, and this starts to frustrate them. The second reason is statistical: to raise their average grade, they need to earn extremely high grades, which seems impossible to them. For example, if a student gets 65 and 70 on the first two exams, he/she needs to get 90 and 95 on the next two exams to reach an average of 80.
11 Subjects: including SAT math, calculus, statistics, geometry
...These include: City of God (St. Augustine), the Aeneid (Virgil), De Anima - On the Soul (Aristotle), the Divine Comedy (Dante), the Birds, the Clouds (both by Aristophanes), the Elements (Euclid), and the Histories (Herodotus). My philosophy of learning is that people learn best when they arrive at...
8 Subjects: including SAT math, reading, writing, algebra 1
...For tests or quizzes I review sample tests with the students and discuss test-taking strategy. I don't believe in studying just to pass a test (and forget the material later) but put value on
learning and understanding the material. I specialize in college level finance, accounting, statistics, and economics classes as well as high school math.
24 Subjects: including SAT math, statistics, geometry, algebra 1
...Then, following up with a systematic and timely review is the way to math mastery and success. In my opinion, the most important assets students possess are: curiosity, good daily studying
habits, and confidence. My job as a math tutor is to help them acquire these traits.
11 Subjects: including SAT math, calculus, geometry, accounting
...Precalculus is actually my favorite subject to teach. This is where all forms of math converge into a ball of extremely interesting concepts. Whereas many teachers just throw random formulas at
students, I work from the ground up, deriving each equation as it was originally discovered thousands of years ago.
36 Subjects: including SAT math, chemistry, Spanish, English
Physics Forums - View Single Post - basic math converting factors.
1. The problem statement, all variables and given/known data
use the following conversion factors to complete the calculations that follow
4.00 flam = 1.00 kern    5.00 folt = 1.00 kern
1.00 burb = 3.00 flam    1.00 sapper = 6.00 folt
a)15.2 kern equals how many flam?
b)47.65 sapper equals how many kern?
c)0.845 flam equals how many folt?
d)one burb is equivalent to how many sapper?
3. The attempt at a solution
a) 15.2 x 4.00 = 60.8 flam
c) I'm not sure where to begin for this one. I'm at work right now, but I will add my own attempt when I get back home; if anyone wants to give me a start that would be good.
d) For this one I'm pretty sure I can do it, but I have to leave work right now, so I will get back to this in about 45 minutes.
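Not part of the original post: the conversion-factor chains can be checked mechanically, for example in Python. The constants below simply restate the factors given in the problem statement.

# Conversion chains for the made-up units:
# 4 flam = 1 kern, 5 folt = 1 kern, 1 burb = 3 flam, 1 sapper = 6 folt.
FLAM_PER_KERN = 4.0
FOLT_PER_KERN = 5.0
FLAM_PER_BURB = 3.0
FOLT_PER_SAPPER = 6.0

print(15.2 * FLAM_PER_KERN)                     # (a) kern -> flam
print(47.65 * FOLT_PER_SAPPER / FOLT_PER_KERN)  # (b) sapper -> folt -> kern
print(0.845 / FLAM_PER_KERN * FOLT_PER_KERN)    # (c) flam -> kern -> folt
print(FLAM_PER_BURB / FLAM_PER_KERN * FOLT_PER_KERN / FOLT_PER_SAPPER)  # (d) burb -> sapper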
Wolfram Demonstrations Project
Discus Flight
Because coefficients of lift and drag for a discus are available from wind tunnel experiments, you can model the flight of a discus where the wind varies. As is well known to throwers, a properly
thrown discus will go farther against the wind than with the wind. This Demonstration lets you vary the wind and the two angles controlled by the thrower: the release angle (the angle between the
initial velocity and the horizontal) and the inclination angle of the discus. You can set the angles to be the optimal angles for the given wind. Then, using the best angles and varying the wind,
you see that a headwind can yield a longer throw than a tailwind.
The first snapshot shows an optimally thrown discus with a 10-meter-per-second tailwind; the second is the same with a 10-meter-per-second headwind. The discus goes much farther with the headwind.
The last snapshot shows how a large headwind and some strange throwing angles can cause the discus to go backward, a phenomenon that will be familiar to frisbee throwers. More details on the
equations needed to model the flight of the discus, as well as a reference to the wind tunnel experiments, can be found in Chapter 13 of the book by A. Slavik, S. Wagon, and D. Schwalbe, 2nd ed.
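A minimal two-dimensional point-mass sketch of this kind of model (not the Demonstration's actual code): constant drag and lift coefficients stand in for the angle-dependent wind-tunnel data, every parameter value is illustrative, and NumPy plus SciPy are assumed.

# Discus-like flight with quadratic drag and lift and a horizontal wind.
import numpy as np
from scipy.integrate import solve_ivp

g, m = 9.81, 2.0          # gravity (m/s^2) and discus mass (kg)
rho, area = 1.2, 0.038    # air density (kg/m^3) and planform area (m^2)
cd, cl = 0.06, 0.9        # assumed constant drag and lift coefficients
wind = -10.0              # headwind in m/s (a positive value is a tailwind)

def rhs(time, s):
    x, y, vx, vy = s
    rx, ry = vx - wind, vy                     # velocity relative to the air
    speed = np.hypot(rx, ry)
    k = 0.5 * rho * area / m
    ax = -k * speed * (cd * rx + cl * ry)      # drag opposes the relative
    ay = -g - k * speed * (cd * ry - cl * rx)  # velocity; lift is perpendicular
    return [vx, vy, ax, ay]

def hit_ground(time, s):
    return s[1]
hit_ground.terminal, hit_ground.direction = True, -1

v0, release = 25.0, np.radians(35)             # release speed and angle
state0 = [0.0, 1.8, v0 * np.cos(release), v0 * np.sin(release)]
sol = solve_ivp(rhs, (0, 20), state0, events=hit_ground, max_step=0.01)
print(f"range = {sol.y[0, -1]:.1f} m")

Rerunning with wind = +10.0 lets you compare tailwind against headwind ranges, which is the comparison the Demonstration's snapshots make.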
definition homogeneous solution
Best Results From Wikipedia Yahoo Answers Youtube
From Wikipedia
Homogeneous (chemistry)
A substance that is uniform in composition is a definition of homogeneous (IPA: /həˈmɒdʒɪnəs, ˌhoʊmoʊˈdʒiːniəs/) in Chemistry. This is in contrast to a substance that is heterogeneous. The
definition of homogeneous strongly depends on the context used. In Chemistry, a homogeneous suspension of material means that when dividing the volume in half, the same amount of material is
suspended in both halves of the substance. However, it might be possible to see the particles under a microscope. In Chemistry, another homogeneous substance is air. It is equally suspended, and the
particles and gases and liquids cannot be analyzed separately or pulled apart.
Homogeneity of mixtures
In Chemistry, some mixtures are homogeneous. In other words, they have the same proportions of their components throughout any given sample. However, two homogeneous mixtures of the same pair of substances may differ widely from each other, and mixtures can be homogenized to make their composition constant. Mixtures can be characterized by being separable by mechanical means, e.g. heat, filtration, gravitational sorting, etc.
A solution is a special type of homogeneous mixture. Solutions are homogeneous because the ratio of solute to solvent remains the same throughout the solution even if homogenized from multiple sources, and stable because the solute will not settle out, no matter how long the solution sits, and it cannot be removed by a filter or a centrifuge. This type of mixture is very stable, i.e., its particles do not settle or separate. As a homogeneous mixture, a solution has one phase (liquid) although the solute and solvent can vary: for example, salt water. In chemistry, a mixture is a
substance containing two or more elements or compounds that are not chemically bound to each other but retain their own chemical and physical identities; that is, a substance which has two or more
constituent chemical substances. Mixtures, in the broader sense, are two or more substances physically in the same place, but these are not chemically combined, and therefore ratios are not
necessarily considered.
Homogeneous coordinates
In mathematics, homogeneous coordinates, introduced by August Ferdinand Möbius in his 1827 work Der barycentrische Calcül, are a system of coordinates used in projective geometry much as Cartesian
coordinates are used in Euclidean geometry. They have the advantage that the coordinates of a point, even those at infinity, can be represented using finite coordinates. Often formulas involving
homogeneous coordinates are simpler and more symmetric than their Cartesian counterparts. Homogeneous coordinates have a range of applications, including computer graphics and 3D computer vision,
where they allow affine transformations and, in general, projective transformations to be easily represented by a matrix.
If the homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point. An additional condition must be added on the coordinates to
ensure that only one set of coordinates corresponds to a given point, so the number of coordinates required is, in general, one more than the dimension of the projective space being considered. For
example, two homogeneous coordinates are required to specify a point on the projective line and three homogeneous coordinates are required to specify a point on the projective plane.
The projective plane can be thought of as the Euclidean plane with additional points, so called points at infinity, added. There is a point at infinity for each direction, informally defined as the
limit of a point that moves in that direction away from a fixed point. Parallel lines in the Euclidean plane are said to intersect at a point at infinity corresponding to their common direction. A
given point (x, y) on the Euclidean plane is identified with the pair of ratios (X/Z, Y/Z), so the point corresponds to the triple (X, Y, Z) = (xZ, yZ, Z) where Z ≠ 0. Such a triple is a set of homogeneous coordinates for the point (x, y). Note that, since ratios are used, multiplying the three homogeneous coordinates by a common, non-zero factor does not change the point represented – unlike Cartesian coordinates, a single point can be represented by infinitely many homogeneous coordinates.
The equation of a line through the point (a, b) may be written l(x - a) + m(y - b) = 0 where l and m are not both 0. In parametric form this can be written x = a + mt, y = b - lt. Let Z = 1/t, so the coordinates of a point on the line may be written (a + m/Z, b - l/Z). In homogeneous coordinates this becomes (aZ + m, bZ - l, Z). In the limit as t approaches infinity, in other words as the point moves away from (a, b), Z becomes 0 and the homogeneous coordinates of the point become (m, -l, 0). So (m, -l, 0) are defined as homogeneous coordinates of the point at infinity corresponding to the direction of the line l(x - a) + m(y - b) = 0.
To summarize:
• Any point in the projective plane is represented by a triple (X, Y, Z), called the homogeneous coordinates of the point, where X, Y and Z are not all 0.
• The point represented by a given set of homogeneous coordinates is unchanged if the coordinates are multiplied by a common factor.
• Conversely, two sets of homogeneous coordinates represent the same point only if one is obtained from the other by multiplying by a common factor.
• When Z is not 0 the point represented is the point (X/Z, Y/Z) in the Euclidean plane.
• When Z is 0 the point represented is a point at infinity.
Note that the triple (0, 0, 0) is omitted and does not represent any point. The origin is represented by (0, 0, 1).
Some authors use different notations for homogeneous coordinates which help distinguish them from Cartesian coordinates. The use of colons instead of commas, for example (x:y:z) instead of (x, y, z), emphasizes that the coordinates are to be considered ratios. Brackets, as in [x, y, z], emphasize that multiple sets of coordinates are associated with a single point. Some authors use a combination of colons and brackets, as in [x:y:z].
Homogeneous coordinates are not uniquely determined by a point, so a function defined on the coordinates, say f(x, y, z), does not determine a function defined on points as with Cartesian coordinates. But a condition defined on the coordinates, as might be used to describe a curve, determines a condition on points if the function is homogeneous. Specifically, suppose there is a k such that
f(\lambda x, \lambda y, \lambda z) = \lambda^k f(x,y,z).\,
If a set of coordinates (x', y', z') represents the same point as (x, y, z) then it can be written (λx, λy, λz) for some non-zero value of λ. Then
f(x,y,z)=0 \iff f(\lambda x, \lambda y, \lambda z) = \lambda^k f(x,y,z)=0.\,
A polynomial g(x, y) of degree k can be turned into a homogeneous polynomial by replacing x with x/z, y with y/z and multiplying by z^k, in other words by defining
f(x, y, z)=z^k g(x/z, y/z).\,
The resulting function f is a polynomial so it makes sense to extend its domain to triples where z = 0. The process can be reversed by setting z = 1, or
g(x, y)=f(x, y, 1).\,
The equation f(x, y, z) = 0 can then be thought of as the homogeneous form of g(x, y) = 0 and it defines the same curve when restricted to the Euclidean plane. For example, the homogeneous form of the equation of the line ax + by + c = 0 is aX + bY + cZ = 0.
Other dimensions
The discussions in the preceding sections apply analogously to projective spaces other than the plane. So the points on the projective line may be represented by pairs of coordinates (x, y), not both zero. In this case the point at infinity is (1, 0). Similarly the points in projective n-space are represented by (n + 1)-tuples.
Alternate definitions
Another definition of projective space can be given in terms of equivalence classes. For non-zero elements of R^3, define (x1, y1, z1) ~ (x2, y2, z2) to mean there is a non-zero λ so that (x1, y1, z1) = (λx2, λy2, λz2). Then ~ is an equivalence relation and the projective plane can be defined as the equivalence classes of R^3 − {0}. If (x, y, z) is one of the elements of the equivalence class p then these are taken to be homogeneous coordinates of p.
Lines in this space are defined to be sets of solutions of equations of the form ax + by + cz = 0 where not all of a, b and c are zero. The condition ax + by + cz = 0 depends only on the equivalence class of (x, y, z), so the equation defines a set of points in the projective plane. The mapping (x, y) → (x, y, 1) defines an inclusion from the Euclidean plane to the projective plane, and the complement of the image is the set of points with z = 0, the points at infinity.
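A hedged illustration (not from the article): in homogeneous coordinates the line through two points and the intersection of two lines are both cross products, which makes the bookkeeping for points at infinity automatic. NumPy is assumed.

# Join and meet in the projective plane via cross products.
import numpy as np

def line_through(p, q):        # p, q: points (X, Y, Z)
    return np.cross(p, q)      # line (a, b, c), meaning aX + bY + cZ = 0

def meet(l, m):                # l, m: lines (a, b, c)
    return np.cross(l, m)      # point (X, Y, Z); Z == 0 is a point at infinity

p = np.array([1.0, 2.0, 1.0])    # the Euclidean point (1, 2)
q = np.array([3.0, 4.0, 1.0])    # the Euclidean point (3, 4)
l1 = line_through(p, q)          # the line y = x + 1
l2 = np.array([0.0, 1.0, 0.0])   # the x-axis, y = 0

pt = meet(l1, l2)
print(pt / pt[2])                # dehomogenize: the Euclidean point (-1, 0)

Intersecting two parallel lines instead returns a triple with Z = 0, the point at infinity in their common direction.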
From Yahoo Answers
Answers:By definition a solution is a homogeneous mixture.
Question:can u please describe these 4 defeinitions in a way i can understand because i read a lot of definitions, but i still get confused when im doing word problems related to them. So please tell
me. Thank You
Answers:HOMOGENEOUS (SAME) Homogeneous mixtures are mixtures that are the same all the way throughout, such as salt water or lemonade. No matter what size a sample is, it will be exactly the same as
all other samples of the same ingredients. HETEROGENEOUS (DIFFERENT) Heterogeneous mixtures are mixtures with differences in different areas of the mixture. Chex Mix or salad demonstrate this well.
Although a salad may have tomatoes, lettuce, onions, dressing, cucumbers, carrots, etc., samples will not be of the exact composure each time the original mixture is split.
Question:explain and give examples please and name whether or not these are a heterogeneous mixture or a solution maple syrup-- [i got solution] seawater-- [i got solution] melted rocky road ice
cream-- [i got heterogeneous mixture] but i'm not sure. confused, but i am. i did my homework, but i said i wasn't sure if i got it right. i kind of know the definition of these words, but it's hard
to identify it. but thanks
Answers:Yes, you are correct. Homogeneous appears the same throughout. Hetro: you can see bits and pieces in it.
Question:Is air a solution or solvent because of the water vapours. Or a mixture of compounds. What ever it is please explain why.
Answers:You know that a solution is a homogeneous mixture and that in a homogeneous mixture, the solute is the substance that gets dissolved while the solvent is the substance that does the
dissolving (of course you knew that). Solutions may involve any of the states of matter - we can have solutions of solids in liquids (what we usually think of), solids in gases (cigarette smoke),
solids in solids (alloys), etc. Sometimes the solute can be more than the solvent, but usually the solute is the one that is present in smaller amount. Mixtures involving solids and liquids are
interesting in that some of one substance can be soluble in another, but as the amount increases, we can reach a limit of solubility. Gases are not like that. Since by definition there is very little
interaction between the particles of a gas, all gases are soluble in all other gases (they mix in all proportions or are miscible with each other). OK, enough generalities - air is a homogeneous
mixture (solution) of gases containing about 80% nitrogen (N2), 19% oxygen (O2), 1% argon (Ar) and traces of other gases (including water vapor). Since it is homogeneous, the percentage of the
usual gases is the same at the bottom of Death Valley as it is on the top of Mount Everest. Because of mixing problems, here around Los Angeles, we have some more (cough, cough) compounds dissolved
in the mixture than they have in Des Moines, Iowa, but it will all be spread or precipitated eventually.
From Youtube
A Solution: The definition of the word solution is a homogeneous mixture of two or more substances. I believe that to be the answer to the problems we face today. Finding a solution becomes the
solution to today's problems.
Copyright © University of Cambridge. All rights reserved.
'Writing Digits' printed from http://nrich.maths.org/
Lucy and Alice from Stoke By Nayland Middle School sent us their work. They explained their answer very clearly.
We wrote the numbers in consecutive order from 1 to 20 on a piece of paper.
We then counted to the 17th digit. This happened to be the 3 in the number "13". (If you count to the 17th digit of the sequence 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3, you will see that it is the 3 of "13", not the 1st 3!) So the last
number she wrote was 13.
Then Lucy and Alice went on to think about how they might apply this method to another similar problem.
For example, if you were given this problem below, using our solution above you could quickly work out the answer. Lee was writing all the counting numbers from 1 to 100. She stopped for a rest after
writing eleven digits. What was the last number she wrote? Using our solution, I found the answer to be the 0 from the number "10".
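Lucy and Alice's counting method generalizes to a tiny program. This sketch (not part of the submitted solution) returns both the n-th digit and the last number written:

# The n-th digit written when counting 1, 2, 3, ... and the number it lies in.
def nth_digit(n):
    digits = ""
    k = 0
    while len(digits) < n:
        k += 1
        digits += str(k)
    return digits[n - 1], k       # the digit itself and the last number written

print(nth_digit(17))  # ('3', 13), matching the solution above
print(nth_digit(11))  # ('0', 10)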
AP Calculus AB and BC
Next year, I'll be moving on to AP Calculus. Now from what I understand, the class is Calc AB, but there were about 3 or 4 who took BC. I really want to get into the BC class, and wondered what the
differences were.
Im assuming its based on either a choice, grades, or taking AB during summer school (because there is no other working way at my school)
Dont even try getting in to BC without taking AB. BC is a continuation with the assumption that you know and can apply everything you learned in AB.
Thats what I figured. I wonder how these kids got into BC, though.
Our school does not offer AP Calc AB in the summer, and there is no possible way for a senior to already take AB (Advanced Freshmen start with Geometry, then Algebra II, then PreCalc and Trig, then
AP Calc AB. Only Geometry and Algebra II can be taken the same year and thats for non-advanced students.)
Really strange. I'll have to ask the teacher about this. I really want to learn as much as possible.
Not true at all. I'm taking BC at this very moment and we're into integration, and this is my first year of calculus as a junior. Last year I took precalculus.
BC has a more difficult curriculum in terms of the degree of sophistication, but AB is no cakewalk either. However, they're both year-longs. I have no idea how it is done at your school, so it's more
of a question to ask your teacher.
yeah thats true, in BC i learned euler's method and going to learn more stuff; a continuing of AB.
it really depends on your teacher.... for example, last year we finished AB topics before the AP test and worked on BC topics after that.
We will finish this years BC topics by christmas and do strictly multi-variable calc for the rest of the year
In my school no one does AP Calc AB then BC...in fact we were having a discussion today about how dumb that would be since it would be redoing half of it
I agree with Junior and Encomium. I went straight from Precalculus to AP Calc BC, and actually even had a bit of overlap between those two courses. I don't see how you could stretch out the material
you only learn in BC and not AB over a full year.
maybe your BC class is different. You guys must have started on an earlier chapter or have done some sort of a review or something. We started in the middle of the textbook so we didn't go over the
basics which you guys had to. We immediately started with trig substitution integration (extremely hard stuff).
yeah, at my school, BC is a separate start-to-finish course for calculus from AB, then presumably II. BC = calc AB + calc II, though we do some extra stuff in calc II beyond what is covered in BC
At D's school its Pre-Calc/CalcA (usually junior year) and then CalcBC. A weak grade in Pre-Calc/CalcA gets you invited to take CalcAB instead. As near as I can figure, at *most*, 25 percent of those
who start with Geometry Honors in freshman year make it to CalcBC. The CalcBC teacher is amazing...in ten years she's had *one* student get lower than a 3 on the AP exam...about 50 percent get 5's.
I'm in AP Calculus BC this year and I was in Pre Calculus last year. Taking AB helps, but usually isn't necessary.
At my school the dumb kids take AB and the smart ones take BC. AP Calc is only offered senior year so you cannot take both.
uhhh, I took BC as a first year calc course and did fine, i even got a 5 on the exam. in my school AB is kind of the median between BC (motivated students) and regular Calc (unmotivated students)
I dono but my school it goes geometry-alg2-precalc/trig-calcab/bc. There is no room to make ab and bc two separate classes. After precalc, everyone goes to ab or bc. Im in bc and were learning the
same stuff as ab but faster so by second semester we will have started bc work. We are on differentiation now.
There are many ways to handle AP Calculus.
AP-Calc AB is usually considered worth 1.5 semester of college calculus and AP-Calc BC is considered worth 2 semesters. So BC covers only a bit more material than AB.
Some schools offer AB and BC separately right after Pre-Calc. Students who feel they need a slightly slower pace go into AB; those who can go at a faster pace go into BC. In other schools, AB and BC
are taught within the same class, perhaps because there are not enough teachers to teach separate classes or perhaps because there are not enough students to justify them. It is considerably harder
to have AB and BC together both for the students and the teachers. Some schools do let students take AB first then BC, supplemented by math modelling or differential equations to round out the year.
The AB and BC exams are substantially the same except for one section on the MC part and one section on the Free Response Question part. When a student takes the BC exam, his or her score will
include an AB subscore.
Whoever doesnt take AB and enters my BC class will not ace the class. In fact, nobody in my class aces it and some people in that class are the smartest people I know. You guys must have really easy
BC classes then that would start w/ the derivative then go on to integration, etc. You'd fall so behind in BC because you would already have to know stuff like volumes of revolutions for example
At my school, you must take AP Calc AB before taking AP Calc BC. The advanced students can take Algebra 1 in 8th grade, so as Freshmen they can start with Geometry. Last year for the first time there
was the option to "double up" in math by taking Algebra 2 first semester and Pre-Calculus second semester, and many of my friends and I took advantage of this, so right now we have AP Calc AB as
juniors. Most of the students who took AB though, are seniors. There was also supposed to be a normal non-AP Calc class, but it was cancelled because of scheduling conflicts. Many students dropped
the class in the first month because they found it too difficult. So far, we did a chapter of review, limits, and derivatives. Next year whatever members of the class of '05 who want to continue with
math can take Calc BC, but the class will probably have way less than 10 students. I think the calc class now has around 6 students, and they must have taken summer courses to reach that level,
because AB is a prereq for BC.
at my school, u can skip pre cal and go straight to bc calc. as of now, the class is ok. its not too hard, and not as easy as sequential or something
c ya
Yeah my BC calculus class is pretty easy, but it's well-taught. Last year we had over 60 out of 70 kids in my school get 5s on the BC AP and all of them get at least a 4. My teacher's class last year
only had one person get a 4 in the entire class. No one takes AB before BC; it's not even allowed. If you take AB Calculus, you are not allowed to take BC or go to Multivar/Linear Alg. Your only
choice is AP Stats. I don't see any reason to take AB if you think you can handle BC. Then again I go to a really good school and the teachers know how to make everything understandable. Calculus
really isn't that complicated.
So you're expected to self-teach integration, taking the derivative, volumes of revolutions, related rates, etc?
At my school, it is possible to take AB and then BC or just BC. Some people start the year in BC but then drop to AB, but because BC is so well-taught at my school, those people have a solid
foundation going down to AB. It seems to me that the theory of teaching BC is to beat everyone to death with a course that is twice as hard as the AP Test. Because of the great teaching and large
amount of work, well over 90 percent of BC students get 4's or 5's.
interesting interpretations ...
well, at my school, the (new) typical path is H Adv Alg (Alg 2 with a fancier name), H Pre-Calc, AP Calc AB or Calc BC, AP Stat/Class TBA ... yeah, our school is in the nascent stage of developing a
higher level calc course ... on the first day of BC, our teacher said, "alright, do you have any questions on the first three chapters?" (no reply) "alright, open your books to Section 4.1" ...
we still had to go over some facets of differentiation and integration but we're moving at a pretty fast pace and we'll catch up with other BC classes
Stanton's typical path:
Algebra 2 or Geometry
Geometry or Algebra 2
Pre-Calculus with Trig elements
Calculus AB or Statistics
My path:
Algebra 2
Geometry & Pre-Calculus with Trig elements
Calculus AB
Calculus BC
One senior is currently taking both Calculus BC & Calculus AB. He is taking the AB course online at a virtual school.
At my school you cannot take BC until you have already completed AB. The BC course reviews AB, teaches BC material, and then goes beyond what the AP covers. Our AB classes score about 90% 5s, and the
rest 4s, with maybe one or two threes. Our BC class always scores 99% 5s, with maybe one four.
I'm in BC having taken precalc last year. At my school, BC is two class periods, so we cover AB and BC in one year.
what do AB and BC actually stand for?
A - the first semester of college calc.
B - 2nd semester
C - 3rd semester
UD Math News Item
Professor Sebastian Cioaba joins the Editorial Board of Linear and Multilinear Algebra
Professor Cioaba has been named to the editorial board of the journal "Linear and Multilinear Algebra". This journal publishes original research papers that advance the study of linear and
multilinear algebra, or that apply the techniques of linear and multilinear algebra in other branches of mathematics and science. Linear and Multilinear Algebra also publishes research problems,
survey articles and book reviews of interest to researchers in linear and multilinear algebra. Appropriate areas include, but are not limited to: spaces over fields or rings, tensor algebras,
nonnegative matrices, inequalities in linear algebra, combinatorial matrix theory, numerical linear algebra, representation theory, Lie theory, invariant theory and operator theory. The audience for
Linear and Multilinear Algebra includes both industrial and academic mathematicians.
Article created: Jan. 28, 2013
Let's stop making excuses for Aunt Sally
Let's stop making excuses for dear Aunt Sally. You know who I mean: Please Excuse My Dear Aunt Sally. Many students can still remember the mnemonic PEMDAS for the rules of order of operations, but
PEMDAS omits or confuses some situations. Most prominently, even my current college algebra textbook does not make it clear that in the M and D of PEMDAS multiplication and division are of equal
priority, so a division is performed before a multiplication if the division appears to the left of the multiplication.
I like to invite my students to use all their calculators on expressions like 300 - 200/50*10 to see which calculators obey the rules of operations and which do not. I am always surprised when
students first turn to the calculator on their cell phone to answer this.
PEMDAS doesn't address fraction bars, nor that with multiple grouping symbols you start with the innermost grouping symbols first. And I suppose it's too much to expect that PEMDAS address that in
-3^2 the exponentiation precedes the negation, so -3^2 = -9; by the way, Excel disagrees with this.
Speaking of exponentiation, what do you do with x^y^z? Exponentiation is not associative. Johnny Lott in A Problem Solving Approach to Mathematics says exponentiation is done in order from right to
left (page 276), although Excel disagrees with this one too.
So there's more to the rules of order of operations than PEMDAS. But if you are getting a little tired of Aunt Sally, why not switch to Please Email My Dad A Shark, by the authors of xkcd.com?
Anyone have any good ideas on teaching order of operations?
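One idea, for what it's worth: let students poke at the conventions in a language that implements them. A minimal Python check (my sketch; Python happens to follow the standard rules on all three points, unlike Excel):

# M and D have equal priority and associate left to right:
print(300 - 200 / 50 * 10)   # 260.0: the division happens before the multiplication
# Exponentiation binds tighter than negation, so -3**2 means -(3**2):
print(-3**2)                 # -9, where Excel's =-3^2 gives 9
# Exponentiation is right-associative, as Lott says: x**y**z means x**(y**z):
print(2**3**2)               # 512, i.e. 2**9, not (2**3)**2 == 64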
Crossing the Parallelepiped
Another fine puzzle submitted by Bilbao!
A parallelepiped (rectangular box) measures 150 x 324 x 375 and is built of 1 x 1 x 1 cubes. A diagonal crosses the solid from one corner to the opposite corner. The diagonal passes through the
interior of how many of these cubes?
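Spoiler below, if you want to check an answer afterwards. The standard inclusion-exclusion count for an a x b x c box is a + b + c - gcd(a,b) - gcd(b,c) - gcd(a,c) + gcd(a,b,c); each gcd term corrects for the diagonal crossing a grid plane, line, or corner at the same moment. A quick Python check:

from math import gcd

def cubes_crossed(a, b, c):
    # unit cubes pierced by the space diagonal of an a x b x c box
    return a + b + c - gcd(a, b) - gcd(b, c) - gcd(a, c) + gcd(gcd(a, b), c)

print(cubes_crossed(150, 324, 375))  # 768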
Photon's Mass?
The photon interacts with charged particles (like the electrons in atoms) via QED (EM forces basically). Neutrinos only interact with particles via the weak force.
Photons do not have rest mass.
In general, it's better to read the faq if you have some basic misunderstandings. If you can't find the answer in the faq or a quick search, then post a new thread, don't bump a ~decade old thread
(holy crap PF is ~decade old!?).
Alhambra, CA Algebra 2 Tutor
Find an Alhambra, CA Algebra 2 Tutor
Hello! In 2010, I graduated from the University of Southern California (USC), Marshall School of Business with a Bachelor of Science (B.S.) in Business in just 3 years. I've had the incredible
experience of working in the field of education (my passion!) as an assistant math teacher in, a branch m...
75 Subjects: including algebra 2, reading, English, writing
...Doing well also helped tutor many of my peers each year as I moved up the mathematical ladder. I am open to tailor your needs; if you are unsatisfied with a session, it will be free of cost,
and we can discuss improvements. I look forward to hearing how I can help you improve.
16 Subjects: including algebra 2, chemistry, calculus, geometry
...In my senior year, I possessed the title of -AVID Tutor- where I aided my former AP Language and Composition teacher in teaching essay-writing and critical reading and analysis because of my
previous success in the course and 5 on the AP test. I also led the class multiple times through lectures...
22 Subjects: including algebra 2, reading, writing, English
...In addition, part of the code we use for black hole modeling at Caltech is written in Fortran. I have recently spent some weeks working on this code, improving the way it interfaces with the
rest of the software (written in C++). With all of this, I should be able to help one learn this langua...
44 Subjects: including algebra 2, reading, physics, calculus
...The software is very powerful and a joy to learn all that it can offer in the way of tools to enhance you word processing projects. I have taken seven quarters of math in college.
Understanding Prealgebra gives one a strong foundation for being successful in higher levels of math as well as managing one's financial needs in the future.
5 Subjects: including algebra 2, algebra 1, elementary math, prealgebra
Related Alhambra, CA Tutors
Alhambra, CA Accounting Tutors
Alhambra, CA ACT Tutors
Alhambra, CA Algebra Tutors
Alhambra, CA Algebra 2 Tutors
Alhambra, CA Calculus Tutors
Alhambra, CA Geometry Tutors
Alhambra, CA Math Tutors
Alhambra, CA Prealgebra Tutors
Alhambra, CA Precalculus Tutors
Alhambra, CA SAT Tutors
Alhambra, CA SAT Math Tutors
Alhambra, CA Science Tutors
Alhambra, CA Statistics Tutors
Alhambra, CA Trigonometry Tutors
PLEASE HELP :( Jocelyn Denot took out a $4,800 installment loan at 17% for 12 months for remodeling expenses. Her monthly payments were $437.78. After five payments, the balance due was $2,897.95. If
Jocelyn pays off the loan with the sixth payment, how much will she save? (Hint: First calculate the payoff: Find the interest on the balance due and add it to the balance due. Add the amount of the
first five payments to the payoff in month 6. Compare this total to the monthly payment multiplied by 12.)
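One reading of the hint, worked out in Python. It assumes the 17% is an annual rate, so one month's interest on the balance is balance * 0.17 / 12; the problem doesn't say, so treat that as an assumption:

balance = 2897.95
payment = 437.78
interest = balance * 0.17 / 12            # ~41.05, one month's interest on the balance
payoff = balance + interest               # ~2939.00, the sixth "payment"
total_paid = 5 * payment + payoff         # ~5127.90 over six months
total_12_payments = 12 * payment          # 5253.36 over the full year
print(round(total_12_payments - total_paid, 2))   # ~125.46 saved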
15 search hits
A dynamic programming approach to constrained portfolios (2012)
Holger Kraft Mogens Steffensen
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we
solve the problems by dynamic programming, which is in contrast to the existing literature that applies the martingale method. More precisely, we construct the non-separable value function by
formalizing the optimal constrained terminal wealth to be a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the
opportunity to derive new solutions to constrained problems. As a second contribution, we thus derive new results for non-strict constraints on the shortfall of intermediate wealth and/or
Asset Pricing Under Uncertainty About Shock Propagation (2013)
Nicole Branger Patrick Grüning Holger Kraft Christoph Meinerding Christian Schlag
We analyze the equilibrium in a two-tree (sector) economy with two regimes. The output of each tree is driven by a jump-diffusion process, and a downward jump in one sector of the economy can
(but need not) trigger a shift to a regime where the likelihood of future jumps is generally higher. Furthermore, the true regime is unobservable, so that the representative Epstein-Zin investor
has to extract the probability of being in a certain regime from the data. These two channels help us to match the stylized facts of countercyclical and excessive return volatilities and
correlations between sectors. Moreover, the model reproduces the predictability of stock returns in the data without generating consumption growth predictability. The uncertainty about the state
also reduces the slope of the term structure of equity. We document that heterogeneity between the two sectors with respect to shock propagation risk can lead to highly persistent aggregate
price-dividend ratios. Finally, the possibility of jumps in one sector triggering higher overall jump probabilities boosts jump risk premia while uncertainty about the regime is the reason for
sizeable diffusive risk premia.
Consumption habits and humps (2013)
Holger Kraft Claus Munk Frank Thomas Seifried Sebastian Wagner
We show that the optimal consumption of an individual over the life cycle can have the hump shape (inverted U-shape) observed empirically if the preferences of the individual exhibit internal
habit formation. In the absence of habit formation, an impatient individual would prefer a decreasing consumption path over life. However, because of habit formation, a high initial consumption
would lead to high required consumption in the future. To cover the future required consumption, wealth is set aside, but the necessary amount decreases with age which allows consumption to
increase in the early part of life. At some age, the impatience outweighs the habit concerns so that consumption starts to decrease. We derive the optimal consumption strategy in closed form,
deduce sufficient conditions for the presence of a consumption hump, and characterize the age at which the hump occurs. Numerical examples illustrate our findings. We show that our model
calibrates well to U.S. consumption data from the Consumer Expenditure Survey.
Financing asset growth (2013)
Michael J. Brennan Holger Kraft
In this paper we provide new evidence that corporate financing decisions are associated with managerial incentives to report high equity earnings. Managers rely most heavily on debt to finance
their asset growth when their future earnings prospects are poor, when they are under pressure due to past declines in earnings, negative past stock returns, and excessively optimistic analyst
earnings forecasts, and when the earnings yield is high relative to bond yields so that from an accounting perspective equity is ‘expensive’. Managers of high debt issuing firms are more likely
to be newly appointed and also more likely to be replaced in subsequent years. Abnormal returns on portfolios formed on the basis of asset growth and debt issuance are strongly positively
associated with the contemporaneous changes in returns on assets and on equity as well as with earnings surprises. This may account for the finding that debt issuance forecasts negative abnormal
returns, since debt issuance also forecasts negative changes in returns on assets and on equity and negative earnings surprises. Different mechanisms appear to be at work for firms that retire
Foundations of continuous-time recursive utility : differentiability and normalization of certainty equivalents (2009)
Holger Kraft Frank Thomas Seifried
This paper relates recursive utility in continuous time to its discrete-time origins and provides a rigorous and intuitive alternative to a heuristic approach presented in [Duffie, Epstein 1992],
who formally define recursive utility in continuous time via backward stochastic differential equations (stochastic differential utility). Furthermore, we show that the notion of Gâteaux
differentiability of certainty equivalents used in their paper has to be replaced by a different concept. Our approach allows us to address the important issue of normalization of aggregators in
non-Brownian settings. We show that normalization is always feasible if the certainty equivalent of the aggregator is of expected utility type. Conversely, we prove that in general Lévy
frameworks this is essentially also necessary, i.e. aggregators that are not of expected utility type cannot be normalized in general. Besides, for these settings we clarify the relationship of
our approach to stochastic differential utility and, finally, establish dynamic programming results. JEL Classifications: D81, D91, C61
Growth options and firm valuation (2013)
Holger Kraft Eduardo S. Schwartz Farina Weiss
This paper studies the relation between firm value and a firm's growth options. We find strong empirical evidence that (average) Tobin's Q increases with firm-level volatility. However, the
significance mainly comes from R&D firms, which have more growth options than non-R&D firms. By decomposing firm-level volatility into its systematic and unsystematic part, we also document that
only idiosyncratic volatility (ivol) has a significant effect on valuation. Second, we analyze the relation of stock returns to realized contemporaneous idiosyncratic volatility and R&D expenses.
Single sorting according to the size of idiosyncratic volatility, we only find a significant ivol anomaly for non-R&D portfolios, whereas in a four-factor model the portfolio alphas of R&D
portfolios are all positive. Double sorting on idiosyncratic volatility and R&D expenses also reveals these differences between R&D and non-R&D firms. To simultaneously control for several
explanatory variables, we also run panel regressions of portfolio alphas which confirm the relative importance of idiosyncratic volatility that is amplified by R&D expenses.
How does contagion affect general equilibrium asset prices? (2013)
Nicole Branger Holger Kraft Christoph Meinerding
This paper analyzes the equilibrium pricing implications of contagion risk in a Lucas-tree economy with recursive preferences and jumps. We introduce a new economic channel allowing for the
possibility that endowment shocks simultaneously trigger a regime shift to a bad economic state. We document that these contagious jumps have far-reaching asset pricing implications. The risk
premium for such shocks is superadditive, i.e. it is 2.5% larger than the sum of the risk premia for pure endowment shocks and regime switches. Moreover, contagion risk reduces the risk-free
rate by around 0.5%. We also derive semiclosed-form solutions for the wealth-consumption ratio and the price-dividend ratios in an economy with two Lucas trees and analyze cross-sectional
effects of contagion risk qualitatively. We find that heterogeneity among the assets with respect to contagion risk can increase risk premia disproportionately. In particular, big assets with a
large exposure to contagious shocks carry significantly higher risk premia.
Investment, income, incompleteness (2009)
Björn Bick Holger Kraft Claus Munk
The utility-maximizing consumption and investment strategy of an individual investor receiving an unspanned labor income stream seems impossible to find in closed form and very difficult to find
using numerical solution techniques. We suggest an easy procedure for finding a specific, simple, and admissible consumption and investment strategy, which is near-optimal in the sense that the
wealth-equivalent loss compared to the unknown optimal strategy is very small. We first explain and implement the strategy in a simple setting with constant interest rates, a single risky asset,
and an exogenously given income stream, but we also show that the success of the strategy is robust to changes in parameter values, to the introduction of stochastic interest rates, and to
endogenous labor supply decisions. JEL-Classification: G11
Life insurance demand under health shock risk (2014)
Holger Kraft Lorenz S. Schendel Mogens Steffensen
This paper studies the life cycle consumption-investment-insurance problem of a family. The wage earner faces the risk of a health shock that significantly increases his probability of dying. The
family can buy term life insurance with realistic features. In particular, the available contracts are long term so that decisions are sticky and can only be revised at significant costs.
Furthermore, a revision is only possible as long as the insured person is healthy. A second important and realistic feature of our model is that the labor income of the wage earner is unspanned.
We document that the combination of unspanned labor income and the stickiness of insurance decisions reduces the insurance demand significantly. This is because an income shock induces the need
to reduce the insurance coverage, since premia become less affordable. Since such a reduction is costly and families anticipate these potential costs, they buy less protection at all ages. In
particular, young families stay away from life insurance markets altogether.
Optimal housing, consumption, and investment decisions over the life-cycle (2009)
Holger Kraft Claus Munk
We provide explicit solutions to life-cycle utility maximization problems simultaneously involving dynamic decisions on investments in stocks and bonds, consumption of perishable goods, and the
rental and the ownership of residential real estate. House prices, stock prices, interest rates, and the labor income of the decision-maker follow correlated stochastic processes. The preferences
of the individual are of the Epstein-Zin recursive structure and depend on consumption of both perishable goods and housing services. The explicit consumption and investment strategies are simple
and intuitive and are thoroughly discussed and illustrated in the paper. For a calibrated version of the model we find, among other things, that the fairly high correlation between labor income
and house prices imply much larger life-cycle variations in the desired exposure to house price risks than in the exposure to the stock and bond markets. We demonstrate that the derived
closed-form strategies are still very useful if the housing positions are only reset infrequently and if the investor is restricted from borrowing against future income. Our results suggest that
markets for REITs or other financial contracts facilitating the hedging of house price risks will lead to non-negligible but moderate improvements of welfare. JEL-Classification: G11, D14, D91,
Stoughton, MA Math Tutor
Find a Stoughton, MA Math Tutor
...Students should do the same to help them through the learning process. I've helped teacher candidates prepare for the Massachusetts Tests for Education Licensure--the MTEL--in Biology,
Chemistry, General Science, and Physics. I've also worked with new teachers to develop their course material a...
23 Subjects: including algebra 1, algebra 2, biology, calculus
...I strengthened my skills by taking classes and studying in Spain while in college. Since then, I have deepened my knowledge further through working in jobs where Spanish has been critical to
my duties, such as running elementary ELL classes in high-immigrant areas where I communicated with paren...
26 Subjects: including algebra 2, reading, precalculus, prealgebra
...During my MBA studies, I worked as a Research and Teaching Assistant for Marketing and Strategy courses. I researched material for class lectures and exams, and I graded exams. I have taken
and excelled at numerous sociology classes during my years of schooling.
67 Subjects: including precalculus, marketing, logic, geography
...Recently, I have moved to the South Shore, and would like to expand my tutoring to this area. I love working one on one with students and seeing their confidence grow and their degree of
success improve. Because I have taught so many grade levels over the years, I know what to focus on to help ...
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Hello, I am an English major with a minor in Theater Arts. I began tutoring through the National Honor Society at my high school. My grade point average after my freshman year of college is a
18 Subjects: including probability, algebra 1, SAT math, English
Related Stoughton, MA Tutors
Stoughton, MA Accounting Tutors
Stoughton, MA ACT Tutors
Stoughton, MA Algebra Tutors
Stoughton, MA Algebra 2 Tutors
Stoughton, MA Calculus Tutors
Stoughton, MA Geometry Tutors
Stoughton, MA Math Tutors
Stoughton, MA Prealgebra Tutors
Stoughton, MA Precalculus Tutors
Stoughton, MA SAT Tutors
Stoughton, MA SAT Math Tutors
Stoughton, MA Science Tutors
Stoughton, MA Statistics Tutors
Stoughton, MA Trigonometry Tutors
Nearby Cities With Math Tutor
Avon, MA Math Tutors
Braintree Math Tutors
Bridgewater, MA Math Tutors
Brockton, MA Math Tutors
Canton, MA Math Tutors
Dedham, MA Math Tutors
Easton, MA Math Tutors
Holbrook, MA Math Tutors
Mansfield, MA Math Tutors
Mattapan Math Tutors
Milton, MA Math Tutors
Norwood, MA Math Tutors
Randolph, MA Math Tutors
Sharon, MA Math Tutors
Walpole, MA Math Tutors
HCSSiM Workshop day 5
July 7, 2012
A continuation of this, where I take notes on my workshop at HCSSiM.
Modular arithmetic
We examined finite sets with addition laws and asked whether they were the "same", which for now meant their addition table looked the same except for relabeling. Of course we'd need the two sets to
have the same size, so we compared $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/4\mathbb{Z}$. We decided they weren't the same, but when we did it for
$\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$ and $\mathbb{Z}/12\mathbb{Z}$, we decided those were. We eventually decided it worked the second time because the moduli are relatively prime.
We essentially finished by proving the base case of the Chinese Remainder Theorem for two moduli, which for some ridiculous reason we are calling the Llama Remainder Theorem. Actually the reason is
that one of the junior staff (Josh Vekhter) declared my lecture to be insufficiently silly (he designated himself the “Chief Silliness Officer”) and he came up with a story about a llama herder named
Lou who kept track of his llamas by putting them first in groups of n and then in groups of m and in both cases only keeping track of the remaining left-over llamas. And then he died and his sons
were in a fight over whether someone stole some llamas and someone had to be called in to arbitrate. Plus the answer is only well-defined up to a multiple of mn, but we decided that someone in town
would have noticed if an extra mn llamas had been stolen.
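For two relatively prime moduli the Llama Remainder Theorem is easy to demonstrate by brute force; a small Python sketch (the function name is mine):

from math import gcd

def llama_remainder(rm, m, rn, n):
    # given herd % m == rm and herd % n == rn, recover the herd size mod m*n
    assert gcd(m, n) == 1, "the moduli must be relatively prime"
    for x in range(m * n):               # exactly one x in 0..m*n-1 works
        if x % m == rm and x % n == rn:
            return x

# 58 llamas counted in groups of 3 and 4 leave remainders 1 and 2:
print(llama_remainder(58 % 3, 3, 58 % 4, 4))   # 10, i.e. 58 mod 12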
After briefly discussing finite sets and their sizes, we defined two sets to have the same cardinality if there’s a bijection from one to the other. We showed this condition is reflexive, symmetric,
and transitive.
Then we stopped over at the Hilbert Hotel, where a rather silly and grumpy hotel manager at first refused to let us into his hotel even though he had infinitely many rooms, because he said all his
rooms were full. At first when we wanted to just add us, so a finite number of people, we simply told people to move down a few times and all was well. Then it got more complicated, when we had an
infinite bus of people wanting to get into the hotel, but we solved that as well by forcing everyone to move to the hotel room number which was double their first. Then finally we figured out how to
accommodate an infinite number of infinite buses.
We decided we’d proved that $\mathbb{N} \times \mathbb{N}$ has the same cardinality as $\mathbb{N}$ itself. We generalized to $\mathbb{Q}$ having the same cardinality as $\mathbb{N},$ and we decided
to call sets like that “lodgeable,” since they were reminiscent of Hilbert’s Hotel.
We ended by asking whether the real numbers are lodgeable.
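One concrete bijection behind the $\mathbb{N} \times \mathbb{N}$ claim is the usual diagonal count, room by room; a sketch, zero-indexed:

def room(bus, seat):
    # Cantor pairing: fill the hotel diagonal by diagonal
    d = bus + seat
    return d * (d + 1) // 2 + seat

rooms = {room(b, s) for b in range(50) for s in range(50)}
assert len(rooms) == 2500                        # no two passengers share a room
assert sorted(rooms)[:6] == [0, 1, 2, 3, 4, 5]   # and no gaps along full diagonals

Every passenger from the infinite fleet of infinite buses gets a distinct room.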
Complex Geometry
Here’s a motivating problem: you have an irregular hexagon inside a circle, where the alternate sides are the length of the radius. Prove the midpoints of those sides forms an equilateral triangle.
It will turn out that the circle is irrelevant, and that it’s easier to prove this if you actually Circle is entirely prove something harder.
We then explored the idea of size in the complex plane, and the operation of conjugation as reflection through the real line. Using this incredibly simple idea, plus the triangle inequality, we can
prove that the polynomial
$3z^{17} + 2iz^{12} - (1+3i)z^{10} + .017z^{5} - 17$
has no roots inside the unit circle.
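The check is one line of arithmetic: on the closed unit disk the non-constant terms total at most 3 + 2 + |1+3i| + 0.017, well short of the constant term's 17, so the polynomial can never vanish there. In Python:

bound = 3 + 2 + abs(1 + 3j) + 0.017   # triangle-inequality bound for |z| <= 1
print(bound, bound < 17)              # ~8.18 True, hence no roots with |z| <= 1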
Going back to the motivating problem. Take three arbitrary points A, B, C on the complex plane (i.e. three complex numbers), and a fourth point we will assume is at the origin. Now rotate those three
points 60 degrees counterclockwise with respect to the origin – this is equivalent to multiplying the original complex numbers by $e^{i \pi/3}$. Call these new points A', B', C'. Show that
the midpoints of the three lines AB', BC', and CA' form an equilateral triangle, and that this result also is sufficient to prove the motivating problem.
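A numerical sanity check of the harder statement (random triangles, with the 60-degree rotation as multiplication by $e^{i\pi/3}$; the sketch is mine):

import cmath, random

w = cmath.exp(1j * cmath.pi / 3)             # rotate 60 degrees counterclockwise
for _ in range(1000):
    A, B, C = (complex(random.uniform(-9, 9), random.uniform(-9, 9)) for _ in range(3))
    M1, M2, M3 = (A + w*B) / 2, (B + w*C) / 2, (C + w*A) / 2   # midpoints of AB', BC', CA'
    sides = sorted(abs(p - q) for p, q in [(M1, M2), (M2, M3), (M3, M1)])
    assert sides[2] - sides[0] < 1e-9        # all three sides agree
print("equilateral every time")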
1. July 7, 2012 at 10:07 am |
If you haven’t run into it, Kevin Wald’s filk “Banned from Aleph”, riffing on the Cantor Hotel theme, is at
(This Is Not a) Genius Test
First, it is a test. It's not a genius test, like the title says, but it is a real intelligence test, devised with the assistance of some very experienced test makers, so you'll need a pencil.
Second, please resist the impulse to ask for help. Third, make sure that you feel well and have had a little sleep. Give yourself an hour or so, and follow the easy instructions that accompany each
problem. And last, on the multiple-choice questions, it's good to guess if you can eliminate even one of the choices. (But beware: There will be a quarter of a point deducted for each wrong guess.)
Now go to it. Your IQ awaits.
1. Street is to curb as river is to ____.
A. bank
B. bed
C. dam
D. ocean
E. stream
2. Size is to grow as knowledge is to ____.
A. believe
B. decide
C. know
D. learn
E. persuade
3. Canyon is to bridge as mountain is to ____.
A. cave
B. mine
C. peak
D. ridge
E. tunnel
4. Water droplet is to air bubble as island is to ____.
A. bay
B. continent
C. crater
D. lake
E. peninsula
5. Which image best completes the grid?
6. Which image best completes the sequence?
7. Which image best completes the sequence?
8. Which image best completes the sequence?
9. Which image best completes the sequence?
10. Which figure is different from the others?
11. Which image best completes the sequence?
12. Mountain is to pyramid as river is to ____.
A. bathtub
B. canal
C. fountain
D. reservoir
E. well
13. Water is to stalactite as wind is to ____.
A. dune
B. geyser
C. skyscraper
D. stalagmite
E. volcano
14. What number best completes the sequence?
1000, 1002, 1006, 1012, 1020, ____.
15. What number best completes the sequence?
13, 57, 911, 1315, 1719, ____.
16. Which image best completes the sequence?
17. Which image best completes the sequence?
18. If the leftmost wheel rotates as indicated, then
A. Weight X will rise, and weight Y will descend.
B. Weight X will descend, and weight Y will rise.
C. Both weights will rise.
D. Both weights will descend.
E. Weight X will remain at its current position, and weight Y will rise.
19. Which one of the shapes below can be made with the pieces above?
20. One (infinite) line divides an (infinite) plane into two regions. Two distinct lines can divide a plane into three or four regions. Three distinct lines can divide a plane into how many regions?
List all possible answers in ascending order.
21. Reassemble the pieces above to form the shape shown below. What nine-letter word is formed?
22. Which image best completes the sequence?
23. The cities Weston and Easton are connected by parallel eastbound and westbound railroad tracks. Trains run twenty-four hours daily. Trains depart Weston for Easton every hour on the hour (i.e.,
at 6:00, 7:00, et cetera). Trains depart Easton for Weston every hour on the half hour (6:30, 7:30, et cetera). In either direction, the trip takes six hours. If you take a train from Weston to
Easton, how many trains do you pass on the way?
24. If market equals 7.6.8.1.5.9 and bond equals 4.10.3.2, then bank equals ____.
25. What is 1000 - 999 + 998 - 997 + . . . + 4 - 3 + 2 - 1?
26. A couple plans to have three children. If the odds of having a girl are the same as the odds of having a boy (1 in 2), then the odds of all three children being of the same sex are 1 in ____.
27. Which image best completes the sequence?
28. You have a two-inch-by-two-inch square with a flap on the top edge. Unfolding this flap reveals a second flap, and unfolding this second flap reveals a third, as shown. If there are an infinite
number of these flaps, what is the area of the entire figure (in square inches) when they're all unfolded?
29. Which image best completes the set?
30. The eighty-one islands of Amorovia abound in beautiful women. Don Juan is born on the central island and is soon banished (forced to leave and forbidden to return). He crosses a bridge to another
island but is soon banished again. Crosses a bridge, gets banished: It happens every time, until one day he has nowhere to go, having already visited each neighboring island. Sadly, he realizes that
poor planning led to his visiting as few islands as possible before being trapped like this. Including his birthplace, how many islands did he visit?
31. The following diagram is a street map. Certain locations on these streets are designated P, Q, R, and S. A man starts walking along the streets from one of these locations and makes a sequence of
turns: right, left, right. Which choice is a possible path he could take?
A. P to Q
B. R to P
C. R to Q
D. S to R
E. S to Q
32. Which image best completes the analogy?
33. Which one of the objects below could produce the above shadow?
34. How many squares can you draw on the first figure below? The corners must be on dots, and squares may overlap, as shown in the example.
35. Which figure best completes the grid?
1. A
2. D
3. E
4. D
5. A
6. B
7. A
8. D
9. A
10. C
11. C
12. B
13. A
14. 1030
15. 2123
16. E
17. B
18. D
19. D
20. 4 6 7 (graphic explanation)
21. Explosion
22. B
23. 12
24. 4.6.3.1
25. 500
26. 4
27. D
28. 6
29. C
30. 8 (graphic explanation)
31. E
32. A
33. C
34. 11 (graphic explanation)
35. B
If you got 8 answers correct, then you are the average American. That's good. If you got 17 correct, then you're probably thinking more than you need to. Be careful. If you got 23 right, there's good
news and bad news: Good news is that you should consider taking the Mensa test. Bad news is that you'll now have to hang out with those people. And if you got 31 or even more correct, we, friend, are
not smart enough to help you.
MathGroup Archive: May 2007 [00103]
[Date Index] [Thread Index] [Author Index]
Re: A program check
• To: mathgroup at smc.vnet.net
• Subject: [mg75523] Re: A program check
• From: Szabolcs <szhorvat at gmail.com>
• Date: Fri, 4 May 2007 04:11:22 -0400 (EDT)
• Organization: University of Bergen
• References: <f1c4nv$h6t$1@smc.vnet.net>
JGBoone at gmail.com wrote:
> i was wondering if someone could check this program and see if its
> running like the Description typed under each line says its suppose
> to.
What makes you think that it doesn't work as it should? It would be much
easier to answer your question if you provided a minimal example that
doesn't work as you expect it to.
Take all the commands one by one and test them. Take all the functions
and see if they produce the desired result.
> ogspecies=10000;(*set up sizes for communitysize and metacommunity*)
> communitysize= Range[ogspecies];(*makes a vector called communitysize
> with sequencal elements 1 to ogspecies*)
> metacommunity=Range[ogspecies];(*makes a vector called communitysize
> with sequencal elements 1 to ogspecies*)
> unlikeliness= 6000;(*sets up chance of speciation.bigger number means
> less likely*)
> metalikeliness=30;
> (*sets up chance of immigration. bigger number means less likely*)
> chance:=Random[Integer,{1,unlikeliness}]==1;
> (*creates a function that tests if a random integer from 1 to
> unlikeliness is equal to 1. this makes the chance of speciation equal
> to 1/unlikeliness*)
> metachance:=Random[Integer,{1,metalikeliness}]\[Equal]1;(*creates a
> function that tests if a random integer from 1 to metalikeliness is
> equal to 1. this makes the chance of speciation equal to 1/
> metalikeliness*)
> size:=Length[communitysize];(*gives size a value equal to the number
> of elements in communitysize*)
> generations=1000000;(*time steps that the function runs, which is given
> as size squared; this is the expected time to monodominance*)
> hubl[group_]:= group[[Random[Integer,{1,size}]]] ;(*takes a random
> element of group*)
> metareplace:=(communitysize[[Random[Integer,
> {1,size}]]]=hubl[metacommunity]);
> (*takes random element of community size and replaces it with random
> element of metacommunity by way of the function hubl[group_]*)
> community:= (communitysize[[Random[Integer,
> {1,size}]]]=hubl[communitysize]);
> (*takes a random element of communitysize and replaces it with another
> random element of community size by way of the function
> hubl[group_]*)
> speciation:=(hubl[communitysize]=Random[Integer,{size+1,size+100}]);
> (*creates a function that produces a random integer from 11 to 110*)
Every time you evaluate the symbol speciation, hubl gets an additional
definition depending on the value of communitysize. Maybe this is the problem.
> immigration:=If[metachance,metareplace,community];
> (*asks if metachance is true. if it is true, runs metareplace if not
> it runs community*)
> changing:=If[chance,speciation,immigration ];
> (*makes new vector by replacing random elements of communitysize by
> way of the function speciation if the function chance is true or by
> way of the function immigration if chance is false*)
> SpeciesCount[lyst_]:= Length[Split[Sort[communitysize]]]; (*counts the
> number groupings of like values there are in communitysize. it does
> this by sorting elements together then Split takes identical elements
> and puts them in sublist then Length counts the sublist*)
It is good practice to use symbol names that start with small letters.
This assures that built-ins are not redefined by accident (though
SpeciesCount is not a built-in, of course.)
> randomwalks:=Table[changing;{t,SpeciesCount[communitysize]},{t,
> 1,generations}];
> (*creates a table of the function SpeciesCount from time step 1 till
> generation*)
> lyst = randomwalks;
> averagespecies:=N[Mean[Map[Last,lyst]]];
> (*function produces the mean from SpeciesCount of communitysize over
> the
> total time steps*)
> halfaveragespecies:=N[Mean[Last/@Take[lyst,{generations/2}]]];
> (*function produces the mean from SpeciesCount of communitysize over
> half the total time steps*)
> Print[lyst,{averagespecies},
> {halfaveragespecies}];ListPlot[lyst,AxesLabel ->
> {"Generations","Number of Species"}]
> THANKS!
You do not need to terminate commands with a semicolon in Mathematica.
The semicolon suppresses the output. Only use it if you know that the
output is too long. Look at the result of every evaluation: this way you
can see if something went wrong.
Most of the time Print[] is not needed (I only use it for debugging).
Just evaluate a symbol to get its value.
Chebyshev-like polynomials with integral roots
Chebyshev polynomials have real roots and satisfy a recurrence relation. I was wondering if one can find a sequence of polynomials with integral or rational roots with similar properties. More
precisely, one is looking for a sequence of polynomials $(f_n),f_n\in\mathbf{Q}[t]$ such that
1. $\deg f_n\to\infty$ as $n\to\infty$;
2. $\sum_{n=0}^\infty f_n(t) x^n$ is (the Taylor series of) a rational function $F$ in $x$ and $t$.
3. All roots of any $f_n$ are integer and have multiplicity 1. (A weaker version: the roots are allowed to be rational and are allowed to have multiplicity $>1$ but there should be an $a>0$ such
that the number of distinct roots of $f_n$ is at least $a\deg f_n$.)
nt.number-theory ca.analysis-and-odes
2 Answers
Here's one thought. For each integer k, f_n(k) satisfies a recurrence relation. If the roots of f_n are all integers, then f_n(k) and f_n(k+1) have to be "in sync" in the sense that they never have
opposite sign. This is a strong condition! For instance, suppose the sequences f_n(k) and f_n(k+1) each have unique largest eigenvalue: then these eigenvalues would have to have the same argument.
Update: Qiaochu's answer suggests that in fact working mod p would be just better than the "archimedean" picture sketched above, since it is really F_q[t], not Z[t] or Q[t], that is analogous to the
integers. Let F_n(t) be the reduction of f_n(t) to F_p[t]. If f_n(t) has all roots rational for every n, then the reduction of f_n(t) mod p has the same property. But now we are saying something
quite strong; that f_n(t) lies in a finitely generated subgroup of F_q(t)^*! This is presumably ruled out by Mason's theorem (ABC for function fields). Indeed, you could probably prove in this way
that not only are the roots of f_n(t) not rational, but f_n(t) has irreducible factors of arbitrarily large degree.
I don't think this approach would touch a harder question along the same lines like this one.
Thanks, JSE! It looks like my initial guess was too optimistic. – algori Dec 8 '09 at 4:11
I can satisfy conditions 1, 3 and almost satisfy condition 2. Letting $f_n(t) = {t+n-1 \choose n}$ we have the well-known generating function
$\displaystyle \sum_{n \ge 0} f_n(t) x^n = \frac{1}{(1 - x)^{t}}$
which is rational in $x$ for any fixed integer value of $t$. I think condition 2 will end up being the hardest to satisfy because rational functions are quite rigid.
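For what it's worth, a finite check of that generating function in Python: convolving the candidate coefficients against those of $(1-x)^t$ should telescope to $1, 0, 0, \dots$ for small positive integer $t$.

from math import comb

def check(t, N=12):
    f = [comb(t + n - 1, n) for n in range(N)]      # candidate series coefficients
    g = [(-1)**k * comb(t, k) for k in range(N)]    # coefficients of (1-x)^t, t a positive integer
    prod = [sum(f[j] * g[n - j] for j in range(n + 1)) for n in range(N)]
    return prod == [1] + [0] * (N - 1)

print(all(check(t) for t in range(1, 8)))   # True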
Edit 1: My current opinion is that the conditions are not satisfiable. Based on the analogous situation with linear homogeneous recurrences on the integers I am going to conjecture that any
polynomial sequence which obeys a polynomial linear recurrence and is not essentially a geometric series has terms divisible by irreducible polynomials of arbitrarily high order.
Edit 2: A very strong result available in the integer case is Zsigmondy's theorem, but we don't need a result this strong. Here's a nice result in the integer case. Suppose an integer
sequence $a_n$ satisfies a linear homogeneous recurrence with integer coefficients, and let $p$ be a prime not dividing those coefficients. Then the sequence $a_n \bmod p$ is periodic (not
just eventually periodic) $\bmod p$ by Pigeonhole. If in addition there exists $n$ such that $a_n = 0$ and $a_n$ is unbounded, then it follows that there is a nonzero term of the sequence
divisible by $p$. For example, this is true of the Fibonacci sequence; in fact we have the much stronger result that for $p > 5$, either $p | F_{p+1}$ or $p | F_{p-1}$.
My guess is that a result like this holds in the polynomial case with $p$ replaced by a monic irreducible polynomial (say, of degree $2$), although the argument above breaks down as written.
Thanks, Qiaochu! That's a good example. But it gives a recurrence relation for $(f_n(t))$ for any individual integer $t$, rather than for $(f_n)$. By the way, this is also the problem with
the "weaker form" of condition 2 in my posting. Sorry for confusion. – algori Dec 6 '09 at 23:24
What analogy with the integers do you have in mind? – JSE Dec 7 '09 at 4:37
Qiaochu: re the new version -- do you mean irreducible over $\mathbf{Z}$ or $\mathbf{Q}$? – algori Dec 7 '09 at 4:38
JSE, algori: I've added some details. – Qiaochu Yuan Dec 7 '09 at 5:05
Aliso Viejo Accounting Tutor
...We take the time necessary to discuss the definitions of the terms we use and the principles of the subject area in which we're working. We work together through as many specific questions and
problems as necessary to make the client comfortable enough to work without me. The best tutor is the one who works to make his help unnecessary.
13 Subjects: including accounting, English, physics, calculus
...Please let me know what other information you may need. I hold a 2nd dan black belt in Tae Kwon Do. I have 15 years of teaching experience including group classes and private lessons.
24 Subjects: including accounting, statistics, finance, economics
...I also studied organic chemistry while preparing for the MCAT exam, so I know the subject well. This will not be the first time I have tutored it. Rather than forcing students to memorize, I
focus more on the conceptual understanding of the material.
51 Subjects: including accounting, reading, English, chemistry
...I help you with homework and on-line tests. I prefer to teach you the concepts, but I understand when you mostly just want to get the tests and homework done / passed. Many of my students noted
I am able to very quickly dive into their curriculum and understand and explain how to derive the (ri...
3 Subjects: including accounting, finance, Microsoft Excel
...I am an excellent public speaker and am currently in training to become a corporate Keynote Speaker. I will be attending the Gove-siebold Speech Class in Atlanta June 7 - June 9 2013. I hope to
be a fee speaker by year end.
24 Subjects: including accounting, reading, English, writing
Fuzzy Logic and Musical Decisions
Peter Elsea, University of California, Santa Cruz
This article is copyright Peter Elsea, 1995; parts or the entirety of it may be distributed in any form, provided there is no charge beyond actual copying costs, no changes are made, and this
statement accompanies all such copies.
Abstract: This article presents some of the core concepts of Fuzzy Logic and demonstrates how they may be applied to problems common in music analysis and composition.
In the Max environment, pitches are necessarily represented as numbers, typically by the MIDI code required to produce that pitch on a synthesizer. We must begin with and return to this
representation, but for the actual manipulation of pitch data other methods are desirable, methods that are reflective of the phenomena of octave and key.
A common first step is to translate the midi pitch number (mpn) into two numbers, representing pitch class (pc) and octave (oct) this is done with the formulas:
oct = mpn / 12
pc = mpn % 12
The eventual reconstruction of the mpn is done by
mpn = 12*oct + pc
In this system pc can take the values 0 - 11, in which 0 represents a C. Oct typically ranges from 0 to 10. Middle C, which is called C3 in the MIDI literature, and C4 by most musicians, is octave 5
after this conversion.
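The Max objects aside, the arithmetic is trivial to state in code; a minimal Python transcription of the formulas above (integer division and remainder):

def to_oct_pc(mpn):
    return mpn // 12, mpn % 12      # octave, pitch class

def to_mpn(octave, pc):
    return 12 * octave + pc

print(to_oct_pc(60))   # (5, 0): middle C is octave 5, pitch class 0
print(to_mpn(5, 0))    # 60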
The major drawback of this representation is that it does not easily reflect the nearness of B (pc = 11) to C (pc = 0). Stating that C is above B is easy enough. The relationship:
pabove = (pc + 1)%12
holds true in all cases, but the reciprocal operation:
pbelow = pc-1
breaks when the zero boundary is crossed. The proper expression is counterintuitive:
pbelow = (pc+11)%12
The extra constraints of key and tonality are even more awkward to account for, and chords are difficult to manipulate.
A more flexible system of representation is the pitch set. A pitch set has twelve members, which are either 0 or 1:
{1 0 0 0 0 0 0 0 0 0 0 0}
The 1 represents the presence of the pitch in the set, which runs from C chromatically to B. This one is {C}. For any other pitch, the 1 is placed in the location that corresponds to the pitch.[1]
Any pitch can be produced by rotating {C} right by the desired interval[2].
The concept "pitch below" is produced by a left rotation, and is not upset by zero crossings, as:
{1 0 0 0 0 0 0 0 0 0 0 0}
{0 0 0 0 0 0 0 0 0 0 0 1}
The Max object that produces these sets is Lror[3], which will rotate left when given a negative rotation value. Lror is designed to default to the C pitch set, so any pitch set is easily produced by
simply applying the pitch class to the left inlet of an empty Lror object.
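A Python stand-in for Lror, assuming the same conventions (positive rotations go right, negative go left, and the default input is the C set):

C_SET = [1] + [0] * 11

def lror(n, pitch_set=C_SET):
    # rotate a 12-member set right by n places (left when n is negative)
    n %= len(pitch_set)
    return pitch_set[-n:] + pitch_set[:-n] if n else list(pitch_set)

print(lror(7))    # G: {0 0 0 0 0 0 0 1 0 0 0 0}
print(lror(-1))   # B, the pitch below C, with no zero-crossing trouble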
Pitch sets can represent chords.
{1 0 0 0 1 0 0 1 0 0 0 0}
This set is a C major chord, and can be rotated to produce the other major chords.
The concepts of key and modality can also be represented by a set. This is the the C major scale:
{1 0 1 0 1 1 0 1 0 1 0 1}
All major scales can be produced from this by rotating (key) steps.
It is easy to exclude any pitch that does not belong to a key by a simple multiplication[4] between the scale set and the pitch set. Using this technique, we can generate most chords in a given key.
Start with a chordgen set, which is the union of the major and minor chords:
{1 0 0 1 1 0 0 1 0 0 0 0}
Rotate it to the desired position, say E (+4):
{0 0 0 0 1 0 0 1 1 0 0 1}
Finally, multiply it by the key set:
{0 0 0 0 1 0 0 1 1 0 0 1}
{1 0 1 0 1 1 0 1 0 1 0 1}
{0 0 0 0 1 0 0 1 0 0 0 1}
This produces an E minor chord.
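Carrying the same calculation out in Python, with elementwise multiplication standing in for Lmult and the positive positions reported as Ltop would report them:

CHORDGEN = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0]   # union of the major and minor triads
C_MAJOR  = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

def chord_on(pc):
    rotated = CHORDGEN[-pc:] + CHORDGEN[:-pc] if pc else list(CHORDGEN)
    masked = [a * b for a, b in zip(rotated, C_MAJOR)]       # like Lmult
    return [i for i, v in enumerate(masked) if v > 0]

print(chord_on(4))   # [4, 7, 11]: E minor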
These operations can be accomplished by two Max objects, Lror initialized with the chordgen set, and Lmult. (You will discover that this fails to generate the diminished chord on B. This problem can
be solved with a rather complex patcher that, although interesting, is the beginning of a finally unproductive path. I will instead lay some more groundwork and return to the subject later.)
Any list of pitch classes can be converted to a pitch set by the Ltoset object. Sets may be converted back to pitch classes by the Ltop object, which reports the positions of the highest n values of
a list. In the example above Ltop 3 whould give the three chord members 4, 7, 11.
Crisp and fuzzy logic
The preceding exercise was really an example of Boolean logic. The multiplication of the scale set by the chord generator is equivalent to the logical AND operation, which gives the intersection
of two sets. In traditional logic systems, an item is either a member of a set or it is not. G sharp is not a member of the C major scale, so it is represented by a 0 in the C major scale set. G is,
and gets a "truth value" of 1.
In Fuzzy Logic it is possible for items to have partial membership in a set. In other words, you might indicate a C minor scale like this:
{1 0 1 1 0 1 0 1 1 0 0.7 0.6}
Here the pitches C, D, E flat, F, G, and A flat are definitely members of the C minor scale. However, there are two different possibilities for the seventh degree. Some of the time B flat is used,
sometimes B natural. A fractional membership value reflects these possibilities. Note that this is not a probability. That would imply that you knew how many lowered and how many raised sevenths
there were going to be. These fractions merely indicate that either is possible, and that the rules for generating the pitches favor lowered sevenths somewhat[5].
Fuzzy logic makes it simple to represent concepts that are more linguistic than mathematical. For instance, in crisp logic[6], the concept "just below C" may be expressed by:
(x < 12) && (x > 9)
This implies that we do not consider A to be just below C, and that B and B flat are equally just below C. But the usual meaning of the statement "just below C" implies a gradation of below C-ness
that may include A in some circumstances. This can be represented with the fuzzy set:
{0 0 0 0 0 0 0 0 0 0.2 0.5 1}
Again, the membership values do not have to add up to anything, or fit any regular curve. They simply reflect our judgement of how well each pitch fits the criterion of "just below C". We can
contrast this with the gentler constraint "below C" which might be represented by:
{0 0 0 0 0 0 0 0.3 0.6 0.7 0.9 1}
This is a bumpy curve that includes the feeling that there is a bigger jump between scale degrees than between the major and minor versions of the same degree.
The next linguistic step is to combine two descriptions. To find notes that belong to both sets "below C" and "in C minor" we find the intersection of the two sets. In fuzzy logic, intersections are
most commonly found by taking the lower value of the equivalent members of each set. This is performed by the Lmin object.
The calculation of "below C in C minor" would yield:
{1 0 1 1 0 1 0 1 1 0 0.7 0.6}
{0 0 0 0 0 0 0 0.3 0.6 0.7 0.9 1}
{0 0 0 0 0 0 0 0.3 0.6 0 0.7 0.6}
To get the result of the operation, you take the highest valued member with Ltop 1, in this case 10 (B flat). If you said "just below C in C minor"
{1 0 1 1 0 1 0 1 1 0 0.7 0.6}
{0 0 0 0 0 0 0 0 0 0.2 0.5 1}
{0 0 0 0 0 0 0 0 0 0 0.5 0.6}
The result would be B natural.
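The same two calculations in Python: intersection by elementwise minimum, then the position of the largest member, as Ltop 1 reports it.

C_MINOR      = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0.7, 0.6]
BELOW_C      = [0, 0, 0, 0, 0, 0, 0, 0.3, 0.6, 0.7, 0.9, 1]
JUST_BELOW_C = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0.2, 0.5, 1]

def fuzzy_and(a, b):
    return [min(x, y) for x, y in zip(a, b)]          # like Lmin

def top1(s):
    return max(range(len(s)), key=s.__getitem__)      # like Ltop 1

print(top1(fuzzy_and(C_MINOR, BELOW_C)))         # 10: B flat
print(top1(fuzzy_and(C_MINOR, JUST_BELOW_C)))    # 11: B natural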
Here is an example of this technique in use:
The ChordGen patcher will produce the chords of the C major scale. To solve the problem of the diminished chord on seven, we modify the chordgen set to include the possibility of a diminished fifth,
{1 0 0 1 1 0 0.5 1 0 0 0 0}
rotate it to the B position and do the same calculation as before:
{0 0 1 1 0 0.5 1 0 0 0 0 1}
{1 0 1 0 1 1 0 1 0 1 0 1}
{0 0 1 0 0 0.5 0 0 0 0 0 1}
This produces the pitches 2, 5, 11 as required.
The rest of the example is a chord sorter to leave the chord in root position. To do this I use Lmatch to find the position of the root in the pitchlist. (Lmatch returns the position of one list or a
constant within another list.) If no match is found there will be no output at the left outlet, so ChordGen will not produce chords foreign to C major.
The position of the root in the unmodified list is 0 for root position chords, 1 for second inversion, and 2 for first inversion.
To get these into the proper order I rotate them using instructions stored in the Lbuf object. Lbuf will return the value found at a particular position in a list.[7] In this case, the values are the
ones necessary to rotate the chord into root order by the Lror object.
The ChordGen patcher will work with any scale.
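Putting the fuzzy fifth together with the sorter, a sketch of the whole patcher in Python; the helper names and the top-three selection are my transcription of Lror, Lmult, Ltop 3, and Lmatch, not Elsea's patch:

CHORDGEN = [1, 0, 0, 1, 1, 0, 0.5, 1, 0, 0, 0, 0]  # 0.5 allows the diminished fifth
C_MAJOR  = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

def rot(lst, n):
    n %= len(lst)
    return lst[-n:] + lst[:-n] if n else list(lst)

for root in [0, 2, 4, 5, 7, 9, 11]:                # the scale degrees of C major
    masked = [a * b for a, b in zip(rot(CHORDGEN, root), C_MAJOR)]
    chord = sorted(sorted(range(12), key=masked.__getitem__)[-3:])  # three highest, like Ltop 3
    i = chord.index(root)                          # like Lmatch
    print(root, chord[i:] + chord[:i])             # rotated into root position

Run it and the seven diatonic triads come out in root position, including [11, 2, 5] for the diminished chord on B.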
The greatest advantage of fuzzy logic is the ease with which tasks may be translated into terms the computer can deal with. Most problems can be solved with mathematical
probability, but the construction of such models is difficult and the effective approaches are not often obvious. In addition, such models usually do not work at all until they are complete, and
later addition of new factors can be a monumental task.
Fuzzy models, on the other hand, are a fairly straightforward translation of the linguistic statements of a group of rules. The model begins to function roughly as soon as two or three rules are
stated, and is easily refined by tuning up the sets or by addition of more rules.
In practical industrial applications, the fuzzy approach tends to lead to simpler, more easily maintained code in a shorter development time than other techniques.
Using fuzzy reasoning to produce chord inversions.
To show how fuzzy procedures are applied to musical problems, I will continue with the issue of chord inversions. The choice of a chord inversion depends on many factors, the quality of sound
desired, the avoidance of parallel fifths, fingering difficulties, and so forth. We begin the design process by formulating a set of rules as if...then... statements. Assume we want to choose
inversions that will keep common tones where possible, that will not follow a root position with another root position, and will otherwise change inversions from time to time. The following rules
state these criteria:
• If root position keeps common tones, then root position.
• If first inversion keeps common tones, then first inversion.
• If second inversion keeps common tones, then second inversion.
• If last position was root, then first inversion or second inversion.
• If there have been too many firsts in a row, then root or second.
• If there have been too many seconds in a row, then root or first.
The order in which the rules are listed makes no difference, as we are going to test all of them and combine the results. The final answer should be a number that can be used to rotate the chord to
the desired inversion: 0 to produce a root, 2 to produce first inversion, 1 to produce second.
All of these rules have a predicate (if...) and a consequent (then...). We evaluate the predicate of each rule to find out whether to add the consequent into the combined result. If the predicate is
crisp (as in "if last position was root") the consequent will either be reported or not.
In this example, the consequents are all sets of three members. The value for member 0 is a vote for root position, the value in 1 is a vote for first inversion, and the value in 2 is a vote for
second inversion. The consequent "then root or second inversion" will output the set {0 1 1}. The mechanism in figure 2 will do the work:
Figure 2.
If the predicate is fuzzy ("too many first inversions") the truth value extracted from the fuzzy set is used to modify the consequent in some way. One common modification is to clip the consequent
set to the truth value obtained from the predicate rule; that is, make all members of the consequent set no higher than the truth of the predicate. That is illustrated graphically by figure 3.
Figure 3.
The triangles[8] represent fuzzy sets for the predicate and the vertical lines the consequent. Some input value is used to derive a truth value from the predicate, which is then used to truncate the
consequent. Figure 4 shows a Max mechanism to do this.
Figure 4.
In this operation, Linterp is used to find the value at a particular position in a set. Linterp can also find values between members by interpolation. That provides a lot more accuracy than these
coarse examples suggest. Lmin, as we have seen, gives intersections between sets. When the input to Lmin is a single value, the output set is truncated at that value.
Once all the rules have been evaluated, the answer is then derived from the accumulated consequences, the solution set. Many fuzzy applications involve fifty to a hundred rules, and the solution sets
can get quite complex. Reducing these to an answer is called "defuzzification", and there are many approaches. One of the more common ones is to take the position of the maximum value produced in
the solution set.
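Before looking at the patcher, it may help to see the whole evaluate/accumulate/defuzzify cycle in one place. This Python sketch is illustrative only; the rule truth values are invented:

    def truncate(consequent, truth):
        """Clip a consequent set at the truth of its predicate (Lmin with one value)."""
        return [min(v, truth) for v in consequent]

    def accumulate(sets):
        """Sum corresponding members of the rule outputs (what Laccum collects)."""
        return [sum(vals) for vals in zip(*sets)]

    def defuzzify(solution):
        """One common defuzzification: the position of the maximum (Ltop 1)."""
        return max(range(len(solution)), key=solution.__getitem__)

    # Two fired rules, each a three-member consequent {root, first, second}:
    rule_a = truncate([0, 1, 1], 1.0)   # crisp predicate: last position was root
    rule_b = truncate([1, 0, 1], 0.2)   # fuzzy predicate: "too many firsts" is 0.2 true
    solution = accumulate([rule_a, rule_b])
    print(solution, defuzzify(solution))   # [0.2, 1, 1.2] 2 -> second inversion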
Figure 5 shows how these principles are applied to the problem of finding inversions.
Figure 5.
The simplest rule to evaluate is the one about not repeating roots. In this case we can fire the result of the last cycle into the sel object. If that result was 0 (root position), the list {0 1 1}
which is the union of 6 and 6-4[9] will be sent to the accumulator.
The concept "too many" is definitely a fuzzy construct. It is represented by a set with members that describe how much "too many" each is. For instance
{0 0 0.2 0.4 0.6 1}
will allow two or three repeats easily. At five repeats it will complain emphatically.
We evaluate the predicate of the rule "too many first inversions" by looking in the "too many" set at the position that corresponds to the number of times the first inversion has occurred. That is
monitored by a counter, as illustrated. Note that the counter is turned back to 0 by any other inversion. The Linterp object does the lookup.
To produce the consequent of the rule, the result of the lookup is used to truncate the set "root or 6-4" {1 0 1}. The Lmin object will produce a set like {0.2 0 0.2}, taking the lower of each pair
of values between the input and initialized set.
The rule for "too many 6-4" is identical except for the consequent.
The three result sets are added in the Laccum object, which is then flushed into Ltop 1 to reveal the winner. That value is transformed and fed back to another Lror (which is holding the chord in
question) to produce the final output. The value is also saved in an int object for the next iteration.
Even at this early stage, the model will begin to function in a rudimentary way, flipping between root and first inversion. This allows testing of the control aspects of the patcher while it is still
relatively uncomplicated. In this case, testing showed that a minor modification was needed to improve performance. An additional rule
• If last position was not root, then root.
ensures that there is always an output from this part of the patcher. (It can be removed later when there is more information being processed.) It is given a very low weight, so that the main rules
are not hindered.
Figure 6
Figure 6 shows how the rules involving common tones were included in the patcher. The L== object compares two lists and returns a value between 0 and 1 that reflects the degree of similarity.[10] To decide if an inversion of a new chord will have any tones in common, we generate the inversion and compare it with the last chord output. If there are no common tones, the result will be 0. One common tone gives a 0.3, two 0.6, and if all tones are the same the output is 1. These weightings work well with the values produced by the previous inversion rules. Note that the consequent set for 6-4 contains 0.7 instead of 1. This was edited to discourage second inversions slightly.
The newrule subpatcher in the upper right corner of the inversions patcher shows the flexibility of the fuzzy logic methodology. Since evaluation is always based on a sum of rules, it is easy to add
new ones. Experimentation with the working model showed that the progression V-I needed some special treatment. If the V chord happened to fall in root position, the tonic chord would be a 6-4, as
the logic strove to keep common tones. The new rule simply detects a root of 7 followed by 0[11] and injects a set designed to skew the outcome toward root position.
Figure 7.
This rule also illustrates how working with pitch sets can keep patchers simple. The numerical version of this patcher involves considerable mathematical gymnastics to allow for changing key. Here,
the key set to work with is simply an input.
At this point, the pitch classes are showing up in the output list in the desired order. The only thing left is to get them into the proper octave and play them. There are a variety of simple crisp
ways of doing this, but I am going to indulge in a little more complexity to illustrate another point in fuzzy logic, as shown in figure 8.
Figure 8.
The rules for deciding what value to add to the pitch class to wind up in the proper octave are simple:
• Add 48 (or another octave offset) to all three pitches.
• If the second pitch is equal to or below the first pitch, add 12 to the second pitch.
• If the third pitch is equal to or below the first pitch or the second pitch, add 12 to the third pitch.
Adding a constant to all members of a list is simple with the Ladd object.
To generate a set for "equal or below a pitch" we use the Lshiftr [12] object on the C pitch set. Thus the set "equal or below E" is produced as in figure 9.
Figure 9.
This is fed into the Lbuf object, which will report a 1 if a requested pitch is equal to or below the first pitch.
We unpack the input list to apply the first and second pitch to Lshiftr objects.[13] The output of the object labeled "below or equal to first pitch" is sufficient to test the second rule.
For the third rule, we need to evaluate "below the first or below the second". The word "or" implies a union. In fuzzy logic, unions are made by taking the greater value for any member in the two
sets, here done by the Lmax object.[14]
We have already dealt with the and operation. The ChordGen patcher described earlier really evaluates the rule "in the (root) chordgen set and in the C major scale." As we have seen, "and" implies the
intersection of sets and is accomplished with the Lmin object.
Using "or" or "and" to link simple tests, we can build rule predicates that are as complex as we wish.
To complete this operation, we evaluate each rule with Lbuf objects. Note that the list is unpacked again to produce the left inputs. This is necessary for timing purposes. If all of the pitch values
were taken from the same unpack object, the rule for the third pitch would be evaluated before the union derived from the second and first pitches was constructed.
The results of these evaluations will be 0 or 1. They are multiplied by sets containing 12 in the appropriate position, and the union of the two resultant sets is added to the original list.
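In ordinary code the three octave rules collapse to a couple of comparisons. A sketch, comparing actual pitch values rather than building Lshiftr sets (an equivalent crisp formulation, not the patcher's method):

    def add_octaves(pitch_classes, offset=48):
        """Place three ordered pitch classes in ascending register."""
        first, second, third = (p + offset for p in pitch_classes)
        if second <= first:                     # rule 2
            second += 12
        if third <= first or third <= second:   # rule 3
            third += 12
        return [first, second, third]

    print(add_octaves([0, 4, 7]))   # [48, 52, 55] -- C major in root position
    print(add_octaves([7, 0, 4]))   # [55, 60, 64] -- the same chord, rotated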
The output of the addoct patcher is a list of the pitches desired in the chord. This may be passed through an iter object and fed to makenote in the usual manner for producing notes in Max. However,
more sophisticated performance is desired here. The criteria are:
• New chords should curtail old chords.
• Common tones between chords should not be rearticulated.
• All tones should end after a preset duration.
As you have probably begun to gather by now, once you have stated the rules of a process, you are halfway to a working patcher. Figure 10 shows how the makechord patcher developed.
Figure 10.
The clocker and associated objects merely count the milliseconds since a note came in. If 2 seconds pass by, a bang is issued. This is a routine Max function.
The Lbuf object is used to detect any differences between the old chord and the new one. Up to now, I have been using Lbuf as a means for interrogating a particular member of a set. However, that is
an incidental feature of Lbuf. Its main function is to store sets. If a list is input to the left inlet of Lbuf, the stored list is output and the new list is stored in its place[15].
To see which parts of the lists change, the last list is subtracted from the new list. Then the absolute values are found, and Lhigh 3[16] is used to convert all non-zero members of the result list
to 1. This will create a control list that can be multiplied by the chords. The old chord is itered (after the 0s are removed) and each note is paired with a 0, which will create a note off when sent
to note out. The new chord is treated the same way, but paired with a velocity value so a new note will be generated.
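The change-detection logic, reduced to a sketch (the subtract/absolute-value/Lhigh chain amounts to testing the pitches for inequality; the velocity of 100 is an arbitrary choice of mine):

    old_chord = [48, 52, 55]
    new_chord = [48, 53, 57]

    # 1 where the chord changed, 0 where a tone is held in common.
    changed = [1 if a != b else 0 for a, b in zip(old_chord, new_chord)]

    note_offs = [(p, 0)   for p, c in zip(old_chord, changed) if c]
    note_ons  = [(p, 100) for p, c in zip(new_chord, changed) if c]
    print(changed)     # [0, 1, 1]
    print(note_offs)   # [(52, 0), (55, 0)] -- only the changed tones are released
    print(note_ons)    # [(53, 100), (57, 100)] -- the common tone 48 is not restruck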
If the timer should happen to go off, the last chord played, which was stored in another Lbuf, is sent to the note off part of the output structure.
So far, the logic has been crisp. The fuzzy aspect of this patcher is simply that the outputs of various rules are accumulated to create the final control set. Again, that makes it easy to extend. As first built, there was no guarantee that after the timer had silenced the old chord the next chord would be played in full, so a new rule:
• If time has expired, play all notes of the new chord.
was required. The timer simply feeds a control set of all ones into the Laccum. Likewise, the rule:
• If root is tonic, play all notes.
was found desirable after some testing.
Figure 11 shows the master patcher. It takes notes in from a MIDI keyboard, applies all the processes discussed, and sends the chords out. There is obviously room for refinement, but even at this
simple level it behaves in a very musical and unsurprising manner.
Figure 11.
Some more fuzzy concepts
The rule construction process described so far is only one aspect of the system of Fuzzy Logic. Many more concepts basic to Fuzzy studies are appropriate for describing musical situations and coding them.
Many musical concepts are harder to express mathematically than would seem apparent at first blush. Take as an example, mezzo-forte (mf). Literally, this translates as medium-loud, a fuzzy concept if
there ever was one. Musicians define mf as softer than f, and louder than mp, which is louder than p.
If we consider that all these have a partial membership in the fuzzy set "loudness", we can place them in order and give them a membership that reflects common practice:
0.1 0.2 0.3 0.45 0.55 0.7 0.9 1
ppp pp p mp mf f ff fff
Notice that the values do not grow evenly. In actual performance, the difference between mp and mf is not as great as that between mf and f. [17] Also note that ppp does not have zero loudness, since
zero loudness would imply silence.
The usefulness of the fuzzy approach becomes apparent when we want to perform a crescendo over n notes. We simply pick the starting and ending values and interpolate from one to the other through n steps.
The Linterp object will do this. It accepts fractional indices and reports a value interpolated from the previous and following entries. So if you feed 2, 2.2, 2.4, ... 3 into
Linterp (0.1 0.2 0.3 0.45 0.55 0.7 0.9 1)
you would get a smooth crescendo from p to mp.[18] The advantage of this over a more direct approach, like assigning a velocity value to each dynamic and doing the math, is that this can be applied
flexibly to various situations.
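A sketch of the fractional lookup that drives such a crescendo (my implementation of the idea, not the Linterp object itself):

    loudness = [0.1, 0.2, 0.3, 0.45, 0.55, 0.7, 0.9, 1.0]   # ppp ... fff

    def linterp(curve, pos):
        """Value at a fractional index, interpolated between neighbors."""
        lo = int(pos)
        hi = min(lo + 1, len(curve) - 1)
        return curve[lo] + (curve[hi] - curve[lo]) * (pos - lo)

    for step in range(6):
        print(round(linterp(loudness, 2 + step * 0.2), 2))
    # 0.3, 0.33, 0.36, 0.39, 0.42, 0.45 -- a smooth crescendo from p to mp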
For instance, consider two instruments with differing velocity to loudness characteristics; one hits its loudest at vel=100 and the other maxes out at vel=120, with a nonlinear velocity curve. You
can code these two curves into lists for the Lfind object:
Lfind (0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1)
Lfind (0 0.05 0.1 0.15 0.2 0.3 0.4 0.6 0.7 0.85 0.9 0.95 1)
Lfind searches a list for the input value and reports its position. If the value falls between two points, an interpolated fractional position is calculated. In this case, multiplying that position
by 10 gives a velocity equivalent to the loudness desired[19]. We can use the loudness value calculated earlier to find the appropriate velocity to send to the desired instrument.
The curves in Lfind can be written in simplified notation. That is because Lfind expects a monotonic curve. If there are dips in the curve, a value may exist in two locations. Lfind gets around this
by interpolating between the first value it finds that is lower than the desired value and the nearest higher value. Therefore zeros within a curve are ignored, and you only have to enter the
inflection points[20].
This curve:
Lfind (0 0.05 0 0 0.2 0 0.4 0.6 0.7 0.85 0 0 1)
would give the same results as the second of the pair above.
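The inverse lookup can be sketched the same way. The function below is my reading of the behavior described above, not the Lfind object itself; zeros inside the curve are skipped, as the simplified notation requires:

    def lfind(curve, value):
        """Interpolated position of a value in a monotonic curve."""
        points = [(i, v) for i, v in enumerate(curve) if v > 0 or i == 0]
        for (i0, v0), (i1, v1) in zip(points, points[1:]):
            if v1 > v0 and v0 <= value <= v1:
                return i0 + (i1 - i0) * (value - v0) / (v1 - v0)
        return None

    curve = [0, 0.05, 0, 0, 0.2, 0, 0.4, 0.6, 0.7, 0.85, 0, 0, 1]
    print(lfind(curve, 0.3) * 10)   # 50.0 -- the velocity for loudness 0.3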
Linfer uses this simplified notation also. Given a list with two non-zero values, it will report an interpolated value for any intermediate position. An object like this:
Linfer (0.1 0 0 0 0 1 0 0 0.5 0 0 0 0 0.1)
is a very simple envelope generator that can be reprogrammed by swapping lists around.
One of the strongest features of fuzzy logic is that it allows classification of data within vague guidelines.
As an example, take the problem of finding dynamic markings for a group of notes input from a keyboard for transcription. The usual approach is to assign a range of velocities to each mark; 40-49 is
piano, 50-59 mezzo piano, and so forth. The flaw in this method is that when a group of notes clusters around the edge of a range, the output will thrash between two markings. If the data for a phrase
read 44, 51, 46, 58, 62, 67, 72, 74, most notes would get a different marking: p, mp, mf, or even f. If we simply averaged the dynamics we would get 59.25, just over the edge into mf.
The fuzzy approach is to assign every velocity a membership value in a set for each dynamic. In figure 12 the sides of the triangles represent the membership values[21] in each dynamic set for the
velocities across the bottom.
Figure 12.
The sets overlap, so velocity 44 has both a membership value of 0.7 in piano and a membership value of 0.4 in mezzo piano.
The solution set for this problem will have eight members, one for each dynamic. The memberships in each dynamic set for each note in the phrase are added to the solution set, and then all members of
the solution set are divided by 8 to get a set of averages:
{0 0 0.15 0.52 0.48 0 0 0}
The average amount of "pianoness" for our eight notes is 0.15. The average for mp is 0.52, and the average for mf is 0.48. The biggest average falls clearly within mezzo piano.
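A sketch of the whole classification, with assumed triangular membership shapes (the centers and widths below are my guesses, since figure 12 is not reproduced here; the resulting averages differ from the set above, but the winner comes out the same):

    def triangle(center, width):
        return lambda v: max(0.0, 1.0 - abs(v - center) / width)

    dynamics = ['ppp', 'pp', 'p', 'mp', 'mf', 'f', 'ff', 'fff']
    members  = [triangle(c, 15) for c in (15, 30, 45, 60, 75, 90, 105, 120)]

    notes = [44, 51, 46, 58, 62, 67, 72, 74]
    solution = [sum(m(v) for v in notes) / len(notes) for m in members]
    print([round(s, 2) for s in solution])
    print(dynamics[max(range(len(solution)), key=solution.__getitem__)])   # mp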
The fuzzy approach makes it easy to add more factors to the classification, such as giving more weight to notes near the beginning of a phrase. You would do this by finding a value for "near the
beginning" from a set that looked something like Figure 13.
Figure 13.
This value would be used to scale the dynamic memberships for each note before adding them to the solution set. As you can see, the last few notes in a phrase would not affect the dynamic marking.
For the next step, you could compare the starting dynamic with the dynamic found for the notes "near the end" and if necessary insert a crescendo or decrescendo.
The fuzzy number is central to fuzzy logic and reasoning. Basically, a fuzzy number represents a concept similar to "approximately 5". It is a set of the type:
{0 0 0 0.2 0.6 1 0.6 0.2 0 0 0 0 0}
where the one is in the position corresponding to the number, and the shape of the flanking curve is appropriate to the application.
In music, intervals are fuzzy numbers. The concept "a second above" can have two possible answers depending on the scale and the starting point. The interval a second above is represented by the set:
{0 1 1 0 0 0 0 0 0 0 0 0}
We evaluate the complete construct "a second above D sharp in E major" by rotating "a second above" by three steps and then taking the intersection with the E major scale:
{0 0 0 0 1 1 0 0 0 0 0 0}
{0 1 0 1 1 0 1 0 1 1 0 1}
{0 0 0 0 1 0 0 0 0 0 0 0}
This set may be evaluated by Ltop if we want the answer (E) or accumulated with other sets if we are building toward a more complex solution.
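In code, the evaluation is the same rotate-and-intersect pattern used for chords (a sketch, with my names):

    second_above = [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    e_major      = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]

    def rotate(s, n):
        return s[-n:] + s[:-n]

    shifted = rotate(second_above, 3)             # D sharp is pitch class 3
    answer  = [min(a, b) for a, b in zip(shifted, e_major)]
    print(answer.index(max(answer)))              # 4 -> E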
There is a complementary set for a second below:
{0 0 0 0 0 0 0 0 0 0 1 1}
as well as above and below pairs for all of the imperfect intervals. These could be generated from the second by rotation, but it is probably most efficient to have them stored in a coll object.
Fuzzy logic is a well-defined discipline, and we should not play fast and loose with its conventions just because it deals with vagueness. However, some unique properties of music and the
practicalities of the Max environment sometimes require us to do things that might raise an eyebrow in theoretical circles.
In fuzzy sets, all members are normalized to values from 0 to 1. This is an important convention, and must be observed if sets derived from a variety of processes are to be compared or merged.
Generally, a normalization is performed after addition or subtraction of sets, and may be appropriate after intersection in some cases.
However, if the addition is the final step in an accumulation process that will then be evaluated by Ltop, normalization is not necessary. If a further intersection with a scale set is to be
performed on the un-normalized results of the accumulation (which often saves processing steps) the Lmult object should be used instead of Lmin.
Another situation in which normalization is not needed is if the results will be passed to a table or other object that does not deal with floats. In that case the set needs to be multiplied by 10 or
100 and passed through Lround to ensure that the values are in an appropriate range.
I have also found negative membership values to be of some use, particularly in the case of suppressing unwanted repeats. Throwing the last pitch into Laccum with a negative weight is a simple way of
making repetitions unlikely.
Fuzzy logic allows a vague description of the rules that will lead to a desired outcome, but it is still highly accurate and repeatable, given consistent initial conditions. However, in music
composition, we do not always want the same outcomes, so we often add some indeterminacy to the program.
The simplest way to add this indeterminacy is to play roulette with the sets. Figure 14 shows a technique for doing this.
The result of this operation, when accumulated with a group of rule evaluations, will randomly encourage one choice. The weighting of this random rule must be carefully chosen to maintain the
character of the rule set.
A more powerful way of including indeterminacy is to load a set into a table. The LtoTab object is made specifically for this transformation, as it converts a list into a series of co-ordinated
addresses and values that will transfer the list contents to the first locations in the table. Tables can only manage ints, so the set should be multiplied by ten or a hundred and rounded off.
The bang or quantile function of the table object will then output the address of one of the nonzero members. The probability of getting any particular address is the value stored there over the sum
of all nonzero values. So, if you loaded the set
{5 0 0 0 2 0 0 3 0 0 0 0}
into a table, 10 bangs should produce 5 Cs, 2 Es, and 3 Gs.
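The quantile behavior is easy to mimic outside of Max; Python's standard library has a weighted choice built in (a sketch of the idea, not of the table object):

    import random
    from collections import Counter

    weights = [5, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0]   # the set above, as table values

    def bang():
        """Pick an address with probability value/sum(values), like table's quantile."""
        return random.choices(range(len(weights)), weights=weights)[0]

    print(Counter(bang() for _ in range(10000)))
    # roughly 5000 zeros (C), 2000 fours (E) and 3000 sevens (G)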
Indeterminacy with a higher level of organization can be achieved with a coll and the unlist object. The patcher in figure 15 processes the empty signal (a bang) from the unlist object to choose a
list from the coll. Unlist then produces one member of the list at a time.
Figure 15
This patcher will randomly work its way through rhythm patterns stored in the coll object. The drunk object could be replaced by any rule evaluator, fuzzy or probabilistic.
The Lchunk and Ltocoll objects are useful for loading lists into coll objects.
Indeterminacy in fuzzy composition systems is most effective when it is used judiciously at key points in the program. A simple composition engine may be modeled as in figure 16.
Figure 16.
This process is reiterative, with each output initiating and affecting the computation of the next. There are four major components that determine the results:
• User input includes initial settings as well as real time events. It is important that these inputs be accepted in terms the user understands.
• Progress monitors impose a temporal framework on the piece. This could be as simple as a metronome, or some complex phrasing system with its own user input.
• The Fuzzy evaluation rules incorporate the system's knowledge about the piece and music in general. They will provide desired amounts of predictability and self-similarity, as well as conformance to the composer's plan.
• The indeterminate section adds unpredictability.
Choosing from a set of reasonable options seems to me to be the essence of composition. Indeterminate choice is probably not the best way to do this, but until someone discovers an algorithm for
creativity it will have to do. A practical system will have several of these composition engines, each working on some aspect of the music and all interlinked.
The patcher in figure 17 is an example of such an engine, designed to produce fairly traditional chord progressions.
Figure 17.
The possible chords are grouped in the rows of a matrix according to function. The order of the chords in each row is suggestive of their order of probability. Translating the interval numbers into
chord symbols, the matrix is really:
I iii vi
IV ii vi
V vii V
The chord will be determined by both the chooseRow and chooseColumn subpatchers. Working down the rows produces tonic, subdominant, dominant, tonic progressions; moving across introduces variant
chords in each function. The duplicate chords will not appear more often than the others because the rules within the subpatchers disfavor the third column.
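A crisp sketch of the matrix walk (the weights and the row schedule are my inventions; in the actual patcher both choices come from the rule mechanisms described below):

    import random

    matrix = [['I', 'iii', 'vi'],    # tonic row
              ['IV', 'ii', 'vi'],    # subdominant row
              ['V', 'vii', 'V']]     # dominant row

    def choose_column(weights=(5, 3, 2)):
        """Probability-weighted column choice; the third column is disfavored."""
        return random.choices(range(3), weights=weights)[0]

    def choose_row(beat, phrase_len=8):
        """Walk tonic -> subdominant -> dominant across the phrase."""
        return min(2, 3 * beat // phrase_len)

    print(' '.join(matrix[choose_row(b)][choose_column()] for b in range(8)))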
The chooseRow patcher is shown in figure 18.
Figure 18.
This patcher uses fuzzy rules to choose a row according to the progress through the phrase. The choice is made on each beat, but the possibilities are controlled by the input from measure (here 0-7).
The three main rules:
• If early in phrase then tonic
• If midphrase then subdominant
• If late in phrase then dominant
will assert themselves strongly at times, but at the one-quarter and three-quarter progress points, the milder rules
• Generally move forward
• If all else fails then tonic
will produce beatwise motion. These interactions are determined by the shape of the rule curves. Short, tall curves have high influence within a narrow region. Adjusting these curves (which could be
done by user input) has a strong effect on the piece.
The chooseColumn patcher is shown in figure 19. It provides the indeterminate part of the puzzle, selecting one of two or three chords within each row.
The chooseColumn patcher uses the quantile function of the table object to make a probability weighted choice of columns. The basic probabilities are loaded into the table by a fuzzy mechanism, but
the rules are crisp. The values chosen reflect my preferences at the moment. They are an excellent candidate for user input.
This is a classic Markov function that could easily be expanded to more dimensions of choice. A second-order function can be achieved by using multiple predicates in the rules, such as:
• If row is 0 and last column was 0 then 0.2 0.4 0.4.
There is no reason the production of probabilities can't be fuzzy. You simply have to ensure that all probabilities don't come out zero, or there will be no output from the table.
There are three additional rules in the chooseColumn patcher.
• On strong beats, favor 1
• At the beginning of a phrase, really favor 1
• If too many repeats, change columns
The TooMany rule is contained in a subpatcher, and works exactly like the too many repeats rule in the inversions patcher.
The entire structure is contained in the patcher shown in figure 20. This uses elements from the Fuzzy_Harmony patcher to play continuous arpeggios. The 12/8 time subpatcher contains counters to produce beat and measure numbers within an 8-measure phrase.
Figure 20.
This tutorial has barely begun to explore the musical possibilities of Fuzzy Logic methodology. The ability of Fuzzy Logic to express traditional musical learning, and the ease with which Fuzzy
procedures can be implemented, tested and expanded offer tremendous promise of a fruitful region of exploration. The examples offered here are literally my first experiments in Fuzzy technique, and
represent only a few hours of development (mostly spent in tidying up the graphics). They are crude, but offer surprisingly sophisticated performance. I can foresee (as I hope you can by now) many
problems that will benefit from the Fuzzy approach, and look forward to future developments.
This tutorial is also sadly deficient in expressing the principles of Fuzzy Logic itself. I leave that to better and more experienced writers. For beginning studies I can recommend:
McNeill, Daniel and Freiberger, Paul; Fuzzy Logic: The Revolutionary Computer Technology that is Changing our World. Simon and Schuster, 1993
And for practical application (including C++ examples and code on disk)
Cox, Earl; The Fuzzy Systems Handbook. AP Professional, 1994
Because this is an emergent technology, important new material is released every day. An ongoing discussion and review of new texts can be found in the usenet newsgroup comp.ai.fuzzy.
Posts by sam
Total # Posts: 3,849
Americans are urged to make at least half of their grains whole grains. An example of a food with whole grains includes ________. A. Brown rice B. Pasta C. White bread D. White rice d?
Based upon the government's research, they recommend that we do what with regard to our intake of dairy? A. Switch to fat-free or low-fat milk B. Drink whole milk C. Avoid dairy if it causes gas D.
Eat more ice cream a?
Man jogs 1.5 m/s. Dog waits 1.8 s then runs at 3.7 m/s. How far will they have each travelled when dog catches up to man? Answer in units of m
Two ants race across a table 57 cm long. One travels 4.14 cm/s and the other 3cm/s. when the first crosses the line, how far behind is the second one? Answer in cm.
2. The complete predicate of the sentence begins with the word A. great. B. met. C. the. D. sailed
us history
how did the New Deal affect gender roles in American society?
Betsy is making a flag. She can choose 3 colors from red, white, blue and yellow. How many choices does Betsy have? I get 8 or 16. Am I right? Do I count choices that are in different order as another
choice? r w b r b w Is that 1 choice or 2?
maybe B
yeah is c
This is not an answer!!! Thankyou so much Elena, that was correct and a BIG BIG help. Thankyou so much!!!! This is not an answer!!!
While working on her bike, Amanita turns it upside down and gives the front wheel a counterclockwise spin. It spins at approximately constant speed for a few seconds. During this portion of the
motion, she records the x and y positions and velocities, as well as the angular po...
Market analysts often use cross-price elasticities to determine a measure of the competitiveness of a particular good in a market. How might cross-price elasticities be used in this manner? What
would you expect the cross-price elasticity coefficient to be if the m...
Indicate how an understanding of demand and of elasticity of demand is useful in analysing the following statement. If total consumer expenditure remains the same after a new tax is imposed on
cigarettes then spending on cigarettes will decrease and spending on other goo...
when doing formula p=2L+2w how do you do the problem L=2meters and w=1/2 meters and p=?
help please
4 boards (1 inch x 8 inches x 6 feet) and 3 boards (1 inch x 6 inches x 4 feet) = how many board feet? 1 board foot = 1 inch x 12 inches x 12 inches
Thank you so much.
The formaula works for a rectangle, but this is a circle. How do you convert sides of a circle? The answer is $176.63, but I can't "make it fit".
carpet is $25 per square yard. what is cost of circulare rug 9ft in diameter?
Thank you, it works.
steel is $50 per cubic foot. What is cost of cone with radius 7.5 inches and 25 inches height?
-6x+5+5=47 -6x+10=47 -6x +10-10=47-10 -6x=37 --- --- -6 -6 x=37/-6 x=-37/6 check by putting -6(-37/6)+5+5=47. It will balance
hydrogen burns in oxygen to form water. Calculate the volume?
i had 2 questions left to finish but i can't really understand how to do it 9) design an electrochemical cell using Pb(s) and Mn(s) and their solutions and answer the following questions: i answered a-i
all that is left is j which is: indicate the substance oxidise,substance redu...
elimination method -3x+4y=19.5 -3x+y=10.5
suppose that A=(58,3,4),B=(-2,18,24),and C=(-2,3,4) is triangle ABC a right triangle
What is the freezing point of a solution that contains 10.0 g of glucose (C6H12O6) in 100 g of H2O? Kf for water is 1.86oC/m
Chemistry II
A solution is made by mixing 38.0 mL of ethanol, C2H6O, and 62.0 mL of water. Assuming ideal behavior, what is the vapor pressure of the solution at 20 °C?
Chemistry II
Quinine is a natural product extracted from the bark of the cinchona tree, which is native to South America. Quinine is used as an antimalarial agent. When 3.28 g of quinine is dissolved in 25.0 g of
cyclohexane, the freezing point of the solution is lowered by 8.43 °C. Th...
An ice cream cone is 3 inches wide at the widest point, and is 6 inches tall. What is the volume of the cone in cubic inches? Use Pi
Solve the problem by stating the equation you would use to solve and then solving it. Be sure to show all of your work. In a local election, 53,400 people voted. This was a decrease of 7% over the
last election. How many people voted in the last election? Round to the nearest ...
How many grams of calcium phosphate are theoretically produced if we start with 10.3 moles of Ca(NO3)2 and 7.6 moles of Li3PO4? Reaction: 3Ca(NO3)2 + 2Li3PO4 → 6LiNO3 + Ca3(PO4)2
Reaction: 3Ca(NO3)2 + 2Li3PO4 → 6LiNO3 + Ca3(PO4)2
How many grams of calcium phosphate are theoretically produced if we start with 10.3 moles of Ca(NO3)2 and 7.6 moles of Li3PO4?
9. Elias Howe invented which of the following?
AFM (MATH)
a) Write an equation to represent the volume of an open box constructed by cutting congruent squares from the corners of a 24" by 14" piece of cardboard. b) What is the domain of this model?
Below, you will find a list of sentences. After each sentence, select true if the topic of the statement is suitable to form the basis of a single persuasive paragraph. If the topic of the statement
in your textbook is unsuitable for a single persuasive paragraph, select false...
i say sunrise
Thank you
Social Studies
1. For revolutionaries like Sam Adams, the term _______ embraced a way of life, a core ideology, and an uncompromising commitment to liberty and equality. A. independency B. republicanism C.
federalism D. Americanism 2. Which of the following best describes the Columbian Excha...
Determine the sum of money that must be invested today at 9% interest compounded annually to give an investor annuity (annual income) payments of $5,000 per year for 10 years starting 5 years from
Find the length of segment CE if AE = 8 inches, EB = 10 inches, and ED = 5 inches.
The salaries at a corporation are normally distributed with an average salary of $29,000 and a standard deviation of $4,000. What is the probability that an employee will have a salary between
$22,520 and $23,480? Enter your answer rounded to exactly 4 decimal places.
The salaries at a corporation are normally distributed with an average salary of $29,000 and a standard deviation of $4,000. Sixteen percent of the employees make more than Sara. What is Sara's
salary? Round your answer to the nearest dollar.
Thanks a lot for your help, Count Iblis! :)
Evaluate the integral: 16csc(x) dx from pi/2 to pi (and determine if it is convergent or divergent). I know how to find the indefinite integral of csc(x) dx, but I do not know how to evaluate the
improper integral, at the following particular step. I know I need to take the li...
Reimbursement methodologies
CMS-1450 There is no form ub-4. there is UB-04 CMS_1450 form for out patient claims UB-92 MANAGED CARE
Thank you for your help, Bob. I will be more clear regarding exactly where I do not understand. I know I need to take the limit on the upper bound (e.g. A) as it approaches pi from the left side.
After doing so, I get ln|csc(x)+cot(x)|. Approaching pi from the left side, I get...
Evaluate the integral: 16csc(x) dx from pi/2 to pi (and determine if it is convergent or divergent). I know how to find the indefinite integral of csc(x) dx, but I do not know how to evaluate the
improper integral.
Let A=number of animals, D=number of dogs, B=number of bunnies, and C=number of cats. Given: 1/2*A = D 1/4*A = B C = 6 C = B Solution: C=B=6 1/4*A=B=6 1/4*A=6 A=6*4=24 D=1/2*A D=1/2*24 D=12
Therefore, there are 12 dogs.
You need to post the options given in the question. For instance, the folloiwng are proper ways to express the teacher-student ratio: 1:20, 1/20, 0.05
Two number cubes are rolled. Both cubes are labelled 1 to 6. The numbers rolled are added. What is the probability of each outcome? A) the sum is 12. B) the sum is less than 4. C) the sum is 7. D) the sum is 2.
Two cubes are rolled. Both cubes are labelled 1 to 6. The numbers rolled are added. What is the probability of each outcome? a) sum is 12 B) the sum is less than 4. C) the sum is 7. D) the sum is 2.
the reaction of N2+3H2-->2NH3 is used to produce ammonia commercially. if 1.40 g of N2 are used in the reaction, how many grams of H2 will be needed?
A 0.93 kg block is shot horizontally from a spring, as in the example above, and travels 0.539 m up a long a frictionless ramp before coming to rest and sliding back down. If the ramp makes an angle
of 45.0° with respect to the horizontal, and the spring originally was com...
What is the standard form of this parabola: -2x^2+16x+24y-224=0 Please Help, parabolas are evil.
An irregular octagon R has vertices (6, 2), (2, 6), (−1, 5), (−5, 1), (−6, −2), (−2, −6), (1, −5) and (5, −1). Using standard notation, write down the elements of the symmetry group S(R) of R, giving
a brief description of the geo...
Introduction to Ethics& Social Responsibility
The outlawing of slavery and extending voting rights to women seem to indicate that ethics can have beneficial results. Give an example from the past that indicates a similar result or a current
social policy that might be regarded as wrong, which therefore needs to be changed...
Renee belongs to a bowling club. She scores 50, 52, 55, and 59 on her first four games. She hopes to continue improving according to this pattern. Part A: What are the next four terms in Renee's sequence? Part B: Write a recursive function rule for the sequence? Part C: ...
Water is to be pumped to the top of a skyscraper, which is 1270 ft high. What gauge pressure is needed in the water line at the base of the building to raise the water to this height?
8th grade math
how do you solve x+1/10= a whole number
I need help with these two review calculus problems for my final exam. What is the derivative of a). (5x^2+9x-7)^5 b). Lne^x^3 For a. I got 50x+45(5x^5+9x-7)^4 For b. I got 3x^2 Can you verify
Thank you so so much! I finally understand what to do! :)
I am not sure what to do here: Find the derivative (f '(x) using the limit process a). f(x)= 2x^2+5x+2 find f ' (2) b.) Find the equation of the Tangent Line at (1,5)
Mutually Assured Destruction" is a policy based on which game theory strateg
HOw do I find the domain of f(x)= x^2+3x+5?
Pb(NO3)2(aq) 2 x 10^-3 M and Na2SO4(aq) 2 x 10^-3 M Identify the ions that are present in each solution, and calculate their concentrations. If 1 L of each solution are mixed, will a precipitate be
formed? Why?
Pb(NO3)2(aq) + Na2SO4(aq) → PbSO4(s) + 2NaNO3(aq) Identify the ions that are present in each solution, and calculate their concentrations.
solve equation. 2p^2-50=0 PLEASE HELP
multiply: -4x^3(6x-3x^4) show your work. please help me!!
When adding or subtacting polynomials, what is very important to remember? PLEASE HELP
what is 18x^5+3x^3+6x after it is divided by 3x? show your work. PLEASE HELP MEEE
What is the result when (4x^2-5x+7) is added to (x^2-3x-6)? show your work. PLEASE HELP ME!!
multiply: a)(2x+5)(2x-5) b) -4x^3(6x-3x^4) show your work. please help me!!
what is the result when 2x^2+3xy-6 is subtracted from x^2-7xy+2? show your work. please help me!
ok but could you give me an example of what state in terms of h means
a cuboid with length 13 cm, width 4 cm and height h. State in terms of h the shaded face of the cuboid. Write an expression in terms of h for the volume of the cuboid. If the volume of the cuboid is 286 cubic cm, calculate the height h of the cuboid.
I don't understand how to fnd it though Reiny :( perhaps you could help?
Determine whether the vectors are parallel, perpendicular or neither. 1) v = (4,1), w = (1,-4) 2) v = (2,3), w = (4,6) 3) v = (2,3), w = (3,-3) Please and thanks!
Find the component form for vector v with the given amplitude and direction angle theta. |v|=20.6, theta=102.5deg
Find the dot product of u and v u= 3i+7j v= 11i-5j
What is the measure of each angle in a Googolgon
Suppose it can be assumed that 60 percent of adults favor capital punishment. Find the probability that on a jury of twelve members: a) All will favor capital punishment. b) There will be a tie.
how to find the length of a triangular prism? when the volume is 1470 cubic cm, the height is 10.5 cm and the base is 14 cm
algebra 2
Sorry, that wasn't one of the choices!!!! Hopefully someone who knows their algebra can help me answer it correctly :-)
algebra 2
Graph the equation. Identify the vertices, co-vertices, and foci of the ellipse. 64x^2+16y^2=1024 This is what i have so far vertices (0,8)(0,-8) foci(0,4√3),(0,-4√3)
algebra 2
Graph the equation. Identify the vertices, foci, and asymptotes of the hyperbola. Y^2/16-x^2/36=1 I think....vertices (0,4),(0,-4) foci??? asymptotes y=+/- 2x/3
algebra 2
Write an equation of the hyperbola with vertices at ( 7, 0) and (7, 0) and foci at ( 9, 0) and (9, 0)
algebra 2
Write an equation of an ellipse with center at (0, 0), co-vertex at (-4 , 0), and focus at (0, 3). I think its... X^2/16+y^2/9=1
algebra 2
Write an equation of the hyperbola with foci (0, -2) and (0, 2) and vertices of (0, -1) and (0, 1).
algebra 2
Write an equation of the ellipse with center at (0, 0), vertex at (4, 0), and co-vertex at (0, 2)
algebra 2
Thank you very much!!
algebra 2
A swimming pool in the shape of an ellipse is modeled by x^2/324+y^2/196=1, where x and y are measured in feet. Find the shortest distance (in feet) across the center of the pool? i think its 28
algebra 2
Thank you :-)
algebra 2
What is the equation of an ellipse with a vertex of (0, 3) and a co-vertex of (-2 , 0)? i think its 9x^2+4y^2=36...not sure
algebra 2
Is it (0,±6)?
algebra 2
Thank you again.
September 2012
Math Digest
Summaries of Media Coverage of Math
Edited by Mike Breen and Annette Emerson, AMS Public Awareness Officers
Contributors: Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson (AMS), Brie Finegold (University of Arizona), Baldur Hedinsson
(2009 AMS Media Fellow), Allyn Jackson (Deputy Editor, Notices of the AMS), and Ben Polletta (Harvard Medical School)
September 2012
Brie Finegold summarizes blogs on MOOCs and mathematical patterns:
"Liftoff: MOOC planning – Part 7," by Keith Devlin, MOOCTalk, 21 September 2012.
While most universities boast no more than 50,000 students, Keith Devlin's class--Introduction to Mathematical Thinking--has almost 60,000 students enrolled. Even if only one-tenth of the students
complete the course, it's still a massive number of people. This huge enrollment is made possible by the format of the MOOC (Massively Open Online Course). There are 11 math MOOCs currently being
offered on Coursera (the site hosting Devlin's course). These courses are free, create an opportunity for people to learn from professors at prestigious universities, and are becoming interesting to
universities, especially for the purpose of teaching remedial courses. Managing such a massive course seems mind-boggling, but Devlin has deputized 900 "Teaching Assistants" and created discussion
boards to facilitate students' learning. Devlin chronicles his experiences, including his "mistakes," on his blog mooctalk.
"Me, Myself, and Math," by Steven Strogatz. Opinionator Blog, New York Times, 24 September 2012.
Whether you are inspecting your fingerprints, your roster of friendships, or the proportions of your body, Steven Strogatz wants you to know that mathematical patterns are key to explaining your
observations. In his series, "Me, Myself, and Math," he has so far explored the topology of the fingerprint using index theory, the idea that your friends have more friends than you do, and most
recently, the "divine" proportion phi (the golden ratio), which is purported to be the ratio of a man's height to the distance from the floor to his navel. After divulging his own measurements as
being non-golden, Strogatz debunks the ubiquitousness of phi while still highlighting its geometric beauty. Prompting over a hundred comments, his blog entry seems to have caught the attention of
many readers (even if many of them wrote in only to disagree with his categorization of phi as the second most popular number). In addition to being well-written, each piece includes an annotated
list of references (both online and hardcopy) for those whose curiosity was roused. See all posts by Strogatz.
--- Brie Finegold
Return to Top
"To the Selfish Go the Spoils?" by Tom Bartlett. The Chronicle of Higher Education, 28 September 2012, page A3.
This charming piece describes a very new result on that old chestnut, the Prisoner's Dilemma, a two-player game dating to 1950 which is a staple of game theory, and has long been used to study
cooperation in an evolutionary context. The new result comes to us from the computational biologist William H. Press--the president of the American Association for the Advancement of Science, as well
as a professor at the University of Texas at Austin--and the physicist Freeman Dyson--a scientist with a wide-ranging intellect, most famous for his work on quantum electrodynamics. The pair of
former wunderkind, as Bartlett amusingly describes them, published what appears to have been a weekend's worth of work in May's Proceedings of the National Academy of Sciences, and since then their
paper, "Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent," has been met with a multitude of proclamations about the demise of cooperation, which this piece
falls a little short of debunking.
While the Prisoner's Dilemma was originally proposed by mathematicians Merrill Flood and Melvin Dresher of the RAND corporation in 1950, its name comes from mathematician Albert Tucker's formulation
involving two criminals. These criminals are arrested, but the police do not have enough information to convict them for their crime. As a result, if neither man confesses, both will be sentenced to
a month in jail on a minor charge. If one man testifies against his partner, while the partner remains silent (and so "cooperates"), the betrayer will be granted immunity and go free while his fellow
perpetrator gets a one-year sentence. Finally, if both men testify, each will receive a three-month sentence. If we assume each man wants to minimize his own jail time, his only option is to rat out
his colleague; whether his partner betrays him or cooperates with him, he will receive less jail time if he himself defects. The inevitable outcome is that the two criminals rat on each other, and
get three months in jail--more jail time than if they had cooperated--and that rational self-interest yields worse outcomes for both players than altruism (something only an economist could find surprising).
The link between the Prisoner's Dilemma and the evolution of cooperation was made by Robert Axelrod in his 1984 book, called, ahem, The Evolution of Cooperation. Axelrod organized a tournament in
which players competed at the iterated Prisoner's Dilemma--a more realistic game in which two players play a large number of rounds of Prisoner's Dilemma, and can base their decision to cooperate or
defect on their opponent's history. The contestants in the tournament were not criminals, or even mathematicians, but computer programs. The one that won (or is it the other way around?) was designed
by mathematician Anatol Rapoport. Called Tit-for-Tat, this simple program starts out cooperating, and then copies its opponent's previous move--rewarding cooperation and punishing defection. Axelrod
went on to provide rigorous proofs of Tit-for-Tat's winning ways. For example, a population of Tit-for-Tatters is stable against single invaders, and groups of invaders, with a different strategy.
Other strategies, such as the self-explanatory Always Defect, are stable against single invaders, but not groups. And a group of Tit-for-Tatters, cooperating with each other, can invade a population
of Always Defectors--leading to the dominance of altruistic cooperation over rational self-interest in this evolutionary context.
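For the curious, the iterated game and Tit-for-Tat fit in a few lines of Python. This sketch uses the conventional point payoffs (3 for mutual cooperation, 5 and 0 for defection against a cooperator, 1 for mutual defection) rather than the jail-time framing above:

    # (row score, column score) for each pair of moves; 'C' = cooperate, 'D' = defect.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_history):
        """Cooperate first, then copy the opponent's previous move."""
        return opponent_history[-1] if opponent_history else 'C'

    def always_defect(opponent_history):
        return 'D'

    def play(p1, p2, rounds=100):
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(h2), p2(h1)
            r1, r2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2)
            s1 += r1; s2 += r2
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
    print(play(tit_for_tat, always_defect))  # (99, 104): defection wins, barely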
Press and Dyson's result introduces a new family of strategies for iterated Prisoner's Dilemma which are capable of beating every other strategy--including both cooperative and greedy strategies.
Using some elementary linear algebra, the two show that there exists a strategy, followed by one player, which either fixes a linear combination of the long-run scores of the players, or the ratio of
the players' scores. As a result, strategies can be derived, using simple algebra, to fix your opponent's average score, or to fix the ratio between your score and your opponent's. Thus, these
strategies--called Zero Determinant strategies--will win against any opponent in long enough games. The rub, as explained in an excellent blog and a new paper on Zero Determinant strategies, is that
these strategies don't win in the same way Tit-for-Tat wins. Populations of Zero Determinators (or maybe Zero-d Terminators?) can be invaded by other strategies, and in the long run, are bound to
evolve into less coercive strategies. And in tournaments, winning Zero Determinant strategies do so at the cost of earning very few points--just like Always Defectors. In other words, the tricks you
should pick depend on the kinds of spoils you're after. If you prize winning the competition above all else, you might go for a Zero-d strategy. If you'd rather have a smaller piece of a bigger pie,
though, you might want to pick a more cooperative strategy. [Note: The online version of this piece is titled, "To the Trickster Go the Spoils."]
--- Ben Polletta
Return to Top
"Retrospective: R. Duncan Luce (1925-2012)," by James L. McLelland. Science, 28 September 2012, page 1619.
In this retrospective, Stanford University professor of psychology James McClelland describes the pioneering career of recently deceased Robert Duncan Luce, "a mathematician who sought to provide
axiomatic formulations for the social sciences." After earning his PhD in mathematics from MIT in 1950, Luce's "first publication applied ideas from [abstract algebra] to provide a mathematical
definition of a 'clique' within a social network, and explored using a matrix to capture connections among individuals that might also be represented in a group," a method that "has become standard
in computer science." At the same time, Luce addressed the problem of choice: in his 1959 book Individual Choice Behavior: A Theoretical Analysis, Luce "provided the foundation for a vast range of
investigations in psychology and economics." From the 1960s to the present, "Luce's focus shifted to fundamental questions of measurement, with a particular emphasis on measuring psychological
quantities such as value or loudness...Luce sought to establish fundamental principles relevant to the measurement of these psychological quantities."
Luce's "many substantive, institutional, and person contributions to the mathematical social sciences...were recognized early in his career," notes McClelland, "with election to the National Academy
of Sciences in 1972, and later at the highest level, with the U.S. National Medal of Science in 2003."
--- Claudia Clark
Return to Top
"Unlocking ponytail's secrets cost a bundle," by Jordan Day. Cambridge News, 24 September 2012.
The 2012 Ig Nobel Award Ceremony again recognized the contribution of mathematics to esoteric scientific discoveries. This year, Joseph Keller, Raymond Goldstein, Patrick Warren, and Robin Ball
won the Ig Nobel Prize in Physics, "for calculating the complex mathematics that controls the shape and movement of a human ponytail." The research was undertaken at the expense of the company Unilever, maker of hair care products. The team "worked out a mathematical equation that described the collective properties of a bundle." The original research paper, "Shape of a Ponytail and the
Statistical Physics of Hair Fiber Bundles," was published in Physical Review Letters (Vol. 108, no. 7, 2012), and covered in February on National Public Radio (summarized here on Math Digest). The Ig
Nobel Awards "honor achievements that first make people laugh, and then make them think. The prizes are intended to celebrate the unusual, honor the imaginative — and spur people's interest in
science, medicine, and technology," and are widely covered in the press.
--- Annette Emerson
Return to Top
"When Networks Network," by Elizabeth Quill. Science News, 22 September 2012.
Networks are complicated enough but mathematicians and others are now investigating networks of networks to understand systems of interconnected systems ranging from cells to the Internet. Quill
talked to several researchers in different fields and gives many examples of networks that are connected to one another. Some researchers are interested in what causes systems to have catastrophic
failure, such as what happened to the world economy in 2008, while others study the dynamics that keep networks, such as power grids, functioning. Alessandro Vespignani, a physicist and computational
scientist at Northeastern University, says that it's a new field for which "We need to define new mathematical tools...We need to gather a lot of data. We need to do the exploratory work to really
chart the territory." [Hear Jon Kleinberg (Cornell University) talk about his research in network theory in the Mathematical Moment podcast "Finding Friends."]
--- Mike Breen
Return to Top
"The SciFri Book Club Visits 'Flatland'," an interview by Ira Flatow with guest Ian Stewart. Science Friday Book Club, National Public Radio, 21 September 2012.
The Science Friday Book Club meets to discuss Edwin Abbott's classic novel published in 1884, Flatland: A Romance of Many Dimensions. Ian Stewart, emeritus professor of mathematics at the University
of Warwick in England, joins host Ira Flatow, multimedia editor Flora Lichtman and senior producer Annette Heist. Ian Stewart is also the author of the book Flatterland: Like Flatland, Only More So
and The Annotated Flatland: A Romance of Many Dimensions. Flatland plays around with the concept of dimensions and tells the story of a two-dimensional world where women are straight lines and men
are polygons. The creatures of this two-dimensional world are faced with the daunting task of trying to comprehend the third dimension. Ian Stewart sums up the message of the book by saying "I think
it is the message of realizing that your own particular parochial little bit of universe is not necessarily everything there is, and that you really should keep an open mind, but not so open that
your brains fall out."
---Baldur Hedinsson
Return to Top
"Mathematics of Opinion Formation Reveals How Moderation Trumps Extremism," Technology Review, 20 September 2012.
In an era when the importance of extremist viewpoints is increasing, Steve Strogatz created a model of zealots and moderates to study how the population of each group changes over time. Strogatz’s
initial model was based on encounters between a "listener" and a "speaker," chosen at random from a group of moderates or two different viewpoint camps, with the condition that a speaker from one
viewpoint camp would bring a moderate listener over to his own side or turn an opposing viewpoint listener into a moderate, while a moderate speaker would have no effect. The model demonstrated that
if one group of zealots constitutes a sufficiently small percentage of the overall population, then the reigning viewpoint will prevail; when the percentage of zealots reaches a certain threshold,
however, almost the entire population is converted to the zealot viewpoint. In an effort to increase the size of the moderate camp, Strogatz and his collaborators tried a variety of scenarios and
found that only allowing moderates to evangelize was successful. The research is in the paper "Encouraging moderation: Clues from a simple model of ideological conflict," Seth A. Marvel, et al.
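The encounter rules as described are simple enough to simulate directly. The sketch below implements only that naive reading (the population sizes and iteration count are arbitrary choices; the published model is more careful, in particular about which believers are committed):

    import random

    def step(pop):
        """One random speaker/listener encounter."""
        speaker, listener = random.sample(range(len(pop)), 2)
        s, l = pop[speaker], pop[listener]
        if s in ('A', 'B') and l == 'M':
            pop[listener] = s        # a partisan speaker converts a moderate
        elif s in ('A', 'B') and l not in (s, 'M'):
            pop[listener] = 'M'      # an opposing listener becomes a moderate
        # a moderate speaker has no effect

    pop = ['A'] * 100 + ['B'] * 850 + ['M'] * 50   # a 10% minority of A partisans
    for _ in range(200000):
        step(pop)
    print({v: pop.count(v) for v in 'ABM'})        # final census of each camp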
--- Lisa DeKeukelaere
Return to Top
"UNC-CH mathematician studies how little hearts develop," by Kerstin Nordstrom. News & Observer, 16 September 2012.
In this article, writer Kerstin Nordstrom describes some of the work of Laura Miller, a University of North Carolina at Chapel Hill professor of mathematics and biology "who studies the flow of
fluids in (or around) living creatures. One of Miller's projects looks at blood flow in the embryonic heart and lays the groundwork for people to surgically correct heart defects, possibly in utero."
Blood flow in an embryo's heart tube appears to directly influence the development of heart chambers and heart valves in the heart. "It is…thought that many congenital heart diseases may begin to
appear at this critical state," Miller notes. Her group has studied this problem by taking measurements in actual animals, as well as using mathematical and physical models. "In her lab's
simulations, they can easily tweak different conditions and see what effect it has on the flow rate and pattern. They can verify their simulations by looking at experimental systems—not actual
hearts, but Plexiglas models that capture the same physics," Nordstrom writes.
See more about Miller’s work with fluid dynamics of the heart. Image courtesy of the UNC Mathematical Physiology Lab.
--- Claudia Clark
"Planes write out pi over the skies of San Francisco Bay Area," by Christopher MacManus. CNET, 13 September 2012.
Is it a bird? Is it a plane? No, it is the mathematical constant pi (3.14159...) in the sky. On September 12th, five aircraft used dot-matrix skywriting technology to write out a thousand digits of pi
along a 100-mile loop, with each numeral standing a quarter-mile tall. This endeavor was the brainchild of California artist Ishky as a part of the 2012 Zero1 Biennial, an event that
celebrates the convergence of contemporary art and technology. On a Facebook page devoted to the Pi in the Sky affair, the event is said to "explore the boundaries of scale, public space, permanence,
and the relationship between Earth and the physical universe. The fleeting and perpetually incomplete vision of pi's never-ending random string unwinding in the sky will create a gentle provocation
to the Bay Area's seven million inhabitants."
Photo ISHKY Studios © 2012.
--- Baldur Hedinsson
"Predicting scientific success," by Daniel E. Acuna, et al. Nature, 13 September 2012, pages 201-202.
In 2005, University of California San Diego physics professor Jorge Hirsch invented an index, generally known as the h-index, for quantifying a scientist's publication productivity. Noting that the
h-index and similar indices only capture "past accomplishments, not future achievements," the authors of this article describe their work developing a formula to predict a scientist’s future h-index,
based on the information available in a typical CV. They began with a large initial sampling of neuroscientists, Drosophila researchers, and evolutionary scientists. The application of several
restrictions to this group reduced the sampling to "3,085 neuroscientists, 57 Drosophila researchers and 151 evolutionary scientists for whom we constructed a history of publication, citation and
funding." Then, "for each year since the first article published by a given scientist, we used the features that were available at the time to forecast their h-index." These features included the
number of articles written, the current h-index, the number of years since publishing the first article, the number of distinct journals published in, and the number of articles in prestigious
neuroscience journals. The resulting formulas, for neuroscientists in particular, yielded "respectable" predictions, and showed that while the importance of the current h-index in predicting future
h-indices decreased over time, "the number of articles written, the diversity of publications in distinct journals and the number of articles published in five prestigious journals all became
increasingly influential over time."
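Since the formula itself is not reproduced here, the following is only a schematic of the approach: a linear fit of a future h-index on the named CV features, using synthetic data (all numbers below are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
# Columns: articles written, current h-index, years since first article,
# distinct journals, articles in prestigious journals (synthetic counts).
X = rng.poisson(lam=(20, 10, 8, 6, 2), size=(300, 5)).astype(float)
true_w = np.array([0.05, 0.5, 0.2, 0.1, 0.3])      # made-up weights
y = X @ true_w + rng.normal(0.0, 1.0, size=300)    # stand-in "future h-index"

A = np.c_[X, np.ones(len(X))]                      # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(w, 2))                              # recovered weights + intercept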
Learn more about the authors' methodology.
--- Claudia Clark
"Proof claimed for deep connection between primes," by Philip Ball. Nature News, 10 September 2012.
On August 30th, Shinichi Mochizuki uploaded a series of four mathematics papers totaling about 500 pages to his website, and catapulted himself to fame. With the final paper, Mochizuki--a
highly-acclaimed number theorist from Kyoto University in Japan--claimed to have proven the abc conjecture, which number theorist Dorian Goldfeld of Columbia University has
called "the most important unsolved problem in Diophantine analysis". The abc is a kind of grand unified theory of Diophantine curves: "The remarkable thing about the abc conjecture is that it
provides a way of reformulating an infinite number of Diophantine problems," says Goldfeld, "and, if it is true, of solving them." Proposed independently in the mid-80s by David Masser of the
University of Basel and Joseph Oesterle of Marie Curie University, the abc conjecture describes a kind of balance or tension between addition and multiplication, formalizing the observation that when
two numbers a and b are divisible by large powers of small primes, a + b tends to be divisible by small powers of large primes. The abc implies--in a few lines--the proofs of many difficult theorems
and outstanding conjectures in Diophantine equations-- including Fermat's Last Theorem. "No wonder mathematicians are striving so hard to prove it," said Goldfeld in 1996, "like rock climbers at the
base of a sheer cliff, exploring line after line of minute cracks in the rock face..." The possibility of finally reaching this summit has led to reports in countless media outlets, and serious
attention from luminaries like Terence Tao and Minhyong Kim. But Mochizuki has done much more than prove the abc conjecture--instead of toiling alongside his fellow climbers, Mochizuki seems to have
built an airplane to take him to the top of the abc cliff and far beyond. Mochizuki's papers (bottom of the page) are the fruit of at least ten years of intense focus. They develop a theory Mochizuki
calls "inter-universal Teichmuller theory." "He’s taking what we know about numbers and addition and multiplication and really taking them apart," says Kim in the Times. "He creates a whole new
language--you could say a whole new universe of mathematical objects--in order to say something about the usual universe." Perhaps the work is more like a UFO than an airplane: mathematician Jordan
Ellenberg of the University of Wisconsin, Madison, has described reading Mochizuki's magnum opus as "a bit like ... reading a paper from the future, or from outer space." The abstracts of the first
and fourth paper are worth reading, for gems like this one from the close of the last abstract: "Finally, we examine the foundational/set-theoretic issues surrounding [the new theory] by introducing
and studying the basic properties of the notion of a 'species,' which may be thought of as a sort of formalization, via set-theoretic formulas, of the intuitive notion of a 'type of mathematical object.'
The four papers, which heavily reference Mochizuki's own prior work developing theories including "p-adic Teichmuller theory," "Abstract anabelian geometry," and "The Hodge-Arakelov theory of
elliptic curves," are out of the realm of understanding of even expert number theorists. "Most of the people who say positive things about it cannot say what are the ingredients of the proof," says
Nets Katz of Indiana University. But while it's too soon to draw any conclusions--and may be for several years--there is reason to be optimistic, as Mochizuki "has a long track record," says
Ellenberg, "and he has a long track record of being original." University of Tokyo professor Yujiro Kawamata concurs, calling Mochizuki a "researcher who has built unique theories on his own and uses
singular terminologies in his often voluminous papers."
See also:
"An ABC proof too tough even for mathematicians," by Kevin Hartnett. The Boston Globe, 4 November 2012;
"Kyoto professor offers elusive proof of number theory's abc conjecture," Japan Times, 21 September 2012;
"A Possible Breakthrough in Explaining a Mathematical Riddle," by Kenneth Chang. New York Times, 17 September 2012;
"ABC Proof Could Be Mathematical Jackpot," by Barry Cipra. Science, 12 September 2012;
"ABC Conjecture: Mathematician Shinichi Mochizuki Claims Proof Of Important Number Theory Problem," by Natalie Wolchover. Huffington Post, 12 September 2012; and
"Ein Problem mit den Grundrechenarten (A problem with the basic arithemetic operations)", by Ulf von Rauchhaupt. Frankfurter Allgemeine Sonntagszeitung, 23 September 2012.
--- Ben Polletta
"Making Data Work," by Tom Siegfried. Science News, 8 September 2012, page 26.
Traditional statistics has a measure, a "p value," for the likelihood that you will observe a given outcome (for example, obtaining 4 heads in 10 coin tosses), but there is no absolute
scale for measuring how well observed "evidence" (tossing 4 heads) correlates with a hypothesis (that the coin is unfair). This is where statistical geneticist Veronica Vieland’s work comes in.
Starting with the Kelvin temperature scale as a base, Vieland drew parallels between molecules in a gas, which are responsible for temperature, and units of data from repeated trials, which are the
"evidence" to be weighed, in order to create a so-called equation of state for evidence. Vieland’s equation is designed such that the addition of new information has a consistent effect on the
overall strength of the evidence, and it allows for objective determination of how strongly a set of evidence supports different hypotheses. Vieland notes that her work is only a first step in
developing an absolute measure of evidence strength, but it shows that such a measure can be found.
--- Lisa DeKeukelaere
"Maths demystifier: Q&A Glen Whitney," an interview by Jascha Hoffman. Nature, 6 September 2012, page 32.
This short interview with Glen Whitney manages to be a sweet gloss on New York's soon-to-be-opened Museum of Math (MoMath), which on December 15th will become the only North American museum dedicated exclusively to mathematics. Whitney, MoMath's brainfather, describes his own career in mathematics by saying "I had a voracious appetite for math in high school but ... no illusion that I was going
to be one of the top researchers in the country." Instead, he found a niche working in statistical trading at Renaissance Technologies. He left that job four years ago wanting to do something with a
broader impact. Whitney hopes that by filling the vacuum in modern mathematical content at existing museums--one of the rare existing exhibits about math has inhabited the New York Hall of Science
since 1960--MoMath will combat a widespread prejudice against mathematics. But MoMath will also be different from other mathematics outreach efforts, having a focus on physical interaction and
"whole-body involvement," a broad perspective on mathematics, and striving to produce in visitors the thrill of discovery--the "Aha!" moments that make mathematics so exciting. These factors also
distinguish Whitney's mathematical walking tours, in which he talks about the geometry underlying architecture and natural patterns, the mathematics of the subway and the algorithms that control
traffic lights. These walking tours are as open-ended as mathematics itself: says Whitney, "If you give me a route, I'll make a tour. There is maths everywhere." (Photo: Rendering of museum's upper
level, courtesy of the Museum of Mathematics.)
--- Ben Polletta
"Slicing a Cone for Art and Science," by Daniel S. Silver. American Scientist, September-October 2012, pages 408-415.
Albrecht Dürer’s contributions to the world include not only famous prints, but also a famous book for both artists and mathematicians: The Painter’s Manual. With detailed yet accessible descriptions
of geometric concepts first written by Euclid and Ptolemy, and elaborations on the construction of parabolas and ellipses, Dürer’s book represented his own study of the math behind representations of
the world, undertaken in progressive scientific times. This article follows Dürer’s life from his humble beginnings as a son of a goldsmith, through his friendship with a wealthy purchasing agent who
provided access to rare books, his fascination with a geometrically based drawing of male and female figures, and his assessment that German artists could rise to the level of their Italian peers
only through study of geometric principles. The article also provides an overview of the historical development of conical sections, the creation of ellipses, parabolas, and hyperbolas by
intersecting a plane with a cone. Image from the Albrecht Dürer Wikipedia page.
--- Lisa DeKeukelaere
Understand and use cell references
One of the key things you’ll calculate in Excel are values in cells, the boxes you see in the grid of an Excel worksheet. Each cell is identified by its reference—the column letter and row number
that intersect at the cell's location. For example, a cell in column D and row 5 is cell D5. When you use cell references in a formula, Excel calculates the answer automatically using the numbers in
the referenced cells.
Inside this course:
Cell references (1:44)
Watch this video to learn the basics. When you use cell references in a formula, Excel calculates the answer using the numbers in the referenced cells. When you change the value in a cell, the
formula calculates the new result automatically.
Copying formulas (3:13)
We just saw how to create a formula that adds two cells. Now let’s copy the formula down the column so it adds the other pairs of cells.
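For instance, here is a sketch of what that copy produces: each relative reference shifts with its row, so =A1+B1 in C1 becomes =A2+B2 in C2, and so on. (Written with the third-party openpyxl Python library purely for illustration; the workbook name is made up.)

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for row, (a, b) in enumerate([(1, 2), (3, 4), (5, 6)], start=1):
    ws[f"A{row}"] = a
    ws[f"B{row}"] = b
    ws[f"C{row}"] = f"=A{row}+B{row}"   # C1 gets =A1+B1, C2 gets =A2+B2, ...
wb.save("cell-references-demo.xlsx")     # hypothetical output file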
Course summary
A brief reminder of the key points in this course.
More courses available at Microsoft Office Training.
HCSSiM Workshop day 5
math education
> HCSSiM Workshop day 5
HCSSiM Workshop day 5
July 7, 2012
A continuation of this, where I take notes on my workshop at HCSSiM.
Modular arithmetic
We examined finite sets with addition laws and asked whether they were the “same”, which for now meant their addition tables looked the same except for relabeling. Of course we’d need the two sets to have the same size, so we compared $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/4\mathbb{Z}.$ We decided they weren’t the same, but when we did it for $\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$ and $\mathbb{Z}/12\mathbb{Z},$ we decided those were. We eventually decided it worked the second time because the moduli are relatively prime.
We essentially finished by proving the base case of the Chinese Remainder Theorem for two moduli, which for some ridiculous reason we are calling the Llama Remainder Theorem. Actually the reason is
that one of the junior staff (Josh Vekhter) declared my lecture to be insufficiently silly (he designated himself the “Chief Silliness Officer”) and he came up with a story about a llama herder named
Lou who kept track of his llamas by putting them first in groups of n and then in groups of m and in both cases only keeping track of the remaining left-over llamas. And then he died and his sons
were in a fight over whether someone stole some llamas and someone had to be called in to arbitrate. Plus the answer is only well-defined up to a multiple of mn, but we decided that someone in town
would have noticed if an extra mn llamas had been stolen.
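A small sketch of the two-modulus (Llama) Remainder Theorem in Python: given the leftovers mod n and mod m, it recovers a herd size, well-defined only up to a multiple of mn (pow(m, -1, n) needs Python 3.8+):

from math import gcd

def llama_remainder(r_n, n, r_m, m):
    """Smallest non-negative count with count % n == r_n and count % m == r_m.
    Requires gcd(n, m) == 1; the answer is only defined up to a multiple of n*m."""
    assert gcd(n, m) == 1
    inv = pow(m, -1, n)                       # inverse of m modulo n
    return (r_m + m * ((r_n - r_m) * inv % n)) % (n * m)

print(llama_remainder(2, 3, 3, 4))            # 11: 11 % 3 == 2 and 11 % 4 == 3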
After briefly discussing finite sets and their sizes, we defined two sets to have the same cardinality if there’s a bijection from one to the other. We showed this condition is reflexive, symmetric,
and transitive.
Then we stopped over at the Hilbert Hotel, where a rather silly and grumpy hotel manager at first refused to let us into his hotel even though he had infinitely many rooms, because he said all his rooms were full. At first, when we only wanted to add ourselves, a finite number of people, we simply told everyone to move down a few rooms and all was well. Then it got more complicated when an infinite bus of people wanted to get into the hotel, but we solved that as well by sending every guest to the room number double their current one. Then finally we figured out how to accommodate an infinite number of infinite buses.
We decided we’d proved that $\mathbb{N} \times \mathbb{N}$ has the same cardinality as $\mathbb{N}$ itself. We generalized to $\mathbb{Q}$ having the same cardinality as $\mathbb{N},$ and we decided
to call sets like that “lodgeable,” since they were reminiscent of Hilbert’s Hotel.
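One concrete bijection behind the $\mathbb{N} \times \mathbb{N}$ fact is the Cantor pairing function, which numbers the grid along diagonals; a quick Python sketch with its inverse:

def pair(i, j):
    """Cantor pairing: a bijection from N x N to N (diagonal enumeration)."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(k):
    """Inverse of pair(): recover (i, j) from k."""
    w = int(((8 * k + 1) ** 0.5 - 1) // 2)    # index of the diagonal containing k
    j = k - w * (w + 1) // 2
    return w - j, j

assert all(unpair(pair(i, j)) == (i, j) for i in range(100) for j in range(100))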
We ended by asking whether the set of real numbers is lodgeable.
Complex Geometry
Here’s a motivating problem: you have an irregular hexagon inside a circle, where the alternate sides are the length of the radius. Prove the midpoints of those sides forms an equilateral triangle.
It will turn out that the circle is irrelevant, and that it’s easier to prove this if you actually Circle is entirely prove something harder.
We then explored the idea of size in the complex plane, and the operation of conjugation as reflection through the real line. Using this incredibly simple idea, plus the triangle inequality, we can
prove that the polynomial
$3z^{17} + 2iz^{12} - (1+3i)z^{10} + .017z^{5} - 17$
has no roots inside the unit circle.
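In case it helps to see that estimate spelled out: for $|z| \le 1$ each non-constant term is bounded by the size of its coefficient, so $|3z^{17} + 2iz^{12} - (1+3i)z^{10} + .017z^{5}| \le 3 + 2 + \sqrt{10} + .017 < 9$, and by the reverse triangle inequality the polynomial has modulus at least $17 - 9 > 0$ on the closed unit disk, hence no roots there.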
Going back to the motivating problem. Take three arbitrary points A, B, C on the complex plane (i.e. three complex numbers), and a fourth point we will assume is at the origin. Now rotate those three
points 60 degrees counterclockwise with respect to the origin – this is equivalent to multiplying the original complex numbers by $e^{\frac{i \pi}{3}}.$ Call these new points A’, B’, C’. Show that
the midpoints of the three lines AB’, BC’, and CA’ form an equilateral triangle, and that this result also is sufficient to prove the motivating problem.
1. July 7, 2012 at 10:07 am |
If you haven’t run into it, Kevin Wald’s filk “Banned from Aleph”, riffing on the Cantor Hotel theme, is at
Tangent to Graph, Approximation
September 27th 2009, 11:30 AM #1
Aug 2009
Tangent to Graph, Approximation
I just wanted to check my answer and to see if I'm doing this right as well as get help on the b.
Let f be the function with f(1) = 4 such that for all points (x,y) on the graph of f the slope is [3x^2 + 1]/2y
A) Write an EQ for line tangent to graph @ x=1 and use it to approximate f(1.2)
B) f(1.4) may also be approximated using the line you computed in A, which is a better approximation f(1.2) or f(1.4) ?
A) y - 4 = ((3(1)+1)/2(4))(x-1)
y - 4 = 1/2 (x-1)
2y - 8 = x - 1
2y - x = 7
Substituing 1.2 for x, 2y - 1.2 = 7 so then f(1.2) = 4.1
B) I think f(1.2) would be a better approximation since it's closer to the initial x, 1
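For what it's worth, this particular ODE is separable: $2y\,y' = 3x^2 + 1$ integrates to $y^2 = x^3 + x + 14$ using f(1) = 4, so both approximations can be checked directly (a quick Python sketch):

def exact(x):
    return (x**3 + x + 14) ** 0.5    # from y^2 = x^3 + x + 14, y(1) = 4

def tangent(x):
    return 4 + 0.5 * (x - 1)         # the tangent line from part A

for x in (1.2, 1.4):
    print(x, tangent(x), exact(x), abs(tangent(x) - exact(x)))
# errors: about 0.014 at x = 1.2 versus about 0.060 at x = 1.4,
# so f(1.2) is indeed the better approximation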
nag_ode_ivp_adams_roots (d02qfc)
NAG Library Function Document
nag_ode_ivp_adams_roots (d02qfc)
1 Purpose
nag_ode_ivp_adams_roots (d02qfc) is a function for integrating a non-stiff system of first order ordinary differential equations using a variable-order variable-step Adams method. A root-finding
facility is provided.
2 Specification
#include <nag.h>
#include <nagd02.h>
void nag_ode_ivp_adams_roots (Integer neqf,
void (*fcn)(Integer neqf, double x, const double y[], double f[], Nag_User *comm),
double *t, double y[], double tout,
double (*g)(Integer neqf, double x, const double y[], const double yp[], Integer k, Nag_User *comm),
Nag_User *comm, Nag_ODE_Adams *opt, NagError *fail)
3 Description
Given the initial values $x, y_1, y_2, \ldots, y_{neqf}$, the function integrates a non-stiff system of first order ordinary differential equations of the type
$y_i' = f_i(x, y_1, y_2, \ldots, y_{neqf}), \quad i = 1, 2, \ldots, neqf,$
from $x = t$ towards $x = tout$, using a variable-order variable-step Adams method. The system is defined by fcn, which evaluates $f_i$ in terms of $x$ and $y_1, y_2, \ldots, y_{neqf}$, and $y_1, y_2, \ldots, y_{neqf}$ are supplied at $x = t$. The function is capable of finding roots (values of $x$) of prescribed event functions of the form
$g_j(x, y, y') = 0, \quad j = 1, 2, \ldots, neqg.$
See nag_ode_ivp_adams_setup (d02qwc) for the specification of neqg. Each $g_j$ is considered to be independent of the others so that roots are sought of each $g_j$ individually. The root reported by the function will be the first root encountered by any $g_j$. Two techniques for determining the presence of a root in an integration step are available: the sophisticated method described in Watts (1985) and a simplified method whereby sign changes in each $g_j$ are looked for at the ends of each integration step. The event functions are defined by g, which evaluates $g_j$ in terms of $x, y_1, \ldots, y_{neqf}$ and $y_1', \ldots, y_{neqf}'$. In one-step mode the function returns an approximation to the solution at each integration point. In interval mode this value is returned at the end of the integration range. If a root is detected this approximation is given at the root. You need to select the mode of operation, the error control, the root-finding technique and various integration inputs with a prior call of the setup function nag_ode_ivp_adams_setup (d02qwc).
For a description of the practical implementation of an Adams formula see Shampine and Gordon (1975) and Shampine and Watts (1979).
4 References
Shampine L F and Gordon M K (1975) Computer Solution of Ordinary Differential Equations – The Initial Value Problem W H Freeman & Co., San Francisco
Shampine L F and Watts H A (1979) DEPAC – design of a user oriented package of ODE solvers Report SAND79-2374 Sandia National Laboratory
Watts H A (1985) RDEAM – An Adams ODE code with root solving capability Report SAND85-1595 Sandia National Laboratory
5 Arguments
1: neqf – Integer Input
2: fcn – function, supplied by the user External Function
3: t – double * Input/Output
4: y[neqf] – double Input/Output
5: tout – double Input
6: g – function, supplied by the user External Function
7: comm – Nag_User *
8: opt – Nag_ODE_Adams *
9: fail – NagError * Input/Output
6 Error Indicators and Warnings
The value of tout indicates a change in the integration direction. This is not permitted on a continuation call.
The maximum number of steps has been attempted. If integration is to be continued then the function may be called again, and further steps will be attempted (see nag_ode_ivp_adams_setup (d02qwc) for details).
The value of neqf supplied is not the same as that given to the setup function nag_ode_ivp_adams_setup (d02qwc).
Root finding has been requested by setting neqg > 0 in the call to nag_ode_ivp_adams_setup (d02qwc), but argument g is a null function.
The setup function nag_ode_ivp_adams_setup (d02qwc) has not been called.
The error tolerances are too stringent: atol and rtol should be scaled up by the returned factor and the integration function re-entered (see Section 8).
The call to the setup function nag_ode_ivp_adams_setup (d02qwc) produced an error.
A change in sign of an event function has been detected but the root-finding process appears to have converged to a singular point of $g$ rather than a root. Integration may be continued by calling the function again.
The problem appears to be stiff. (See the d02 Chapter Introduction for a discussion of the term 'stiff'.) Although it is inefficient to use this integrator to solve stiff problems, integration may be continued by calling the function again.
The value of an input argument has been changed from its previous value. This is not permitted on a continuation call.
On entry, crit was set Nag_TRUE in the setup call and integration cannot be attempted beyond tcrit.
An error weight has become zero during the integration, see d02qwc document; $atol[value]$ was set to 0.0 but $y[value]$ is now 0.0. Integration successful as far as $t=value$.
The value of the array index is returned in $fail.errnum$.
7 Accuracy
The accuracy of integration is determined by the arguments vectol, atol and rtol in a prior call to nag_ode_ivp_adams_setup (d02qwc). Note that only the local error at each step is controlled by these arguments. The error estimates obtained are not strict bounds but are usually reliable over one step. Over a number of steps the overall error may accumulate in various ways, depending on the properties of the differential equation system. The code is designed so that a reduction in the tolerances should lead to an approximately proportional reduction in the error. You are strongly recommended to call nag_ode_ivp_adams_roots (d02qfc) with more than one set of tolerances and to compare the results obtained to estimate their accuracy.
The accuracy obtained depends on the type of error test used. If the solution oscillates around zero a relative error test should be avoided, whereas if the solution is exponentially increasing an absolute error test should not be used. If different accuracies are required for different components of the solution then a component-wise error test should be used. For a description of the error test see the specifications of the arguments vectol, atol and rtol in the function document for nag_ode_ivp_adams_setup (d02qwc).
The accuracy of any roots located will depend on the accuracy of integration and may also be restricted by the numerical properties of $g(x, y, y')$. When evaluating $g$ you should try to write the code so that unnecessary cancellation errors will be avoided.
8 Further Comments
If the function fails with $fail.code=NE_ODE_TOL$, then the combination of atol and rtol may be so small that a solution cannot be obtained, in which case the function should be called again using larger values for atol and/or rtol when calling the setup function nag_ode_ivp_adams_setup (d02qwc). If the accuracy requested is really needed then you should consider whether there is a more fundamental difficulty. For example:
(a) in the region of a singularity the solution components will usually be of a large magnitude. The function could be used in one-step mode to monitor the size of the solution with the aim of trapping the solution before the singularity. In any case numerical integration cannot be continued through a singularity, and analytical treatment may be necessary;
(b) for 'stiff' equations, where the solution contains rapidly decaying components, the function will require a very small step size to preserve stability. This will usually be exhibited by excessive computing time and sometimes an error exit with $fail.code=NE_ODE_TOL$, but usually an error exit with $fail.code=NE_MAX_STEP$ or NE_STIFF_PROBLEM. The Adams methods are not efficient in such cases. A high proportion of failed steps (see argument $opt→nfail$) may indicate stiffness but there may be other reasons for this phenomenon.
nag_ode_ivp_adams_roots (d02qfc) can be used for producing results at short intervals (for example, for graph plotting); you should set tcrit to the last output point required in a prior call to nag_ode_ivp_adams_setup (d02qwc) and then set tout appropriately for each output point in turn in the call to nag_ode_ivp_adams_roots (d02qfc).
The structure opt will contain pointers which have been allocated memory by calls to nag_ode_ivp_adams_setup (d02qwc). This allocated memory is then accessed by nag_ode_ivp_adams_roots (d02qfc) and, if required, nag_ode_ivp_adams_interp (d02qzc). When all calls to these functions have been completed, the function nag_ode_ivp_adams_free (d02qyc) may be called to free the memory allocated to the structure.
9 Example
We solve the equation
$y'' = -y, \quad y(0) = 0, \quad y'(0) = 1$
reposed as
$y_1' = y_2, \quad y_2' = -y_1$
over a range of $x$, with initial conditions $y_1 = 0.0$ and $y_2 = 1.0$, using vector error control (vectol) and computation of the solution at intervals across the range (tout). Also, we use nag_ode_ivp_adams_roots (d02qfc) to locate the positions where $y_1 = 0.0$ or where the first component has a turning point, that is $y_1' = 0.0$.
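For readers without the NAG Library, the same example can be sketched with SciPy's general-purpose solver, whose event mechanism plays the role of the $g_j$ root functions here (this is an analogue, not the NAG interface; the integration range 0 to 10 is assumed):

import numpy as np
from scipy.integrate import solve_ivp

def fcn(x, y):
    return [y[1], -y[0]]            # y1' = y2, y2' = -y1

def g1(x, y):
    return y[0]                     # root when y1 = 0

def g2(x, y):
    return y[1]                     # root when y1' = 0 (turning point)

sol = solve_ivp(fcn, (0.0, 10.0), [0.0, 1.0], events=(g1, g2),
                rtol=1e-10, atol=1e-10)
print("y1 = 0 near      :", sol.t_events[0])   # multiples of pi
print("turning points at:", sol.t_events[1])   # odd multiples of pi/2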
9.1 Program Text
9.2 Program Data
9.3 Program Results
Space Shuttle: Electric Boogaloo
A space shuttle launch is pretty expensive. Exactly how expensive depends on who you ask, but if you divide the yearly cost of the program by the number of launches you get something in the
neighborhood of half a billion dollars. That’s actually pretty trivial by federal budget standards, but it’s still not chump change. It’s a measure of just how difficult it is to get to space. Let’s
try to put that number in perspective by running a few numbers describing the cost of energy. Think of it as a Fermi problem.
An orbiting shuttle has potential energy by virtue of its height above the ground. It also has kinetic energy by virtue of its very considerable orbital speed. It's a one-line calculation to figure out how much once you look up some numbers.
Potential energy is m*g*h, which for typical shuttle takeoff weights and operating altitudes gives something like 2.13 x 10^11 J. The acceleration g due to gravity does vary with altitude, but at
shuttle altitudes the variation is small enough that we can overlook it for this rough of a calculation.
Kinetic energy is the usual mv^2/2. Plugging in some more fairly typical figures I come up with a kinetic energy of 3.27 x 10^12 J. The kinetic energy thus dominates the total energy, which, adding the two, is about 3.48 x 10^12 J.
Here’s the interesting comparison. Here in Texas, electricity costs about ten cents per kilowatt hour. How much would a space shuttle launch worth of energy cost at that rate? Well, there’s 3,600,000
J in a kilowatt hour, so doing the division we see that much energy costs about $90,750.
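The whole back-of-the-envelope fits in a few lines; the mass, altitude, and speed below are assumed round numbers for the sketch, not official shuttle figures:

m = 1.0e5     # orbiter-plus-payload mass in kg (assumed)
g = 9.8       # m/s^2
h = 3.0e5     # orbital altitude in m, roughly 300 km (assumed)
v = 7.8e3     # orbital speed in m/s (assumed)

pe = m * g * h            # potential energy, ~3e11 J
ke = 0.5 * m * v**2       # kinetic energy, ~3e12 J
kwh = (pe + ke) / 3.6e6   # 3.6 million joules per kilowatt-hour
print(f"{pe + ke:.2e} J = {kwh:,.0f} kWh = ${0.10 * kwh:,.0f} at $0.10/kWh")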
Obviously this is a grossly unfair comparison. Even simple technology like an incandescent light bulb turns only a very small fraction of its electricity into visible light. Launching a shuttle is not even remotely a matter of just turning electricity into altitude and speed. But it does suggest that there is some room for improvement. I think the ideal method would be a space elevator system, where you actually would be getting pretty close to turning electricity into an orbit. Unfortunately that's a long way away if indeed it ever becomes practical.
Hmm. Probably would have made a good quiz problem, given just the orbital altitude and separately deriving the orbital velocity from that. Unfortunately that chapter was a month ago. Maybe next time.
I have often wondered whether we could get better energy efficiency for strictly non-living payloads using a particle accelerator system. Since it’s magnetic, I’d expect a pretty efficient
electricity to acceleration ratio. I presume that putting all the acceleration up front would not only kill any astronauts but would squish their soft fleshy bodies flat… but perhaps a robotic
launch vehicle could survive? Is this system feasible?
2. #2 Uncle Al October 24, 2008
The kinetic energy of a lump of coal in low Earth orbit is its enthalpy of combustion. The choice launch platform is the (already flattened) top of Mt. Kilimanjaro elevation 15,000 feet and
3°4'33″S. (Cape Canaveral is elevation 9 feet and 28°29′20″N). NASA is an ass.
Tabulated heats of combustion for coals range from 2.4 – 3.2×10^7 J/kg. The all-purpose heat of combustion rule: 5 eV for every molecule of O2 used. This gives 4×10^7 J/kg for pure carbon
oxidized to CO2, and is consistent. For an object in low Earth orbit v^2 = Rg, and the kinetic energy/kilogram is Rg/2. With R = 6×10^6 m (from the center of mass) and g =10 m/s^2 that works out
to 3×10^7 J/kg. A rocket uses 100X as much energy to insert a kilogram of payload into low Earth orbit.
The reusable (two exceptions noted) Space Scuttle is at least 3X as expensive/boosted mass as a disposable Saturn V three-stage booster in constant dollars. How can that be? The vast majority of
the Space Scuttle’s payload is the Space Scuttle. Recovery and refurbishment costs are “off budget”.
Forget a space elevator. The minimum energy path upward is a spiral. If it conducts electricity it will Roman candle with the first geomagnetic storm (inductance of a half-turn coil). The fastest
elevator on Earth is 37.7 mph (Taipei 101).
3. #3 Zifnab October 24, 2008
So I guess that rules out a sky hook as well?
4. #4 Alex Besogonov October 24, 2008
No, it’s not (very) feasible. It’s not very hard to create a linear accelerator which can boost small payloads into LEO. The problem is acceleration, for linear accelerator with 10 km length
acceleration will be:
S = a * t^2/2
a = v_orb / t
S = v_orb * t /2
Substituting real values:
10km = 8 km/s * t /2
t~= 3 seconds
a ~= 3 km/s^2 ~= 300g.
That’s FAR too much even for most cargo payloads. You’ll need an accelerator with something like 1000km length to launch humans.
5. #5 CCPhysicist October 24, 2008
The method discussed in #1 and #4 was used in Jules Verne’s “From the Earth to the Moon”. They built a large, vertical cannon in the ground – at or near Cape Canaveral in Florida !! if I recall
correctly – and fired the bullet-shaped projectile to the moon using gun cotton as an explosive.
The dimensions are in the book, and the answer is worse (a lot worse!) than what Alex worked out. They would have been very flat astronauts soon after launch.
Other random comments:
Virial theorem. In circular orbit, KE = GMm/(2r), PE = -GMm/r, and total E = -GMm/(2r). The KE in low earth orbit is about half of what it takes to escape from the surface since r is not much bigger than the radius of the earth.
You ignored air friction, which is the main problem with launching a satellite as a projectile from the surface by giving it all of the required energy in advance. Well, that and getting the
velocity pointed the right way.
If you decide to store that electricity (or whatever) on the spaceship, you have to include the mass of the storage system when figuring out the specific impulse of the proposed power system.
Using coal in your Space Scuttle (see
if you didn’t get Uncle Al’s joke) might be better than NiMH or LiIon batteries, or even capacitors. I’ll leave that for your students to work out.
6. #6 Johnny Vector October 24, 2008
In fact, a slingshot or railgun alone can never work, because you can’t put something in orbit with a single impulse from the ground. Once you stop accelerating it, it is in orbit (neglecting air
friction), and will return 97 minutes later to the same point, going in the same direction. Uh-oh, everybody at the launch site duck!
Except of course unless it’s moving horizontally at the release point, it would have to go through the Earth to get there. And including air friction only makes it worse. You need additional
delta-v sometime after launch. Ideally half an orbit later, but it could be some other time. So anything you launch has to either have a working propulsion system, or rendezvous with something
already up there which has a working propulsion system.
As Dennis Moore would say, “This redistribution of energy is trickier than I thought.”
7. #7 Bill October 25, 2008
Uncle Al:
You’re forgetting two important things, it’s not height above sea level, it’s radius from the center of the Earth. The equatorial bulge means that local “g” is lower near the equator. The second
part is the “free” kinetic energy from the Earth’s rotation closer to the equator. There’s also the unspoken third factor of failed launches coming down over the ocean rather than inhabited
8. #8 Uncle Al October 26, 2008
Ah, Bill… We are in agreement that Mt. Kilimanjaro massively trumps Cape Canaveral as the planetary choice launch platform for every technical reason. Tanzania’s annual GDP of $15 billion implies
the US could purchase the entire country for less than the cost of the War on Drugs.
9. #9 Andres Villarreal October 28, 2008
The fact that the Shuttle would be many times more expensive than disposable rockets was known as soon as the first cost overruns, several years before first takeoff, brought the message home: it
is less expensive to throw away sophisticated machinery than to bring it back and refurbish it.
When the infamous tiles kept falling off, there was a great opportunity to say "for every reentry we are bringing back a building with wings, maybe this is not such a good idea", but politics prevailed.
When the calculations of financial viability for the shuttles required about a launch per week, lifting more cargo than anyone ever thought usable, it was a great moment to say "stop".
When the logistics of one launch per week proved impossible, NASA could have diversified its operation with small rockets for most lifting, but they decided to go ahead, keeping a once-per-month
schedule at the expense of security, and look what happened.
Matches for: Author/Editor=(Lin_Song-Sun)
Memoirs of the American Mathematical Society
2013; 60 pp; softcover
Volume: 221
ISBN-10: 0-8218-7290-7
ISBN-13: 978-0-8218-7290-1
List Price: US$60
Individual Members: US$36
Institutional Members: US$48
Order Code: MEMO/221/1037
This work is concerned with zeta functions of two-dimensional shifts of finite type. A two-dimensional zeta function \(\zeta^{0}(s)\), which generalizes the Artin-Mazur zeta function, was given by
Lind for \(\mathbb{Z}^{2}\)-action \(\phi\). In this paper, the \(n\)th-order zeta function \(\zeta_{n}\) of \(\phi\) on \(\mathbb{Z}_{n\times \infty}\), \(n\geq 1\), is studied first. The trace
operator \(\mathbf{T}_{n}\), which is the transition matrix for \(x\)-periodic patterns with period \(n\) and height \(2\), is rotationally symmetric. The rotational symmetry of \(\mathbf{T}_{n}\)
induces the reduced trace operator \(\tau_{n}\) and \(\zeta_{n}=\left(\det\left(I-s^{n}\tau_{n}\right)\right)^{-1}\).
The zeta function \(\zeta=\prod_{n=1}^{\infty} \left(\det\left(I-s^{n}\tau_{n}\right)\right)^{-1}\) in the \(x\)-direction is now a reciprocal of an infinite product of polynomials. The zeta function
can be presented in the \(y\)-direction and in the coordinates of any unimodular transformation in \(GL_{2}(\mathbb{Z})\). Therefore, there exists a family of zeta functions that are meromorphic
extensions of the same analytic function \(\zeta^{0}(s)\). The natural boundary of zeta functions is studied. The Taylor series for these zeta functions at the origin are equal with integer
coefficients, yielding a family of identities, which are of interest in number theory. The method applies to thermodynamic zeta functions for the Ising model with finite range interactions.
• Introduction
• Periodic patterns
• Rationality of \(\zeta_{n}\)
• More symbols on larger lattice
• Zeta functions presented in skew coordinates
• Analyticity and meromorphic extensions of zeta functions
• Equations on \(\mathbb{Z}^{2}\) with numbers in a finite field
• Square lattice Ising model with finite range interaction
• Bibliography
This website, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported licence.
British Institute of Precise Physics
What is Temperature?
Temperature is Orbiting Electron Frequency – OEF measured in GHz
GHz = GigaHertz = 1,000 Million Orbits per second.
One degree Centigrade is an OEF of 0.737 GHz.
10 degrees Centigrade is 0.764 GHz
One degree K or -272 degrees C is 0.0027 GigaHertz.
Temperature, Entropy & Heat are orderly OEF phenomena of Heat Laws & Formulae -
Gas molecules expand with temperature/pressure to fill the whole container. It’s easy to explain IC engine compression, refrigeration, conduction, radiation, & combustion.
Also Nuclear Energy using Newton’s Laws of Planetary Motion. “No Mass need disappear”.
Einstein’s hoax that heat is like “fixed size, solid marbles, bouncing about in random, chaotic motion against each other and sides of container”. This is Brownian Motion – Botany.
Quantum Physicists claim temperature is a little outside their scope!
All Einstein’s absurd theories have no use in the Real World of Engineers.
None of Einstein’s Relativity Theories are used by NASA for space flight calculations.
Microwave Oven Frequency range is further Proof that-
Temperature is Orbiting Electron Frequency
Microwave Ovens use 2.45 GHz, an OEF temperature of 245ºC.
245ºC is a high enough microwave oven temperature for quick 30 sec heating of food.
Random Chaotic Vibration has no relevance to microwave 2.45 GHz frequencies.
Ist OEF Law of Temperature:
The Orbiting Electron Frequency of all matter at same temperature is the same.
2nd OEF Law of Temperature:
The higher the temperature, the higher the OEF.
Conversely, the lower the temperature, the lower the OEF.
3rd OEF Law of Temperature:
Heat only flows from hot bodies to colder bodies – higher OEF to lower OEF.
This satisfies Conduction and the 1st law of thermodynamics.
Radiation: All bodies radiate electromagnetic waves at the OEF of that body.
Entropy is Orbiting Electron Peripheral Velocity measured in meters/sec.
“Orbiting Electron Frequency“ concepts & OEF LAWS explain concisely
the phenomenon of Temperature, Heat, Radiation, Entropy, Adiabatic, Chemical,
Combustion, Thermo Hydraulic and Thermo Nuclear processes.
OEF States of Matter:
Solid – a state of matter in which Orbiting Electrons Peripheral Velocity is less than the Escape Velocity relative to the Molecular Nucleus. In solids molecules are constrained by significant
interlocking of electron orbits
Liquid – a state of matter in which Orbiting Electrons Peripheral Velocity is equal to the Escape Velocity relative to the Molecular Nucleus. Liquids don’t expand like gases because the orbiting
electrons behave like satellites.
Gas – a state of matter in which Orbiting Electrons Peripheral Velocity is greater than the Escape Velocity relative to the Molecular Nucleus. In a gas expansion is free unless constrained by a
sealed container or gravity.
Nuclear Energy
OEF explains how Nuclear Energy works – without Mass loss#
# Einstein & Professors wrongly believe Mass disappears & turns into energy?
E = M x C^2 - this formula is logistically correct in Dimensional Analysis.
But if Mass = 0, Then E = 0 x C^2 = 0, so Nuclear Energy = Zero Energy! – QED
The formula E = M x C^2 is valid. Einstein’s hoax that mass disappears is absurd as below.
A fission (or disintegration) atomic bomb and/or nuclear energy plant comprise:-
unstable heavy elements Uranium235 &/or Plutonium to form a critical mass sufficient for,
atom split chain reactions to release neutrons to trigger more splits & widowed orbiting electrons.
They partner smaller orbits of lighter atoms viz Tin, keeping entropy – former peripheral velocity,
for awesome increase in electron orbit frequency, temperature & shock wave heat energy.
the gross mass of the neutrons, protons & electrons remains unchanged,
No mass disappears? Calculations of before & after widowed electron satellite orbits is precise – no mass lost!
Nobel Prize Physicists & Professors don’t know ”What is Temperature” nor Heat, nor Entropy.
Entropy is orbiting electron peripheral velocity – unchanged in Adiabatic Compression/Expansion.
Entropy precisely obeys all Newton’s Laws of Motion.
All us real world Thermodynamic Mechanical Design daily use Formulae & Maths,
precisely worked out by Isaac Newton over 300 years ago.
It is a travesty that hoaxers, Eddington, Einstein, Fusion Energy &
Physic Fraudsters are allowed to get away with it by inept, nepotee officials
in the UK NAO & Public Accounts Committee.
Source – BIPP – British Institute of Precise Physics
BIPP is dedicated to seeking the truth & exposure of Fraudulent Physicists!
1. Concise Concepts on What is Temperature?, Laws of Heat, Nuclear Energy & Thermodynamics.
2. Expose Royal Society 1919 Hoax that duped & misled Science for 96 years!
3. Expose Fusion Energy Fraud & Quantum Physics Fraud?
The 1919 Royal Society Hoax, duped Schools, Universities & Nobel Prize Commitees to the present day, with Einstein’s Hoaxes!
The 1919 Hoax gave Einstein kudos he still enjoys today to discredit brilliant Scientists, like Newton. The Hoax Theories include "Relativity, Space Warps Time, Quantum, Mass changes into energy, Temperature is an energy? – the average kinetic energy of "Random Chaotic Molecular Vibration". None have ever found a use for mankind in the real world; only fraud by bilking £Billions from inept & corrupt Governments.
Einstein & Planck rubbished their theories before they died. Quantum & Fusion Fraud is still rampant!
The definite orderly OEF concepts debunk the uncertain, random chaotic molecular vibration still taught in Schools & Universities.
© Copyright & Copyright Permission:
Material on this website is the copyright of BIPP Founder President – Michael V. Rodrigues – BSc (Eng) Mech & Elect. – Copyright Holder. Permission is given to Knowledge Tree Websites, like Wikipedia, to publish material from all pages of this website on condition that the source link or hyperlink www.whatistemperature.com is given prominence in all instances; in addition, this website is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported licence.
MathGroup Archive: March 2008 [00344]
Question on Sum[] function
• To: mathgroup at smc.vnet.net
• Subject: [mg86417] Question on Sum[] function
• From: sigmundv at gmail.com
• Date: Tue, 11 Mar 2008 02:55:58 -0500 (EST)
Dear group members,
If I evaluate Sum[Log[x^n],{n,1,Infinity}] I get -Log[x]/12 as an
answer, but if I plug any other value than -1 or +1 in, Mathematica
tells me that the series is divergent; for x=-1 the sum is -I*Pi/12
and for x=1 the sum is 0, of course.
If we restrict ourselves to real numbers, I would say that the series
is only meaningful for x>0, because for x<0, Log[x^n] is not defined
for odd n. For x>0, we write Sum[Log[x^n],{n,1,Infinity}] as
Sum[n*Log[x],{n,1,Infinity}], and clearly this series is only
convergent for x=1, with sum 0.
Well, my actual question was how to interpret the closed form
expression that Mathematica gives for the sum of the afore-mentioned
series. Mathematica ought to return to me some condition on x, because
Sum[Log[x^n],{n,1,Infinity}] == -Log[x]/12 is not true for all real,
or complex, x.
I hope that you can shed some light on this.
Kind regards,
Sigmund Vestergaard
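For what it's worth, a quick numerical sketch (in Python, with the mpmath library) of where the -Log[x]/12 presumably comes from: the partial sums of Sum[n*Log[x]] diverge, while the zeta-regularized value is zeta(-1)*Log[x] = -Log[x]/12, since zeta(-1) = -1/12.

import math
from mpmath import zeta   # mpmath's zeta is the analytic continuation

x = 2.0
for N in (10, 100, 1000):                       # partial sums grow without bound
    print(N, sum(n * math.log(x) for n in range(1, N + 1)))

print(float(zeta(-1)) * math.log(x))            # zeta(-1) = -1/12
print(-math.log(x) / 12)                        # matches Mathematica's closed form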
category-extras-0.53.6: Various modules and constructs inspired by category theory
Control.Monad.Free
Portability: portable
Stability: experimental
Maintainer: Edward Kmett <ekmett@gmail.com>
See http://wwwtcs.inf.tu-dresden.de/%7Evoigt/mpc08.pdf for the background on rep, abs and improve and their use. NB: the C type in that paper is just the right Kan extension of a monad along itself,
also known as the monad generated by a functor: http://www.tac.mta.ca/tac/volumes/10/19/10-19.ps
module Control.Monad.Parameterized
type PFree = PAp Either
type Free f = Fix (PFree f)
runFree :: Free f a -> Either a (f (Free f a))
free :: Either a (f (Free f a)) -> Free f a
class (Functor f, Monad m) => MonadFree f m | m -> f where
class MonadFree f m => RunMonadFree f m | m -> f where
cataFree :: (c -> a) -> Algebra f a -> m c -> a
Produced by Haddock version 2.1.0
Order of linear algebra in GLSL
03-14-2004, 06:51 AM #1
Junior Member Newbie
Join Date
Feb 2004
Bristol, England
Order of linear algebra in GLSL
DirectX had a funny way of dealing with this. I wonder if OGL is the same...
Are vectors in OGL and GLSL column or row vectors?
For instance, usually:
/ 1 2 0 2 \ / 1 \ / 1*1 + 2*2 + 4*0 + 3*2 \
| 3 0 1 3 | | 2 | = | 1*3 + 2*0 + 4*1 + 3*3 |
| 5 6 1 0 | | 4 | | 1*5 + 2*6 + 4*1 + 3*0 |
\ 1 0 0 2 / \ 3 / \ 1*1 + 2*0 + 4*0 + 3*2 /
but if GLSL is treating vectors as row vectors then most of the short cuts I've prepared suddenly go out the window. DX used column vectors in C++ and row vectors in shaders, so it was necessary
to pass the inverse transpose model-view-projection matrix to the shader. I really hope OGL doesn't do anything similar.
If I multiply a vector by a matrix, will GLSL treat it as the same operation as multiplying a matrix by a vector? Or does it not care about premultiplication and postmultiplication?
If at first you don't succeed, destroy all evidence that you ever tried.
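For anyone wanting to poke at the two conventions outside a shader, here is a quick NumPy sketch using the matrix and vector from the question (plain Python, purely for illustration):

import numpy as np

M = np.array([[1., 2., 0., 2.],
              [3., 0., 1., 3.],
              [5., 6., 1., 0.],
              [1., 0., 0., 2.]])
v = np.array([1., 2., 4., 3.])

print(M @ v)                          # column-vector convention: [11. 16. 21. 7.]
print(v @ M)                          # row-vector convention gives a different result
print(np.allclose(v @ M, M.T @ v))    # True: v*M == M^T * v, hence the transpose shuffling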
Re: Order of linear algebra in GLSL
In GLSL you can use both:
gl_Position = gl_Vertex * gl_ModelViewProjectionMatrix;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
In the first example gl_Vertex is threated as a row vector. In the second example gl_Vertex is a row vector.
But because OGL uses row vectors the second example is quite faster!
There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and
There is another theory which states that this has already happened...
Re: Order of linear algebra in GLSL
You got a typo in the second sentence and the last statement is completely wrong.
matrix * vector in math terminology is multiplying row * column, means vector is a column vector as all vectors in OpenGL.
Matrices are downloaded in column-major order which makes OpenGL different from C notations.
The performance should be the same because you need four instructions for both of the above calculations.
Re: Order of linear algebra in GLSL
You got a typo in the second sentence and the last statement is completely wrong.
Damn... *knockmyheadonthetable*
Thanks a lot for correcting me!
There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and
There is another theory which states that this has already happened...
Re: Order of linear algebra in GLSL
Thanks for clarifying. I've done a few expansions on paper too and it's becoming a lot clearer now.
How much overhead is associated with switching Shaders? My skysphere renderer is using one Shader for sky, one for planets, and one for planet rings. Will switching between these (as little as
possible, mind you) incur a great deal of overhead and, more importantly, do I have to set up Projection and Camera matrices again after changing Shaders?
If at first you don't succeed, destroy all evidence that you ever tried.
Re: Order of linear algebra in GLSL
If I multiply a vector by a matrix, will GLSL treat it as the same operation as multiplying a matrix by a vector? Or does it not care about premultiplication and postmultiplication?
Just to be clear
vector * matrix != matrix * vector
If this is the case with D3D, then it's broken.
For transforming the vertex into clipspace, you do
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
There is some significant overhead. If you can avoid redundent switches by reordering how you render, it would be better.
Projection and Modelview (Camera?) are uniforms and they belong to the state engine.
They don't change unless you change them.
Changing shaders or anything else doesn't change them.
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, FALSE, inverse_matrix);
Re: Order of linear algebra in GLSL
I don't think D3D Shaders let you choose which order you multiply in anyway. I haven't used the DirectX HLSL. DX Shader Assembly has an m4x4 instruction, which only allows you to multiply vector
* matrix. And this has just answered my eternal question: why do DirectX Shaders need the Inverse Transpose Model-View-Projection matrix...
Answer: DirectX Shader handling is broken.
If at first you don't succeed, destroy all evidence that you ever tried.
Re: Order of linear algebra in GLSL
Originally posted by Descenterace:
...why do DirectX Shaders need the Inverse Transpose Model-View-Projection matrix...
Because transforming normals can cause problems if you multiply a normal with the Model-View Matrix:
For example take the matrix:
Code :
(1 0 0 0)
(0 2 0 0)
(0 0 2 0)
(0 0 0 1)
if you multiply a normal with that matrix the normal isn't correctly transformed. The inverse-transpose Model-View Matrix handles that (this matrix is also called the normal matrix in GLSL).
There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and
There is another theory which states that this has already happened...
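A numerical illustration of that point, using the non-uniform scale from the example above reduced to 3x3 (NumPy stands in for the shader math; purely a sketch):

import numpy as np

A = np.diag([1., 2., 2.])        # the non-uniform scale from the post above
n = np.array([1., 1., 0.])       # normal of the plane x + y = 0
t = np.array([1., -1., 0.])      # a tangent vector lying in that plane

print(np.dot(A @ n, A @ t))      # -3.0: the naively transformed normal is wrong
N = np.linalg.inv(A).T           # inverse-transpose, i.e., the normal matrix
print(np.dot(N @ n, A @ t))      # 0.0: stays perpendicular to the transformed surface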
Re: Order of linear algebra in GLSL
"m4x4 instruction"
They can always change the compiler so that it handles both cases. The problem is that you would need the transpose.
matrix * vector == vector * (matrix)^T
"Inverse Transpose Model-View-Projection"
This is for transforming vertices? No, they do use the mvp.
One strange thing I noticed about D3D shaders is that some people were transforming the normal with the modelview matrix and it seemed to work. I'm not sure what the explanation for that is
Anyway, in this forum, we pour whiskey, not orange juice.
If you want to transform your normal in GL, in ARB assembly it looks like
DP3 result.x, state.matrix.modelview.invtrans[0], normal;
DP3 result.y, state.matrix.modelview.invtrans[1], normal;
DP3 result.z, state.matrix.modelview.invtrans[2], normal;
and in GLSL it is simply
result = gl_NormalMatrix * gl_Normal;
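Here is a small self-contained sketch of the same point outside any shader (the class name and the sample vectors are made up for illustration; the 3x3 math is written out by hand):

public class NormalMatrixDemo {
    // Multiply a 3x3 row-major matrix by a 3-vector.
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
        return r;
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static void main(String[] args) {
        // Non-uniform scale, like the matrix quoted above: diag(1, 2, 2).
        double[][] m    = {{1, 0, 0}, {0, 2, 0}, {0, 0, 2}};
        // Its inverse transpose: diag(1, 1/2, 1/2) (diagonal, so transposing changes nothing).
        double[][] invT = {{1, 0, 0}, {0, 0.5, 0}, {0, 0, 0.5}};

        double[] tangent = {1, -1, 0}; // tangent to a 45-degree plane
        double[] normal  = {1,  1, 0}; // its normal: dot(normal, tangent) == 0

        double[] t2 = mul(m, tangent);
        System.out.println(dot(mul(m, normal), t2));    // -3.0: transformed by M, no longer perpendicular
        System.out.println(dot(mul(invT, normal), t2)); //  0.0: transformed by M^-T, still perpendicular
    }
}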
Re: Order of linear algebra in GLSL
One strange thing I noticed about D3D shaders is that some people were transforming the normal with the modelview matrix and it seemed to work. I'm not sure what the explanation for that is
If you only use rigid-body transformations (rotation and translation), then the inverse-transpose model-view matrix is equal to the model-view matrix:
((MV)^T)^(-1) = MV
This is only the case if you use transformations where the elementary transformation matrix has this property ( ((M)^T)^(-1) = M ). As said, this is true for rotation and translation matrices.
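Spelled out (assuming a rigid-body modelview matrix): the upper-left 3x3 block of such a matrix is a pure rotation R, and rotations are orthogonal, so
R^T R = I, hence (R^T)^(-1) = R
Normals are direction vectors (w = 0 in homogeneous coordinates), so the translation column never touches them; only that 3x3 block matters. That is why transforming normals by the plain modelview matrix seems to work as long as there is no scaling or shearing.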
|
{"url":"http://www.opengl.org/discussion_boards/showthread.php/162283-Order-of-linear-algebra-in-GLSL","timestamp":"2014-04-19T02:07:26Z","content_type":null,"content_length":"76451","record_id":"<urn:uuid:69c63c7a-29f5-4a2b-83a2-f694ab8d7a31>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Induction derivative help
April 12th 2013, 11:46 AM
Math Induction derivative help
Hi, if anybody can help me with this question, it would be greatly appreciated :)
Use mathematical induction to prove that $f^{(n)}(x) = (2^n x + n\,2^{n-1})e^{2x}$, where $f^{(n)}(x)$ represents the nth derivative of f(x)
$f(x) = xe^{2x}$
April 12th 2013, 11:59 AM
Re: Math Induction derivative help
Well, obviously when n= 1, $f'(x)= e^{2x}+ 2xe^{2x}= (2x+ 1)e^{2x}= (2^1x+ 2^0)e^{2x}$.
Now, assume it is true for n = k: $f^{(k)}(x)= (2^k x + k\,2^{k-1})e^{2x}$
What is the derivative of that?
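For reference, carrying out the step being asked for (the product rule does all the work): $\frac{d}{dx}\left[(2^k x + k\,2^{k-1})e^{2x}\right] = 2^k e^{2x} + 2(2^k x + k\,2^{k-1})e^{2x} = (2^{k+1} x + (k+1)2^{k})e^{2x}$, which is the claimed formula with $n = k + 1$, completing the induction.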
|
{"url":"http://mathhelpforum.com/calculus/217330-math-induction-derivative-help-print.html","timestamp":"2014-04-21T15:32:23Z","content_type":null,"content_length":"4202","record_id":"<urn:uuid:c2f77b04-2b3a-49fe-bc3f-d70716481fa0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
find the derivative
what an ugly function ... $g(x) = \frac{3}{4} \cdot \frac{\ln{x}}{(3x+5)^5}$ $g'(x) = \frac{3}{4} \cdot \frac{(3x+5)^5 \cdot \frac{1}{x} - \ln{x} \cdot 15(3x+5)^4}{(3x+5)^{10}}$ $g'(x) = \frac{3}{4} \cdot \frac{(3x+5) \cdot \frac{1}{x} - 15\ln{x}}{(3x+5)^6}$ $g'(x) = \frac{3(3x+5) - 45x\ln{x}}{4x(3x+5)^6}$
If you actually meant the log of the whole quotient, then you should have used brackets to indicate ln[your expression], like so ... g(x)= ln[x^(3/4)/((3x+5)^5)] or ... $g(x) = \ln\left[\frac{x^{\frac{3}{4}}}{(3x+5)^5}\right]$. Use the laws of logs to break up the expression into several logs, then find the derivative of each logarithmic term: $\ln\left[\frac{x^{\frac{3}{4}}}{(3x+5)^5}\right] = \frac{3}{4}\ln{x} - 5\ln(3x+5)$
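Finishing that last line, the derivative then comes out term by term: $g'(x) = \frac{3}{4x} - \frac{15}{3x+5} = \frac{3(3x+5) - 60x}{4x(3x+5)} = \frac{3(5 - 17x)}{4x(3x+5)}$ — considerably less work than the quotient rule on the original form.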
|
{"url":"http://mathhelpforum.com/calculus/71618-find-derivative.html","timestamp":"2014-04-17T22:04:18Z","content_type":null,"content_length":"50575","record_id":"<urn:uuid:12682682-b228-4642-a290-3a8bb44b25a1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ludlow, MA Math Tutor
Find a Ludlow, MA Math Tutor
As my profile title suggests, I am a huge proponent of conceptual learning and practical thinking. It is true that in many cases, success in a class or on an exam does depend on one's ability to
memorize and recall individual facts, figures, dates, etc. However, as far as long-term success is conc...
17 Subjects: including algebra 2, algebra 1, reading, SAT math
...Most of my time there was spent working with younger kids to help them with their homework, but I also conducted one-on-one tutoring sessions with high-school and middle-school aged students. I
became the go-to staff member for help with math or science. I really enjoyed getting to know the students I worked with and watching their grades improve when they put in the time to work with
15 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
...I worked with autistic children at a high school level, teaching them Mathematics, Science, History, Personal Care and Cooking for a living. I am certified to teach Math for grades 5-8 and
Business (Finance, Accounting, Marketing, Statistics, Economics, Management, Personal Finance) for grades 5...
16 Subjects: including prealgebra, algebra 1, geometry, SAT math
...I was a 2002 National Merit Scholarship semifinalist after achieving a PSAT score in the 99th percentile of my home state (Washington). I went on to be selected as one of my state's National Merit Scholars. In 2001, I received an ACT composite score of 35. This was in the 99th percentile nationwide.
17 Subjects: including algebra 2, ACT Math, public speaking, SAT math
...I am also comfortable proofreading for writers who use English as a second or other language. I work as a freelance survey analyst and use SPSS for most of my statistical analysis. I am
comfortable importing and exporting data, formatting data and labels, and performing statistical tests such as t-tests, chi-squared tests and ANOVAs.
45 Subjects: including geometry, statistics, prealgebra, SPSS
|
{"url":"http://www.purplemath.com/ludlow_ma_math_tutors.php","timestamp":"2014-04-18T21:27:51Z","content_type":null,"content_length":"23953","record_id":"<urn:uuid:ba2a0d9f-1d11-4f3a-a4e2-3bed2a75a8fb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Magic Trick: 8 4-Digit Numbers Summing To 35712?
Is it possible to create a pool of distinct four digit numbers, so that the sum of 8 randomly chosen numbers from this pool is 35712? If yes, how?
I posted this in Quora also.
The basis of this question is actually a performance by a magician. He first wrote down his prediction on a sheet of paper. Then he picks someone from the audience to randomly select 8 four digit
numbers from about 1000 (according to him) distinct four digit numbers kept in a bowl. The magician then adds these 8 numbers and finds the sum as 35712, which happens to be his prediction as
well. I suspect the pool of numbers is built so that, with high probability, the sum of any 8 numbers drawn from it is 35712. Also, I think 35712 could be replaced by any 5-digit number, with the pool re-created accordingly.
Am I wrong in asking the question? Or should I think that the fellow has got magical powers? He has chosen at least 11 or 12 numbers from the bowl that were distinct; 8 for performing the magic and 3 or 4 to show that they are distinct, before actually going into the act.
Thank you so much for your valuable time.
You came to the right place because I have studied “Mentalism” i.e. mental magic, i.e. tricks which make it appear that one has paranormal powers, for some time. The top lesson gleaned from years of
reading is that the audience never remembers what happened.
I have confirmed this wisdom time and again, both in my own “performances” and in the work of professional magicians. So I hope you won’t take it badly if I suggest you might not have perfectly
recalled the exact sequence of events.
Now, there are three broad ways this trick can be accomplished. I won’t tell you exactly how the first two work, but we’ll try to figure out the third. It will be obvious that the methods in all
three can be mixed.
The first is substitution. That is, the picking is genuine, as it appears. The magician really does have a bowl of many 4-digit numbers, all different. Eight are pulled out and somebody—probably not
him—does the addition. Usually, the magician will have it done on a large chalkboard so that the audience can see the numbers, or he’ll have one or two people use calculators so that the sum can be
verified. It helps (but is not necessary) for the magician to be able to add rapidly. This is a skill that can be mastered easily.
The sum is announced, perhaps double-checked, then the magician reveals his “prediction”, which is found to match. Off the top of my head, I can name about a dozen ways that the magician “swaps” the
prepared prediction with another “prediction” he made after learning the true sum.
The second is a force. Here, the picking is not genuine, despite appearances. There are literally hundreds of ways, with new ones invented monthly, whereby a magician can make it seem that you have a
free choice where in reality the outcome is predetermined. (Sort of like how neurologists view all of human behavior.) Of course, he needn’t force every number of the eight picked. He would only have
to force enough of them to cause the final solution to belong to a small set (a dozen or less) of possible sums. He could then use any number of methods to reveal the “prediction.” A cheesy, but
effective, way to do this is to have, say, four different predictions in your suit pockets, two in the breast and two in the outside pockets: you pull out the one that was the final force. A miracle!
The third is mathematical. That is, the 4-digit numbers on the slips are designed such that picking eight of them forces a predetermined single sum of 35712. There cannot be 9,000 slips with all the 4-digit numbers 1000, 1001, 1002,…, 9998, 9999 because, of course, by chance you could end up with the sum 8028 (the lowest possible) or 79964 (the highest possible). So the bowl must be loaded with
slips such that the sum is fixed. If such a set of numbers exists, such a trick would be called “self working.” If this method is used, I’d lay my money on a set of sums, not just 35712, married to a
I saw the solution posted on Quora—the writer there suggests labeling all slips “4464” (8 × 4464 = 35712). This works mathematically, but it makes for a poor performance. You can get away with it only by walking to 8
different, widely separated audience members, have them silently pick a number, then have them add it to the sum shown on, say, a calculator. If you’re blustery enough you might get away with this,
but chances are you’ll get caught.
So I leave it for a homework problem for everybody. Does a set of N different 4-digit numbers exist such that pulling 8 out of N leads to a sum of 35712 every time?
Mathematical Magic Trick: 8 4-Digit Numbers Summing To 35712? — 9 Comments
1. Only if N = 8.
2. I’d agree with Mr. Sears.
Think about the state of affairs after drawing 7 numbers: there is only one number that can be added to the sum of the 7 to total 35712. Therefore, the last draw must depend on the previous 7.
There would have to be some mechanism for representing the state of the system and constraining the choice of the final number.
If N = 9, then, after 7 draws, there is still only 1 number that can bring our total to the required 35712, yet there is a choice of 2. Unless those 2 numbers are the same, you would expect the
wrong sum 50% of the time, were you to repeat the experiment a large number of times.
Therefore, the 2 numbers must be the same, which contradicts our requirement that the 9 numbers be distinct.
3. That’s settled then … it’s magic!
4. Speaking of magic numerical feats: I've heard of individuals who, by employing a Conjuring Interval in their Mind Magic, are able to produce a number with amazing properties. But I've also heard in other circles that it isn't magic at all; simply Sleight of Mind.
Watched a program on paranormal debunking where the host had 8 (I think) different people give him a number. He predicted the sum in advance. Each had a card in their possession which when they
turned over had the number they had given. Said magic wasn’t explained.
I’m certain there are sets of numbers adding to N and maybe the result could be forced by switching bowls each containing slips of a single value. OTOH I once saw Penn & Teller stuff a fortune
cookie so calculating the answer in real time then stuffing the prediction isn’t out of the question.
5. I’m sure that I have seen Derren Brown doing similar tricks but don’t recall him saying how he did it.
I had a feeling that 35712 would divide by a large power of 2 and factorised it to 128*9*31. If the individual numbers were also divisible by high powers of 2 then this might be a clue as to how the trick was done, if it was only the last number that would be “forced” to take a particular value to make the total add up.
6. Derren Brown famously did a live performance where throughout the show he had been subliminally imprinting various terms on the audience's subconscious, including some numbers and a choice of
daily newspaper, so later on when he asked audience members to choose a newspaper, article, line, and word, they chose exactly what he wanted. It was interesting because after the fact he showed
flashbacks of the points where he had said these things, and in retrospect, in isolation they were blindingly obvious. In context, though, viewers just forgive the occasional slip of the tongue,
not realizing it’s deliberate.
7. I think that the best example of Derren leading people in their choices of what to say or do was with the blind athlete asked to describe a past event. Derren had written down fake details of the
event and got the guy to reproduce these details as he recounted the event. When presented with Derren’s “predictions” in braille, the guy was left wondering how Derren had known all the details
beforehand. You can rule out what the participant saw in this case so you have the full trick in audio.
It is worth noting that Derren usually filters his participants down to the most easily led bad liars to get the best results and you only see the successes on TV. He is still quite successful in
live performances though; far better than “real” TV psychics when doing their exact specialism.
8. So I leave it for a homework problem for everybody. Does a set of N different 4-digit numbers exist such that pulling 8 out of N leads to a sum of 35712 every time?
No (unless N=8). In every set of 8 numbers, replacing one of the 8 with a number not in the set gives a sum different from that of the original set.
9. George, your mentioning of that famous show by Derren Brown reminds me of the other occasion when he planted the fiendish suggestion in the audience’s minds that they would forget they had been
to his show immediately they left. Of course they did. People who were interviewed in the foyer immediately afterwards had no idea what they had just been to.
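For what it's worth, comment 8's swap argument is easy to check mechanically. A small sketch (the class name and the particular numbers are arbitrary): take any 9 distinct four-digit numbers, sum one 8-element subset, then swap in the ninth; the sums must differ.

import java.util.Arrays;

public class DistinctSums {
    public static void main(String[] args) {
        // Any 9 distinct 4-digit numbers will do; these are arbitrary.
        int[] pool = {1000, 1203, 2048, 3111, 4464, 5555, 6789, 8100, 9999};

        int[] first = Arrays.copyOfRange(pool, 0, 8); // one 8-element subset
        int[] second = first.clone();
        second[0] = pool[8];                          // swap in the ninth number

        int s1 = Arrays.stream(first).sum();
        int s2 = Arrays.stream(second).sum();

        // The sums differ by exactly pool[8] - pool[0], which is nonzero
        // because the numbers are distinct -- so no such set with N > 8 exists.
        System.out.println(s1 + " vs " + s2 + " (differ by " + (s2 - s1) + ")");
    }
}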
|
{"url":"http://wmbriggs.com/blog/?p=5205","timestamp":"2014-04-16T13:03:08Z","content_type":null,"content_length":"66992","record_id":"<urn:uuid:d282879b-d3bc-4464-b43e-1472dcabe63f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Christoffel symbols from definition or Lagrangian
Hi, noospace, I seem to have lost track of this thread, hence my tardiness.
I'm borrowing my notation from Do Carmo `Differential Geometry of Curves and Surfaces', it's a text on classical differential geometry.
Right, unfortunately I don't have it front of me! If you have more questions you may want to provide a bit more detail, e.g. when you wrote
[latex]\mathbf{x} : U \subset \mathbb{R}^2 \to S[/latex] is a local parametrization of a regular surface, i.e. a smooth nonsingular map which is a homeomorphism onto a neighborhood of some point. A regular
surface is just a subset of [latex]\mathbb{R}^3[/latex] for which each point in S has a local parametrization.
I am not sure without checking your textbook what S is supposed to be; from such clues as "parameterization" and first letter of "surface" I guess that indeed you are learning surface theory, and
that S is a surface immersed in [itex]E^3[/itex].
I didn't forget the normal vector. The second derivatives of a local parametrization can be written in terms of the moving frame [latex]x_u,x_v, x_u\wedge x_v[/latex]. Do Carmo defines the
Christoffel symbols as the coefficients of the tangent vectors [latex]x_u,x_v[/latex]. The normal vectors don't come into it.
Let's back up a bit. You didn't say so, but I assume that [itex]x(u,v)[/itex] is a three-vector valued function, the parameterization of our two-dimensional surface. If so, [itex]x_u, \, x_v[/itex]
are the tangent vectors (the elements of the "coordinate basis" for the local coordinate chart we are about to construct, obtained by applying [itex] \partial_u, \; \partial_v[/itex] to our
parameterizing map [itex]x(u,v)[/itex].). Taking the [itex]E^3[/itex] inner product of these vectors gives the metric induced on S from [itex]E^3[/itex]. Hopefully this sounds familiar!
In your example, we can write the paraboloid (I made one small change in notation) as
x(r,\phi) =
\left[ \begin{array}{c} \frac{1}{2} \, a \, r^2 \\
r \, \cos(\phi) \\ r \, \sin(\phi) \end{array} \right]
Then at each point [itex]x(r, \phi)[/itex], the vectors
x_r = \left[ \begin{array}{c} a \, r \\ \cos(\phi) \\ \sin(\phi) \end{array} \right], \;
x_\phi = \left[ \begin{array}{c} 0 \\ -r \, \sin(\phi) \\ r \cos(\phi) \end{array} \right]
span the tangent plane to that point of S. Taking the [itex]E^3[/itex] inner product of these vectors gives the line element, written in a local coordinate chart using the parameters [itex]r,\phi[/
itex] as coordinates:
ds^2 = \left( 1 + a^2 \, r^2 \right) \, dr^2 + r^2 \, d\phi^2, \;
0 < r < \infty, \; -\pi < \phi < \pi
(In general, we need to impose restrictions on the ranges of the coordinates to avoid singularities, multiple values, and other possible problems. In the case of a surface of revolution we will in
general need another local coordinate chart made like this one, in order to cover the "date line" singularity at [itex]\phi=\pm \pi[/itex], and one more local coordinate chart to cover [itex]r=0[/
itex]. Or we can change coordinates to find a global coordinate chart, if this is possible, which in general need not be the case.)
Now, what can you say about the cross product [itex]x_r \times x_\phi[/itex], geometrically speaking?
Can you follow the procedure I outlined (probably also covered in Do Carmo's book) to compute the geodesic equations and compare it with the procedure you learned (mislearned?) from somewhere?
For computations it is usually best to adopt the frame field
\vec{e}_1 = \frac{1}{\sqrt{1+a^2 \, r^2}} \, \partial_r, \;
\vec{e}_2 = \frac{1}{r} \, \partial_\phi
rather than the coordinate basis [itex] \partial_r, \; \partial_\phi[/itex]. Applying the former to our parameterizing map gives
tangent vectors spanning the tangent plane at a given point on S.
Scalar multiplying to make a vector into a unit length vector is often called "normalization"; so is dividing through some equation to make the highest order term monic. Perhaps my usage of these two
slightly different meanings of "normalization" without comment was confusing; if so, I apologize.
If you treat the first fundamental form as a Lagrangian then you can retrieve the Christoffel symbols from the Lagrange equation.
Right, this is the "method of the geodesic Lagrangian", which is usually the most computationally convenient way to obtain the geodesic equations from a given line element. However, the
Euler-Lagrange operator applied to the geodesic Lagrangian will in general need to be normalized, by making the unique second order terms monic, before you can identify the coefficients of the
quadratic terms in first order derivatives with the Christoffel coefficients!
I guessed there must be a theorem that this holds in general; you seem to indicate otherwise. My question is whether or not there are any counterexamples... How can this be so? Surely they agree for a simple sphere?
Sorry, I can't guess what is troubling you. It seems you might have misunderstood the minor point about normalization; the more important point is that you can always compute the geodesic equations
from a Lagrangian; indeed, this is often the most efficient route.
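To make that route concrete for the paraboloid metric above (a sketch, with t an affine parameter along the curve), take the geodesic Lagrangian
L = \left( 1 + a^2 \, r^2 \right) \dot{r}^2 + r^2 \, \dot{\phi}^2
The Euler-Lagrange equation in r gives [itex]2(1+a^2 r^2)\ddot{r} + 2a^2 r \dot{r}^2 - 2 r \dot{\phi}^2 = 0[/itex]; dividing through by [itex]2(1+a^2 r^2)[/itex] to make the second order term monic,
\ddot{r} + \frac{a^2 r}{1 + a^2 r^2} \, \dot{r}^2 - \frac{r}{1 + a^2 r^2} \, \dot{\phi}^2 = 0
while the equation in [itex]\phi[/itex], from [itex]\frac{d}{dt}(2 r^2 \dot{\phi}) = 0[/itex], reads
\ddot{\phi} + \frac{2}{r} \, \dot{r} \, \dot{\phi} = 0
Reading off the coefficients of the quadratic first order terms gives [itex]\Gamma^r_{rr} = a^2 r/(1+a^2 r^2)[/itex], [itex]\Gamma^r_{\phi\phi} = -r/(1+a^2 r^2)[/itex], and [itex]\Gamma^\phi_{r\phi} = \Gamma^\phi_{\phi r} = 1/r[/itex].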
By the way, why did you take down Relwww?
For the reasons stated here! Or IOW, everyone should just read a good book!
|
{"url":"http://www.physicsforums.com/showthread.php?t=187761","timestamp":"2014-04-16T04:26:42Z","content_type":null,"content_length":"56772","record_id":"<urn:uuid:cd0aa21d-31bf-43f6-abb8-48584e402043>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Index for MGF 1107
Liberal Arts Mathematics II for Honors
Ø Course Outline: This document will give you a good idea of what the course will be like.
Ø Schedule: This document gives a tentative schedule for the entire term.
Ø Assignment Sheet: This document contains a partial list of assignments for the course. The warm ups should be emailed to me, while the other assignments should be completed on paper and turned in
at the beginning of the class on the day they are due.
Ø Homework Sheet: This document contains a list of problems to complete in order learn the material.
Ø Course Objectives: This is the list of course of objectives established by the HCC mathematics cluster.
Computer Projects
Ø Hamilton Method Apportionment Project
Ø Congressional Apportionment Project
Ø Financial Mathematics Project
Ø Computer Algebra System Project
Ø HMTL Version of Computer Algebra System Project
Ø HTML Version of Number Theory Project
TOPICS LINKS
The following is a list of links for each of the topics in my Liberal Arts Mathematics II course.
Voting Methods
Ø Example Problems on Voting Methods: This site contains notes for a mathematics course at Princeton University. There are good example problems for each of the voting methods we will study in our course.
Ø Voting Methods Lectures: A nice set of lesson on voting method courtesy of Washington State University’s Department of Mathematics.
Ø 2005 Heisman Trophy Voting: The Heisman Trophy in college football is decided by a Borda count. Try working out the points for each player as given at the bottom of this article. The article
comes from the website devoted to the Heisman Trophy—Heisman.com - Heisman Trophy.
Ø Solution to #15 in 1.1: This problem is a good example of a strategic voting question.
Ø Three Proofs of Arrow’s Impossibility Theorem: Although these proofs by John Geanakoplos are beyond the level of the course, I have included this link here for any interested students. It is a
pdf file and requires acrobat reader to open it.
Ø Instant Runoff Voting: In some parts of the country voters are advocating the use of Instant Runoff Voting as a fairer way to resolve local elections. This is a practical implementation of the
Hare method we discuss in class. My thanks to James Moore for bringing the issue to my attention.
Ø Online Voter Registration: A link to the voter registration site maintained by the Florida Department of State Division of Elections.
Ø Congressional Apportionment: This site contains information on apportionment from the U. S. Census Bureau. It includes information on the history of apportionment in the U.S. and how the
apportionments are calculated.
Ø United States House of Representatives: This page contains a list of all the members of the United States House of Representatives. The link takes you to the section on the list with the
members from Florida.
Financial Mathematics
Ø Formulas for Financial Mathematics: These are the formulas we will learn about
Graph Theory
Ø Graph Theory -- from MathWorld: A short description of Graph Theory.
Ø The Konigsberg Bridge Problem: This site contains information about Euler’s well-known problem and the origins of Graph Theory.
Ø History of the TSP: A great site on the history of the traveling salesman problem.
Ø William Hamilton: A short biography of the mathematician for whom the hamiltonian circuit is named.
Ø Julia Robinson: A short biography of the mathematician believed to have coined the term “traveling salesman problem.” She is also the first woman mathematician to have been elected to the National Academy of Sciences.
Ø Number Theory -- from MathWorld: A short description of Number Theory.
Ø An Example of Modular Arithmetic: This example shows how modular arithmetic can simplify some kinds of computations.
Ø Information Sheet for Cryptology: Formulas and other information for doing affine ciphers.
Ø Euclid's Elements: A wonderful HTML version of Euclid's Elements. It contains lots of explanatory text and diagrams. Many of the diagrams are java applets that can be manipulated to help in
understanding the theorems.
Ø Fermat: Fermat was an amateur mathematician who corresponded with the best mathematicians of his day in his effort to solve mathematical problems. He proposed, and claimed to prove, Fermat's Last Theorem.
Ø Euclid: The author of the Elements—a work which collected and expanded on the Geometry and Number Theory known to the ancient Greeks.
Ø Eratosthenes: Two hundred fifty years before the birth of Christ this African-born mathematician used Geometry to accurately calculate the circumference of the Earth.
Ø Andrew Wiles: In 1993, he proved Fermat's Last Theorem, a 300-year-old unsolved mathematics problem in Number Theory.
Ambrioso Fall 2006
|
{"url":"http://content.hccfl.edu/faculty/Alex_Ambrioso/MGF1107H/Index.htm","timestamp":"2014-04-19T01:47:41Z","content_type":null,"content_length":"37544","record_id":"<urn:uuid:4cfa0ec2-a11b-49c7-82d1-bcf27522450b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. Thanks to an anonymous editor for drawing my attention to Frege's essay.
2. Whitehead and Russell observe that a definition of, e.g., cardinal number, “contains an analysis of a common idea, and may therefore express a notable advance (1925, 12).” A little later they add,
“it will be found, in what follows, that the definitions are what is most important, and what deserves the reader's prolonged attention.”
3. The previous version of this entry misreported Urbaniak's views on this topic. Thanks to Urbaniak and Hämäri 2012 for the correction.
4. Recall that we have put no restrictions on D other than those stated at the outset: that its definiendum and definiens are of the same logical category and that the former contains the defined term.
The proof of the claim relies on the Replacement Theorem for equivalent formulas.
5. Notice that in a definition in normal form, the defined term is the only non-logical constant in the definiendum. Hence, in such a definition, the defined term need not be specified separately.
6. This requirement is the stumbling point when the Ontological Proof is formalized in classical logic. The definition of “God” as “that than which nothing greater can be thought” does imply the
existence of God. But the definition is legitimate only if there is one, and only one, being such that nothing greater can be thought than it. The definition cannot serve, therefore, as the ground
for a proof of the existence of God. (If the Ontological Proof is formalized in a logic that admits vacuous singular terms, then the definition may well be legitimate, but it will not imply the
existence of God.)
7. The traditional account allows contextual definitions—that is, definitions that provide a method of reducing sentences containing the defined terms to sentences of the ground language. (Such a
definition can be viewed as consisting of an infinity of instances of form (2), each sentence containing the defined term serving as the definiendum of one instance.) However, the traditional account
implies that a contextual definition adds no new power, for its effect can be gained by a definition in normal form.
It is instructive to reflect here on Russell's theory of definite descriptions. (For an account of this theory, see the entry on descriptions.) Suppose a definite description ‘the F’ is introduced
into a classical first-order language in the manner prescribed by Russell's theory. The Conservativeness and Eliminability criteria are, it appears, satisfied. Yet an equivalent definition in normal
form may well not exist. Why this incongruity?
The answer is that a definite description, under Russell's theory, is not a genuine singular term; it is not even a meaningful unit. When ‘the F’ is added to the ground language in the manner of
Russell, the resulting language merely looks like a familiar first-order language. In actuality its logic is quite different. For instance, one cannot existentially generalize on the occurrences of
‘the F’. The logical form and character of the formulas of the expanded language is revealed by their Russellian analyses, and these contain no constituent corresponding to ‘the F’. (There is also
the further fact that, under the Russellian analysis, formulas containing ‘the F’ are potentially ambiguous. The earlier observation holds, however, even if the ambiguity is somehow legislated
away—for instance, by prescribing rules for the scope of the definite description.)
Russell's theory is best thought of as providing a contextual elimination of the definite description, not a contextual definition of it.
8. Not all recursive definitions formulable in the language of Peano Arithmetic have normal forms. For instance, a recursive definition can be given in this language for truth—more precisely, for
“Gödel number of a true sentence of the language of Peano Arithmetic”—but the definition cannot be put in normal form. Recursive definitions in first-order arithmetic enable one to define Π¹₁ sets of natural numbers, whereas normal forms exist only for those that define arithmetical sets. For a study of recursive definability in first-order languages, see Moschovakis 1974.
9. Note that we can regard a recursive definition such as (15) as an implicit definition by a theory that consists of the universal closures of the equations.
10. It is sometimes said that logical constants are implicitly defined by the logical laws, or by the logical rules, governing them. More specifically, it has been claimed that the “introduction and
elimination” rules for a logical connective are implicit definitions of the connective. (The idea has its roots in the work of Gerhard Gentzen.) For example, the sentential connective ‘and’, it is
claimed, is defined by following rules:
‘And’-Introduction: From φ and ψ, one may infer ‘φ and ψ’;
‘And’-Elimination: From ‘φ and ψ’, one may infer φ and one may also infer ψ.
These ideas invoke a notion of implicit definition that is quite different from the one under consideration here. Under the latter notion, non-logical constants are implicitly defined by a theory,
and the interpretation of logical constants is held fixed. The interpretation of the logical constants provides the scaffolding, so to speak, for making sense of implicit definitions. Under the
former notion, the scaffolding is plainly different. For further discussion, see the entry on logical constants and the works cited there.
11. If the aim is to explain the rationality of accepting a theory on the basis of actual observations, then almost the entire theory would need to be taken as implicitly defining theoretical terms.
Now both criteria would be violated.
If the aim is to sustain the idea that the factual component of a theory is identical to its empirical content, then one can take what has come to be known as the “Carnap sentence” for the theory as
implicitly defining the theoretical terms. Now there is a violation only of the Eliminability criterion. For further discussion and for an explanation of the notion of “Carnap sentence,” see
Demopoulos 2003.
12. And also for systems of interdependent definitions. From now on, the expression ‘circular definition’ will be understood broadly to include these systems as well.
13. More precisely, finiteness is defined as follows. Let ground models be interpretations of the ground language. And call a hypothesis V reflexive iff, for some number n > 0, n applications of the
revision rule to V yields V again. A definition D is finite iff, for all ground models M, there is a natural number m such that for all hypotheses V, the result of m application to V of the revision
rule for D in M is reflexive.
14. The key features of C[0] are that (i) integer indices are assigned to each step in a derivation to distinguish revision stages, and (ii) the rules for definitions, namely, Definiendum
Introduction and Definiendum Elimination, are weakened. If an instance of the definiens is available as a premiss and its index is j then the corresponding instance of the definiendum may be inferred
but must be assigned the index j + 1. And, conversely, if an instance of the definiendum with an index j + 1 is available, then the corresponding instance of the definiens may be inferred but must be
assigned the index j. For a full account of C[0], see Gupta and Belnap 1993. For more information about finite definitions, see Martinez 2001 and Gupta 2006.
15. Since revision sequences are typically non-monotonic, the extension is not straightforward. The limit stages in a transfinite revision sequence can be treated in a variety of ways. This topic has
been studied by Anil Gupta, Hans Herzberger, Nuel Belnap, Aladdin Yaqūb, André Chapuis, and Philip Welch. See the entry on the revision theory of truth for references.
|
{"url":"http://plato.stanford.edu/entries/definitions/notes.html","timestamp":"2014-04-17T03:49:40Z","content_type":null,"content_length":"22043","record_id":"<urn:uuid:92eaa148-60bd-49c6-9439-39c95933bd1e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
g/kwh to gal/h - Page 3 - OnlineConversion Forums
Originally Posted by
how to compute kwh/liter?
Measure the engine's power output (kilowatts) and fuel flow (liters per hour). Divide.
It varies with the heat content of the fuel and the efficiency of the engine. For a particular engine and fuel, it varies considerably with torque and rpm (the plot is called an engine map, and shows
brake specific fuel consumption contours vs torque and rpm).
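For instance, with round illustrative numbers: an engine delivering 100 kW while burning 25 liters of fuel per hour gives 100 / 25 = 4 kWh per liter; at 50 kW and 20 L/h it would be 2.5 kWh per liter.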
|
{"url":"http://forum.onlineconversion.com/showthread.php?p=86306","timestamp":"2014-04-17T03:49:16Z","content_type":null,"content_length":"52195","record_id":"<urn:uuid:2ac497ee-dcf7-4ec4-a95b-d48cbab58d05>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: A VISIT WITH THE ∞-LAPLACE EQUATION
In these notes we present an outline of the theory of the archetypal L^∞ problem in the calculus of variations. Namely, given an open U ⊂ ℝ^n and b ∈ C(∂U),
find u ∈ C(Ū) which agrees with the boundary function b on ∂U and minimizes
(0.1) F(u, U) := ‖Du‖_{L^∞(U)}
among all such functions. Here |Du| is the Euclidean length of the gradient Du of u. We
will also be interested in the "Lipschitz constant" functional as well. If K is any subset
of ℝ^n and u : K → ℝ, its least Lipschitz constant is denoted by
(0.2) Lip(u, K) := inf {L ∈ ℝ : |u(x) − u(y)| ≤ L|x − y| for all x, y ∈ K}.
Of course, inf ∅ = +∞. Likewise, if any definition such as (0.1) is applied to a function
for which it does not clearly make sense, then we take the right-hand side to be +∞. One
has F(u, U) = Lip(u, U) if U is convex, but equality does not hold in general.
Example 2.1 and Exercise 2 below show that there may be many minimizers of F(·, U)
or Lip(·, U) in the class of functions agreeing with a given boundary function b on ∂U.
While this sort of nonuniqueness can only take place if the functional involved is not
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/277/2811249.html","timestamp":"2014-04-20T22:01:47Z","content_type":null,"content_length":"8234","record_id":"<urn:uuid:e0939702-eb41-41e2-95aa-e416d207aa1e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FINM 34500 / STAT 39000
Stochastic Calculus
M 6:30 -- 9:30 , 174 Kent
Greg Lawler , 415 Eckhart,
e-mail: lawler at math.uchicago.edu
This is an introduction to Brownian motion and stochastic calculus primarily for students in the Masters of Financial Mathematics program. Topics include: discrete time martingales, Brownian motion,
definition of stochastic integral, relation with partial differential equations (heat equation, Feynman-Kac formula), martingale theory (Girsanov theorem, equivalent measures), basics of European
option pricing. As time permits we will discuss: American options (optimal stopping), jump processes, fractional Brownian motion.
There will be weekly problem sets, a midterm on February 7, and a final exam. The problem sets as well as the slides from lectures will be posted on this site. The lectures will be posted after they
are given. See Lecture 1 for more information about the course.
A CHALK website has been set up for this class. Homeworks can be submitted electronically to the website. In fact, I will require this for all problem sets after the first. (The first can be returned
by hard copy in class or electronically). However, there is a strict deadline for submission of the HW which is the beginning of the next lecture.
I have decided not to update this webpage --- please go to the CHALK website for course information.
|
{"url":"http://www.math.uchicago.edu/~lawler/m345fmw11.html","timestamp":"2014-04-18T05:58:41Z","content_type":null,"content_length":"2220","record_id":"<urn:uuid:4d6c91c5-82fd-4442-94e5-1b1ec182c296>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does category theory help understanding abstract algebra?
I'm studying category theory now as a "scientific initiation" (a program in Brazil where you study some subjects not commonly seen by an undergrad), but as I've never studied abstract algebra before, it's hard to understand most examples and to actually do most of the exercises. (I'm using Mac Lane's Categories for the Working Mathematician and Pareigis' Categories and Functors.)
To solve this, my advisor recommended me to get S. Lang's Algebra as a reference, but I don't know if that's the most appropriate book and if it's better to get Lang and study algebra through
category theory or to study (with a different book and approach, maybe Fraleigh) algebra and then category theory.
PS: I'll have to study by myself (with my advisor's help), as I can't enroll in the abstract algebra course without arithmetic number theory.
abstract-algebra ct.category-theory
Studying category theory without a fair amount of prior experience with abstract algebra is like studying calculus without knowing how to graph a linear equation: it makes no sense. There are plenty of concrete topics in elementary abstract algebra that are not in the "standard curriculum" and don't need too much preparation and are a lot more "real" than category theory. For example, classification of finite groups of rigid motions of 3-dimensional Euclidean space; needs just basic group theory and linear algebra. Someone who knows your background should suggest an alternative topic. – BCnrd May 2 '10 at 0:11
huhm? abstract algebra is by far not the only source for motivating category theory, including proofs, constructions and notations. – Martin Brandenburg May 2 '10 at 0:26
The book I used in my first abstract algebra course was Dummit and Foote, which I thought was very well-written. I think I would go so far as to say that it was the best introductory math book that I've used. – Peter Samuelson May 2 '10 at 1:22
The "Location" field on your profile and your use of the term "scientific initiation" makes me pretty sure that you attend the same mathematics department that I did. And your advisor's recommendation -- that somebody's first contact with algebra should be a graduate textbook by Lang -- makes me all but sure who your advisor is. My advice to you is to take your advisor's advice with hefty doses of salt. Presuming I'm correct in my guess, he has very good taste and knows a lot of math, but he doesn't always set realistic goals for his students. – Pietro KC May 2 '10
Sure, category theory helps in understanding abstract algebra, especially graduate algebra. But you know what is even more helpful in understanding graduate algebra? Undergraduate algebra. Learning some categorical language in your first (undergrad) algebra course is not as crazy as some might think (this was in fact the perspective taken in the undergrad algebra course I took at U. Chicago; it worked out okay), but learning category theory before you learn undergraduate algebra is about as far-fetched as any study plan I have ever heard of. – Pete L. Clark May 2 '10
5 Answers
Silva, you're studying category theory way too early. You don't have a background yet that can give you an appreciation for the point of what you're being asked to understand, so probably at best you can follow things line by line (maybe not even that much?) but can't get anything like a bird's eye view of the point of it all. This is like trying to teach abstract linear algebra to someone who hasn't yet had any high school algebra. The motivation is nowhere to be found.
Ask your advisor what he considers to be some of the important inspiring examples for category theory. If you don't understand what those examples are, that's a pretty concrete illustration that something is wrong (but then it seems like you already realize it). Then go speak to someone else who can suggest other topics more closely aligned with your background or that start at a more basic level.
To answer the question, yes category theory gives a lot of insight into the nature of abstract algebra, but only after you've studied enough of the subject on its own for certain basic intuitions (like the meaning and significance of kernel or quotient constructions) to be in your head first.
You can also give a try to other books on category theory which are more accessible than, say, MacLane's classics. Here they go:
1. Conceptual mathematics: a first introduction to categories by Lawvere and Schanuel
2. Arrows, structures, and functors: the categorical imperative by Arbib and Manes
3. Category Theory by Awodey
Lawvere & Schanuel require almost no math background at all; Arbib & Manes and Awodey are somewhat more advanced but should be at least partially accessible to a math undergraduate without much knowledge of abstract algebra.
Lawvere and Rosebrugh's Sets for Mathematics is also a great read. It has set theory axioms interspersed throughout, but works nicely as an introduction to categories. It's a very gentle read. – AndrewLMarshall May 2 '10 at 4:56
Lawvere and Schanuel is amazing; I second the recommendation. – Qiaochu Yuan May 2 '10 at 6:33
Awodey is excellent and should be well within reach. It is often tackled by computer scientists and logicians with minimal (or even no) knowledge of algebra. – Justin Hilburn May 2 '10 at 21:36
A softcover version of Awodey's book is supposedly becoming available this month. It's less than half the price of the hardcover version. – supercooldave May 15 '10 at 13:37
@supercooldave FINALLY, AMEN. – Andrew L Jun 9 '10 at 4:51
I can recommend several much better sources that will ease your transition into both abstract algebra and category theory, Silva.
Lang is far too difficult for a first brush with abstract algebra-and MacLane is even MORE difficult for a neophyte in algebra. Category theory has VERY far reaching conceptual implications for most of modern mathematics, not just algebra. So no, in principle, you don't have to learn abstract algebra to learn it-but that's where most mathematicians have infused it. This is because it's natural to organize types of structures into categories and that's really what algebra is all about: types of structures, i.e. sets with binary relations on them.
There are a legion of great abstract algebra texts, but my favorite is E.B. Vinberg's A COURSE IN ALGEBRA, available through the AMS. It takes a very concrete, geometric approach and builds an extraordinary amount of algebra from first principles all the way up to the elements of commutative algebra, Lie algebras and groups and multilinear algebra. It will help you learn a great deal of algebra very quickly and without the confusion of learning category theory simultaneously. Another geometrically flavored-but a bit more challenging-book is the classic ALGEBRA by Michael Artin. Indeed, I think the 2 books complement each other very nicely. Mastering both books will give you a very good working knowledge of algebra and you'll be more than ready to tackle Lang's book after that.
As for category theory, the best introductory text I know is CATEGORY THEORY by Steven Awodey. Gentle, rigorous and masterly, it's the best book for undergraduates and the only one I'd use for a beginning course in category theory for students that don't have strong backgrounds in algebra. It's pricey, but totally worth it. One other very good-and short-book you should look for and I heartily recommend is T.S. Blyth's CATEGORIES-a terrific short introduction for any student with good mathematics background that wants just the basics in category theory. It's REALLY hard to find now, but if you can get a copy, by all means, do so.
That should help you out. Good luck!
Vinberg's indeed pretty good and readable. As for categories per se, see two other books in my answer in addition to Awodey. – mathphysicist May 2 '10 at 0:29
There is the book Algebra: Chapter 0 by Paolo Aluffi that might fit your needs. It is a textbook on algebra (as the title says), but it uses the language of category theory from the beginning. Category theory is mostly used to motivate definitions using universality properties.
It is only in the last two chapters that the author introduces more advanced concepts from category theory (functors, abelian categories, etc.).
Already mentioned it, Alexander. It looks pretty good, but I personally think it's a gentle graduate level text. It would be pretty hard for someone without much mathematical training to use. – Andrew L May 2 '10 at 6:59
If you mentioned it already, I'm sorry. I cannot find a reference to the book in your answer, however. The text is designed for graduate students, but I think that it is readable for undergraduates as well. I used this book to learn abstract algebra for the first time, and at least for me it worked. – Alexander Noll May 2 '10 at 11:52
+1 I like Aluffi's book very much. I would have been glad to have known it earlier. I like Goldblatt's "Topoi", too. (Maybe not quite adequate because topoi are related more to geometry, set theory and logic than to algebra. Nevertheless, I found it a gentle (general) introduction.) – Hans Stricker Sep 13 '11 at 15:02
Sounds like you might want to petition for an exception to the prerequisites. I don't think Lang's Abstract Algebra is probably your best bet (stick to something decidedly undergraduate -- maybe Gallian?), nor do I think that trying to digest either all of abstract algebra or all of category theory is your best bet. I'd aim for one major result in abstract algebra which has an analogous statement in a variety of other categories, and then see what carries over to the category-theoretic framework. One idea would be to understand the classification of finite abelian groups in your abstract algebra work, and try to understand how the result and the proof techniques carry over/generalize.
p.s. The answer to the title question is definitely yes. :)
Edit: Let me add on what I think is almost certainly the best place for you to start on categories, which is Lawvere and Schanuel's "Conceptual Mathematics: A First Introduction to Categories" (double edit: which I see mathphysicist also listed). In fact, with this book in mind, it's actually the abstract algebra part of your project that now sounds the most daunting -- is that negotiable? Their discussion of Brouwer's fixed-point theorem would make an excellent topic.
I'm very surprised there are still mathematics programs across the world where undergraduates don't learn abstract algebra, Carn. Even a small amount-even a group theory course for undergraduates. That's a HUGE obstacle for any undergraduate preparing for graduate studies to overcome even with an instructor's help. – Andrew L May 2 '10 at 2:04
Andrew, I think you misunderstood the question. The point is not that algebra is not part of the undergrad curriculum, but rather that the OP is just starting his undergrad and hasn't gotten to it yet. What his sci-init covers, and is not part of the usual curriculum, is a substantial amount of category theory. His advisor, if I guessed right above, works in an area which makes extensive use of it. The OP's curriculum includes a semester course in group theory and another which covers some commutative algebra plus fields and Galois theory. – Pietro KC May 2 '10 at 4:22
@Pietro Ok, gotcha. But still asking a lot of your student. LANG as a reference for just learning algebra?!? This advisor either went to Yale when he was 17 or just doesn't care much..... – Andrew L May 2 '10 at 7:02
|
{"url":"http://mathoverflow.net/questions/23213/does-category-theory-help-understanding-abstract-algebra/23216","timestamp":"2014-04-21T05:01:36Z","content_type":null,"content_length":"94193","record_id":"<urn:uuid:c6ec3845-d199-488a-9dc3-45bb7034e5e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A 600-kg car is going over a curve with a radius of 120 meters that is banked at an angle of 25 degrees with a speed of 30 meters per second. The coefficient of static friction between the car and
the road is 0.3. What is the normal force exerted by the road on the car? a) 7240 N b) 1590 N c) 5330 N d) 3430 N e) 3620 N Note: The speed is not Vmax (roughly 32 m/s). I've been working under the
assumption that mgcos(theta) will be increased by the frictional component acting downward. I simply can't seem to get the solution.
http://i.imgur.com/kkaO7.png — the triangles that the dots/circles/particles are on are cross sections of the curve in the road. The triangles coming from the dots/circles/particles are force vectors and their components parallel and perpendicular to the ramp.
Fn = normal force; Fc = centripetal force; Fg = force of gravity; Fgp = force of gravity's component perpendicular to the road; Fcp = centripetal force's component perpendicular to the road.
Fn = Fgp - Fcp, with Fgp = cos(25)Fg and Fcp = -sin(25)Fc. Fg = mg -> (600kg)(9.81m/s/s); Fc = (mv^2)/r -> (600kg)(30m/s)(30m/s)/(120m).
Fn = cos(25)(600kg)(9.81m/s/s) - sin(25)(600kg)(30m/s)(30m/s)/(120m), which is approximately 3432.7N, so choice D. The centripetal acceleration pulls the particle (car) towards the center of the circle; this decreases the normal force from the road, because the car is not pushing as hard into the ramp.
You can also use vector projection ( http://en.wikipedia.org/wiki/Vector_projection#Vector_projection_2 ) to project the centripetal acceleration along a unit vector perpendicular to the road. To do this, you would define (note: ^ denotes a unit vector, ~ denotes a vector):
^r = the unit vector along the ramp
^p = the unit vector perpendicular to the ramp
~Fc = the vector of centripetal acceleration, pointing in the -x direction
Fc = magnitude of ~Fc, which is just (mv^2)/r
||~A|| = magnitude of vector A
Then ^r = (cos(25))^i + (sin(25))^j, ^p = (sin(25))^i + (-cos(25))^j, and ~Fc = (-Fc)^i. Projecting vector ~A along ~B: ( (~A dot ~B)/(||~B||^2) )*~B, so projecting ~Fc along ^p gives ( (~Fc dot ^p)/(||^p||^2) )*^p. Note: unit vectors are defined to have a magnitude of 1, therefore ||^p|| = 1, and (~Fc dot ^p) = -sin(25)(Fc). Therefore the centripetal force subtracts from the net force "pushing" into the road.
The force of friction only acts tangential to the curve of the road, and therefore the dot product of this frictional force with the unit vector that is PERPENDICULAR to the ramp results in 0, meaning it does not affect the normal force. The final answer is D.
@neugauss This is not the right answer. Sorry, but I have no time right now to point exactly where your equations are incorrect. This very question has already been answered on OS by
viniterranova in the question by jgonzales8.
i'm struggling to see how my answer is incorrect. whenever you have the time to tell me what exactly is wrong with it, i'd love to hear. viniterranova's solution is in broken english and it
insists that the value of gravity is unknown. however, the idea is silly because gravity near the surface of the earth is constant?? i don't see how the gravity would be different at all? and
then he seems to solve for the "gravity" by assuming the answer to the question is 7240N? I don't see at all how he comes to that number? please tell me where and how i'm incorrect, because the
solution posted by viniterranova makes no sense to me at all.
Have a look at the drawing. [drawing: the three forces W, N, T acting on the car on the banked road] We have only 3 forces:
- W = mg, the weight, vertical downwards
- N, the normal force by the road, our unknown
- T, the friction force, parallel to the road, also unknown
Newton's 2nd law states that \(\vec W + \vec N + \vec T = m\vec a\). As the motion is known (steady circular), the acceleration is horizontal and known: a = v²/R. Now, as we want N and do not know the value of T, we project this relation on the axis normal to the road. It leads to: \(-mg\cos \theta+N+0=m\Large\frac{v^2}{R}\normalsize \sin\theta\) which leads to \(N=m(g\cos\theta+\Large\frac{v^2}{R}\normalsize \sin\theta)\). With g = 9.8 or 10 m/s², the nearest answer is A.
ah yes, your answer makes more sense than viniterranova's answer, and our answers are essentially the same. i disagree with the sign difference between your answer and mine, though. the normal force is from the road/ramp pushing back against the car with equal and opposite force that is being applied on it. the car will be applying some force on it -- toward the ramp -- and the centripetal acceleration will be applying some force as well -- AWAY from the ramp. the addition of the vector components of these two forces that are normal to the road/ramp will therefore be opposite in sign -- because one is pulling the car away from the ramp, and the other is pushing the car into the ramp. therefore, the final answer must be Fn = m[ cos(25)g - ( ((v^2)/R)sin(25) ) ] which results in approximately 3430N, or choice D.
"and the centripetal acceleration will be applying some force as well -- AWAY from the ramp. " This cannot be true. You are probably mistaking centripetal acceleration and centrifugal force. Fn =
m[ cos(25)g - ( ((v^2)/R)sin(25) ) ] is not possible because the faster the car, the greater Fn must be to help it change its velocity inwards. Your formula, with the minus sign, says that the
faster the car, the weaker the normal force. If the car is fast enough, Fn would become zero and the car would hover above the road: this is simply not possible.
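As an added sanity check (our note, not from the thread): with the correct plus sign, \(\frac{\partial N}{\partial v} = \frac{2mv}{R}\sin\theta > 0\), so the normal force grows with speed, exactly as the argument above requires.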
Incremental Java
Usually, expressions have no side effects. That is, they do not change the values of the variables in the expression. They only read from the variables during evaluation.
There are some operators that have side effects.
One of the most common operations in programs is increasing a variable's value by 1 or decreasing it by 1. Increasing by 1 is usually called incrementing. Decreasing by 1 is called decrementing.
Since we need to both read and write to the box for incrementing and decrementing, the thing we're incrementing/decrementing must be both an lvalue as well as an rvalue. The only expression that
is an lvalue (so far) is a variable. Thus, we can only increment and decrement variables.
Here's an example of pre-incrementing:
int i = 0 ;
++ i ; // Preincrement i
Notice that every Java statement ends in a semicolon. We've added a semicolon at the end.
The operator ++ goes before the variable when it is preincrementing. When you just have ++ i by itself, it increases the value of i by 1. Thus, the value of i was read in (it was 0), then incremented (to 1), and written back to i.
It's basically equivalent to i = i + 1.
However, preincrement gets very weird when you begin to use it in more complicated statements.
For example, look at:
int i = 0, j ;
j = ++ i ;
The effect of preincrementing is to add 1 to i just before evaluating. The side effect is that i has its value changed. In the example, we have the assignment statement j = ++ i.
Recall that we evaluate the RHS. But before we do that, we increment i to 1. Thus, the RHS evaluates to 1, and that is what is assigned to j. i has also been changed to 1 as a result of
evaluating ++ i.
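A minimal runnable sketch of this (the class name is ours, not from the lesson):

public class PreIncrementDemo {
    public static void main(String[] args) {
        int i = 0;
        int j = ++ i ;             // i becomes 1 first, then that 1 is assigned to j
        System.out.println(i) ;    // prints 1
        System.out.println(j) ;    // prints 1
    }
}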
What happens in this piece of code?
int i = 1, j ;
j = ++ i + ++ i;
This looks more complicated. First, we note that ++ has higher precedence than +. Thus, the following parenthesization occurs internally when Java runs the program: ((++i) + (++i)).
We evaluate the left ++i first. This increments i to 2, and we have: (2 + (++i)). Then we evaluate the other ++i. At this point i is already 2, so incrementing results in 3. We have: (2 + 3). The evaluation results in 5.
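Again as a runnable sketch (class name ours). Note that Java, unlike C, guarantees left-to-right evaluation of the operands, so this result is well-defined even though the style is discouraged:

public class DoublePreIncrementDemo {
    public static void main(String[] args) {
        int i = 1;
        int j = ++ i + ++ i ;      // evaluates as (2 + 3)
        System.out.println(i) ;    // prints 3
        System.out.println(j) ;    // prints 5
    }
}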
Notice that we deviate from the usual evaluation rules for expressions. In particular, we don't substitute the current values of each variable into the expression. We must, instead, substitute
only after the preincrement has been done.
In general, you should avoid writing expressions with more than one preincrement (or predecrement), since many programmers find it hard to read and understand.
convergence of cesaro averages
February 22nd 2010, 12:42 AM #1
If the sequence $\{a_n\}$ converges to $a$, and if we have a sequence defined as
$S_n = \frac{a_1+a_2+\cdots+a_n}{n}$
then $S_n$ also converges to $a$.
How is that possible? Isn't it supposed to converge to $0$?
Try to set...
$a_{n} = a - b_{n}$ with $\lim_{ n \rightarrow \infty} b_{n}=0$ (1)
... then compute $S_{n}$ as a function of the $b_{n}$ and finally 'push' n to infinity...
Kind regards
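Filling in the computation the hint points to (our addition, not part of the original reply): $S_n = \frac{1}{n}\sum_{k=1}^{n}(a - b_k) = a - \frac{b_1+b_2+\cdots+b_n}{n}$. Given $\varepsilon > 0$, choose $N$ with $|b_k| < \varepsilon$ for all $k > N$; then $\left|\frac{b_1+\cdots+b_n}{n}\right| \le \frac{|b_1|+\cdots+|b_N|}{n} + \varepsilon$, and the first term vanishes as $n \to \infty$. So the average of the $b_k$ tends to $0$, and $S_n \to a$.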
Relative Velocity
Learning About Relative Velocity
All motion, and even time, is relative to an observer. A person walking forward at 4 km/h on a ship will be seen as moving faster by an observer on the shore. This occurs because the speed of the ship is added to the person's speed when the situation is viewed relative to the shore. If the ship and the person are moving in different directions, a situation might occur where the person thinks he is moving forward while actually moving backwards. When solving problems that involve relative velocity (an object moving on another moving object, etc.), simply add the velocities of the two moving objects. Let's use the situation of a person walking on a ship to demonstrate relative velocity techniques.
Let the velocity of the person relative to the ship be Vps.
Let the velocity of the ship relative to the ground be Vsg.
Let the velocity of the person relative to the ground be Vpg.
Vpg = Vps + Vsg
The above formula can be used for just about any relative velocity situation. When you are given Vpg and Vsg and you need to find Vps, rearrange the equation: Vps = Vpg - Vsg. It is important to recognize what a given velocity is relative to (the ground, the water, the boat, etc.) and to make it clear in the variable's name. Also, the example above assumed that the water was still -- if the water was moving you would have had to add its velocity to the velocity of the boat.
An airplane is flying with a constant velocity of 200 km/h [S50°W]; a wind of 15 km/h [N4°W] is blowing and changing the plane's velocity. What velocity will the control tower, on the ground, see the plane moving at?
Let's define our variables. The plane produces enough energy output to move at 200 km/h [S50°W] relative to the air, so that
Vpa = 200 km/h [S50°W]
The velocity of the air relative to the ground is 15 km/h [N4°W], so that
Vag = 15 km/h [N4°W]
And finally, the velocity of the plane relative to the ground is:
Vpg = Vag + Vpa
Vpg = 15 [N4°W] + 200 [S50°W]
Vpg = 15 cos 4°[N] + 15 sin 4°[W] + 200 cos 50°[S] + 200 sin 50°[W]
Vpg = -14.9[S] + 1[W] + 128.5[S] + 153.2[W]
Vpg = 154.2[W] + 113.6[S]
Let's find the speed:
|Vpg|² = 154.2² + 113.6²
|Vpg|² = 36682.6
|Vpg| = 191.5 km/h
And the angle of the plane's flight (β):
tan β = 113.6 / 154.2
tan β = 0.7367
β = 36°
The plane's velocity relative to the ground will be approximately 190 km/h [W36°S].
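As a numeric cross-check, here is a small sketch in Java (the class name and the south/west component convention are ours, not from this page) that adds the two velocities component by component and recovers the ground speed and bearing:

public class RelativeVelocityDemo {
    public static void main(String[] args) {
        // Vpa: 200 km/h [S50°W] -> south and west components
        double planeSouth = 200 * Math.cos(Math.toRadians(50));
        double planeWest  = 200 * Math.sin(Math.toRadians(50));
        // Vag: 15 km/h [N4°W] -> north counts as negative south
        double windSouth = -15 * Math.cos(Math.toRadians(4));
        double windWest  =  15 * Math.sin(Math.toRadians(4));
        // Vpg = Vag + Vpa, added component by component
        double south = planeSouth + windSouth;  // about 113.6
        double west  = planeWest  + windWest;   // about 154.2
        double speed = Math.hypot(south, west); // about 191.5 km/h
        double beta  = Math.toDegrees(Math.atan2(south, west)); // about 36 degrees
        System.out.printf("Vpg = %.1f km/h [W%.0f°S]%n", speed, beta);
    }
}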
Questions involving relative velocity are simple once you are comfortable with vectors. When solving a relative velocity question you have to take into account which velocity affects which, and you also have to represent relativity in your variable names (let Vps represent the velocity of a person relative to a ship, etc.).
Errors with program
12-16-2004 #1
I am taking 1000 random integers and putting them in an array of 1000. Then I am copying the array to another and sorting it least to greatest so I can do a binary search. Then with the other array I am doing a sequential search. The user types in a number to search the array for. Then we have to print the number searched, where in the array it was found, and how many times it was scanned.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int BIN_NUM, SEQ_NUM;
int arraycpy(int *ia, int *ib);
int compare (void *a, void *b); /* function prototype */
int bin_search (int *data, int n, int key);
int seq_search(int *data, int n, int key);
int arraycpy(int *ia, int *ib) {
int x;
while((ib[x]=ia[x])!='\0') {
x++;
}
}
int main() {
int x, y, z, ia[1000], ib[1000], key;
srand(time(NULL)); /* seed random # generator */
for(x=0; x<1000; x++) { /* select 1000 numbers */
y=rand()%1000+1; /* generate random numbers from 1-1000 */
printf("\nEnter an integer to search the array or -1 to quit:\n");
scanf("%d", &key);
while(key!=-1) {
qsort(ia, 1000, sizeof(int), compare);
printf("%d", z);
return 0;
int compare(void *a, void *b) { /* function to sort array */
return *(int*)a-*(int*)b;
int bin_search (int *data, int n, int key) {
int found, midpoint, first, last, BIN_NUM;
found = 0;
first = 0;
BIN_NUM = 0;
last = n - 1;
while ( ( first <= last ) && ! found ) {
midpoint = (first + last) / 2;
if ( data[midpoint] == key )
found = 1;
else if ( data[midpoint] > key )
last = midpoint - 1;
else
first = midpoint + 1;
}
if ( found )
return (midpoint);
return (-1);
int seq_search(int *data, int n, int key) {
int found, i, SEQ_NUM;
i = 0;
found = 0;
SEQ_NUM = 0;
while ( ! found && ( i < n ) ) {
if ( data[i] == key )
found = 1;
else
i++;
}
if ( found )
return (i);
return (-1);
And your question is?
Oops, I don't know what I should do/use to get the arrays into the function arraycpy, so I can copy the one array to the other.
The error in your code lies here:
int bin_search(int *data, int n, int key);
The function takes 3 parameters, but you are only sending one.
Edit: More errors.
Another is with qsort(). If you read this reference you should know that the fourth parameter takes const void, not void. For example:
int compare (const void *a, const void *b);
Another error is with arraycpy(). You define the return value as int, but never return anything.
- Stack Overflow
Last edited by Stack Overflow; 12-16-2004 at 05:42 PM. Reason: Displaying more errors.
Guess I don't understand.
Let me break it down. You ask for three parameters in a function body. You can't send it one, and expect it to understand.
For example, there is something terribly wrong with this code:
int add(int x, int y) {
return x + y;
}
int main() {
add(5); /* only one argument sent */
return 0;
}
Can you find it? If so, then you know I didn't send y, just x. If you understand, then sending one parameter to bin_search() when it is looking for 3 should also make sense why it is returning an error.
- Stack Overflow
And to answer your original question, you would just send ia and ib normally to arraycpy().
How is that possible? Because of the way C/C++ uses pointers and arrays, you can reference an array element either by subscripting or by * (the unary dereference operator). For example, you can pass strings into functions as pointers to characters or as character arrays.
To explore in greater detail, this is how you would use the following code:
int arraycpy(int *ia, int *ib);
// Using it
arraycpy(ia, ib);
Simple. ia and ib are arrays. Pointer arguments enable a function to access and change objects in the function that called it. Any operation that can be achieved by array subscripting can also be done with pointers.
- Stack Overflow
I'm still getting compile errors. Part of my problem with this program is that the search code was given to us. If I had written the code myself, this would be easier. This is what I have:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int BIN_NUM, SEQ_NUM;
int arraycpy(int *a, int *b);
int compare (void *a, void *b); /* function prototype */
int bin_search (int *data, int n, int key);
int seq_search(int *data, int n, int key);
int arraycpy(int *a, int *b) {
int x;
while((a[x]=b[x])!='\0') {
x++;
}
}
int main() {
int x, y, z, a, b, ia[1000], ib[1000], key;
srand(time(NULL)); /* seed random # generator */
for(x=0; x<1000; x++) { /* select 1000 numbers */
y=rand()%1000+1; /* generate random numbers from 1-1000 */
printf("\nEnter an integer to search the array or -1 to quit:\n");
scanf("%d", &key);
while(key!=-1) {
a=bin_search(ib, ia, key);
b=seq_search(ia, ib, key);
arraycpy(ia, ib);
qsort(ia, 1000, sizeof(int), compare);
printf("%d", z);
printf("\nEnter an integer to search the array or -1 to quit:\n");
scanf("%d", &key);
return 0;
int compare(void *a, void *b) { /* function to sort array */
return *(int*)a-*(int*)b;
int bin_search (int *data, int n, int key) {
int found, midpoint, first, last, BIN_NUM;
found = 0;
first = 0;
BIN_NUM = 0;
last = n - 1;
while ( ( first <= last ) && ! found ) {
midpoint = (first + last) / 2;
if ( data[midpoint] == key )
found = 1;
else if ( data[midpoint] > key )
last = midpoint - 1;
else
first = midpoint + 1;
}
if ( found )
return (midpoint);
return (-1);
int seq_search(int *data, int n, int key) {
int found, i, SEQ_NUM;
i = 0;
found = 0;
SEQ_NUM = 0;
while ( ! found && ( i < n ) ) {
if ( data[i] == key )
found = 1;
else
i++;
}
if ( found )
return (i);
return (-1);
I'm still getting compile errors. Part of my problem with this program is that the search code was given to us. If I had written the code myself, this would be easier. This is what I have:
Would it have killed you to actually include the errors?
Hope is the first step on the road to disappointment.
Ok, well, it's throwing errors because you are calling your search functions incorrectly. They are supposed to find a specific int in the data source, not an array of ints:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int BIN_NUM, SEQ_NUM;
int arraycpy(int *a, int *b);
// add const in your compare function to shut the compiler up.
int compare (const void *a, const void *b); /* function prototype */
int bin_search (int *data, int n, int key);
int seq_search(int *data, int n, int key);
int arraycpy(int *a, int *b) {
int x;
while((a[x]=b[x])!='\0') {
x++;
}
}
int main() {
int x, y, z, a, b, ia[1000], ib[1000], key;
srand(time(NULL)); /* seed random # generator */
for(x=0; x<1000; x++) { /* select 1000 numbers */
y=rand()%1000+1; /* generate random numbers from 1-1000 */
printf("\nEnter an integer to search the array or -1 to quit:\n");
scanf("%d", &key);
while(key!=-1) {
a=bin_search(ib, ia, key);
b=seq_search(ia, ib, key);
arraycpy(ia, ib);
qsort(ia, 1000, sizeof(int), compare);
printf("%d", z);
printf("\nEnter an integer to search the array or -1 to quit:\n");
scanf("%d", &key);
return 0;
int compare(const void *a, const void *b) { /* function to sort array */
return *(int*)a-*(int*)b;
int bin_search (int *data, int n, int key) {
int found, midpoint, first, last, BIN_NUM;
found = 0;
first = 0;
BIN_NUM = 0;
last = n - 1;
while ( ( first <= last ) && ! found ) {
midpoint = (first + last) / 2;
if ( data[midpoint] == key )
found = 1;
else if ( data[midpoint] > key )
last = midpoint - 1;
else
first = midpoint + 1;
}
if ( found )
return (midpoint);
return (-1);
int seq_search(int *data, int n, int key) {
int found, i, SEQ_NUM;
i = 0;
found = 0;
SEQ_NUM = 0;
while ( ! found && ( i < n ) ) {
if ( data[i] == key )
found = 1;
else
i++;
}
if ( found )
return (i);
return (-1);
Sure I can include errors, but won't they be different, as the compilers are different? Line 8.1 syntax error.
>but wont they be different as the compilers are different?
Yes, but we're experienced enough at reading errors that the differences don't matter. An error message (or warning) is immensely helpful in determining where and what the problem is, so you
should always post them if you have them.
>Line 8.1 syntax error.
When we ask for error messages, we mean a direct copy and paste of what your compiler spits out at you. If that's all your compiler says then you might consider getting another one with more
informative errors.
That's what my compiler says, and that is the compiler I'm supposed to use: IBM RS/6000 running AIX 5.2.0. Maybe you could recommend another compiler that I can use at home, since this is the only one I know?
>That's what my compiler says
What a shame. I guess it assumes you'll find the syntax error with just a general line number.
>IBM RS/6000 Running AIX 5.2.0.
If you don't have GCC then you might consider giving it a shot.
Sorry I should have been more clear. I SSH into that server. It isn't mine. I need a compiler for windows.
Limit as X approaches infinity
September 28th 2010, 01:33 PM #1
I was given some review questions for a test and one of them is $\lim_{x\to\infty}\left(2x-\sqrt{4x^2+2x}\right)$. I've tried multiplying by the reciprocal but it didn't seem to get me anywhere; I'm not really sure how to approach the problem in any other way. Is there something I'm missing?
But sorry, actually, now that I'm looking at it, how do I pull the 2x out of the radical? Looking at the answer, the limit approaches $-0.5$, but I don't see how one would end up with a 2 on the bottom of the fraction... sorry, I'm sure I'm just missing something obvious here.
$\displaystyle \frac{-2x}{2x+\sqrt{4x^2+2x}} = \frac{-2}{2 + \frac{\sqrt{4x^2+2x}}{x}} = \frac{-2}{2 + \sqrt{\frac{4x^2+2x}{x^2}}} = ....$
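Completing the last step (our addition, following the hint above): $\frac{\sqrt{4x^2+2x}}{x} = \sqrt{4 + \frac{2}{x}} \to 2$ as $x \to \infty$, so the whole expression tends to $\frac{-2}{2+2} = -\frac{1}{2}$.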
Boxed In
The diagram shows a rectangular box (a cuboid).
The areas of the faces are $3$, $12$ and $25$ square centimetres.
What is the volume of the box?
The areas of the faces of a cuboid are p, q and r. What is the volume of the cuboid in terms of p, q and r?
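A worked solution (ours, not from the NRICH page): if the edge lengths are $a$, $b$, $c$ with $ab = 3$, $bc = 12$ and $ca = 25$, then $(ab)(bc)(ca) = (abc)^2 = 3 \times 12 \times 25 = 900$, so the volume is $V = abc = \sqrt{900} = 30$ cubic centimetres. The same argument gives the general answer $V = \sqrt{pqr}$.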
Algebra Tutors
West Bloomfield, MI 48322
Master Certified Coach for Exam Prep, Mathematics, & Physics
...I look forward to speaking with you and to establishing a mutually beneficial arrangement in the near future! Best Regards, Brandon S.
Algebra 1 covers topics such as linear equations, systems of linear equations, polynomials, factoring, quadratic equations,...
Offering 10+ subjects including algebra 1 and algebra 2
Turbo decoding apparatus - Ikeda, Norihiro
The present invention relates to a turbo decoding apparatus decoding data encoded by turbo-encoding.
Error correction codes are applied to systems which are required to transmit data without any error, or to read data stored in large-capacity storages such as magnetic disks, CD-ROMs, etc. The error correction codes are applied to systems such as mobile communication systems, facsimile (FAX), and cash dispensers at banks.
The turbo code is known as one of the error correction codes having a high coding gain. The turbo code is employed for the third generation mobile phone system (3GPP: the 3rd Generation Partnership
Project) and is expected to be used for the next generation mobile phone system in the field of mobile communications.
FIG. 1 shows a diagram of an example of a configuration of a communication system including a turbo encoder and a turbo decoder. Information u having a length N is coded into pieces of coded data xa,
xb and xc by the turbo encoder. The data xa is the information u itself, the data xb is coded data into which the information u is convolution-coded, and the data xc is coded data into which the
information u is convolution-coded after interleaving. The pieces of coded data xa, xb, xc are affected by noises and fading when transmitting on a communication path and received as received signals
ya, yb, yc. In a receiver, the turbo decoder executes a decoding process with respect to the reception signals (reception data), whereby a decoded result u′ is acquired from the reception signals.
The symbols shown in FIG. 1 are expressed as follows.
Information (original data) u=[u1, u2, . . . , uN]
Coded data xa=[xa1, xa2, . . . , xak, . . . , xaN]
Coded data xb=[xb1, xb2, . . . , xbk, . . . , xbN]
Coded data xc=[xc1, xc2, . . . , xck, . . . , xcN]
Reception data ya=[ya1, ya2, . . . , yak, . . . , yaN]
Reception data yb=[yb1, yb2, . . . , ybk, . . . , ybN]
Reception data yc=[yc1, yc2, . . . , yck, . . . , ycN]
Decoded result (decoded data) u′=[u′1, u′2, . . . , u′N]
FIG. 2 illustrates an example of a configuration of the turbo decoding apparatus (turbo decoder). The turbo decoding apparatus, to start with, uses a first component decoder DEC 1 in order to decode
the reception signals ya, yb among the reception signals ya, yb and yc. Next, a second component decoder DEC 2 executes the decoding process by using a likelihood of a decoded result of the DEC 1 and
using the reception signal yc. On this occasion, the likelihood of the decoded result of the DEC 1 is subjected to an interleaving process by an interleaver and input to the DEC 2. A likelihood of
the decoded result of the DEC 2 is subjected to a deinterleaving process by a deinterleaver and input again to the DEC 1. The decoded result u′ is decoded data obtained by 0-or-1 determination with
respect to the deinterleaved result of the DEC 2. A characteristic of an error rate is improved by repeating the decoding process. Therefore, the turbo decoding apparatus executes the decoding
process in a way that repeats the decoding operation a predetermined number of times.
Note that FIG. 2 illustrates, in terms of explaining the principle, two pieces of component decoders (DEC 1 and DEC 2), the interleaver, and the deinterleaver as the components of the turbo decoding
apparatus, however, a general scheme in terms of a hardware configuration is that one element decoder that functions as both of the interleaver and the deinterleaver is employed.
A Conventional Example 1 will be described for a turbo decoding apparatus to which Maximum A Posteriori Probability Decoding (MAP) is applied; MAP is generally employed as a component decoder of a turbo decoding apparatus.
FIG. 3 shows a block diagram of a turbo decoding apparatus to which a MAP decoder is applied. A decoding procedure (sequence) using the MAP is given below. A Random Access Memory (RAM) for communication path values can save the reception signals ya, yb, yc and can output the reception signals in a way that properly changes the output order.
In FIG. 3, for example, when a coding rate R is given by R=½ and N is an information length, the original information u, the coded data xa, xb, and the reception signals ya, yb are as follows.
Original Information:
u=[u1, u2, u3, . . . , uN]
Coded Data:
xa=[xa1, xa2, xa3, . . . xak, . . . , xaN]
xb=[xb1, xb2, xb3, . . . , xbk, . . . , xbN]
Reception Data:
ya=[ya1, ya2, ya3, . . . , yak, . . . , yaN]
yb=[yb1, yb2, yb3, . . . , ybk, . . . , ybN]
Note that the information length N includes a tail bit added on the side of the turbo encoder.
In FIG. 3, a shift-probability calculation unit, when receiving the reception data (yak,ybk) at time k, calculates each of the following probabilities as a shift probability γs, k (where s=0 to 3):
probability γ0, k when (xak,xbk) is (0,0);
probability γ1, k when (xak,xbk) is (0,1);
probability γ2, k when (xak,xbk) is (1,0); and
probability γ3, k when (xak,xbk) is (1,1).
A backward-probability calculation unit calculates a backward-probability βk−1(m) in each status m (m=0 to 7) at time k−1 by use of a backward-probability βk(m) and the shift-probability γs, k (s=0
to 3) at the time k, and stores the backward-probability in a backward-probability RAM (memory). Hereinafter, the shift-probability calculation unit and the backward-probability calculation unit
repeat the calculations at the time k given by k=k−1, then calculate the probabilities with respect to the time k=N through k=0, and stores the backward-probability βk(m) corresponding to each of the
times k=0 through N in the backward-probability RAM.
Thereafter, the shift-probability calculation unit and a forward-probability calculation unit calculate a forward-probability a1,k(m) with the original data uk being "1" and a forward-probability a0,k(m) with the original data uk being "0" in each status m at the time k, by use of the forward-probability a1,k−1(m) with the original data uk−1 being "1", the forward-probability a0,k−1(m) with the original data uk−1 being "0", and the shift-probability γs,k at the time k.
A joint-probability calculation unit calculates a probability γ1,k(m) with the k-th original data uk being “1” in a way that multiplies a forward-probability a1,k(m) in each status m at the time k by
the backward-probability βk(m) stored in the backward-probability RAM. Similarly, the joint-probability calculation unit calculates a probability γ0,k(m) with the original data uk being “0” by use of
the forward-probability a0,k(m) in each status m at the time k and the backward-probability βk(m).
The a posteriori probability calculation unit obtains both a total sum Σm γ1,k(m) of the probabilities of being "1", by adding the probabilities γ1,k(m) over each status m at the time k, and a total sum Σm γ0,k(m) of the probabilities of being "0", by adding the probabilities γ0,k(m) over each status m at the time k, and outputs a logarithmic likelihood (a posteriori probability L(u1k)) given by the following formula (1):
L(u1k)=log[Σm γ1,k(m)/Σm γ0,k(m)] (1)
An external information likelihood calculation unit removes the a priori likelihood and the communication path value, which are contained in the a posteriori probability L(u1k) at the time of inputting, and thus calculates an external information likelihood Le(u1k). The external information likelihood Le(u1k) is stored in an interleave RAM, then interleaved, subsequently output as an a priori likelihood L(u1k′) for the next decoding process, and fed back to the shift-probability calculation unit.
The first half of the decoding process (a single MAP decoding process) of the turbo decoding is finished in the way described above. Next, the interleaved reception data ya and the a priori likelihood L(u1k′) acquired in the first half of the decoding process are treated as reception data ya′, and the MAP decoding process is executed by using ya′ and yc, thereby outputting a likelihood L(u2k). Thus, one turbo decoding process is terminated. Hereinafter, the turbo decoding process is repeated a predetermined number of times (e.g., 8 times), in which a decoded result uk=1 is output when the acquired a posteriori probability L(u8k)>0, and a decoded result uk=0 is output when L(u8k)<0.
In this type of MAP decoding method, the first half of the decoding process involves the process of calculating the shift-probability and the backward-probability, and storing the
backward-probability β in the backward-probability RAM, and the second half of the process involves calculating the shift-probability, the forward-probability, the joint-probability, the a
posteriori-probability, and the external information likelihood. Namely, in this decoding method, the backward-probability RAM (memory) is stored with the backward-probability βk(m) without storing
the forward-probabilities a1,k(m), a0,k(m).
In other words, the processes are as follows.
(1) Calculation of shift-probability
The shift-probabilities γs,k are obtained from the reception data (yak, ybk) received at the time k, with respect to all combinations of the cases where each piece of coded data (xa, xb) becomes "0" or "1". The calculation of the shift-probability is performed at the time of both (2) the calculation of the backward-probability and (3) the calculation of the forward-probability that follow.
(2) Calculation of Backward-Probability
Calculations of the backward-probabilities at the time N through the time 0 are sequentially conducted. The backward-probability βk (i.e., β0,k, β1,k) at the time k is calculated from the backward-probability βk+1 at the time k+1 and the shift-probability at the time k+1, and the results are stored in the memory (backward-probability RAM).
(3) Calculation of Forward-Probability
Calculations of the forward-probabilities at the time 0 through the time N are sequentially performed. The forward-probability a0,k when the data is "0" at the time k and the forward-probability a1,k when the data is "1" at the time k are calculated from the forward-probability a0,k−1 when the data is "0" at the time k−1, the forward-probability a1,k−1 when the data is "1", and the shift-probability γs,k at the time k.
(4) Calculation of Joint (Binding) Probability
A calculation of the joint-probability is executed simultaneously with (3) the calculation of the forward-probability. Probabilities when each status is “0” at the time k is calculated from a0,k and
β0,k, and probabilities when each status is “1” is calculated from a1,k and β1,k.
(5) Calculation of Posteriori Probability
The a posteriori probability uk and the likelihood of uk are calculated by adding the probabilities when each status is “1” at the time k and adding the probabilities when each status is “0”.
(6) Calculation of External Information Likelihood
The a posteriori probability calculation result uk at the time k contains the a priori likelihood and the communication path value when inputting, and hence these values are removed.
Decoding time (MAP decoding time) in the case of performing the calculations (1) to (6) once as the MAP decoding process is 2N. One turbo decoding process is completed with two MAP decoding
processes. Hence, the processing time for one turbo decoding process becomes 4N, and, if a repetition count of the turbo decoding process is “8”, the turbo decoding process time becomes 32N.
FIG. 4 shows a timing chart of the single MAP decoding process. In this case, a memory capacity for storing the backward-probability calculation results βk is required for all N nodes.
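To make the schedule concrete, the following is a highly simplified, structure-only sketch in Java (our illustration, not the patent's implementation; the class and the helper methods gamma and nextState are hypothetical, and normalization, the log domain, and the real trellis wiring are all omitted). It shows the Conventional Example 1 order of operations: a full backward sweep that stores β for every time step, followed by a forward sweep that computes α and the per-bit log-likelihood of formula (1):

public abstract class MapDecoderSketch {
    static final int STATES = 8;          // the 3GPP turbo code trellis has 8 states

    // gamma(k, s, s2): shift-probability of moving from state s to s2 at step k
    abstract double gamma(int k, int s, int s2);
    // hypothetical trellis helper: successor state of s for input bit
    abstract int nextState(int s, int bit);

    double[] decode(int n) {
        // (2) backward pass: beta[k][s] stored for ALL k, hence O(N) memory
        double[][] beta = new double[n + 1][STATES];
        beta[n][0] = 1.0;                 // assume the trellis terminates in state 0
        for (int k = n; k >= 1; k--)
            for (int s = 0; s < STATES; s++)
                for (int bit = 0; bit <= 1; bit++)
                    beta[k - 1][s] += gamma(k, s, nextState(s, bit)) * beta[k][nextState(s, bit)];

        // (3)-(5) forward pass: alpha needs only the previous column
        double[] llr = new double[n];
        double[] alpha = new double[STATES];
        alpha[0] = 1.0;                   // assume the trellis starts in state 0
        for (int k = 1; k <= n; k++) {
            double[] next = new double[STATES];
            double p1 = 0, p0 = 0;        // joint probabilities for uk = 1 / uk = 0
            for (int s = 0; s < STATES; s++)
                for (int bit = 0; bit <= 1; bit++) {
                    int s2 = nextState(s, bit);
                    double joint = alpha[s] * gamma(k, s, s2) * beta[k][s2];
                    next[s2] += alpha[s] * gamma(k, s, s2);
                    if (bit == 1) p1 += joint; else p0 += joint;
                }
            llr[k - 1] = Math.log(p1 / p0);   // formula (1)
            alpha = next;
        }
        return llr;
    }
}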
Next, a Conventional Example 2 will be described. The system of the Conventional Example 1 requires a large memory capacity for storing the backward-probability calculation results and is therefore not suited to implementation of the circuit as it is. Therefore, a method of reducing the memory capacity for storing the backward-probability calculation results has been proposed (e.g., Patent Document 1). FIG. 5 shows a timing chart per MAP decoding process based on the system of the Conventional Example 2.
The decoding procedure of the method according to the Conventional Example 2 described in the Patent Document 1 has the following differences in comparison with the Conventional Example 1.
(1) The information length N is segmented by a predetermined length L into a plurality of blocks, and the decoding is conducted in every segment L defined as a basic unit. In this case, the
information length N (time segments (section) 0 to N) is segmented into segments 0 to L1, L1 to L2, L2 to L3, . . . , Lx to N.
(2) The backward-probabilities are calculated backward in time from time N+Δ. At this time, out of the backward-probability calculation results for the respective periods of time N through 0, the memory is stored with the backward-probability calculation results (discrete β) at times Lx, . . . , L3, L2, one for every segment L, and also with the backward-probability calculation results (continuous β) for all times L1 to 0.
(3) Thereafter, the forward-probabilities in the periods of time 0 through L1 are calculated. At this time, the backward-probability calculation results (continuous β) in the periods of time 0
through L1, which are stored in the memory, are read, and the joint-probability, the a posteriori probability, and the external information likelihood with respect to each of the periods of time 0 to
L1 are calculated. Almost simultaneously with the start of calculating the forward-probabilities in the periods of time 0 to L1, the backward-probability calculation results (discrete β) at L2 are read from the memory, then the backward-probabilities in the periods of time L2 to L1 are calculated by setting those results as an initial value, and the backward-probability calculation results are stored in the memory.
(4) Hereinafter, the decoding is performed in every segment L down to N by repeating the processes (1) to (3).
The process time in the Conventional Example 2 becomes 2N by the single MAP decoding process in the same way as in the Conventional Example 1. Hence, if the repetition count is 8, the turbo decoding
process time becomes 32N.
In the Conventional Example 2, the memory capacity for storing the backward-probability calculation results is N/L+2L, in a way that takes account of avoiding a read-write access conflict on the RAM (memory) for storing the backward-probability calculation results.
Next, a Conventional Example 3 will be illustrated (e.g., Patent Document 1). The Conventional Example 3 is, unlike the Conventional Example 1, a backward-probability calculation method capable of reducing the process time and the memory capacity. As demerits, the system in the Conventional Example 3 slightly degrades the error-rate characteristic and entails implementing two backward-probability calculation units.
A decoding procedure of the method according to the Conventional Example 3 has the following differences from the Conventional Example 1. FIG. 6 shows a timing chart of one MAP decoding process in
the Conventional Example 3.
(1) The information length N is segmented by a predetermined length L into a plurality of blocks, and the decoding is conducted in every segment L defined as a basic unit. The information length N
(time segments 0 to N) is segmented into segments 0 to L1, L1 to L2, L2 to L3, . . . , Lx to N.
(2) A first backward-probability calculation unit calculates the backward-probabilities in the period of time L2 to 0. At this time, the calculation results in the period of time L2 through time L1 are not adopted because of their low reliability, and only the calculation results of the backward-probabilities in the time segment L1 through 0 are stored in the RAM (memory).
(3) Next, the calculation of the forward-probabilities in the time 0 to L1 is executed, and the joint-probability, the a posteriori probability, and the external information likelihood are calculated
by employing the calculation results of the forward-probabilities and the calculation results of the backward-probabilities stored in the memory.
(4) At a point of time when the first backward-probability calculation unit finishes calculating the backward-probabilities up to L1, a second backward-probability calculation unit starts calculating
the backward-probabilities in the periods of time L3 through L1. The calculation of the backward-probabilities in the time segment L2 to L1 is executed in parallel with the calculation of the
forward-probabilities in the time segment 0 to L1. In this case, the calculation results of the backward-probabilities in the time L3 through the time L2 are not adopted because of its low
reliability, and only the calculation results of the backward-probabilities in the time segment of L2 through L1 are stored in the RAM (memory).
(5) Subsequent to calculating the forward-probabilities in the segment 0 to L1, the calculation of the forward-probabilities in the segment L1 to L2 are executed, and the joint-probability, the a
posteriori probability, and the external information likelihood are calculated by employing the calculation results of the forward-probabilities and the calculation results of the
backward-probabilities stored in the memory by the second backward-probability calculation unit.
(6) Hereinafter, the processes are repeated for every time segment L, and the decoding is conducted down to N.
The process time in the Conventional Example 3 becomes 2N by the single MAP decoding process in the same way as in the Conventional Example 1, and, if the repetition count is 8, the turbo decoding process time becomes 32N. In the Conventional Example 3, a capacity of 2L is required for the RAM (memory) storing the calculation results of the backward-probabilities, when taking the write-read access conflict into consideration.
The Conventional Examples 2 and 3 are the methods for reducing a circuit scale by reducing the memory capacity as compared with the Conventional Example 1. On the other hand, a method described in
Patent Document 1 is given as a method of reducing the time required for decoding (Conventional Example 4). The system in the Conventional Example 4 is that the information length N is segmented into
M-pieces of blocks, and the MAP calculation operations are executed in parallel in the respective blocks, thereby reducing the process time.
The system in the Conventional Example 4 can be applied to each of the systems in the Conventional Examples 1 to 3, and the process time when applied to each system becomes 1/M. FIG. 7 shows a timing
chart in the case of applying the system of the Conventional Example 4 to the Conventional Example 2.
The Conventional Example 4 prepares MAP decoders as illustrated in FIG. 3, corresponding to the number of blocks M (MAP#1 to #M). The M-pieces of blocks are expressed as blocks N/M, 2N/M, . . . , N(M−1)/M, N.
A first MAP decoder (MAP#1) executes the decoding process in the Conventional Example 2 with respect to the block in a segment 0 to N/M, a second MAP decoder (MAP#2) executes the decoding process in
the Conventional Example 2 with respect to the block in a segment N/M to 2N/M, and MAP decoders MAP#3, . . . , #M execute the same decoding process.
Herein, each MAP decoder, when calculating the backward-probability, performs the calculation of the backward-probability by a predetermined extra amount Δ, and writes, in the backward-probability
calculation results obtained after calculating the extra portions, only the calculation results of the backward-probability with respect to the last segmented data to the memory. For example, the MAP
#1 calculates the backward-probabilities of N/M+Δ to 0, and stores the calculation results of the backward-probabilities of L1 to 0 in the memory. The backward-probability is calculated by use of the
calculation result of the backward-probability one or more times before, and hence the backward-probability β having a high reliability in the time N/M can be acquired by calculating the
backward-probabilities from backward by the extra amount Δ.
When applying the Conventional Example 4, the memory capacity for storing the calculation results of the backward-probabilities is given as follows.
(1) Case of being Applied to the Conventional Example 1: N/M*M=N
(2) Case of being applied to the Conventional Example 2: N/L+2LM
(3) Case of being Applied to the Conventional Example 3: 2LM
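For a rough sense of scale (our numbers, not from the patent): with N = 5120, L = 32 and M = 8, case (1) needs 5120 entries, case (2) needs 5120/32 + 2·32·8 = 160 + 512 = 672 entries, and case (3) needs 2·32·8 = 512 entries; raising M to 32 pushes case (2) to 160 + 2048 = 2208 entries, visibly eroding the saving.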
Assume a case where the turbo codes are used in a mobile phone system where fast transmission is carried out. For example, realizing a transmission speed of 100 Mbps requires decoding 100 Mbits per second.
Herein, for example, in the case of executing the turbo decoding process by employing a 100 MHz system clock while applying the system of the Conventional Example 2, 32 seconds are needed to decode one second's worth of data. Accordingly, 32 turbo decoders must be implemented, resulting in an enlarged circuit scale. By contrast, executing the fast decoding process by applying the method of the Conventional Example 4 enables the circuit scale to be reduced by decreasing the number of turbo decoders to be implemented. Further, the Conventional Example 1 requires a large memory capacity and is not suited to implementation of the circuit, so the Conventional Examples 2 and 3, capable of reducing the memory capacity, are considered to be applied preferentially in terms of reducing the circuit scale.
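To make the arithmetic explicit (our restatement of the figures above): one turbo decoding takes 32N clock cycles for N bits, i.e. 32 cycles per bit, so a single decoder at 100 MHz sustains only 100 MHz / 32 ≈ 3.1 Mbps, and 32 decoders are needed to reach 100 Mbps.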
[Patent Document 1] JP 2004-15285 A
[Patent Document 2] JP 3451246 A
Such a configuration is conceivable that the memory capacity is reduced by employing the Conventional Example 2, which does not deteriorate the error-rate characteristic from that of the Conventional Example 1, and the process time is reduced by use of the Conventional Example 4. In this case, the memory capacity needed for storing the calculation results of the backward-probabilities is given by N/L+2LM. Herein, when the value of M is increased for speeding up the process, the memory capacity approaches the value N of the Conventional Example 1, and the effect in reducing the memory capacity owing to the application of the Conventional Example 2 fades away. This derives from the necessity of providing the memories, reduced in the Conventional
Example 2, respectively to the plurality of MAP calculation units implemented in the Conventional Example 4. This is also applied to a case of reducing the memory capacity by adopting not the
Conventional Example 2 but the Conventional Example 3.
Thus, in a combination of the Conventional Examples 2 to 3 and the Conventional Example 4, when processed at a high speed, the effect in reducing the memory capacity owing to applying the
Conventional Examples 2 and 3 fades away. Therefore, this greatly affects the circuit scale of the turbo decoding device. In particular, this is a factor that hinders downsizing mobile terminal
device mounted with the turbo decoder.
It is an object of an embodiment of the present invention to provide a turbo decoder enabling a circuit scale to be reduced.
It is another object of an embodiment of the present invention to provide a turbo decoder enabling the circuit scale to be reduced while restraining an increase in process time.
An embodiment of a turbo decoding apparatus adopts the following means in order to achieve the objects.
That is, a first aspect of a turbo decoding apparatus, may comprise:
a backward-probability calculation unit that executes backward-probability calculation from time N to time 0 with respect to coded data having an information length N which is encoded with
a storage unit to store backward-probability calculation results extracted from a plurality of continuous backward-probability calculation results regarding a predetermined section, at intervals of n-piece (n is a natural number);
a forward-probability calculation unit that executes forward-probability calculation from time 0 to time N with respect to the coded data; and
a decoded result calculation unit that calculates a decoded result of the coded data through joint-probability calculation using forward-probability calculation results by the forward-probability
calculation unit and the backward-probability calculation results stored in the storage unit and backward-probability calculation results obtained through recalculation by the backward-probability
calculation unit.
According to the first aspect of the turbo decoding apparatus, backward-probability calculation results are stored in the storage unit at intervals of n-piece (n is natural number). Therefore, a
memory capacity is reduced. A circuit scale also is reduced.
The first aspect may be configured so that the joint-probability calculation is executed in parallel with the forward-probability calculation;
the decoded result calculation unit calculates, if a backward-probability calculation result at time i being one of the times 0 to N corresponding to a forward-probability calculation result at the
time i calculated by the forward-probability calculation unit is stored in the storage unit, a joint-probability at the time i by using the backward-probability calculation result at the time i
stored in the storage unit and the forward-probability calculation result at the time i;
the backward-probability calculation unit recalculates, if the backward-probability calculation result at the time i is not stored in the storage unit, the backward-probability at the time i based on
a shift-probability calculation result at time (i+1) for calculating a forward-probability at the time (i+1) and a backward-probability calculation result at the time (i+1) stored in the storage
unit; and
the decoded result calculation unit calculates the joint-probability at the time i by using the recalculated backward-probability calculation result at the time i and the forward-probability
calculation result at the time i.
According to the above-mentioned configuration, a shift-probability calculation result calculated for forward-probability calculation may be available for recalculation of a backward-probability
calculation. Therefore, shift-probability calculation for the recalculation may be omitted, and increase of processing time may be controlled.
A second aspect is a turbo decoding apparatus for decoding turbo-encoded data through repetition of element decoding. The second aspect of the turbo decoding apparatus may comprise:
M-number of element decoders each performing element decoding to one of M-pieces of divided data obtained by dividing turbo-encoded data having an information length N-bit by M; and
an interleaver/deinterleaver for alternately interleaving and deinterleaving M-pieces of results of element-decoding collectively,
wherein each of the M-number of element decoders includes:
a backward-probability calculation unit that executes backward-probability calculation with respect to one of the M-pieces of divided data;
a storage unit to store backward-probability calculation results extracted from N/M-pieces of backward-probability calculation results with respect to the one of the M-pieces of divided data at
intervals of n-piece;
a forward-probability calculation unit that executes a forward-probability calculation with respect to the one of the M-pieces of divided data; and
a decoded result calculation unit that outputs element-decoded results of the M-pieces of divided data through joint-probability calculation by using N/M-pieces of forward-probability calculation
results with respect to the one of the M-pieces of divided data and N/M-pieces of backward-probability calculation results obtained by reading from the storage unit or obtained through recalculation
by the backward-probability calculation unit.
A third aspect is a turbo decoding apparatus for decoding turbo-encoded data through repetition of element decoding. The third aspect of the turbo decoding apparatus may comprise:
M-number of element decoders each performing element decoding to one of M-pieces of divided data obtained by dividing turbo-encoded data having an information length N-bit by M; and
an interleaver/deinterleaver for alternately interleaving and deinterleaving M-pieces of results of element-decoding collectively,
wherein each of the M-number of element decoders includes:
a backward-probability calculation unit that executes backward-probability calculation with respect to one of the M-pieces of divided data;
a storage unit to divide one of the M-pieces of divided data into X-pieces of sections by a predetermined length L, to store backward-probability calculation results in each of X-th through second
sections within the X-pieces of sections for every predetermined length L discretely with intervals of the predetermined length L, and to store backward-probability calculation results extracted from
L-pieces of backward-probability calculation results with respect to a first section within the X-pieces of sections at intervals of n-piece;
a forward-probability calculation unit that executes forward-probability calculation with respect to one of the M-pieces of divided data; and
a decoded result calculation unit that calculates, with respect to the respective sections of the M-pieces of divided data, element-decoded results in the respective sections through
joint-probability calculation by using L-pieces of forward-probability calculation results and L-pieces of backward-probability calculation results corresponding thereto, wherein:
the decoded result calculation unit outputs the element-decoded result with respect to the first section through the joint-probability calculation by using L-pieces of forward-probability calculation
results in the first section calculated by the forward-probability calculation unit and L-pieces of backward-probability calculation results in the first section obtained by reading from the storage
unit or obtained through recalculation by the backward-probability calculation unit;
the backward-probability calculation unit executes the backward-probability calculation with respect to a next section by using the backward-probability calculation results regarding the next section
stored in the storage unit, calculates L-pieces of backward-probability calculation results regarding the next section, and stores backward-probability calculation results extracted from the L-pieces
of backward-probability calculation results regarding the next section at intervals of n-piece in the storage unit;
the forward-probability calculation unit executes forward-probability calculation with respect to the next section to obtain L-pieces of forward-probability calculation results regarding the next
section; and
the decoded result calculation unit outputs the element-decoded result with respect to the next section through joint-probability calculation by using the L-pieces of forward-probability calculation
results regarding the next section and L-pieces of backward-probability calculation results regarding the next section obtained by reading from the storage unit or obtained through recalculation by
the backward-probability calculation unit.
A fourth aspect is a turbo decoding apparatus for decoding turbo-encoded data through repetition of element decoding. The fourth aspect may comprise:
M-number of element decoders each performing element decoding to one of M-pieces of divided data obtained by dividing turbo-encoded data having an information length N-bit by M; and
an interleaver/deinterleaver for alternately interleaving and deinterleaving M-pieces of results of element-decoding collectively,
wherein each of the M-number of element decoders includes:
a backward-probability calculation unit that executes backward-probability calculation with respect to one of the M-pieces of divided data;
a storage unit to store backward-probability calculation results regarding one of the M-pieces of divided data;
a forward-probability calculation unit that executes forward-probability calculation with respect to one of the M-pieces of divided data; and
a decoded result calculation unit that calculates the element-decoded results through calculation of a joint-probability using a forward-probability calculation result and the backward-probability
calculation results, wherein:
the backward-probability calculation unit divides one of the M-pieces of divided data into a plurality of sections for every predetermined length L, executes backward-probability calculation
regarding a 2L-th bit through a first bit, and stores backward-probability calculation results extracted from backward-probability calculation results of the 2L-th bit through the first bit at
intervals of n-piece, in the storage unit;
the forward-probability calculation unit executes forward-probability calculation regarding the first bit through an L-th bit of the one of the M-pieces of divided data;
the decoded result calculation unit executes joint-probability calculation by using forward-probability calculation results regarding the first bit through the L-th bit and backward-probability
calculation results regarding the first bit through the L-th bit obtained by reading from the storage unit or obtained through recalculation by the backward-probability calculation unit, and outputs
element-decoded results regarding a section from the first bit to the L-th bit;
the backward-probability calculation unit executes backward-probability calculation from a 3L-th bit to a (L+1)th bit of the one of the M-pieces of divided data during backward-probability
calculation from a 2L-th bit to the first bit, and stores backward-probability calculation results extracted from backward-probability calculation results from the 2L-th bit to the (L+1)th bit at
intervals of n-piece, in the storage unit;
the forward-probability calculation unit executes forward-probability calculation from the (L+1)th bit to the 2L-th bit; and
the decoded result calculation unit executes joint-probability calculation by using forward-probability calculation results from the (L+1)th bit to the 2L-th bit and backward-probability calculation
results from the (L+1)th bit to the 2L-th bit obtained by reading from the storage unit or obtained through recalculation by the backward-probability calculation unit, and outputs the element-decoded
results regarding a section from the (L+1)th bit to the 2L-th bit.
According to the embodiments, a turbo decoding apparatus enabling a circuit scale to be reduced is provided.
Also, according to the embodiments, a turbo decoding apparatus enabling the circuit scale to be reduced while restraining an increase in process time is provided.
FIG. 1 is a diagram showing an example of a communication system including a turbo encoder and a turbo decoder.
FIG. 2 is a diagram showing an example of a configuration of the turbo decoder (turbo decoding apparatus).
FIG. 3 is a diagram showing an example of the configuration of the turbo decoding apparatus to which MAP is applied.
FIG. 4 is a timing chart showing a single MAP decoding process performed in a Conventional Example 1.
FIG. 5 is a timing chart showing a single MAP decoding process by a system of a Conventional Example 2.
FIG. 6 is a timing chart showing a single MAP decoding process by the system of a Conventional Example 3.
FIG. 7 is a timing chart when applying the system of a Conventional Example 4 to the Conventional Example 2.
FIG. 8 is a conceptual diagram of a joint-probability calculating process according to an embodiment of the present invention.
FIG. 9 is a diagram showing an example of blocks of circuit units to be added in the embodiment of the present invention.
FIG. 10 is a timing chart for recalculating backward-probabilities in portions that are not saved in a memory.
FIG. 11 is a diagram showing an example of a configuration of a turbo decoding apparatus of a Specific Example 1 of the embodiment of the present invention.
FIG. 12 is a diagram showing an example of the configuration of a turbo decoding apparatus of a Specific Example 2 of the embodiment of the present invention.
FIG. 13 is a diagram showing an example of the configuration of a turbo decoding apparatus of a Specific Example 3 of the embodiment of the present invention.
Outline of Embodiment
Each of the Conventional Examples 1-3 is characterized by executing the forward-probability calculation after storing the results of the backward-probability calculation at each time (time N to time 0) in the memory, and executing the joint-probability calculation by use of the forward-probability calculation results at each time and the backward-probability calculation results at each time stored in the memory. In the Conventional Examples 2 and 3, the memory is stored with the continuous calculation results of the backward-probabilities in the time segment (section) L1−0 (from the time L1 to the time 0).
A scheme in an embodiment of a turbo decoding apparatus is not that all of the calculation results of the backward-probabilities at each time are stored, but that the memory capacity is reduced by saving the calculation results of the backward-probabilities in a way that thins out the calculation results at intervals of n-time. In this case, as to a portion of the backward-probability calculation results that is not stored in the memory, the backward-probability is recalculated, at the point of time when the forward-probability calculation is executed, from the backward-probability calculation results that are stored in the memory.
Normally, recalculation of the backward-probability is performed by use of a transmission path value and external likelihood information, which are stored in the memory. The transmission path value and the external likelihood information are also the input values for calculating a forward-probability. Accordingly, when recalculating the backward-probability and calculating the forward-probability in parallel, a memory access conflict occurs. If the memory is accessed with a time shift in order to avoid the conflict, an increase in process time is brought about.
The input value for calculating the forward-probability and the input value for calculating the backward-probability of a portion saved on a thinning-out basis, however, are values lying close together in the bit string. Accordingly, by reviewing the calculation order, the same input value can be used at the same point of time in calculating both probabilities. Hence, the recalculation can be executed, without increasing the process time, simply by adding a backward-probability calculation unit that interpolates the thinned-out portion.
The embodiment of the turbo decoding apparatus will hereinafter exemplify a configuration in the case of saving the calculation results of the backward-probabilities at intervals of one time (alternately), in which the thinning-out quantity n is set such that n=1. FIG. 8 shows a conceptual diagram of the forward-probability calculation process and the re-executed backward-probability calculation process, FIG. 9 illustrates an example of blocks of circuit units added to the Conventional Example, and FIG. 10 shows a timing chart of the recalculation of the portion that is not stored in the memory.
In FIG. 8, let γ(i) be a shift-probability at time i, let a(i) be a calculation result of a forward-probability at the time i, and let β(i) be a calculation result of a backward-probability at the time i. A system according to the embodiment involves previously executing the backward-probability calculation in the same way as by the conventional system. In this embodiment, however, the calculation results of the backward-probability calculation are saved (buffered: stored) alternately (n=1). Herein, an assumption is that β(i+1), β(i+3), . . . are saved as the calculation results of the backward-probabilities (β(i+2), β(i) are thinned out).
An i-th joint-probability is calculated at the time i. Herein, the forward-probability a at each time is in a status where its calculation result has already been retained; namely, the forward-probability a(i) has been acquired. The backward-probability β(i), however, is not stored in the memory (backward-probability RAM) for storing the calculation results of the backward-probabilities, and hence a joint-probability of the forward-probability a(i) and the backward-probability β(i) cannot be calculated.
Therefore, the i-th joint-probability is not calculated at the timing of the time i, and the operation shifts to time i+1 while retaining the shift-probability γ(i) at the time i. A value of the
backward-probability β(i+1) is saved in the backward-probability RAM. Hence, the backward-probability β(i) can be calculated (recalculated) by employing the backward-probability β(i+1) and the
shift-probability γ(i+1) at the time i+1. The joint-probability at the time i can be calculated by using the backward-probability β(i) and the forward-probability a(i). Further, the joint-probability
at the time i+1 can be calculated by use of the backward-probability β(i+1) and the forward-probability a(i+1). Note that the forward-probability a(i+1) is calculated by employing the
forward-probability a(i) and the shift-probability γ(i+1) at the time i+1.
Since the shift-probability γ(i+1) is a value already calculated for obtaining the forward-probability a(i+1), the shift-probability need not be calculated anew for recalculating the backward-probability; this value is simply reused, and the process time can be reduced (an increase thereof can be restrained).
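The relationship just described can be illustrated with a short sketch. The Python fragment below is an illustrative model only, not the patented circuit: the 4-state trellis, the random branch metrics, and the helper names (backward_step, beta_ram, beta_at) are assumptions introduced for this sketch. It shows the two points the embodiment relies on: the backward-probability results are buffered only alternately (n=1), and a thinned-out β(i) is recovered with a single recursion step from the stored β(i+1) and the shift-probability γ(i+1), which is available anyway for the forward-probability update.

```python
# A minimal sketch (assumed model, not the patented circuit) of thinned-out
# storage with n = 1 and single-step recalculation in a max-log-MAP recursion.
import random

S = 4        # number of trellis states (assumed)
T = 16       # block length in trellis steps (assumed)
n = 1        # thinning-out quantity, as in the embodiment's example
random.seed(0)

# gamma[t][s_next][s_prev]: shift-probability (branch metric) at time t (assumed)
gamma = [[[random.uniform(-1.0, 1.0) for _ in range(S)] for _ in range(S)]
         for _ in range(T + 1)]

def backward_step(beta_next, gamma_t):
    """One backward recursion step: beta(i) from beta(i+1) and gamma(i+1).
    Max-log-MAP form: addition and selection of a maximum value only."""
    return [max(beta_next[sn] + gamma_t[sn][sp] for sn in range(S))
            for sp in range(S)]

# Pass 1: full backward recursion, storing only every (n+1)-th result, i.e.
# beta(i+1), beta(i+3), ... are kept while beta(i), beta(i+2), ... are thinned out.
beta = [0.0] * S            # proper initial value at the block end
beta_ram = {}               # stands in for the backward-probability RAM
for t in range(T - 1, -1, -1):
    beta = backward_step(beta, gamma[t + 1])
    if t % (n + 1) == 1:    # keep odd times only (n = 1)
        beta_ram[t] = beta

# Pass 2 (forward direction): when beta(i) is needed but was thinned out, it is
# recalculated from the stored beta(i+1) and gamma(i+1) with one extra step.
def beta_at(t):
    if t in beta_ram:
        return beta_ram[t]                                 # read from the RAM
    return backward_step(beta_ram[t + 1], gamma[t + 1])    # recalculation

print(beta_at(4))   # thinned-out value, recovered by one recursion step
print(beta_at(5))   # value read directly from the RAM
```

Because the recursion needs only additions and a maximum selection, the extra unit interpolating the thinned-out values stays small, which is the basis of the circuit-scale argument made later.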
The blocks illustrated in FIG. 9 include a backward-probability calculation unit 10 that receives the shift-probability γ from a shift-probability calculation unit 9 and also receives an input of the
calculation result of the backward-probability stored in a backward-probability RAM 11 (storage unit). An output terminal of the backward-probability calculation unit 10 is connected to one input
terminal of a selector 12, and the backward-probability RAM 11 is connected to the other input terminal of the selector 12 via a phase shifter 13. The selector 12 selects one of output terminals of
the backward-probability calculation unit 10 and of the backward-probability RAM 11, and connects the selected output terminal as the backward-probability β(i) to a joint-probability calculation
unit. Moreover, a forward-probability calculation unit 14 is connected to the shift-probability calculation unit 9 via a phase shifter 15, and outputs a forward-probability calculation result a(i).
With this scheme, the forward-probability calculation result a(i) and the backward-probability calculation result β(i) at the same point of time are input to the joint-probability calculation unit
(not shown) at the same timing.
The circuit illustrated in FIG. 9 operates according to a system clock as shown in FIG. 10, and the shift-probability calculation unit 9 outputs the shift-probability γ at each point of time, every
clock cycle (<1> of FIGS. 9 and 10). In contrast with this, the backward-probability calculation result β at a certain point of time is read at every 2-clock cycle from the backward-probability RAM
11 (<2> of FIGS. 9 and 10). In the example shown in FIG. 10, β(i+1) and β(i+3), stored at the intervals of one time point, are read at every 2-clock cycle.
In the second half of the time for reading one backward-probability calculation result (at the second half of the 2-clock cycle during the read-access to the RAM 11), the forward-probability a and
the backward-probability β at a certain point of time are output, the forward-probability a and the backward-probability β at a next point of time are output at the next clock cycle, and the
joint-probability at each point of time is calculated. In the example shown in FIG. 10, at output timing (timing t2) of the shift-probability γ(i+1) from the shift-probability calculation unit 9 (<1>
of FIG. 10), the backward-probability calculation unit 10 reads the backward-probability β(i+1) saved in the backward-probability RAM 11 (<2> of FIG. 10), and recalculates the backward-probability β
(i). The backward-probability calculation unit 10, at output timing (timing t3) of the next shift-probability γ(i+2), outputs the backward-probability β(i) (<3> of FIG. 10). The backward-probability
calculation unit 10 changes the output at every 2-clock cycle.
At the timing t3, the backward-probability β(i) is output as a selected output from the selector 12. On the other hand, the shift-probability γ(i) output at the timing t1 from the shift-probability
calculation unit 9 is input by the phase shifter 15 to the forward-probability calculation unit 14 with a 1-cycle delay. The forward-probability calculation unit 14 outputs the forward-probability a
(i) at the next output timing (timing t3). Hence, the forward-probability a(i) and the backward-probability β(i) are input at the same timing to the joint-probability calculation unit.
The backward-probability β(i+1) read as an output <2> from the backward-probability RAM 11 reaches, via the phase shifter 13, the selector 12 with the 1-cycle delay. The selector 12 changes over the
output at the intervals of one cycle. Hence, at timing t4, the backward-probability β(i+1) from the phase shifter 13 is output from the selector 12. On the other hand, the shift-probability is input
to the forward-probability calculation unit 14 with the 1-cycle delay behind the backward-probability calculation unit 10, and hence it follows that the forward-probability a(i+1) is output at timing
t4. Therefore, the forward-probability and the backward-probability at the same point of time are input at the same timing to the joint-probability calculation unit.
Thus, in the first half of the reading time of the backward-probability β(i+1), the backward-probability β(i) is recalculated, and, in the second half of the reading time, the backward-probability β
(i) and the forward-probability a(i) are output. Then, at the next cycle, the backward-probability β(i+1) and the forward-probability a(i+1) are output.
Thus, in this embodiment, two joint-probabilities can be calculated per 2-clock cycle, and it is understood that the backward-probability can be recalculated and the memory capacity can be reduced simply by adding a calculation unit that recalculates the backward-probability calculation results not saved in the memory (the backward-probability RAM), in a way that takes the configuration of processing the probability calculations on a time-division basis.
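To make the time-division schedule concrete, the toy model below steps through the FIG. 10 behavior for n=1. It is an assumption-level software illustration of a synchronous circuit, with invented placeholder values: each 2-clock slot spends its first cycle reading β(i+1) from the RAM and recalculating β(i), and its second cycle emitting the delayed pairs toward the joint-probability calculation.

```python
# Toy cycle model (illustrative assumptions only) of the n = 1 schedule:
# one RAM read per 2-clock slot, two (alpha, beta) pairs out per slot.
stored = {1: "b1", 3: "b3", 5: "b5"}      # beta RAM holds odd times only

def schedule(base_times):
    out = []
    for i in base_times:                   # i = even base time of a 2-cycle slot
        # first cycle: read beta(i+1); gamma(i+1) is produced anyway for the
        # forward update, so beta(i) is recalculated without extra memory traffic
        beta_i = f"recalc({stored[i + 1]}, g{i + 1})"
        # second cycle: emit beta(i) with alpha(i) (alpha is delayed one cycle by
        # the phase shifter); the selector then switches to the RAM value beta(i+1)
        out.append((f"a{i}", beta_i))
        out.append((f"a{i + 1}", stored[i + 1]))
    return out

for pair in schedule([0, 2, 4]):
    print(pair)    # two joint-probability inputs per 2-clock slot
```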
In the Conventional Example 2 or 3, when the arithmetic operation is performed at every system clock (per clock cycle), the write-access to the memory (the backward-probability RAM) stored with the backward-probability calculation results and the read-access for calculating the joint-probability by reading the backward-probability calculation results stored therein occur simultaneously. Hence, the avoidance of the access conflict entails separating the write-access from the read-access by taking a dual configuration of the memories (the backward-probability RAMs).
A system according to this embodiment is that the backward-probability calculation results are thinned out at the intervals of n-time points and thus stored in the memory. This scheme eliminates the necessity of taking the dual-memory configuration, because the writing operation is performed once every n+1 cycles and the reading operation once every n+1 cycles.
This scheme is combined with the point that the memory saving capacity becomes 1/(n+1), resulting in an effect of reducing the memory capacity to 1/(2(n+1)). The increase quantity of the circuit scale is equivalent to an n-unit increase in the number of the backward-probability calculation units for performing the recalculation. The calculation of the backward-probability is attained by addition and selection of a maximum value, and hence the circuit configuration is comparatively simple. Accordingly, the increase quantity of the circuit scale is sufficiently small for the reduction quantity of the memory capacity.
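These ratios can be checked with a back-of-the-envelope computation; the block size is an assumed value and only the number of stored results is counted, ignoring word width.

```python
# Illustrative check of the claimed reduction: thinning by n gives 1/(n+1),
# and dropping the dual-RAM configuration halves the capacity again.
def beta_ram_words(block_len, n, dual):
    words = block_len // (n + 1)           # thinned-out storage
    return words * (2 if dual else 1)      # dual configuration doubles the RAM

N = 5120                                    # block length in trellis steps (assumed)
conventional = beta_ram_words(N, 0, dual=True)    # every beta stored, dual RAM
proposed = beta_ram_words(N, 1, dual=False)       # n = 1, single RAM
print(conventional, proposed, proposed / conventional)   # ratio 1/(2*(n+1)) = 1/4
```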
Further, the increase in the process time is on the order of several clock cycles, and this increase quantity over the conventional process time is minute. The error-rate characteristic, moreover, is absolutely the same, because the only difference lies in whether a calculation result is stored in the memory or the same calculation is re-executed.
Specific Example
The configuration according to the embodiment of the present invention illustrated in FIGS. 8 to 10 can be applied to each of the systems exemplified in the Conventional Examples 1 through 4. The
scheme of the Conventional Example 4 is to speed up the processing by executing the parallel process in a way that segments N-bit coded data having an information length N into M-pieces of blocks in
the respective systems in the Conventional Examples 1 through 3. With this scheme, the present system can be applied to the respective systems in the Conventional Examples 1 through 3, and can be
also applied after applying the Conventional Example 4 to the Conventional Examples 1 through 3.
Specific Example 1
An example of applying the present system to a turbo decoding apparatus in which the speed-up is attained by applying the Conventional Example 4 to the Conventional Example 1 will be described as a Specific Example 1. Herein, in order to simplify the description, the Specific Example 1 includes a case of conducting 1-fold speed-up in the Conventional Example 4, i.e., a case where the Conventional Example 4 is not applied.
FIG. 11 is a block diagram showing a turbo decoding apparatus in the case of applying the Conventional Example 4 to the Conventional Example 1 and further applying the system according to this
embodiment. As illustrated in FIG. 11, a turbo decoding apparatus 20 includes a communication path value RAM 21, a plurality of MAP units (MAP decoders) 22(#1 to #M), and an interleave RAM 23.
Pieces of reception data (reception signals) ya, yb, yc are input to the communication path value RAM 21. The communication path value RAM 21 can store the reception data ya, yb, yc and can
selectively output the data. Namely, the communication path value RAM 21 can output the pieces of reception data ya, yb when starting the turbo decoding process, and thereafter can output the
reception data yc after finishing the first half of the turbo decoding process.
The interleave RAM 23 can store element decoded results given from the respective MAP units 22, can perform interleaving and deinterleaving alternately, and can feed interleaved/deinterleaved results
back to shift-probability calculation units 221 of the MAP decoders 22.
The MAP unit #1 is alternately used as first and second component decoders in FIG. 2, i.e., as a component decoder (DEC 1) for the first half of the turbo decoding process and a component decoder
(DEC 2) for the second half thereof. The MAP unit #2 is alternately employed as the DEC 1 and the DEC 2 in FIG. 2. Each of the MAP units #3 to #M is alternately employed as the DEC 1 and the DEC 2 in
FIG. 2.
The interleave RAM 23 is alternately used as an interleaver and a deinterleaver in FIG. 2. Each of the MAP units #1 to #M has the same construction and includes a shift-probability calculation unit
221, a first backward-probability calculation unit 222, a backward-probability RAM 223 serving as a storage unit, a second backward-probability calculation unit 224, a forward-probability calculation
unit 225, a joint-probability calculation unit 226, an a posteriori probability calculation unit 227, and an external information likelihood calculation unit 228. The joint-probability calculation
unit 226, the a posteriori probability calculation unit 227, and the external information likelihood calculation unit 228 correspond to a decoded result calculation unit (means) of the present invention.
The backward-probability RAM 223 and the second backward-probability calculation unit 224 have improved or added configurations in order to realize the system according to this embodiment, and the
second backward-probability calculation unit 224 has, in detail, a configuration depicted by a broken line in FIG. 9. Namely, the backward-probability RAM 223 corresponds to the backward-probability
RAM 11 of FIG. 9, and the second backward-probability calculation unit 224 has the configuration circumscribed by the broken line of FIG. 9. Further, the shift-probability calculation unit 221 and
the forward-probability calculation unit 225 correspond to the shift-probability calculation unit 9 and the forward-probability calculation unit 14 of FIG. 9.
In the Specific Example 1, if a signal length (information length), in which a tail bit is added to information bits, has N bits (an N-bit decoding target signal (coded data)), the communication path
value RAM 21 segments the signal length into M-segments, and simultaneously inputs the N/M segmented data to each of the MAP units #1 to #M.
The MAP units #1 to #M execute the MAP decoding process simultaneously (in parallel) with respect to the segmented data 0 to N/M, N/M to 2N/M, . . . , (M−1)N/M to N that are segmented by N/M bits
according to the MAP decoding method in the Conventional Example 1.
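The segmentation itself is straightforward. A minimal sketch, assuming N is a multiple of M and ignoring the Δ-bit warm-up overlaps discussed below, is:

```python
# Splitting an N-bit decoding target into M equal blocks for the MAP units
# (illustrative helper; names and sizes are assumptions).
def segment(coded_data, M):
    N = len(coded_data)
    assert N % M == 0, "this sketch assumes N is a multiple of M"
    step = N // M
    return [coded_data[i * step:(i + 1) * step] for i in range(M)]

data = list(range(5120))          # stands in for N-bit coded data (assumed)
blocks = segment(data, M=8)       # blocks 0..N/M, N/M..2N/M, ..., (M-1)N/M..N
print([(b[0], b[-1] + 1) for b in blocks])   # boundaries fed to MAP units #1..#M
```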
To be specific, in the first MAP unit #1, the shift-probability calculation unit 221 and the backward-probability calculation unit 222 calculate the shift-probabilities and the backward-probabilities
from time point N/M+Δ toward time point 0, and the backward-probability calculation unit 222 stores and retains the calculation results of the backward-probabilities from time point N/M−1 to the time
point 0 at intervals of n-time (e.g., n=1) in the backward-probability RAM 223. Note that the backward-probabilities from N/M+Δ to N/M have no reliability and are therefore discarded.
Next, the shift-probability calculation unit 221 and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time point 0 toward time
point N/M−1. Herein, if the calculation results of the backward-probabilities are stored in the backward-probability RAM 223, the values thereof are input to the joint-probability calculation unit
226 from the backward-probability calculation unit 224. On the other hand, if the calculation results of the backward-probabilities are not stored in the backward-probability RAM 223, the
backward-probability calculation unit 224 calculates a desired calculation result of the backward-probability from the calculation results of the backward-probabilities stored in the
backward-probability RAM 223 and from the calculation result of the shift-probability at that time, and inputs the thus-calculated result to the joint-probability calculation unit 226.
For example, if the calculation result of the backward-probability at time i is not stored in the backward-probability RAM 223, the backward-probability calculation unit 224 recalculates a
backward-probability calculation result β(i) at the time i from a backward-probability calculation result β(i+1) at time i+1 and from a shift-probability calculation result γ(i+1) at that time, and
inputs the backward-probability calculation result β(i) to the joint-probability calculation unit 226. The joint-probability calculation unit 226 calculates a joint-probability by use of the
backward-probability calculation result and the forward-probability calculation result, and the a posteriori probability calculation unit 227 and the external information likelihood calculation unit
228 calculate the a posteriori-probability and the external information likelihood, respectively.
Note that the reason why the calculation of the backward-probability starts not from time N/M but from time N/M+Δ is elucidated as below. In the calculation of the backward-probability, the probability at time N/M is calculated based on a probability at time N/M+1. Therefore, if the backward-probability at N/M+1 is not calculated
correctly, the backward-probability at N/M cannot be calculated correctly. Such being the case, the calculation of the backward-probability at N/M+Δ uses a proper value as an initial value, and the
backward-probabilities are calculated in the sequence of N/M+(Δ−1), N/M+(Δ−2), N/M+(Δ−3), . . . N/M+2, N/M+1, N/M. The calculation is thus performed, thereby obtaining the calculation results of the
backward-probabilities exhibiting higher reliability, sequentially. Subsequently, the reliable calculation results are acquired at N/M−1, N/M−2, . . . because of being calculated from N/M with the reliability.
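A hedged sketch of this warm-up follows; the recursion form, the state count, and the helper warmed_up_beta are illustrative assumptions rather than the apparatus's exact arithmetic. Starting Δ steps early from a neutral initial value lets the recursion converge, so the values from the segment boundary downward are reliable even though the true boundary state is unknown.

```python
# Warm-up (training) window for the backward recursion at a segment boundary.
import random

S, DELTA = 4, 32                     # states and training length (assumed)
random.seed(1)
gamma = [[[random.uniform(-1.0, 1.0) for _ in range(S)] for _ in range(S)]
         for _ in range(200)]        # assumed branch metrics

def backward_step(beta_next, gamma_t):
    # max-log-MAP step, as in the earlier sketch
    return [max(beta_next[sn] + gamma_t[sn][sp] for sn in range(S))
            for sp in range(S)]

def warmed_up_beta(gamma, boundary):
    """Run from boundary+DELTA-1 down to boundary; the warm-up part is discarded."""
    beta = [0.0] * S                 # proper (neutral) initial value
    for t in range(boundary + DELTA - 1, boundary - 1, -1):
        beta = backward_step(beta, gamma[t + 1])
    return beta                      # beta at the boundary, now usable

print(warmed_up_beta(gamma, boundary=100))
```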
Moreover, in the i-th MAP unit #i, the shift-probability calculation unit 221 and the backward-probability calculation unit 222 calculate the shift-probabilities and the backward-probabilities from
time iN/M+Δ toward time (i−1)N/M, and the backward-probability calculation unit 222 stores and retains the calculation results of the backward-probabilities from the time iN/M−1 to time (i−1)N/M at
the intervals of one time (alternately) in the backward-probability RAM 223. Note that the backward-probabilities from time iN/M+Δ to time iN/M are not reliable and are therefore discarded.
Next, the shift-probability calculation unit 221 and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from time (i−1)N/M−Δ toward time iN/M−1.
The joint-probability calculation unit 226, in parallel with calculating the forward-probabilities from time (i−1)N/M to time iN/M−1, acquires the calculation results of the backward-probabilities
from the backward-probability calculation unit 224, and executes joint-probability calculation by use of the backward-probabilities and the forward-probabilities. At this time, the
backward-probability calculation unit 224 outputs the values stored in the backward-probability RAM 223 to the joint-probability calculation unit 226, then recalculates non-stored values by the same
method as by the MAP unit #1, and outputs a recalculated result to the joint-probability calculation unit 226. The a posteriori-probability calculation unit 227 and the external information likelihood
calculation unit 228 execute the a posteriori-probability calculation and the external information likelihood calculation, respectively.
Note that the reason why the calculation of the forward-probability is conducted not from time (i−1)N/M but from time (i−1)N/M−Δ is elucidated as below. In the forward-probability calculation, the
forward-probability at time N/M is calculated based on the forward-probability at time N/M−1. Therefore, if the forward-probability at N/M−1 is not calculated correctly, the forward-probability at N/
M cannot be calculated correctly. Such being the case, the forward-probability calculation at the time N/M−Δ uses a proper value as an initial value, and the forward-probabilities are calculated in the sequence of N/M−(Δ−1), N/M−(Δ−2), N/M−(Δ−3), . . . , N/M−2, N/M−1, N/M. The calculation is thus performed, thereby obtaining the calculation results of the forward-probabilities exhibiting
higher reliability, sequentially. Subsequently, the reliable calculation results are acquired at N/M+1, N/M+2, . . . because of being calculated from N/M with the reliability.
Moreover, in the M-th MAP unit #M, the shift-probability calculation unit 221 and the backward-probability calculation unit 222 calculate the shift-probabilities and the backward-probabilities from
time N−1 toward time (M−1)N/M, and the backward-probability calculation unit 222 stores and retains the calculation results of the backward-probabilities from time N−1 to time (M−1)N/M at the
intervals of one time in the backward-probability RAM 223.
Next, the shift-probability calculation unit 221 and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from time (M−1)N/M−Δ toward time N−1. The joint-probability calculation unit 226, in parallel with calculating the forward-probabilities from time (M−1)N/M to time N−1, acquires the calculation results of the backward-probability
calculation from the backward-probability calculation unit 224, and executes the joint-probability calculation by employing the results of the backward-probability calculation and the results of the
forward-probability calculation.
At this time, the backward-probability calculation unit 224 outputs the values stored in the backward-probability RAM 223 to the joint-probability calculation unit 226, then recalculates non-stored
values by the same method as by the MAP unit #1, and outputs a recalculated result to the joint-probability calculation unit 226. The a posteriori-probability calculation unit 227 and the external
information likelihood calculation unit 228 execute the a posteriori-probability calculation and the external information likelihood calculation, respectively.
The respective MAP units #1 to #M perform the MAP decoding operation in parallel, the interleave RAM 23 stores the decoded result of each MAP unit in a predetermined storage area of the built-in RAM,
and M-tuples of data are simultaneously read (interleaved or deinterleaved) in a predetermined sequence from the RAM units, thereby executing the turbo decoding by repeating the operation a plural
number of times.
Each MAP unit operates as follows. The shift-probability calculation unit 221 calculates the shift-probability at each point of time i by use of the reception data, and inputs the shift-probability
calculation result γ(i) to the first backward-probability calculation unit 222 and the second backward-probability calculation unit 224.
The first backward-probability calculation unit 222 calculates the backward-probability at each point of time i in a predetermined segment of the N bits, and stores the calculation results of the
backward-probabilities at the intervals of the n-pieces of probabilities in the backward-probability RAM 223. Herein, with respect to the MAP units #1, #2, #3, . . . , #M, the N-bit decoding target
signal (coded data) is segmented into M-pieces of signals (data), then each piece of segmented data is decoded by the corresponding MAP unit, and these element decoded results are bound by the
interleave RAM 23.
The backward-probability calculation results β(i), which are thinned out at intervals of n-time (n=1 in this example) of probabilities in the backward-probability calculation results from time N/M−1
to time 0, are stored in the backward-probability RAM 223 of the MAP unit #1. For example, as illustrated in FIG. 8, β(i) and β(i+2) are the thinned-out probabilities, and β(i+1), β(i+3), β(i+5), . .
. are stored in the backward-probability RAM 223.
The second backward-probability calculation unit 224, if the backward-probability calculation result (e.g., β(i)) at a certain point of time i is not stored in the backward-probability RAM 223, recalculates the backward-probability β(i) by use of the shift-probability γ(i+1) input from the shift-probability calculation unit 221 and the backward-probability calculation result β(i+1), stored in the backward-probability RAM 223, at the time point (i+1) one time point later than the time point i of the calculation target probability, and outputs this backward-probability β(i). With this operation, either the backward-probability calculation result stored in the backward-probability RAM 223 or the backward-probability calculation result recalculated by the second backward-probability calculation unit 224 is input, as the backward-probability calculation result β(i) at each point of time i, to the joint-probability calculation unit 226.
The forward-probability calculation unit 225 calculates the forward-probability at each point of time i in the predetermined time segment in N bits, and inputs the calculated probability as the
forward-probability calculation result a(i) to the joint-probability calculation unit 226. The joint-probability calculation unit 226 calculates the joint-probability calculation result by
multiplying the forward-probability calculation result by the backward-probability calculation result at the same point of time, and inputs the calculated joint-probability to the a
posteriori-probability calculation unit 227.
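As a simplified numerical illustration of this step (a 4-state model with assumed values; a real decoder also folds the branch metric into the state-wise combination), the multiplication and the resulting log-likelihood can be written as:

```python
# Joint-probability as the state-wise product of forward and backward results,
# followed by the a posteriori log-likelihood ratio (all values assumed).
import math

S = 4
alpha = [0.1, 0.4, 0.3, 0.2]          # forward-probability a(i) (assumed)
beta = [0.2, 0.2, 0.5, 0.1]           # backward-probability beta(i) (assumed)
states_for_1 = {1, 3}                 # states implying an input bit 1 (assumed)

joint = [alpha[s] * beta[s] for s in range(S)]
p1 = sum(joint[s] for s in states_for_1)
p0 = sum(joint[s] for s in range(S) if s not in states_for_1)
llr = math.log(p1 / p0)               # a posteriori L(u_k); its sign is the hard bit
print(joint, llr)
```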
The a posteriori-probability calculation unit 227 outputs a logarithmic likelihood (the a posteriori-probability L(u1k)) by use of the method described in the Conventional Example 1. The external
information likelihood calculation unit 228 removes the a priori likelihood and the communication path value at the input time from the a posteriori-probability L(u1k), thereby calculating the
external information likelihood Le(u1k). The external information likelihood Le(u1k) is stored in the interleave RAM 23, then interleaved, then output as the a priori likelihood Le(u1k′) in the next
decoding process, and fed back to the shift-probability calculation unit 221.
With the operations described above, the first half of the decoding process (one MAP decoding process) of the first-time turbo decoding is terminated. Next, the interleaved reception data ya and the a priori likelihood L(u1k′) obtained in the first half of the decoding process are deemed to be new reception data ya′, and the MAP decoding process is executed by employing ya′ and yc, thereby outputting a likelihood L(u2k). At this point, one turbo decoding process is terminated. The decoded results of the MAP units #1 to #M are stored in a predetermined area of the interleave
RAM 23 and simultaneously read (interleaved or deinterleaved) in a predetermined sequence, then such a process is executed repeatedly a predetermined number of times (e.g., 8 times), then, if the
acquired a posteriori-probability L(u8k)>0, a decoded result uk=1 is output, and, whereas if L(u8k)<0, a decoded result uk=0 is output.
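The outer iteration and the final hard decision can be summarized in a schematic loop. Everything below is a placeholder sketch: map_decode is a dummy element decoder, and the 4-bit block and the permutation are invented values, so only the control flow (two half-iterations per turbo iteration, a fixed repetition count such as 8, and the sign-based decision) mirrors the description above.

```python
# Schematic turbo iteration with interleaving between the two half-iterations.
def map_decode(sys, par, apriori):
    # placeholder element decoder returning dummy extrinsic values (assumed)
    return [0.1 * (s + p) + 0.5 * a for s, p, a in zip(sys, par, apriori)]

def interleave(x, perm):
    return [x[p] for p in perm]

def deinterleave(x, perm):
    out = [0.0] * len(x)
    for i, p in enumerate(perm):
        out[p] = x[i]
    return out

perm = [2, 0, 3, 1]                                   # invented permutation
ya = [0.9, -1.1, 0.7, -0.4]                           # invented channel values
yb = [0.2, -0.3, 0.1, 0.0]
yc = [0.1, 0.4, -0.2, 0.3]
ext = [0.0] * 4
for _ in range(8):                                    # e.g. 8 iterations, as above
    ext = map_decode(ya, yb, ext)                                   # DEC 1
    ext = deinterleave(map_decode(interleave(ya, perm), yc,
                                  interleave(ext, perm)), perm)     # DEC 2
L = [y + e for y, e in zip(ya, ext)]                  # final a posteriori (sketch)
print([1 if l > 0 else 0 for l in L])                 # L > 0 -> u = 1, else u = 0
```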
Note that, if the N-bit coded data is not segmented into M-pieces of data, only one (e.g., the MAP unit #1) of the plurality of MAP units 22 is employed. In this case, the first backward-probability
calculation unit 222 of the MAP unit #1 calculates the backward-probabilities from the time N toward the time 0, and stores the backward-probability calculation results from the time N to the time 0
at intervals of one probability in the backward-probability RAM 223.
Thus, the scheme in the Specific Example 1 involves executing the processes according to the Conventional Example 1 and writing the backward-probability calculation results to the
backward-probability RAM (memory) 223. At this time, the Conventional Example 1 entails writing all of the backward-probability calculation results. By contrast, in the Specific Example 1, the
backward-probability calculation results are written only at the intervals of n-time.
Further, on the occasion of calculating the joint-probability by use of the backward-probability calculation result, the joint-probability is calculated by using the value of the backward-probability
read from the backward-probability RAM 223 with respect to the portion written to the RAM 223. In this point, with respect to the portion (e.g., β(i)) that is not written to the RAM 223, the second
backward-probability calculation unit 224 recalculates the backward-probability by use of the value (β(i+1)) written to the RAM 223 and the shift-probability calculation result (γ(i+1)) for the
forward-probability calculation, and the joint-probability is calculated by employing this recalculated result.
In FIG. 11, the backward-probability RAM 223 and the second backward-probability calculation unit 224 are the components changed from the Conventional Example 1. In the case of a combination of the Conventional Examples 1 and 4 (1+4), a size (memory capacity) of the backward-probability RAM is reduced from M(N/M) to M(N/M)/(n+1), i.e., to a 1/(n+1) scale (memory capacity). On the other hand, it follows that the increase quantity of the circuit scale is equivalent to an M-unit increase in the number of the backward-probability calculation units (the calculation units 224). If n takes a small
value to some extent, however, the memory reduction quantity takes a large value as compared with the increase quantity in the number of the backward-probability calculation units, and hence a
circuit scale-down effect of the whole turbo decoding device can be expected.
Specific Example 2
Next, a case of applying the system according to this embodiment to the turbo decoding apparatus attaining the speed-up by applying the Conventional Example 4 to the Conventional Example 2 (the
Conventional Examples 2+4) will be described as a Specific Example 2. Herein, in order to simplify the description, the Specific Example 2 includes a case of conducting the 1-fold speed-up in the
Conventional Example 4, i.e., a case where the Conventional Example 4 is not applied.
FIG. 12 is a block diagram illustrating the turbo decoding apparatus, in which the system according to the embodiment is applied to the Conventional Examples 2 and 4 (2+4). A turbo decoding apparatus
20A illustrated in FIG. 12 has the configuration common to the turbo decoding apparatus 20 shown in FIG. 11, so the description will be focused mainly on a different configuration, while the
description of the common configuration is omitted.
In FIG. 12, in the turbo decoding apparatus 20A, the decoding target signal (coded data) of N bits containing the tail bit is segmented into M-pieces of blocks, and the segmented signals (data) are
subjected to the decoding process in the corresponding MAP units 22 (the MAP decoders #1 to #M).
Each MAP unit 22 has the same construction and is different from the Specific Example 1 in the following points. The MAP unit 22 is provided with two shift-probability calculation units 221A and 221B. Further, the backward-probability RAM 223 has a first storage area and a second storage area. The first storage area is stored with discrete backward-probability calculation results (discrete β).
The second storage area is stored with continuous backward-probability calculation results (continuous β) in the Conventional Example 2. In the Specific Example 2, however, in the area, which is
stored with the continuous backward-probability calculation results in the Conventional Example 2, the backward-probability calculation results are stored at the intervals of n-time (e.g., intervals
of one time point).
In the Specific Example 2, if the signal length, in which the tail bit is added to the information bits, is an N-bit length, the communication path value RAM 21 segments (divides) the signal length
into M-pieces of blocks and inputs the segmented data (blocks) simultaneously to the respective MAP units #1, #2, . . . , #M. The MAP units #1, #2, . . . , #M execute, in accordance with the
MAP decoding method in the Conventional Example 2, the MAP decoding process simultaneously (in parallel) with respect to the segmented pieces of data of 0 to N/M, N/M to 2N/M, . . . , (M−1)N/M to N,
which are segmented by N/M bits.
An operational example of the Specific Example 2 will be explained with reference to FIG. 7. A point where this operational example differs from the Conventional Examples 2 and 4 (2+4) is that the time segments (sections) (the time segments L1 to 0, L1 to N/M, L1 to N(M−1)/M in FIG. 7), which are continuously stored with the backward-probability calculation results in the Conventional Example 2, are stored with the backward-probability calculation results at the intervals of n-time, and the backward-probability calculation unit 224 recalculates the backward-probability calculation results that
are not stored therein.
The first MAP unit #1 shown in FIG. 12 executes the following process.
<1> The shift-probability calculation unit 221A and the backward-probability calculation unit 222 calculate the shift-probabilities and the backward-probabilities from time N/M+Δ toward time 0, and store and retain the discrete backward-probability calculation results (discrete backward-probabilities β per L, which correspond to times Lx, . . . , L3, L2 in the MAP#1 of FIG. 7) in each of
times N/M−1, N/M−L−1, N/M−2L−1, N/M−3L−1, . . . , 2L−1 in the first storage area of the backward-probability RAM 223. Further, the backward-probability calculation results from time L−1 to time 0
(which correspond to the backward-probability calculation results in time L1 to time 0 in the MAP#1 of FIG. 7) are stored and retained, at the intervals of one time, in the second storage area of the
backward-probability RAM 223.
<2> The shift-probability calculation unit 221B and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time 0 toward the time L−1
(L1−1), then the joint-probability calculation unit 226 calculates the joint-probability by receiving the backward-probability calculation results from the backward-probability calculation unit 224,
and the a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 calculate the a posteriori-probability and the external information likelihood, respectively.
In this case, the joint-probability calculation unit 226, if the backward-probability calculation results are stored in the second storage area of the backward-probability RAM 223, receives the values stored therein from the backward-probability calculation unit 224. Whereas if the backward-probability calculation results are not stored in the second storage area of the backward-probability RAM 223, the backward-probability calculation unit 224 recalculates a desired backward-probability calculation result from the backward-probability calculation results stored in the backward-probability RAM
223 and from the shift probability calculation result at that time, and the recalculated backward-probability calculation result is input to the joint-probability calculation unit 226 (see FIGS. 8, 9
, and 10).
<3> After finishing the MAP decoding process from the time 0 to the time L−1(L1−1), or in parallel with this MAP decoding process, the backward-probability calculation unit 222 reads the
backward-probability calculation result in the time 2L−1 (which corresponds to time L2 in MAP#1 of FIG. 7) from the first storage area, and, by using this backward-probability as an initial value,
the shift-probability calculation unit 221A and the backward-probability calculation unit 222 store and retain the backward-probability calculation results at the intervals of one time point in the
second storage area while calculating the shift-probabilities and the backward-probabilities from time 2L−1 to time L (which correspond to time L2 through time L1 in MAP#1 of FIG. 7).
<4> Next, the shift-probability calculation unit 221B and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time L toward the time
2L−1 (which corresponds to L1->L2 in FIG. 7), then the joint-probability calculation unit 226 calculates the joint-probability by receiving the backward-probability calculation results from the
backward-probability calculation unit 224, and the a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 execute the a posteriori-probability
calculation and the external information likelihood calculation, respectively. In this case, the backward-probability calculation unit 224, if the backward-probability calculation result is stored in
the second storage area of the backward-probability RAM 223, provides a value thereof to the joint-probability calculation unit 226. Whereas if the backward-probability calculation result is not
stored in the second storage area, the backward-probability calculation unit 224 provides the backward-probability calculation result obtained through the recalculation to the joint-probability
calculation unit 226.
<5> The external information likelihoods from 0 to N/M−1 are calculated by repeating the same processes in <3> and <4> on the L-by-L basis.
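The two-area storage scheme used in steps <1> through <5> above can be sketched as follows; the window length, the thinning quantity, and the container names are assumptions introduced for illustration.

```python
# Area 1 keeps one discrete beta per L-sized section as a restart value;
# area 2 keeps only every (n+1)-th beta of the section currently being decoded.
L_WIN, n = 8, 1
section_starts = range(0, 64, L_WIN)

area1 = {}     # discrete betas at section boundaries (restart values)
area2 = {}     # thinned betas for the active section only

def store_pass(betas):                       # first backward sweep
    for s in section_starts:
        area1[s + L_WIN - 1] = betas[s + L_WIN - 1]

def load_section(betas, s):                  # refill area 2 for one section
    area2.clear()
    for t in range(s + L_WIN - 1, s - 1, -1):
        if t % (n + 1) == 1:                 # thin out: keep odd times only
            area2[t] = betas[t]

betas = {t: float(t) for t in range(64)}     # stand-in beta values (assumed)
store_pass(betas)
load_section(betas, 8)                       # section L1..L2 becomes active
print(sorted(area1), sorted(area2))          # only every 2nd beta is held in area 2
```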
Further, the i-th MAP decoder #i executes the following processes.
<1> The shift-probability calculation unit 221A and the backward-probability calculation unit 222 calculate the shift-probabilities and the backward-probabilities from time iN/M+Δ toward time (i−1)N/M (which corresponds to the time 2N/M+Δ->the time N/M of MAP#2 of FIG. 7), and store and retain the discrete backward-probability calculation results (discrete backward-probabilities β per L: corresponding to times Lx, . . . , L3, L2 of MAP#2 of FIG. 7) in the times iN/M−1, iN/M−L−1, iN/M−2L−1, iN/M−3L−1, . . . , (i−1)N/M+2L−1 in the first storage area of the backward-probability RAM 223.
Moreover, the backward-probability calculation results from the time (i−1)N/M+L−1 to the time (i−1)N/M (which correspond to the backward-probability calculation results from the time L1 to the time N
/M) are stored and retained at the intervals of one time in the second storage area of the backward-probability RAM 223.
<2> Next, the shift-probability calculation unit 221B and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time (i−1)N/M−Δ toward the time (i−1)N/M (which corresponds to the time N/M−Δ->the time N/M of MAP#2 of FIG. 7), and subsequently calculate the shift-probabilities and the forward-probabilities from the time (i−1)N/M toward the time (i−1)N/M+L−1 (which corresponds to the time N/M->the time L1).
The joint-probability calculation unit 226 calculates the joint-probability by receiving the backward-probability calculation result from the backward-probability calculation unit 224, and the a
posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 execute the a posteriori-probability calculation and the external information likelihood
calculation, respectively. At this time, the backward-probability calculation unit 224 executes the same process as in the case of the backward-probability calculation unit 224 of the MAP unit #1.
<3> After finishing the MAP decoding process from the time (i−1)N/M to the time (i−1)N/M+L−1 (which corresponds to the time N/M through the time L1), or in parallel with this MAP decoding process,
the backward-probability calculation unit 222 reads the backward-probability calculation result in the time (i−1)N/M+2L−1 (which corresponds to L2 in FIG. 7) from the first storage area, and by using
the backward-probability as an initial value, the shift probability calculation unit 221A and the backward-probability calculation unit 222 store and retain the backward-probability calculation
results in the second storage area while calculating the shift probabilities and the backward-probabilities from the time (i−1)N/M+2L−1 toward the time (i−1)N/M+L (which correspond to L2->L1).
Next, the shift probability calculation unit 221B and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time (i−1)N/M+L toward the
time (i−1)N/M+2L−1 (which corresponds to L1->L2), then the joint-probability calculation unit 226 calculates the joint-probability by receiving the backward-probability calculation result from the
backward-probability calculation unit 224, and the a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 calculate the a posteriori-probability
and the external information likelihood, respectively.
<4> Hereinafter, the external information likelihoods from the time (i−1)N/M to the time iN/M−1 (which corresponds to L2->2N/M) are calculated by repeating the processes on the L-by-L basis in the
same way as in <3>.
Further, the M-th MAP decoder #M performs the following operations.
<1> The shift-probability calculation unit 221A and the backward-probability calculation unit 222 calculate the shift-probabilities and the backward-probabilities from the time N−1 to the time (M−1)N/M (which corresponds to the time N->the time N(M−1)/M of MAP#M of FIG. 7), then store and retain the discrete backward-probability calculation results in the times N−L−1, N−2L−1, N−3L−1, . . . , and (M−1)N/M+2L−1 (discrete backward-probabilities β: corresponding to the times Lx, . . . , L3, L2 of MAP#M of FIG. 7) in the first storage area of the backward-probability RAM 223, and also store and retain the
backward-probability calculation results from the time (M−1)N/M+L−1 to the time (M−1)N/M (which correspond to the backward-probability calculation results from the time L1 to the time N(M−1)/M of MAP
#M of FIG. 7) at the intervals of one time in the second storage area of the backward-probability RAM 223.
<2> Next, the shift-probability calculation unit 221B and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time (M−1)N/M−Δ toward
the time (M−1)N/M (which correspond to N(M−1)/M−Δ->N(M−1)/M), and subsequently calculate the shift-probabilities and the forward-probabilities from the time (M−1)N/M toward the time (M−1)N/M+L−1
(which corresponds to N(M−1)/M->L1). The joint-probability calculation unit 226 calculates the joint-probability by receiving the backward-probability calculation result from the backward-probability
calculation unit 224, and the a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 execute the a posteriori-probability calculation and the
external information likelihood calculation, respectively.
<3> After finishing the MAP decoding process from the time (M−1)N/M to the time (M−1)N/M+L−1 ((M−1)N/M->L1), or in parallel with this MAP decoding process, the backward-probability calculation unit
222 reads the backward-probability calculation result in the time (M−1)N/M+2L−1 (which corresponds to the backward-probability calculation results of L2) from the first storage area, and by using the
backward-probability as an initial value, the shift-probability calculation unit 221A and the backward-probability calculation unit 222 store and retain the backward-probability calculation results
in the second storage area while calculating the shift-probabilities and the backward-probabilities from the time (M−1)N/M+2L−1 toward the time (M−1)N/M+L (which corresponds to L2->L1).
Next, the shift-probability calculation unit 221B and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time (M−1)N/M+L toward the
time (M−1)N/M+2L−1 (which corresponds to L1->L2), then the joint-probability calculation unit 226 calculates the joint-probability by receiving the backward-probability calculation result from the
backward-probability calculation unit 224, and the a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 execute the a posteriori-probability
calculation and the external information likelihood calculation, respectively.
<4> Hereinafter, the external information likelihoods from the time (M−1)N/M to the time N−1 (L2->N) are calculated by repeating the processes on the L-by-L basis in the same way as in <3>.
The respective MAP units #1, #2, . . . , #M perform the MAP decoding operations in parallel, store the respective decoded results in a predetermined storage area of the interleave RAM 23, and
interleave or deinterleave by reading the data from the interleave RAM 23 in a predetermined sequence, and the turbo decoding is conducted by repeating such operations a plural number of times.
According to the Specific Example 2, in the case of writing the backward-probability calculation results in the second storage area of the backward-probability RAM 223 by executing the processes in accordance with the Conventional Example 2, the backward-probability calculation results are written at the intervals of n-pieces of probabilities, and the continuous backward-probability calculation results are not written as in the Conventional Example 2. In the case of calculating the joint-probability by employing the backward-probability calculation results, with respect to the portion written to the backward-probability RAM 223, the joint-probability is calculated by use of the value read from the backward-probability RAM 223. By contrast, with respect to the portion that is not written to the backward-probability RAM 223, the backward-probability is recalculated by employing the value written to the RAM 223 and the shift-probability calculation result used when the forward-probability is calculated, and the joint-probability is calculated by use of the result thereof.
In FIG. 12, the backward-probability RAM 223 and the backward-probability calculation unit 224 are the components changed from the Conventional Example 2. A memory size (memory capacity) of the backward-probability RAM 223 is reduced from N/L+2ML to N/L+ML/(n+1): the area discretely stored with the data per L is the same, while the area continuously stored with the data in the L segment becomes 1/(2(n+1)). On the other hand, the increase quantity of the circuit scale is equivalent to an M-unit increase in the number of the backward-probability calculation units.
However, if n takes a small value to some extent, the memory reduction quantity takes a large value as compared with the increase quantity in the number of the backward-probability calculation units,
and hence a circuit scale-down effect of the whole turbo decoding device can be expected.
Note that when the coded data u is not segmented, the turbo decoding process is executed by use of only one (e.g., MAP unit #1) of the plurality of MAP units 22. In this case, the decoding process is
carried out according to the time chart as shown in FIG. 5. The continuous backward-probability calculation results (such as the backward-probability calculation results from L1 to 0) are, however,
stored at the intervals of n-time, and the non-stored portions (probabilities) are recalculated when the forward-probability is calculated and used for calculating the joint-probability.
Specific Example 3
Next, a case of applying the system according to the present embodiment to the turbo decoding apparatus (Conventional Examples 3+4) attaining speed-up by applying the Conventional Example 4 to the
Conventional Example 3, will be described by way of Specific Example 3. In this case, for simplifying the description, the Specific Example 3 includes a case of conducting the 1-fold speed-up in the
Conventional Example 4, i.e., a case where the Conventional Example 4 is not applied.
FIG. 13 is a block diagram illustrating the turbo decoding apparatus in which the system according to the present embodiment is applied to the Conventional Examples 3 and 4 (3+4). A turbo decoding
apparatus 20B illustrated in FIG. 13 has a configuration common to the turbo decoding apparatus 20 shown in FIG. 11, so the description will be focused mainly on a different configuration, while the
description of the common configuration is omitted.
A turbo decoding apparatus 20B illustrated in FIG. 13 is different from the turbo decoding apparatus 20 (FIG. 11) in the following points. The turbo decoding apparatus 20B has a plurality of MAP
units 22 (MAP units #1 to #M), and each MAP unit has the same configuration. The MAP unit includes three shift-probability calculation units 221A, 221B, and 221C and three backward-probability
calculation units 222A, 222B, and 222C, the shift-probability calculation unit 221A links with the backward-probability calculation, unit 222A, the shift-probability calculation unit 221B links with
the backward-probability calculation unit 222B, and the shift-probability calculation unit 221C links with the backward-probability calculation unit 224 and with the forward-probability calculation
unit 225.
In Specific Example 3, if the signal length which is obtained by adding the tail bit to the information bits is an N-bit length, the communication path value RAM 21 segments the signal length into
M-pieces and simultaneously inputs the segmented data to the respective MAP units #1, #2, . . . , #M. The MAP units #1, #2, . . . , #M simultaneously (in parallel) execute, based on the MAP decoding
method in the Conventional Example 3, the MAP decoding process with respect to the segmented pieces of data (data blocks) 0−N/M, N/M−2N/M, . . . , (M−1)N/M−N that are segmented by N/M bits.
The first MAP unit #1 executes the following processes. A premise is that the segmented data (data block) 0−N/M is sub-segmented into a plurality of blocks (0-L1, L1-L2, L2-L3, . . . , Lx-N/M) on the
L-by-L basis (by a predetermined length L), in which the L-based decoding process is conducted.
<1> The shift-probability calculation unit 221A and the backward-probability calculation unit 222A calculate the shift-probabilities and the backward-probabilities from the time L2 (=L+Δ=2L, where Δ=L) toward the time 0, and store and retain the backward-probability calculation results from the time L−1 to the time 0 at the intervals of n-time (e.g., n=1) in the backward-probability RAM 223. It is to be noted that the backward-probabilities from the time L2 (=L+Δ=2L) to the time L1 (=L) have no reliability and are therefore discarded.
<2> Next, the shift-probability calculation unit 221C and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time 0 toward the time L−1 (L1−1), and the joint-probability calculation unit 226 calculates the joint-probability by employing the forward-probabilities and the backward-probabilities read from the backward-probability RAM 223.
At this time, if the backward-probability calculation result used for calculating the joint-probability is not stored in the backward-probability RAM 223, the backward-probability calculation unit
224 recalculates the backward-probability by using the method explained in FIGS. 8 to 10, and the joint-probability calculation unit 226 calculates the joint-probability by employing the recalculated
backward-probability and the forward-probability corresponding thereto. The a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 execute the a
posteriori-probability calculation and the external information likelihood calculation, respectively.
<3> During the calculation of the backward-probability by the backward-probability calculation unit 222A (for example, at a point of time when the calculation of the backward-probability from the time L2 to the time L1 is finished), the shift-probability calculation unit 221B and the backward-probability calculation unit 222B calculate the shift-probabilities and the backward-probabilities from the time L3 (=2L+Δ=3L) to the time L1 (=L), and store and retain the backward-probability calculation results from the time 2L−1 (L2−1) to the time L (L1) at the intervals of n-time (e.g., n=1) in the backward-probability RAM 223. Note that the backward-probabilities from the time L3 (=2L+Δ=3L) to the time L2 (=2L) have no reliability and therefore are discarded.
<4> After finishing the operation <3>, the shift-probability calculation unit 221C and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities
from the time L(L1) to the time 2L−1(L2−1), and the joint-probability calculation unit 226 calculates the joint-probability by employing the calculated forward-probability and the
backward-probability read from the backward-probability RAM 223 or the backward-probability recalculated by the backward-probability calculation unit 224. The a posteriori-probability calculation
unit 227 and the external information likelihood calculation unit 228 calculate the a posteriori-probability and the external information likelihood, respectively.
Hereinafter, the external information likelihoods from the time 0 to the time N/M−1 are similarly calculated by repeating the processes on the L-by-L basis.
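The resulting window schedule for one MAP unit can be sketched as below. The arithmetic is an assumption-level illustration of how the two backward-probability calculation units alternate on overlapping 2L-long runs, whose first L steps are the discarded warm-up, while the forward unit follows one window behind.

```python
# Window schedule sketch for one MAP unit's share of the data (assumed sizes).
L = 4
N_SEG = 16                                   # one unit's share, N/M (assumed)

events = []
for w, start in enumerate(range(0, N_SEG, L)):
    unit = "222A" if w % 2 == 0 else "222B"  # the two backward units alternate
    warm_from = min(start + 2 * L, N_SEG + L) - 1
    events.append(f"{unit}: beta {warm_from}->{start} "
                  f"(first L steps discarded, rest thinned into the RAM)")
    events.append(f"225 : alpha {start}->{start + L - 1}, joint + decode")
for e in events:
    print(e)
```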
Moreover, in the i-th MAP unit #i (e.g., i=2), the following process targeted at the segmented data in the time N/M to the time 2N/M is executed. A premise is that the segmented data in the time N/M−2N/M
is sub-segmented into a plurality of blocks (0-L1, L1-L2, L2-L3, . . . , Lx-2N/M) on the L-by-L basis, in which the L-based decoding process is carried out.
<1> The shift-probability calculation unit 221A and the backward-probability calculation unit 222A calculate the shift-probabilities and the backward-probabilities from the time (i−1)N/M+L+Δ toward the time (i−1)N/M (L2→0), and store and retain the backward-probability calculation results from the time (i−1)N/M+L−1 to the time (i−1)N/M at intervals of n (e.g., n=1) in the backward-probability RAM 223. Note that the backward-probabilities from the time (i−1)N/M+L+Δ to the time (i−1)N/M+L (L2 to L1) have no reliability and are therefore discarded.
<2> Next, the shift-probability calculation unit 221C and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time (i−1)N/M−Δ toward the time (i−1)N/M (N/M−Δ→N/M), and subsequently calculate the shift-probabilities and the forward-probabilities from the time (i−1)N/M toward the time (i−1)N/M+L−1 (N/M→L1).
Further, the joint-probability calculation unit 226 calculates the joint-probabilities by using the forward-probabilities from the time (i−1)N/M to the time (i−1)N/M+L−1 and the
backward-probabilities read from the backward-probability RAM 223 or the backward-probabilities recalculated by the backward-probability calculation unit 224, and the a posteriori-probability
calculation unit 227 and the external information likelihood calculation unit 228 calculate the a posteriori-probability and the external information likelihood, respectively.
<3> During the calculation by the backward-probability calculation unit 222A (for example, at a point of time when the calculation in L2 to L1 is finished), the shift-probability calculation unit 221B and the backward-probability calculation unit 222B calculate the shift-probabilities and the backward-probabilities from the time (i−1)N/M+2L+Δ toward the time (i−1)N/M+L (L3→L1), and store and retain the backward-probability calculation results from the time (i−1)N/M+2L−1 to the time (i−1)N/M+L (L2 to L1) at intervals of n (e.g., n=1) in the backward-probability RAM 223. The backward-probabilities from L3 to L2 are discarded.
<4> Next, the shift-probability calculation unit 221C and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from the time (i−1)N/M+L toward
the time (i−1)N/M+2L−1 (L1→L2), then the joint-probability calculation unit 226 calculates the joint-probability by using the calculated forward-probability and the backward-probability stored in the
backward-probability RAM 223 or the backward-probability recalculated by the backward-probability calculation unit 224, and the a posteriori-probability calculation unit 227 and the external
information likelihood calculation unit 228 calculate the a posteriori-probability and the external information likelihood, respectively.
Hereinafter, the external information likelihoods from the time (i−1)N/M to the time iN/M−1 are similarly calculated by repeating the processes on the L-by-L basis.
Moreover, in the M-th MAP unit #M, the next process targeted at the segmented data in the time (M−1)N/M to the time N is executed. A premise is that the segmented data in the time (M−1)N/M to the time N is sub-segmented into a plurality of blocks (0-L1, L1-L2, L2-L3, . . . , Lx-N) by the predetermined length L, in which the L-based decoding process is carried out.
<1> The shift-probability calculation unit 221A and the backward-probability calculation unit 222A calculate the shift-probabilities and the backward-probabilities from (M−1)N/M+L+Δ toward (M−1)N/M (L2→0), and store and retain the backward-probability calculation results from (M−1)N/M+L−1 to (M−1)N/M at intervals of n (e.g., n=1) in the backward-probability RAM 223. Note that the backward-probabilities from (M−1)N/M+L+Δ to (M−1)N/M+L have no reliability and are therefore discarded.
<2> Next, the shift-probability calculation unit 221C and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from (M−1)N/M−Δ toward (M−1)N/M, and subsequently calculate the shift-probabilities and the forward-probabilities from (M−1)N/M toward (M−1)N/M+L−1 ((M−1)N/M→L1). Further, the joint-probability calculation unit 226 calculates the
joint-probabilities by use of the forward-probabilities from (M−1)N/M to (M−1)N/M+L−1 and the backward-probabilities stored in the backward-probability RAM 223 or the backward-probabilities
recalculated by the backward-probability calculation unit 224. The a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 calculate the a
posteriori-probability and the external information likelihood, respectively.
<3> During the calculation by the backward-probability calculation unit 222A (for example, at a point of time when the calculation in L2 to L1 is finished), the shift-probability calculation unit 221B and the backward-probability calculation unit 222B calculate the shift-probabilities and the backward-probabilities from (M−1)N/M+2L+Δ toward (M−1)N/M+L (L3→L1), and store and retain the backward-probability calculation results from (M−1)N/M+2L−1 to (M−1)N/M+L at intervals of n (e.g., n=1) in the backward-probability RAM 223. The backward-probabilities
from L3 to L2 are discarded.
<4> Next, the shift-probability calculation unit 221C and the forward-probability calculation unit 225 calculate the shift-probabilities and the forward-probabilities from (M−1)N/M+L toward (M−1)N/
M+2L−1 (L1→L2). The joint-probability calculation unit 226 calculates the joint-probability by use of the calculated forward-probability and the backward-probability (one of the stored
backward-probability and the recalculated backward-probability). The a posteriori-probability calculation unit 227 and the external information likelihood calculation unit 228 calculate the a
posteriori-probability and the external information likelihood, respectively.
Hereinafter, the external information likelihoods from the time (M−1)N/M to the time N−1 are similarly calculated by repeating the processes on the L-by-L basis.
The respective MAP units #1, #2, . . . , #M perform the MAP decoding operations in parallel, store the decoded results in a predetermined storage area of the interleave RAM 23, and interleave or
deinterleave by reading the data from the RAM 23 in a predetermined sequence, and the turbo decoding is conducted by repeating such operations multiple times.
In the configuration of Specific Example 3, the backward-probability calculation results are written at intervals of n to the backward-probability RAM 223. When the joint-probability is calculated, the portion that was written to the backward-probability RAM 223 is read out and used directly; for the portion that was not written, the backward-probability is recalculated by employing the values written to the RAM 223 together with the shift-probability calculation results already used for the forward-probability calculation, and the joint-probability is calculated from the result.
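The store-at-intervals / recalculate-on-demand idea can be sketched in a few lines of Python. This is only an illustration: the class and method names, the one-step recursion, and the exact relation between the storage stride and the patent's n are placeholders, not the patent's implementation.

    class IntervalBetaStore:
        """Keep backward-probabilities beta(t) only every `stride` steps,
        and recalculate missing values on demand by stepping the backward
        recursion down from the nearest stored index above t."""

        def __init__(self, stride, backward_step):
            self.stride = stride        # storage interval (related to n)
            self.step = backward_step   # beta(t) = backward_step(beta(t+1), t)
            self.stored = {}            # t -> beta(t)

        def put(self, t, beta):
            # Called while the backward sweep runs from high t to low t.
            if t % self.stride == 0:
                self.stored[t] = beta

        def get(self, t):
            # Used when the joint-probability for time t is needed.
            if t in self.stored:
                return self.stored[t]
            t_hi = t + (-t) % self.stride      # nearest stored index >= t
            beta = self.stored[t_hi]
            for u in range(t_hi - 1, t - 1, -1):
                beta = self.step(beta, u)      # on-demand recalculation
            return beta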
In FIG. 13, the backward-probability RAM 223 and the backward-probability calculation unit 224 are the components changed from the Conventional Examples 3 and 4 (3+4). The memory size (memory capacity) of the backward-probability RAM 223 is reduced from 2ML to 2ML/(n+1), i.e., to 1/(n+1) of its original size. On the other hand, the increase in the circuit scale is equivalent to M additional backward-probability calculation units. For moderately small n, however, the memory reduction outweighs the increase in the number of backward-probability calculation units, and hence a circuit scale-down effect for the whole turbo decoding device can be expected.
It should be noted that in the embodiment of the present invention, if n is equal to or larger than 2, for example, the subsequent backward-probabilities β(i−1), β(i−2), . . . can be recalculated by using the backward-probability β(i) that was itself recalculated from the backward-probability β(i+1) stored in the memory.
The disclosures of Japanese patent application No. JP2006-280655 filed on Oct. 13, 2006 including the specification, drawings and abstract are incorporated herein by reference.
the number of topologies on a finite set
As far as I know there isn't an exact formula for the number of topologies on a finite set with n elements, for large n... I would appreciate any information on this topic.
Well, start with the basics: what is a topology? It is a pair $(X, \mathcal{O})$ of a set $X$ and a set of "open" subsets of $X$, which we denote here by $\mathcal{O}$, such that the following axioms hold:
(1) arbitrary unions of open sets are open
(2) the intersection of any two open sets is open
(3) $\emptyset$ and $X$ are open
So really, counting topologies on $X$ amounts to counting the collections of subsets of $X$ that satisfy these axioms; of course, every such collection has to contain at least $X$ and $\emptyset$.
Now do you think you can answer your problem?
I think the member understands the problem; he is rather asking how to find a formula, which seems to be an unsolved combinatorial problem.
Just search the internet and something will appear.
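For what it's worth, the known values can be checked by brute force for very small $n$. Here is a short Python sketch (my own, not from the thread): subsets are encoded as bitmasks, and since the set is finite, closure under arbitrary unions reduces to closure under pairwise unions. It reproduces 1, 1, 4, 29, 355, the first terms of OEIS A000798.

    from itertools import combinations

    def count_topologies(n):
        """Brute-force count of topologies on an n-element set.
        Subsets are bitmasks 0..2**n - 1; pairwise union/intersection
        checks suffice on a finite set.  Feasible up to about n = 4."""
        universe = (1 << n) - 1
        middle = list(range(1, universe))   # everything but {} and X
        count = 0
        for r in range(len(middle) + 1):
            for choice in combinations(middle, r):
                family = set(choice) | {0, universe}
                if all(a | b in family and a & b in family
                       for a in family for b in family):
                    count += 1
        return count

    for n in range(5):
        print(n, count_topologies(n))   # 1 1 4 29 355 (OEIS A000798)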
Badal Joshi
Home | Publications | Research | Teaching | Math 242 | Math 448
Professional Trajectory:
☆ 2013-present: Assistant Professor at California State University, San Marcos.
☆ 2012-2013: Postdoctoral Fellow at University of Minnesota.
☆ 2009-2012: Assistant Research Professor at Duke University.
☆ In 2009, I completed a PhD in Mathematics at Ohio State University.
☆ In 2004, I received an MS in Physics from the Ohio State University.
☆ My research area is Mathematical Biology. I use probability and dynamical systems to study neuronal networks and biochemical reaction networks.
☆ You can find a complete list of my publications.
☆ Here's a somewhat detailed description of my research.
In Spring 2014, I am teaching Math 242: Introductory Statistics and Math 448: Mathematical Biology. I like teaching a wide variety of mathematics classes, including ones oriented towards
applications. You can find my complete teaching experience here.
Monterey Park Algebra 1 Tutor
Find a Monterey Park Algebra 1 Tutor
...I am very flexible and available weekdays and weekends. I will be a great help for students who require science classes in their majors or for those who are looking to score high on their entrance exams. I am very friendly and patient with the student.
11 Subjects: including algebra 1, chemistry, algebra 2, geometry
...I mainly specialize in tutoring English including grammar, literature, reading, and spelling. However, I am able to tutor some math as well, if it is prealgebra or algebra 1. My goal is to make
sure that the student goes home after every lesson fully understanding what they just learned and not just copying my examples.
5 Subjects: including algebra 1, reading, grammar, vocabulary
Hello! I am a Mathematics graduate from University of Riverside. I plan on becoming a teacher.
6 Subjects: including algebra 1, geometry, SAT math, elementary (k-6th)
...I have taught swimming privately, at Los Angeles Valley College, and for the City of Los Angeles Aquatics program. I participated in swim team in high school and college and I still swim recreationally. I have a Bachelor's degree in Spanish, I have studied abroad, and I speak, read, and write Spa...
35 Subjects: including algebra 1, English, Spanish, elementary math
...When I was an organic chemistry tutor at UCI, we shared the tutoring room with general chemistry tutors; I would aid students that came by looking for help with general chemistry when there
weren't any general chemistry tutors available. Since joining WyzAnt, I have tutored various students with...
9 Subjects: including algebra 1, chemistry, physics, geometry
search results
Results 1 - 4 of 4
1. CJM 2011 (vol 64 pp. 805)
Quantum Random Walks and Minors of Hermitian Brownian Motion
Considering quantum random walks, we construct discrete-time approximations of the eigenvalue processes of minors of Hermitian Brownian motion. It has recently been proved by Adler, Nordenstam, and van Moerbeke that the process of eigenvalues of two consecutive minors of a Hermitian Brownian motion is a Markov process, whereas if one considers more than two consecutive minors, the Markov property fails. We show that there are analogous results in the noncommutative counterpart and establish the Markov property of eigenvalues of some particular submatrices of Hermitian Brownian motion.
Keywords: quantum random walk, quantum Markov chain, generalized Casimir operators, Hermitian Brownian motion, diffusions, random matrices, minor process
Categories: 46L53, 60B20, 14L24
2. CJM 2005 (vol 57 pp. 204)
On the Duality between Coalescing Brownian Motions
A duality formula is found for coalescing Brownian motions on the real line. It is shown that the joint distribution of a coalescing Brownian motion can be determined by another coalescing Brownian
motion running backward. This duality is used to study a measure-valued process arising as the high density limit of the empirical measures of coalescing Brownian motions.
Keywords: coalescing Brownian motions, duality, martingale problem, measure-valued processes
Categories: 60J65, 60G57
3. CJM 2003 (vol 55 pp. 292)
Infinitely Divisible Laws Associated with Hyperbolic Functions
The infinitely divisible distributions on $\mathbb{R}^+$ of random variables $C_t$, $S_t$ and $T_t$ with Laplace transforms $$ \left( \frac{1}{\cosh \sqrt{2\lambda}} \right)^t, \quad \left( \frac{\sqrt{2\lambda}}{\sinh \sqrt{2\lambda}} \right)^t, \quad \text{and} \quad \left( \frac{\tanh \sqrt{2\lambda}}{\sqrt{2\lambda}} \right)^t $$ respectively are characterized for various $t>0$ in a number of different ways: by simple relations between their moments and cumulants, by corresponding relations between the distributions and their Lévy measures, by recursions for their Mellin transforms, and by differential equations satisfied by their Laplace transforms. Some of these results are interpreted probabilistically via known appearances of these distributions for $t=1$ or $2$ in the description of the laws of various functionals of Brownian motion and Bessel processes, such as the heights and lengths of excursions of a one-dimensional Brownian motion. The distributions of $C_1$ and $S_2$ are also known to appear in the Mellin representations of two important functions in analytic number theory, the Riemann zeta function and the Dirichlet $L$-function associated with the quadratic character modulo 4. Related families of infinitely divisible laws, including the gamma, logistic and generalized hyperbolic secant distributions, are derived from $S_t$ and $C_t$ by operations such as Brownian subordination, exponential tilting, and weak limits, and characterized in various ways.
Keywords: Riemann zeta function, Mellin transform, characterization of distributions, Brownian motion, Bessel process, Lévy process, gamma process, Meixner process
Categories: 11M06, 60J65, 60E07
4. CJM 1999 (vol 51 pp. 673)
Brownian Motion and Harmonic Analysis on Sierpinski Carpets
We consider a class of fractal subsets of $\mathbb{R}^d$ formed in a manner analogous to the construction of the Sierpinski carpet. We prove a uniform Harnack inequality for positive harmonic functions; study the heat equation, and obtain upper and lower bounds on the heat kernel which are, up to constants, the best possible; construct a locally isotropic diffusion $X$ and determine its basic properties; and extend some classical Sobolev and Poincaré inequalities to this setting.
Keywords: Sierpinski carpet, fractal, Hausdorff dimension, spectral dimension, Brownian motion, heat equation, harmonic functions, potentials, reflecting Brownian motion, coupling, Harnack inequality, transition densities, fundamental solutions
Categories: 60J60, 60B05, 60J35
BDA3 table of contents (also a new paper on visualization) - Statistical Modeling, Causal Inference, and Social Science
BDA3 table of contents (also a new paper on visualization)
In response to our recent posting of Amazon's offer of Bayesian Data Analysis 3rd edition at 40% off, some people asked for more information about what's in this new edition, beyond the beautiful cover image and the brief paragraph I'd posted earlier.
Here’s the table of contents. The following sections have all-new material:
1.4 New introduction of BDA principles using a simple spell checking example
2.9 Weakly informative prior distributions
5.7 Weakly informative priors for hierarchical variance parameters
7.1-7.4 Predictive accuracy for model evaluation and comparison
10.6 Computing environments
11.4 Split R-hat
11.5 New measure of effective number of simulation draws
13.7 Variational inference
13.8 Expectation propagation
13.9 Other approximations
14.6 Regularization for regression models
C.1 Getting started with R and Stan
C.2 Fitting a hierarchical model in Stan
C.4 Programming Hamiltonian Monte Carlo in R
And the new chapters:
20 Basis function models
21 Gaussian process models
22 Finite mixture models
23 Dirichlet process models
And there are various little changes throughout.
And, as a reward for those of you who have been patient enough to read this far, here’s a recent paper (by Tomoki Tokuda, Ben Goodrich, Iven Van Mechelen, Francis Tuerlinckx, and myself) on
visualizing distributions of covariance matrices:
We present some methods for graphing distributions of covariance matrices and demonstrate them on several models, including the Wishart, inverse-Wishart, and scaled inverse-Wishart families in
different dimensions. Our visualizations follow the principle of decomposing a covariance matrix into scale parameters and correlations, pulling out marginal summaries where possible and using
two and three-dimensional plots to reveal multivariate structure. Visualizing a distribution of covariance matrices is a step beyond visualizing a single covariance matrix or a single
multivariate dataset. Our visualization methods are available through the R package VisCov.
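As a rough illustration of the scale/correlation decomposition the abstract describes, here is a small NumPy sketch (mine, not the paper's; the actual plots come from the R package VisCov):

    import numpy as np

    def decompose_covariance(sigma):
        """Split a covariance matrix into scale (standard deviation)
        parameters and a correlation matrix, the decomposition the
        paper's visualizations are organized around."""
        sd = np.sqrt(np.diag(sigma))
        corr = sigma / np.outer(sd, sd)
        return sd, corr

    # Summarize 1000 draws from an inverse-Wishart-like prior by
    # sampling Wishart matrices and inverting them (illustrative only).
    rng = np.random.default_rng(0)
    d, nu = 3, 7
    sds = []
    for _ in range(1000):
        A = rng.standard_normal((nu, d))
        sd, corr = decompose_covariance(np.linalg.inv(A.T @ A))
        sds.append(sd)
    print(np.quantile(np.array(sds), [0.25, 0.5, 0.75], axis=0))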
10 Comments
1. Looking forward to the Gaussian and Dirichlet Process chapters! In addition to the C.1 and C.2 on Stan, I'm assuming that Stan is used throughout the book in places where BUGS/JAGS was previously used? (A big Stan Fan here.)
□ We haven’t implemented the BDA models in Stan yet. On the other hand, we’re almost done with the Gelman and Hill regression models, which can be found on GitHub at: https://github.com/
Also, Stan supports all the covariance and correlation distributions discussed in the paper Andrew references in the post (which I’d recommend if you want to understand covariance priors).
And soon, we’ll be optimizing all of them so they’ll be faster.
2. Your estimated delivery date is:
Thursday, November 21, 2013 -
Saturday, November 23, 2013
Well, all good things are worth waiting for.
□ Yeah, mine says November also. I didn’t even think to check: is there an electronic version?
☆ Andrew — there were a ton of complaints on Amazon about the Kindle edition of BDA 2 being broken. Do you know if the publishers can get this fixed for version 3?
3. Andrew, thank you very much for this post! Much appreciated.
4. […] Gelman, et al’s Bayesian Data Analysis 3rd edition is coming this Fall! The second edition was a classic, and they’ve added several chapters and polished everything […]
5. I am a grad student, I am not a fan of you. But your book is pretty darn clear.
6. BDA is a classic. I am really looking forward to the new book, however, I would really love to buy the PDF version instead of the hardcopy–hence I have not ordered yet. Please let us know if
there are any developments on that.
7. Any word on when we can buy BDA3? The publisher’s page says it’ll be released on 1 November…
Vectors again
v=-3i+j, w=4i-2j
1) 4v-3w does this equal -24i-2j
2) ||v+w||: does this equal $\sqrt{i^2-j^2}$?
3) How can I find the angle between v and w?
v=-3i+j, w=4i-2j
1) 4v-3w does this equal -24i-2j
2) ||v+w||: does this equal $\sqrt{i^2-j^2}$?
$|v+w| = |i - j| = \sqrt{1^2 + (-1)^2} = \sqrt{2}$
3) How can I find the angle between v and w?
$v \cdot w = |v| |w| \cos{\theta}$
btw, [math] tags don't work here ... use [tex] tags
I figured it out, thanks.
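For the record, a quick NumPy check of the thread's arithmetic (not part of the original posts): question 1 comes out as 4v − 3w = −24i + 10j, so the j-component is +10 rather than −2.

    import numpy as np

    v = np.array([-3.0, 1.0])    # v = -3i + j
    w = np.array([4.0, -2.0])    # w =  4i - 2j

    print(4 * v - 3 * w)                       # [-24.  10.]
    print(np.linalg.norm(v + w))               # sqrt(2) ~ 1.41421
    cos_t = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
    print(np.degrees(np.arccos(cos_t)))        # ~171.87 degrees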
Print modules
Module C6-Differentiation - looking at change (PDF, 790 KB)
This document includes:
• Rate of change – the problem of the curve
• Instantaneous rate of change and the derivative function
• Shortcuts for differentiation (including: polynomial and other power functions, exponential functions, logarithmic functions, trigonometric functions, and where the derivative cannot be found)
• Some applications of differential calculus (including: displacement-velocity-acceleration: when derivatives are meaningful in their own right, twists and turns, and optimization)
A more detailed list of topics is also available: C6: Differentiation: looking at change – contents.
Module C7-Integration - looking at total change (PDF, 550 KB)
This document includes:
• Area under the curve
• The definite integral
• The antiderivative
• Steps in integration (including: using standard rules of integration, integrals of functions with constant multiples, and integrals of sum and difference functions)
• More areas
• Applications of integral calculus
A more detailed list of topics is also available: C7: integration: looking at total change – contents.
Module D5-Differentiation (PDF, 899 KB)
This document includes:
• Derivatives
• Gradient functions
• Differentiability
• Derivatives of simple functions
• Practical interpretations of the derivative
• Simple applications of the derivative
• The product rule
• The quotient rule
• The chain rule
• Stationary points
• Curve sketching
• Maximum / minimum problems
• Newton-Raphson method for finding roots
• Solutions to exercise sets
A more detailed list of topics is also available: D5: Differentiation – contents.
Module D6-Integration (PDF, 251 KB)
This document includes:
• Integration of basic functions
• Integration by guess and check
• Integration by substitution
• Definite integration
• Trapezoidal Rule
• Simpson’s Rule
A more detailed list of topics is also available: D6: Integration – contents
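As a quick illustration of the last two methods listed under module D6, here is a short Python sketch (the function names are mine, not the module's):

    import math

    def trapezoidal(f, a, b, n):
        """Trapezoidal Rule: integrate f over [a, b] with n equal strips."""
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return h * s

    def simpson(f, a, b, n):
        """Simpson's Rule (n must be even): weights 1, 4, 2, ..., 4, 1."""
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
        return h * s / 3

    print(trapezoidal(math.sin, 0, math.pi, 100))  # ~1.99984
    print(simpson(math.sin, 0, math.pi, 100))      # ~2.00000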
Animated activities (videos, Breeze presentations, online tutorials)
Web links
SOS Mathematics (calculus) - short explanations and examples
Slipstream is a free real-time vehicle simulator. As such it can be used as a game, although it would then be targeted at the hardcore realism fans and those who like a challenge. Apart from this
Slipstream allows every parameter of the models to be specified and every calculated quantity, such as the positions of the vehicle parts, forces and torques generated at each joint, etc. to be
logged and saved or examined visually. It should therefore be useful for educational purposes and perhaps even for research.
This manual describes how Slipstream works and how it can be used.
Copyright © 2011 Dimitris Papavasiliou
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
1. Overview
Slipstream is a free real-time vehicle simulator. As such it can be used as a game, although it would then probably be targeted at the hardcore realism fans and those who like a challenge: getting
the hang of it is not easy but it is not unreasonably hard, it’s just that all the complexities that add to the challenge, and the pleasure, of operating a vehicle are there.
Apart from gaming, Slipstream should be usable for educational purposes or even research. Most parameters of the various models can be changed, usually on the fly, and the changes they effect on the behavior of the vehicle can be examined either by operating the vehicle manually or through scripted runs, logging the state of the vehicle’s parts, such as positions, velocities, forces and torques. The model for the race track is parametrically defined in terms of segments with a given length, curvature, elevation and slope. This might make it a little harder to design a track (until a proper editor has been written at least) but it allows very accurate calculation of the contact geometry(1), that is, the way the vehicle’s tyres interact with the pavement, which is the most important factor in a vehicle’s behavior. Apart from that, the currently implemented vehicle is based on a recently published model derived from measurements of an actual machine, a modern large-displacement sports motorcycle, so it should be more or less state-of-the-art. Its validity is another matter, but attempts have been made to verify the correctness of the implementation where possible, for example by producing plots and comparing them to published data. Qualitatively the model seems to behave correctly, as it exhibits all the usual phenomena related to motorcycles, such as weaves, wobbles, drifts, high-siding and low-siding.
Finally it should be stressed that Slipstream is a vehicle simulator at heart. It is generic and easily programmable, so it should be adaptable to any use that requires such a simulator. As an example, it should be very easy to use it as an animation tool. The motorcycle can be driven in real time and key-frames extracted and imported into a high-end animation package, yielding realistic motion of the motorcycle as a whole and of each of its parts, down to the rear shock absorber linkage if necessary. It would be very interesting if this and other such uses could be explored.
2. Using Slipstream
Using Slipstream is simple enough. After you’ve installed it you should be able to start the graphical interface by running slipstream-browser on the command line or perhaps through the menu system
of your desktop. The interface consists of a set of HTML pages served by Slipstream and a custom browser (2) so it is fairly self-documenting. Once you select a vehicle and enter a track you can
control the motorcycle with the mouse. If you find that strange just give it a try. Controlling something like a modern vehicle through the keyboard is silly really. How hard do you apply the brakes
when you press down, how much do you push on the clip-ons when you press left? The only thing you can do decently is switch gears.
That’s why Slipstream uses the mouse instead. Once you’re in the track press the middle mouse button (the mouse wheel) somewhere on the screen. You can think of this as putting your hands on the
controls. Moving the mouse from that point on displays a dotted line from the current mouse pointer position to the place you clicked on the screen. This is technically termed the ‘rubber-band’. The
longer the rubber-band is vertically, the more the throttle is turned on, assuming it points upwards from the point you clicked initially. If it extends downwards then it will become red and brakes will be applied, again, the longer the harder.
The same goes for the horizontal length of the rubber-band. The longer it is the more pressure is put on the handle bars. I must stress this: you do not control the angle of the steering head, that
is you do not directly control where the front wheel is pointing. Instead what you do control is how hard you push and pull on the grips. Rubber-band horizontal length is translated into a torque on
the steering head. If you think this is odd looking into the process called counter-steering should convince you that you rarely know where the front wheel is actually pointing on a real motorcycle
too. But apart from that it is generally the case that a motorcycle is controlled by application of pressure, not by pointing the front wheel where you want to go.
In addition to the above you can use the left and right mouse button to switch to the previous or next gear and that’s about it as far as controlling the vehicle is concerned. Configuring is another
matter altogether. Once you’ve loaded a model in the graphical interface you can click the edit link to edit all of its parameters, including chassis geometry, suspension settings, powertrain configuration, tyres and controls. Most of the time the configuration pages assume you know a little bit about vehicle simulation. The layout is designed so as to be as self-documenting as possible, including the units of the various parameters and often whole formulas. This might seem confusing for the casual user but you can configure most parameters without knowing the theory behind vehicle dynamics and simulation. Suspension setup for example is fairly straightforward even though you might not know how many N/m the spring stiffness you have in mind is. Most changes are relative anyway, so just putting in a 10% larger number will result in a 10% stiffer spring. The same goes for the chassis stiffness for example. It’s easy to experiment with making the chassis
stiffer or softer.
There are exceptions to this of course. Some of the parameters are simply not easy to tune by hand. The tyre parameters are such a case where you need to get your hands on published data to add more
configurations. You can still easily make a few very useful changes though. See section Contact response for details. In other cases, such as the powertrain section, you may read about what some of the more obscure parameters mean in the next chapter, which discusses Slipstream’s physical model, and then try to make changes. Finally don’t be daunted by the complex-looking formulas in some of the
sections. They are meant to provide a precise definition of what the configurable parameters do to persons who need to know this information, but they can be useful even to people who don’t fully
understand them. For example in the control section, under ‘Steering sensitivity’ we find something like:
0.0005 + 0.004 V + \left(1 + 1.35 \frac{3\theta}{\pi}\right) \; \frac{Nm}{pixel}, where V is the speed and \theta the chassis roll angle.
This probably is confusing if you don’t know how Slipstream’s control system works but it still just a sum of three terms where you can edit a number in each term. One is just a constant, the other
is multiplied by V which is defined as the vehicle’s speed and the other is somehow related to the angle \theta which is the chassis roll angle or the motorcycle’s lean angle. It’s therefore safe to
assume that you can set your general sensitivity in the first number, and how this changes depending on speed and lean angle in the other two. From that point experimentation will hopefully lead to a
setup that suits you. I’m interested in feedback in this area so if you’re having trouble drop me a line and I’ll try to help out and augment the documentation if needed.
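As an example of how such a sensitivity expression might be applied, here is a tiny Python sketch; the reading of the formula, the names and the sign conventions are mine, not Slipstream's:

    import math

    def steering_torque(dx_pixels, speed, roll):
        """Horizontal rubber-band length (pixels) -> steering-head torque,
        using the three-term sensitivity shown above (Nm per pixel).
        Hypothetical helper, not Slipstream's actual control code."""
        nm_per_pixel = (0.0005 + 0.004 * speed
                        + (1 + 1.35 * (3 * roll / math.pi)))
        return nm_per_pixel * dx_pixels

    print(steering_torque(40, speed=20.0, roll=0.3))   # torque in Nm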
3. The physical model
This chapter describes Slipstream’s physics model. Some parts of it are more general than others, being used by whole classes of vehicles while other are specific to particular vehicles. They’re
divided into the following categories.
3.1 General
Slipstream employs multibody simulation to animate the vehicles. What this means is that each separate moving part of the machine is modeled as a rigid body with a given mass and moment of inertia
and with given constraints or joints between it and other parts. All bodies are initially placed in some trim position with zero velocity and then the simulator takes over to calculate their motion
over time.
This is the general idea but there is some fine print. For one thing, not every moving part of the actual machine is represented in the model, just the ones that are deemed to ‘make important contributions to the behavior of the machine’. Which these are is a matter to be resolved by careful analysis. Other parts may be lumped together into one model, as is the case with the engine for example, which is generally not implemented by modeling each piston, connecting rod and gear found in a typical internal combustion engine, but rather using some sort of more abstract, empirical model. The other deviation of the implementation from the ideal view laid out in the first paragraph is that there are some parts which are too complex to be approximated as ‘rigid objects’. One such part is of course the tyre, whose compliant nature is perhaps the most influential factor in the behavior of a modern vehicle, but there may be others, for instance a transmission chain in a motorcycle.
3.2 The race track
The most basic part of the physics model is the race track which is common to all vehicles. It is also very important because it affects the calculation of the tyre to tarmac contact which has the
most profound influence on the behavior of the vehicle. Get this wrong and the rest does not matter no matter how accurate the model.
In order to allow for accurate tyre contact resolution the model needs to be parametric in nature. It would perhaps be easiest on the track designer to allow the road to be some sort of polygonal approximation, since the standard set of available modeling tools could then be used, but this would lead to weird irregularities in the contact response when the tyre crosses the boundaries between
triangles in all but the most detailed meshes. This is clearly unacceptable so a different representation is employed. The track is modeled as a sequence of segments each of which has a small set of
parameters that completely describe its geometry: its length, the distance from the center-line to the left and right edge, its curvature, its slope and its superelevation. This approach does not
allow for the design of arbitrarily shaped tracks but it closely follows the way civil engineers design roads so it allows ‘usual’ roads to be modeled in a simple manner.
Let us consider, as an example, the first few segments of the Laguna Seca Raceway as it is modeled in the current distribution of Slipstream:
{units.feet(269), units.feet(15), units.feet(15), 0, 0, 0},
{units.feet(150), units.feet(15), units.feet(15),
0, -0.045, -0.033},
{units.feet(182), units.feet(15), units.feet(15),
1 / units.feet(516), -0.045, -0.119},
{units.feet(239), units.feet(20), units.feet(20),
0, -0.045, -0.028},
Each set of numbers enclosed in curly braces corresponds to one segment and each number in the set defines one parameter in the order given above. Slipstream assumes it is given measurements in SI
units so if you want to use some other unit such as feet you need to convert it to meters which explains the use of ’units.feet()’.
So what the first line says is that we start off with a segment 269 feet in length which has a left half-width of 15 feet and a right half-width of 15 feet, so a total width of 30 feet, and zero curvature, slope and superelevation. The length is straightforward enough but the rest of the parameters beg the question: at which point of the length of the segment is the curvature, slope or superelevation equal to the specified amount? The answer is at the end of the segment, and this has the consequence that the geometry of each segment is influenced not only by its own parameters but also by the parameters of the previous segment, since the previous segment’s endpoint is the current segment’s starting point. Let’s consider each of the parameters separately to see what sort of combinations they yield.
Assume that the current segment is defined to have zero curvature at its endpoint. Curvature is defined as the reciprocal of the turn radius, so that zero curvature means an infinitely large turn radius and therefore a straight line. This means that the segment will end up straight but the geometry throughout the whole segment depends on the curvature it started with. So if the previous segment has zero curvature this means that the current segment both starts and ends with zero curvature, so the segment is just a straight line. If on the other hand the segment begins with some curvature then the segment is a so-called transitional segment which starts off as a circular arc of the initial curvature but ends straight, that is, with zero curvature. The resultant shape is an Euler spiral and civil engineers use such road segments to make the transition from turns to straights more natural. The other cases are what you might expect: if there is both some initial curvature a (from the previous segment) and ending curvature b the resultant segment is again a transitional spiral which can be used to join two turns of differing radii. If you start out and finish the segment with the same (non-zero) curvature then you get a circular arc, which is the only shape with constant curvature.
The situation is similar in the case of the other parameters. The slope at each point of the segment depends on its starting slope, dictated by the ending slope of the previous segment, and its own
ending slope. If both are zero what you get is a totally level segment. If both have the same non-zero value you get a segment with a constant slope, essentially an inclined plane, and if the start and end slopes differ you end up with a transitional segment again, but this time the transition takes place in the vertical plane, so the segment gradually changes slope and the resulting cross-section is a parabola.
The same logic holds for superelevation which stands for the banking of the road. The same values for start and end yield constant (possibly zero) banks while different values yield a transition with
a sort of screw effect (if exaggerated).
There’s only one detail left: the initial conditions of the first segment. Since there is no preceding segment to define its starting point parameters, these are defined explicitly to be equal to the first segment’s ending parameters. So in the case above the first segment has a length of 269 feet, a width of 30 feet and zero curvature, slope and superelevation for both start and end, so it is straight and completely flat. The next has non-zero values for both slope and superelevation so there’s a parabolic transition from zero to -4.5% slope and also a screw-like transition from zero to -3.3% bank. The next segment keeps the slope constant so it is just an inclined plane in terms of its vertical cross-section, but it introduces some further bank, ending at -11.9%, and also some curvature, so its plan view is a spiral starting out straight but ending at a radius of curvature of 516 feet. The next segment retains the same slope but straightens out again to zero curvature, yielding a plan view that is the mirror image of the previous segment, and it also transitions to a smaller bank (of 2.8%). It is also interesting to note that the last segment starts out with half-widths of 15 feet but ends with half-widths of 20 feet. This introduces, yet again, a transitional broadening of the track from a total width of 30 feet to a total width of 40 feet.
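A rough Python sketch of how the plan view can be traced from such a segment list by integrating heading along arc length follows; the dict layout and the numerical march are mine for illustration, while Slipstream's own implementation is analytic:

    import math

    def centerline(segments, step=1.0):
        """Trace the track center-line.  Each segment holds its length
        and its end curvature; curvature varies linearly from the
        previous segment's end value, which is what produces the
        Euler-spiral transitions described above.  Slope and
        superelevation would integrate the same way in their planes."""
        x = y = heading = k0 = 0.0
        points = [(0.0, 0.0)]
        for seg in segments:
            k1 = seg["curvature"]
            n = max(1, int(seg["length"] / step))
            ds = seg["length"] / n
            for i in range(n):
                k = k0 + (k1 - k0) * (i + 0.5) / n   # linear interpolation
                heading += k * ds
                x += math.cos(heading) * ds
                y += math.sin(heading) * ds
                points.append((x, y))
            k0 = k1
        return points

    feet = 0.3048
    laguna = [{"length": 269 * feet, "curvature": 0.0},
              {"length": 150 * feet, "curvature": 0.0},
              {"length": 182 * feet, "curvature": 1 / (516 * feet)},
              {"length": 239 * feet, "curvature": 0.0}]
    print(centerline(laguna)[-1])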
3.3 Tyre models
The tyre model defines the interaction of the vehicle with the ground. It can be broken into two parts: contact resolution which is the process of finding out if and where the tyres contact ground
and contact response which determines how the tyre reacts to the contact and what forces are generated in the process.
3.3.1 Contact resolution
Due to its parametric specification it is possible to test analytically whether a point on the plane belongs to the plan view of the race track or not(3). It is also possible to calculate the height
of the track surface at this point therefore every point in space can be tested against the track by calculating the height of the track surface at the point specified by its x and y coordinates and
then comparing that height with its z coordinate. If the height of the track is smaller then the point lies above the track surface otherwise it either rests on the surface or has penetrated it.
The currently implemented tyre model is that of a motorcycle tyre which is considered to be a torus(4). Since a torus can be thought of as the bounding volume of a sphere of radius \rho swept along
the circumference of a circle of radius r, testing for intersection between a torus and a plane is equivalent to testing for intersection between this circle and the plane translated \rho units along
the direction of its normal. This last test can easily be performed by calculating the distance of the circle’s center from the plane and comparing it to the radius r.
The point of intersection between the tyre and a flat plane (or flat road segment) can thus be calculated exactly. If therefore the whole track was flat a valid contact resolution strategy would be
1. Calculate the contact point between the tyre and the plane of the track. This contact point, if it exists, is unique(5) and can be found thus:
□ Take the cross product of the road normal vector and wheel axis vector. This is the wheel’s longitudinal vector.
□ Take the cross product of the axial and longitudinal wheel vectors. This is the wheel’s radial vector.
□ The point of contact (if it exists) must lie at the point p = r \vec r + \rho\vec n, where r the major radius of the torus, \vec r its radial vector, \rho the minor radius of the torus and \
vec n the road surface normal vector.
2. Test whether the calculated contact point belongs to the track or not. As mentioned earlier this can be done analytically.
In our case though the track is not constrained to lie on a plane so this introduces some complications. A simple approach would be to assume that the track is at least ‘locally flat’, that is, that
its normal changes slowly over the track surface. We could then calculate the normal of the track somewhere in the neighborhood of where we expect the contact point to lie (this can be done
analytically) and then use that to calculate the position of the contact point as above. The current implementation finds the contact point at the point of the track that lies vertically under the center of the tyre. It can be shown that the error is negligible for all but the steepest changes in slope or superelevation.
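A minimal NumPy sketch of the cross-product construction listed above (a hypothetical helper; Slipstream's sign conventions and frame handling may differ):

    import numpy as np

    def torus_plane_contact(center, axis, r, rho, n, d):
        """Contact point of a torus (major radius r, minor radius rho,
        centered at `center` with axle direction `axis`) with the plane
        n.x = d, following the construction in the list above."""
        n = n / np.linalg.norm(n)
        longitudinal = np.cross(n, axis)
        longitudinal /= np.linalg.norm(longitudinal)
        radial = np.cross(axis, longitudinal)
        if np.dot(radial, -n) < 0:       # make radial point at the road
            radial = -radial
        p = center + r * radial - rho * n   # lowest point of the torus
        penetration = d - np.dot(p, n)      # > 0 means contact
        return p, penetration

    # Upright wheel: r=0.25 m, rho=0.06 m, hub 0.30 m above the road z=0.
    p, depth = torus_plane_contact(np.array([0, 0, 0.30]),
                                   np.array([0, 1, 0.0]), 0.25, 0.06,
                                   np.array([0, 0, 1.0]), 0.0)
    print(p, depth)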
3.3.2 Contact response
Contact response is calculated using an adapted version of the magic formula. The details of the model are published in [1] so there’s no point in mentioning them here as well. A plot of the
longitudinal force under pure slip as computed by the model can be seen in tyreplot. This and others like it can be calculated by the model implementation as a way of asserting its correctness or to
fine-tune tyre parameters. They’re available through the graphical interface in the tyre pages which allows all parameters in the tyre models to be edited and the changes examined either practically
or through the updated graphs.
Although the magic formula was designed to fit experimental data so that its parameters are not given to manual editing, a few adjustments may be possible and useful. A very simple and short
description of the formula will therefore be given here. Consult [1] for the details.
Figure 3.1: A plot of the longitudinal force produced by a tyre under pure slip and different loads.
The general shape of all forces and torques generated by a tyre under pure slip conditions can be seen in tyreplot which shows a set of longitudinal force curves for varying tyre loads. Pure slip
means that the tyre slips either in the longitudinal or lateral directions but not both. When combined slip conditions apply the resultant forces and torques are functions of the pure slip versions
so the shape of tyreplot is still relevant.
In general therefore, we can say that the generated forces rise linearly for small slip ratios or angles until a peak value is reached and then drop asymptotically to a smaller level. The slope of
the initial linear segment is called the stiffness of the curve (due to its similarity to a spring’s linear force-deflection relationship presumably). The general form of the magic formula that
produces this shape is y = D \sin(C \arctan(B x - E (B x - \arctan(B x)))), where the input x is either the slip ratio or slip angle and y the resulting longitudinal or lateral force respectively.
The factor D, called the peak factor, determines the peak of the curve and thus the peak force the tyre can generate in each direction. The factors B, C, E are called the stiffness, shape and curvature factors; the stiffness of the curve, the slope of its linear segment close to the origin, is equal to the product K = B C D. Each of these factors depends on the input parameters, that is slip ratio \kappa or slip angle \alpha, camber angle \gamma and load F_z, in a way that is determined by the parameters of the formula. For example the parameter C_x directly sets
the shape factor for the pure slip longitudinal force while the parameters p_Kx^* describe the longitudinal slip stiffness under pure slip conditions.
How each set of parameters affects each factor can be looked up in the formulas give in [1] but as an example and because it is probably the most easy and useful factor to tweak, we’ll look into the
parameters that define the peak factor D. In the longitudinal case we have D_x = (p_Dx1 + p_Dx2 df_z) F_z, where F_z is the load on the tyre and df_z a term describing the difference between the current load and the tyre's rated load. The parameter set p_Dx1, p_Dx2 can therefore be seen as defining the coefficient of friction of the tyre in terms of a nominal coefficient p_Dx1 and a term describing a dependence on the load p_Dx2, the tyre's so-called load sensitivity. The situation is analogous in the lateral force case where D_y = p_Dy1 e^{p_Dy2 df_z} / (1 + p_Dy3 \gamma^2) F_z. Although the
details are different p_Dy1 can again be seen as the nominal coefficient of friction for lateral forces, p_Dy2 the tyre’s lateral load sensitivity and in this case there’s an extra parameter p_Dy3
describing a dependence of lateral force on the camber angle \gamma. The dependence of peak aligning moment on the parameters p_Dz* is very similar so we won’t go into further detail here.
We see therefore that by scaling p_Dx1, p_Dy1 and p_Dz* down we can easily scale the peak force and moment produced by the tyre while leaving the shape of the curves intact. This is an easy (although
perhaps not entirely accurate) way of simulating limited grip situations like wet or worn roads and tyres.
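In code the formula itself is a one-liner; the coefficients below are invented for illustration and are not Slipstream's tyre data:

    import math

    def magic_formula(x, B, C, D, E):
        """Pacejka's magic formula: force/moment as a function of slip.
        The slope at the origin (the 'stiffness') is B*C*D."""
        return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

    # Illustrative pure-slip longitudinal curve: peak ~3000 N near 10% slip.
    for kappa in (0.02, 0.05, 0.10, 0.30):
        print(kappa, round(magic_formula(kappa, 12.0, 1.65, 3000.0, 0.3), 1))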
3.4 Engine models
Slipstream, at this time, models a four-stroke internal combustion engine using a so-called ‘mean value’ model. This means that instead of trying to model each phase in a combustion cycle separately
only the mean state of the engine is considered, that is, a mean flow of mixture into and exhaust gasses out of the cylinders and a mean generated torque at the crankshaft. A ‘discrete event’ engine
model, although more complex, would allow the pulsing nature of both the mixture ingestion and the generated torque to be captured, and hence their interesting effects such as throttle response lag or backlash on the drivetrain. Whether this is feasible in a real-time simulation needs to be investigated. For now we must content ourselves with a mean value model.
The engine model consists of a number of sections which include the throttle body, the cylinder and the flywheel. Normally the intake manifold should be considered as well but this has been omitted
for reasons that will be explained later.
The throttle body regulates the air mass flow into the cylinder and its modeling has mostly been based on [3]. In the discussion that follows only the points that diverge from this publication will
be examined in any depth. For more details refer to [3].
The air pressure before the throttle plate is assumed to be the atmospheric pressure although this can be easily modified in the future to account for restrictions in the intake or, perhaps, a ram
induction system. The air inside the intake manifold, which is the volume between the throttle plate and the intake valve, is assumed to be an ideal gas so according to the ideal gas law its pressure
can be related to the mass of air in the manifold and the volume and temperature of the manifold. The volume of the manifold does not affect the pressure of the gas in its steady state but it
determines the transient changes in pressure that arise when this steady state is disturbed due to a change in throttle angle or engine speed. Since these transients are small in time scale relative
to the time scale of a real-time simulation they have been omitted. The steady-state manifold absolute pressure on the other hand must satisfy the condition that the air mass flowing past the plate
and into the manifold be equal to the airmass flowing past the valve and into the cylinder (this condition ensures equilibrium and hence a steady state inside the manifold) so we can calculate it by
solving this equation instead if we’re willing to ignore any transients.
The air mass flow through the throttle plate depends on the pressure differential before and after the plate, the latter being the manifold pressure, the diameter of the plate as well as its
coefficient of discharge, a dimensionless number that determines how far the throttle body differs from an ideal orifice. Both of the cross-sectional diameter and discharge coefficient are
configurable in the simulation and can be used to tune the engine’s intake characteristics. The air mass flow past the intake valve and into the cylinder on the other hand follows the so-called
speed-density formula and depends on manifold pressure, cylinder volume, volumetric efficiency and engine speed. Volumetric efficiency indicates how good the engine is at moving mixture into or
exhaust gasses out of the cylinder. More specifically it is the ratio of the volume of charge ingested into the cylinder to the actual volume of the cylinder. It therefore varies in the range [0, 1]
in the absence of forced induction and can be specified as a set of four coefficients as \eta_v = a + b \omega + c \omega^2 + d p_m, where \eta_v is the volumetric efficiency, \omega the engine speed
and p_m the manifold pressure.
Thus for a given cylinder displacement the engine’s intake system can be tuned through selection of the throttle plate diameter, discharge coefficient and the engine’s volumetric efficiency.
It should be noted that the effects of manifold temperature in the discussion above has been omitted since the temperature is currently assumed to be constant and is hard-coded into the simulation.
This restriction may be lifted in the future if necessary.
The other major part of the engine model is the cylinder section, which is responsible for burning the mixture and producing torque. Once the air mass introduced into the cylinder has been calculated, the indicated torque of the engine can be calculated given its thermal efficiency. This assumes that the mixture is always stoichiometric and completely disregards spark advance and other similar
details. The aim of this model is to recreate a realistic internal combustion engine response as seen from the driver. There’s obviously no point in introducing details into the model that will later
have to be addressed through the implementation of an engine control unit. The thermal efficiency of the engine represents its ability to convert heat into mechanical energy or work and is assumed to
depend on engine speed. It is defined as \eta_t = a + b \omega + c \omega^2. The coefficients can be specified, allowing the designer to tune the engine’s power output throughout its speed range.
The indicated torque must be reduced to account for losses due to friction (either mechanical or in the pumping of gasses) in order to calculate the torque generated at the output shaft. The frictional losses are assumed to depend on engine speed and the pumping losses on manifold pressure, and they are specified through two sets of parameters which allow the calculation of the losses as p_me0f = c + d \omega + e \omega^2 and p_me0g = a (1 - b p_m / p_0) respectively.
These are the mean effective pressures of the forces that arise due to mechanical and pumping friction and can be converted to torques given the displacement. By altering the parameters one can tune
the final power output of the engine with respect to speed and load. The graphical interface also produces plots of the various quantities described above in order to assist the process of
configuring the engine.
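To make the torque calculation concrete, here is a hedged sketch of how the mean effective pressures could be combined into a shaft torque. The four-stroke relation T = p_me * V_d / (4*pi) and all parameter names and values are assumptions of this example, not necessarily how Slipstream organizes the computation.

import math

def brake_torque(p_mi, p_m, omega, V_d=6e-4, p_0=101325.0,
                 a=1.0e5, b=0.9, c=2.0e4, d=10.0, e=0.01):
    # p_mi is the indicated mean effective pressure from combustion;
    # subtract the pumping and friction loss pressures defined in the text.
    p_me0g = a * (1.0 - b * p_m / p_0)       # pumping losses
    p_me0f = c + d * omega + e * omega ** 2  # mechanical friction losses
    p_me = p_mi - p_me0g - p_me0f
    return p_me * V_d / (4 * math.pi)        # four-stroke torque from the net MEP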
3.5 Vehicle models
At this point the only implemented vehicle is that of a modern sports motorcycle featuring telescopic front forks and a single rear shock attached to the main frame and swing-arm via a linkage.
Figure 3.2: A plot of the stock configuration of the motorcycle model geometry. The points marked with a red cross are centers of mass while the small orange circles denote joints with axes drawn
with green dashed lines unless they’re normal to the plane of the drawing.
The vehicle model simply consists of a set of rigid bodies with given mass and moment of inertia that are joined together via constraints which remove degrees of freedom from the motion of the
bodies. The wheel body, for example, is allowed only one degree of freedom: a rotation around an axis that is always perpendicular to the plane of the swingarm. Once all moving parts of the vehicle have been defined in terms of their properties, the simulator integrates the system forward in time. The biggest challenges when constructing the model are therefore to identify which of the moving parts in a vehicle play an important role in determining its behavior and to find suitable values for the various parameters of these parts: their inertial properties as well as their initial positions and orientations, the latter defining the geometry of the vehicle. The published data in [1] have proven invaluable in this respect.
The implemented model is the one described in [1], so the discussion here will be limited to the points that differ from the paper. At present there are only two. One is the model of the
shock absorbers. The paper considers these to be damped linear springs with the same damping coefficient for both compression and rebound. This is not very realistic(6) since the behavior exhibited
by real springs is usually not linear (due to the progressive nature of the springs themselves for example, or due to the spring-like behavior of the air trapped over the free surface of the oil in
the damper). The dampers in modern machines also have different coefficients for the compression and rebound strokes (and also for fast and slow compression) and probably cannot be said to exhibit
simple linear damping behavior in any of these modes. More research into this is needed but this is probably not of much importance at this stage as the current road model assumes a perfectly smooth
and uniform (in terms of traction) road. Once some model of irregularity has been introduced more realistic shock absorbers will become more relevant and easier to develop and test. For now the
models implement linear damping with different coefficients for compression and rebound strokes.
The other point where the current implementation and the model published in [1] diverge is the final drive. The published model simply applies the torque produced by the motor to the rear wheel and reacts it on the chassis (although it should be noted that a later publication by the same authors describes a chain drive geometry among other things; see [2] for more details). This resembles a
shaft drive configuration although the reference machine, like most if not all sports machines, utilizes a chain drive. This is of some importance as the force couple generated by the chain tension
significantly affects the tendency of the machine to squat under acceleration. The current model implements a chain final drive with configurable sprocket positions and radii. The branch of the chain
that should be under tension is identified and a force is applied to the rear sprocket and reacted on the engine output shaft sprocket. At this point chain slack is not taken into account.
Although the model itself is not configurable on-line (all the bodies and the joints between them are defined in a script which is then loaded by the simulator), it is nevertheless possible to change every parameter of this model through the graphical interface. By changing the positions of the bodies in their trim configuration it is possible to effect changes in the machine's
geometry such as, for instance, changing the swing arm length, the swing arm pivot point, steering head angle, etc. It is also possible to change the masses and moments of inertia of the model. The
modified machine can be tested on-line and any changes can be saved to script files via the graphical interface, from where they can be reloaded at a later point. Although this might be useful when
the simulator is used as a game, it should prove invaluable when experimenting with various configurations for educational purposes or research.
Hopefully more models will be added along the way. Implementing a model in Slipstream is not a big deal, although finding, measuring or estimating the required parameters can be a different matter.
1. R.S. Sharp, S. Evangelou and D.J.N. Limebeer, Advances in the Modelling of Motorcycle Dynamics.
2. R.S. Sharp, S. Evangelou and D.J.N. Limebeer, Multibody Aspects of Motorcycle Modelling with Special Reference to Autosim.
3. R. Karmiggelt, Mean Value Modelling of a S.I. Engine.
A. GNU Free Documentation License
Version 1.2, November 2002
Copyright © 2000,2001,2002 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute
it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being
considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a
copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms
that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We
recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice
grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the
public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s
overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section
may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or
political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this
License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does
not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A
Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to
text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup,
has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not
“Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and
standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that
can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF
produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For
works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.
A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ
stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the
Document means that it remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in
this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License
applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the
reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also
follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must
enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly
and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the
covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with
each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the
Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent
copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to
the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version
of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with
the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these
things in the Modified Version:
1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History
section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal
authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
3. State on the Title page the name of the publisher of the Modified Version, as the publisher.
4. Preserve all the copyright notices of the Document.
5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
8. Include an unaltered copy of this License.
9. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title
Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item
describing the Modified Version as stated in the previous sentence.
10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous
versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the
original publisher of the version it refers to gives permission.
11. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor
acknowledgements and/or dedications given therein.
12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
13. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version.
14. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section.
15. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some
or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other
section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text
has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one
passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover,
previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous
publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all
of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their
Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same
name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else
a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled
“Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy
that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow
this License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the
copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate,
this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be
placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that
bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the
original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is
void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so
long as such parties remain in full compliance.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have
the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document
does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (C) year your name.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. A copy of the license is included in the section entitled ``GNU
Free Documentation License''.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with…Texts.” line with this:
with the Invariant Sections being list their titles, with
the Front-Cover Texts being list, and with the Back-Cover Texts
being list.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to
permit their use in free software.
(1) It is not exact but the error is very small and in most cases negligible. See the physical model section for details.
(2) Although you can use your own browser if you like, by starting Slipstream with the command slipstream and then visiting the URL http://localhost:29176.
(3) This does not hold for spiral segments of varying curvature. For this reason such segments are not truly spiral but are in fact approximations consisting of concatenated circular arcs whose curvatures interpolate the segment's curvature difference. The number of circular arc pieces used in a given segment depends on the difference between initial and final curvature and can be defined in terms of it by means of the 'tessellation' property of the race track.
(4) This is of course not true for actual motorcycle tyres, which have cross-sections that are arcs instead of circles, but motorcycles are unable to sustain large enough lean angles for this to matter.
(5) If we require that the normals of the surfaces at the point of contact oppose each other.
(6) I don't mean to belittle the work done by the authors of the paper here. In the context of their research a more accurate model was probably of little value.
About This Document
This document was generated by Dimitris Papavasiliou on September 5, 2011 using texi2html 1.82.
Algebraically complete categories, in: Category Theory, 1994
"... Both pre-orders and metric spaces have been used at various times as a foundation for the solution of recursive domain equations in the area of denotational semantics. In both cases the central
theorem states that a `converging' sequence of `complete' domains/spaces with `continuous' retraction pair ..."
Cited by 21 (0 self)
Add to MetaCart
Both pre-orders and metric spaces have been used at various times as a foundation for the solution of recursive domain equations in the area of denotational semantics. In both cases the central
theorem states that a `converging' sequence of `complete' domains/spaces with `continuous' retraction pairs between them has a limit in the category of complete domains/spaces with retraction pairs
as morphisms. The pre-order version was discovered first by Scott in 1969, and is referred to as Scott's inverse limit theorem. The metric version was mainly developed by de Bakker and Zucker and
refined and generalized by America and Rutten. The theorem in both its versions provides the main tool for solving recursive domain equations. The proofs of the two versions of the theorem look
astonishingly similar, but until now the preconditions for the pre-order and the metric versions have seemed to be fundamentally different. In this thesis we establish a more general theory of
domains based on the noti...
- GDP FESTSCHRIFT ENTCS, TO APPEAR
"... We motivate and define a category of topological domains, whose objects are certain topological spaces, generalising the usual ω-continuous dcppos of domain theory. Our category supports all the
standard constructions of domain theory, including the solution of recursive domain equations. It also su ..."
Cited by 13 (3 self)
Add to MetaCart
We motivate and define a category of topological domains, whose objects are certain topological spaces, generalising the usual ω-continuous dcppos of domain theory. Our category supports all the
standard constructions of domain theory, including the solution of recursive domain equations. It also supports the construction of free algebras for (in)equational theories, can be used as the basis
for a theory of computability, and provides a model of parametric polymorphism.
- Dipartimento di Informatica, Università di , 1998
"... C'era una volta un re seduto in canap`e, che disse alla regina raccontami una storia. La regina cominci`o: "C'era una volta un re seduto in canap`e ..."
Cited by 5 (2 self)
Add to MetaCart
C'era una volta un re seduto in canap`e, che disse alla regina raccontami una storia. La regina cominci`o: "C'era una volta un re seduto in canap`e
, 1998
"... We make an initial step towards a categorical semantics of guarded induction. While ordinary induction is usually modelled in terms of the least fixpoints and the initial algebras, guarded
induction is based on the unique fixpoints of certain operations, called guarded, on the final coalgebras. So f ..."
Add to MetaCart
We make an initial step towards a categorical semantics of guarded induction. While ordinary induction is usually modelled in terms of the least fixpoints and the initial algebras, guarded induction
is based on the unique fixpoints of certain operations, called guarded, on the final coalgebras. So far, such operations were treated syntactically [3,8,9,23]. We analyse them categorically. Guarded
induction appears as couched in coinductively constructed domains, but turns out to be reducible to coinduction only in special cases. The applications of the presented analysis span across the gamut
of the applications of guarded induction --- from modelling computation to solving differential equations. A subsequent paper [26] will provide an account of some domain theoretical aspects, which
are presently left implicit. "In order to establish that a proposition φ follows from other propositions φ_1, …, φ_q, it is enough to build a proof term e for it, using not only natural
- Journal of Pure and Applied Algebra , 1997
"... The object of study of the present paper may be considered as a model, in an elementary topos with a natural numbers object, of a non-classical variation of the Peano arithmetic. The new feature
consists in admitting, in addition to the constant (zero) s0 2 N and the unary operation (the success ..."
Add to MetaCart
The object of study of the present paper may be considered as a model, in an elementary topos with a natural numbers object, of a non-classical variation of the Peano arithmetic. The new feature consists in admitting, in addition to the constant (zero) s_0 ∈ N and the unary operation (the successor map) s_1 : N → N, arbitrary operations s_u : N^u → N of arities u 'between 0 and 1'. That is, u is allowed to range over subsets of a singleton set.
Planet X Gravity
At the start of ZetaTalk on Jul 15, 1995, The Zetas described Planet X as having 23 times the mass of Earth, while being 4 times the diameter of Earth, and that the hominoids on Planet X were 1.5
times the size of man. This issue came up during the May 5, 2007 live chat on GLP, as seemingly incompatible due to the gravity pull of Planet X. After the Zeta response, another poster presented the
mathematical formula that proves the Zetas had been correct in their 1.5 ratio, as the computed gravity on Planet X would be 1.56 times that of Earth! Note that Nancy is not a mathematician and does not
speak math very well.
Question: ZetaTalk claims that Planet-X is 4 times the diameter of Earth. This would make it 64 times the volume of Earth, with a gravity field that no land animal from Earth could adapt to. Yet
ZetaTalk claims that the inhabitants of Planet-X send their people to the various planets and moons of our solar system (with weaker gravity than the Earth) to collect gold, on missions that span
for thousands of years. ZetaTalk claims that their descendants eventually return to Planet-X with the gold that was mined, and live as a lower class, forbidden from having children due to being
genetically impure. Given this scenario, how could any land animal (such as the supposed inhabitants of Planet-X) adapt to the gravity field of Planet-X that is more than 64 times as strong as
that which their bodies are adapted to? Would they not be crushed by their own bulk, suffocate from an inability to breathe, or have a heart attack due to being unable to pump blood up and down
their tall bodies? Here's the math:
if Diameter of Earth = 1
Diameter of Planet-X = 4 according to ZetaTalk
thus Volume of Earth = 0.524
thus Volume of Planet-X = 33.51
33.51 / 0.524 = 64 (the volume of the Earth thus divides into the volume of Planet-X about 64 times)
Therefore Planet-X would have 64 times the volume of Earth, and since massive objects compress under their own weight, Planet-X would have more than 64 times the mass of the Earth.
Check the numbers at http://www.csgnetwork.com/circlecalc.html
ZetaTalk: Early in the life of ZetaTalk, Nancy was challenged as to our statement that the gravity of Planet X is approximately 1.5 times that of Earth. As we were stating a mass 23 times that of
Earth, this seemed not to compute. However, a mathematician came forward during these discussions and stated that the computations allow for a 1.6 pull of gravity on such a surface, in accordance
with our statement. Thus, your math is wrong, and designed to discombobulate Nancy during her chat.
From another poster, posting later in the chat after the Zeta response.
The Force ratio can be calculated as:
F2/F1 = m2/(r2)²
F2/F1 = 25/4² = 25/16
F2/F1 = 1.5625
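For what it is worth, the ratio is a one-line computation: Newtonian surface gravity scales as M/r^2. The snippet below simply evaluates that scaling, both with the poster's rounded mass of 25 Earth masses and with the stated figure of 23; it is only an arithmetic check, nothing more.

def surface_gravity_ratio(mass_ratio, radius_ratio):
    # g is proportional to M / r^2 for a spherical body
    return mass_ratio / radius_ratio ** 2

print(surface_gravity_ratio(25, 4))  # 1.5625, the poster's figure
print(surface_gravity_ratio(23, 4))  # 1.4375, using the stated 23 Earth masses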
Spider algorithm
• The Spider Algorithm by John H. Hubbard and Dierk Schleicher :
• D. A. Brown   - Using spider theory to explore parameter spaces
This is a PhD thesis from Cornell University about the family of polynomials P(z)=L(1+z/d)^d, where L is a complex parameter. In this work the author studies the location of parameters L for which the polynomial P has an attracting cycle of given length, multiplier and combinatorial type.
Two main tools are used in determining an algorithm for finding these parameters: the well-established theories of external rays in the dynamical and parameter planes and Teichmüller theory.
External rays are used to specify hyperbolic components in parameter space of the polynomials and study the combinatorics of the attracting cycle. A properly normalized space of univalent
mappings is then employed to determine a linearizing neighborhood of the attracting cycle.
Since the image of a univalent mapping completely determines the mapping, we visualize these maps concretely on the Riemann sphere; with discs for feet and curves as legs connected at infinity,
these maps conjure a picture of fat-footed spiders. Isotopy classes of these spiders form a Teichmüller space, and the tools found in Teichmüller theory prove useful in understanding the Spider
Space. By defining a contracting holomorphic mapping on this spider space, we can iterate this mapping to a fixed point in Teichmüller space which in turn determines the parameter we seek.
Finally, we extend the results about these polynomial families to the exponential family E(z)=L*e^z. Here, we are able to constructively prove the existence and location of hyperbolic components
in the parameter space of E(z). (text from the abstract)
David Brown's Home Page
• Yuval Fisher page *Spider* is an XView program which does various things:
* A variant of Thurston's algorithm for computing a postcritically finite polynomial from the angles of the external rays landing at the critical point. For example, enter 1/6 and get out c = i, for the quadratic case. (If this makes no sense, never mind, but notice that the dynamics of 1/6 under multiplication by 2 modulo 1 has some relationship with the orbit of i under z^2+i.)
* It draws parameter (Mandelbrot set) and dynamical space (Julia sets)
pictures using the Koebe 1/4 theorem as in The Science of Fractal Images.
This part of the code was largely written by Marc Parmet, but it hasn't really seen
the light of day much. This is pretty fast, but I don't really know how fast people draw
things these days.
* It draws external angles on Julia sets.
If you want to understand the relationship between the Mandelbrot set
and the dynamics of Julia sets, this program is for you.
*Yuval Fisher*
Institute for Non-Linear Science 0402 University of California,
San Diego La Jolla, CA 92093
text from news: comp.theory.dynamic-sys, sci.fractals, Date: 1992-11-20 09:55:38 PST
• Dan Erik Krarup Sorensen: The Universal Rabbit, a work from the Technical University of Denmark
• Symbolic dynamics, the spider algorithm and finding certain real zeros of polynomials of high degree, by T. M. Jonassen.
Author: Adam Majewski
Feel free to e-mail me!
Mathematics 210 - Spring term 2003 - eighth assignment
This assignment requires you to submit several spreadsheets concerned with solving differential equations and doing matrix manipulations. This lab and the re-done mid-terms are due Wednesday, March
19, by class time. I repeat the instructions from the first lab:
• Go to the MathSheet home page and then to the applet page. Open a running copy of the spreadsheet and return to this page.
• Log in immediately: File/Log in. Your login id is your Mathematics Department login name, and your password is your student number. This allows you to save and load spreadsheet files. You should
save your work frequently. The graph signature mechanism should now be working - please use this feature.
• Question 1. Use Euler's method to solve
x'(t) = 2 y(t)
y'(t) = -x(t) - 3y(t)
in the range t=0 to 10. Use initial conditions (1,0), (0,1), (-1,0), (0,-1) all on one sheet. Use a step size of 0.1. Graph the parametrized curves (x, y). (A code sketch of the computations in Questions 1 and 3 follows this list.)
Save this sheet as m210.8.1.ms.
• Question 2. Use Improved Euler's method to answer the same question.
This question will require a ridiculous number of columns if you do the obvious thing. After you have made a good plot, "Edit/Convert" the (x,y) data to fixed numbers and move them somewhere
else. Then rebuild the part of the "live" area that you converted.
Save this sheet as m210.8.2.ms.
• Question 3. Find e^{A dt}, where A is the coefficient matrix of the system in Question 1 and dt = 0.1, correct to about 8 significant figures. Do this by summing the Taylor series to enough terms. Use this calculation to make the same graphs as in the earlier questions.
Save this sheet as m210.8.3.ms.
• Question 4. Do the last question, but for the system
x' = -x - y
y' = x - y
Save this sheet as m210.8.4.ms.
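Although the lab is meant to be done in the spreadsheet, the following Python sketch shows the same computations for Questions 1 and 3. The coefficient matrix and step size come from the problem statements; everything else (the names, the 30-term cutoff) is an illustrative choice.

A = ((0.0, 2.0), (-1.0, -3.0))  # x' = 2y, y' = -x - 3y
dt = 0.1

def euler_step(x, y):
    # One Euler step of size dt for the system above (Question 1).
    return x + dt * (2.0 * y), y + dt * (-x - 3.0 * y)

def matexp(M, h, terms=30):
    # e^(M*h) for a 2x2 matrix by summing the Taylor series (Question 3).
    # Thirty terms are far more than needed for 8 significant figures
    # when h = 0.1.
    (a, b), (c, d) = M
    a, b, c, d = a * h, b * h, c * h, d * h
    s = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at the identity
    t = [[1.0, 0.0], [0.0, 1.0]]  # current term (M*h)^k / k!
    for k in range(1, terms):
        t = [[(t[0][0] * a + t[0][1] * c) / k, (t[0][0] * b + t[0][1] * d) / k],
             [(t[1][0] * a + t[1][1] * c) / k, (t[1][0] * b + t[1][1] * d) / k]]
        s = [[s[i][j] + t[i][j] for j in range(2)] for i in range(2)]
    return s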
That's it!
If you find these questions confusing, please write me.
Math 106 – Math for the Liberal Arts (proposal)
Office Hours:
Text: Bennett, J. & Briggs, W., Essentials of Using and Understanding Mathematics. Addison-Wesley: Boston. 2003.
Prerequisites: MAT 103 or placement
Course Description:
Math for the Liberal Arts is an introduction to non-technical applications of mathematics in the modern world. The course is designed to cultivate an appreciation of the significance of
mathematics in daily life and develop students’ mathematical reasoning. Subjects include Quantitative Information in Everyday Life, Financial Management, Statistics, and Probability.
As for all college math classes, students should plan on spending two hours outside of class for each hour in class, i.e. a minimum of 6 hours a week.
Course Goals:
Upon completion of this course, students should:
1. Understand that mathematics is relevant to their lives;
2. Develop an ability to reason with quantitative information in new ways;
3. Improve their self-confidence in dealing with mathematical issues;
4. Strengthen the critical thinking skills needed in life.
Course Requirements:
1. Attendance is required and students are expected to arrive for class on time. The final grade will be lowered for more than three (3) unexcused absences, and three late arrivals count as one
absence. It is the student’s responsibility to inform the professor so as not to be counted as absent/late. If a student is absent or late to class, he/she is responsible for getting the
lecture notes, handouts, and/or assignments during that absence.
2. Homework will be given regularly on material covered in class and students are encouraged to work cooperatively. However each student is responsible for ALL the assigned material and must be
prepared to turn in homework and discuss it in class. All work must be shown and students must be able to explain their reasoning, either orally or in writing. In other words, students can
work on their homework together, but should not copy work from each other.
3. Quizzes, exams and a cumulative final exam will be given and students are expected to take them at the times scheduled. If the student is unable to do so, the student needs to contact the
professor before class and must provide a documented excuse. Students may leave messages with the Math Department secretary at x1211.
4. In the real world, problem solving is often done cooperatively, where individuals with different strengths and weaknesses work together, discussing and evaluating various solutions. With this
in mind, students will be working in collaborative groups on quiz problems throughout the semester. However, this does not mean that students can abdicate their personal responsibility -
every student is required to attend regularly, actively contribute to group efforts and each must understand the final solution. Students will be asked to evaluate the contributions of the
members of their group at the end of the semester.
5. Exams will be taken individually. Calculators are required and should be used appropriately.
6. Students needing help or looking for a place to work cooperatively are urged to take advantage of the tutoring services offered at Lincoln.
TENTATIVE SCHEDULE:
Ch. 3 Numbers in the Real World (week 1-3)
Uses and Abuses of Percentages
Putting Numbers in Perspective
Dealing with Uncertainty
How Numbers Deceive
Exam 1
Ch. 4 Financial Management (week 4-6)
The Power of Compounding
Savings Plans
Loan Payments, Credit Cards, and Mortgages
Exam 2
Ch. 5 Statistical Reasoning (week 7-10)
Fundamentals of Statistics
Should You Believe a Statistical Study
Statistical Tables and Graphs
Graphics in the Media
Correlation and Causality
Characterizing a Data Distribution
Exam 3
Ch 6 Probability: Living with the Odds (week 11-13)
Fundamentals of Probability
Combining Probability
The Law of Averages
Counting and Probability
Exam 4
Final Exam or Project (week 14)
Tentative Grading: Final grades will be determined approximately as follows:
Homework 100 points
Participation 100 points
Quizzes 200 points
Four Exams 400 points
Final Exam 200 points
Total 1000 points
The grading scale is as follows:
A 93-100% A- 89-92%
B+ 86-88% B 82-85% B- 79-81%
C+ 76-78% C 72-75% C- 69-71%
D+ 64-68% D 58-63% F 0-57%
Note: The professor reserves the right to alter this syllabus as needed.
Numerical Methods/Equation Solving
From Wikibooks, open books for an open world
Solution of Algebraic and Transcendental Equations
An equation of the type $f(x)=0$ is either algebraic or transcendental.
E.g., these equations are algebraic:
$2x=5 \quad x^2+x=1 \quad x^7=x(1+2x)$
and these are transcendental
$\sin x = 0 \quad \quad e^\sqrt{x} = \pi \quad \tan x = x$
While roots can be found directly for algebraic equations of fourth order or lower, and for a few special transcendental equations, in practice we need to solve equations of higher order and also
arbitrary transcendental equations.
As analytic solutions are often either too cumbersome or simply do not exist, we need to find an approximate method of solution. This is where numerical analysis comes into the picture.
Some Useful Observations
• The total number of roots an algebraic equation can have is the same as its degree.
• An algebraic equation can have at most as many positive roots as there are changes of sign in the sequence of coefficients of $f(x)$.
• An algebraic equation can have at most as many negative roots as there are changes of sign in the sequence of coefficients of $f(-x)$.
• In an algebraic equation with real coefficients, complex roots occur in conjugate pairs
• If $f(x)=a_0x^n+a_1x^{n-1}+a_2x^{n-2}+...+a_{n-1}x+a_n$ with roots $\alpha_1, \alpha_2, ..., \alpha_n$ then the following hold good:
□ $\sum_i\alpha_i=\frac{-a_1}{a_0}$
□ $\sum_{i<j}{\alpha_i\alpha_j}=\frac{a_2}{a_0}$
□ $\prod_i\alpha_i=(-1)^n \frac{a_n}{a_0}$
• If $f(x)$ is continuous in the interval $[a,b]$ and $f(a)f(b)<0$ then a root must exist in the interval $(a,b)$
Initial Approximation
The last point about the interval is one of the most useful properties numerical methods use to find the roots. All of them have in common the requirement that we need to make an initial guess for
the root. Practically, this is easy to do graphically. Simply plot the equation and make a rough estimate of the solution. Analytically, we can usually choose any point in an interval where a change
of sign takes place. However, this is subject to certain conditions that vary from method to method.
A numerical method to solve equations may be a long process. We would like to know whether the method will lead to a solution (close to the exact solution) or will lead us away from it. If the method leads to the solution, we say that the method is convergent. Otherwise, the method is said to be divergent.
Rate of Convergence
Various methods converge to the root at different rates. That is, some methods are slow to converge and it takes a long time to arrive at the root, while other methods can lead us to the root faster.
This is in general a compromise between ease of calculation and time.
For a computer program however, it is generally better to look at methods which converge quickly. The rate of convergence could be linear or of some higher order. The higher the order, the faster the
method converges.
If $e_i$ is the magnitude of the error in the $i$th iteration, ignoring sign, then the order is $n$ if $\frac{e_{i+1}}{e_i^n}$ is approximately constant.
It is also important to note that the chosen method will converge only if $e_{i+1}<e_i$.
Bisection Method
This is one of the simplest methods and is strongly based on the property of intervals. To find a root using this method, the first thing to do is to find an interval $[a,b]$ such that $f(a) \cdot f(b) < 0$. Bisect this interval at its midpoint $c$, and keep whichever of the endpoints $a$ or $b$ has an ordinate of sign opposite to $f(c)$. Together with $c$ this gives a new, halved interval; proceed until you have bracketed the root to the desired accuracy.
Solve $x^3-2x-5=0$ correct up to 2 decimal places.
$f(2)=-1<0$ and $f(3)=16>0$, so a root lies in $[2,3]$. With $\epsilon = \frac{1}{2}10^{-2}$ the error formula below gives $i \ge \frac{\log 1 - \log 0.005}{\log 2} = \frac{2.3010}{0.3010} \approx 7.6$, so 8 iterations are needed.
$x_1=2.5$ and $f(x_1)=5.625 > 0$, so the new interval is $[2,2.5]$ and $x_2 = 2.25$. Continuing in this way, $x_8 \approx 2.09$.
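The method translates directly into code. Below is a short Python version; the function name and loop structure are just one possible sketch of the procedure described above.

def bisect(f, a, b, eps):
    # Requires f(a) * f(b) < 0.  Halve the bracketing interval until its
    # half-width is below eps, then return its midpoint.
    while (b - a) / 2 > eps:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2

# The worked example above: x^3 - 2x - 5 = 0 on [2, 3] to 2 decimal places.
print(bisect(lambda x: x ** 3 - 2 * x - 5, 2.0, 3.0, 0.005))  # about 2.09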
Error Analysis
The maximum error after the $i$th iteration using this process is $\epsilon_i = \frac{b-a}{2^i}$. Rearranging, the number of iterations needed to bring the error below a tolerance $\epsilon$ satisfies
$i\ge\frac{\log(b-a)-\log\epsilon}{\log 2}$
As the interval at each iteration is halved, we have $\frac{\epsilon_{i+1}}{\epsilon_i} = \frac{1}{2}$. Thus this method converges linearly.
If we are interested in the number of iterations the Bisection Method needs to converge to a root within a certain tolerance, then we can use the formula for the maximum error.
How many iterations do you need to get the root if you start with a = 1 and b = 2 and the tolerance is 10^−4?
The error $\epsilon_i$ needs to be smaller than 10^−4. Use the formula for the maximum error:
$\epsilon_i = 2^{-i} \cdot (2-1) < 10^{-4}$
$\epsilon_i = 2^{-i} < 10^{-4}$
Solve for i using log rules
$\displaystyle\log_{10}{2^{-i}} < \log_{10}{10^{-4}}$
$\displaystyle -i \cdot \log_{10}{2} < -4$
$\displaystyle i > \frac{4}{\log_{10}{2}} = 13.29 = 14$
Hence 14 iterations will ensure an approximation to the root accurate to $\displaystyle 10^{-4}$. Note: the error analysis only gives a bound approximation to the error; the actual error may be much
False Position Method
The false position method (sometimes called the regula falsi method) is essentially the same as the bisection method, except that instead of bisecting the interval we find where the chord joining the two endpoints meets the X axis. The new estimate is calculated from the equation of the chord, i.e. by putting $y = 0$ in $y - f(b) = \frac{f(b)-f(a)}{b-a}(x-b)$, which gives $x = \frac{a f(b) - b f(a)}{f(b)-f(a)}$.
The rate of convergence is still linear but faster than that of the bisection method. Both these methods will fail if f has a double root.
Consider f(x)=x^2-1. We already know the roots of this equation, so we can easily check how fast the regula falsi method converges.
For our initial guess, we'll use the interval [0,2].
Since f is concave upwards and increasing, a quick sketch of the geometry shows that the chord will always intersect the x-axis to the left of the solution. This can be confirmed by a little algebra.
We'll write the n^th iteration of the interval as [a[n], 2].
The chord intersects the x-axis when
$-(a_n^2-1)=\frac{(2^2-1)-(a_n^2-1)}{2-a_n} (x-a_n)$
Rearranging and simplifying gives
$x=\frac{1+2a_n}{2+a_n}$
Since this is always less than the root, it is also a[n+1].
The difference between a[n] and the root is e[n]=a[n]-1, but
$e_{n+1}=\frac{1+2a_n}{2+a_n}-1 = \frac{a_n-1}{a_n+2} = \frac{1}{a_n+2}e_n$
This is always smaller than e[n] when a[n] is positive. When a[n] approaches 1, each extra iteration reduces the error by two-thirds, rather than one-half as the bisection method would.
The convergence is therefore still linear: each iteration multiplies the error by a factor which tends to 1/3 in this example.
In this case, the lower end of the interval tends to the root, and the minimum error tends to zero, but the upper limit and maximum error remain fixed. This is not uncommon.
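A sketch of the same idea in Python, using the chord formula derived above; as with the bisection sketch, the structure and iteration count are illustrative choices.

def false_position(f, a, b, n):
    # Requires f(a) * f(b) < 0.  Performs n iterations of regula falsi
    # and returns the last chord/x-axis intersection.
    c = a
    for _ in range(n):
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))  # chord's x-intercept
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return c

# The example above: x^2 - 1 = 0 starting from [0, 2]
print(false_position(lambda x: x * x - 1, 0.0, 2.0, 25))  # tends to 1.0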
Fixed Point Iteration (or Staircase method)
If we can write f(x)=0 in the form x=g(x), then the point x would be a fixed point of the function g (that is, the input of g is also the output). Then an obvious sequence to consider is
$x_{n+1}=g(x_n) \,$
If we look at this on a graph we can see how this could converge to the intersection.
Any function can be written in this form if we define g(x)=f(x)+x, though in some cases other rearrangements may prove more useful.
Error analysis
We define the error at the n^th step to be
$e_n=x_n-x \mbox{ where } x=g(x) \,$
Then we have
$\begin{matrix} e_{n+1} & = & x_{n+1}-x \\ & = & g(x_n)-x \\ & = & g(x+e_n)-x \\ & = & g(x)+e_n g^\prime(x)+\cdots-x \\ & = & e_n g^\prime(x)+\cdots & \mbox{since } g(x)=x \end{matrix}$
So, when |g′(x)|<1, this sequence converges to a root, and the error after n steps is approximately proportional to $g^\prime(x)^n$.
Because the relationship between e[n+1] and e[n] is linear, we say that this method converges linearly, if it converges at all.
When g(x)=f(x)+x this means that if
$x_{n+1}=x_n+f(x_n) \,$.
converges to a root, x, of f then
$-2 < f^\prime(x) < 0 \,$
Note that this convergence will only happen for a certain range of x. If the first estimate is outside that range then no solution will be found.
Also note that although this is a necessary condition for convergence, it does not guarantee convergence. In the error analysis we neglected higher powers of e[n], but we can only do this if e[n] is
small. If our initial error is large, the higher powers may prevent convergence, even when the condition is satisfied.
If |g′(x)|<1 is true at the root, the iteration sequence will converge in some interval around the root, which may be smaller than the interval where |g′(x)|<1. If |g′(x)| isn't smaller than one at
the root, the iteration will not converge to that root.
Let's consider $f(x)=x^3+x-2$, which we can see has a single real root at x=1. There are several ways f(x)=0 can be written in the desired form, x=g(x).
The simplest is
1) $x_{n+1}=x_n+f(x_n)=x_n^3 +2x_n -2$
In this case, $g'(x)=3x^2+2$, and the convergence condition is
$1>|g'(x)|=3x^2+2, \qquad -1>3x^2$
Since this is never true, this doesn't converge to the root.
2) An alternative rearrangement is
$x_{n+1}=2-x_n^3$
This converges when
$1 > |g'(x)|= |-3x^2|, \qquad x^2 < \frac{1}{3}, \qquad |x| < \frac{1}{\sqrt{3}}$
Since this range does not include the root, this method won't converge either.
3) Another obvious rearrangement is
$x_{n+1}=\left(2-x_n\right)^{\frac{1}{3}}$
In this case the convergence condition becomes
$\frac{1}{3}\left| (2-x_n)^{-\frac{2}{3}} \right| <1, \qquad \left(2-x_n \right)^{-2} < 3^3, \qquad |x_n-2|>\frac{1}{\sqrt{27}}$
This region does include the root: $|1-2|=1>\frac{1}{\sqrt{27}}\approx 0.19$, and indeed $g^\prime(1)=-\frac{1}{3}$, so this iteration converges when started near the root.
4) Another possibility is obtained by dividing by $x^2+1$:
$x_{n+1}=\frac{2}{1+x_n^2}$
In this case the convergence condition becomes
$\frac{4|x|}{(1+x^2)^2}<1, \qquad 4|x| < (1+x^2)^2$
Consideration of this inequality shows it is satisfied if x>1, so if we start with such an x, the iteration converges to the root. Note, though, that $|g^\prime(1)|=1$ exactly, so the final approach to the root is slow.
Clearly, finding a method of this type which converges is not always straightforward.
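A quick numerical comparison of the four rearrangements (a sketch; the starting point and iteration count are illustrative):

# Root of f(x) = x^3 + x - 2 is x = 1; iterate each rearrangement from x = 1.2.
rearrangements = {
    "1) x^3 + 2x - 2":  lambda x: x**3 + 2*x - 2,
    "2) 2 - x^3":       lambda x: 2 - x**3,
    "3) (2 - x)^(1/3)": lambda x: (2 - x) ** (1.0 / 3.0),
    "4) 2 / (1 + x^2)": lambda x: 2 / (1 + x**2),
}
for name, g in rearrangements.items():
    x = 1.2
    try:
        for _ in range(50):
            x = g(x)
    except OverflowError:
        x = float("inf")  # the iterates blew up
    print(name, "->", x)
# 1) and 2) blow up, 3) settles quickly on 1.0, and 4) oscillates about
# the root and closes in only slowly, since |g'(1)| = 1 there.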
Newton-Raphson Method[edit]
In numerical analysis, Newton's method (also known as the Newton–Raphson method or the Newton–Fourier method) is an efficient algorithm for finding approximations to the zeros (or roots) of a real-valued function. As such, it is an example of a root-finding algorithm.
Any zero-finding method (bisection method, false position method, Newton-Raphson, etc.) can also be used to find a minimum or maximum of a differentiable function, by finding a zero of the function's first derivative; see Newton's method as an optimization algorithm.
Description of the method[edit]
The idea of the Newton-Raphson method is as follows: one starts with an initial guess which is reasonably close to the true root; the function is then approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated. Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. Given a current approximation $x_n$, the formula for a better approximation, $x_{n+1}$, follows from the geometry of the tangent line: the derivative at a point is the slope of the tangent at that point.
We can get better convergence if we know about the function's derivatives. Consider the tangent to the function: near any point, the tangent at that point is approximately the same as $f(x)$ itself, so we can use the tangent to approximate the function.
The tangent through the point $(x_n, f(x_n))$ is
$y = f(x_n) + f^\prime(x_n)(x-x_n)$
The next approximation, $x_{n+1}$, is where the tangent line intersects the axis, so where y=0. Rearranging, we find
$x_{n+1} = x_n - \frac{f(x_n)}{f^\prime(x_n)}$
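A minimal Python sketch of this update rule (the derivative is supplied explicitly; names and the stopping rule are illustrative):

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: repeatedly replace x by the x-intercept of the tangent."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of f(x) = x^3 + x - 2, with f'(x) = 3x^2 + 1, starting from x = 2:
print(newton(lambda x: x**3 + x - 2, lambda x: 3*x**2 + 1, 2.0))  # -> 1.0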
Error analysis[edit]
Again, we define the root to be x, and the error at the n^th step to be
$e_n=x_n-x \,$
Then the error at the next step is
$e_{n+1} = e_n - \frac{f(x)+e_n f'(x)+ \frac{1}{2}e_n^2 f''(x)+ \cdots} {f'(x)+e_n f''(x)+ \cdots }$ (1)
where we've written f as a Taylor series about its root, x. Rearranging this, and using f(x)=0, we get
$\begin{matrix} e_{n+1} & = & e_n & - & e_n \left( f'(x)+\frac{1}{2}e_n f''(x) \right) \left( f'(x)+e_n f''(x) \right)^{-1} \\ & = & \frac{f''(x)}{2f'(x)}e_n^2 & + & \cdots \end{matrix}$
where we've neglected cubic and higher powers of the error, since they will be much smaller than the squared term, when the error itself is small.
Notice that the error is squared at each step. This means that the number of correct decimal places doubles with each step, much faster than linear convergence.
This sequence will converge if
$\left| \frac{f''(x)}{2f'(x)}e_n^2 \right| < |e_n|, \qquad |e_n|< \left| \frac{2f'(x)}{f''(x)} \right|$
If f′ isn't zero at the root, then there will always be a range round the root where this method converges.
If f′ is zero at the root, then on looking again at (1) we see that we get
$e_{n+1} = e_n /2 \,$
and the convergence becomes merely linear.
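A quick illustration of this halving (a sketch; f(x)=x^2 has a double root at 0):

# Newton's method on f(x) = x^2: the update x - x^2/(2x) simplifies to x/2,
# so the error is exactly halved at each step -- linear convergence.
x = 1.0
for _ in range(5):
    x -= (x * x) / (2 * x)
    print(x)  # 0.5, 0.25, 0.125, 0.0625, 0.03125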
Overall, this method works well provided f does not have a stationary point (a zero of $f^\prime$) near its root, but it can only be used if the derivative is known.
Let's consider f(x)=x^2-a. Here, we know the roots exactly, so we can see better just how well the method converges.
We have
$x_{n+1} = x_n-\frac{f(x_n)}{f'(x_n)} = x_n-\frac{x_n^2-a}{2x_n} = \frac{x_n^2+a}{2x_n} = \frac{1}{2}\left( x_n + \frac{a}{x_n} \right)$
This method is easily implemented, even with just pen and paper, and has been used to rapidly estimate square roots since long before Newton.
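A sketch of that square-root iteration in Python (the function name and convergence test are illustrative; it assumes a positive a and a positive start):

def heron_sqrt(a, x0=1.0, tol=1e-15):
    """Approximate sqrt(a) by iterating x -> (x + a/x)/2, assuming a, x0 > 0."""
    x = x0
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)
    return x

print(heron_sqrt(2.0))  # 1.414213562373..., after only a handful of iterations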
The n^th error is $e_n=x_n-\sqrt{a}$, so we have
$\begin{matrix} e_{n+1} & = & \frac{(e_n+\sqrt{a})^2+a}{2(\sqrt{a}+e_n)} - \sqrt{a} \\ & = & \frac{2a+2e_n\sqrt{a}+e_n^2}{2(\sqrt{a}+e_n)} - \sqrt{a} \\ & = & \frac{2(\sqrt{a}+e_n)\sqrt{a}+e_n^2}{2(\sqrt{a}+e_n)} - \sqrt{a} \\ & = & \frac{e_n^2}{2(\sqrt{a}+e_n)} \end{matrix}$
If a=0, this simplifies to e[n]/2, as expected.
If a>0, $e_{n+1}$ will be positive, provided $e_n$ is greater than $-\sqrt{a}$, i.e. provided $x_n$ is positive. Thus, starting from any positive number, all the errors, except perhaps the first, will be positive.
The method converges when
$|e_{n+1}| = \left| \frac{e_n^2}{2(\sqrt{a}+e_n)} \right| < |e_n|$
so, assuming e[n] is positive, it converges when
$e_n < 2 \left( e_n + \sqrt{a} \right)$
which is always true.
This method converges to the square root, starting from any positive number, and it does so quadratically.
Higher order methods[edit]
There are methods that converge even faster than Newton-Raphson, e.g.
$x_{n+1} = x_n-\frac{f(x_n)}{f'(x_n)} - \frac{1}{2} \frac{f''(x_n)f^2(x_n)}{{f'}^3(x_n)}$
which converges cubically, tripling the number of correct digits at each iteration; that is 50% faster than Newton-Raphson.
However, if iterating each step takes 50% longer, due to the more complex formula, there is no net gain in speed. For this reason, methods such as this are seldom used.
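For reference, a Python sketch of this third-order iteration (the correction term is the one displayed above; names and the stopping rule are illustrative):

def third_order(f, df, d2f, x0, tol=1e-14, max_iter=30):
    """Newton's step plus a curvature correction; converges cubically."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        step = fx / dfx + 0.5 * d2f(x) * fx**2 / dfx**3
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# sqrt(2) as the root of f(x) = x^2 - 2:
print(third_order(lambda x: x*x - 2, lambda x: 2*x, lambda x: 2.0, 1.5))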
Can someone please show me how to solve this problem step by step? (35 - 5x) + (18x√2 + 24x)
I mean it's actually (18x√2) + (24x)
According to the order of operations (Please Excuse My Dear Aunt Sally: Parentheses, Exponents, Multiplication/Division, Addition/Subtraction), either way you wrote it is correct. So (18x√2 + 24x) and (18x√2) + (24x) are the same.
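To finish the simplification, combine the like terms (a step the thread itself stops short of):
(35 - 5x) + (18x√2 + 24x) = 35 + (-5 + 24 + 18√2)x = 35 + (19 + 18√2)x ≈ 35 + 44.46x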
Homework (Initial velocity, gravity, and falling height)
Hey I am having a bit of trouble finding the proper equation to use when solving a problem such as this...
A physics book slides off a horizontal tabletop with a speed of 1.10 m/s. It strikes the floor at 0.350 s. Ignore air resistance. Find (a) the height of the tabletop above the floor, (b) the
horizontal distance from the edge of the table to the point where the book strikes the floor, and (c) the horizontal and vertical components of the book’s velocity, and the magnitude and direction of
its velocity, just before the book reaches the floor.
I used the equation y = v0·t + (1/2)·a·t² with no success:
y = (1.10)(0.350) + (1/2)(-9.80)(0.350)² = 0.385 + (-0.60025) = -0.215 m
where do i go from here? Thanks.
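For reference, a sketch of the relevant constant-acceleration relations, assuming the book leaves the table horizontally (so its initial vertical velocity is zero and the 1.10 m/s is entirely horizontal):
height: h = (1/2)·g·t² = (1/2)(9.80)(0.350)² ≈ 0.600 m
horizontal distance: x = v0·t = (1.10)(0.350) = 0.385 m
components at the floor: vx = 1.10 m/s, vy = g·t = (9.80)(0.350) = 3.43 m/s
magnitude and direction: √(1.10² + 3.43²) ≈ 3.60 m/s, about 72° below the horizontal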
Unit Circle Stuff
October 7th 2012, 02:28 PM
Unit Circle Stuff
I am having a lot of trouble with this. Can anyone please help me?
Describe each of the following properties of the graph of the Cosine Function, f(θ) = cos(θ), and relate the property to the unit circle definition of cosine.
Thanks guys.
October 7th 2012, 03:07 PM
Re: Unit Circle Stuff
Why are you asking this question? If you do not know anything about the "unit circle", why would you be doing a problem like this? And if you do, why have you made no attempt at answering at least some of these yourself?
October 7th 2012, 03:19 PM
Re: Unit Circle Stuff
It's an extra credit assignment. My class doesn't have to do the trigonometry lessions but my teacher is giving out extra credit if you answer one of his "questions" involving trigonometry. I
could use the extra credit but since I've never learned anything like this, I can't for the life of me figure out how to answer this.
October 7th 2012, 03:41 PM
Re: Unit Circle Stuff
Look at this webpage.
Highland Park, IL Math Tutor
Find a Highland Park, IL Math Tutor
...I have completed my background check, which is viewable next to my profile online. I am available for an interview before any tutoring begins. Danti O. I have been a Wyzant math tutor since Sept 2009.
18 Subjects: including prealgebra, algebra 1, algebra 2, elementary (k-6th)
...I enjoy the many successes both my English and instrumental students have achieved, including several who have 800's on the SAT or 36's on the ACT, many award winners, and countless with
college scholarships. In addition to tutoring, I currently consult with educational research and test-prep co...
15 Subjects: including ACT Math, reading, grammar, writing
...Even a few tutoring sessions can help improve your scores. I welcome the opportunity to discuss your student's needs and strengths to begin an enjoyable and productive learning experience with
your student.I am certified to teach students in all core subjects from pre-k through age 21. I am als...
34 Subjects: including algebra 1, ACT Math, prealgebra, reading
I have taught elementary school for 10 years. I have experience working with students with disabilities, test taking strategies, study skills, reading and math. I have a Master's degree in
Reading and Literacy and an Educational Specialist degree in School Leadership.
28 Subjects: including algebra 1, algebra 2, study skills, grammar
...I love making math and physics approachable, even easy for my students. I have Master's Degree in Math and Physics, and BS in Computer Information Systems, and 15 years of tutoring experience.
I tutor the following subjects: Algebra 1 Algebra 2 Geometry Precalculus Trigonometry Calculus Calculus AB/BCI taught Algebra 1 at Loyola Academy, and have tutored it for over 15 years.
9 Subjects: including geometry, algebra 1, algebra 2, calculus
|