That pesky A of H
tivol at tethys.ph.albany.edu tivol at tethys.ph.albany.edu
Wed Jun 15 15:59:42 EST 1994
Sean Eddy still uses Bayes' rule. However...
The total number of west hands is 26!/(13!*13!), i.e. the number of ways the
26 missing cards can be taken 13 at a time. The number of hands with the A
of D in the west is 25!/(13!*12!). The A of H can be in any one of the 25 positions
not occupied by the A of D, so the number of hands with both red aces
in the west is (12/25)*[25!/(13!*12!)], or 24!/(13!*11!); the number of hands
with the A of D in west and the A of H in east is 24!/(12!*12!). No other
possibilities need be considered. The number of hands where a red ace is the
first card (equivalent to seeing one card) is [24!/(13!*11!)]*[2/13] for the
case where west has both red aces, and [24!/(12!*12!)]*[1/13] for the case
where west has the A of D, but not the A of H. The sum of these two is the
total number of hands which match the conditions of the problem, and it is
[24!/(12!*11!)]*[2/(13*13)+1/(13*12)]. The ratio of this to the number of
hands where west has both aces, one of which is seen, is [2/(13*13)]/[2/(13*13)
+1/(13*12)], or 2/[2+(13/12)]. Oops, I meant the ratio of both/one seen to the
total... Anyway, the probability is 24/37; closer to Sean's answer than mine.
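The arithmetic above can be confirmed with exact rational arithmetic. The counts below follow the same reasoning — C(24,11) west hands holding both red aces, C(24,12) with the A of D in west and the A of H in east — each weighted by the chance that the first card seen is a red ace:

```python
from fractions import Fraction
from math import comb

# West hands (13 cards from the 26 unseen), weighted by the chance
# that a red ace happens to be the first card we see.
both_in_west = Fraction(comb(24, 11)) * Fraction(2, 13)     # west holds both red aces
ad_west_ah_east = Fraction(comb(24, 12)) * Fraction(1, 13)  # west holds only the A of D

p = both_in_west / (both_in_west + ad_west_ah_east)
print(p)  # 24/37
```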
One further consideration applies to the case in which the trick is complete. Bradley
Sherman doesn't have things quite right. If east *follows* to the diamond
lead, the odds are not changed; however, if east sluffs on the diamond lead
then (assuming north-south do not have the remaining 12 diamonds in their
hands) the odds are changed--sometimes considerably, as when west holds all
13 diamonds! The odds also change dramatically if east sluffs the A of H on
the diamond lead! So, if east has not played to the first trick or if north
and south have all the remaining diamonds, Sean's Bayesian analysis holds if
the exact probabilities are used in the formula, and if east sluffs on the
first trick and north and south do not have all the remaining diamonds, a
more thorough analysis is required.
Bill Tivol
Posts about Election Myths on Richard Charnin's Blog
The Election Fraud Quiz II
Richard Charnin
Sept. 23, 2013
1 The exit poll margin of error is not a function of
a) sample-size, b) 2-party poll share, c) national population size
2 In the 1988-2008 presidential elections, the Democrats won the recorded vote 48-46%. They won both the average unadjusted state and national exit polls by
a) 50-46%, b) 51-45%, c) 52-41%
3 In 2004 the percentage of living Bush 2000 voters required to match the recorded vote was
a) 96%, b) 98%, c) 110%
4 In 2000 the approximate number of uncounted votes was
a) 2, b) 4, c) 6 million
5 In 2008, Obama won by 52.9-45.6%. He led the unadjusted National Exit Poll (17,836 respondents) by
a) 53-45%, b) 58-40%, c) 61-37%
6 In 1988 Bush beat Dukakis by 7 million votes (53.4-45.6%). Dukakis won the National Exit Poll by
a) 49.9-49.1%, b) 50.7-48.3%, c) 51.0-48.0%
7 In 1988 the approximate number of uncounted votes was
a) 6, b) 9, c) 11 million
8 Of 274 state exit polls from 1988-2008, 135 exceeded the margin of error (14 expected). How many moved in favor of the GOP?
a) 85, b) 105, c) 131
9 Gore won the popular vote in 2000. In 2004, returning Nader voters were 5-1 for Kerry, new voters 3-2 for Kerry. In order for Bush to win, he must have
a) won 30% of returning Gore voters, b) won 90% of returning Bush voters, c) stolen the election.
10 In 2008 Obama won 58% of the state exit poll aggregate. Assuming it was his True Vote, how many True Electoral Votes did he have?
a) 365, b) 395, c) 420
11 What is the probability that 131 of 274 state exit polls from 1988-2008 would red-shift to the GOP beyond the margin of error?
a) 1 in 1 million, b) 1 in 1 trillion, c) 1 in 1 trillion trillion trillion trillion trillion trillion trillion trillion trillion (E-116)
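Question 11's headline number can be sanity-checked with a binomial tail computation. Assuming — my assumption, for illustration; the quiz does not show the calculation — that each of the 274 polls independently shifts toward the GOP beyond its margin of error with probability 0.025 (half of the conventional 5% exceedance rate), the probability of 131 or more such shifts is astronomically small:

```python
from math import comb

n, k_obs, p = 274, 131, 0.025  # p: assumed chance a single poll red-shifts beyond its MOE
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_obs, n + 1))
print(tail < 1e-100)  # True: on the order of 1e-129, broadly consistent with the E-116 claim
```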
12 In 2000 11 states flipped from Gore in the exit polls to Bush in the recorded vote. Gore would have won the election if he had won
a) 1 , b) 2, c) 3 of the 11 states
13 In 1988 24 states had exit polls (2/3 of the total recorded vote). Dukakis won the state polls by
a) 50-49%, b) 51-48%, c) 52-47%
14 Exit polls are always adjusted to conform to the recorded vote. The fact that it is standard operating procedure is
a) reported by the corporate media, b) noted by academia, c) statistical proof of election fraud
15 Bush had 50.5 million votes in 2000. Approximately 2.5 million died and 1 million did not return to vote in 2004. Therefore there could not have been more than 47 million returning Bush 2000
voters. But the 2004 National Exit Poll indicated that there were 52.6 million returning Bush voters. This is proof that
a) Bush stole the 2004 election, b) it was a clerical error, c) 6 million Bush votes were not recorded in 2000.
16 In 2000 Gore won the popular vote by 540,000 votes (48.4-47.9%). But he won the unadjusted state exit poll aggregate by 50.8-44.4% and the unadjusted National Exit Poll by 48.5-46.3%, indicating
a) the state exit poll aggregate was outside the margin of error, b) the National poll was within the margin of error, c) the election was stolen, d) all
17 Corporate media websites show that Bush won the 2004 National Exit Poll (13660 respondents) by 51-48%, matching the recorded vote. But the unadjusted National Exit Poll indicates that Kerry won by
51.0-47.6% (7064-6414 respondents). The discrepancy is proof that
a) the poll was adjusted to match the recorded vote, b) Bush stole the election, c) both, d) neither
18 The pervasive difference between the exit polls and the recorded vote in every election is due to
a) inexperienced pollsters, b) Republican reluctance to be polled, c) systemic election fraud
19 In 1992 Clinton defeated Bush by 43-37.5% (Perot had 19.5%). Clinton won the exit poll by 48-32-20%. Bush needed 119% turnout of returning 1988 Bush voters to match the recorded vote. These
anomalies were due to
a) bad polling, b) Bush voters refused to be polled, c) Bush tried but failed to steal the election.
20 Sensitivity analysis is a useful tool for gauging the effects of
a) various turnout assumptions, b) various vote share assumptions, c) both, d) neither
21 Monte Carlo simulation is a useful tool for
a) predicting the recorded vote, b) electoral vote, c) probability of winning the electoral vote.
22 The expected electoral vote is based on
a) state win probabilities, b) state electoral votes, c) both, d) neither
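Questions 21-22 describe the standard approach: the expected electoral vote is the probability-weighted sum of state electoral votes, and a Monte Carlo simulation turns state win probabilities into a probability of winning the electoral vote. A minimal sketch — the state probabilities and vote counts here are invented for illustration only:

```python
import random

# Hypothetical (state win probability, electoral votes) pairs -- illustrative only
states = [(0.9, 55), (0.6, 29), (0.5, 38), (0.2, 20), (0.7, 16)]

expected_ev = sum(p * ev for p, ev in states)  # probability-weighted expected EV

random.seed(1)
trials = 100_000
wins = sum(
    1 for _ in range(trials)
    if sum(ev for p, ev in states if random.random() < p) >= 80  # 80 of 158 to "win"
)
print(expected_ev, wins / trials)
```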
23 To match the recorded vote, which exit poll cross tab weights and shares are adjusted?
a) when decided, b) voted in prior election, c) party ID, d) gender, e) education, f) income, g) all
24 In 2004 Bush’s final approval rating was 48%. The National Exit Poll had 53%. The change was due to
a) late change in approval, b) different polls, c) forcing the exit poll to match the recorded vote
25 The True Vote Model is designed to calculate the fraud-free (true) vote. It utilizes exit poll shares and calculates returning voters based on the prior election
a) recorded vote, b) votes cast, c) unadjusted exit poll, d) true vote, e) all
NOGA ALON AND ASAF SHAPIRA: A characterization of the (natural) graph properties testable with one-sided error
Results 1 - 10 of 81
- Proc. of STOC 2006 , 2006
Cited by 69 (14 self)
A common thread in all the recent results concerning testing dense graphs is the use of Szemerédi’s regularity lemma. In this paper we show that in some sense this is not a coincidence. Our first
result is that the property defined by having any given Szemerédi-partition is testable with a constant number of queries. Our second and main result is a purely combinatorial characterization of the
graph properties that are testable with a constant number of queries. This characterization (roughly) says that a graph property P can be tested with a constant number of queries if and only if
testing P can be reduced to testing the property of satisfying one of finitely many Szemerédi-partitions. This means that in some sense, testing for Szemerédi-partitions is as hard as testing any
testable graph property. We thus resolve one of the main open problems in the area of property-testing, which was first raised in the 1996 paper of Goldreich, Goldwasser and Ron [24] that initiated
the study of graph property-testing. This characterization also gives an intuitive explanation as to what makes a graph property testable.
- Proc. of STOC 2005 , 2005
Cited by 43 (9 self)
A graph property is called monotone if it is closed under removal of edges and vertices. Many monotone graph properties are some of the most well-studied properties in graph theory, and the abstract
family of all monotone graph properties was also extensively studied. Our main result in this paper is that any monotone graph property can be tested with one-sided error, and with query complexity
depending only on ɛ. This result unifies several previous results in the area of property testing, and also implies the testability of well-studied graph properties that were previously not known to
be testable. At the heart of the proof is an application of a variant of Szemerédi’s Regularity Lemma. The main ideas behind this application may be useful in characterizing all testable graph
properties, and in generally studying graph property testing. As a byproduct of our techniques we also obtain additional results in graph theory and property testing, which are of independent
interest. One of these results is that the query complexity of testing testable graph properties with one-sided error may be arbitrarily large. Another result, which significantly extends previous
results in extremal graph-theory, is that for any monotone graph property P, any graph that is ɛ-far from satisfying P, contains a subgraph of size depending on ɛ only, which does not satisfy P.
Finally, we prove the following compactness statement: If a graph G is ɛ-far from satisfying a (possibly infinite) set of monotone graph properties P, then it is at least δP(ɛ)-far from satisfying
one of the properties.
Cited by 37 (1 self)
We define a distance of two graphs that reflects the closeness of both local and global properties. We also define convergence of a sequence of graphs, and show that a graph sequence is convergent if
and only if it is Cauchy in this distance. Every convergent graph se-quence has a limit in the form of a symmetric measur-able function in two variables. We use these notions of distance and graph
limits to give a general the-ory for parameter testing. As examples, we provide short proofs of the testability of MaxCut and the re-cent result of Alon and Shapira about the testability of
hereditary graph properties.
- Proc. of STOC 2005 , 2005
Cited by 29 (7 self)
In the course of the proof we develop a framework for extending Szemer'edi's Regularity Lemma, both as a prerequisite for formulating what kind of information about the input graph will provide us
with the correct estimation, and as the means for efficiently gathering this information. In particular, we construct a probabilistic algorithm that finds the parameters of a regular partition of an
input graph using a constant number of queries, and an algorithm to find a regular partition of a graph using a TC0 circuit. This, in some ways, strengthens the results of [1].
, 2007
Cited by 26 (3 self)
Suppose G is a graph of bounded degree d, and one needs to remove ɛn of its edges in order to make it planar. We show that in this case the statistics of local neighborhoods around vertices of G is
far from the statistics of local neighborhoods around vertices of any planar graph G ′. In fact, a similar result is proved for any minor-closed property of bounded degree graphs. As an immediate
corollary of the above result we infer that many well studied graph properties, like being planar, outer-planar, series-parallel, bounded genus, bounded tree-width and several others, are testable
with a constant number of queries. None of these properties was previously known to be testable even with o(n) queries. 1
Cited by 24 (3 self)
Property testing algorithms are “ultra”-efficient algorithms that decide whether a given object (e.g., a graph) has a certain property (e.g., bipartiteness), or is significantly different from any
object that has the property. To this end property testing algorithms are given the ability to perform (local) queries to the input, though the decision they need to make usually concern properties
with a global nature. In the last two decades, property testing algorithms have been designed for many types of objects and properties, amongst them, graph properties, algebraic properties, geometric
properties, and more. In this article we survey results in property testing, where our emphasis is on common analysis and algorithmic techniques. Among the techniques surveyed are the following: •
The self-correcting approach, which was mainly applied in the study of property testing of algebraic properties; • The enforce and test approach, which was applied quite extensively in the analysis
of algorithms for testing graph properties (in the dense-graphs model), as well as in other contexts;
- SIGACT News , 2003
Cited by 22 (2 self)
Abstract. Sublinear time algorithms represent a new paradigm in computing, where an algorithm must give some sort of an answer after inspecting only a very small portion of the input. We discuss the sorts of answers that one might be able to achieve in this new setting.
1 Introduction. The goal of algorithmic research is to design efficient algorithms, where efficiency is typically measured as a function of the length of the input. For instance, the elementary school algorithm for multiplying two n digit integers takes roughly n^2 steps, while more sophisticated algorithms have been devised which run in less than n log^2 n steps. It is still not known whether a linear time algorithm is achievable for integer multiplication. Obviously any algorithm for this task, as for any other nontrivial task, would need to take at least linear time in n, since this is what it would take to read the entire input and write the output. Thus, showing the existence of a linear time algorithm for a problem was traditionally considered to be the gold standard of achievement. Nevertheless, due to the recent tremendous increase in computational power that is inundating us with a multitude of data, we are now encountering a paradigm shift from traditional computational models. The scale of these data sets, coupled with the typical situation in which there is very little time to perform our computations, raises the issue of whether there is time to consider any more than a minuscule fraction of the data in our computations. Analogous to the reasoning that we used for multiplication, for most natural problems, an algorithm which runs in sublinear time must necessarily use randomization and must give an answer which is in some sense imprecise. Nevertheless, there are many situations in which a fast approximate solution is more useful than a slower exact solution.
Cited by 22 (4 self)
Property testing deals with tasks where the goal is to distinguish between the case that an object (e.g., function or graph) has a prespecified property (e.g., the function is linear or the graph is
bipartite) and the case that it differs significantly from any such object. The task should be performed by observing only a very small part of the object, in particular by querying the object, and
the algorithm is allowed a small failure probability. One view of property testing is as a relaxation of learning the object (obtaining an approximate representation of the object). Thus property
testing algorithms can serve as a preliminary step to learning. That is, they can be applied in order to select, very efficiently, what hypothesis class to use for learning. This survey takes the
learning-theory point of view and focuses on results for testing properties of functions that are of interest to the learning theory community. In particular, we cover results for testing algebraic
properties of functions such as linearity, testing properties defined by concise representations, such as having a small DNF representation, and more.
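The linearity testing mentioned in the last abstract is the classic Blum-Luby-Rubinfeld (BLR) test: query f at random points x and y and check whether f(x) ⊕ f(y) = f(x ⊕ y). A small sketch over GF(2) bit vectors — the specific functions below are my own illustrative choices:

```python
import random

def blr_test(f, n_bits, trials=200):
    """Blum-Luby-Rubinfeld test: f is linear over GF(2) iff f(x) ^ f(y) == f(x ^ y)."""
    for _ in range(trials):
        x = random.getrandbits(n_bits)
        y = random.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):
            return False        # found a witness: f is certainly not linear
    return True                 # no witness found: f is probably close to linear

random.seed(0)
linear = lambda x: bin(x & 0b1011).count("1") % 2          # parity of a fixed bit mask
broken = lambda x: 1 - linear(x) if x == 3 else linear(x)  # same function, one point flipped

print(blr_test(linear, 4), blr_test(broken, 4))  # True False
```

Note the property-testing flavor: the tester only queries f at a constant number of points, and a single failed check is conclusive evidence of non-linearity.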
Applying Matrices to Vertices, and Saving the Transformed Vertices [Archive] - OpenGL Discussion and Help Forums
What should I do if I want to have matrix transformations not only affect vertices as they are drawn, but actually want to store their location in literal modelview space (thus, with the matrix
transformation) in an array or something?
Like in skeletal animation. I will have an individual mesh attached to each bone, which will be stored in the communal vertex array. In skeletal animation, each bone has a transformation matrix; when
you apply these to each mesh, in order, with pushes and pops when you go backwards, and draw each as you go (with glDrawRangeElements), it will output everything correctly in the model. However, the
rest of your program will have no information about where in real space everything in your model is, since you used transformation matrices and don't have real vertices representing where the
vertices in your meshes currently are relative to everything else.
So how do you save a copy of the vertices having been transformed by the matrix?
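One common answer: keep your own copy of each bone's matrix, multiply each vertex by it on the CPU, and store the results in a separate array — the fixed-function matrix stack never hands transformed vertices back to you. Sketched here in Python with plain lists rather than OpenGL types, purely as an illustration:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major, list of rows) by a 4-component vertex."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A translation by (1, 2, 3), like glTranslatef(1, 2, 3) would produce
bone_matrix = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]

mesh = [[0, 0, 0, 1], [1, 0, 0, 1]]                # homogeneous vertices (w = 1)
skinned = [mat_vec(bone_matrix, v) for v in mesh]  # store these for collision tests, etc.
print(skinned)  # [[1, 2, 3, 1], [2, 2, 3, 1]]
```

For a bone hierarchy you would compose each bone's matrix with its parent's (mirroring the push/pop traversal) before transforming that bone's vertices.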
Let R be the region in the first quadrant enclosed between the x-axis and the curve y = x - x^2. There is a straight line passing through the origin which cuts the region R into two regions so that
the area of the lower region is 7 times the area of the upper region. Find the slope of this line.
First solve for the point where the line intersects the curve:
x - x^2 = mx
x(1 - x) = mx
1 - x = m
x = 1 - m
Then define A as the area of the upper region. The area of the entire region R is 8A, so A = (1/8)R. A is also the area above the line from 0 to 1 - m:
\[A = \int\limits_{0}^{1-m}\left[(x-x^{2})- mx\right]dx = \frac{1}{8}\int\limits_{0}^{1}(x-x^{2})\,dx\]
Combine the x terms in the first integral:
\[A = \int\limits_{0}^{1-m}\left[(1-m)x-x^{2}\right]dx = \frac{1}{8}\int\limits_{0}^{1}(x-x^{2})\,dx\]
Integrate and evaluate to solve for m.
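Carrying out that last step: the left integral evaluates to (1-m)³/6 and the whole region R has area 1/6, so (1-m)³ = 1/8 and the slope is m = 1/2. A quick numerical check of that slope, using simple midpoint quadrature:

```python
def integrate(f, a, b, n=100_000):
    """Midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

m = 0.5
upper = integrate(lambda x: (x - x**2) - m * x, 0, 1 - m)  # region above the line
total = integrate(lambda x: x - x**2, 0, 1)                # all of R
lower = total - upper

print(round(lower / upper))  # 7: the lower region is 7 times the upper one
```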
Multi-Step Equations
Two-step equations generally consist of exactly three terms: a variable term and a number on one side, and a number on the other side.
The phrase “multi-step equation” simply means an equation that has a few more “bells and whistles” than a two-step equation. Multi-step equations may have several variable terms or number terms that
need to be combined before the equation can be solved.
3a + 5 + 4a + 7 = 11 + 22
6b + 12 + 8b = 40
Notice that by combining like terms, each of the above equations can be changed into a two-step equation.
The first equation can be simplified like this:
3a + 5 + 4a + 7 = 11 + 22
7a + 12 = 33
The second equation can be simplified like this:
6b + 12 + 8b = 40
14b + 12 = 40
In order to be able to solve multi-step equations, it is important to understand the concept of like terms.
Consider the following terms: y, 4x, 9, 15, 6x, 2y, 7x, 10, 4y, -8
These terms can be split into three categories:
x-terms: 4x, 6x, 7x
y-terms: y, 2y, 4y
number terms: 9, 15, 10, -8
When adding and subtracting numbers, only like terms can be combined.
The following equation can be simplified by combining like terms.
3x + 8 + 2x = 11 + 27
Before solving, combine like terms on each side.
On the left side, 3x + 2x = 5x.
On the right side, 11 + 27 = 38.
This reduces the equation to the two-step equation 5x + 8 = 38, which can then be solved: subtract 8 from both sides to get 5x = 30, then divide by 5 to get x = 6.
It often helps to underline like terms in order to differentiate them. You may even find it helpful to use colored pencils to differentiate variable terms from number terms. The variable terms in the
example can be colored red and the number terms blue.
When two terms are multiplied by a single number, the distributive property can be used in your solution.
4(x + 6) – 11 = 25
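To finish this example: distributing the 4 gives 4x + 24 - 11 = 25, which simplifies to 4x + 13 = 25, so 4x = 12 and x = 3. The same solution can also be reached without distributing, by undoing one operation at a time — sketched here as code:

```python
# Solve 4(x + 6) - 11 = 25 by undoing each operation in turn
c = 25 + 11      # add 11 to both sides: 4(x + 6) = 36
c = c / 4        # divide both sides by 4: x + 6 = 9
x = c - 6        # subtract 6 from both sides: x = 3

print(x, 4 * (x + 6) - 11 == 25)  # 3.0 True
```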
Charlestown, PA Prealgebra Tutor
Find a Charlestown, PA Prealgebra Tutor
...Algebra 2 continues the basic principles learned in algebra 1 regarding equations and functions. My goal is to provide a simple explanation to complex ideas at the student's level to help them
master algebraic skills and concepts. I will work with the student to obtain a better understanding of...
9 Subjects: including prealgebra, chemistry, geometry, algebra 1
...To help my students learn from literature, I try to ask challenging questions and assign a variety of writing prompts to help them learn the underlying concepts of the text. As an English
teacher, I am well-practiced in proofreading student writing. I can help students improve their creative, persuasive, expository, and essay writing.
12 Subjects: including prealgebra, reading, English, writing
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and
non-euclidean geometry. As an undergraduate, I took three semesters of elementary calculus, which included tw...
12 Subjects: including prealgebra, calculus, writing, geometry
...I like reading grammar books and I'm a fairly strong writer. I've also tutored students in grammar and language. I'm a Princeton alum who majored in Classics with 4 years of experience in
college level Latin.
10 Subjects: including prealgebra, algebra 1, vocabulary, grammar
Hello! My name is Blaise. I have five years classroom experience, and have been tutoring on the side since college.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
Making data dance
By Carl Mäsak (masak)
Date: Saturday, 12 November 2011 10:20
Duration: 40 minutes
Language: English
Tags: algorithms languages problems solutions 蝶
Donald Knuth's "Dancing Links" algorithm deserves wider recognition. It can solve the N-queens problem. It can help us tile pentominoes. It makes solving a Sudoku problem trivial.
There's only one little issue: these problems are all solved in another domain which we can call the "big huge matrix of ones and zeroes domain". Constructing such a matrix manually is about as fun
as boning fish.
Solution: I wrote a set of parsers that can understand a given problem type, so that we can always deal with cute ASCII representations of the problems, and never the matrix itself. We become
unfettered from the technical specifics of the algorithm, while still reaping all the benefits from it.
The same idea was implemented in Perl 5/Moose (for prototyping), C (for speed), and Perl 6 (for beauty), and I will say a thing or two about what's nice about implementing a small project like this
in each of those languages.
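For the curious, the "big huge matrix of ones and zeroes domain" is the exact cover problem, and Knuth's Algorithm X — which Dancing Links implements efficiently — fits in a few lines. This sketch (not from the talk; it uses dicts of sets rather than the doubly linked lists of real Dancing Links) solves Knuth's classic six-row example:

```python
def solve(X, Y, solution=[]):
    """Yield exact covers. X maps column -> set of rows covering it; Y maps row -> columns."""
    if not X:
        yield list(solution)
        return
    col = min(X, key=lambda c: len(X[c]))  # pick the most constrained column
    for row in list(X[col]):
        solution.append(row)
        removed = select(X, Y, row)
        yield from solve(X, Y, solution)
        deselect(X, Y, row, removed)       # backtrack: undo, in reverse order
        solution.pop()

def select(X, Y, row):
    removed = []
    for j in Y[row]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)
        removed.append(X.pop(j))
    return removed

def deselect(X, Y, row, removed):
    for j in reversed(Y[row]):
        X[j] = removed.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)

# Knuth's example: the unique exact cover is rows B, D, F
Y = {'A': [1, 4, 7], 'B': [1, 4], 'C': [4, 5, 7],
     'D': [3, 5, 6], 'E': [2, 3, 6, 7], 'F': [2, 7]}
X = {j: {r for r in Y if j in Y[r]} for j in range(1, 8)}
print([sorted(s) for s in solve(X, Y)])  # [['B', 'D', 'F']]
```

The problem-specific parsers the talk describes are exactly what builds Y (rows and the columns they cover) from a Sudoku grid, a pentomino board, or an N-queens instance.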
On the exponent of the group of points on elliptic curves in extension fields
- J. Number Th
Cited by 3 (1 self)
We show that finite fields over which there is a curve of a given genus g ≥ 1 with its Jacobian having a small exponent, are very rare. This extends a recent result of W. Duke in the case g = 1. We
also show when g = 1 or g = 2 that our bounds are best possible. 1
Abstract. We present a lower bound for the exponent of the group of rational points of an elliptic curve over a finite field. Earlier results considered finite fields F_{q^m} where either q is fixed or m = 1 and q is prime. Here we let both q and m vary, and our estimate is explicit and does not depend on the elliptic curve.
1. Introduction. Let F_q be a finite field with q = p^m elements and let E be an elliptic curve defined over F_q. It is well known (see for example the book of Washington [7]) that the group of rational points of E over F_q satisfies E(F_q) ≅ Z_n × Z_{nk}, where n, k ∈ N are such that n | q − 1. The exponent of E(F_q) is exp(E(F_q)) = nk. In 1989 Schoof [6] proved that if E is an elliptic curve over Q without complex multiplication, then for every prime p > 2 of good reduction for E, one has the estimate
exp(E(F_p)) > C_E √p (log p)/(log log p)^2,
where C_E > 0 is a constant depending only on E. In 2005 Luca and Shparlinski [4] considered the case when q is fixed, and they proved that if E/F_q is ordinary, then there exists an effectively computable constant ϑ(q) depending only on q such that
(1) exp(E(F_{q^m})) > q^{m/2 + ϑ(q) m / log m}
holds for all positive integers m. Other lower bounds that hold for families of primes (resp. for families of powers of fixed primes) with density one were proven by Duke in [1] (resp. by Luca and Shparlinski in [4]). Here we let both p and m vary and we prove the following
Theorem. Let E be any elliptic curve over F_{p^m} where m ≥ 3; then either m = 2r is even and E(F_{p^{2r}}) ≅ Z_{p^r ± 1} × Z_{p^r ± 1}, or …
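To make the objects concrete: for a small prime p one can enumerate E(F_p) directly and compute its exponent as the lcm of the point orders. A brute-force sketch — the curve y² = x³ + x + 1 over F_5 is an arbitrary illustrative choice; real applications use Schoof-style point counting, not enumeration:

```python
from math import lcm

def ec_points(a, b, p):
    """All points of y^2 = x^3 + a*x + b over F_p, with None as the identity."""
    pts = [None]
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        pts += [(x, y) for y in range(p) if (y * y) % p == rhs]
    return pts

def ec_add(P, Q, a, p):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def order(P, a, p):
    n, Q = 1, P
    while Q is not None:
        Q, n = ec_add(Q, P, a, p), n + 1
    return n

a, b, p = 1, 1, 5
pts = ec_points(a, b, p)
exponent = lcm(*(order(P, a, p) for P in pts))
print(len(pts), exponent)  # 9 9 -- this E(F_5) is cyclic of order 9
```

Here n = 1 (the group is cyclic), consistent with the constraint n | q − 1 = 4 from the abstract above: a group Z_3 × Z_3 of order 9 is impossible since 3 does not divide 4.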
, 2006
"... A lower bound for the r-order of a matrix modulo N ..."
, 2008
In this paper, we prove that the period of the continued fraction expansion of √(2^n + 1) tends to infinity when n tends to infinity through odd positive integers.
, 2009
Add to MetaCart
Let C be a smooth absolutely irreducible curve of genus g ≥ 1 defined over Fq, the finite field of q elements, and let #C(Fqn) be the number of Fqn-rational points on C. Under a certain condition,
which for example, satisfied by all ordinary elliptic curves, we obtain an asymptotic formula for the number of ratios (#C(Fqn)−qn −1)/2gqn/2, n = 1,...,N, inside of a given interval I ⊆ [−1,1]. This
can be considered as an analogue of the Sato–Tate distribution which covers the case when the curve E is defined over Q and considered modulo consecutive primes p, although in our scenario the
distribution function is different. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=5934829","timestamp":"2014-04-18T17:30:07Z","content_type":null,"content_length":"22894","record_id":"<urn:uuid:ff4f049f-e8e7-451b-912f-bb970b250b17>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
Create mathPad Questions Using Algebraic Mode with Mathematica
You can create mathPad questions that use Algebraic mode and a Mathematica^® grading statement to compare the answer key and your students' responses for mathematical equivalence. Using Algebraic
mode and Mathematica lets you accurately evaluate your students' responses in situations where Symbolic evaluation cannot be used — for example, to distinguish between factored and unfactored
expressions, or for questions with multiple correct answers.
Note: Do not require your students to use function notation in an answer. WebAssign^® cannot grade answers that use function notation.
To create a mathPad question using Algebraic mode with Mathematica:
1. Click .
The Question Editor opens.
2. In Name, type a name for the question.
3. In Mode, select Algebraic.
4. In Question, type your question.
□ Use the answer placeholder string <_> to specify where the answer box should be displayed.
□ Be sure that your question identifies any variables that the student should use in their answer.
5. In the Question Editor, click Mathematica under Page Tools to create and test your Mathematica grading statement and answer key.
a. In the Mathematica window, type your grading statement, using Mathematica expressions for the answer key and student response that you want to test.
☆ Your grading statement is a Mathematica statement providing information about how to compare your answer key and your students' responses.
☆ Your answer key is a Mathematica expression specifying the correct answer to the question; sometimes the answer key is one of multiple possible correct answers.
b. Click Execute.
Your grading statement is evaluated using the expressions you specified for the answer key and student response, and the result is displayed. If your grading statement evaluates to True, then
the response will be marked correct. Otherwise, the response will be marked incorrect.
For example, if your question asks students to calculate an indefinite integral, your grading statement might compare the derivatives of your answer key and of your student's response:
The answer key specifies one of the correct responses: 3 · sin(x^2) + 2x + C. The specified response is also a valid answer and the grading statement evaluates as True.
Note: Most, but not all, Mathematica expressions are valid in WebAssign. Any expression that works in the WebAssign Mathematica tool will work in your question.
6. In Answer, type the following items on a single line:
<eqn $CASGRADER='mathematica'; $PAD='devmath'; ''>
variable_list:answer_key {tab} grading_statement
□ variable_list is a comma-delimited list of the variables used in the answer key.
□ answer_key is the Mathematica answer key you created in the previous step.
□ grading_statement is the Mathematica grading statement you created, with your answer key and student response expressions replaced by the keywords key and response. When the question is
scored, the actual answer key and student response values will be used in place of these keywords.
□ To add the {tab} operator, either type the characters {tab} or click Add tab. You cannot enter {tab} by pressing the Tab key.
□ If an answer extends beyond the right side of the Answer box, it is wrapped to the next line, but it is still considered a single line so long as you do not press ENTER.
For example, the following answer key and grading statement allows students to submit the equation of an ellipse in standard form with the 1 on either side of the equation:
<EQN $CASGRADER='mathematica'; $PAD='devmath';'' >
x,y:(x+4)^2/9+(y-5)^2/5 == 1 {tab} Apply[List,key] ==
{(response)[[1]],(response)[[2]]}||Apply[List,key] ==
{(response)[[2]],(response)[[1]]}
The following responses would be accepted as correct:
Responses not in the standard form, such as the following, would be marked incorrect:
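The either-side acceptance logic of this grading statement can be sketched in plain Python (a toy stand-in only: equations are modeled as (lhs, rhs) pairs compared literally, whereas the actual grading statement compares the sides for mathematical equivalence via Mathematica):

```python
# Toy model of the ellipse grading statement: an equation is a pair
# (lhs, rhs); a response is accepted if it matches the key with the
# sides in either order, i.e. the "1" may appear on either side.
def grade(key, response):
    k_lhs, k_rhs = key
    return response == (k_lhs, k_rhs) or response == (k_rhs, k_lhs)

key = ("(x+4)^2/9+(y-5)^2/5", "1")

assert grade(key, ("(x+4)^2/9+(y-5)^2/5", "1"))       # standard form
assert grade(key, ("1", "(x+4)^2/9+(y-5)^2/5"))       # sides swapped
assert not grade(key, ("(x+4)^2/9+(y-5)^2/5-1", "0")) # rearranged form
```

The last case shows why the real statement rejects responses not in standard form: only the two side orderings of the key are accepted.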
7. Optional: Type a Solution.
The solution helps your students understand the steps they need to take to determine the correct answer to the question. Your assignment settings specify when to show the solution.
8. Click Test/Preview to test the appearance and behavior of the question. See Test Questions.
9. Click Redisplay to show certain kinds of errors in the Display section of the Question Editor. Make any needed changes to your question.
10. Optional: Click Show Additional Information and change the question's sharing permission or add descriptive information.
□ By default, other instructors can use your question only if you provide them with the question ID, and only you can edit the question or find it in search results. To change the permission,
see Share Questions With Other Instructors.
□ If you make your question publicly available, you might want to provide descriptive information to help others search for it. See Add Search Metadata to Questions.
11. When your question displays and functions correctly, click Save.
WebAssign assigns it a unique question ID (QID), which is displayed in parentheses after the question name.
You can use your question in an assignment and see it in your My Questions list only after it is saved.
Example mathPad Question Using Algebraic Mode with Mathematica
The following table summarizes an actual question.
│ QID │ 1344935 │
│ Name │ Template2 4.MATHP.02. │
│ Mode │ Algebraic │
│ Question │ Find the equation in standard form of the following ellipse: │
│ │ <div class="indent"> │
│ │ Center: (-4, 5)<br> │
│ │ Vertices: (-7, 5) and (-1, 5)<br> │
│ │ Foci: (-6, 5) and (-2, 5) │
│ │ </div> │
│ │ <_> │
│ Answer │ <EQN $CASGRADER='mathematica'; $PAD='devmath';'' > │
│ │ x,y:(x+4)^2/9+(y-5)^2/5 == 1 {tab} Apply[List,key] == │
│ │ {(response)[[1]],(response)[[2]]}||Apply[List,key] == │
│ │ {(response)[[2]],(response)[[1]]} │
│ Display to Students │ │
Regarding Coordinate Transforms [Archive] - OpenGL Discussion and Help Forums
05-12-2012, 03:12 AM
Hello All!
I am reading this wonderful set of articles by Jason L. McKesson. I am currently reading the translation article:
>>This is a translation transformation: it is used to position the origin point of the initial space relative to the destination space. Since all of the coordinates in a space are relative to the
origin point of that space, all a translation needs to do is add a vector to all of the coordinates in that space. The vector added to these values is the location of where the user wants the origin
point relative to the destination coordinate system.
Now, my question is: why can't we consider this scenario as transforming a coordinate instead of transforming a coordinate system? I am confused as to why Jason describes it as moving to a new
coordinate system, instead of moving the coordinates of the point.
I also remember reading in 3D Math Primer for Graphics and Game Development that transforming a coordinate is not the same as a coordinate system transformation (and that they would be opposite to each other).
But, since they yield the same result, can't we use either point of view, i.e. seeing it as a coordinate transformation or a transformation of the coordinate system?
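A quick numeric illustration (plain Python, values arbitrary) of why the two views come out opposite to each other:

```python
# A point, and a translation vector.
p = (2.0, 3.0)
v = (5.0, -1.0)

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

# View 1: transform the point -- move p by v inside a fixed frame.
p_moved = add(p, v)          # (7.0, 2.0)

# View 2: transform the frame -- keep p fixed, but move the frame's
# origin to v; p's coordinates relative to the new frame are p - v.
p_in_new_frame = sub(p, v)   # (-3.0, 4.0)

# Moving the frame by v has the same effect on coordinates as moving
# every point by -v: the two views use opposite translations.
assert p_in_new_frame == add(p, (-v[0], -v[1]))
```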
PS: Please provide links to threads if a similar question has already been asked.
192 pounds in kg
You asked:
192 pounds in kg
87.08973504 kilograms
the mass 87.08973504 kilograms
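The conversion uses the exact definition of the avoirdupois pound (0.45359237 kg); as a one-line sketch:

```python
# The avoirdupois pound is defined as exactly 0.45359237 kg,
# so 192 lb = 192 * 0.45359237 kg = 87.08973504 kg.
LB_TO_KG = 0.45359237

def pounds_to_kg(lb):
    return lb * LB_TO_KG

print(pounds_to_kg(192))
```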
Realizing higher level Fock spaces
Let $\mathfrak{g}$ = $\mathfrak{gl}_{\infty}$.
To each positive integer $k$ one can associate the level $k$ Fock space $\mathcal{F}_{k}$.
For a dominant weight $\lambda$ of level $k$, one can define an action of $\mathfrak{g}$ on $\mathcal{F}_{k}$
so that it has a highest weight vector of weight $\lambda$ which generates the corresponding irreducible $\mathfrak{g}$-module. Let us denote $\mathcal{F}_{k}$ with this action as $\mathcal{F}_{k}(\lambda)$.
My question is about realizing these spaces and actions. In the lectures of Kac and Raina "Highest-weight representations of infinite dimensional Lie algebras", there is a construction of $\mathcal{F}_{1}(\lambda)$ (here $\lambda$ is a fundamental weight) as a subspace of the semi-infinite wedge space. In this realization the action of $\mathfrak{g}$ is just the natural action on wedge products. Is there an analogous realization of $\mathcal{F}_{k}(\lambda)$ for higher $k$? What about for $\mathfrak{g}$ = $\hat{\mathfrak{sl}}_{p}$?
rt.representation-theory lie-algebras
3 Answers
For $\mathfrak{gl}_{\infty}$ the answer is easy: you realize the Fock space as a direct limit of polynomial representations of finite $\mathfrak{gl}_n$ modules. You can read about the construction here. I worked on the $\hat{sl}_p$ case a few years ago, but got stuck. If you could do it, it would be very nice.
I guess I should say that it is straightforward to extend the $\mathfrak{gl}_\infty$-module I mentioned above to $a_\infty$ in a way analogous to the Kac-Raina level 1 extension. So,
you get an action of $\hat{\mathfrak{sl}}_p$ on the Fock space. The problem is that it is not generally irreducible as a $\hat{\mathfrak{sl}}_p$-module. – David Hill Oct 4 '10 at
Thank you very much for this link. I think this is very close to what I was looking for, although I haven't had a chance to look at it closely yet. A question: from what I
understand the level k Fock space has a basis indexed by k multi-partitions. Is this "visible" in your construction? – Oded Yacobi Oct 4 '10 at 20:37
I'll have to think about it more, but my first instinct is that the answer is no. The $k$-multipartition description comes from tensoring together $k$ level 1 representations, each
parametrized by partitions. In my paper, I skipped the big space and defined the action in one shot. Then again, maybe the translation is not too bad? I'll think some more. – David
Hill Oct 4 '10 at 21:14
Higher level Fock spaces have been studied in the context of the quantum affine algebra $U_q(\widehat{sl}_n)$. There is a "higher level Fock space" representation for this algebra whose
underlying space looks like semi-infinite wedge space. I believe the original reference is Jimbo, Miwa, Misra and Okado "Combinatorics of representations of $U_q(\widehat{sl}_n)$ at $q=0$"
although there the wedge space structure is not clear. That is explained in Uglov's paper "Canonical bases of higher level $q$-deformed Fock space and Kazhdan-Lusztig polynomials", http://arxiv.org/abs/math/9905196.
Higher level Fock space is more complicated than the level 1 case. For instance, many different irreducible representations occur as direct summands of Fock space. In order to get a
realization of a single irreducible highest weight representation, you need to pick off the irreducible subrepresentation generated by a certain overall highest weight vector. On the level
of representations, this is difficult. However, in the "crystal limit" (i.e. at $q=0$), this can be done quite easily. The basis of the resulting representation is naturally indexed by $\ell$-tuples of partitions (where $\ell$ is the level) satisfying a couple of conditions. This fact has been useful in studying crystal bases of these higher level representations.
Thanks! Does the higher level Fock space carry a rep of $U_{q}(\hat{\mathfrak{sl}}_{n})$ only for generic $q$, or also for $q=1$ or $q$ root of unity? – Oded Yacobi Oct 5 '10 at 0:07
The papers I mentioned deal with generic $q$. I think one should be able to make sense of the construction at $q=1$ though. This is just because the formulas for the actions of $E_i$ and
$F_i$ on the standard basis of Fock space make sense at $q=1$. See Theorem 2.1 in Uglov's paper. These seem to make sense at other roots of unity as well...although certainly the structure
of the representation would be much more complicated in those cases. – Peter Tingley Oct 5 '10 at 1:58
Representation theory of direct limit Lie algebras (like $\mathfrak{gl}_\infty$) has been studied extensively by Dimitrov, Penkov, and Styrkas. You can find their papers on the
Which year was Sarah Chalke born in?
You asked:
Which year was Sarah Chalke born in?
The λ-calculus
The lambda calculus is:
• a simple programming language;
• a model of computation (akin to Turing machines and recursive functions), through which we can study the computability and complexity of functions and predicates; and
• an internal language for cartesian closed categories (in its typed variant).
It comes in both typed and untyped (or, more correctly, single-typed) versions.
Abstraction and application
The basic constructs of lambda calculus are lambda abstractions and applications. If $t$ is an expression of some sort involving a free variable $x$ (possibly vacuously), then the lambda abstraction
$\lambda x. t$
is intended to represent the function which takes one input and returns the result of substituting that input for $x$ in $t$. Thus, for instance, $(\lambda x. (x+1))$ is the function which adds one
to its argument. Lambda expressions are often convenient, even outside lambda calculus proper, for referring to a function without giving it a name.
Application is how we “undo” abstraction, by applying a function to an argument. The application of the function $f$ to the argument $t$ is generally denoted simply by $f t$. Applications can be
parenthesized, so for instance $f(t)$ and $(f)t$ and $(f t)$ all denote the same thing as $f t$.
Application is generally considered to associate to the left. Thus $u v w$ denotes the application of $u$ to $v$, followed by application of the result (assumed to again be a function) to $w$. This
allows the representation of functions of multiple variables in terms of functions of one variable via “currying” (named for Haskell Curry, although it was invented by Moses Schönfinkel): after being
applied to the first argument, we return a function which, applied to the next argument, returns a function which, when applied to the next argument, … , returns a value. For instance, the “addition”
function of two variables can be denoted $(\lambda x. (\lambda y. x+y))$: when applied to an argument $x$, it returns a function which, when applied to an argument $y$, returns $x+y$. This is so
common that it is generally abbreviated $(\lambda x y. x+y)$.
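Currying is directly expressible in any language with first-class functions; a small Python sketch:

```python
# Curried addition: a function of one argument that returns another
# function of one argument, mirroring (lambda x. (lambda y. x+y)).
add = lambda x: (lambda y: x + y)

add_three = add(3)   # partially applied
print(add_three(4))  # 7
print(add(1)(2))     # 3 -- application associates to the left: (add 1) 2
```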
Evaluation and Reduction
Evaluation or reduction is the process of “computing” the “value” of a lambda term. The most basic operation is called beta reduction and consists in taking a lambda abstraction at its word about
what it is supposed to do when applied to an input. For instance, the application $(\lambda x. x+1) 3$ reduces to $3+1$ (and thereby, presuming appropriate rules for $+$, to $4$). Terms which can be
connected by a zigzag of beta reductions (in either direction) are said to be beta-equivalent.
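Beta reduction can be implemented directly on a toy term representation. A minimal Python sketch (note: this omits alpha-renaming, so it assumes bound variable names never clash with free ones):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)
def subst(t, name, val):
    tag = t[0]
    if tag == 'var':
        return val if t[1] == name else t
    if tag == 'lam':
        if t[1] == name:
            return t  # name is shadowed; stop substituting
        return ('lam', t[1], subst(t[2], name, val))
    return ('app', subst(t[1], name, val), subst(t[2], name, val))

def beta_step(t):
    """Perform one leftmost-outermost beta reduction, or return None."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
        r = beta_step(f)
        if r is not None:
            return ('app', r, a)
        r = beta_step(a)
        if r is not None:
            return ('app', f, r)
    elif t[0] == 'lam':
        r = beta_step(t[2])
        if r is not None:
            return ('lam', t[1], r)
    return None

def normalize(t, limit=100):
    for _ in range(limit):
        r = beta_step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form within limit")

# (\x. \y. x) a b  reduces to  a
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
term = ('app', ('app', K, ('var', 'a')), ('var', 'b'))
print(normalize(term))  # ('var', 'a')
```

The `limit` guard matters: as noted below, some terms (such as the self-application of self-application) admit an infinite sequence of reductions and never reach a normal form.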
Another basic operation often assumed in the lambda calculus is eta reduction/expansion, which consists of identifying a function, $f$ with the lambda abstraction $(\lambda x. f x)$ which does
nothing other than apply $f$ to its argument. (It is called “reduction” or “expansion” depending on which “direction” it goes in, from $(\lambda x. f x)$ to $f$ or vice versa.)
A more basic operation than either of these, which is often not even mentioned at all, is alpha equivalence; this consists of the renaming of bound variables, e.g. $(\lambda x. f x) \to (\lambda y. f y)$.
More complicated systems that build on the lambda calculus, such as various type theories, will often have other rules of evaluation as well.
In good situations, lambda-calculus reduction is confluent and terminating (the Church-Rosser theorem), so that every term has a normal form, and two terms are equivalent precisely when they have
the same normal form.
Pure lambda calculus
In the pure “untyped” lambda calculus, there is only one kind of variable and one kind of term, and the only construction used to form expressions is application of a function $f$ to an argument $t$,
generally denoted simply $f t$. In particular, all variables and terms “represent functions”, and can be applied to any other variable or term.
From the point of view of type theory, it is more appropriate to call this “single-typed” or “unityped” lambda-calculus rather than “untyped” — there is a single type which all terms belong to.
As an example of the sort of freedom this allows, any term can always be applied to itself. We can then form the term $\lambda x. x x$ which applies its argument to itself. The self-application of
this term:
$(\lambda x. x x) (\lambda x. x x)$
is a classic example of a term which admits an infinite sequence of beta-reductions (each of which leads back to itself).
In pure untyped lambda calculus, we can define natural numbers using the Church numerals: the number $n$ is represented by the operation of $n$-fold iteration. Thus for instance we have $2 = \lambda
f. (\lambda x.f (f x))$, the function which takes a function $f$ as input and returns a function that applies $f$ twice. Similarly $1 = \lambda f. (\lambda x.f x)$ is the identity on functions, while
$0 = \lambda f. (\lambda x . x)$ takes any function $f$ to the identity function (the 0th iterate of $f$). We can then construct (very inefficiently) all of arithmetic, and prove that the arithmetic
functions expressible by lambda terms are exactly the same as those computable by Turing machines or (total) recursive functions.
The most natural sort of model of pure lambda calculus is a set or other object $D$ which is equivalent to its own exponential $D^D$. Of course there are no nontrivial such models in sets, but they
do exist in other categories, such as domains. It is worth remarking that a necessary condition on such $D$ is that every term $f \colon D^D$ have a fixed-point; see fixed-point combinator.
Simply typed lambda calculus
In simply typed lambda calculus, each variable and term has a type, and we can only form the application $f t$ if $t$ is of some type $A$ while $f$ is of a function type $A \to B = B^A$ whose domain
is $A$; the type of $f t$ is then $B$. Similarly, if $x$ is a variable of type $A$ and $t$ is a term of type $B$ involving $x$, then $\lambda x. t$ has type $A\to B$.
Without some further type and term constructors, there is not much that can be done, but if we add a natural numbers object (that is, a type $N$ with constants $0$ of type $N$ and $s$ of type $N\to
N$, along with a “definition-by-recursion” operator), then we can express many recursive functions. (We cannot by this means express all computable functions, although we can go beyond primitive recursive functions; for instance we can define the Ackermann function. One way to increase the expressiveness to all partial recursive functions is to add a fixpoint combinator, or an unbounded search operator).
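The following Python sketch (standing in for typed lambda calculus) shows how the definition-by-recursion operator, used at higher type, yields the Ackermann function:

```python
# nat_rec(z, s, n) computes s(s(...s(z)...)) with n applications of s --
# the "definition-by-recursion" operator for the natural numbers type.
def nat_rec(z, s, n):
    acc = z
    for _ in range(n):
        acc = s(acc)
    return acc

succ = lambda n: n + 1

# Ackermann via recursion at higher type: A(0) = succ, and
# A(m+1)(n) = A(m) iterated n+1 times starting from 1.
def ack(m):
    return nat_rec(succ, lambda g: (lambda n: nat_rec(1, g, n + 1)), m)

print(ack(1)(5))  # 7   (A(1, n) = n + 2)
print(ack(2)(5))  # 13  (A(2, n) = 2n + 3)
print(ack(3)(3))  # 61  (A(3, n) = 2^(n+3) - 3)
```

The recursion here happens at the function type $N \to N$ rather than at $N$ itself, which is exactly what pushes the definition beyond primitive recursion at base type.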
Simply typed lambda calculus is the natural internal language of cartesian closed categories. This means that
• Every cartesian closed category gives rise to a simply typed lambda calculus whose basic types are its objects, and whose basic terms are its morphisms, while
• Every simply typed lambda calculus “generates” a cartesian closed category whose objects are its types and whose morphisms are its equivalence classes of terms.
These two operations are adjoint in an appropriate sense.
Functional programming
Most functional programming languages, such as Lisp, ML, and Haskell, are at least loosely based on lambda calculus.
Comparability of Results from Pair and Classical Model Formulations for Different Sexually Transmitted Infections
The “classical model” for sexually transmitted infections treats partnerships as instantaneous events summarized by partner change rates, while individual-based and pair models explicitly account for
time within partnerships and gaps between partnerships. We compared predictions from the classical and pair models over a range of partnership and gap combinations. While the former predicted similar
or marginally higher prevalence at the shortest partnership lengths, the latter predicted self-sustaining transmission for gonorrhoea (GC) and Chlamydia (CT) over much broader partnership and gap
combinations. Predictions on the critical level of condom use (C[c]) required to prevent transmission also differed substantially when using the same parameters. When calibrated to give the same
disease prevalence as the pair model by adjusting the infectious duration for GC and CT, and by adjusting transmission probabilities for HIV, the classical model then predicted much higher C[c]
values for GC and CT, while C[c] predictions for HIV were fairly close. In conclusion, the two approaches give different predictions over potentially important combinations of partnership and gap
lengths. Assuming that it is more correct to explicitly model partnerships and gaps, then pair or individual-based models may be needed for GC and CT since model calibration does not resolve the
Citation: Ong JBS, Fu X, Lee GKK, Chen MI-C (2012) Comparability of Results from Pair and Classical Model Formulations for Different Sexually Transmitted Infections. PLoS ONE 7(6): e39575.
Editor: Joan A. Caylà, Public Health Agency of Barcelona, Spain
Received: September 30, 2011; Accepted: May 27, 2012; Published: June 27, 2012
Copyright: © 2012 Ong et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original author and source are credited.
Funding: JBSO and MIC are funded by the National Medical Research Council, Singapore (under NMRC/NIG/0029/2008 and NMRC/CSA/011/2009). The funders had no role in study design, data collection and
analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Mathematical models have been used to investigate the transmission dynamics of sexually transmitted infections (STIs) and assess the potential impact of public health interventions on both bacterial
and viral STIs, such as the efficacy of screening measures on reducing Chlamydia prevalence [1], [2], and the effect of anti-retroviral therapy on incident HIV infections [3], [4].
Much of this modelling relies on variants of the “classical model” for STIs proposed by Hethcote and Yorke in the early 1980s [5]. The classical model describes populations of individuals with
different rates of acquisition of new sexual partners (partner change rates, R). However, this modelling approach is not without its limitations. Firstly, the manner in which the model describes
“sexual mixing” is possibly too simplistic. By comparing several modelling approaches, Eames and Keeling showed that accounting for contact network heterogeneities produces results that are different
and likely more realistic [6]. A separate limitation relates to how the classical model treats all partnerships as instantaneous events, often with per partnership transmission probabilities that are
independent of partner change rates. Some modelling studies have pointed out that individuals with higher partner change rates would likely have shorter partnerships with fewer
episodes of sexual intercourse within each partnership, and hence could theoretically have a lower probability of transmitting a given STI per partner than individuals with lower partner change rates
and correspondingly longer partnerships [7], [8]. Much work with classical models now accounts for different partnership types (e.g. shorter casual partnerships versus longer stable partnerships), in
effect assuming lower per partnership transmission probabilities for those with higher partner change rates [1], [9], [10], [11], but it is not clear if such modifications sufficiently counteract the
limitations associated with modelling partnerships as instantaneous events.
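As a minimal illustration of the classical formulation (with hypothetical parameter values, not those used in this paper), prevalence in a single activity class follows an SIS-type differential equation in which partnerships enter only through the partner change rate:

```python
# Minimal classical (instantaneous-partnership) SIS model, one activity
# class. Parameter values are illustrative, not taken from the paper.
c = 10.0      # partner change rate (new partners per year)
beta = 0.5    # per-partnership transmission probability
D = 0.25      # mean duration of infectiousness (years)

# dI/dt = c*beta*I*(1 - I) - I/D, with R0 = c*beta*D.
def simulate(i0=0.01, dt=0.001, years=200.0):
    i = i0
    for _ in range(int(years / dt)):   # forward Euler integration
        i += dt * (c * beta * i * (1.0 - i) - i / D)
    return i

r0 = c * beta * D                  # 1.25 with these values
print(round(simulate(), 3))        # approaches 1 - 1/R0 = 0.2
```

Note how partnership duration appears nowhere: the same c, beta, and D could describe many short partnerships or a few long ones, which is precisely the limitation discussed above.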
Alternative approaches to treating partnerships as instantaneous events involve the use of pair models, which were proposed by Dietz and Hadeler in the late 1980s [12], and individual-based models
which followed in the late 1990s [13], [14]. Both approaches explicitly account for the duration of partnerships and the duration spent between partnerships (henceforth referred to as partnership () and
gap () lengths respectively). Previous work by Chen et al. suggests substantial variability of partnership and gap lengths at the population level; by using a pair model, they also showed that
stratifying a population into various categories of partnership and gap lengths has important implications on the transmission dynamics of gonorrhoea [15]. In addition, different contexts for STI
transmission may be characterised by different partnership and gap length combinations. For instance, sex worker client interactions would be characterised by single episodes of sex between
individuals, interspersed with extremely short gaps in the sex worker (of hours to days) and intermediate length gaps in the client (of several weeks to months) [16], [17]; romantic
partnerships in young heterosexuals may comprise largely intermediate gap and partnership lengths on the order of a few months [18]; while the transmission context for heterosexual HIV in parts of
Africa appears to involve both shorter as well as longer stable partnerships on the order of several months to years [19]. Accurately modelling the effect of partnership and gap lengths may hence be
important for understanding the types of STIs likely to persist in different risk populations as well as predict the effect of potential interventions.
Explicitly modelling partnerships and gaps, as is done in pair and individual-based models, is intuitively more accurate, and work on pair models suggests that doing so results in different
predictions from what would be expected from the classical model. Lloyd-Smith and colleagues suggested that this was particularly so for bacterial STIs (which are mostly
susceptible-infectious-susceptible (S-I-S) type infections with a shorter duration of infectiousness) than for viral STIs (which are mostly susceptible-infectious (S-I) type infections with a longer
duration of infectiousness) [20]. However, Kretzschmar and Dietz also pointed out that, for the same set of model parameter values, different epidemic growth rates and steady-state prevalence (π^s)
can result when modelling S-I type pathogens with the two different model formulations [21]. However, neither work considered whether the results would be more similar if model outputs had been
calibrated to observed data by allowing model parameter values to vary, as is done in much modelling work (e.g. [1], [4], [9], [22]). One recent paper suggests that, when modelling Human Papilloma
Virus, both the pair and classical model formulations produce reasonably similar predictions on the impact of vaccination, provided transmission rates are first calibrated to match the same empirical
data on pre-vaccination prevalence [23]. However, it remains unclear if model calibration can reduce the discrepancy in predictions when applied to other STIs, and for what types of sexual behaviour
(framed in terms of partnership and gap lengths). In particular, re-infection within partnerships can prolong the infectious duration of S-I-S type infections, a phenomenon not adequately accounted
for within the classical model [24].
In this work, we aim firstly to identify the contexts, both in terms of disease types and sufficiently broad combinations of partnership and gap lengths, in which the classical model's assumption of
instantaneous partnerships may produce different predictions from those derived when explicitly modelling partnerships and gaps. We do so by using simplified versions of the classical and pair models
to predict π, the prevalence of infection, and C[c], the critical amount of condom use needed to prevent self-sustaining transmission. This was done for model parameters depicting gonococcus (GC) and
Chlamydia trachomatis (CT) infections, as examples of S-I-S type infections with different reproductive potentials [25]. We also modelled HIV in the absence and presence of co-factor enhancement,
represented respectively as an S-I type infection with lower and higher estimates of per-sex-act transmission probability [26]. Secondly, by assuming that explicitly modelling partnerships and gaps
is more correct, we investigate whether and when model calibration for the classical model reduces the divergence in predictions for each partnership and gap length combination. To approximate
situations when model outputs are calibrated to disease prevalence data, we first altered one key infection parameter by an adjustment factor () so that the classical model could reproduce the
prevalence predicted by the pair model, then estimated again the predicted critical condom use (C[c]') using the classical model with the “calibrated” parameter. We conclude by pointing out
situations where predictions from the classical and pair model diverge substantially in spite of model calibration, as these contexts would be where more complex modelling approaches such as pair and
individual-based models may be needed.
Using the baseline parameters in Table 1, Figure 1 contrasts the predictions from the pair and classical model formulations for the steady-state prevalence (π^s) for GC and CT, and the peak
prevalence (π^p) for HIV with and without cofactor enhancement over various combinations of partnership and gap lengths (for an explanation of why peak prevalence is used in HIV, see methods and
Figure S1). For GC and CT, we see that predictions on π^s begin to diverge substantially for anything but very short partnership lengths (less than 10 days). For the longer gap lengths (30 days for
both, and 90 days for CT only), the classical model gives a higher value of π^s than the pair model at lower partnership lengths and vice-versa as partnership lengths increase, the cross-over
occurring at 8 days for a gap length of 30 days in GC, and at 14 and 48 days for gap lengths of 30 and 90 days respectively for CT. The lower values of π^s at combinations of shorter partnership
lengths and longer gap lengths in the pair model are partly due to the explicit inclusion of the pair-formation process, which reduces opportunities for infectious contacts (for an elaboration, see
discussion and [27]); at longer partnership lengths, this effect is offset by the potential re-infection within intermediate to longer partnerships, which is accounted for in the pair model but
ignored in the classical model. For S-I type infections, where re-infection does not apply, the inclusion of the pair-formation process results in the pair model predicting lower values of π^p for
HIV without cofactor enhancement (HIV CF-) throughout. For HIV with cofactor enhancement (HIV CF+), however, a crossover in the predictions of the pair and classical models occurs (at 299 and 899
days for gap lengths of 30 and 90 days respectively). In this case, HIV-induced mortality is resulting in additional pair separation of HIV concordant pairs in what would otherwise be very long and
stable partnerships, an effect that the classical model ignores.
Figure 1. Predictions from the classical and pair model formulations for the steady-state π^s of GC/CT (A and B), and the peak π^p of HIV with and without cofactor enhancement (C and D).
The horizontal axes give partnership length in days while the vertical axes give π. The different lines denote predictions from using gap lengths () of 1 day, 7 days, 30 days and 90 days. The inset
in each figure magnifies the crossover point, if any, in the region where the classical and pair models diverge in their π predictions. Models in (A) and (C) are unable to provide predictions at a gap length
of 90 days.
Table 1. Model parameters.
Figure 2 explores C[c], the critical level of condom use predicted to prevent self-sustaining transmission (i.e. so that effective reproduction number is less than 1) when using the pair model (
Figures 2A to D), the classical model (Figures 2E to H), and a classical model which has been calibrated to give the same values of π^s and π^p as the pair model (Figures 2I to L); for simplicity, condoms
were assumed to have 100% efficacy so that 100% condom use would prevent all transmission. Predicted C[c] values are given by a colour gradient from 0% (blue) to 100% (red); C[c] values of 0% also
demarcate the most extreme combination of partnership and gap lengths which can support self-sustaining transmission. For all four infection parameter sets, both the pair and classical models
predict that the longest permissible gap length occurs at some partnership length between the extremes of values modelled. However, the classical model predicts a much more restricted range of
partnership and gap combinations for self-sustaining transmission of GC and CT than the pair model. Both approaches predict maximum permissible gap lengths of similar magnitude, with the values for
CT being substantially longer than for GC. However, even with extremely short gap lengths, permissible partnership lengths extend only up to 100 and 248 days respectively in the classical model. On
the other hand, the pair model predicts that intermediate to longer partnership lengths of up to 724 days for GC and 1150 days for CT could still support self-sustaining transmission. For HIV without
cofactor enhancement, the reverse is true. The classical model gives a wider range of permissible partnership lengths of up to 1000 days and gap lengths of up to 140 days, while the corresponding values
for the pair model are 617 and 48 days. For HIV with cofactor enhancement, the maximum permissible gap length is less in the pair model as compared to the classical model (671 vs. 1138 days), but the
maximum permissible partnership length is much greater (8390 days vs. 2786 days; extends into the truncated area in figure 2D and 2H) due to the additional pair separation from HIV-induced mortality
that arises when HIV involves long stable partnerships.
Figure 2. Critical level of condom use (C[c]) predicted to prevent self-sustaining GC/CT and HIV transmission for the pair (A to D), classical uncalibrated (E to H), and classical model following
calibration of π to the pair model output (I to L).
The horizontal axes give partnership length in days while the vertical axes give gap length in days. C[c] values are denoted by a gradient of colours as indicated; values of 0% demarcate the most
extreme combination of partnership and gap lengths which supports self-sustaining transmission, while values above 100% (up to a theoretical maximum of 111% since condoms are assumed to be only 90%
effective in preventing transmission) show partnership and gap combinations where consistent condom use is insufficient to prevent self-sustaining transmission.
With regards to the critical level of condom use which prevents self-sustaining transmission, both the pair and classical model formulations predict the same general pattern of decreasing C[c] with
increasing gap length, but the exact predictions differ. In the overlapping combinations where both models predict self-sustaining transmission, the classical model generally predicts higher C[c]
values; for example, for GC at a partnership length of 30 days and a gap length of 30 days, the corresponding predictions for C[c] are 0.7704 and 0.6984 in the classical and pair models respectively.
Assuming the pair model more accurately predicts disease prevalence and the partnership and gap combinations where self-sustaining transmission is possible, the classical models for GC and CT were
calibrated to give the same values of π^s as the pair model by adjusting the duration of the non-care-seeking infections (Figures 2I to 2J). After calibration, the classical model now predicts higher
C[c] values at longer partnership lengths, the effect being more pronounced for GC than for CT. The calibrated classical model also predicts that close to 100% condom use would be required to prevent
self-sustaining transmission for a wide range of partnership and gap combinations. The divergence between the predictions of the pair and calibrated classical model is elaborated on in Figures 3A and
B, which highlight the areas where the absolute difference in predicted C[c] values is close to 100% for GC (i.e. combinations when C[c] approaches the maximum of 100% in the calibrated model and is
close to 0% in the pair model). However, Figures 3E and F show that the corresponding adjustment factors used in the calibrated classical model are at the edge of plausibility, with the factor approaching 10
for GC, and 7.5 for CT at longer partnership lengths, in effect assuming infectious periods in the realm of several years. In contrast, after the classical model for HIV was calibrated by adjusting
per sex act transmission probabilities, predicted C[c] values are fairly close to those from the pair model (Figures 2K and 2L); differences are less than 1% for HIV without cofactor enhancement and
less than 10% for most partnership and gap combinations for HIV with cofactor enhancement (Figures 3C and 3D). Moreover, adjustment factors are fairly close to 1 for most combinations of partnership
and gap lengths (Figures 3G and 3H). However, for HIV with cofactor enhancement, the calibrated model could only be extended up to partnership lengths of about 1500 days, as the assumption of higher
adjustment factors would have resulted in values for per sex act transmission probabilities exceeding 1 (for the primary stage of HIV infection), and it was hence not possible to replicate the
dynamics from the pair model for long stable partnerships.
Figure 3. Absolute difference in predicted critical level of condom use (Abs(ΔC[c])) for GC/CT and HIV with and without cofactor enhancement (A to D), with its corresponding adjustment factor (E to H).
The horizontal axes give partnership length in days while the vertical axes give gap length in days. Abs(ΔC[c]) is computed from the absolute difference in the corresponding values from Figure 2.
Coloured bars in the left (A to D) and right (E to H) panels give the values of Abs(ΔC[c]) and of the adjustment factor by the respective gradient of colours.
It has been hypothesized that STIs can broadly be divided into two groups based on their transmission dynamics: those with short infectious periods but high transmission probabilities (mostly
bacterial STIs, with S-I-S dynamics) and those with long infectious periods but low transmission probabilities (mostly viral STIs, with S-I dynamics) [28]. Using parameters for gonorrhoea and
Chlamydia to represent the former, and parameters for HIV (with and without cofactor enhancement) to represent the latter, we compared the traditional classical model against results from the pair
model and found that the two model formulations produce very different predictions about disease prevalence, the partnership and gap length combinations that support transmission, and the
levels of condom use needed to prevent self-sustaining transmission. Calibrating the classical model to give similar outputs for π^s and π^p as the pair model fails to reconcile the predictions for
an S-I-S type infection with gonorrhoea and Chlamydia-like parameters, but does reduce the differences in predictions on condom use for an S-I type infection like HIV.
For bacterial or S-I-S type infections, Lloyd-Smith and colleagues had previously demonstrated that the two model formulations diverge greatly in predicting epidemic growth rate under a simplified
situation which varied the partner change rate while assuming the partnership and gap lengths to be of equal duration [20]. In our case, we independently varied partnership and gap lengths while
looking at predictions on steady-state prevalence (π^s) and condom use, hence providing several additional insights with key implications. Firstly, it is worth re-emphasizing that the pair model
identifies a broader spectrum of behaviours, and hence of populations, potentially capable of sustaining the transmission of S-I-S type infections; this would include individuals with intermediate partnership
lengths (on the order of a few months) combined with short gaps in gonorrhoea (less than 3 months), and short to intermediate gap lengths (up to about 5 months) in Chlamydia [15], [24]; the potential
transmission of Chlamydia in populations with longer gap lengths may explain why CT has a wider distribution range in the population than GC [29], [30], [31]. Secondly, we show that, across the range
of partnership and gap lengths investigated, discordance in predictions occurs mainly from intermediate to longer partnership lengths, with the discordance being greater for gonorrhoea which has a
shorter infectious period than Chlamydia. This helps identify the combination of disease and behavioural contexts where both modelling approaches would yield similar results, and where they might
diverge. Thirdly, our work shows that fitting S-I-S models to data will not resolve the discrepant quantitative predictions which arise from the choice of model formulation; logically, one or both of the model formulations must fail to adequately represent reality. In particular, Garnett and colleagues have previously pointed out that the classical model applied to gonorrhoea can give
predictions that are unrealistically sensitive to small changes in parameter values, and our work adds weight to the previous call for caution on the interpretation of such results [32]. Assuming
that the areas of divergence between the two approaches do highlight the limits of the classical model in a given disease context, then, given the restricted range of partnership lengths (less than a few weeks) for gonorrhoea, it is difficult to think of significant applications of the classical model in the heterosexual context other than client-sex worker interactions. On the other hand,
since results for Chlamydia are still reasonably similar for partnership lengths up to a couple of months, the classical framework may still be adequate for modelling Chlamydia transmission in
partnerships amongst at-risk heterosexual youths like those described by Bearman et al. [18].
With regards to viral and other STIs with S-I dynamics, our work shows that, for a less infectious pathogen like HIV without cofactor enhancement, the classical model predicts a higher peak
prevalence (π^p) and self-sustaining transmission over a wider combination of partnership and gap lengths. This can be explained by the reduced opportunities for transmission imposed by the pair
formation mechanism; as previously pointed out by Kretzschmar [27], susceptible individuals who are single or in stable partnerships with other susceptible individuals, as well as infectious
individuals paired with other infectious individuals are all excluded from the transmission process (which occurs only in pairs where one individual is infectious and another is susceptible). This
leads to slower epidemic growth rates in the pair model [21] and hence a lower value of π^p for HIV, as well as the need to assume higher pair formation rates to allow self-sustaining transmission.
However, we also show that, while we get similar results for HIV with cofactor enhancement at shorter partnership lengths, the pair model predicts higher values of π^p and greater potential for
self-sustaining transmission at longer partnership lengths as compared to the classical model; this results from the dominance of HIV-induced mortality on pair separation at longer partnership
lengths which is unaccounted for in the classical model. While it is comforting to know that model calibration seems to resolve the discrepant predictions between the pair and classical models, it
must be remembered that we are adjusting the classical model using individually derived adjustment factors for each partnership and gap combination, while in practice most model fitting would involve
using some average adjustment factor for disease transmissibility or other parameter value [23]. The classical model may thus underestimate the role of long-term stable partnerships in propagating
HIV even when calibrated to data. Additional differences have been highlighted by others, including the prediction of higher epidemic growth rates and higher estimates of the contribution of
acute infectious stages by the classical model as compared to the pair model [21], [33].
As with any modelling work, several limitations and assumptions must be acknowledged. Firstly, we must reiterate that our analysis only highlights the difference in predictions between the pair and
classical models, and does not prove which model formulation is more accurate. It has been argued that the pair model is a more correct representation of STI dynamics [12], [20], but what is modelled
here is a serially monogamous population rotating through a fixed partnership and gap length combination in isolation and in perpetuity. In reality, populations comprise individuals with a
heterogeneous mix of behaviour that changes with successive partnerships. In addition, concurrent partnerships would relax the constraint on individuals to have contacts with only one partner at a
time, and a pair model accounting for concurrency may be closer in dynamics to the classical formulation for S-I type infections. On the other hand, concurrent partnerships increase the potential for
re-infection within partnerships, and may thus exaggerate the differences between the two model formulations for S-I-S type infections. One question is how significant re-infections are in prolonging
the duration of S-I-S type infections. Several studies suggest that some re-infections arise from the same source partner [34], [35], and the effectiveness of expedited partner therapy in reducing
repeat infections emphasizes their importance to such STIs [36], [37]. However, stochastic extinction, partial immunity and treatment of partners would in reality reduce the effect from re-infections
within partnerships. Concerns may also be raised on the parameters describing both the sexual behaviour of the population and the natural history of the various STIs modelled. For instance, in the
absence of sufficiently detailed data on how the frequency of sex might vary with partnership duration, we assumed that sex occurred at the same frequency in all partnership lengths, although this is
unlikely to be the case in reality; results from the classical and pair models would be less divergent if sex is less frequent than we had assumed in longer partnerships, and vice versa if the
reverse is true. As for infection parameters, there have been no direct estimates of the per-sex-act transmission probability of Chlamydia that we know of, and it has been difficult to
accurately measure the duration of infectiousness, as well as the proportion of incident infections that are asymptomatic or do not receive treatment for both gonorrhoea and Chlamydia; this is
particularly since such infections would not be accurately represented in clinic-based data. Our study used estimates for the proportion of incident infections which are symptomatic and receive care
as derived by Farley et al. [38], which was based on a case-finding type strategy, with its inherent limitations; in particular, the proportion of Chlamydia infections in men that are symptomatic and receive treatment is
lower than what has typically been assumed in other studies. Also, our approach of calibrating the classical model to the pair model through an “adjustment factor” could be criticised; this resulted
in the need to assume implausible parameter values for the S-I-S infections, beyond the bounds of what would normally be used in modelling studies. However, we did this not because we intended to use
such re-scaled parameters to model the infection in the classical framework, but more to illustrate the dangers in calibrating an inappropriate model, and to identify situations where model
calibration might fail to help based on the assumption that the pair model was more correct. An alternative approach used by others would be to present the threshold values for transmission
probabilities or infectious duration for the competing modelling approaches [39], and highlight the partnership and gap combinations where the divergence between the two models occurs; this would
have avoided passing judgment as to which approach is more correct. Finally, while a deterministic pair model was sufficient as a means of identifying situations where the classical model is most
likely inadequate, descriptions of sexual networks ranging from those involving heterosexual youths [18] to sex worker client encounters [40] have revealed high levels of complexity and
heterogeneity, and other work has shown that such network heterogeneities have an important effect on transmission dynamics that is inadequately approximated by both the pair and classical models [6]
, [41]. These observations, along with the importance of modelling partnership and gap durations demonstrated in this paper, add to the impetus for developing efficient individual-based models which
can simultaneously account for both factors.
In summary, our work suggests that outputs from classical and pair model formulations are in conflict for a range of gap and partnership combinations which could possibly support the transmission of
some common STIs. Model calibration may resolve the discrepancies for S-I type infections, but only for the very shortest partnerships in S-I-S type infections. If we accept that the actual
transmission process is better modelled by the pair rather than the classical model, then our findings emphasize the need to move beyond measuring partner change rates to account for partnership and
gap behaviours, and to adopt STI modelling approaches which account for the effect of partnership and gap lengths on STI transmission.
Overview of model structure, infection parameters and notation used
Both the classical and pair models were deterministic compartmental models depicting a heterosexual population with an equal number of males and females, and with additional compartments to represent
different infection states; the pair model also included compartments to represent pairs with different infection state combinations. The population was assumed to have a finite sexually active
lifespan of duration (), so that turnover of this sexually active population occurs; the sexually active lifespan was assumed to be 35 years. Individuals leaving the sexually active pool are replaced
at the same rate by uninfected individuals. In all formulas, superscripts denote gender (for males and for females), while subscripts are used to denote the infection state.
Gonorrhoea and Chlamydia (GC/CT) were both depicted as susceptible-infectious-susceptible (S-I-S) type pathogens, where the infection states were for susceptible, for symptomatic infections that
receive treatment, and for infections which either result in symptoms but do not receive treatment, or are completely asymptomatic; the proportion (, ) who are symptomatic and receive treatment have
shorter infectious periods, (, ), while the remainder (, ) have longer infectious periods (, ). Individuals recover without mortality and are immediately susceptible to re-infection.
HIV was depicted as a susceptible-infectious (S-I) type pathogen with three successive stages of duration, where represent primary, chronic, and advanced HIV infection respectively. Susceptible
individuals (denoted by) enter the primary infection stage and progress through the chronic and advanced stages; individuals with advanced HIV are removed by HIV-induced mortality and are not replaced.
The corresponding parameter values used are in Table 1. Sex within partnerships was assumed to occur at a frequency of once in 3 days, while condoms were assumed for simplicity to be 100% efficacious
in preventing transmission; these are values similar to those assumed in other studies [1], [9], [15]. For gonorrhoea, we used the per-sex-act transmission probabilities estimated from historical
studies [42], [43] and the same infectious durations proposed by Garnett et al. [32]. The proportions which are symptomatic and receive treatment for gonorrhoea follows the estimates from a study by
Farley and colleagues; we also used the corresponding estimates for Chlamydia from that study [38]. While there are some estimates on the infectious durations of Chlamydia [44], there are no direct
estimates of per-sex-act transmission probabilities for Chlamydia; we used the data from Lycke et al. [45] to obtain some estimate for this parameter, with details being described in the section on
Transmission probabilities for Chlamydia.
Parameters used to depict the durations of acute, chronic and advanced stages of HIV infection, and the corresponding per-sex-act transmission probabilities in these stages follow those proposed by
Abu-Raddad et al. [9], [46]. We considered a scenario for HIV with per-sex-act transmission probabilities which were four times higher, as cofactors such as ulcerative genital disease have been
estimated to enhance transmission by such an amount [26].
Pair model
The pair model depicts serially monogamous individuals cycling through the unpaired and paired states. The letter with the corresponding superscripts and subscripts was used to depict the number of
unpaired individuals of a particular gender and infection state, e.g. for gonorrhoea, would be the number of susceptible unpaired males. The letter with two subscripts separated by a comma gives the corresponding infection states of the male followed by the female member of the pair, e.g. for HIV, is a susceptible male paired with a female with primary HIV infection.
Pair formation occurs when opposite gender individuals transit from the unpaired to the paired state at a rate () inverse to the specified gap length, and pair separation occurs when paired
individuals return to the unpaired state at a rate () inversely related to partnership lengths; additional pair separation occurs from turnover of sexually active individuals and HIV-related
mortality, where one member is removed and the surviving member is returned to the unpaired state. Transmission potentially occurs at the instant of pair formation between an infected and an
uninfected individual, in accordance with the infection state specific per-sex-act probability of transmission () modified by the proportion of sex acts protected by condom use () and condom efficacy
(). For instance, for HIV:
Within pairs between a susceptible and infectious member, potential transmission continues at a rate ; this is based on the chance of avoiding infection after the number of sex acts, , that occurs
per unit time, and the respective transmission probabilities modified by condom use (), so that:
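As an illustration of this per-act-to-rate conversion, a minimal Python sketch follows (the paper's implementation was in Java; the function name and default values here are assumptions for illustration):

```python
import math

def within_pair_rate(beta, condom_use=0.0, efficacy=1.0, acts_per_day=1.0 / 3.0):
    """Within-pair transmission hazard per day, chosen so that the one-day
    escape probability exp(-rate) equals the chance of avoiding infection
    over the acts_per_day sex acts occurring per day, each with per-act
    probability beta reduced by condom use and condom efficacy."""
    beta_eff = beta * (1.0 - condom_use * efficacy)  # condom-modified per-act probability
    return -acts_per_day * math.log(1.0 - beta_eff)
```

With full condom use and perfect efficacy the rate is zero, and without condoms the one-day escape probability exp(-rate) reduces to (1 - beta)^(1/3) for the once-in-3-days frequency assumed in the paper.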
Disease transmission results in transitions between the pairs with different infection state combinations, as do progression of HIV infection and recovery from GC/CT in the respective pair models;
progression of and recovery from infection also applies to individuals in the unpaired state. Pairs of a particular infection state combination form at a rate proportionate to the availability of
unpaired opposite sex individuals from the respective infection states, while pair separation returns individuals to the respective compartments by infection state and gender.
In the GC/CT model, and denote the three possible infection states () of the male and female member of the pair respectively, giving compartments for pairs and compartments for unpaired individuals of each gender, as follows:
, and are transitions resulting from pair formation, transmission of infection and disease recovery, where:
In all the above, and for all other values of, so that certain expressions are active only for the relevant pair compartments; also, and since individuals in the susceptible state do not recover from
or transmit the infection. In addition, , , , and .
Disease prevalence, π, was expressed as
In all analyses, steady-state GC and CT prevalence (π^s) was used.
In the HIV model, there are 4 possible infection states () with a total of combinations of pairs and 4 compartments for unpaired individuals of each gender, as follows:
is defined similarly as for GC/CT, while and are defined differently since all new infections enter via stage 1 and then experience disease progression, as follows:
In all the above, and for all other values of ; we also defined several dummy variables and parameters that are set to 0, including , , , , , , and.
Disease prevalence, π, was expressed as
In the case of HIV, modelling predictions on steady-state prevalence will vary depending on assumptions about replacement of at-risk individuals removed from the system due to HIV-related mortality;
moreover, most epidemics are still evolving, and have not reached their steady-state prevalence, although the prevalence in some geographical areas and population groups may have passed their peak.
We therefore used the peak value of prevalence (π^p) predicted by the model in all our analyses.
Comparable formulation of the classical model
The classical model treats partnerships as instantaneous events with a per-partnership transmission probability () commonly computed by estimating the chance of avoiding infection for the total sex
acts in partnerships of a given length, regardless of the duration of the infectious stage (e.g. [9], [10]). To apply this, we assumed that sex occurs once upon partnership formation then at
frequency till the partnership ends. For instance, for each infectious stage in the HIV model:
where is the stage specific per-sex act transmission probabilities as modified by condom use; in the classical model formulation of HIV:
where is an “adjustment factor” which has a default value of 1 but can be altered to change the value of π^p from the classical model (as explained in a subsequent section on calibrating classical
model to pair model outputs).
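This per-partnership calculation can be sketched in Python (an illustrative assumption of the computation described above, not the authors' code):

```python
def per_partnership_probability(beta_per_act, partnership_days,
                                acts_per_day=1.0 / 3.0,
                                condom_use=0.0, efficacy=1.0,
                                adjustment=1.0):
    """Classical-model per-partnership transmission probability: one minus
    the chance of avoiding infection in every sex act of the partnership.
    One act occurs at partnership formation, then acts continue at the
    given frequency until the partnership ends; the adjustment factor
    rescales the per-act probability (default 1, i.e. no calibration)."""
    beta_eff = adjustment * beta_per_act * (1.0 - condom_use * efficacy)
    n_acts = 1.0 + acts_per_day * partnership_days
    return 1.0 - (1.0 - beta_eff) ** n_acts
```

For an instantaneous partnership (zero days) this collapses to the per-act probability itself, since only the act at formation occurs.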
The partner change rate () for the classical model is approximated by the inverse of the “cycle length”, which is the time taken to cycle through successive gaps and partnerships, i.e.:
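A minimal sketch of this cycle-length relation, assuming one partnership followed by one gap per cycle:

```python
def partner_change_rate(partnership_days, gap_days):
    """Partner change rate as the inverse of the cycle length: the time
    taken to pass through one partnership and one subsequent gap."""
    return 1.0 / (partnership_days + gap_days)
```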
If and are the number of males and females of the respective infection state, then for gonorrhoea and Chlamydia:
where, and as with the pair models, and for all other values of; is again an “adjustment factor”, and in this case we fix while adjusting from the default value of 1 to adjust the output of π^s from
the classical model, since only the duration of non-care-seeking infections is altered (more detailed explanation to follow later).
The symbols and define force of infection acting on males and females respectively, where:
where , and likewise for and .
Disease prevalence, π, was expressed as, with the steady-state GC/CT prevalence, π^s, used in all analyses.
For HIV:
where several dummy variables and parameters that are set to 0, including , , and. Again, and define force of infection acting on males and females respectively, where:
where are as defined previously and.
Disease prevalence, π, was expressed as. As with the HIV pair model, all analyses refer to the peak prevalence, π^p, predicted by the model.
Estimating the critical level of condom use
We also determined for the pair model and the classical models the “critical level of condom use” (C[c]). This parameter represents the proportion of sex acts which would need to be protected (e.g.
by condom use, or some other similar intervention) to prevent the infection from spreading.
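One way to find such a threshold numerically is bisection; the sketch below is an assumed illustration (not the authors' code) that accepts any decreasing function from condom use to the effective reproduction number:

```python
def critical_condom_use(reproduction_number, tol=1e-6):
    """Smallest condom-use level c in [0, 1] with reproduction_number(c) <= 1,
    found by bisection; reproduction_number must be decreasing in c.
    Returns None if even full condom use cannot halt transmission."""
    if reproduction_number(0.0) <= 1.0:
        return 0.0                # transmission dies out without any condom use
    if reproduction_number(1.0) > 1.0:
        return None               # self-sustaining despite 100% condom use
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reproduction_number(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```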
Calibrating classical model to pair model outputs
It is not uncommon in STI modelling work (e.g. [1], [4], [9], [22]) to calibrate model outputs to observed data by allowing model parameter values to vary. Our aim was to see if transmission dynamics
in sub-populations, as characterised by particular combinations of partnership and gap lengths, could be adequately modelled with the classical approach. Since we lack real data on sub-population
specific prevalence for the different diseases, we instead started with the assumption that the pair model more accurately depicts transmission dynamics. We then altered one key infection parameter in the
classical model by an adjustment factor () so that it could reproduce the prevalence predicted by the pair model. Then, using the classical model with the “calibrated” parameter, we re-estimated the
predicted critical condom use (C[c]') for that partnership and gap length combination.
In the case of GC and CT, model fitting often uses estimates of prevalence (e.g. [1], [47]), so model fitting was performed to minimize the difference in the value of π^s (see Figure S1A).
Re-infection extends the infectious period of an S-I-S pathogen in the pair model [24], and there is considerable uncertainty in estimates on the duration of non-care-seeking infections; we therefore
adjusted the value of π^s for GC/CT by altering this parameter through an adjustment factor ().
In the case of HIV, there has been a wide variation in estimates on the per-sex-act transmission probabilities [48]. We therefore calibrated the classical model to give the same value of π^p (see
Figure S1B) as obtained from the pair model by simultaneously multiplying the transmission probability across all 3 infectious stages using the same adjustment factor ().
Model implementation and solutions
At the steady-state prevalence for GC and CT, the value in each of the compartments does not change, i.e., and in the pair model, and and in the classical model. We solved the above sets of 15 and 6
simultaneous equations in the pair and classical models numerically to find the respective steady-state prevalence, π^s, for each parameter set, as well as to obtain the adjustment factor () for a
calibrated classical model which would give the same value of π^s as the pair model; we also verified using the dynamic version of the model that the simulated values of π approached the calculated values
of π^s after 100,000 model days. For HIV, the peak prevalence, π^p, was obtained by simulation, as was the adjustment factor (). The solution for the critical level of condom use (C[c]) was also
found numerically. All models were implemented in the Java programming language, version 1.6.0_26.
Transmission probabilities for Chlamydia
We focused on the case-contact pairs where the index was co-infected with Chlamydia and gonorrhoea described by Lycke et al. [45]. Assuming the infections passed from the index to the contact, we
observe the respective per-partnership transmission probabilities for Chlamydia and gonorrhoea in Table 2.
Table 2. Data from Lycke et al. [45] on case contact pairs for gonorrhoea and Chlamydia
We assumed that both infections could have potentially passed from the index to the contact, but that neither infection influences transmissibility of the other, and that the infections were
transmitted around the same time to the contact. Therefore, by using the per-sex-act transmission probabilities for gonorrhoea as in Table 1 ( and ), we can estimate the average number of sex acts
that occurred in order to observe the above per-partnership probabilities of gonorrhoea transmission as and for the partnerships where the index case is male and female respectively. Then, to observe
the respective per partnership transmission probabilities for Chlamydia with the corresponding number of sex acts, the respective per-sex act transmission probabilities for Chlamydia in those
partnerships would thus be for males-to-females and for females-to-males.
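The arithmetic in the preceding paragraph can be sketched under the standard assumption (made here for illustration, not stated as such above) that sex acts are independent with identical per-act transmission probability β, so that the per-partnership probability over n acts is 1 − (1 − β)^n. The numeric values below are placeholders, not the Lycke et al. estimates:

```python
import math

def per_partnership(beta, n):
    # Probability of at least one transmission over n independent sex acts,
    # each with per-act transmission probability beta.
    return 1 - (1 - beta) ** n

def acts_from_partnership(P, beta):
    # Invert the relation above: number of acts implied by an observed
    # per-partnership probability P, given per-act probability beta.
    return math.log(1 - P) / math.log(1 - beta)

def per_act_from_partnership(P, n):
    # Per-act probability implied by per-partnership probability P over n acts.
    return 1 - (1 - P) ** (1 / n)

# Placeholder values only: infer n from the gonorrhoea data, then reuse
# that n to back out a Chlamydia per-act probability.
n = acts_from_partnership(0.8, 0.5)          # acts implied by GC observations
beta_ct = per_act_from_partnership(0.6, n)   # implied CT per-act probability
```

By construction, plugging the recovered n and beta_ct back into `per_partnership` reproduces the observed per-partnership probabilities.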
The above rests on multiple assumptions, but concurs with the opinion of various authors that Chlamydia is less transmissible than gonorrhoea [45], [47], [49], and was the best that could be done
given the lack of direct estimates.
Supporting Information
Figures S1A and S1B illustrate the classical model being calibrated to the output of the pair model for GC/CT (A) and HIV (B), respectively. The horizontal axes give simulation time in days while the
vertical axes give π. For the same arbitrary partnership and gap lengths, the classical model is calibrated to give the same steady-state prevalence (π^s) for GC/CT (A), and peak prevalence (π^p) for
HIV, as obtained from the pair model, with the direction of shift in prevalence from uncalibrated to calibrated as indicated by the arrow.
Author Contributions
Conceived and designed the experiments: MIC JBSO. Performed the experiments: JBSO. Analyzed the data: MIC JBSO. Contributed reagents/materials/analysis tools: MIC JBSO. Wrote the paper: MIC JBSO XJF
GKKL. Devised the model used in simulations and predictions: MIC JBSO.
There are documented instances, and many more suspected instances, of standards being manipulated by attackers. This raises the question of how users of standard curves can be assured that the curves
were not generated to be weak.
SafeCurves requires curve shapes for which the ECC security story is as simple as possible, for example by requiring prime fields. This still leaves various security dangers such as incompleteness
and transfers, but SafeCurves checks for these dangers in a publicly verifiable way. There is still a potential lack of assurance in the following corner case:
• public ECC cryptanalysis might have missed an attack that applies to a small fraction of curves,
• an attacker might have figured out this attack, and
• the attacker might have manipulated the choices of standard curves to be vulnerable to this secret attack.
SafeCurves requires rigidity to protect ECC users against this possibility. Rigidity is a feature of a curve-generation process, limiting the number of curves that can be generated by the process.
The attacker succeeds only if some curve in this limited set is vulnerable to the secret attack. For comparison, without rigidity, the attacker can freely generate curves until finding a curve
vulnerable to the secret attack.
SafeCurves classifies existing curve-generation processes into four levels of protection:
• Fully rigid: The curve-generation process is completely explained. Consider, for example, a curve-generation process that takes primes larger than 2^224 for (explained) security reasons, takes
the smallest prime larger than 2^224 for (explained) efficiency reasons, takes the curve shape y^2=x^3-3x+b for (explained) efficiency reasons, and takes the smallest positive integer b meeting
various (explained) security criteria. This provides the maximum available protection against malicious curve generators: the only possible flexibility for the curve generator is in the choice of
security and efficiency criteria, and one expects public research to produce convergence on those criteria.
• Somewhat rigid: The curve-generation process is not completely explained, but the unexplained parts do not give the curve generators many bits of control.
• Manipulatable: The curve-generation process has a large unexplained input, giving the curve generator a large space of curves to choose from. Consider, for example, a curve-generation process
that takes y^2=x^3-3x+H(s) meeting various security criteria, where s is a large random "seed" and H is a hash function. No matter how strong H is, a malicious curve generator can search through
many choices of s, checking each y^2=x^3-3x+H(s) for vulnerability to a secret attack; this works if the secret attack applies to (e.g.) one curve in a billion.
• Trivially manipulatable: The curve-generation process has a large unexplained input, giving the curve generator a large space of curves to choose from, and there is an efficient method to work
backwards from a specified curve in this space to the input. Consider, for example, a curve-generation process that simply produces y^2=x^3-3x+b for a random b. This process is trivially
manipulatable: a malicious curve generator can publish any desired b. This works even in the most extreme case, a secret attack that applies to just one curve, provided that the attacker can
somehow compute this curve.
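The "manipulatable" seed search can be made concrete with a short sketch. Everything here is illustrative: the predicate standing in for the attacker's secret attack is a toy placeholder, and SHA-1 is used only to mirror the y^2=x^3-3x+H(s) pattern:

```python
import hashlib

def curve_b_from_seed(seed: bytes) -> int:
    # Derive a curve coefficient b = H(s), mirroring y^2 = x^3 - 3x + H(s).
    # SHA-1 is purely illustrative here.
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

def malicious_search(is_vulnerable, max_tries=10**6):
    # Stand-in for a malicious generator: try seeds until H(seed) produces
    # a coefficient satisfying the (secret) vulnerability predicate.
    for i in range(max_tries):
        seed = i.to_bytes(16, "big")
        b = curve_b_from_seed(seed)
        if is_vulnerable(b):
            return seed, b
    return None, None

# Toy predicate standing in for "vulnerable to the secret attack":
# roughly one coefficient in a thousand passes.
seed, b = malicious_search(lambda b: b % 1000 == 0)
```

A verifier who only checks that b = H(seed) cannot detect this search; no matter how strong H is, the generator retains free choice of the seed.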
The following list reports the protection provided by various existing curves:

• Anomalous (fully rigid): Follows the most concise method in the literature for generating anomalous curves: prime shape 11m(m+1)+3, and curve of j-invariant -2^15. Uses the smallest prime with m above 2^100.
• M-221 (fully rigid): p is largest prime smaller than 2^221; B=1; A > 2 is as small as possible.
• E-222 (fully rigid).
• NIST P-224 (manipulatable): Coefficients generated by hashing the unexplained seed bd713447 99d5c7fc dc45b59f a3b9ab8f 6a948bc5.
• Curve1174 (fully rigid): Prime chosen "very close to, but not above, a power of 2" for efficiency. Prime chosen 3 mod 4 for compatibility with Elligator 1. The only primes 3 mod 4 within 32 of 2^e for e between 200 and 300 are 2^206-5, 2^212-29, 2^226-5, 2^243-9, 2^251-9, 2^285-9; the first four are rejected for security level, and the last for efficiency, producing 2^251-9. Edwards curve shape x^2+y^2=1+dx^2y^2 chosen for efficiency. Complete Edwards curve chosen for security. Curve and twist orders chosen as {4*prime,4*prime} for security. d chosen to have the form -(c+1)^2/(c-1)^2 with c=2/s^2 for compatibility with Elligator 1. d=-1174 is the smallest qualifying integer in absolute value.
• Curve25519 (fully rigid): Prime chosen "as close as possible to a power of 2" for efficiency reasons ("save time in field operations"). Prime chosen "slightly below 32k bits, for some k" for efficiency reasons ("no serious concerns regarding wasted space"). k=8 chosen for "a comfortable security level". 2^255-19 chosen above 2^255+95, 2^255-31, 2^254+79, 2^253+51, 2^253+39 "because 19 is smaller than 31, 39, 51, 79, 95". Montgomery curve shape y^2=x^3+Ax^2+x chosen for efficiency ("to allow extremely fast x-coordinate point operations"). (A-2)/4 selected as a small integer for efficiency ("to speed up the multiplication by (A-2)/4"). Curve and twist orders required to be {4*prime,8*prime} for security ("protect against various attacks ... here 4, 8 are minimal"). Primes required to be above 2^252 for security ("theoretical possibility of a user's secret key matching the prime"), ruling out A=358990 and A=464586. A=486662 chosen as smallest positive integer meeting these requirements.
• BN(2,254) (fully rigid): p chosen sparse, close to 2^256, within the BN family, using u=−(2^62 + 2^55 + 1). p congruent 3 modulo 4 to have z^2+1 irreducible; b=2 to have twist be y^2=x^3+(1−2i).
• brainpoolP256t1 (somewhat rigid): Several unexplained decisions: Why SHA-1 instead of, e.g., RIPEMD-160 or SHA-256? Why use 160 bits of hash input independently of the curve size? Why pi and e instead of, e.g., sqrt(2) and sqrt(3)? Why handle separate key sizes by more digits of pi and e instead of hash derivation? Why counter mode instead of, e.g., OFB? Why use overlapping counters for A and B (producing the repeated 26DC5C6CE94A4B44F330B5D9)? Why not derive separate seeds for A and B?
• ANSSI FRP256v1 (trivially manipulatable): No explanation provided.
• NIST P-256 (manipulatable): Coefficients generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90.
• secp256k1 (somewhat rigid): GLV curve with 256 bits and prime order group; prime and coefficients not fully explained but might be minimal.
• E-382 (fully rigid).
• M-383 (fully rigid).
• Curve383187 (fully rigid): p is largest prime smaller than 2^383; B=1; A > 2 is as small as possible.
• brainpoolP384t1 (somewhat rigid): See brainpoolP256t1.
• NIST P-384 (manipulatable): Coefficients generated by hashing the unexplained seed a335926a a319a27a 1d00896a 6773a482 7acdac73.
• Curve41417 (fully rigid): Prime chosen above 2^(32*12) for security, but below 2^(32*13) for efficiency. Prime chosen very close to, but not above, a power of 2 for efficiency. Prime 2^414-17 is uniquely closest. Edwards curve shape x^2+y^2=1+dx^2y^2 chosen for efficiency. Complete Edwards curve chosen for security. Curve and twist cofactors limited to {4,8} for security. d=3617 is the smallest qualifying integer in absolute value.
• Ed448-Goldilocks (fully rigid): Minimal cofactor 4. -39081 is the first negative d where E and ~E have 4*prime order. Choice of 2^448-2^224-1: "The coefficients are all 32-bit aligned, which helps full-radix implementations with UMAAL or similar. It's a Solinas trinomial prime, which also reduces the number of carries required. The center tap doesn't interfere with Karatsuba multiplication."
• M-511 (fully rigid): p = 5 (mod 8) is largest prime smaller than 2^511; B=1; (A - 2)/4 is as small as possible for A > 2; A^2 - 4 is not a square; curve order is n = 8r and quadratic twist order is n' = 4r'.
• E-521 (fully rigid).
Isn't it safest to choose cryptographic parameters at random?
Cryptographic keys lose security when they do not have enough randomness. There is a common confusion between public parameters and public keys, creating a common myth that public parameters lose
security unless they are as random as possible.
The literature contains many counterexamples to this myth. For example, there are known attacks that significantly reduce the security level of random genus-3 curves, but the attacks do not apply to
specially structured genus-3 curves, namely hyperelliptic curves. As another example, in elliptic-curve cryptography one takes only unusual curves whose group orders have very large prime divisors,
because uniform random curves are much less secure than these unusual curves. See 2011 Koblitz–Koblitz–Menezes (Section 11) for more subtle examples.
One should not conclude that uniform random parameters are necessarily bad: there are also examples where adding randomness to parameters is good. To see whether randomness is good or bad for the
parameters of any particular system, one needs to study the details of attacks against that system.
All curves that meet the SafeCurves criteria are solidly protected against all published attacks. The criteria are computer-verified, with full details presented on this site to support third-party
verification. It is conceivable that some of these curves are vulnerable to an attack that is not publicly known, but there is no basis for guessing whether any particular curve will be more or less
vulnerable to attack than a random curve.
ECC users can reasonably choose their own random curves to protect against multiple-target rho attacks. However, giving a random curve to each user also has several obvious costs, and for lower costs
one can take steps that have larger security benefits. This is why essentially all ECC applications use shared curves.
What do the manipulatable standards say about this?
The possibility of attackers manipulating standard curve choices was raised in the late 1990s, when NSA volunteered to "contribute" elliptic curves to the committee producing ANSI X9.62. NSA did in
fact end up producing various elliptic curves later standardized by ANSI X9.62, SEC 2, and NIST FIPS 186-2; these curves were subsequently deployed in many applications.
In response to NSA's contributions, ANSI X9.62 developed "a method for selecting an elliptic curve verifiably at random", and a procedure to "verify that a given elliptic curve was indeed generated
at random"; it even claims that this procedure "serves as proof (under the assumption that SHA-1 cannot be inverted) that the parameters were indeed generated at random". However, this procedure does
not verify randomness; it verifies only that the curve coefficients were produced as SHA-1 output. The claimed "proof" is nonexistent. The ANSI X9.62 curve-generation method is not trivially
manipulatable but it is manipulatable.
IEEE P1363 copied the same curve-generation method and stated that it allows "others to verify that the curve was indeed chosen pseudo-randomly". However, "pseudo-random" is not the same as "random",
and does nothing to stop a malicious curve generator from searching through many choices of seeds. NIST correctly characterized the verification procedure for these curves as merely checking "that
the coefficient b was obtained from s via the cryptographic hash function SHA-1".
SEC 2 version 1.0 copied the curves that NSA had produced for NIST, and copied the incorrect ANSI X9.62 claim that the curves were "chosen verifiably at random". SEC 2 further claimed that the curves
were chosen "by repeatedly selecting a random seed and counting the number of points on the corresponding curve until appropriate parameters were found". This claim might be correct but is certainly
not verifiable.
What do other sources say about this?
Shortly after the NIST curves were announced, 1999 Scott pointed out that the curves were not in fact verifiably random:
Now if the idea is to increase our confidence that these curves are therefore completely randomly selected from the vast number of possible elliptic curves and hence likely to be secure, I think
this process fails. The underlying assumption is that the vast majority of curves are "good". Consider now the possibility that one in a million of all curves have an exploitable structure that
"they" know about, but we don't.. Then "they" simply generate a million random seeds until they find one that generates one of "their" curves. Then they get us to use them. And remember the
standard paranoia assumptions apply - "they" have computing power way beyond what we can muster. So maybe that could be 1 billion.
Scott recommended a rigid curve-generation method as an alternative, and concluded his posting as follows: "So, sigh, why didn't they do it that way? Do they want to be distrusted?"
In 2005, Brainpool identified the lack of explanation of the NSA/NIST curve seeds as a "major issue" (p.2). Brainpool required a rigid curve-generation method, as noted above, with seeds "generated
in a systematic and comprehensive way" rather than being generated randomly. At one point Brainpool incorrectly described its curves as "random". At several points Brainpool described its requirement
as a requirement to be "verifiably pseudo-random", but this understates what Brainpool actually requires and seems likely to cause confusion.
In May 2013, Bernstein–Lange again raised the possibility of NSA having searched through a billion curves. In September 2013, Schneier wrote
Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can. ... I no longer trust the constants. I believe the NSA
has manipulated them through their relationships with industry.
What about rigid choices of subgroups?
For each curve considered by SafeCurves, the order ℓ of the specified subgroup of the group of rational points is prime and larger than sqrt(p)+1. A curve cannot have two different subgroups meeting
this requirement.
What about rigid choices of base points?
For each curve considered by SafeCurves, the specified base point is a generator of the specified subgroup. SafeCurves does not place restrictions on the choice of this base point. If there is a
"weak" base point W allowing easy computations of discrete logarithms, then ECDLP is weak for every base point: an attacker can compute log_P Q as the ratio of log_W Q and log_W P modulo ℓ. Typical
ECC protocols, such as signatures, are designed to be secure for all choices of base point.
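The ratio argument can be checked in a toy group. The sketch below uses a multiplicative subgroup of prime order modulo a small prime rather than an elliptic curve (an illustrative stand-in, since the argument depends only on the group structure):

```python
# If discrete logs base W are easy, logs base any other generator P follow:
# log_P Q = (log_W Q) * (log_W P)^(-1) mod ell.
p, ell, W = 23, 11, 2   # 2 generates a subgroup of prime order 11 mod 23

def dlog(base, target):
    # Brute-force discrete log in the order-ell subgroup (fine for a toy group).
    for k in range(ell):
        if pow(base, k, p) == target:
            return k
    raise ValueError("target not in subgroup")

P = pow(W, 3, p)   # P = W^3 = 8
Q = pow(W, 5, p)   # Q = W^5 = 9
log_P_Q = (dlog(W, Q) * pow(dlog(W, P), -1, ell)) % ell
assert pow(P, log_P_Q, p) == Q   # 5 * 3^(-1) mod 11 = 9, and indeed P^9 = Q
```

The three-argument `pow` with exponent -1 (Python 3.8+) computes the modular inverse of log_W P modulo ℓ, which exists because ℓ is prime.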
There are some protocols where base-point rigidity is important. For example, a "random" ECDLP challenge, computing the discrete logarithm of Q base P, could have a back door for the challenge
creator. Certicom's ECDLP challenges use rigid generators P and Q of the subgroup to prevent Certicom from choosing the discrete logarithm in advance.
For some curves the specified base point is chosen rigidly. The usual choice is the generator with smallest possible x-coordinate for short Weierstrass curves or Montgomery curves, or smallest
possible y-coordinate for Edwards curves. The reason for x vs. y here is that y(-P)=y(P) for Edwards, allowing y as a ladder coordinate, while x(-P)=x(P) for the others, allowing x as a ladder coordinate.
Brainpool multiplies this smallest point by a mostly rigid hash; Brainpool states that a small point "could possibly" allow side-channel attacks. However, there is no indication that this adds any
protection against serious side-channel attacks, such as template attacks. Serious defenses, such as secret sharing, work for any choice of base point.
Version: This is version 2013.10.25 of the rigid.html web page.
Tremley Point, NJ Math Tutor
Find a Tremley Point, NJ Math Tutor
...My technical skills are strong, particularly in MS Excel. On a daily basis, I proofread investment-related articles for their content, grammar and punctuation. Most of my availability is
during the evenings and on the weekends.
11 Subjects: including calculus, algebra 1, algebra 2, prealgebra
...I like to work one on one with students as I can adapt to their learning styles and figure out what teaching techniques work best for them. I believe constant interaction with students is
crucial to determine what they are absorbing and what needs to be repeated. I try to bolster student confidence with positive reinforcement whenever possible.
19 Subjects: including prealgebra, algebra 1, algebra 2, SAT math
...Having spent the last five years tutoring students in public speaking, I have a great deal of experience and look forward to continue tutoring over the summer. I am interested in tutoring in
math (Algebra I to AP Calculus BC), English (SAT/ACT Prep to AP English Lang/Lit), introductory economics...
43 Subjects: including precalculus, trigonometry, sociology, algebra 1
Hello, my name is Sarah. I have been tutoring for about nine years now. I have worked with children from all levels (elementary, middle school, high school, and college). I graduated from Kean
University in 2011 with a 4.0 GPA, and am currently in the process of obtaining my Master's degree.
8 Subjects: including geometry, prealgebra, SAT math, linear algebra
I am currently enrolled at Rutgers University in the School of Arts and Sciences. My major is Cell Biology and Neuroscience and my minor is Psychology. I have been a private tutor for over 9
years and have also worked with other tutoring companies.
40 Subjects: including algebra 1, algebra 2, biology, calculus
Related Tremley Point, NJ Tutors
Tremley Point, NJ Accounting Tutors
Tremley Point, NJ ACT Tutors
Tremley Point, NJ Algebra Tutors
Tremley Point, NJ Algebra 2 Tutors
Tremley Point, NJ Calculus Tutors
Tremley Point, NJ Geometry Tutors
Tremley Point, NJ Math Tutors
Tremley Point, NJ Prealgebra Tutors
Tremley Point, NJ Precalculus Tutors
Tremley Point, NJ SAT Tutors
Tremley Point, NJ SAT Math Tutors
Tremley Point, NJ Science Tutors
Tremley Point, NJ Statistics Tutors
Tremley Point, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Arlington, NJ Math Tutors
Bayway, NJ Math Tutors
Bergen Point, NJ Math Tutors
Cliffwood Beach, NJ Math Tutors
Hopelawn, NJ Math Tutors
Menlo Park, NJ Math Tutors
Midtown, NJ Math Tutors
North Elizabeth, NJ Math Tutors
Parkandbush, NJ Math Tutors
Peterstown, NJ Math Tutors
Townley, NJ Math Tutors
Tremley, NJ Math Tutors
Union Square, NJ Math Tutors
West Carteret, NJ Math Tutors
Winfield Park, NJ Math Tutors
Quotient Rule of Logarithms - Problem 2
Using the quotient rule of logarithms to condense, or put two logarithms back together: we know that log base b of x over y is the same thing as log base b of x minus log base b of y.
For this example, what we're actually doing is going from the expanded subtraction version back to a single log. All you have to know is that the first one, log base b of x, goes in the numerator.
Applying that here, the first one, log base 5 of x, goes in the numerator, and the second one, x plus 3, goes in the denominator.
Using the quotient rule of logarithms we we’re able to combine these two, condense then down into a single log.
Ishihara's principle BD-N and the anti-Specker Property
Ishihara's principle BD-N
We now consider another principle of great significance for constructive reverse mathematics. Following Ishihara (1992), we say that an inhabited subset S of the set N of natural numbers is
pseudobounded if for each sequence (s[n])[n≥1] in S, s[n]/n → 0 as n → ∞. This brings us to Ishihara's principle (BD-N):
Every inhabited, countable, pseudobounded subset of N is bounded.
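Spelled out symbolically (a restatement of the principle above, with pseudoboundedness unfolded and boundedness expressed by a bound M):

```latex
\[
\forall S \subseteq \mathbb{N}\;\Bigl[\,
  S \text{ inhabited and countable} \;\wedge\;
  \forall (s_n)_{n\ge 1} \text{ in } S \;\Bigl(\lim_{n\to\infty} \tfrac{s_n}{n} = 0\Bigr)
  \;\Longrightarrow\;
  \exists M\,\forall s \in S\;(s \le M)
\,\Bigr]
\]
```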
This principle has the unusual property of being derivable in all three main interpretations—CLASS, INT, RUSS—of BISH but not in BISH alone: Lietz has shown that BD-N fails in various realizability
models of extensional Martin-Löf type theory; see Lietz 2004 and Lietz & Streicher 2011.
Ishihara (1992, Theorem 4) has shown that BD-N is equivalent, over BISH, to each of the statements:
Every sequentially continuous mapping of a complete, separable metric space into a metric space is pointwise continuous;
Every sequentially continuous mapping of a separable metric space into a metric space is pointwise continuous.
He and other authors have subsequently proved that BD-N is a necessary and sufficient addition to BISH to ensure the derivability of a number of results allowing us to pass from sequential to
pointwise properties. One important example occurs with Banach's inverse mapping theorem from functional analysis. The classical theorem states that the inverse of an injective bounded linear mapping
between Banach spaces is continuous; but Ishihara (1994), working with separable spaces, has shown that the best possible conclusion in BISH is the sequential continuity of the inverse map, and that
to pass from that to full continuity, it is necessary and sufficient to work in BISH + BD-N.
The anti-Specker Property
Is there any viable constructive substitute for the classical property of sequential compactness? This question might seem frivolous, since the sequential compactness of the two-point space {0,1} is
equivalent, constructively, to LPO. However, and perhaps to the reader's surprise, there is a notion that is both constructively useful and classically equivalent to sequential compactness. In order
to discuss it, we need some definitions, and a little motivation from RUSS.
Let z = (z[n])[n≥1] be a sequence in a metric space (Z,ρ), and X a subset of Z. We say that z is
eventually bounded away from each point of X if for each x ∈ X there exist δ > 0 and a positive integer N such that ρ(z[n],x) ≥ δ whenever n ≥ N;
eventually bounded away from the set X if there exist δ > 0 and a positive integer N such that for each x ∈ X, ρ(z[n],x) ≥ δ whenever n ≥ N.
We recall, from earlier in this article, the strong recursive counterexample to the sequential compactness of [0,1], Specker's theorem (Specker 1949) in RUSS: there exists a (Specker) sequence z in
[0,1] that is eventually bounded away from each point of [0,1]. What would it take to rule such a counterexample out of our constructive mathematics?
By a one-point extension of a metric space X we mean a metric space of the form X∪{ω} on which the metric ρ, restricted to X, is the original one, and ρ(ω,X) > 0. Here we are adopting the convention
that ρ(ω,X) > 0 is shorthand for
∃r > 0 ∀x ∈ X (ρ(ω,x) ≥ r}.
To avoid Specker sequences, we can introduce the following anti-Specker property, AS[X], for X:
If Z = X∪{ω} is a one-point extension of X, then every sequence in Z that is eventually bounded away from each point of X is eventually bounded away from X — that is, z[n] = ω for all
sufficiently large n.
This property is independent of the particular one-point extension of X: if it holds for one such extension, it holds for them all.
The anti-Specker property is classically equivalent to the sequential compactness of X. It allows us to pass from a pointwise property—being eventually bounded away from each point of the set—to a
uniform one — being eventually bounded away from the set itself. One might therefore expect it to be connected with some version of the fan theorem. This is indeed the case: the anti-Specker property
for [0,1] is equivalent, over BISH, to the fan theorem FT[c] for c-bars (Berger and Bridges 2006). It should therefore be no surprise that, when added to BISH, anti-Specker properties enable us to
pass from pointwise to uniform properties in a number of instances. To illustrate, we sketch the proof of the following result:
BISH + AS[X] ⊢ Every pointwise continuous mapping of X into a metric space is uniformly sequentially continuous, in the sense that for all sequences (x[n])[n≥1],(x′[n])[n≥1] in X, if ρ(x[n],x′
[n]) → 0 as n → ∞, then ρ(f(x[n]),f(x′[n])) → 0 as n → ∞.
Let Z ≡ X∪{ω} be a one-point extension of X, and consider sequences (x[n])[n≥1], (x′[n])[n≥1] in X such that ρ(x[n],x′[n]) → 0 as n → ∞. Given ε > 0, construct a binary sequence (λ[n])[n≥1] such that
λ[n] = 0 ⇒ ρ(f(x[n]),f(x′[n])) < ε,
λ[n] = 1 ⇒ ρ(f(x[n]),f(x′[n])) > ε/2.
If λ[n] = 0, set z[n] = ω; if λ[n] = 1, set z[n] = x[n] . Using the sequential continuity of f (we omit the details), we can show that the sequence (z[n])[n≥1] is eventually bounded away from each
point of X. Hence, by the anti-Specker property of X, it is eventually bounded away from the set X. It follows that λ[n] = 0, and hence that ρ(f(x[n]),f(x′[n])) < ε, for all sufficiently large
values of n. Since ε is an arbitrary positive number, the proof is complete.
In our present state of knowledge, we can say that the proof-theoretic strength of AS[[0,1]] lies between that of the uniform continuity theorem (a consequence of FT[Π01]) and FT[D]. Diener (2012)
has shown that AS[[0,1]] is equivalent to every compact metric space having the anti-Specker property, and that a more general form of anti-Specker property, in which one-point extensions are
replaced by arbitrary supersets, is equivalent to FT[Π01]. Further interconnections between anti-Specker properties, fan theorems, and other principles of constructive mathematics have been studied
in detail by Dent (forthcoming); see also Diener 2008 (Chapter 4).
Measuring sales and marketing effectiveness of SaaS companies
The CAC Ratio
I read an interesting blog post from Will Price, a fellow VC at Hummer Winblad, on how to measure the sales and marketing effectiveness of SaaS companies with a magic number, defined approximately as the ratio of the incremental sales in a quarter (annualized) divided by the sales and marketing expenses of the previous quarter (this assumes that sales are recognized, from a GAAP standpoint, in the quarter following the sales and marketing investment).
I found this approach interesting, but it seems to me that gross margin is a more relevant benchmark than revenue, given that the GM of SaaS companies varies by type of application and size of the company (for example, NetSuite went from about 50% GM three years ago to close to 70% today). A more accurate benchmark would then be to divide the incremental GM (annualized) in a given quarter by the S&M expenses of the previous quarter. Let's call this ratio the Customer Acquisition Cost ratio, or CAC ratio. For the last quarter of 2007, the definition becomes:
CAC Ratio = (GM (Q4 07) - GM (Q307)) x 4 / S&M costs (Q307)
A CAC ratio of one would be equivalent to breaking even marginally on a new customer in one year. A ratio of 0.5 would mean breaking even in two years.
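The formula above is simple enough to sketch in a few lines of Python (my own illustration, not from Will Price's post; the sample figures are made up):

```python
def cac_ratio(gm_current, gm_previous, sm_previous, periods_per_year=4):
    """Annualized incremental gross margin divided by the prior period's
    sales & marketing spend. Use periods_per_year=4 for quarterly figures,
    12 for monthly ones."""
    return (gm_current - gm_previous) * periods_per_year / sm_previous

# Hypothetical quarterly figures (in $M): GM grew from 10 to 12, and the
# previous quarter's S&M spend was 8.
print(cac_ratio(gm_current=12.0, gm_previous=10.0, sm_previous=8.0))  # 1.0
```

A result of 1.0 would put this hypothetical company right at the "invest more" threshold described below.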
The next question is then: what is the right benchmark for this ratio? From our private investor experience, breaking even in 1-2 years seems reasonable, and if we look at Salesforce.com's CAC ratio since its IPO, it is indeed within this 0.5-1 range:
The implications for a private SaaS company are straightforward:
• If your CAC ratio is above 1, invest more to accelerate growth (and send me an e-mail at saasvc@bvp.com)
• If your CAC ratio is lower than 0.5, you need to think through your sales and marketing model and ramp up the sales learning curve before investing more
• If you are in between, stay on your course; you are doing fine
Refining the CAC Ratio
This CAC ratio can be refined by looking at the variation in Monthly Recurring Gross Margin, defined as the Monthly Recurring Revenue (MRR or CMRR) less the COGS run rate for the month (see my previous post on SaaS metrics for the definitions of MRR and CMRR). If you use the MRR, then the formula above is correct (just multiply by 12 instead of 4 to annualize the gross margin increase), but if you use the CMRR, then you need to divide the increase in gross margin by the S&M costs of the current quarter, not the previous quarter.
The assumption here is that for most SaaS companies, the service takes a few months to get implemented, so from a GAAP standpoint, the revenue recognized in quarter N has been acquired in quarter
(N-1) and therefore it is natural to use the S&M costs from the quarter (N-1) in the CAC ratio. If you use the gross margin derived from CMRR, the situation is different. The CMRR represents the
revenue contracted during quarter N and not recognized yet on a GAAP basis because the service has not been implemented. Therefore it is legitimate to say that the increase in CMRR from quarter N vs.
quarter (N-1) has been acquired with S&M cost of the quarter N, not (N-1), hence the need to adjust the formula.
For companies with a short implementation cycle (like e-mail marketing), the revenue can be recognized in the same quarter, and therefore the formula should be calculated with the S&M costs from quarter N, not (N-1), for the same reason.
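The timing rule described above — prior-quarter spend for GAAP/MRR figures, current-quarter spend for CMRR figures — can be sketched like this; the function and its names are my own illustration, not from the post:

```python
def recurring_cac_ratio(grm_by_quarter, sm_by_quarter, q, use_cmrr=False):
    """CAC ratio from monthly recurring gross margin run rates.

    grm_by_quarter: monthly recurring gross margin at the end of each quarter.
    sm_by_quarter:  S&M expense for each quarter.
    q:              index of the quarter being evaluated.
    GAAP/MRR revenue lags the spend by a quarter, so divide by quarter q-1's
    S&M; CMRR reflects bookings signed in quarter q, so divide by quarter q's.
    """
    delta_annualized = (grm_by_quarter[q] - grm_by_quarter[q - 1]) * 12
    spend = sm_by_quarter[q] if use_cmrr else sm_by_quarter[q - 1]
    return delta_annualized / spend

grm = [100.0, 110.0]   # monthly recurring GM run rate at end of Q1, Q2
sm = [240.0, 300.0]    # S&M spend in Q1, Q2
print(recurring_cac_ratio(grm, sm, 1))                 # 120 / 240 = 0.5
print(recurring_cac_ratio(grm, sm, 1, use_cmrr=True))  # 120 / 300 = 0.4
```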
CAC Ratio benchmarking for public SaaS companies
I also looked at all 13 SaaS companies listed in my SaaS 13 Index to see how they were performing. The results for Q4 2007 are shown in the chart below, where the CAC ratio is plotted against the EV/TTM revenue multiple (Enterprise Value divided by Trailing Twelve Month revenues):
Note: Negative numbers indicate that companies actually decreased their gross margin over the quarter.
As a SaaS VC, I tend to focus on private companies, so I leave it up to you to design your short and long strategies on this peer group - of course, a lot of other factors need to be taken into account (like growth rate and churn, as $1 of recurring revenue is worth more for companies with lower churn) - but it is interesting to note that SuccessFactors is valued at more than 7x EV/TTM while it lost GM in Q4 07, and that Constant Contact is valued at the same multiple as Concur (both expecting to grow 60% from 2007 to 2008) but with a very different CAC ratio.
5 comments:
Great analysis, good to see some refinement on the magic number concept.
Am curious of your thoughts on how the CAC ratio is influenced based on stage/maturity of company (ie. a new entrant trying to gain awareness in the marketplace investing in various mktg programs
and sales to get the lead-gen engine flywheel started)?
Great analysis Phillipe. Really insightful. However, it seems like there's little correlation between the CAC ratio and EV/rev.
Ed's question is valid too. If I have time I'd like to do my own analysis around CAC ratio and size/maturity of the company.
Statistics is the science and art of making inferences from data, under conditions of uncertainty. The practice of statistics requires not only the understanding of the statistical techniques, but
also understanding of the problem requiring statistical analysis, whether it is in the liberal arts, the sciences, health sciences, or business. The statistics sequence has an interdisciplinary
component so that this program will help develop skills in the application of statistics to a variety of disciplines.
There are job opportunities in government and the private sector for individuals with statistical training. The Census Bureau, the Bureau of Labor Statistics, the US Environmental Protection Agency, the National Center for Health Statistics, the National Institutes of Health, and the Food and Drug Administration are some government agencies with an ever-growing demand for employees with statistical training. In the private sector, the pharmaceutical and agricultural industries and marketing are always in need of employees with statistical training. Given the present market demand for statisticians, graduates with a bachelor's degree with a statistics sequence have a variety of options for careers in the public sector, the pharmaceutical industry, or agribusiness. Please see the following websites for more information on what statistics is, what you can do with statistics, and how to become a statistician.
Program Options & Requirements
1. Required courses:
• MAT 145, Calculus I
• MAT 146, Calculus II
• MAT 147, Calculus III
• MAT 175, Elementary Linear Algebra
• MAT 260, Discrete Mathematics
• MAT 350, Applied Probability Models
• MAT 351, Statistics and Data Analysis
2. At least two courses from the following list:
• MAT 353, The Analysis of Time Series
• MAT 356, Statistical Computing
• MAT 450, Finite Sampling
• MAT 453, Regression Analysis
• MAT 455, Applied Stochastic Processes
• MAT 456, Multivariate Statistics
• MAT 458, The Design of Experiments
[Only senior students with good standing will be allowed to take a 400-level course subject to the Graduate School's approval.]
3. One computer-programming course from Introduction to Micro Computers ACS 155.01, or ACS 155.02
4. Senior Portfolio to be handed in before graduation.
Program Options & Requirements, cont'd
5. Select at least two of the following areas and complete at least two courses from the list of approved courses for each area.
Biology (Complete two):
• BSC 201: Ecology
• BSC 203: Cell Biology
• BSC 219: Genetics
• BSC 297: Biological Evolution
• BSC 321: Molecular and Developmental Genetics
Economics (Complete two):
• ECO 225: Labor Economics and Labor Problems
• ECO 235: Telecommunications Economics and Public Policy
• ECO 238: Using Econometrics
• ECO 239: Managerial Economics
• ECO 240: Intermediate Microeconomic Theory
• ECO 241: Intermediate Macroeconomic Theory
• ECO 320: Industrial Organization
• ECO 339: Organizational Economics
• ECO 331: Intermediate Economic Statistics
Psychology (Complete two):
• PSY 231 Research Methods in Psychology
• PSY 331 Laboratory in Research Methods in Psychology
• PSY 334 Psychological Measurement
• PSY 230 Business and Industrial Psychology
• PSY 232 Personality
Note: It is to the advantage of the student to have a minor or double major in one of the above areas. However, it is not a requirement for the sequence. Senior students in good standing are
encouraged to take upper level applied statistics courses from selected cognate areas.
Mathematics is commonly defined as the study of patterns of structure, change, and space; more informally, one might say it is the study of 'figures and numbers'. In the formalist view, it is the investigation of axiomatically defined abstract structures using logic and mathematical notation; other views are described in Philosophy of mathematics. Mathematics might be seen as a simple extension of spoken and written languages, with an extremely precisely defined vocabulary and grammar, for the purpose of describing and exploring physical and conceptual relationships.
Although mathematics itself is not usually considered a natural science, the specific structures that are investigated by mathematicians often have their origin in the natural sciences, most commonly
in physics. However, mathematicians also define and investigate structures for reasons purely internal to mathematics, because the structures may provide, for instance, a unifying generalization for
several subfields, or a helpful tool for common calculations. Finally, many mathematicians study the areas they do for purely aesthetic reasons, viewing mathematics as an art form rather than as a
practical or applied science. Some mathematicians like to refer to their subject as "the Queen of Sciences".
Mathematics is often abbreviated to math (in American English) or maths (in British English).
Overview and history of mathematics
See the article on the history of mathematics for details.
The word "mathematics" comes from the Greek μάθημα (máthema) which means "science, knowledge, or learning"; μαθηματικός (mathematikós) means "fond of learning".
The major disciplines within mathematics arose out of the need to do calculations in commerce, to measure land and to predict astronomical events. These three needs can be roughly related to the
broad subdivision of mathematics into the study of structure, space and change.
The study of structure starts with numbers, first the familiar natural numbers and integers and their arithmetical operations, which are recorded in elementary algebra. The deeper properties of whole
numbers are studied in number theory. The investigation of methods to solve equations leads to the field of abstract algebra, which, among other things, studies rings and fields, structures that generalize the properties possessed by the familiar numbers. The physically important concept of vectors, generalized to vector spaces and studied in linear algebra, belongs to the two branches of
structure and space.
The study of space originates with geometry, first the Euclidean geometry and trigonometry of familiar three-dimensional space (also applying to both more and less dimensions), later also generalized
to non-Euclidean geometries which play a central role in general relativity. Several long standing questions about ruler and compass constructions were finally settled by Galois theory. The modern
fields of differential geometry and algebraic geometry generalize geometry in different directions: differential geometry emphasizes the concepts of functions, fiber bundles, derivatives, smoothness
and direction, while in algebraic geometry geometrical objects are described as solution sets of polynomial equations. Group theory investigates the concept of symmetry abstractly and provides a link
between the studies of space and structure. Topology connects the study of space and the study of change by focusing on the concept of continuity.
Understanding and describing change in measurable quantities is the common theme of the natural sciences, and calculus was developed as a most useful tool for doing just that. The central concept
used to describe a changing variable is that of a function. Many problems lead quite naturally to relations between a quantity and its rate of change, and the methods to solve these are studied in
the field of differential equations. The numbers used to represent continuous quantities are the real numbers, and the detailed study of their properties and the properties of real-valued functions
is known as real analysis. For several reasons, it is convenient to generalise to the complex numbers which are studied in complex analysis. Functional analysis focuses attention on (typically
infinite-dimensional) spaces of functions, laying the groundwork for quantum mechanics among many other things. Many phenomena in nature can be described by dynamical systems and chaos theory deals
with the fact that many of these systems exhibit unpredictable yet deterministic behavior.
In order to clarify and investigate the foundations of mathematics, the fields of set theory, mathematical logic and model theory were developed.
When computers were first conceived, several essential theoretical concepts were shaped by mathematicians, leading to the fields of computability theory, computational complexity theory, information
theory and algorithmic information theory. Many of these questions are now investigated in theoretical computer science. Discrete mathematics is the common name for those fields of mathematics useful
in computer science.
An important field in applied mathematics is statistics, which uses probability theory as a tool and allows the description, analysis and prediction of phenomena and is used in all sciences.
Numerical analysis investigates the methods of efficiently solving various mathematical problems numerically on computers and takes rounding errors into account.
Topics in mathematics
An alphabetical and subclassified list of mathematical topics is available. The following list of subfields and topics reflects one organizational view of mathematics.
Quantity
In general, these topics and ideas present explicit measurements of sizes of numbers or sets, or ways to find such measurements.
Number -- Natural number -- Pi -- Integers -- Rational numbers -- Real numbers -- Complex numbers -- Hypercomplex numbers -- Quaternions -- Octonions -- Sedenions -- Hyperreal numbers -- Surreal
numbers -- Ordinal numbers -- Cardinal numbers -- p-adic numbers -- Integer sequences -- Mathematical constants -- Number names -- Infinity -- Base
Change
These topics give ways to measure change in mathematical functions, and changes between numbers.
Arithmetic -- Calculus -- Vector calculus -- Analysis -- Differential equations -- Dynamical systems and chaos theory -- List of functions
Structure
These branches of mathematics measure size and symmetry of numbers, and various constructs.
Abstract algebra -- Number theory -- Algebraic geometry -- Group theory -- Monoids -- Analysis -- Topology -- Linear algebra -- Graph theory -- Universal algebra -- Category theory -- Order
Space
These topics tend to quantify a more visual approach to mathematics than others.
Topology -- Geometry -- Trigonometry -- Algebraic geometry -- Differential geometry -- Differential topology -- Algebraic topology -- Linear algebra -- Fractal geometry
Discrete mathematics
Topics in discrete mathematics deal with branches of mathematics with objects that can only take on specific, separated values.
Combinatorics -- Naive set theory -- Probability -- Theory of computation -- Finite mathematics -- Cryptography -- Graph theory -- Game theory
Applied mathematics
Fields in applied mathematics apply knowledge of mathematics to real-world problems.
Mechanics -- Numerical analysis -- Optimization -- Probability -- Statistics -- Financial mathematics
Famous theorems and conjectures
These theorems have interested mathematicians and non-mathematicians alike.
Fermat's last theorem -- Goldbach's conjecture -- Twin Prime Conjecture -- Gödel's incompleteness theorems -- Poincaré conjecture -- Cantor's diagonal argument -- Four color theorem -- Zorn's
lemma -- Euler's identity -- Scholz Conjecture -- Church-Turing thesis
Important theorems
These are theorems that have changed the face of mathematics throughout history.
Riemann hypothesis -- Continuum hypothesis -- P=NP -- Pythagorean theorem -- Central limit theorem -- Fundamental theorem of calculus -- Fundamental theorem of algebra -- Fundamental theorem of
arithmetic --Fundamental theorem of projective geometry -- classification theorems of surfaces -- Gauss-Bonnet theorem
Foundations and methods
Such topics are approaches to mathematics, and influence the way mathematicians study their subject.
Philosophy of mathematics -- Mathematical intuitionism -- Mathematical constructivism -- Foundations of mathematics -- Set theory -- Symbolic logic -- Model theory -- Category theory --
Theorem-proving -- Logic -- Reverse Mathematics -- Table of mathematical symbols
History and the world of mathematicians
History of mathematics -- Timeline of mathematics -- Mathematicians -- Fields medal -- Abel Prize -- Millennium Prize Problems (Clay Math Prize) -- International Mathematical Union -- Mathematics
competitions -- Lateral thinking
Mathematics and other fields
Mathematics and architecture -- Mathematics and education -- Mathematics of musical scales
Mathematical coincidences
Mathematical tools
Referring to the axiomatic method, where certain properties of an (otherwise unknown) structure are assumed and consequences thereof are then logically derived, Bertrand Russell said:
Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
This may explain why John Von Neumann
once said:
In mathematics you don't understand things. You just get used to them.
About the beauty of mathematics, Bertrand Russell said in Study of Mathematics:
Mathematics, rightly viewed, possesses not only truth, but supreme beauty -- a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the
gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of
being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as poetry.
Elucidating the symmetry between the creative and logical aspects of mathematics, W.S. Anglin observed, in Mathematics and History:
Mathematics is not a careful march down a well-cleared highway, but a journey into a strange wilderness, where the explorers often get lost. Rigour should be a signal to the historian that the
maps have been made, and the real explorers have gone elsewhere.
Mathematics is not...
Mathematics is not numerology. Although numerology uses modular arithmetic to boil names and dates down to single digit numbers, numerology arbitrarily assigns emotions or traits to numbers without
bothering to prove the assignments in a logical manner. Mathematics is concerned with proving or disproving ideas in a logical manner, but numerology is not. The interactions between the arbitrarily
assigned emotions of the numbers are intuitively estimated rather than calculated in a thoroughgoing manner.
Mathematics is not accountancy. Although arithmetic computation is crucial to the work of accountants, they are mainly concerned with proving that the computations are true and correct through a
system of doublechecks. The proving or disproving of hypotheses is very important to mathematicians, but not so much to accountants. Advances in abstract mathematics are irrelevant to accountancy if
the discoveries can't be applied to improving the efficiency of concrete bookkeeping.
Mathematical abilities and gender issues
Mathematical abilities are said to differ by gender. Males are supposedly more skilled in mathematical fields than females. Results of intelligence tests, such as the Differential Aptitude Test (DAT), provide evidence for this statement. Male 12th graders who took the DAT scored almost nine-tenths of a standard deviation higher on mechanical reasoning than females (Lupkowski, 1992). There are many theories of what may be causing this difference between the genders in mathematical ability. Environmentalists argue that the difference is caused by gender-biased education, while other researchers argue that it is the characteristics of the genders that cause this ability gap. The reason is still not certain.
Characteristic differences are one theory offered as the reason for greater mathematical performance among male students. Males are said to have high self-esteem, while females are not as confident. When studying mathematics at a young age, males believe that they do well, when the truth is that their abilities do not differ much from females' (Leonard, 1995). This level of confidence, motivation, and interest in the mathematical field eventually results in mathematical ability gaps (Manning, 1998).
Biased education
There are many people who believe that biased education is the reason for the mathematical ability differences. As an example of biased education, a woman who scored the same as a man on a test was given worse grades than the man. The professor who taught her believed that women did not belong in his field (Isaacson, 1990). There are also examples of biased education where, although girls offer ideas as much as boys, boys are called upon more frequently. Leder (1990) comments that "Acknowledgement, praise, encouragement, and corrective feedback are given slightly more frequently to men than to women". Females also tend to put less effort into mathematics than linguistics because they are tied up with stereotypical statements saying that they will not succeed in the mathematics field. The stereotypical thought that men make better mathematicians, scientists, or engineers is still engraved in women's minds, discouraging women from studying mathematics.
National Science Foundation (1997). Gender issues in math and technology. TERC. Retrieved July 22, 2004, from http://www.terc.edu/mathequity/gender.html
Tencza (2002). Gender Differences in Mathematics Among Various Aged Students. Georgetown College. Retrieved July 22, 2004 from http://www.georgetowncollege.edu/departments/education/portfolios/Tencza
Stanley, Benbow, Brody, Dauber, & Lupkowski (1992). Gender Differences on Eighty-Six Nationally Standardized Aptitude and Achievement Tests. Talent Development, vol. 1, 42-65.
area bounded
September 15th 2012, 09:25 AM #1
area bounded
What is the Area bounded by x^2 = y, x = 1 , and x - axis?
what does this mean? favor please
Re: area bounded
Have you at least graphed those? You should have a parabola (y = x^2), a vertical line (x = 1), and a horizontal line (y = 0, the x-axis). Do you see the region bounded by those? Do you see that you are asked for the area "below" y = x^2 between x = 0 and x = 1? Since you are asking a question like this, I take it you are taking a Calculus class. And you should have learned that the "area
below y= f(x), between x= a and x= b" is given by the integral $\int_a^b f(x)dx$. That is the simplest application of the integral and, in fact, is usually used to introduce the integral.
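Working that integral out gives $\int_0^1 x^2\,dx = \left[\frac{x^3}{3}\right]_0^1 = \frac{1}{3}$. A quick check in Python (my own sketch, not part of the thread):

```python
from fractions import Fraction

def area_under_x_squared(a, b):
    """Exact area under y = x^2 from x = a to x = b,
    via the antiderivative x^3 / 3."""
    a, b = Fraction(a), Fraction(b)
    return b**3 / 3 - a**3 / 3

print(area_under_x_squared(0, 1))  # 1/3
```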
Re: area bounded
you are only giving things with the same concept of the my math problem but you either cannot answer
Re: area bounded
Re: area bounded
The user you are addressing here is a valuable contributor on several forums that I know of personally, and probably several that I do not know of. I have gained insight many times from reading
his posts. When you imply that the solution is beyond him, I can but chuckle, yet your demeanor here recently is no laughing matter.
I'm not trying to bust your chops, but please treat the folks here with more respect.
He is giving you the concept needed to answer the question in the hope that you may apply it yourself, and gain an understanding from the application.
Re: area bounded
The user you are addressing here is a valuable contributor on several forums that I know of personally, and probably several that I do not know of. I have gained insight many times from reading
his posts. When you imply that the solution is beyond him, I can but chuckle, yet your demeanor here recently is no laughing matter.
I'm not trying to bust your chops, but please treat the folks here with more respect.
He is giving you the concept needed to answer the question in the hope that you may apply it yourself, and gain an understanding from the application.
i respect anyone here unless they do something not good in my feelings. thanks MArky
Area Problem
August 24th 2010, 10:52 PM
Area Problem
A rectangular plot of land, 12m by 10m, has a path of width 4m all around it; find the area of the path.
August 24th 2010, 11:03 PM
Prove It
Have you at least tried drawing this out first?
August 24th 2010, 11:27 PM
I tried to solve it but not get answerd. Plz help me to get its proper answer.
August 25th 2010, 12:42 AM
Let me help you out:
"A rectangular plot of land, 12m by 10m, has a path of width 4m all around it; find the area of the path."
You can divide the added path into 4 sections:
The top and bottom, plus the 2 sides. It helps if you draw a picture.
You can say the top and bottom's area = $(12 + 4 + 4) * 4 * 2$
You can say that the 2 sides' area = $10*4 * 2$
Do you know what to do now?
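The same answer comes from subtracting the plot's area from the outer rectangle's; a quick sketch (my own, not part of the thread):

```python
def path_area(length, width, path_width):
    """Area of a uniform path surrounding a length-by-width plot:
    the outer rectangle minus the plot itself."""
    outer = (length + 2 * path_width) * (width + 2 * path_width)
    return outer - length * width

# 12 m by 10 m plot with a 4 m path all around:
print(path_area(12, 10, 4))                 # 240
print((12 + 4 + 4) * 4 * 2 + 10 * 4 * 2)    # 240, matching the strip method
```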
August 25th 2010, 02:35 AM
can you send me its full solution ?
August 25th 2010, 06:07 AM
Prove It
August 25th 2010, 01:51 PM
Dear Winsome,
Take a look at pictures on your walls. Sketch a similar picture, but insert the inside picture dimensions of 12x10 and then calculate the outside dimensions of the picture frame. You take it from there.
Dirk Laurie's home page
Documents on this server
Documents on this server not directly clickable from here
Visit http://dip.sun.ac.za/~laurie/topic where "topic" is replaced by the relevant directory name. These directories mostly contain material that I don't want to be indexed by webcrawlers.
Some mathematicians and computer programmers
I found this interesting
When I asked Google for a picture of Kronrod, I got various wrong Kronrods and some non-Kronrods. I lifted this one from Brudno's translation of the short biography by Landis and Yaglom, published by the Department of Scientific Computing and Computational Mathematics at Stanford.
Dirk Laurie's site
University of Dundee, Scotland
Married to Trienke; five sons: Henri, Diederik, Dirk Pieter, Reenen and Kestell.
imagine and real part of inner product?
November 9th 2009, 03:54 AM #1
imagine and real part of inner product?
Upon what law was this transition made?
How did they decide what is the real and what is the imaginary part?
They "decide" what the real and imaginary parts of a number are from the definition: if x = a + bi, then the real part of x is a and the imaginary part is b.
They are also using the "linearity" of the inner product,
$\langle au + bv, w\rangle = a\langle u, w\rangle + b\langle v, w\rangle$, and the fact that $\langle u, v\rangle = \overline{\langle v, u\rangle}$, so that $\langle u, av + bw\rangle = \overline{\langle av + bw, u\rangle} = \overline{a\langle v, u\rangle} + \overline{b\langle w, u\rangle} = \overline{a}\langle u, v\rangle + \overline{b}\langle u, w\rangle$.
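These identities are easy to sanity-check numerically with Python's built-in complex type; here is a sketch of the standard inner product on C^2, linear in the first slot and conjugate-linear in the second:

```python
def inner(u, v):
    """<u, v> = sum of u_k * conjugate(v_k)."""
    return sum(uk * vk.conjugate() for uk, vk in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 1j]
a = 2 + 3j

# Conjugate symmetry: <u, v> = conjugate(<v, u>)
assert inner(u, v) == inner(v, u).conjugate()
# Conjugate-linearity in the second slot: <u, a*v> = conjugate(a) * <u, v>
assert inner(u, [a * vk for vk in v]) == a.conjugate() * inner(u, v)
# Linearity in the first slot: <a*u, v> = a * <u, v>
assert inner([a * uk for uk in u], v) == a * inner(u, v)
print(inner(u, v))  # (-1+2j)
```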
Prime Counting Function Model
Prime Counting Function Model Documents
This material has 2 associated documents. Select a document title to view a document's information.
Main Document
written by Wolfgang Christian
The Prime Counting Function model uses the trial division algorithm to compute the number of primes less than or equal to the number n. Although the trial division algorithm is inefficient, we use it
to perform a lengthy calculation while a standard EJS simulation thread accumulates and plots data from the parallel computation. Users can vary the number of independent threads and observe the
computational time.
The Prime Counting Function Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the model's jar file
will run the simulation if Java is installed.
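The counting idea is easy to sketch. Below is a minimal, single-threaded Python version of the trial-division approach the model uses (the model itself is a Java/EJS program that splits this work across several threads):

```python
def is_prime(n):
    """Trial division: test divisors up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_count(n):
    """pi(n): the number of primes less than or equal to n."""
    return sum(1 for k in range(2, n + 1) if is_prime(k))
```

For example, prime_count(100) returns 25. The O(sqrt(n)) cost of each primality test is exactly why the computation becomes lengthy for large n, which is what makes it a useful workload for the parallel-thread demonstration.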
Published October 11, 2013
Last Modified October 11, 2013
Source Code Documents
The source code zip archive contains an XML representation of the Prime Counting Function Model. Unzip this archive in your Ejs workspace to compile and run this model using Ejs.
Last Modified October 11, 2013
Subnetting Made Easy – Part 1: Decimal & Binary Numbers
We know decimal numbers. We’ve been using those all our lives, so we’re familiar with that. So, if you look at it, our number system is based on the value of where it sits:
10^3 10^2 10^1 10^0
So the first column, as we know, is the ones column, then our tens, hundreds, and thousands columns. But really those are the ten-to-the-zero, ten-to-the-one, ten-squared, and ten-cubed columns, where ten cubed means ten times ten times ten. So this is what we're used to with decimal numbers: different positions have different values.
In binary numbers, it’s the exact same thing except instead of using base 10, we use base 2. Our values are 2 to the zero is one, then two to the one is two, two squared is four, then two times two
times two is eight, that’s two cubed. Then two the fourth is 16, our next column is 32, then 64 and 128. This makes eight binary bits.
For example, decimal 210 in binary is 11010010, since 128 + 64 + 16 + 2 = 210.
How to Determine the Size of a Network
So how do we determine the size of a network? The size of the network is determined by the number of host bits: the more host bits you have, the larger your network will be. If you have one host bit, it can either be a 0 or a 1. If you have two bits, they can be 00, 01, 10, or 11, so we have four combinations with two bits. So the size is equal to 2 to the power of the number of host bits. That determines the size of our network. If we have one bit, it's 2^1 or 2. If we have two bits, it's 2^2 or 4 possible combinations. If we have 8 host bits, it would be 2^8, which equals 256, and that would be the size of that network.
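The arithmetic above is easy to check in code. Here is a small Python sketch (not part of the original article) covering both the binary place values and the network-size rule:

```python
def to_binary(n, bits=8):
    """Decimal to binary, using the place values 128, 64, 32, 16, 8, 4, 2, 1."""
    return format(n, '0{}b'.format(bits))

def network_size(host_bits):
    """A network with h host bits has 2**h possible addresses."""
    return 2 ** host_bits

print(to_binary(210))   # 11010010
print(network_size(8))  # 256
```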
Guest Blogger: Jill Liles
Subnetting Series
• Subnetting Made Easy–Part 1: Decimal & Binary Numbers
Probabilistic framework for the adaptation and comparison of image codes
We apply a Bayesian method for inferring an optimal basis to the problem of finding efficient image codes for natural scenes. The basis functions learned by the algorithm are oriented and localized
in both space and frequency, bearing a resemblance to two-dimensional Gabor functions, and increasing the number of basis functions results in a greater sampling density in position, orientation, and
scale. These properties also resemble the spatial receptive fields of neurons in the primary visual cortex of mammals, suggesting that the receptive-field structure of these neurons can be accounted
for by a general efficient coding principle. The probabilistic framework provides a method for comparing the coding efficiency of different bases objectively by calculating their probability given
the observed data or by measuring the entropy of the basis function coefficients. The learned bases are shown to have better coding efficiency than traditional Fourier and wavelet bases. This
framework also provides a Bayesian solution to the problems of image denoising and filling in of missing pixels. We demonstrate that the results obtained by applying the learned bases to these
problems are improved over those obtained with traditional techniques.
© 1999 Optical Society of America
OCIS Codes
(000.5490) General : Probability theory, stochastic processes, and statistics
(100.2960) Image processing : Image analysis
(100.3010) Image processing : Image reconstruction techniques
Michael S. Lewicki and Bruno A. Olshausen, "Probabilistic framework for the adaptation and comparison of image codes," J. Opt. Soc. Am. A 16, 1587-1601 (1999)
Algorithm of the Week: Multiplication
Perhaps right after addition, at school we learned how to multiply two numbers. This algorithm isn't as easy as addition, but we're so familiar with it that we don't even recognize it as an "algorithm". We just know it by heart.
However, as I already said, multiplication is a bit more difficult than addition, and it is interesting for several reasons. First of all, let's compare multiplication in binary and decimal.
So, let’s see how to multiply two numbers.
Multiplying and adding is practically the same thing, so where’s the difference?
It’s clear that a product of two numbers (each with N digits) can be represented as N sums, as we can see on the next picture.
Now, for larger integers each time we shift left the next intermediate sum.
That is absolutely logical since we can represent the numbers as a sum of decimals divisible by 10 without a remainder.
Binary Multiplication
Sometimes binary is easier to work with than decimal and multiplication is just the case. As shown on the picture below binary multiplication is much easier compared to decimal.
That’s because we multiply only by 1 and 0, so the intermediate sum can be either the first number or 0.
On the other hand, shifting left in binary equals multiplication by 2.
Why? Well, simply because the base is 2. It's practically the same with decimals, where shifting left equals multiplying by 10.
The same applies, of course, for any base: e.g. hex F equals decimal 15 (since digit values start from 0) and FF equals 255 (which is 16×16 - 1).
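To make this concrete, the shift-and-add procedure sketched above can be written in a few lines of Python (an illustrative sketch for non-negative integers):

```python
def multiply(a, b):
    """Binary multiplication of non-negative integers by shifting and adding."""
    result = 0
    shift = 0
    while b > 0:
        if b & 1:                 # current multiplier bit is 1
            result += a << shift  # add the multiplicand, shifted left
        b >>= 1
        shift += 1
    return result
```

Each set bit of the multiplier contributes one shifted copy of the multiplicand, which is exactly the column-by-column picture from the multiplication table above.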
Can we do better?
Unlike addition, here the answer is yes: we already know how to multiply faster (either decimal or binary numbers) using Karatsuba's fast multiplication algorithm.
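For the curious, here is a compact Python sketch of Karatsuba's idea, which replaces the four half-size products of the schoolbook split with three:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers."""
    if x < 10 or y < 10:
        return x * y
    m = min(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)   # split each number at the m-th digit
    yh, yl = divmod(y, p)
    a = karatsuba(xh, yh)
    c = karatsuba(xl, yl)
    # the cross term needs only one extra multiplication instead of two
    b = karatsuba(xh + xl, yh + yl) - a - c
    return a * p * p + b * p + c
```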
It’s Important Because …
I’ve to admit that both addition and multiplication algorithms are fairly to understand, but we must remember that they are giving us the ground level for more complex cryptographic algorithms.
Published at DZone with permission of Stoimen Popov, author and DZone MVB. (source)
Work done by a radial force with constant magnitude
April 26th 2009, 08:25 PM #1
Feb 2009
A particle moves along the smooth curve y=f(x) from (a,f(a)) to (b,f(b)). The force moving the particle has constant magnitude k and always points away from the origin. Show that the work done by the force is $\int_{C}\mathbf{F}\cdot\mathbf{T}\,ds = k\left[\left(b^{2}+(f(b))^{2}\right)^{1/2}-\left(a^{2}+(f(a))^{2}\right)^{1/2}\right]$
It's rather long, so see the attachment; I've attached it in case it helps.
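As a numerical sanity check of the identity (not a proof), here is a short Python sketch; the sample curve f(x) = x² on [1, 2] and the magnitude k = 2 are arbitrary choices:

```python
import math

k = 2.0
f = lambda x: x * x
a, b = 1.0, 2.0
n = 20000

# W = integral over the curve of F . dr, with F = k * (unit radial vector),
# approximated by the midpoint rule on n small segments
W = 0.0
for i in range(n):
    x0 = a + (b - a) * i / n
    x1 = a + (b - a) * (i + 1) / n
    y0, y1 = f(x0), f(x1)
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)   # segment midpoint
    rm = math.hypot(xm, ym)
    W += k * (xm * (x1 - x0) + ym * (y1 - y0)) / rm

closed_form = k * (math.hypot(b, f(b)) - math.hypot(a, f(a)))
# W and closed_form agree closely, as the identity predicts
```

The agreement reflects why the identity holds: a radial force of constant magnitude only does work through the change in distance from the origin.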
Are there other algebra structures on the regular representation of a group?
Let $G$ be a (discrete, say) group and $\mathbb K$ a field. The regular representation $G^{\mathbb K}$ is the vector space of all functions $G \to \mathbb K$. It is a (left, say) $G$-module: given $g
\in G$ and $f: G \to \mathbb K$, the action is $g\cdot f: x \mapsto f(xg)$. Then $G^{\mathbb K}$ is a commutative algebra object in $G\text{-rep}_{\mathbb K}$, the symmetric monoidal category of $\
mathbb K$-valued $G$-representations, under pointwise multiplication $f_1f_2: x \to f_1(x)f_2(x)$.
But the pointwise product is not necessarily the only commutative algebra (in $G\text{-rep}$) structure that can be put on $G^{\mathbb K}$. For example, when $\mathbb K = \mathbb R$ and $G = \mathbb
Z/2$, as an algebra $G^{\mathbb K} \cong \mathbb R[\epsilon]/(\epsilon^2 = 1)$, with the $G$-action corresponding to conjugation $\epsilon \mapsto -\epsilon$. The same $G$-module supports the algebra
structure $\mathbb R[\epsilon]/(\epsilon^2 = -1) = \mathbb C$, which is patently a different algebra.
My question is whether there are any examples with $\mathbb K = \mathbb C$? I.e.:
Does there exist a group $G$ so that there is a commutative algebra object in $G\text{-rep}_{\mathbb C}$ that is isomorphic to $G^{\mathbb C}$ as a representation but not as an algebra?
I believe that any such group must be rather large: in particular, I'm sure that it cannot be finite.
gr.group-theory rt.representation-theory qa.quantum-algebra
I will mention an idea in the comments, since I'm very unsure of its correctness. Take G=PGL(2,C) and forget its algebraic structure: think of it just as a discrete group. It acts on the field C
(x) as automorphisms, and the invariant subspace of C(x) is just C. So G^C and C(x) have the same dimension and the same invariant subspace, and that's almost a proof that they're the same
representation. Unfortunately, math isn't that easy, and I don't see how to really get my hands on either representation. – Theo Johnson-Freyd Sep 13 '10 at 6:13
Silly/stupid question: when you say ``${\mathbb K}$-valued representation of $G$'', do you mean a repn of G as endomorphisms of some ${\mathbb K}$-vector space? I also don't follow what it means
for $G^{\mathbb K}$ to be an algebra object in the category $G-{\rm rep}_{\mathbb K}$. – Yemon Choi Sep 13 '10 at 6:36
Yemon, yes to the first question; "algebra object etc" simply means that the multiplication map on functions is $G$-equivariant (well, the same for scalar multiplication, but that is automatic).
You can also call it a "commutative $G$-algebra", which is fairly standard. However, the correct notion of isomorphism should take both structures into account (in particular, $A$ and $B$ may two
$G$-algebras that are isomorphic as algebras and as $G$-modules, but not as $G$-algebras); this question is about a stronger property of being non-isomorphic as algebras only. – Victor Protsak Sep
13 '10 at 7:18
1 Aren't $A=\mathbb{C}[x]/(x^n)$ and $B=\mathbb{C}[x]/(x^n-1)\cong\mathbb{C}^G$ two commutative $G$-algebras isomorphic to the regular representation of $G=\mathbb{Z}_n$ but non-isomorphic as
algebras (e.g. because $A$ is not semisimple)? In both cases, the standard generator of $G$ acts on $x$ as the multiplication by the fixed primitive $n$th root of unity. – Victor Protsak Sep 13
'10 at 7:28
8 I think your exponential notation is at odds with standard convention: $G^{\mathbb{K}}$ usually means maps from $\mathbb{K}$ to $G$, and you want the opposite. – S. Carnahan♦ Sep 13 '10 at 7:56
1 Answer
Summary Yes, there are many such examples, even for finite groups.
Construction Let $W<GL(V)$ be a complex reflection group, $A=\mathbb{C}[V]$ be the algebra of polynomial functions on $V$ and $A^W$ be the subalgebra of $W$-invariants. Then by the
Chevalley–Shephard–Todd theorem, $A^W$ is a polynomial algebra and $A$ is a free $A^W$-module. This may be viewed as a deformation of $\mathbb{C}^W$ as follows. For any $z\in\text{Spec}A^
W,$ consider the fiber $A_z=A/zA.$ By the freeness property, each $A_z$ carries the regular representation of $W.$ For a regular $z,$ corresponding to a $W$-orbit of a regular point in
$V,$ the algebra $A_z$ may be identified with the algebra of functions on the orbit, which consists of $|W|$ points; in particular, $A_z\cong \mathbb{C}^W$ as a $W$-algebra. But for values
of $z$ corresponding to singular orbits, algebras $A_z$ are not reduced. The most singular fiber is $A_0=\mathbb{C}[V]/(\mathbb{C}[V]^W_{+})$ and is a graded nilpotent Frobenius algebra, with one-dimensional socle and radical, called the covariant algebra of $W.$
Example Let $W=\mathbb{Z}_n$ acting on the one-dimensional vector space with coordinate $x$ by the $n$th roots of unity. Then regular fibers $\mathbb{C}[x]/(x^n-a)$ with $a\ne 0$ are
semisimple and isomorphic to $\mathbb{C}^{\mathbb{Z}_n}$ (explicitly, $x$ is mapped to $b\sum_k \zeta^{-k}\delta_k,$ where $b^n=a$), whereas the singular fiber $\mathbb{C}[x]/(x^n)$ is a
graded nilpotent algebra.
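The explicit map in the example can be verified numerically. A short Python sketch (n = 5 and a = 2 are arbitrary choices) checks that the stated image of $x$ satisfies $x^n = a$ pointwise in $\mathbb{C}^{\mathbb{Z}_n}$ and has pairwise distinct coordinates, so it generates the full algebra of functions:

```python
import cmath

n, a = 5, 2.0
zeta = cmath.exp(2j * cmath.pi / n)
b = a ** (1.0 / n)                       # a fixed n-th root of a

# image of x in C^{Z_n}: the function k -> b * zeta^(-k)
v = [b * zeta ** (-k) for k in range(n)]

# pointwise n-th power recovers the constant function a, so x^n = a holds
vn = [z ** n for z in v]
assert all(abs(z - a) < 1e-9 for z in vn)

# the n coordinates are pairwise distinct (a Vandermonde argument then
# shows the powers of v span all of C^{Z_n})
assert all(abs(v[i] - v[j]) > 1e-9 for i in range(n) for j in range(i))
```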
1 The case $n=2$ is by itself rather instructive. One only has the choice of specifying multiplication on the sign representation summand, and asking whether the map $sign \otimes sign \
to triv$ is the zero map or an isomorphism. – S. Carnahan♦ Sep 13 '10 at 9:32
I'm slightly ashamed of myself for not thinking of this when I first read the question. – Ben Webster♦ Sep 13 '10 at 22:50
"Misbehaved" differential equations
I have always been fascinated by the so-called taxicab geometry first considered by Hermann Minkowski. The metric that has to be used here is the L1 distance, which e.g. means that the length of the diagonal in a unit square is 2 and not √2 - and this holds true no matter how fine the mesh is! Basically, the solution doesn't converge when approximated discretely.
If you see the mesh as an analogy of solving a differential equation numerically, the difference between the true solution and the numerical outcome is obviously huge and won't be acceptable in most cases.
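The staircase phenomenon mentioned above is easy to reproduce. A short Python sketch measures an n-step staircase approximation of the unit-square diagonal with the ordinary Euclidean metric:

```python
import math

def staircase_length(n):
    """Euclidean length of an n-step staircase from (0,0) to (1,1)."""
    pts = []
    for i in range(n):
        pts.append((i / n, i / n))        # go right one step ...
        pts.append(((i + 1) / n, i / n))  # ... then up one step
    pts.append((1.0, 1.0))
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# the staircase length is 2 for every n (up to floating-point rounding),
# even though the staircase converges pointwise to the diagonal, whose
# Euclidean length is sqrt(2)
```

This is the precise sense in which "the solution doesn't converge": arc length is not continuous with respect to pointwise convergence of curves.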
My questions: Do you know of certain classes of differential equations where no closed form solutions exist AND that misbehave in a corresponding way? What are they called, and where do I find more information on those? How do you identify them (what characteristics do they have), and how do you tackle such misbehaved equations?
Addition: When I think about it: What could be even stranger is the case when a closed form solution exists but the differential equation can't be solved numerically in the abovementioned sense.
na.numerical-analysis ca.analysis-and-odes
I don't understand your first paragraph - as you say, the length of the diagonal curve of a square is 2 in the taxicab metric. So why would you say that "the solution doesn't converge". The sum of
distances of subdivisions of the diagonal certainly converges to 2 in the taxicab metric. – j.c. Oct 29 '09 at 18:33
increasingly finer subdivisions, I mean – j.c. Oct 29 '09 at 18:33
What I mean is that when you try to evaluate a length in a continous setting by a discrete approximation. It simply stays at 2 and doesn't move towards Sqrt(2). – vonjd Oct 29 '09 at 18:45
2 All that means is that length isn't a continuous function in whatever topology you're imposing on curves. I don't see what this has to do with differential equations. – Qiaochu Yuan Oct 29 '09 at
Probably the granular media are best physical example of systems which do not allows easily description by differential continuous models is disputable: en.wikipedia.org/wiki/Contact_dynamics In
1 multi body regime of contact dynamics, where You consider for example sand, there is serious problem to obtain phenomenological equations from microscopic one. Microscopic system is too
overcomplicated and we believe there should be simpler macroscopic PDE. But such systems dynamic is very complicated and we have troubles to model it on the macroscopic level. – kakaz Feb 26 '10
at 14:28
2 Answers
You'd probably be interested in reading about discrete differential geometry, as put forth by Bobenko et al. Here's one paper about when certain geometric quantities defined in the
discrete sense converge to the analogous quantities in the continuous sense. Again, I don't know what connection you're trying to make to differential equations (perhaps the results on convergence of finite element methods?) so perhaps you should spend some time trying to make that precise and then edit your question again.
I'm not a numerical analyst, and I didn't really understand your question, but one example that came to mind as I was reading it was oscillatory phenomena at discontinuities. If you model a
linear initial value problem with a jump discontinuity using a linear method with greater than first order accuracy, you will get oscillations near the jump, and this can be qualitatively
at odds with predicted behavior. It is a numerical analogue of the Gibbs phenomenon. There are ways of damping this using nonlinear methods like flux limiters.
As far as I can tell, you are asking for a fairly generic phenomenon. If you choose a random PDE with sufficient complexity, you should expect to encounter bad stability behavior and
numerical intractability. For well-behaved PDEs, there are ways of rigorously bounding error.
Mplus Discussion >> Fit indices when assessing latent var interactions
Chris Stride posted on Wednesday, October 19, 2005 - 7:45 am
I am currently trying to fit a SEM which contains interaction terms between continuous latent variables. I have upgraded from v2 to v3.13 and have hence successfully used the new XWITH command to compute my interaction term.
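For readers who have not used it, the XWITH specification follows the pattern in the Mplus User's Guide (latent variable interactions require TYPE = RANDOM with numerical integration); the factor and variable names below are hypothetical:

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  f1xf2 | f1 XWITH f2;
  y7 ON f1 f2 f1xf2;
```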
However, when I run the analyses, the only fit indices in the output are the AIC and BIC. No chi-squared stat, CFI, TLI, RMSEA.
From this I have 2 questions...
i) Is it possible to get these indices for this type of analyses?
ii) And a sort of related question - can a SEM which contains such an interaction effect between factors be considered as nested w.r.t.o. the model that contains just the main effects? And if so, is
there a way of testing for whether the model is significantly improved by fitting the interaction?
Linda K. Muthen posted on Wednesday, October 19, 2005 - 9:05 am
i) No, this is not possible. These fit statistics are available only for models where means, variances, and covariances are sufficient statistics for model estimation. They are printed whenever they
are available.
ii) I think these models would be considered nested. It is the case that -2 times the loglikelihood difference is distributed as chi-square and can be used to test the difference between two nested models.
Xiaojing Yan posted on Tuesday, March 20, 2007 - 8:40 am
Dear All,
I am currently doing a similar analysis and have two broad questions:
1. Is it possible to get the effect size of interaction between two continuous latent variables? It seems for RANDOM model the R-sq is not available. How can I solve this? What's the problem if I use
the R-sq change from means interaction?
2. May I have references on interaction between continuous latent variables?
Xiaojing Yan posted on Tuesday, March 20, 2007 - 8:46 am
Many thanks.
Bengt O. Muthen posted on Tuesday, March 20, 2007 - 4:54 pm
1. To me, effect size concerns a standardized mean difference between two conditions (treatment-control, male-female, etc). But it sounds like you use this term for R-square. R-square when predictors
involve products of latent variables is less straightforward because the variance and covariance related to a product of two variables is involved. R-square for the means would not be the same. I
would simply see if the influence of the product term is significant.
2. Our User's Guide refers to Klein-Moosbrugger in Psychometrika 2000
Xiaojing Yan posted on Wednesday, March 21, 2007 - 11:23 am
Many thanks.
Heiko Schimmelpfennig posted on Saturday, September 08, 2007 - 1:04 pm
Dear Bengt,
is it just as difficult to calculate R-square if the predictors involve a quadratic or cubic latent variable?
Thanks in advance.
Linda K. Muthen posted on Tuesday, September 18, 2007 - 4:48 am
Yes, this is more involved also.
Heiko Schimmelpfennig posted on Monday, October 29, 2007 - 5:42 am
Thank you for your reply.
Nevertheless, is it possible to calculate R-square by hand in these cases with the help of the Mplus-Output? Maybe do you have any references?
Linda K. Muthen posted on Monday, October 29, 2007 - 6:54 am
I don't know. You might want to check the Aiken and West book on regression analysis with observed variable interactions or contact Andreas Klein.
Steve posted on Friday, July 12, 2013 - 4:14 am
Dear Linda,
I have read the above discussion (and recognizing that it is a bit dated now), I am wondering if it is now possible somehow to estimate model fit statistics (chi-squared stat, CFI, TLI, RMSEA) when
including an interaction term in the model. I have run my analysis and this information is not included in the output. I would like to do a chi-square difference test after adding in the interaction
variable. Mplus is specifically referenced as being able to do this in a recent paper (Graves, Sarkis & Zhu, 2013, p. 86, Journal of Environmental Psychology), and I would like to replicate this
process if you could please explain how to get this output.
Many thanks.
Linda K. Muthen posted on Friday, July 12, 2013 - 10:18 am
They probably did difference testing using -2 times the loglikelihood difference which is distributed as chi-square. This is the same as the z-test of the interaction.
Steve posted on Friday, July 12, 2013 - 1:07 pm
Dear Linda,
Thank you very much for this information. Have a nice weekend.
Miguel Villodas posted on Wednesday, October 09, 2013 - 10:09 am
I am a bit confused about how to compare an SEM model with an interaction to a "nested" model without an interaction using the log likelihood value. Specifically, I only get an Ho loglikelihood value
in my output for the model with the interaction and do not get a scaling factor for my model without the interactions. Could you please clarify which of these I should compare?
Linda K. Muthen posted on Wednesday, October 09, 2013 - 10:16 am
You should get an H0 loglikelihood and scaling correction factor in both outputs. This is what you compare. It is not really necessary, however, because the z-test of the interaction gives the same result.
Miguel Villodas posted on Thursday, October 10, 2013 - 5:45 am
Thank you Linda.
Sabrina Thornton posted on Monday, November 25, 2013 - 8:30 am
Hi Linda,
Please can you elaborate on your post on October 09, 2013?
"You should get an H0 loglikelihood and scaling correction factor in both outputs. This is what you compare. It is not really necessary, however, because the z-test of the interaction gives the same result."
Does that mean that the significant level of the interaction term (Est./S.E.) would provide enough information for reporting the interaction effect? If not, can you let me know where I can find "the
z-test of the interaction" you referred to?
I am comparing two nested models using H0 log likelihood. I specified a model with only direct effects and the other with the direct effects and the interaction term:
Null model
H0 value -4103.707
free parameters 70
Interaction model (the interaction term is significant, p < 0.05)
H0 value -4103.948
free parameters 71
-2*(-4103.948-(-4103.707))=0.482, which is not significant at the 0.05 level for 1 df. Does that mean that the model is not significantly different from the null model, i.e. the fit is not
significantly worsened when adding the interaction term, and therefore the interaction model can be accepted?
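As a sanity check on the arithmetic above, here is a minimal stand-alone Python sketch of the loglikelihood difference test with 1 df (my own illustration, not Mplus output; the chi-square tail probability is built from the normal CDF via `math.erf`, and since the H0 values quoted in the post have the interaction model slightly lower, the sketch uses the absolute difference, as the post does):

```python
import math

def chi2_sf_1df(x):
    """Upper-tail probability of a chi-square with 1 df:
    P(X > x) = 2 * (1 - Phi(sqrt(x))), where Phi is the standard normal CDF."""
    phi = 0.5 * (1.0 + math.erf(math.sqrt(x) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

ll_null = -4103.707         # H0 value, model without the interaction
ll_interaction = -4103.948  # H0 value, model with the interaction
stat = 2.0 * abs(ll_interaction - ll_null)  # -2 * loglikelihood difference
p = chi2_sf_1df(stat)

print(round(stat, 3))  # 0.482
print(round(p, 2))     # roughly 0.49 -- nowhere near the 0.05 cutoff (3.84)
```

The 3.84 cutoff for 1 df can be recovered from the same function, since `chi2_sf_1df(3.841)` is essentially 0.05.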
Linda K. Muthen posted on Monday, November 25, 2013 - 9:22 am
The z-test is in the third column of the results. It is the parameter estimate divided by its standard error.
Sabrina Thornton posted on Monday, November 25, 2013 - 9:33 am
Hi Linda,
Thanks. So does it mean that the z-test for the interaction term would be sufficient when reporting the interaction effect? Is the log likelihood comparison between two nested models (without
vs. with the interaction term) necessary when the z-test is significant? If so, can you advise as to whether my reasoning in the previous post is appropriate? Thanks.
Linda K. Muthen posted on Monday, November 25, 2013 - 11:22 am
The loglikelihood test with one degree of freedom is the same as the z-test. You don't need both.
Posts about Severity on Error Statistics Philosophy
Fallacies of statistics & statistics journalism, and how to avoid them: Summary & Slides Day #8 (Phil 6334)
We spent the first half of Thursday’s seminar discussing the Fisher, Neyman, and E. Pearson “triad”[i]. So, since it’s Saturday night, join me in rereading for the nth time these three very short
articles. The key issues were: error of the second kind, behavioristic vs evidential interpretations, and Fisher’s mysterious fiducial intervals. Although we often hear exaggerated accounts of the
differences in the Fisherian vs Neyman-Pearson (NP) methodology, in fact, N-P were simply providing Fisher’s tests with a logical ground (even though other foundations for tests are still possible),
and Fisher welcomed this gladly. Notably, with the single null hypothesis, N-P showed that it was possible to have tests where the probability of rejecting the null when true exceeded the probability
of rejecting it when false. Hacking called such tests “worse than useless”, and N-P developed a theory of testing that avoids such problems. Statistical journalists who report on the alleged
“inconsistent hybrid” (a term popularized by Gigerenzer) should recognize the extent to which the apparent disagreements on method reflect professional squabbling between Fisher and Neyman after 1935
[A recent example is a Nature article by R. Nuzzo in ii below]. The two types of tests are best seen as asking different questions in different contexts. They both follow error-statistical reasoning.
We then turned to a severity evaluation of tests as a way to avoid classic fallacies and misinterpretations.
“Probability/Statistics Lecture Notes 5 for 3/20/14: Post-data severity evaluation” (Prof. Spanos)
[i] Fisher, Neyman, and E. Pearson.
[ii] In a recent Nature article by Regina Nuzzo, we hear that N-P statistics “was spearheaded in the late 1920s by Fisher’s bitter rivals”. Nonsense. It was Neyman and Pearson who came to Fisher’s
defense against the old guard. See for example Aris Spanos’ post here. According to Nuzzo, “Neyman called some of Fisher’s work mathematically ‘worse than useless’”. It never happened. Nor does she
reveal, if she is aware of it, the purely technical notion being referred to. Nuzzo’s article doesn’t give the source of the quote; I’m guessing it’s from Gigerenzer quoting Hacking, or Goodman (whom
she is clearly following and cites) quoting Gigerenzer quoting Hacking, but that’s a big jumble.
N-P did provide a theory of testing that could avoid the purely technical problem that can theoretically emerge in an account that does not consider alternatives or discrepancies from a null. As for
Fisher’s charge against an extreme behavioristic, acceptance sampling approach, there’s something to this, but as Neyman’s response shows, Fisher, in practice, was more inclined toward a dichotomous
“thumbs up or down” use of tests than Neyman. Recall Neyman’s “inferential” use of power in my last post. If Neyman really had altered the tests to such an extreme, it wouldn’t have required Barnard
to point it out to Fisher many years later. Yet suddenly, according to Fisher, we’re in the grips of Russian 5-year plans or U.S. robotic widget assembly lines! I’m not defending either side in these
fractious disputes, but alerting the reader to what’s behind a lot of writing on tests (see my anger management post). I can understand how Nuzzo’s remark could arise from a quote of a quote, doubly
out of context. But I think science writers on statistical controversies have an obligation to try to avoid being misled by whomever they’re listening to at the moment. There are really only a small
handful of howlers to take note of. It’s fine to sign on with one side, but not to state controversial points as beyond debate. I’ll have more to say about her article in a later post (and thanks to
the many of you who have sent it to me).
Gigerenzer, G. (1993). The superego, the ego, and the id in statistical reasoning. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp.
311-339). Hillsdale: Lawrence Erlbaum Associates.
Hacking, I. (1965). Logic of statistical inference. Cambridge: Cambridge University Press.
Nuzzo, R. (2014). “Scientific method: Statistical errors: P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume”. Nature, 12 February 2014.
Categories: phil/history of stat, Phil6334, science communication, Severity, significance tests, Statistics Tags: Nuzzo 35 Comments
New SEV calculator (guest app: Durvasula)
Karthik Durvasula, a blog follower[i], sent me a highly apt severity app that he created: https://karthikdurvasula.shinyapps.io/Severity_Calculator/
I have his permission to post it or use it for pedagogical purposes, so since it’s Saturday night, go ahead and have some fun with it. Durvasula had the great idea of using it to illustrate howlers.
Also, I would add, to discover them.
It follows many of the elements of the Excel Sev Program discussed recently, but it’s easier to use.* (I’ll add some notes about the particular claim (i.e., discrepancy) for which SEV is being
computed later on).
*If others want to tweak or improve it, he might pass on the source code (write to me on this).
[i] I might note that Durvasula was the winner of the January palindrome contest.
Categories: Severity, Statistics 12 Comments
Two Severities? (PhilSci and PhilStat)
The blog “It’s Chancy” (Corey Yanofsky) has a post today about “two severities” which warrants clarification. Two distinctions are being blurred: between formal and informal severity assessments,
and between a statistical philosophy (something Corey says he’s interested in) and its relevance to philosophy of science (which he isn’t). I call the latter an error statistical philosophy of
science. The former requires both formal, semi-formal and informal severity assessments. Here’s his post:
In the comments to my first post on severity, Professor Mayo noted some apparent and some actual misstatements of her views. To avert misunderstandings, she directed readers to two of her
articles, one of which opens by making this distinction:
“Error statistics refers to a standpoint regarding both (1) a general philosophy of science and the roles probability plays in inductive inference, and (2) a cluster of statistical tools, their
interpretation, and their justification.”
In Mayo’s writings I see two interrelated notions of severity corresponding to the two items listed in the quote: (1) an informal severity notion that Mayo uses when discussing philosophy of
science and specific scientific investigations, and (2) Mayo’s formalization of severity at the data analysis level.
One of my besetting flaws is a tendency to take a narrow conceptual focus to the detriment of the wider context. In the case of Severity, part one, I think I ended up making claims about severity
that were wrong. I was narrowly focused on severity in sense (2) — in fact, on one specific equation within (2) — but used a mish-mash of ideas and terminology drawn from all of my readings of
Mayo’s work. When read through a philosophy-of-science lens, the result is a distorted and misstated version of severity in sense (1) .
As a philosopher of science, I’m a rank amateur; I’m not equipped to add anything to the conversation about severity as a philosophy of science. My topic is statistics, not philosophy, and so I
want to warn readers against interpreting Severity, part one as a description of Mayo’s philosophy of science; it’s more of a wordy introduction to the formal definition of severity in sense (2).
[It’s Chancy, Jan 11, 2014]
A needed clarification may be found in a post of mine which begins:
Error statistics: (1) There is a “statistical philosophy” and a philosophy of science. (a) An error-statistical philosophy alludes to the methodological principles and foundations associated with
frequentist error-statistical methods. (b) An error-statistical philosophy of science, on the other hand, involves using the error-statistical methods, formally or informally, to deal with
problems of philosophy of science: to model scientific inference (actual or rational), to scrutinize principles of inference, and to address philosophical problems about evidence and inference
(the problem of induction, underdetermination, warranting evidence, theory testing, etc.).
I assume the interest here* is on the former, (a). I have stated it in numerous ways, but the basic position is that inductive inference—i.e., data-transcending inference—calls for methods of
controlling and evaluating error probabilities (even if only approximate). An inductive inference, in this conception, takes the form of inferring hypotheses or claims to the extent that they
have been well tested. It also requires reporting claims that have not passed severely, or have passed with low severity. In the “severe testing” philosophy of induction, the quantitative
assessment offered by error probabilities tells us not “how probable” but, rather, “how well probed” hypotheses are. The local canonical hypotheses of formal tests and estimation methods need
not be the ones we entertain post data; but they give us a place to start without having to go “the designer-clothes” route.
The post-data interpretations might be formal, semi-formal, or informal.
See also: Staley’s review of Error and Inference (Mayo and Spanos eds.)
A. Spanos lecture on “Frequentist Hypothesis Testing”
I attended a lecture by Aris Spanos to his graduate econometrics class here at Va Tech last week[i]. This course, which Spanos teaches every fall, gives a superb illumination of the disparate pieces
involved in statistical inference and modeling, and affords clear foundations for how they are linked together. His slides follow the intro section. Some examples with severity assessments are also included.
Frequentist Hypothesis Testing: A Coherent Approach
Aris Spanos
1 Inherent difficulties in learning statistical testing
Statistical testing is arguably the most important, but also the most difficult and confusing chapter of statistical inference for several reasons, including the following.
(i) The need to introduce numerous new notions, concepts and procedures before one can paint — even in broad brushes — a coherent picture of hypothesis testing.
(ii) The current textbook discussion of statistical testing is both highly confusing and confused. There are several sources of confusion.
• (a) Testing is conceptually one of the most sophisticated sub-fields of any scientific discipline.
• (b) Inadequate knowledge by textbook writers who often do not have the technical skills to read and understand the original sources, and have to rely on second hand accounts of previous
textbook writers that are often misleading or just outright erroneous. In most of these textbooks hypothesis testing is poorly explained as an idiot’s guide to combining
off-the-shelf formulae with statistical tables like the Normal, the Student’s t, the chi-square, etc., where the underlying statistical model that gives rise to the testing procedure is
hidden in the background.
• (c) The misleading portrayal of Neyman-Pearson testing as essentially decision-theoretic in nature, when in fact the latter has much greater affinity with Bayesian than with
frequentist inference.
• (d) A deliberate attempt to distort and cannibalize frequentist testing by certain Bayesian drumbeaters who revel in (unfairly) maligning frequentist inference in their attempts to motivate
their preferred view on statistical inference.
(iii) The discussion of frequentist testing is rather incomplete in so far as it has been beleaguered by serious foundational problems since the 1930s. As a result, different applied fields have
generated their own secondary literatures attempting to address these problems, but often making things much worse! Indeed, in some fields like psychology it has reached the stage where
one has to correct the ‘corrections’ of those chastising the initial correctors!
In an attempt to alleviate problem (i), the discussion that follows uses a sketchy historical development of frequentist testing. To ameliorate problem (ii), the discussion includes ‘red flag’
pointers (¥) designed to highlight important points that shed light on certain erroneous interpretations or misleading arguments. The discussion will pay special attention to (iii), addressing
some of the key foundational problems.
[i] It is based on Ch. 14 of Spanos (1999) Probability Theory and Statistical Inference. Cambridge[ii].
[ii] You can win a free copy of this 700+ page text by creating a simple palindrome! http://errorstatistics.com/palindrome/march-contest/
Categories: Bayesian/frequentist, Error Statistics, Severity, significance tests, Statistics Tags: Spanos 36 Comments
A critical look at “critical thinking”: deduction and induction
I’m cleaning away some cobwebs around my old course notes, as I return to teaching after 2 years off (since I began this blog). The change of technology alone over a mere 2 years (at least here at
Super Tech U) might be enough to earn me techno-dinosaur status: I knew “Blackboard” but now it’s “Scholar” of which I know zilch. The course I’m teaching is supposed to be my way of bringing “big
data” into introductory critical thinking in philosophy! No one can be free of the “sexed up term for statistics,” Nate Silver told us (here and here), and apparently all the college Deans & Provosts
have followed suit. Of course I’m (mostly) joking; and it was my choice.
Anyway, the course is a nostalgic trip back to critical thinking. Stepping back from the grown-up metalogic and advanced logic I usually teach, hop-skipping over baby logic, whizzing past toddler and
infant logic…. and arriving at something akin to what R.A. Fisher dubbed “the study of the embryology of knowledge” (1935, 39) (a kind of ‘fetal logic’?) which, in its very primitiveness, actually
demands a highly sophisticated analysis. In short, it’s turning out to be the same course I taught nearly a decade ago! (but with a new book and new twists). But my real point is that the hodge-podge
known as “critical thinking,” were it seriously considered, requires getting to grips with some very basic problems that we philosophers, with all our supposed conceptual capabilities, have left
unsolved. (I am alluding to Gandenberger‘s remark). I don’t even think philosophers are working on the problem (these days). (Are they?)
I refer, of course, to our inadequate understanding of how to relate deductive and inductive inference, assuming the latter to exist (which I do)—whether or not one chooses to call its study a
“logic”[i]. [That is, even if one agrees with the Popperians that the only logic is deductive logic, there may still be such a thing as a critical scrutiny of the approximate truth of premises,
without which no inference is ever detached even from a deductive argument. This is also required for Popperian corroboration or well-testedness.]
We (and our textbooks) muddle along with vague attempts to see inductive arguments as more or less parallel to deductive ones, only with probabilities someplace or other. I’m not saying I have easy
answers, I’m saying I need to invent a couple of new definitions in the next few days that can at least survive the course. Maybe readers can help.
I view ‘critical thinking’ as developing methods for critically evaluating the (approximate) truth or adequacy of the premises which may figure in deductive arguments. These methods would themselves
include both deductive and inductive or “ampliative” arguments. Deductive validity is a matter of form alone, and so philosophers are stuck on the idea that inductive logic would have a formal
rendering as well. But this simply is not the case. Typical attempts are arguments with premises that take overly simple forms:
If all (or most) J’s were observed to be K’s, then the next J will be a K, at least with a probability p.
To evaluate such a claim (essentially the rule of enumerative induction) requires context-dependent information (about the nature and selection of the K and J properties, their variability, the
“next” trial, and so on). Besides, most interesting ampliative inferences are to generalizations and causal claims, not mere predictions to the next J. The problem isn’t that an algorithm couldn’t
evaluate such claims, but that the evaluation requires context-dependent information as to how the ampliative leap can go wrong. Yet our most basic texts speak as if potentially warranted inductive
arguments are like potentially sound deductive arguments, more or less. But it’s not easy to get the “more or less” right, for any given example, while still managing to say anything systematic and
general. That is essentially the problem…..
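The context-dependence can be made vivid with a toy simulation (a sketch, not anything in the original post: the two populations, the sample size, and the "at least 60% of observed J's are K's" trigger are all made-up illustration). The very same enumerative rule is highly reliable in one population and barely better than a coin flip in another:

```python
import random

def rule_reliability(p_k, n_obs=20, trigger=0.6, trials=5000, seed=1):
    """Estimate how often the rule 'if most observed J's are K's, infer
    the next J is a K' yields a true conclusion when its premise holds,
    for a population where each J is a K with probability p_k."""
    rng = random.Random(seed)
    fired = correct = 0
    for _ in range(trials):
        sample = [rng.random() < p_k for _ in range(n_obs)]
        if sum(sample) / n_obs >= trigger:   # premise: most observed J's are K's
            fired += 1
            correct += rng.random() < p_k    # conclusion: the next J is a K
    return correct / fired if fired else float("nan")

print(rule_reliability(0.90))  # high reliability, near 0.9
print(rule_reliability(0.55))  # much lower, despite the identical premise form
```

The form of the argument is the same in both runs; only the empirical background (here, the population composition) differs, which is exactly why the requirement cannot be formal.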
The age-old definition of argument that we all learned from Irving Copi still serves: a group of statements, one of which (the conclusion) is claimed to follow from one or more others (the premises)
which are regarded as supplying evidence for the truth of that one. This is written:
P₁, P₂, … Pₙ / ∴ C.
In a deductively valid argument, if the premises are all true then, necessarily, the conclusion is true. To use the “⊨” (double turnstile) symbol:
P₁, P₂, … Pₙ ⊨ C.
Does this mean:
P₁, P₂, … Pₙ / ∴ necessarily C?
No, because we do not detach “necessarily C”, which would suggest C was a necessary claim (i.e., true in all possible worlds). “Necessarily” qualifies “⊨”, the very relationship between premises and conclusion.
It’s logically impossible to have all true premises and a false conclusion, on pain of logical contradiction.
We should see it (i.e., deductive validity) as qualifying the process of “inferring,” as opposed to the “inference” that is detached–the statement placed to the right of “⊨”. A valid argument is a
procedure of inferring that is 100% reliable, in the sense that if the premises are all true, then 100% of the time the conclusion is true.
Deductively Valid Argument: Three equivalent expressions:
(D-i) If the premises are all true, then necessarily, the conclusion is true.
(I.e., if the conclusion is false, then (necessarily) one of premises is false.)
(D-ii) It’s (logically) impossible for the premises to be true and the conclusion false.
(I.e., to have the conclusion false with the premises true leads to a logical contradiction, A & ~A.)
(D-iii) The argument maps true premises into a true conclusion with 100% reliability.
(I.e., if the premises are all true, then 100% of the time the conclusion is true).
(Deductively) Sound argument: deductively valid + premises are true/approximately true.
All of this is baby logic; but with so-called inductive arguments, terms are not so clear-cut. (“Embryonic logic” demands, at times, more sophistication than grown-up logic.) But maybe the above
points can help…
With an inductive argument, the conclusion goes beyond the premises. So it’s logically possible for all the premises to be true and the conclusion false.
Notice that if one had characterized deductive validity as
(a) P₁, P₂, … Pₙ ⊨ necessarily C,
then it would be an easy slide to seeing inductively inferring as:
(b) P₁, P₂, … Pₙ ⊨ probably C.
But (b) is wrongheaded, I say, for the same reason (a) is. Nevertheless, (b) (or something similar) is found in many texts. We (philosophers) should stop foisting ampliative inference into the
deductive mould. So, here I go trying out some decent parallels:
In all of the following, “true” will mean “true or approximately true”.
An inductive argument (to inference C) is strong or potentially severe only if any of the following (equivalent claims) hold [iii]
(I-i) If the conclusion is false, then very probably at least one of the premises is false.
(I-ii) It’s improbable that the premises are all true while the conclusion false.
(I-iii) The argument leads from true premises to a true conclusion with high reliability (i.e., if the premises are all true then (1-a)% of the time, the conclusion is true).
To get the probabilities to work, the premises and conclusion must refer to “generic” claims of this type, but this is the case for deductive arguments as well (else their truth values couldn’t
vary). However, the basis for the [I-i through I-iii] requirement, in any of its forms, will not be formal; it will demand a contingent or empirical ground. Even after these are grounded, the
approximate truth of the premises will be required. Otherwise, it’s only potentially severe. (This is parallel to viewing a valid deductive argument as potentially sound.)
We get the following additional parallel:
Deductively unsound argument:
Denial of D-(i), (D-ii), or (D-iii): it’s logically possible for all its premises to be true and the conclusion false.
One or more of its premises are false.
Inductively weak inference: insevere grounds for C
Denial of I-(i), (ii), or (iii): Premises would be fairly probable even if C is false.
Its premises are false (not true to a sufficient approximation)
There’s still some “winking” going on, and I’m sure I’ll have to tweak this. What do you think?
Fully aware of how the fuzziness surrounding inductive inference has non-trivially (adversely) influenced the entire research program in philosophy of induction, I’ll want to rethink some elements
from scratch, this time around….
So I’m back in my Thebian palace high atop the mountains in Blacksburg, Virginia. The move from looking out at the Empire state building to staring at endless mountain ranges is… calming.[iv]
[i] I do, following Peirce, but it’s an informal not a formal logic (using the terms strictly).
[ii] The double turnstile denotes the “semantic consequence” relationship; the single turnstile, the syntactic (deducibility) relationship. But some students are not so familiar with “turnstiles”.
[iii]I intend these to function equivalently.
[iv] Someone asked me “what’s the biggest difference I find in coming to the rural mountains from living in NYC?” I think the biggest contrast is the amount of space. Not just that I live in a large
palace, there’s the tremendous width of grocery aisles: 3 carts wide rather than 1.5 carts wide. I hate banging up against carts in NYC, but this feels like a major highway!
Copi, I. (1956). Introduction to Logic. New York: Macmillan.
Fisher, R.A. (1935). The Design of Experiments. Edinburgh: Oliver & Boyd.
Categories: critical thinking, Severity, Statistics 28 Comments
P-values as posterior odds?
I don’t know how to explain to this economist blogger that he is erroneously using p-values when he claims that “the odds are” (1 – p)/p that a null hypothesis is false. Maybe others want to jump in.
On significance and model validation (Lars Syll)
Let us suppose that we as educational reformers have a hypothesis that implementing a voucher system would raise the mean test results by 100 points (null hypothesis). Instead, when sampling,
it turns out it only raises them by 75 points, with a standard error (telling us how much the mean varies from one sample to another) of 20. Continue reading
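To make the numbers in the excerpt concrete, here is a minimal sketch using only the figures quoted (hypothesized effect 100, observed 75, standard error 20; the two-sided p-value is my choice for illustration). The "(1 − p)/p odds" quantity is computed only to flag the fallacy the post criticizes:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu0, xbar, se = 100.0, 75.0, 20.0   # figures from the excerpt
z = (xbar - mu0) / se               # -1.25
p = 2.0 * normal_cdf(-abs(z))       # two-sided p-value, about 0.21

print(round(z, 2), round(p, 2))     # -1.25 0.21

# The criticized move: reading (1 - p)/p as "the odds the null is false".
# A p-value is P(data at least this extreme; H0), not P(H0 | data),
# so this number carries no such interpretation.
bogus_odds = (1.0 - p) / p          # about 3.7 -- a number, not posterior odds
```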
Categories: fallacy of non-significance, Severity, Statistics 36 Comments
Severity Calculator
SEV calculator (with comparisons to p-values, power, CIs)
In the illustration in the Jan. 2 post,
H₀: μ < 0 vs H₁: μ > 0
and the standard deviation SD = 1, n = 25, so σx̄ = SD/√n = .2
Setting α to .025, the cut-off for rejection is .39 (can round to .4).
Let the observed mean x̄ = .2, a statistically insignificant result (p-value = .16)
SEV(μ < .2) = .5
SEV(μ < .3) = .7
SEV(μ < .4) = .84
SEV(μ < .5) = .93
SEV(μ < .6*) = .975
Some students asked about crunching some of the numbers, so here’s a rather rickety old SEV calculator*. It is limited, rather scruffy-looking (nothing like the pretty visuals others post) but it is
very useful. It also shows the Normal curves, how shaded areas change with changed hypothetical alternatives, and gives contrasts with confidence intervals. Continue reading
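The listed values can also be reproduced in a few lines of stand-alone Python (my own re-derivation, not the calculator's code: for this insignificant result, SEV(μ < μ₁) = P(X̄ > observed x̄; μ = μ₁), i.e. Φ((μ₁ − x̄)/σx̄)):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sev_upper(mu1, xbar=0.2, sigma_xbar=0.2):
    """Severity for the claim mu < mu1 given the insignificant result:
    SEV = P(X-bar > observed x-bar; mu = mu1)."""
    return normal_cdf((mu1 - xbar) / sigma_xbar)

for mu1 in (0.2, 0.3, 0.4, 0.5, 0.6):
    print(mu1, round(sev_upper(mu1), 3))
# 0.2 -> 0.5, 0.3 -> 0.691, 0.4 -> 0.841, 0.5 -> 0.933, 0.6 -> 0.977
```

(The last value comes out as Φ(2) ≈ .977 here, close to the .975 reported above.)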
Categories: Severity, statistical tests Leave a comment
Severity as a ‘Metastatistical’ Assessment
Some weeks ago I discovered an error* in the upper severity bounds for the one-sided Normal test in section 5 of: “Statistical Science Meets Philosophy of Science Part 2″ SS & POS 2. The published
article has been corrected. The error was in section 5.3, but I am blogging all of 5.
(* μ₀ was written where x₀ should have been!)
5. The Error-Statistical Philosophy
I recommend moving away, once and for all, from the idea that frequentists must ‘sign up’ for either Neyman and Pearson, or Fisherian paradigms. As a philosopher of statistics I am prepared to admit
to supplying the tools with an interpretation and an associated philosophy of inference. I am not concerned to prove this is what any of the founders ‘really meant’.
Fisherian simple-significance tests, with their single null hypothesis and at most an idea of a directional alternative (and a corresponding notion of the ‘sensitivity’ of a test), are commonly
distinguished from Neyman and Pearson tests, where the null and alternative exhaust the parameter space, and the corresponding notion of power is explicit. On the interpretation of tests that I am
proposing, these are just two of the various types of testing contexts appropriate for different questions of interest. My use of a distinct term, ‘error statistics’, frees us from the bogeymen and
bogeywomen often associated with ‘classical’ statistics, and it is to be hoped that that term is shelved. (Even ‘sampling theory’, technically correct, does not seem to represent the key point: the
sampling distribution matters in order to evaluate error probabilities, and thereby assess corroboration or severity associated with claims of interest.) Nor do I see that my comments turn on whether
one replaces frequencies with ‘propensities’ (whatever they are). Continue reading
An established probability theory for hair comparison? “is not — and never was”
Hypothesis H: “person S is the source of this hair sample,” if indicated by a DNA match, has passed a more severe test than if it were indicated merely by a visual analysis under a microscope. There
is a much smaller probability of an erroneous hair match using DNA testing than using the method of visual analysis used for decades by the FBI.
The Washington Post reported on its latest investigation into flawed statistics behind hair match testimony. “Thousands of criminal cases at the state and local level may have relied on exaggerated
testimony or false forensic evidence to convict defendants of murder, rape and other felonies”. Below is an excerpt of the Post article by Spencer S. Hsu.
I asked John Byrd, forensic anthropologist and follower of this blog, what he thought. It turns out that “hair comparisons do not have a well-supported weight of evidence calculation.” (Byrd). I put
Byrd’s note at the end of this post. Continue reading
Categories: Severity, Statistics 14 Comments
Mayo: (section 5) “StatSci and PhilSci: part 2″
Here is section 5 of my new paper: “Statistical Science Meets Philosophy of Science Part 2: Shallow versus Deep Explorations” SS & POS 2. Sections 1 and 2 are in my last post.*
Mayo: (first 2 sections) “StatSci and PhilSci: part 2″
Here are the first two sections of my new paper: “Statistical Science Meets Philosophy of Science Part 2: Shallow versus Deep Explorations” SS & POS 2. (Alternatively, go to the RMM page and scroll
down to the Sept 26, 2012 entry.)
1. Comedy Hour at the Bayesian Retreat[i]
Overheard at the comedy hour at the Bayesian retreat: Did you hear the one about the frequentist…
“who defended the reliability of his radiation reading, despite using a broken radiometer, on the grounds that most of the time he uses one that works, so on average he’s pretty reliable?”
“who claimed that observing ‘heads’ on a biased coin that lands heads with probability .05 is evidence of a statistically significant improvement over the standard treatment of diabetes, on the
grounds that such an event occurs with low probability (.05)?”
Such jests may work for an after-dinner laugh, but if it turns out that, despite being retreads of ‘straw-men’ fallacies, they form the basis of why some statisticians and philosophers reject
frequentist methods, then they are not such a laughing matter. But surely the drubbing of frequentist methods could not be based on a collection of howlers, could it? I invite the reader to stay and
find out.
Statistical Analysis of Network Data
There does not appear to be, at this point in time, any single software package containing pre-developed tools for all of the types of network analyses covered in the book. In writing the book, I
have drawn on various resources. Most network graph visualization was done using the graph drawing package Pajek, while most of the network-oriented computations (e.g., simulations, model fitting,
etc.) were done using the statistical software package R.
Below is a more detailed description of some of the software resources used in constructing the examples for this book, as well as certain other related resources that may be of interest.
Good network analysis packages allow for efficient input and manipulation of network graph data. At a minimum, they include tools for common graph-theoretic operations (e.g., shortest path
calculations, flow analysis, etc.) and basic descriptive analysis (e.g., degree distributions, centrality, partitioning, etc.). In addition, they may include tools for simulation of different
classes of random graphs and, in some cases, network graph modeling. Network visualization capabilities tend to vary with these packages, but for that purpose there are dedicated software tools
(see below).
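To make the "basic descriptive analysis" concrete, here is a minimal pure-Python sketch (deliberately not using any of the packages discussed here; the graph and function names are illustrative) of two such computations on an adjacency-list graph: breadth-first shortest-path lengths and the degree distribution.

```python
from collections import Counter, deque

def shortest_path_lengths(adj, source):
    """BFS from `source` over an adjacency-list graph; returns hop counts."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def degree_distribution(adj):
    """Counts how many vertices have each degree."""
    return Counter(len(neighbors) for neighbors in adj.values())

# A small undirected example: the path a-b-c-d plus the chord a-c.
adj = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
```

Packages such as igraph or statnet provide optimized versions of exactly these primitives, scaling to graphs with millions of vertices.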
□ R packages
R is an open-source software environment for statistical computing and graphics. There are a number of contributed packages relating to the statistical analysis of networks and network data.
I have used two of these with some regularity in the book.
☆ igraph is a package for generating, manipulating, analyzing, and visualizing network graphs, of sizes up to millions of vertices and edges. (This package is also implemented as a C
library and a Python extension module.)
☆ statnet is a suite of software packages for network analysis and modeling that allows for the estimation, evaluation, and simulation of network models, as well as network analysis and
visualization. The network models include exponential random graph models (ERGMs) and latent variable models. Model fitting and evaluation are driven by a core of appropriate MCMC algorithms.
□ Matlab toolboxes
Matlab is a commercial software environment for technical computing. Some members of the user community have developed toolboxes that allow one to conduct network analysis to varying extents.
The most comprehensive appears to be the MatlabBGL toolbox, which offers a combination of tools for graph-theoretic calculations, network analysis, network graph generation, and
visualization. See MatlabCentral for more information on this and other related packages.
□ Other
Many collections of network analysis tools may be found as part of larger special-purpose software packages. See, for example, the popular Bioconductor package in R or the Bioinformatics
Toolbox in Matlab. Conversely, certain tools are implemented in stand-alone form. For example, I used the Windows-based mfinder package for motif detection.
While most of the network analysis packages mentioned above offer the capability of visualizing network graphs, the task of visualization is challenging enough that dedicated software for this
purpose is typically required to obtain high-quality results. There is a relatively large body of such software available. The list below is meant to be illustrative and clearly not exhaustive.
□ Pajek
Pajek is a freely available (for non-commercial use) Windows-based package for the visualization of large networks. It also has a suite of network analysis tools, mainly oriented towards
social network analysis. There is a non-trivial time investment up front necessary to acclimate oneself to the unique input format and the GUI interface. However, the software is capable of
producing high-quality network visualizations allowing for a great deal of fine tuning, and was used to produce the majority of the visualizations in this book.
□ Graphviz
Graphviz is an open-source software for graph visualization, developed by researchers at AT&T. Like Pajek, it allows for a variety of high-quality layouts. Graphviz has been used by other
packages as the muscle behind their own graph visualization capabilities. For example, the Bioconductor package in R mentioned above has a visualization sub-package called Rgraphviz built on
top of the basic Graphviz package.
□ Other
For some tasks, special-purpose drawing software may be useful. For example, I used the software yEd for drawing most of the tree diagrams in the book. Alternatively, one may have certain
platform requirements or programming language requirements/preferences. Appendix A of the book Drawing Graphs: Methods and Models, by Kaufmann and Wagner (Eds), provides a useful list of
additional resources for graph drawing.
Solving Sudoku : Forcing Chains
This technique (thankfully!) is quite easy to understand, but can take quite some time to work out within a puzzle. This is a technique where having a separate copy or notepad overlay really helps,
because you'll be making lots of notes!
A simple forcing chain is when you have lots of cells with just 2 candidates - and whichever value you would choose for one cell forces another cell to be just one of its two values. (It'll be
clearer with an example!)
The First Choice
Take a look at this puzzle, which shows an example of a forcing chain.
It doesn't matter which value - 1 or 2 - was in the top cell (at coordinate C3,R1), they would force the value 5 into the other cell (C1,R4).
Now before starting, it's worthwhile mentioning that some of these chains can be short, and sometimes they can be quite long! This example has one of each.
First of all, imagine if the top cell is a 1. This in turn would force the {14} a few cells below it to be a 4, and so on. Can you follow the chain?
The Second Choice
Now start again, but instead of a 1, this time make the top cell a 2.
Again crossing out the values which get ruled out - you'll see it's quite a long chain!
Here's what it looks like with the arrows...
The two chains have different paths starting with each of the two values for the first cell, but either one means there'll be a 5 in the second cell.
As soon as you find a situation like this, even though you don't know what the first cell will be, you definitely know the value in the second cell, so you can write it in!
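The two-choice test lends itself to a small program. The Python toy below (the cell names, candidate sets, and "units" are illustrative, not a full Sudoku grid) propagates each of the two candidates of a starting cell through naked-single eliminations and reports any other cell forced to the same value by both choices.

```python
def propagate(cands, units, cell, value):
    """Assign `value` to `cell`, eliminate it from peers sharing a unit,
    and keep propagating any cell thereby reduced to one candidate.
    Returns the resulting candidate map, or None on a contradiction."""
    cands = {c: set(v) for c, v in cands.items()}
    stack = [(cell, value)]
    while stack:
        c, v = stack.pop()
        if v not in cands[c]:
            return None               # contradiction
        cands[c] = {v}
        for unit in units:
            if c in unit:
                for peer in unit:
                    if peer != c and v in cands[peer]:
                        cands[peer].discard(v)
                        if not cands[peer]:
                            return None
                        if len(cands[peer]) == 1:
                            stack.append((peer, next(iter(cands[peer]))))
    return cands

def forcing_chain(cands, units, start):
    """Try both candidates of a two-candidate cell; report cells that end
    up with the same single value either way."""
    a, b = sorted(cands[start])
    ra = propagate(cands, units, start, a)
    rb = propagate(cands, units, start, b)
    forced = {}
    if ra and rb:
        for c in cands:
            if c != start and len(ra[c]) == 1 and ra[c] == rb[c]:
                forced[c] = next(iter(ra[c]))
    return forced

cands = {'A': {1, 2}, 'B': {2, 3}, 'C': {1, 3}, 'D': {3, 5}}
units = [{'A', 'B'}, {'A', 'C'}, {'B', 'D'}, {'C', 'D'}]
```

Here trying A=1 forces C=3 and then D=5, while A=2 forces B=3 and then D=5, so D is reported as forced to 5 - a miniature version of the chains in the example above.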
Is this the same as guessing?
Not quite - what you're doing is simultaneously looking at the implications of either choice, and seeing if any other cells will turn out the same whichever your choice would be. If you were to have
just guessed one, and worked from there, you would actually fill in the same result for the second cell, but depending on whether you guessed correctly or not, you might have made a whole batch of
mistakes on the way.
Can this one get harder too?
What makes this method hard is that you might have to follow chains a long way, and you will have a lot of testing to do. Longer chains don't make it conceptually any harder, but they do make it more
likely that you'll make mistakes along the way.
Sticking to working with just the pairs generally keeps it fairly simple, but there's nothing stopping you considering the effects of triples or other techniques as you go!
Tip: When using an overlay (tracing paper or computer overlay), here's a method which makes finding the chains a bit easier.
Pick your starting cell, and make a small u shape (a little smiley curve) underneath the first pencilmark. From there, look around, but instead of crossing out pencilmarks (otherwise it gets too
messy!), when you find a value that it forces somewhere else, put the same u shape underneath the forced value. Ignore any that the first choice eliminated - you might need them later!
Carry on doing this until you can't make any more "u" forces.
Now choose the second value in your original cell - and this time put a little "n" symbol (a downturned mouth) above the second pencilmark. Like before, look at the implications and forces that this
makes, continuing on until you can't find any more.
If there's a forcing chain, at some point you'll find a pencilmark with both a "u" and "n" on the same mark (in which case they'll almost join up!). When you see this it's a sure sign that whichever
candidate you picked for the first cell, you've found the right value for the second cell. Fill in that value, because it means you don't need to look any further!
Some people use colours to make it easier too - but it isn't essential.
NOTE! This example has an error!
A number of people have been in touch to say that they've seen an error in the example above, and most of them are right! Since you'd have to understand the concept of forcing chains to see where the
error lies, this example is still somewhat valid as a learning example, even if it isn't correct. If I get time, I'll put together a different (and hopefully correct) example!
Thanks for spotting this error, and wanting to let us know, but now that we know, please don't get in touch to tell us about the error - the pages of weird numbers and brackets they get by email only
scare the customer support team! Thanks!
Wolfram Demonstrations Project
The Russian Option: Reduced Regret
A Russian option is a perpetual American-type option that pays the owner upon exercise the historical maximum price of the stock (this is supposed to "reduce the regret" of not exercising the option
at the right time). The option was invented by L. Shepp and A. N. Shiryaev, who named it "the Russian option". Although it is not traded in practice, several remarkable formulas for the option value,
optimal exercise time, and the expected exercise time (under the assumption that the stock follows the Black–Scholes model) have been found and have had an important impact on probability theory.
In this Demonstration the orange line shows the movement of a single path of stock prices. You can choose whether the actual or discounted stock price should be shown. The initial stock price is
fixed at 100. The blue dot on the vertical axis is the strike price, which you can vary. The blue line shows the discounted option payoff: it splits into two branches when the option is
exercised. One branch, the horizontal line, shows the discounted payoff after exercise; the other shows the value of the discounted payoff should you choose not to exercise the option. The horizontal black
line shows the option value computed with the Shepp–Shiryaev formula and the blue dot on the horizontal axis shows the expected exercise time found by Graversen and Peskir.
Mouseover the lines in the Demonstration to see what they represent.
Shepp and Shiryaev defined the Russian option in pure mathematical terms. Consider a share whose price X_t follows a geometric Brownian motion with drift μ and volatility σ (they also consider the case
where the share price follows a Brownian motion with drift, i.e., the Bachelier model). Let S_t = max(s, max_{0≤u≤t} X_u), where s is the strike, and let S_τ be the payoff of the option at exercise time τ. The task is to maximize the expected value of the discounted
payoff E[e^(−λτ) S_τ] over all stopping times τ. In financial terms the problem can be expressed as follows. The owner of a Russian option chooses an exercise date, represented by a stopping time τ, and then
receives either s or the maximum stock price achieved up to this exercise time, whichever is larger, discounted by the factor e^(−λτ). Shepp and Shiryaev showed that there is a number c* that depends only on μ, σ, and λ,
such that the optimal strategy is to exercise the option at the first time t such that S_t ≥ c*·X_t (and the payoff is then e^(−λt) S_t). This they define as the fair value of the option. It is crucial that the discount rate λ is large enough relative to the drift μ,
otherwise it is never optimal to exercise the option. In terms of arbitrage theory one can assume that the stock pays a continuous dividend and take λ equal to the riskless interest rate. In this case, the Shepp and Shiryaev "fair
price" is also the "arbitrage price" of the option.
The original proof of Shepp and Shiryaev was based basically on "guessing" the correct formula for the option price by being guided by Kolmogorov's principle of smooth fit (which they say was a part
of the reason for the name of the option) and then proving that the conjectured answer was the right one. Subsequently several other proofs have been given, for example in [2], in which a formula
for the expected waiting time for optimal exercise is also given. This value is represented by the blue dot on the (horizontal) time axis.
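Shepp and Shiryaev's optimal rule is of threshold type: exercise the first time the running maximum reaches a fixed multiple of the current price. The effect of such a rule can be illustrated by simulation. The standard-library Python sketch below (the threshold c is an arbitrary illustrative value, not the optimal one from the Shepp–Shiryaev formula, and the strike is taken equal to the initial price) runs a discretized geometric Brownian motion and returns the discounted payoff at exercise.

```python
import math
import random

def simulate_russian_payoff(s0, mu, sigma, lam, c, dt=1/252, t_max=30.0, rng=None):
    """One GBM path X_t (drift mu, volatility sigma), tracking the running
    maximum S_t. Exercise the first time S_t / X_t >= c (a threshold rule
    of Shepp-Shiryaev type with an arbitrary threshold c). Returns the
    discounted payoff exp(-lam * tau) * S_tau, truncating at t_max."""
    rng = rng or random.Random()
    x = s = s0
    t = 0.0
    while t < t_max:
        if s / x >= c:
            break
        z = rng.gauss(0.0, 1.0)
        x *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        s = max(s, x)
        t += dt
    return math.exp(-lam * t) * s

def estimate_value(n=2000, seed=1):
    """Monte Carlo average of the discounted payoff over n paths."""
    rng = random.Random(seed)
    draws = [simulate_russian_payoff(100.0, 0.02, 0.3, 0.08, 1.3, rng=rng)
             for _ in range(n)]
    return sum(draws) / n
```

With c = 1 the condition holds immediately, so the option is exercised at once and the payoff is exactly the initial price; larger thresholds trade a bigger maximum against a heavier discount.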
In [3] the problem of valuing a finite horizon Russian option was solved. As in the case of standard American options there are no explicit formulas and the option value is given as the solution of
a nonlinear integral equation that has to be solved by numerical methods.
[1] L. Shepp and A. N. Shiryaev, "The Russian Option: Reduced Regret," The Annals of Applied Probability, 1993, pp. 631–640.
[2] S. E. Graversen and G. Peskir, "On the Russian Option: The Expected Waiting Time," Theory Probab. Appl. (3), 1997, pp. 564–575.
[3] G. Peskir, "The Russian Option: Finite Horizon," Finance Stochast., 9, 2005, pp. 251–267.
[Tutor] Floating Confusion
Luke Paireepinart rabidpoobear at gmail.com
Thu Aug 23 08:45:01 CEST 2007
Chris Calloway wrote:
> wormwood_3 wrote:
>> The second case is, of course, what is throwing me. By having a decimal point, "1.1" is a float type, and apparently it cannot be represented by binary floating point numbers accurately. I must admit that I do not understand why this is the case. Would anyone be able to enlighten me?
> This is fairly standard computer science, not just Python. If you take
> freshman Fortran for scientists, you will eat, sleep, and breath this stuff.
> Is one tenth any power of 2?
> Like how 2**-1 is 0.5? Or how 2**-2 is 0.25? Or 2**-3 is 0.125? Or 2**-4
> is 0.0625. Oops, we went right by 0.1.
> Any binary representation of one tenth will have a round-off error in
> the mantissa.
I've always wondered:
There are numbers in Decimal that can't be represented accurately in
Binary without infinite precision.
But it doesn't go the other way. Rational base-2 numbers are rational
base-10 numbers. I'm supposing this is because base-10 is a higher base
than base-2.
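The asymmetry has a precise explanation: a reduced fraction p/q has a terminating expansion in base b exactly when every prime factor of q divides b. Since 2 divides 10, every finite binary fraction terminates in decimal; since 5 does not divide 2, 1/10 cannot terminate in binary. (It also means a number's rationality does not depend on the base: an irrational number has a non-terminating, non-repeating expansion in every integer base.) A small Python check:

```python
from fractions import Fraction
from math import gcd

def terminates_in_base(x, base):
    """True iff the reduced fraction x has a terminating expansion in `base`,
    i.e. every prime factor of x's denominator also divides `base`."""
    q = Fraction(x).denominator
    g = gcd(q, base)
    while g > 1:          # strip from q all prime factors shared with base
        q //= g
        g = gcd(q, base)
    return q == 1
```

Passing a Python float to Fraction exposes the exact binary value actually stored: Fraction(0.1) is 3602879701896397/2**55, the nearest representable double to one tenth.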
So I've wondered that, even though Pi is an irrational number in
base-10, is it possible that it's a simple (rational) number in a higher
I mean, I suppose a base of Pi would result in Pi being 1, but what
about integer bases?
Is there some kind of theory that relates to this and why there's not a
higher base with an easy representation of Pi?
I'd be interested to read about this but I'm not sure what to look for.
More information about the Tutor mailing list
Silverdale, WA ACT Tutor
Find a Silverdale, WA ACT Tutor
...If you are not sure about something or can't solve a math problem, I can simplify it so that you can understand it well and solve the problem all by yourself. I enjoy tutoring math. I've
helped many elementary school students with their math, trying to make it fun and easy to learn.
13 Subjects: including ACT Math, geometry, Chinese, algebra 1
...I have tutored high school level Algebra II for both Public and Private School courses. I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework.
As an aspiring physician I spent great amounts of time thoroughly studying Biology during my undergradua...
27 Subjects: including ACT Math, chemistry, reading, writing
...At the end of it there was a multiplication problem. I said 'take the first number. Draw that many circles.'
17 Subjects: including ACT Math, calculus, geometry, statistics
...Regardless of the subject, I would say I am effective at recognizing patterns. I love sharing any shortcuts or tips that I discover.I have taken 2 quarters of Discrete Structures (Mathematics)
at University of Washington, Tacoma. I earned a 4.0 each quarter.
16 Subjects: including ACT Math, chemistry, calculus, algebra 2
...Not only did I help them to understand and apply concepts, I also guided them through their various projects and assisted them in writing statistical research papers. Teaching SAT is a big
chunk of my 15 years of tutoring and teaching experience. I have taught SAT math classes while working with College Connections Hawaii, where I received formal training for teaching the
20 Subjects: including ACT Math, reading, calculus, geometry
solving a system of non-linear equations
I have a little mathematical question here.
Suppose we have two matrices A and B (both of the dimension N x N). Now we want to check whether or not another matrix P (permutation matrix) exists, so that
P*A*(P^T) = B
Does anyone have a good algorithm (with detailed explanation)?
I checked Wikipedia, but I don't quite understand which algorithm is
appropriate here... First time I'm dealing with such a problem.
And I can't use any lib (like GSL), because I'd like to use the algorithm
with parallel processing (A and B might get large: 10000 x 10000 and more).
Any advice for a good algorithm would be great.
Thanks in advance
Thanks hamsterman.
Maybe I should check the English version of Wikipedia more often. Much more informative
than the German one...
And thanks for the site. Next math question will go there ;-)
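For the record: P*A*(P^T) = B with a permutation matrix P says exactly that B is A with rows and columns relabelled by the same permutation, which is the (weighted, directed) graph isomorphism problem. No polynomial-time algorithm is known in general, and practical solvers prune with cheap invariants before searching. A brute-force sketch, written here in Python for brevity (the function name and interface are mine; the same idea translates directly to C++ with std::next_permutation):

```python
from itertools import permutations

def find_permutation(A, B):
    """Search for a permutation pi with A[pi[i]][pi[j]] == B[i][j] for all
    i, j -- equivalently P*A*P^T == B for the permutation matrix P with
    P[i][pi[i]] = 1. Brute force: only viable for small N."""
    n = len(A)
    # Cheap necessary condition: a simultaneous row/column permutation
    # preserves the multiset of row sums.
    if sorted(map(sum, A)) != sorted(map(sum, B)):
        return None
    for pi in permutations(range(n)):
        if all(A[pi[i]][pi[j]] == B[i][j] for i in range(n) for j in range(n)):
            return pi
    return None

A = [[0, 1],
     [2, 3]]
B = [[3, 2],
     [1, 0]]   # A with rows and columns relabelled by pi = (1, 0)
```

For 10000 x 10000 matrices this is hopeless; specialized isomorphism tools (e.g. the kinds of backtracking-with-invariants algorithms used by graph isomorphism libraries) are the practical route.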
Topic archived. No new replies allowed.
Math Help
January 14th 2011, 07:40 AM #1
Jan 2011
error correction
I'm new to this forum and I hope you'll be able to help me.
I have a few exercises about error detection/correction, can you please help me to solve them:
1. The polynomial f(x)=x^6+3x^5+x^4+x^3+2x^2+2x+1 generates a cyclic code of length 15 over F_4. Find a systematic matrix of that code.
2. Find the generator polynomial and the minimal distances of the cyclic codes of length 6.
3. Let C be a linear code over F_3 whose generator matrix is
Decode the words 1212, 2102, 1111.
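For exercise 1, the standard route is: take the k = n − deg(g) cyclic shifts of the generator polynomial as rows of a generator matrix, then row-reduce to systematic form [I_k | A]. The sketch below does this over a prime field F_p (the exercise's F_4 is not a prime field, so its arithmetic needs a proper GF(4) implementation, but the shift-and-reduce recipe is identical); it is illustrated on the binary [7,4] Hamming code with g(x) = 1 + x + x^3.

```python
def cyclic_generator_matrix(g, n, p):
    """Rows are the cyclic shifts x^i * g(x) for i = 0..k-1, where
    k = n - deg(g); g lists coefficients constant-term first, taken mod p."""
    r = len(g) - 1                     # deg(g)
    k = n - r
    return [[0] * i + [c % p for c in g] + [0] * (n - r - 1 - i)
            for i in range(k)]

def systematic_form(G, p):
    """Gauss-Jordan elimination over F_p bringing G to [I_k | A] (assumes
    the leading k columns are invertible, as they are here)."""
    G = [row[:] for row in G]
    k = len(G)
    for col in range(k):
        piv = next(r for r in range(col, k) if G[r][col] % p)
        G[col], G[piv] = G[piv], G[col]
        inv = pow(G[col][col], -1, p)
        G[col] = [(x * inv) % p for x in G[col]]
        for r in range(k):
            if r != col and G[r][col]:
                f = G[r][col]
                G[r] = [(a - f * b) % p for a, b in zip(G[r], G[col])]
    return G

# Binary [7,4] Hamming code: g(x) = 1 + x + x^3.
G = cyclic_generator_matrix([1, 1, 0, 1], n=7, p=2)
S = systematic_form(G, p=2)
```

For exercise 3, the same row-reduced form also gives a parity-check matrix, from which syndrome decoding of the given words over F_3 proceeds in the usual way.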
Needles and hay in haystacks: Empirical Bayes estimates of possibly sparse sequences
- STATIST. SCI , 2004
- , 2005
Cited by 17 (1 self)
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of true
signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our
estimators are uniformly adaptive to the minimum risk of separable estimators and the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly
superefficient in convergence rates in all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our
estimators dominate threshold and James–Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both white noise with
drift and nonparametric regression are considered.
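The separable threshold rules these estimators refine can be illustrated with the classical universal soft threshold (a fixed, non-empirical-Bayes benchmark, shown here only to make the sparse-means setting concrete):

```python
import math
import random

def soft_threshold(y, lam):
    """Coordinatewise soft-threshold rule: shrink toward zero by lam,
    setting small coordinates exactly to zero."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in y]

# Sparse means (10 "needles" of height 5 in n = 1000 coordinates) plus
# N(0,1) noise, thresholded at the universal level sqrt(2 log n).
rng = random.Random(0)
n = 1000
theta = [5.0 if i < 10 else 0.0 for i in range(n)]
y = [t + rng.gauss(0.0, 1.0) for t in theta]
est = soft_threshold(y, math.sqrt(2 * math.log(n)))
```

The empirical Bayes rules studied in the papers above behave like data-chosen versions of such thresholds, adapting the shrinkage to the unknown sparsity.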
, 2003
"... wavelets ..."
, 2004
Cited by 10 (5 self)
This article develops three methods for the multiscale analysis of irregularly spaced data based on the recently developed lifting paradigm by "lifting one coefficient at a time". The concept of
scale still exists within these transforms but as a continuous quantity rather than dyadic levels. We develop empirical Bayes methods that take account of the continuous nature of the scale. We apply
our new methods to the problems of estimation of krill density and rail arrival delays. We demonstrate good performance in a simulation study on new two-dimensional analogues of the well-known
Blocks, Bumps, Doppler and Heavisine and a new piecewise linear function called maartenfunc
, 2006
Cited by 6 (2 self)
We propose a generic bivariate hard thresholding estimator of the discrete wavelet coefficients of a function contaminated with i.i.d. Gaussian noise. We demonstrate its good risk properties in a
motivating example, and derive upper bounds for its mean-square error. Motivated by the clustering of large wavelet coefficients in real-life signals, we propose two wavelet denoising algorithms,
both of which use specific instances of our bivariate estimator. The BABTE algorithm uses basis averaging, and the BITUP algorithm uses the coupling of “parents” and “children” in the wavelet
coefficient tree. We prove the L2 near-optimality of both algorithms over the usual range of Besov spaces, and demonstrate their excellent finite-sample performance. Finally, we propose a robust and
effective technique for choosing the parameters of BITUP in a data-driven way.
- In preparation Okabe , 2004
Cited by 6 (5 self)
Many wavelet shrinkage methods assume that the data are observed on an equally spaced grid of length of the form 2 J for some J. These methods require serious modification or preprocessed data to
cope with irregularly spaced data. The lifting scheme is a recent mathematical innovation that obtains a multiscale analysis for irregularly spaced data. A key lifting component is the “predict ”
step where a prediction of a data point is made. The residual from the prediction is stored and can be thought of as a wavelet coefficient. This article exploits the flexibility of lifting by
adaptively choosing the kind of prediction according to a criterion. In this way the smoothness of the underlying ‘wavelet ’ can be adapted to the local properties of the function. Multiple
observations at a point can readily be handled by lifting through a suitable choice of prediction. We adapt existing shrinkage rules to work with our adaptive lifting methods. We use simulation to
demonstrate the improved sparsity of our techniques and improved regression performance when compared to non-wavelet methods suitable for irregular data. We also exhibit the benefits of our adaptive
lifting on the real inductance plethysmography and motorcycle data.
- Mathematical Methods of Statistics , 2006
Cited by 4 (3 self)
In this paper we compare wavelet Bayesian rules taking into account the sparsity of the signal with priors which are combinations of a Dirac mass with a standard distribution properly normalized. To
perform these comparisons, we take the maxiset point of view: i. e. we consider the set of functions which are well estimated (at a prescribed rate) by each procedure. We especially consider the
standard cases of Gaussian and heavy-tailed priors. We show that if heavy-tailed priors have extremely good maxiset behavior compared to traditional Gaussian priors, considering large variance
Gaussian priors (LVGP) leads to equally successful maxiset behavior. Moreover, these LVGP can be constructed in an adaptive way. We also show, using comparative simulations results that large
variance Gaussian priors have very good numerical performances, confirming the maxiset prediction, and providing the advantage of a high simplicity from the computational point of view. 1
, 908
Cited by 4 (0 self)
We propose a general maximum likelihood empirical Bayes (GM-LEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions
on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic
estimating function on individual observations, provided that the risk is of greater order than (log n) 5 /n. We also prove that the GMLEB is uniformly approximately minimax in regular and weak ℓp
balls when the order of the length-normalized norm of the unknown means is between (log n) κ1 /n
, 2008
Classical nondecimated wavelet transforms are attractive for many applications. When the data comes from complex or irregular designs, the use of second generation wavelets in nonparametric
regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second generation wavelet transform is not obvious. In this paper we propose a new
‘nondecimated ’ lifting transform, based on the lifting algorithm which removes one coefficient at a time, and explore its behaviour. Our approach also allows for embedding adaptivity in the
transform, i.e. wavelet functions can be constructed such that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an
(averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that our proposed method has higher performance than competing
techniques able to work on irregular data. Our construction also opens avenues for generating a ‘best ’ representation, which we shall explore.
, 2006
"... Multiscale methods for data on graphs and irregular multidimensional situations. ..."
diagnostic assessment
Mathematics units often present the greatest difficulty for enabling pathway ('bridging course') students for a wide range of reasons. Many of these students have limited prior mathematics
content knowledge, and report, like much of the general population, feeling phobic towards mathematics and lacking confidence. At the University of Notre Dame Australia (UNDA), Fremantle
campus, a student's final mark in their mathematics enabling unit often fails to meet the institutionally required benchmarks to move into undergraduate study. As staff experimented
with a diagnostic mathematics assessment for teaching and learning purposes, a new proposition emerged: Could diagnostic assessment be used to help students make an informed decision about their
choice of enabling program, and select a less intense course, without mathematics in the first semester? The benefits in having a student in the 'right course' should improve retention, and
increase the number of students successfully transitioning to undergraduate study. UNDA is piloting the use of the diagnostic mathematics assessment to help students select the specific enabling
program (either the "Tertiary Enabling Program" (TEP) or "Foundation Year" program) best suited to their background and experience.
The Tertiary Enabling Program (TEP) is a one-semester bridging course offered at the University of Notre Dame Australia (UNDA), Fremantle campus. Two separate enabling programs, the TEP, and the
Foundation Year program, are run by the University's Fremantle campus Academic Enabling and Support Centre (AESC). Enabling program units are coded "EP" (enabling program) and are common to both TEP
and Foundation Year courses. The TEP has been operational on the Fremantle campus since 2005 and was initially designed for students who did not receive an ATAR (Australian Tertiary Admission
Ranking) score that met the University's minimum entry requirements. The TEP also provides places for students who are exiting Year 12 without taking subjects which lead to an ATAR score. Although
most mature age students enter through other mechanisms, some do use the TEP as a pathway to university studies. More recently, entrants to the TEP have included students with a TAFE level
qualification, at either Certificate III or IV. As part of UNDA's enrolment processes, every applicant to the University is individually interviewed, by a member of academic staff, providing an
opportunity to discuss the courses, and the options available to incoming students. The Foundation Year program, which commenced in 2011, follows the same entry requirements and processes, but is less intense in the first semester, and takes longer (two semesters) to complete.
The TEP program attracts approximately 200 entrants in first semester, and approximately 90 in a smaller mid-year intake. Many entrants come from backgrounds of disadvantage (e.g. low socio-economic background, first in family to university, educational disadvantage within schooling) and are accordingly recognised as a skewed entry group, requiring intensive academic input and strong pastoral
support (Levy & Murray, 2005). EP001 Learning Skills, the first coursework unit, is delivered as a week-long intensive course prior to all other units commencing, and orientates students to university
level study and expectations, and negotiates understandings around independent adult learning principles (Knowles, 1982). Staff and students alike understand that an enabling course is a bridge to
future undergraduate study; with few exceptions, it is not a course completed 'for its own sake'. However, some entrants express a lack of clarity about their vision for the future; the TEP may be
viewed by them as experiential. Some entrants acknowledge that their parents have 'strongly encouraged' them to undertake this course, as they have not gained undergraduate entry, and are not in full
time employment.
All TEP students complete a total of seven units, including the following six core units.
• Learning skills
• Literacy competency
• Academic writing
• Research skills and information literacy
• Mathematical competency
• Information technology for academic purposes.
There are two specific streams of the Tertiary Enabling Program, one for Nursing and Life Science courses, and the other for Education, Humanities and Business courses; the seventh unit (in addition to the core units listed above) is stream specific. Seven units is not a usual university load, but these units are not pitched at undergraduate level content, and assessments are parallel to, but not equivalent to, undergraduate expectations. Students are not expected to be able to study autonomously, and learning is appropriately scaffolded (Rosenshine & Meister, 1992).
Students at the University of Notre Dame Fremantle campus who complete the TEP are required to achieve a minimum of 65% in every unit of study in order to be considered for approval for undergraduate
course admission. The 'minimum 65%' was determined on the basis of tracking TEP students over a period of five years, and ascertaining the point at which these students would be expected to make a
smooth transition to successful undergraduate study. Such success was measured in terms of academic progress, via grade point average (GPA), and retention data. Since 2005, the percentage of students
achieving the required benchmark (65% in all units) has varied considerably, from 25% of students in semester one, 2009, to 65% in semester two, 2010. Students who complete the course are regularly
offered places at other universities, which accept a pass (50%) in all units, rather than the institutionally required benchmark (65%). It is impossible to track the success of students moving to other
institutions to complete undergraduate study, other than incidentally and anecdotally. UNDA has data which demonstrates that TEP graduates perform at least as well as other entrants, as undergraduate
students. Retention rates for TEP graduates, in undergraduate courses, are markedly higher than other alternative entry pathways, including mature age students and Certificate IV graduates.
The importance of mathematics
All students, entering any university degree, need at least rudimentary mathematics skills and knowledge, such as the ability to:
• read and understand quantitative data, such as found in journal articles
• carry out calculations appropriate to the particular course content
• deal with measurements
• process data
• understand and use statistics
• work with simple calculations
• use analytical skills
• reason numerically
• problem solve.
These elementary skills, often captured by the term 'numeracy', do not require a student to have taken a specialist mathematics program (Galligan & Taylor, 2008). Numeracy is essentially the ability
to use mathematics effectively for active participation in society. Students entering some university degrees (e.g. Actuarial Sciences, Engineering) require an aptitude for more complex mathematics
(Galligan, 2004) that is not required for the majority of courses. Students who enter any university course with deficient basic numeracy skills are likely to be innately disadvantaged (Parsons &
Bynner, 2005). Within some courses the mathematics is overt and studied as a core unit; within other courses the mathematics is covert and, at times, easily underestimated by course entrants
(Galligan & Taylor, 2008). For example, a student entering a degree program in psychology may underestimate the high level of statistics usually found within such a course.
The problematic nature of lacking fundamental mathematics, as an undergraduate student, is noted throughout the research: "Students who lack the basic and fundamental skills, especially in
mathematics and writing, are finding it difficult to cope with the normal course workload" (Lau, 2003, p. 2).
In 2010, The University of Tasmania investigated the role of numeracy within all courses offered at the institution, noting the importance of numeracy when it was both explicit and embedded (Skalicky
et al., 2010).
All TEP students complete EP005 Mathematical Competency, a three hour per week, 13 week unit, which covers the areas of Number, Measurement, Geometry, Algebra and Data. The content of EP005 does not
exceed that usually expected of students in 'general mathematics' courses at Year 10, and is thus described as a 'low cognitive load' course.
The problem: TEP students consistently struggled with the mathematics unit
Over time, EP005 has proved the most problematic TEP unit for students; it has the highest failure rate of any unit within the program, with an average (over the past four semesters: S2, 2009; S1 & S2, 2010; S1, 2011) of 26% of students failing to pass the unit (achieve 50% or greater), and an average of 55% of students failing to meet the University's minimum benchmark of 65% for progress to undergraduate studies.
The TEP is fairly described as an 'intense course' with multiple units which traverse English, Mathematics, and Information Technology. Understandably, many students find this challenging. There are
several likely reasons why the students in TEP have such difficulty with this mathematics unit. Firstly, many students come to the TEP without having completed a high level of, or in some cases any,
mathematics at Year 12. In Western Australia students choose subjects in stages 1, 2 or 3 of various curriculum offerings; however, as an example, in Semester Two, 2011, not one EP005 student had
completed a stage 3 mathematics course, and less than 40% had completed a stage 2 mathematics course. Tracking demonstrates those students without at least mathematics at stage 2 (or its equivalent)
in Year 12 struggle with the mathematical content which is positioned within this course. Secondly, many students who have undertaken lower level mathematics courses in secondary schools have made
extensive use of calculators for all computational work. The entire EP005 mathematics program is positioned around non-calculator use, and for many students who enter this course, their dependence on
calculators appears, over time, to leave them lacking skills with straightforward mental mathematics. The errors in questions on the diagnostic test, which require students to multiply whole and
decimal numbers by 100, illustrate the limited mental mathematics skills of participants. That large numbers of students have trouble with such questions alludes to broader issues of teaching and
learning experiences during 12 years of formal schooling.
As the TEP is an intense course, EP005 is completed simultaneously with six other units of study and so, whilst the mathematics content itself is not onerous, there is a considerable concurrent workload for a student. Students in many cases choose to focus their study time on the subjects they enjoy more, or the subjects that they think might be more beneficial to them, if they underestimate the
importance of numeracy to any university course.
Lastly, the first task completed in EP005, a written personal reflection on mathematics, provides evidence every semester that many students enter this course with negative prior mathematical
experiences, and at least some students are reluctant to engage with mathematics, or phobic, or anxious, as they commence this unit. A student with an unconstructive approach to mathematics may have
reduced confidence, and such negativity can impact on self-esteem, inhibit functioning, and interfere with performance (Shapka & Keating, 2003). That anxiety and confidence levels predict mathematics
performance better than standardised measures of quantitative ability demonstrates the significance of psychological factors inherent in mathematics teaching and learning (Ironsmith, Marva, Harju & Eppler, 2003).
TEP course coordinators and EP005 teaching staff wrestled with the issues. Students were finding this unit the most problematic, and in many cases it was the unit that ultimately prevented them
moving to undergraduate study, as they failed to achieve the benchmark in this unit alone. Many students were ill prepared for mathematics, in particular within a 13 week program, and at least some
students needed time to begin to address their negative attitudes towards mathematics.
The value of diagnostic assessments within the complexity of higher education structures
Given the low student success rate within EP005, and a large number of students not meeting the University imposed benchmark for progress to undergraduate study, the AESC developed a twenty question,
multiple choice, diagnostic mathematics assessment based around content which was less demanding than, but indicative of, the content covered within EP005. The actual assessment is not included in this
paper, as the tool remains in current use, in order to use the same tool with different cohorts. To ensure validity, the assessment was written by an experienced mathematics teacher, and then
moderated by three other experienced mathematics teachers, with feedback used to modify and re-review the tool. The questions were mapped to the Western Australian secondary mathematics curriculum,
to ensure all questions were within the mainstream teaching and learning experiences of students within Years 8 - 10.
Every profession recognises the importance of early identification and intervention (Gale, et al, 2010). The principle is that the earlier intervention occurs, the more effective it will be (Rogers,
1996). Moreover, early intervention is associated with reduced costs over time (McCain & Mustard 1999; Gauntlett, Hugman, Kenyon & Logan 2000). Early intervention in an academic setting is ethically
responsible behaviour, and there is solid evidence that proactive work is more effective than remediation (Tumen, Shulruf & Hattie, 2008). In interpreting the results of the University of Auckland's
diagnostic screening tool, Elder and von Randow (2008) note that students identified as 'mainly satisfactory' are unlikely to obtain high grades in their first-year subjects. Furthermore, they note
that students identified as being 'at risk' are "at risk of failure in one or more subjects" (p. 176).
In the case of EP005, all the content and assessment for each semester is prescribed before any knowledge of the participants is known. This is a conundrum for higher education - where units are
tightly planned, with all content and assessment prescribed, without reference to the incoming student cohort and their particular strengths and needs. It might be logically implied that the results
of a diagnostic assessment could potentially necessitate a significant change to the teaching and learning plan. However, University regulations around the pre-approval of unit outlines prevent
changes without a rigorous and time-consuming process being followed. Whilst best teaching practice might be to change content sequencing, topics to be covered, or the dates that assessments are
completed, none of these changes are permitted. A rationale for diagnostic testing for a unit such as EP005, whilst educationally and ethically sound, designed to improve the quality of teaching and
learning, might well be thwarted by university processes which reduce flexibility. Academic staff need to be able to use the results of diagnostic testing within the set parameters, and to maximise
the flexibility available to them professionally. The innate complexities are made more pronounced by the very specific time constraints within tertiary units. In the case of EP005, a 13 week timeframe to
address mathematics skills, content and attitudes, perhaps reflective of 12 years of formal schooling, is an immediate challenge.
Diagnostic assessment mechanisms can be powerful for the more 'able' students (Harrington, 2005), not just those flagged 'at risk' (Elder & von Randow, 2008). In practice, it may be easier to modify
teaching and learning for more able students (Alderson, 2005), as they are more able to self-pace through the required content. Moreover, they are likely to become dissatisfied if held back in their
work, in order to work at a pace more suited to less skilled peers. The mathematics text aligned to EP005 has the advantage of having a parallel set of tasks for each topic, but at an 'extended'
level; however, staff discussions demonstrated that these were seldom used or recommended, without logical explanation being available. This lack of use might indicate that staff were more focused on
the needs of 'at risk' students, rather than the provision of a differentiated curriculum (Forster, 2004), for more capable learners.
Developing a diagnostic mathematics assessment as a work in progress
The diagnostic assessment developed in 2010 was first used, in an informal and ad hoc manner, with two groups of students in semester two, 2010: 41 Health Sciences entrants, and 80 TEP entrants. It comprised 20 questions, benchmarked to the level usually expected in Year 10 mathematics. The 'pencil and paper' multiple choice assessment was completed in an invigilated
setting, with 20 minutes working time, and calculator use was not permitted. An analysis of the results of the Health Sciences is reported fully elsewhere (McNaught & Hoyne, 2011). The lowest mark
achieved on the diagnostic test, by TEP entrants, was 10%, and the highest was 95%. The average was 51%. Staff teaching EP005 were keen to experiment with a diagnostic test, and the
results were used informally at that stage, mainly to generate discussion amongst the staff, and to provide students with an indication of their entry level with mathematics, to encourage them to
plan accordingly for success in the unit. At that stage, there was no evidence of any link between this diagnostic assessment and overall results in EP005.
At the end of EP005, semester two, 2010, an analysis of the final results of students occurred, again largely informally and through collegial discussions, but it was clear that a 'critically low score' in the diagnostic assessment was linked to students failing to achieve the minimum benchmark of 65% in the unit. No student who achieved a score less than 40% achieved an overall EP005 result of 65% or greater. Of the 6 students who achieved a score of exactly 40% in the diagnostic test, only 2 (33%) achieved the necessary 65% at the end of the unit. Essentially, a cathartic change of thinking
occurred with the team working with the diagnostic assessment. Instead of thinking about such an assessment as a way to inform teaching and learning, and increase student self-awareness, a new
hypothesis had emerged: Could this diagnostic assessment be used to help students make an informed decision about their choice of enabling program, and select a less intense course, without
mathematics in the first semester? Moreover, could delaying a mathematics course be a better option for the transition of some enabling program students? An effective enabling program should assist
students to become self-regulating, independent learners, and to use reflective mechanisms about their own skill set, to ensure the likelihood of future success (Wright, 2010).
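The cut-off analysis described above (does a low diagnostic score predict missing the 65% benchmark?) can be sketched in a few lines. The (diagnostic, final) pairs below are invented for illustration, not the actual 2010 cohort data.

```python
# Hypothetical (diagnostic %, final EP005 %) pairs -- illustrative only,
# not the real semester two, 2010 cohort data.
students = [(25, 48), (35, 60), (40, 66), (40, 58), (55, 70), (70, 81), (90, 92)]

CUTOFF = 40      # 'critically low' diagnostic score band
BENCHMARK = 65   # institutional benchmark for undergraduate progression

below = [final for diag, final in students if diag < CUTOFF]
at_or_above = [final for diag, final in students if diag >= CUTOFF]

# Proportion in each diagnostic band who reach the 65% benchmark
rate_below = sum(f >= BENCHMARK for f in below) / len(below)
rate_above = sum(f >= BENCHMARK for f in at_or_above) / len(at_or_above)

print(f"benchmark rate, diagnostic < {CUTOFF}%: {rate_below:.0%}")    # 0%
print(f"benchmark rate, diagnostic >= {CUTOFF}%: {rate_above:.0%}")   # 80%
```

With the paper's real data the first rate was literally zero, which is what prompted the change of thinking about how the assessment should be used.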
2011 trial of the diagnostic assessment
The diagnostic assessment was implemented formally in semester one, 2011, with the newly articulated goal of creating a future mechanism to allow students to make their own choice between the two
bridging courses, TEP and Foundation Year, based, at least in part, on their results in the diagnostic test.
Students entering UNDA to complete a bridging course have the option of undertaking an alternative enabling program to TEP, the year-long Foundation Year Program. The Foundation Year Program does not
have a mathematics unit in first semester and is structured very differently to the TEP. Foundation Year students complete four units linked to academic literacy only, in semester one: Learning
Skills, Literacy Competency, Academic Writing and Research Skills. These are four of the seven units also completed by the TEP students. In their second semester, Foundation Year students complete
four undergraduate units, in a chosen stream, which may later be used for advanced standing in their undergraduate degree. Foundation Year students will first encounter mathematics units as
undergraduate students, if so required in their course of study. The advanced standing option, from their second semester program, will potentially permit them to complete fewer units in the semester
in which they complete a mathematics course, thus enabling more time to be devoted to this area, if so required. The AESC offers primer courses for undergraduate mathematics units, and also free
tutoring services, creating additional support mechanisms for students wishing to engage with these services. On a superficial judgement, many students choose the TEP Program, over the Foundation
Year, because it can be completed in one semester. However, students who are unlikely to achieve success in an intense course, such as the TEP, may be better advised to complete the Foundation Year.
Prior to the enrolment of students within the TEP and Foundation Year, for semester two, 2011, a flowchart was collaboratively created for use by the University's Admissions Office. All applicants
who do not meet the requirements for direct entry to undergraduate study are usually offered a place within an enabling program, and many have placed this as a 'second preference' in their
application to the University. This flowchart was designed to place students in the course deemed best suited to their background. Essentially, the flowchart used stage two mathematics courses as the
discriminator; students who had completed stage two mathematics were recommended for TEP, those without stage two mathematics were recommended for the Foundation Year. This process resulted in more than 30 students who had applied for TEP being offered, and accepting, a place in the Foundation Year program. The mathematics skills of those 30 students were not tested, but all were offered the
opportunity to complete the diagnostic assessment, in order to make a fully informed decision. None chose to do so.
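The flowchart's logic reduces to a simple decision rule, sketched below with stage two mathematics as the sole discriminator, as described above. The function name and the optional diagnostic-score input are hypothetical additions for illustration; the actual flowchart used stage two mathematics only.

```python
def recommend_program(has_stage_two_maths: bool, diagnostic_score=None) -> str:
    """Recommend an enabling program, following the flowchart logic:
    stage two mathematics is the discriminator. The optional diagnostic
    score is this sketch's assumption, mirroring the <=40% critical band
    discussed in the paper; it is not part of the actual flowchart."""
    if has_stage_two_maths:
        return "TEP"
    if diagnostic_score is not None and diagnostic_score > 40:
        return "TEP"   # assumed: a strong diagnostic result overrides
    return "Foundation Year"

print(recommend_program(True))                        # TEP
print(recommend_program(False))                       # Foundation Year
print(recommend_program(False, diagnostic_score=75))  # TEP (assumed override)
```

Encoding the rule this explicitly makes it easy to see why the 30 redirected students never sat the test: the stage two criterion alone was decisive.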
Comparing diagnostic assessment results and EP005 results in semester one 2011
In semester one, 2011, the diagnostic mathematics assessment was administered to 176 students, in the first class of their EP005 course. The mean was 57%, with a median of 55% and a mode of 45%. Some 21% of students scored <=40%, deemed to be a critically low score range, and indicative of a significantly reduced likelihood of achieving the benchmark of 65% in EP005. The distribution of marks is summarised in Figure 1.
Figure 1: Marks of all students, diagnostic test, semester one, 2011
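Summary statistics of this kind are easy to reproduce with the standard library. The marks below are invented for illustration, not the actual semester one, 2011 data.

```python
import statistics

# Invented diagnostic marks (%), illustrative only -- not the 2011 cohort
marks = [30, 35, 40, 45, 45, 45, 50, 55, 55, 60, 65, 70, 75, 80, 95]

mean = statistics.mean(marks)
median = statistics.median(marks)
mode = statistics.mode(marks)
# Share of the cohort in the 'critically low' band (<= 40%)
critical_share = sum(m <= 40 for m in marks) / len(marks)

print(f"mean {mean:.0f}%, median {median:.0f}%, mode {mode:.0f}%")  # 56%, 55%, 45%
print(f"share at or below 40%: {critical_share:.0%}")               # 20%
```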
The students were told, prior to completing the assessment, that the task was designed primarily to give them an indication of their current skill set in the topics that would be covered, and to use
this information to predict the time, work and effort that would be required to achieve the necessary benchmark. The students were informed that additional, optional, no cost tutoring would be
available for interested students, but that it would be provided contingent on student attendance. The students were informed that students from semester two, 2010, who had achieved a mark of less
than 40% (in the diagnostic test), had not achieved the benchmark of 65% in EP005. Staff were explicit with students that the information was useful, but not conclusive - hypothetically, a student
with a low score could do very well in the unit, and to use the data as a guide, not a guarantee.
Figure 2: Student performance in EP005 compared with Diagnostic assessment performance
No student who achieved a mark less than 40% in the semester one, 2011, diagnostic test achieved the required benchmark of 65% for EP005 (see Figure 2). This replicated the results from the semester two, 2010 data.
Proportionately, males performed better than females in the diagnostic test. Of the 59 students who had a result of less than 50% in the Diagnostic Test, 86.4% are females, 13.5% are males; of the 65
students who achieved >65% in the Diagnostic Test, 55.3% are males and 44.6% are females. Given that the gender balance of EP005 overall (176) is 68.2% female and 31.8% male, a disproportionately
high number of females achieved low results in the diagnostic test. However, although females perform poorly, compared to males, in the diagnostic test, females are more likely than males to perform well (achieve >65%) in the EP005 unit. Of the 82 students to achieve an EP005 final mark >=65%, 65.8% are female and 34.1% are male. Reasons for female students making significant gains in
their mathematical skill levels warrant future examination. It is hypothesised that the pedagogical approach, focused on both procedure and conceptual understandings, and an intentionally explicit
teaching approach, linked to linear progression within topics, is a contributing reason, based on the student feedback contained within unit evaluation forms, and subsequent discussions with the
staff involved. As noted earlier, EP005 also has a strong focus on building personal confidence with mathematics (Rutter, 2001), which warrants further investigation as a gender issue.
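Conditional breakdowns of this kind (gender mix within a performance band) amount to small cross-tabulations. The records below are invented for illustration, not the actual cohort data.

```python
from collections import Counter

# Invented (gender, diagnostic %, final EP005 %) records -- illustrative only
records = [("F", 35, 68), ("F", 45, 70), ("F", 70, 80),
           ("M", 55, 60), ("M", 75, 66), ("F", 30, 55)]

# Gender mix among low diagnostic scorers (< 50%)
low = Counter(g for g, diag, _ in records if diag < 50)
# Gender mix among those reaching the 65% EP005 benchmark
passed = Counter(g for g, _, final in records if final >= 65)

print("low diagnostic (<50%):", dict(low))     # {'F': 3}
print("reached 65% in EP005:", dict(passed))   # {'F': 3, 'M': 1}
```

Comparing each band's gender split against the overall cohort split, as the paper does, is what reveals whether a group is over- or under-represented.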
The semester one, 2011, TEP cohort was a high achieving group, across all units, with the exception of EP005. The fail rate for all units other than mathematics was less than 3%, yet it was 18.3% (six times that rate) in EP005. The percentage of EP005 students to achieve the minimum of 65% was significantly lower than in all the other units. It is of note that this particular cohort, with 54.5%
achieving the minimum benchmark, was the highest rate of any semester since 2006. Whilst staff working within EP005 were pleased to see the increased success across the cohort, and noted the gains
made by female students, they remained disappointed that EP005 still had a significant performance tail, and only 54% of students met the required 65% benchmark.
In semester two, 2011, one staff member taught all classes of EP005, and he used the results of the testing to offer earlier additional support, within the unit. In a 13 week course, a late
identification of students who struggle can be very disadvantageous to them, and many students who struggle with mathematics have perfected ways of going unnoticed during their secondary years. This
is powerfully illustrated by a student relating that in high school she avoided being called on in mathematics classes by 'making like a fish'. When asked to explain, she demonstrated how she would
'mouth', goldfish-like, and not make a sound, ensuring that teachers did not call on her again. It is recognised that many students are reluctant, even phobic, about mathematics within enabling
programs (and perhaps more generally) and require a very supportive learning environment to be willing to engage whole heartedly. The psychological and social issues around attitudes towards
mathematics are significant in the context of teaching and learning.
A diagnostic assessment may serve several purposes, including being a predictor of future success. It is apparent that low-range scores on a diagnostic mathematics test indicate that TEP students would be unlikely to achieve the required benchmark for the unit. Students complete enabling programs as a bridge to undergraduate studies, and the success of an enabling program can be
measured, to a large degree, on future undergraduate success. Unrealistic expectations or naivety are likely to result in both failure and disappointment. The use of diagnostic assessment has the
potential to assist enabling students to make informed choices, and to create a study plan based on realistic understandings, which demonstrate a course's suitability, and the level of workload
anticipated for future success. Given that many universities offer a range of enabling programs, diagnostic assessment could be a useful tool to assist students to select the program which provides
the best chance of successful completion.
Alderson, C. (2005). Diagnosing foreign language proficiency. The interface between learning and assessment. London: Continuum.
Arem, C. (2003). Conquering Math Anxiety (2nd ed.). Pacific Grove, CA: Brooks/Cole.
Elder, C., & von Randow, J. (2008). Exploring the utility of a web-based English language screening tool. Language Assessment Quarterly, 5(3), 173-194.
Forster, J. (2004). Quality practice: Implementing differentiated teaching and learning. Australasian Journal of Gifted Education, 13(1), 28-37.
Gale, T., Tranter, D., Bills, D., Hattam, R., & Comber, B. (2010). Interventions early in school as a means to improve higher education outcomes for disadvantaged (particularly low SES) students.
Canberra: DEEWR.
Galligan, L. & Taylor, J.A. (2008). Adults returning to study mathematics. In H. Forgasz, A. Barkatsas, A. Bishop, B, Clarke, S. Keast, W. Seah & P. Sullivan (Eds.), Research in Mathematics Education
in Australasia. (pp. 99-118). Rotterdam: Sense.
Galligan, L. (2004). Preparing international students for university: Mathematics as part of an integrated program. Senior Mathematics Journal, 18(1), 28-41.
Gauntlett, E., Hugman, R., Kenyon, P. & Logan, P. (2001). A meta-analysis of the impact of community-based prevention and early intervention. Commonwealth Department of Family and Community Services,
Canberra, ACT.
Harrington, S. (2005). Learning to Ride the Waves: Making Decisions about Placement Testing. WPA: Writing Program Administration, 28(3), 9-29.
Ironsmith, M., Marva, J., Harju, B., & Eppler, M. (2003). Motivation and Performance in College Students Enrolled in Self-Paced Versus Lecture-Format in Remedial Mathematics Courses. Journal of
Instructional Psychology, 30(4), 276-284.
Knowles, M. (1982). The Modern Practice of Adult Education: From Pedagogy to Andragogy (2nd ed.), Cambridge Books, New York.
Lau, L. K. (2003). Institutional factors affecting student retention. Education, 124(1), 126-136.
Levy, M., & Murray, J. (2005). Tertiary Entrance Scores Need Not Determine Academic Success: An analysis of student performance in an equity and access program. Journal of Higher Education Policy and
Management, 27(1), 129-140.
McCain, M. N. & Mustard, J. F. (1999). Reversing the Real Brain Drain: Early Years Study Final Report. Toronto: Ontario Children's Secretariat.
McNaught, K. & Hoyne, G. (2011). Mathematics for first year success. Proceedings of the 14th Pacific Rim First Year in Higher Education Conference, Fremantle, June 29 - July 1, 2011.
Parsons, S. & Bynner, J. (2005). Does Numeracy Matter More? National Research and Development Centre for Adult Literacy and Numeracy, Institute of Education, London.
Rogers, S. (1996). Brief report: Early intervention in autism. Journal of Autism and Developmental Disorders, 26(2), 243-246.
Rosenshine, B. & Meister, C. (1992). The use of scaffolds for teaching higher-level cognitive strategies. Educational Leadership, 49(7), 26-33.
Rutter, M. (2001). Psychosocial adversity: Risk, resilience and recovery. In J. M. Richman & M. W. Fraser (Eds.), The context of youth violence: Resilience, risk, and protection (pp.13-41). Westport,
CT: Praeger Publishers.
Shapka, J.D. & Keating, D.P. (2003). Effects of a girls-only curriculum during adolescence: Performance, persistence, and engagement in mathematics and science. American Educational Research Journal,
40, 929-960.
Skalicky, J., Adam, A., Brown, N., Caney, A. & Lejda, A. (2010). Tertiary Numeracy Enquiry. Centre for the Advancement of Learning and Teaching (CALT), University of Tasmania.
Tumen, S., Shulruf, B., & Hattie, J. (2008). Student Pathways at the University: Patterns and Predictors of Completion. Studies in Higher Education, 33(3), 233-252.
Please cite as: McNaught, K. (2012). Trialling the use of a mathematics diagnostic assessment task. In Creating an inclusive learning environment: Engagement, equity, and retention. Proceedings of the 21st Annual Teaching Learning Forum, 2-3 February 2012. Perth: Murdoch University. http://otl.curtin.edu.au/tlf/tlf2012/refereed/mcnaught.html
Copyright 2011 Keith McNaught. The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for
institutional teaching and learning purposes, in any format, provided that the article is used and cited in accordance with the usual academic conventions.
A problem involving inverse trigonometry.
Solve the equation: tan^{-1}[(x-1)/(x-2)] + tan^{-1}[(x+1)/(x+2)] = π/4
We have $\tan^{-1} \left( \frac{x-1}{x-2}\right)+\tan^{-1} \left( \frac{x+1}{x+2}\right)=\frac{\pi}{4}$ By using the famous identity, $\arctan(x)+\arctan(y)= \arctan \left ( \frac{x+y}{1-xy}\right)$
we obtain $\tan^{-1} \left( \frac{\frac{x-1}{x-2}+\frac{x+1}{x+2}}{1-\frac{x+1}{x+2}\cdot \frac{x-1}{x-2}}\right)=\frac{\pi}{4}$, and hence $\frac{(x-1)(x+2)+(x+1)(x-2)}{(x^2-4)-(x^2-1)} = \tan \left( \frac{\pi}{4}\right) = 1$. Note that $(x-2)(x+2)=x^2-4$, so the denominator is $-3$. Expanding the numerator, $x^2+2x-x-2+x^2-2x+x-2 = 2x^2-4$, so $2x^2-4=-3$, $2x^2=1$, and $x= \pm \sqrt{\frac{1}{2}}$.
I had tried it the same way, but I had a few queries regarding it, as follows. 1) This identity is valid when the following conditions are satisfied: a) $x>0$, b) $y>0$, c) $xy<1$. I was apprehensive about how I could ascertain that the question satisfies these conditions. Or is there some other vital piece of the concept I am lacking? 2) Secondly, could you please tell me whether the site "www.wolframalpha.com", which is a computational knowledge engine, gives the same answer as the above (i.e. the roots $\pm\sqrt{1/2}$), and, first of all, is it correct to tally this answer against the one that the computational engine produces?
Well, I thank you sbhatnagar. And I would be glad if you, or anyone could help me with my doubts as stated above.
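A direct numerical substitution is the surest way to tally the answer (query 2), and it also checks the identity's conditions at the roots (query 1). The sketch below is an illustrative check in Python; note that $(x-2)(x+2)=x^2-4$, which makes the denominator in the derivation $-3$ and the roots $x=\pm\sqrt{1/2}$.

```python
import math

# Substitute x = ±sqrt(1/2) into atan((x-1)/(x-2)) + atan((x+1)/(x+2)),
# and also check the identity's conditions at each root.
def terms(x):
    return (x - 1) / (x - 2), (x + 1) / (x + 2)

def lhs(x):
    u, v = terms(x)
    return math.atan(u) + math.atan(v)

root = math.sqrt(0.5)
check_pos = lhs(root)        # should equal pi/4
check_neg = lhs(-root)       # should equal pi/4

u, v = terms(root)
conditions_ok = (u > 0) and (v > 0) and (u * v < 1)   # u*v = 1/7 here
```

Both roots satisfy the equation, and at each root the two arctangent arguments are positive with product $1/7 < 1$, so the identity applies.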
Artificial Intelligence
1206 Submissions
[2] viXra:1206.0043 [pdf] replaced on 2012-06-11 04:21:17
Applications of Extenics to 2D-Space and 3D-Space
Authors: Florentin Smarandache, Victor Vladareanu
Comments: 12 Pages.
In this article we propose several numerical examples for applying the extension set to 2D and 3D spaces. While rectangular and prism geometrical figures can easily be decomposed from 2D and 3D into 1D linear problems, and similarly for the circle and the sphere, it is not possible in general to do the same for other geometrical figures.
Category: Artificial Intelligence
[1] viXra:1206.0014 [pdf] submitted on 2012-06-04 23:37:54
Maximum area and minimum perimeter.
December 8th 2009, 01:00 PM #1
Sep 2009
Maximum area and minimum perimeter.
Here's another problem I'm stuck on:
"A rectangle has one side on the x-axis, one side on the y-axis, one vertex at the origin and one vertex on the curve y = e^(-2x), x ≥ 0. Find the
a) Maximum area
b) Minimum perimeter."
Here's a picture of the curve:
Any help on how to do this would be appreciated!
$A(x) = xe^{-2x}$
$P(x) = 2(x + e^{-2x})$
take the derivatives and optimize ...
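Carrying that out: $A'(x)=e^{-2x}(1-2x)=0$ gives $x=\tfrac12$, and $P'(x)=2(1-2e^{-2x})=0$ gives $x=\tfrac{\ln 2}{2}$. A quick numerical confirmation (Python, with the closed forms noted in comments):

```python
import math

# A(x) = x e^{-2x}:  A'(x) = e^{-2x}(1 - 2x) = 0  =>  x = 1/2
x_area = 0.5
max_area = x_area * math.exp(-2 * x_area)           # = 1/(2e) ≈ 0.1839

# P(x) = 2(x + e^{-2x}):  P'(x) = 2(1 - 2 e^{-2x}) = 0  =>  x = ln(2)/2
x_perim = math.log(2) / 2
min_perim = 2 * (x_perim + math.exp(-2 * x_perim))  # = ln(2) + 1 ≈ 1.6931
```

The second derivatives confirm these are a maximum ($A''(\tfrac12)=-2e^{-1}<0$) and a minimum ($P''(x)=8e^{-2x}>0$).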
December 8th 2009, 01:36 PM #2
The following article discusses a (sub)language of ``fuzzy'', probabilistic numbers. The language lets us write ordinary-looking arithmetic expressions; the variables in these expressions, however,
denote random values rather than numbers. A user can specify the exact shape of the probability distribution curve for each random quantity. Arithmetic expressions over fuzzy numbers often arise
in physics and statistics. Physical quantities often cannot be described by a single number; rather they are random values, due to measurement errors or the very nature of the corresponding
physical process. We can use the language of fuzzy numbers to automate the error analysis and to estimate the resulting distribution.
The problem is more complex than it appears. An apparent solution is to represent fuzzy numbers as distribution functions. Operator + should be generalized to take two distribution functions as
arguments and yield the distribution function of the sum. Other arithmetical operators and functions should be lifted as well. As Jerzy Karczmarczuk noted, this naive approach is fraught with
problems. For example, fuzzy numbers cannot be an instance of the Num class as the latter requires its members to belong to the Eq class; comparing functions is in general undecidable. A more
immediate problem is that the arithmetic of distributions is far more complex. Distributions are not linear: the distribution of a sum is a convolution of the distributions of its terms. Division
is especially troublesome, if the denominator has a non-zero probability of being zero.
The following article develops a different approach. The key insight is that we do not need to ``symbolically'' manipulate distributions. We represent a random variable by a computation, which,
when executed, generates one number. If executed many times, the computation generates many numbers -- a sample of its distribution. We can lift the standard arithmetical operators and functions
(including comparisons) to such computations, and write ordinary-looking arithmetical expressions over these random computations. We can then execute the resulting expression many times and
obtain a sample distribution of the resulting quantity. The sample lets us estimate the mathematical expectation, the variance and other moments of the resulting quantity. Our method is thus a
Monte Carlo estimation of a complex distribution.
As an example, we compute the center and the variance of sqrt((Lw - 1/(Cw))^2 + R^2), which is the impedance of an electric circuit, with the inductance L being a normal variable (center 100, variance 1) and the frequency w being uniformly distributed from 10 through 20 kHz. The resistance R is also uniformly distributed, from 10 through 50 kOhm, and the capacitance C has the square root of the normal distribution. (The latter is just to make the example more interesting.)
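As an illustration of the idea, here is a minimal Python sketch: a fuzzy number is represented as a nullary function that draws one sample, arithmetic is lifted pointwise over such thunks, and the impedance is then estimated by Monte Carlo. The capacitance parameters (and reading 1/Cw as 1/(C·w)) are assumptions made for the sketch; the text specifies only the distribution shapes.

```python
import math
import random

# A "fuzzy number" is a nullary function returning one sample.
def const(c):
    return lambda: c

def lift2(op):
    return lambda a, b: lambda: op(a(), b())

sub = lift2(lambda x, y: x - y)
mul = lift2(lambda x, y: x * y)
div = lift2(lambda x, y: x / y)

def impedance_sample():
    # Draw each physical quantity once per trial, so that the two
    # occurrences of w in the expression refer to the same sample.
    L = const(random.gauss(100, 1))           # inductance: normal(100, 1)
    w = const(random.uniform(10e3, 20e3))     # frequency: uniform 10..20 kHz
    R = const(random.uniform(10e3, 50e3))     # resistance: uniform 10..50 kOhm
    C = const(math.sqrt(abs(random.gauss(1e-6, 1e-7))))  # sqrt of a normal (assumed parameters)
    reactance = sub(mul(L, w), div(const(1.0), mul(C, w)))
    return math.hypot(reactance(), R())

samples = [impedance_sample() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

The sample mean and variance estimate the center and spread of the resulting distribution, exactly as described above.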
In the follow-up, summarizing article, Jerzy Karczmarczuk designed a complementary approach, based on approximating a distribution by linear combinations of normal distributions. The arithmetic
of normal distributions is well-understood.
System Dynamics (24.509)
III. The State Space Equations and Their Time Domain Solution
Analytic Solution of Linear Stationary Systems
Now our next logical step is to address the time domain solution of the state equations. Our approach to developing analytic solutions will be to emphasize the analogy between the scalar problem and
the matrix or the multivariable problem. We will see that the matrix exponential function plays an important role in developing solutions for linear stationary systems.
Homogeneous Linear Stationary Systems
Consider an LTI system which has no independent forcing function and only a single state variable. This results in a scalar
homogeneous system.
Scalar Case
For the scalar homogeneous system $\dot{x}(t) = a\,x(t)$, assume a solution of the form $x(t) = c\,e^{at}$. Substitution into the original equation gives $ac\,e^{at} = ac\,e^{at}$, so the assumed form satisfies the equation identically. Now evaluating the solution at $t = 0$ gives $c = x(0)$, and therefore $x(t) = e^{at}\,x(0)$.
Matrix Case
By analogy to the scalar case, assume a solution of the form $x(t) = e^{At}\,c$, where the matrix exponential is defined by the series $e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots$. Substitution into the original equation $\dot{x}(t) = A\,x(t)$ again yields an identity, so the assumed form is a solution. Now, using the initial conditions gives $c = x(0)$, and therefore
$x(t) = e^{At}\,x(0)$
This result can also be cast into several other useful forms. For example, if we evaluate the initial condition at some arbitrary time $t_0$, then $x(t) = e^{A(t-t_0)}\,x(t_0)$. Or writing $\Delta t = t - t_0$ and, substituting $t$ for $t_0$, $x(t+\Delta t) = e^{A\Delta t}\,x(t)$ is also a valid form of the solution. This form is particularly useful since $e^{A\Delta t}$ need only be computed once for a fixed time step $\Delta t$. Thus the evaluation of the solution at the discrete times $t_k = t_0 + k\,\Delta t$ in a discrete recursive formulation can also be written as $x_{k+1} = e^{A\Delta t}\,x_k$.
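The discrete recursion $x_{k+1} = e^{A\Delta t}\,x_k$ is easy to exercise numerically. The sketch below (in Python, with an illustrative $2\times 2$ matrix and step size chosen for the example) builds $e^{A\Delta t}$ from a truncated power series and propagates a two-state system; the result can be compared against the closed-form solution.

```python
import math

# Truncated power series for the matrix exponential of a 2x2 matrix:
# e^{M} ≈ sum_{k=0..N} M^k / k!  (adequate for small ||M||).
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(M, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at identity
    power = [[1.0, 0.0], [0.0, 1.0]]    # M^k
    fact = 1.0                          # k!
    for k in range(1, terms):
        power = mat_mul(power, M)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

# Homogeneous solution x(t) = e^{A t} x(0) for a damped second-order system
A = [[0.0, 1.0], [-2.0, -3.0]]          # eigenvalues -1 and -2
x0 = [1.0, 0.0]
dt = 0.1
Phi = mat_exp([[A[i][j] * dt for j in range(2)] for i in range(2)])

# Discrete recursion x_{k+1} = e^{A dt} x_k, exact at the sample times
x = x0[:]
for _ in range(10):                      # propagate to t = 1.0
    x = [Phi[0][0] * x[0] + Phi[0][1] * x[1],
         Phi[1][0] * x[0] + Phi[1][1] * x[1]]

# Closed form for this A and x0:  x1(t) = 2e^{-t} - e^{-2t}
exact = 2 * math.exp(-1.0) - math.exp(-2.0)
```

Because $e^{A\Delta t}$ is exact (up to series truncation), the recursion matches the analytic solution at every sample time, unlike an Euler integration.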
In the technical literature, the matrix exponential, $e^{At}$, is referred to as the state transition matrix. To see how this term applies, consider the following development.
In general, the solution to the homogeneous state equation can be written as $x(t) = \Phi(t,t_0)\,x(t_0)$, where $\Phi(t,t_0)$ is the state transition matrix which is the unique solution of
$\dot{\Phi}(t,t_0) = A\,\Phi(t,t_0), \qquad \Phi(t_0,t_0) = I \qquad (3.34)$
To show that the solution to eqn. (3.34) is indeed a solution to the original system of equations, consider the following:
$\dot{x}(t) = \dot{\Phi}(t,t_0)\,x(t_0) = A\,\Phi(t,t_0)\,x(t_0) = A\,x(t)$
For a homogeneous linear time-invariant system we know that $\Phi(t,t_0) = e^{A(t-t_0)}$, so the state at any time $t$ is simply a linear transformation of the initial conditions.
We already know several properties of the state transition matrix, $e^{At}$:
1. $e^{A\cdot 0} = I$
2. $e^{A(t_1+t_2)} = e^{At_1}\,e^{At_2}$
3. $\left(e^{At}\right)^{-1} = e^{-At}$
4. Letting $t_2 = -t_1$ in property 2 recovers property 3.
These properties can be written in terms of $\Phi(t,t_0)$ as $\Phi(t_0,t_0) = I$, $\Phi(t_2,t_1)\,\Phi(t_1,t_0) = \Phi(t_2,t_0)$, and $\Phi(t_0,t_1) = \Phi(t_1,t_0)^{-1}$.
The terminology and manipulation of the so-called state transition matrix, $\Phi(t,t_0)$, carry over directly to the non-homogeneous systems treated next.
Non-Homogeneous Linear Stationary Systems
We now address the solution of non-homogeneous systems, where there is an independent input function that drives the system response.
Scalar Case
The scalar non-homogeneous system is
$\dot{x}(t) = a\,x(t) + b\,u(t) \qquad (3.35)$
Multiplying by the integrating factor, $e^{-at}$, gives
$\frac{d}{dt}\left[e^{-at}x(t)\right] = e^{-at}\,b\,u(t)$
Integrating this expression between $t_0$ and $t$ gives
$e^{-at}x(t) - e^{-at_0}x(t_0) = \int_{t_0}^{t} e^{-a\tau}\,b\,u(\tau)\,d\tau$
Finally, multiplying through by $e^{at}$ and rearranging gives
$x(t) = e^{a(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{a(t-\tau)}\,b\,u(\tau)\,d\tau \qquad (3.37)$
The standard procedure for checking the correctness of a solution [such as eqn. (3.37)] is to substitute it into the original differential equation. This case is no different, but one must be careful when differentiating integral terms that have variable limits of integration. The usual technique for doing this is called Leibnitz's Rule, which can be stated as follows:
$\frac{d}{dt}\int_{\alpha(t)}^{\beta(t)} f(t,\tau)\,d\tau = f\big(t,\beta(t)\big)\,\frac{d\beta}{dt} - f\big(t,\alpha(t)\big)\,\frac{d\alpha}{dt} + \int_{\alpha(t)}^{\beta(t)} \frac{\partial f(t,\tau)}{\partial t}\,d\tau$
Substituting eqn. (3.37) into eqn. (3.35) and using Leibnitz's Rule, we have
$\dot{x}(t) = a\,e^{a(t-t_0)}x(t_0) + b\,u(t) + \int_{t_0}^{t} a\,e^{a(t-\tau)}\,b\,u(\tau)\,d\tau = a\,x(t) + b\,u(t)$
which confirms the solution.
Matrix Case
The matrix non-homogeneous system is $\dot{x}(t) = A\,x(t) + B\,u(t)$. By analogy to the scalar case, multiply this expression by the integrating factor, $e^{-At}$:
$\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}\,B\,u(t)$
Integrating this between $t_0$ and $t$, one has
$e^{-At}x(t) - e^{-At_0}x(t_0) = \int_{t_0}^{t} e^{-A\tau}\,B\,u(\tau)\,d\tau$
Finally, multiplying through by $e^{At}$ gives
$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}\,B\,u(\tau)\,d\tau \qquad (3.41)$
As a check on this solution we can differentiate eqn. (3.41), giving
$\dot{x}(t) = A\,e^{A(t-t_0)}x(t_0) + B\,u(t) + A\int_{t_0}^{t} e^{A(t-\tau)}\,B\,u(\tau)\,d\tau = A\,x(t) + B\,u(t)$
As a final note, one should be aware that these expressions can also be written in terms of the state transition matrix, where $\Phi(t,t_0) = e^{A(t-t_0)}$. For example, eqn. (3.41) written with $\Phi$ becomes
$x(t) = \Phi(t,t_0)\,x(t_0) + \int_{t_0}^{t} \Phi(t,\tau)\,B\,u(\tau)\,d\tau$
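Eqn. (3.37) can be spot-checked numerically for a step input $u(t)=1$. In the Python sketch below the values of $a$, $b$, and $x_0$ are arbitrary illustrative choices; the convolution integral is evaluated by trapezoidal quadrature and compared with the closed form $x(t) = e^{a(t-t_0)}x_0 + (b/a)\left(e^{a(t-t_0)}-1\right)$.

```python
import math

# Step response of x' = a x + b u with u(t) = 1, per eqn. (3.37).
a, b, x0, t0, t = -2.0, 3.0, 1.0, 0.0, 1.5

# Closed form: carry out the integral analytically for constant u
closed_form = math.exp(a * (t - t0)) * x0 + (b / a) * (math.exp(a * (t - t0)) - 1)

# Trapezoidal quadrature of the convolution integral in eqn. (3.37)
n = 100_000
h = (t - t0) / n
integrand = lambda tau: math.exp(a * (t - tau)) * b * 1.0
integral = h * (0.5 * integrand(t0) + 0.5 * integrand(t)
                + sum(integrand(t0 + i * h) for i in range(1, n)))
numeric = math.exp(a * (t - t0)) * x0 + integral
```

The two values agree to quadrature accuracy, confirming the superposition of the initial-condition response and the forced response.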
A Note About Time-Varying Systems
In general, a time-varying system $\dot{x}(t) = A(t)\,x(t)$ does not admit a closed form solution in terms of the matrix exponential. This is because, for the general case, $A(t)$ does not commute with its integral $\int_{t_0}^{t} A(\tau)\,d\tau$, so $\exp\left(\int_{t_0}^{t} A(\tau)\,d\tau\right)$ fails to satisfy the state equation.
We have seen that linear stationary systems have analytical closed form solutions and that linear non-stationary systems (in general) do not. However, solutions to realistic problems with several
unknowns or state variables must be determined via
computer implementation, whether we are dealing with stationary or non-stationary systems. The analytic solutions for linear stationary systems are extremely important, but they can be generated by
hand for only very low-order systems. Thus, no matter what form the system takes, computer implementation for realistic engineering problems will always be required.
24.509 Lecture Notes by Dr. J. R. White, UMass-Lowell (Spring 1997).
9.8.2 The Sequential Algorithm
Next: 9.8.3 The Concurrent Algorithm Up: 9.8 Munkres Algorithm for Previous: 9.8.1 Introduction
The input to the assignment problem is the cost matrix D of Equation 9.19. The first point to note is that the particular assignment which minimizes Equation 9.21 is not altered if a fixed value is added to or
subtracted from all entries in any row or column of the cost matrix D. Exploiting this fact, Munkres' solution to the assignment problem can be divided into two parts
Modifications of the distance matrix D by row/column subtractions, creating a (large) number of zero entries.
With the zeros in place, construction of a so-called minimal representative set, meaning a distinct selection of one zero per row with no column repeated, i.e. a permutation $j(i)$ such that the modified $D_{i,j(i)} = 0$ for every $i$.
The steps of Munkres algorithm generally follow those in the constructive proof of P. Hall's theorem on minimal representative sets.
The preceding paragraph provides a hopelessly incomplete hint as to the number theoretic basis for Munkres algorithm. The particular implementation of Munkres algorithm used in this work is as
described in Chapter 14 of [Blackman:86a]. To be definite, the algorithm marks special zero entries (starred zeros, $Z^*$, and primed zeros, $Z'$), as detailed below.
Munkres Algorithm
Step 1:
1. Find a zero Z in the distance matrix.
2. If there is no starred zero already in its row or column, star this zero.
3. Repeat steps 1.1, 1.2 until all zeros have been considered.
Step 2:
1. Cover every column containing a starred zero ($Z^*$).
2. Terminate the algorithm if all columns are covered. In this case, the locations of the starred zeros define the optimal assignment.
Step 3:
Main Zero Search
1. Find an uncovered zero Z in the distance matrix and prime it, $Z'$. If no uncovered zero remains, go to Step 5.
2. If no starred zero $Z^*$ lies in the row of $Z'$, go to Step 4.
3. If a starred zero $Z^*$ does lie in the row of $Z'$, cover that row and uncover the column of $Z^*$; then continue the search.
Step 4:
Increment Set of Starred Zeros
1. Construct the ``alternating sequence'' of primed and starred zeros: $Z_0$ is the uncovered primed zero found in Step 3; $Z_1$ is the starred zero in the column of $Z_0$, if such a zero exists; $Z_2$ is the primed zero in the row of $Z_1$; and so on. The sequence eventually terminates with an unpaired primed zero $Z_{2N}$.
2. Unstar each starred zero of the sequence.
3. Star each primed zero of the sequence, thus increasing the number of starred zeros by one.
4. Erase all primes, uncover all columns and rows, and return to Step 2.
Step 5:
New Zero Manufactures
1. Let h be the smallest uncovered entry in the (modified) distance matrix.
2. Add h to all covered rows.
3. Subtract h from all uncovered columns
4. Return to Step 3, without altering stars, primes, or covers.
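To make the objective concrete, here is a brute-force reference solver in Python for a small illustrative cost matrix: it enumerates all $N!$ assignments, which is exactly the combinatorial explosion that Munkres algorithm avoids by operating on zeros of the modified matrix.

```python
from itertools import permutations

# Brute-force reference for the assignment problem: try all N! pairings.
# Munkres achieves the same minimum in polynomial time; this check is
# only viable for small N, but it makes the objective of Equation 9.21
# concrete: minimize sum_i D[i][perm[i]] over permutations perm.
def brute_force_assignment(D):
    n = len(D)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(D[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

D = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
cost, perm = brute_force_assignment(D)   # cost 5: rows 0,1,2 -> cols 1,0,2
```

Subtracting a constant from any row or column of D changes every permutation's cost by the same amount, which is why the row/column reductions of Step 1 leave the optimal assignment unchanged.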
A (very) schematic flowchart for the algorithm is shown in Figure 9.19. Note that Steps 1,5 of the algorithm overwrite the original distance matrix.
The preceding algorithm involves flags (starred or primed) associated with zero entries in the distance matrix, as well as ``covered'' tags associated with individual rows and columns. The implementation of the zero tagging is done by first noting that there is at most one starred zero in any row or column, so the starred and primed zeros can be recorded compactly in the zero-locator arrays ZS(), ZR(), and ZP(). Covered column tags, CC(), and covered row tags, CR(), complete the bookkeeping.
Figure 9.19: Flowchart for Munkres Algorithm
Entries in the cover arrays CC and CR are one if the row or column is covered, zero otherwise. Entries in the zero-locator arrays ZS, ZR, and ZP are zero if no zero of the appropriate type exists in
the indexed row or column.
With the star-prime-cover scheme of the preceding paragraph, a sequential implementation of Munkres algorithm is completely straightforward. At the beginning of Step 1, all cover and locator flags
are set to zero, and the initial zero search provides an initial set of nonzero entries in ZS(). Step 2 sets appropriate entries in CC() to one and simply counts the covered columns. Steps 3 and 5
are trivially implemented in terms of the cover/zero arrays and the ``alternating sequence'' for Step 4 is readily constructed from the contents of ZS(), ZR() and ZP().
As an initial exploration of Munkres algorithm, consider the task of associating two lists of random points within a 2D unit square, assuming the cost function in Equation 9.19 is the usual Cartesian
distance. Figure 9.20 plots total CPU times for execution of Munkres algorithm for equal size lists versus list size. The vertical axis gives CPU times in seconds for one node of the Mark III
hypercube. The circles and crosses show the time spent in Steps 5 and 3 (zero manufacture and zero search, respectively). These two steps account for essentially all of the CPU time.
Since the zero searching in Step 3 of the algorithm is required so often, the implementation of this step is done with some care. The search for zeros is done column-by-column, and the code maintains
pointers to both the last column searched and the most recently uncovered column (Step 3.3) in order to reduce the time spent on subsequent re-entries to the Step 3 box of Figure 9.19.
Figure 9.20: Timing Results for the Sequential Algorithm Versus Problem Size
The dashed line of Figure 9.20 indicates the nominal $N^3$ scaling of the algorithm, and the measured times of Figure 9.20 are consistent with this expected behavior. It should be noted, however, that both the nature of this scaling and the coefficient of the scaling law depend on the structure of the cost matrix. Consider, for example, one-dimensional data sets with the distance between items given by the absolute value function. For the data sets in Equation 9.22, the preliminaries and Step 1 of Munkres algorithm completely solve the association, in a time which scales much more favorably. Figure 9.21 plots the CPU time per step for the last passes through the outer loop of Figure 9.19 for the 150-point case.
Figure 9.21: Times Per Loop
Guy Robinson
Wed Mar 1 10:19:35 EST 1995
University of Pittsburgh: Department of Mathematics
The Mathematics Department offers programs leading to bachelor of science degrees in Mathematics, Applied Mathematics, and Actuarial Mathematics, as well as joint degree programs in Scientific
Computing, Mathematics/Economics and in Mathematics/Philosophy.
A degree in mathematics provides an excellent base on which to build a career in virtually any high technology field, and provides a particularly strong foundation for advanced study in science,
engineering and finance.
As a Mathematics major, you will be thoroughly trained in classical mathematics, in addition to being exposed to ideas on the cutting edge of current research. Your training will include the use of
the latest technological tools, and you will have access to the Department's computer labs, equipped with workstations and the latest mathematical software.
There are numerous opportunities for Math majors to undertake research projects under the supervision of faculty members. Recent projects have included: the rhythms of smell (Manuel Gomez-Ramirez,
advised by Prof. Bard Ermentrout), and autohomeomorphisms of the plane and Cantor set (Rolf Suabedissen, advised by Profs. Paul Gartside and Bob Heath).
Students considering a major in the Mathematics Department should speak with an advisor in the Department as soon as possible. An advising appointment can be arranged by contacting the Mathematics
receptionist in room 301 of Thackeray Hall, or by calling the Department at 412-624-8375.
CCP4BB Archives
Yuan Cheng wrote:
> Eleanor Dodson wrote:
>> This phenomenon doesn't necessarily mean you have lattice-translation
>> defects - pseudo translations are quite common with perfectly good
>> crystals.
>> Lattice translation defects usually imply your "crystal" has two or
>> more layered different crystals in the beam.. It can be best detected
>> by an analysis of the data statistics.
>> Eleanor
>> Yuan Cheng wrote:
>>> Eleanor Dodson wrote:
>>>> You must have a pseudo translation vector of ~ 0.02 0.5 0.0
>>>> That relates solution 1 and 2, and 3 and 4.
>>>> That makes it hard to determine space group - there will be
>>>> absences along 0k0 because of the translation so the space group
>>>> could be P 2i 2 2i or P2i 21 2i
>>>> But which ever SG it is thedata with k odd will be weak, and that
>>>> means you will have a higher R factor.
>>>> Eleanor
>>>> Jerry McCully wrote:
>>>>> Dear all:
>>>>> I am currently refining a structure solved by MAD and
>>>>> somehow the R factor got stuck around 30% with 2.2 resolution.
>>>>> There are four molecules in one ASU. Two had very good
>>>>> density map and the other two were not equally good.
>>>>> I tried using NCS during refinement but it did not help
>>>>> much.
>>>>> Then I checked my data. Actually I found that there are
>>>>> alternate layers of strong and weak reflections. THe crystal is
>>>>> in a thin-plate shape with orthorombic space group.
>>>>> Then I looked at my molecular replacement solution from Phaser
>>>>> using my native data.
>>>>> Actually phaser gave two sets of solutions, which showed
>>>>> slightly different positions.
>>>>> You can also see that there is a translation inside the same set
>>>>> of solution.
>>>>> SOLU SET RFZ=12.8 TFZ=21.4 PAK=0 LLG=452 RFZ=10.7 TFZ=47.9 PAK=0
>>>>> LLG=1693 RFZ=13.0 TFZ=47.6 PAK=0 LLG=2791 RFZ=10.7 TFZ=46.1 PAK=0
>>>>> LLG=4045
>>>>> SOLU 6DIM ENSE ensemble1 EULER 184.052 0.185 175.770 FRAC
>>>>> -0.49889 -0.00218 -0.00000
>>>>> SOLU 6DIM ENSE ensemble1 EULER 225.116 0.167 134.696 FRAC
>>>>> -0.47056 0.49706 0.00051
>>>>> SOLU 6DIM ENSE ensemble1 EULER 359.333 31.677 180.633 FRAC
>>>>> 0.75769 -0.71475 -0.14004
>>>>> SOLU 6DIM ENSE ensemble1 EULER 359.373 31.969 180.711 FRAC
>>>>> 0.73074 -0.21423 -0.14108
>>>>> SOLU SET RFZ=12.8 TFZ=21.4 PAK=0 LLG=452 RFZ=10.7 TFZ=47.9 PAK=0
>>>>> LLG=1693 RFZ=13.0 TFZ=47.6 PAK=0 LLG=2791 RFZ=10.7 TFZ=47.3 PAK=0
>>>>> LLG=4042
>>>>> SOLU 6DIM ENSE ensemble1 EULER 213.115 0.173 146.741 FRAC
>>>>> -0.49931 -0.00269 0.00045
>>>>> SOLU 6DIM ENSE ensemble1 EULER 248.173 0.254 111.665 FRAC
>>>>> -0.47091 0.49661 0.00101
>>>>> SOLU 6DIM ENSE ensemble1 EULER 359.399 31.602 180.578 FRAC
>>>>> 0.75808 -0.71455 -0.13980
>>>>> SOLU 6DIM ENSE ensemble1 EULER 359.378 31.255 180.361 FRAC
>>>>> 0.78370 -0.21555 -0.13830
>>>>> I remember there is a discussion in CCP4bb about the same
>>>>> topic with the focus of pseudo-symmetry or translational
>>>>> pseudo-symmetry. Can anybody give some troubleshooting
>>>>> about my issue?
>>>>> Thanks a lot and have a nice weekend,
>>>>> Jerry McCully
>>>>> _________________________________________________________________
>>>>> Windows Live: Keep your friends up to date with what you do online.
>>>>> http://windowslive.com/Campaign/SocialNetworking?ocid=PID23285::T:WLMTAGL:ON:WL:en-US:SI_SB_online:082009
>>> Hi Jerry,
>>> I am having trouble similar to yours. You might want to check out
>>> this paper: Acta Cryst. (2005) D61:67-74. It is about
>>> lattice-translation defects and how to correct it. Hopefully it is
>>> helpful!
>>> Good Luck!
>>> Yuan
> Hi Eleanor,
> Could you explain a little bit more about how to tell the
> difference between pseudo-translational symmetry and lattice
> translation defects? Thanks a lot!
> Yuan
Hmm - that is hard.. pseudo translation is relatively harmless - you
have 2 or more molecules in the same orientation but in different
positions in the unit cell, and the structure factors they generate will
have some different properties. For instance the 0k0 in your case will
always have k=2n+1 weak because the translation is xt, 0.5,zt ( you can
work that out from a SF equation if you like!) And since the xt =
0.02, ie is rather small, and zt = 0, at low resolution all the hkl with
k=2n+1 will be weak.
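That structure-factor argument can be made concrete with a few lines of Python. The sketch models two identical molecules related by the quoted pseudo-translation t = (0.02, 0.5, 0.0); each structure factor then carries the modulation factor |1 + exp(2πi h·t)|, which nearly vanishes for 0k0 with k odd.

```python
import cmath

# Two identical molecules related by a pseudo translation t contribute
# F(hkl) ∝ [1 + exp(2*pi*i * (h*tx + k*ty + l*tz))], so reflections with
# h.t ≈ n + 1/2 are systematically weak.  t is taken from the thread.
t = (0.02, 0.5, 0.0)

def modulation(h, k, l):
    phase = 2 * cmath.pi * (h * t[0] + k * t[1] + l * t[2])
    return abs(1 + cmath.exp(1j * phase))

strong = modulation(0, 2, 0)   # k even: factor ~ 2 (constructive)
weak = modulation(0, 3, 0)     # k odd:  factor ~ 0 (destructive)
```

At higher h the small x-component (0.02) accumulates phase, so the odd-k reflections regain intensity away from low resolution, just as described above.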
Run hklview data.mtz and look at h0l, then the next level, and the next, etc., and
you should see this effect.
The only problem it gives is in determining the space group. But phaser
will usually sort that out as long as you let it test all SGs.
Lattice translation is effectively one form of twinning, you can
visualise it as a set of crystals where the lattices are aligned in two
dimensions but there is slippage along the third. So each "reflection"
is in fact the sum of two or more intensities and the twinning analyses
should be valid. But as well you have the problem that some classes of
reflections are very weak, in the same way as a pseudo translation
affects the data.
And the twinning tests via moments, the H test and the Britton test are all
distorted by the weak/strong pattern so really the only effective test
is the L test, and that too can be badly distorted by anisotropy and
other defects.
Apparently it is often possible to recognise a lattice defect by looking
at the images, if you are good at that. Some classes of reflections will
be very streaky ( where there is an overlap between the different
crystal fragments) and others sharp. But once the data is integrated
that information is lost.
Does that help?
Brad Ideas
In my article two weeks ago about the odds of knowing a cousin I puzzled over the question of how many 3rd cousins a person might have. This is hard to answer, because it depends on figuring out how
many successful offspring per generation the various levels of your family (and related families) have. Successful means that they also create a tree of descendants. This number varies a lot among
families, it varies a lot among regions and it has varied a great deal over time. An Icelandic study found a number of around 2.8 but it’s hard to conclude a general rule. I’ve used 3 (81
great-great-grandchildren per couple) as a rough number.
There is something, however, that we can calculate without knowing how many children each couple has. That’s because we know, pretty accurately, how many ancestors you have. Our number gets less
accurate over time because ancestors start duplicating — people appear multiple times in your family tree. And in fact by the time you go back large numbers of generations, say 600 years, the
duplication is massive; all your ancestors appear many times.
To answer the question of “How likely is it that somebody is your 16th cousin” we can just look at how many ancestors you have back there. 16th cousins share with you a couple 17 generations ago.
(You can share just one ancestor which makes you a half-cousin.) So your ancestor set from 17 generations ago will be 65,536 different couples. Actually less than that due to duplication, but at this
level in a large population the duplication isn’t as big a factor as it becomes later, and if it does it’s because of a closer community which means you are even more related.
So you have 65K couples and so does your potential cousin. The next question is, what is the size of the population in which they lived? Well, back then the whole world had about 600 million people,
so that’s an upper bound. So we can ask, if you take two random sets of 65,000 couples from a population of 300M couples, what are the odds that none of them match? With your 65,000 ancestors being
just 0.02% of the world’s couples, and your potential cousin’s ancestors also being that set, you would think it likely they don’t match.
Turns out that’s almost nil. Like the famous birthday paradox, where a room of 30 people usually has 2 who share a birthday, the probability there is no intersection in these large groups is quite
low. it is 99.9999% likely from these numbers that any given person is at least a 16th cousin. And 97.2% likely that they are a 15th cousin — but only 1.4% likely that they are an 11th cousin. It’s a
double exponential explosion. The rough formula used is that the probability of no match will be (1-2^C/P)^(2^C) where C is the cousin number and P is the total source population. To be strict this
should be done with factorials but the numbers are large enough that pure exponentials work.
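Plugging the numbers from the preceding paragraphs into that formula (a small Python sketch using the complement $1-(1-2^C/P)^{2^C}$):

```python
# Probability that two people share at least one ancestor-couple C+1
# generations back, modeling each person's 2^C couples as independent
# draws from a pool of P couples.
def cousin_probability(C, P=300_000_000):
    n = 2 ** C
    return 1 - (1 - n / P) ** n

p16 = cousin_probability(16)   # ~0.999999
p15 = cousin_probability(15)   # ~0.972
p11 = cousin_probability(11)   # ~0.014
```

The three values reproduce the 99.9999%, 97.2% and 1.4% figures quoted above.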
Now, of course, the couples are not selected at random, and nor are they selected from the whole world. For many people, their ancestors would have all lived on the same continent, perhaps even in
the same country. They might all come from the same ethnic group. For example, if you think that all the ancestors of the two people came from the half million or so Ashkenazi Jews of the 18th
century then everybody is a 10th cousin.
Many populations did not interbreed much, and in some cases of strong ethnic or geographic isolation, barely at all. There are definitely silos, and they sometimes existed in the same town, where
there might be far less interbreeding between races than within them. Over time, however, the numbers overwhelm even this. Within the close knit communities, like say a city of 50,000 couples who
bred mostly with each other, everybody will be a 9th cousin.
These numbers provide upper bounds. Due to the double exponential, even when you start reducing the population numbers due to out-breeding and expansion, it still catches up within a few generations.
This is just another measure of how we are all related, and also how meaningless very distant cousin relationships, like 10th cousins, are. As I’ve noted in other places, if you leave aside the
geographic isolation that some populations lived in, you don’t have to go back more than a couple of thousand years to reach the point where we are not just all related, but we all have the same
set of ancestors (ie. everybody who procreated) just arranged in a different mix.
The upshot of all this: If you discover that you share a common ancestor with somebody from the 17th century, or even the 18th, it is completely unremarkable. The only thing remarkable about it is
that you happened to know the path.
fourier transforms
December 31st 2007, 03:59 AM #1
Junior Member
Jun 2007
fourier transforms
if $B(x)$ is a scalar field in 3d space with Fourier transform $\widetilde{B}(p)$, and $x\rightarrow x'$ is a finite isometry (i.e. a rotation plus a translation, $x'=Rx-a$), what is the Fourier transform of $B(x')$?
I keep wanting to write $\widetilde{B'}(p)=e^{ip.x'}\widetilde{B}(p)$ (i.e., a change of phase) but I ain't so sure and I'm too dumb to prove it. Maybe I should say that doing this works for what I
intend it for.
A translation gives a change of phase, so if $B'(x) = B(x-a)$ then $\widehat{B'}(p) = e^{-ia\mathbf{.}p}\widehat{B}(p)$ (using a hat to denote the Fourier transform).
The Fourier transform commutes with rotations, so if $B'(x) = B(R(x))$ then $\widehat{B'}(p) = \widehat{B}(R(p))$.
Last edited by Opalg; January 2nd 2008 at 10:46 AM. Reason: corrected formula
Thanks very much for your reply. There is too much detail to go through the whole problem, but the thing about rotations leaves me with a problem in that it won't work for what I want. I'm not
saying you are wrong, I'm just stupid.
Can you prove it, or give me some pointers about how to? I would be grateful. I have had a thought about it myself, and my argument is as follows.
$B(x')=\int d^3p e^{ipx'}\overline{B}(p)=\int d^3 p e^{ipx'-ipx+ipx}\overline{B}(p)=$
$=\int d^3 p e^{ipx}e^{ip(x'-x)}\overline{B}(p)=\int d^3 p e^{ipx} \overline{B'}(p)$
with $\overline{B'}(p)=e^{ip(x'-x)}\overline{B}(p)$
with x --> x' a rotation or a translation. This isn't quite the same as what I thought in my first post, but it works. Unless I have done something completely wrong, I don't see what is wrong
with this?
Last edited by ppyvabw; January 1st 2008 at 03:12 PM.
ignore the above load of nonsense.
The thing is, I need a general expression in terms of x',B' and p', and not explicitly involving $\Lambda$ or a, if that makes sense.
I'm not sure what $\Lambda$ is (it hasn't been mentioned before). In fact, I'm not too happy with any of this notation, so I'll use my own.
I'll use x and p to denote points in 3-dimensional space: x will be the variable for B-space (the space on which the scalar field B is defined) and p will be the variable for $\widehat{B}$-space (the space on which the Fourier transform is defined).
Let T:x→x' be an isometry of B-space. Then T consists of a rotation R followed by a translation x→x–a. If you are looking for a formula for the Fourier transform of B(Tx) that doesn't mention R
and a explicitly, then I don't think you are going to find one, the reason being that the Fourier transform affects translations and rotations differently.
The Fourier transform of B is given by $\widehat{B}(p) = \int_{\mathbb{R}^3}B(x)e^{-ix.p}dx$, where x.p denotes the inner product of x and p.
To find the effect of a translation, we have to calculate the Fourier transform of B'(x)=B(x–a). This is done by making the substitution y=x–a in the integral:
. . . . . . $\begin{array}{rcl}\widehat{B'}(p) &=& \displaystyle \int_{\mathbb{R}^3}B(x-a)e^{-ix.p}dx \vspace{1ex} \\ &=& \displaystyle \int_{\mathbb{R}^3}B(y)e^{-i(y+a).p}dy \\ &=& \displaystyle
e^{-ia.p}\int_{\mathbb{R}^3}B(y)e^{-iy.p}dy = e^{-ia.p}\widehat{B}(p). \end{array}$
To find the effect of a rotation R, we have to calculate the Fourier transform of B'(x)=B(Rx). This is again done by making a substitution, namely y=R(x). This time, we have to worry about how to
express dy in terms of dx. For a general linear transformation R, the formula would be dy=Jdx, where J is the Jacobian of the transformation, i.e. the absolute value of the determinant of R. Since R is a rotation,
its determinant is 1, so in fact we just get dy=dx. We also need to use the fact that $x.Rp = R^{\text{\textsf{T}}}x.p$, where R^T is the transpose of R. This is also the inverse of R, so if $y=R
(x)$ then $x = R^{\text{\textsf{T}}}(y)$. If B'(x)=B(R(x)), the calculation then goes:
. . . . . . $\begin{array}{rcl}\widehat{B'}(p) &=& \displaystyle \int_{\mathbb{R}^3}B(R(x))e^{-ix.p}dx \vspace{1ex} \\ &=& \displaystyle \int_{\mathbb{R}^3}B(y)e^{-iR^{\text{\textsf{T}}}(y).p}dy
\vspace{1ex} \\ &=& \displaystyle \int_{\mathbb{R}^3}B(y)e^{-iy.R(p)}dy = \widehat{B}(R(p)). \end{array}$
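The two rules derived above are easy to sanity-check numerically. The sketch below uses NumPy's discrete Fourier transform as a periodic stand-in for the continuous transform: on a grid of N samples, a circular shift by a samples multiplies DFT bin k by $e^{-2\pi i ak/N}$, the discrete analogue of the $e^{-ia.p}$ factor. The grid size, shift, and random field are arbitrary demo choices.

```python
import numpy as np

# Check the translation rule: if B'(x) = B(x - a) on a periodic grid of N
# samples, then DFT(B')[k] = exp(-2*pi*1j*a*k/N) * DFT(B)[k].
rng = np.random.default_rng(0)
N, a = 64, 5
B = rng.standard_normal(N)

B_shifted = np.roll(B, a)  # B'(x) = B(x - a), taken periodically
phase = np.exp(-2j * np.pi * a * np.arange(N) / N)

assert np.allclose(np.fft.fft(B_shifted), phase * np.fft.fft(B))
```

A similar check for the rotation rule needs a 2-D grid and some care with index conventions, so only the translation rule is verified here.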
Yeah sorry, I meant R, not Lambda. Ok, thanks.
RGPV Syllabus for 4th Sem Mechanical Branch
B.E. 401 – ENGINEERING MATHEMATICS III
Unit I
Functions of complex variables : Analytic functions, Harmonic Conjugate, Cauchy-Riemann Equations, Line Integral, Cauchy’s Theorem, Cauchy’s Integral Formula, Singular Points, Poles & Residues,
Residue Theorem, Application of Residue Theorem for evaluation of real integrals
Unit II
Errors & Approximations, Solution of Algebraic & Transcendental Equations (Regula Falsi, Newton-Raphson, Iterative, Secant Method), Solution of simultaneous linear equations by Gauss Elimination, Gauss Jordan, Crout's methods, Jacobi's and Gauss-Seidel Iterative methods
Unit III
Difference Operators, Interpolation (Newton Forward & Backward Formulae, Central Interpolation Formulae, Lagrange's and divided difference formulae), Numerical Differentiation and Numerical Integration
Unit IV
Solution of Ordinary Differential Equations(Taylor’s Series, Picard’s Method, Modified Euler’s Method, Runge-Kutta Method, Milne’s Predictor & Corrector method ), Correlation and Regression, Curve
Fitting (Method of Least Square).
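A sketch of the classical fourth-order Runge-Kutta method from Unit IV, applied to the test problem dy/dt = −2y, y(0) = 1, whose exact solution is e^(−2t). The step size and test equation are illustration choices, not part of the syllabus.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -2.0 * y          # exact solution: y(t) = exp(-2t)
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):                # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, h)
    t += h

assert abs(y - math.exp(-2.0)) < 1e-4   # O(h^4) global error
```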
Unit V
Concept of Probability: Probability Mass Function, Probability Density Function. Discrete Distributions: Binomial, Poisson; Continuous Distributions: Normal Distribution, Exponential Distribution, Gamma Distribution, Beta Distribution. Testing of Hypothesis: Student's t-test, Fisher's z-test, Chi-Square Method
(i) Numerical Methods using Matlab by J.H.Mathews and K.D.Fink, P.H.I.
(ii) Numerical Methods for Scientific and Engg. Computation by MKJain, Iyengar and RK Jain, New Age International Publication
(iii) Mathematical Methods by KV Suryanarayan Rao, SCITECH Publication
(iv) Numerical Methods using Matlab by Yang,Wiley India
(v) Probability and Statistics by Ravichandran, Wiley India
(vi) Mathematical Statistics by George R., Springer
AU/IP/ME-402 Material Science and Metallurgy
Unit I
Crystal Atoms of Solid: Structure of the atom, binding in solids (metallic, van der Waals, ionic and covalent), space lattice and crystal systems, arrangement of atoms in BCC, FCC and HCP crystals. Manufacture of Refractory and Ferrous Metals: Properties, uses and selection of acid, basic and neutral refractories; metallurgical coke; properties, types, uses and brief description of the manufacturing processes for iron and steel making.
Unit II
Plastic Deformation of Metals: Point and line defects in crystals, their relation to mechanical properties; deformation of metals by slip and twinning; stress-strain curves of polycrystalline materials viz. mild steel, cast iron and brass; yield point phenomenon. Cold and hot working of metals and their effect on mechanical properties, annealing of cold-worked metals, principles of re-crystallization and grain growth phenomenon, fracture in metals and alloys, ductile and brittle fracture, fatigue failure
Unit III
Alloy Formation and Binary Diagrams: Phases in metal systems, solid solutions and inter-metallic compounds; Hume-Rothery's rules; solidification of pure metals and alloys; equilibrium diagrams of isomorphous, eutectic, peritectic and eutectoid systems; non-equilibrium cooling and coring; iron-iron carbide equilibrium diagram.
Unit IV
Heat Treatment of Alloys, Principles of Heat Treatment of Steel: TTT curves, heat treating processes, normalizing, annealing, spheroidizing, hardening, tempering, case hardening, austempering, martempering, precipitation hardening process with reference to Al and Cu alloys
Unit V
Properties of Material: Creep Fatigue etc., Introduction to cast iron and steel, Non Ferrous metals base alloys, Bronze, Brasses, Duralumin, and Bearing Metals. Plastics, Composites and ceramics:
Various types of plastics, their properties and selection. Plastic molding technology, FRP, GRP resins adhesive, elastomers and their application. Powder Metallurgy: Property and Applications of
Powder Metallurgy, Various process and methods of making products by powder Metallurgy techniques.
1. Narula GK, KS and GuptaVK; Material science; TMH
2. Raghavan V; Material Science and Engineering, PHI Publication.
3. Raghavan V; Physical Metallurgy Principles and Practice; PHI
4. Rajendran V and Marikani; Material science; TMH
5. Sriniwasan R; Engineering materials and Metallurgy; TMH
6. Navneet Gupta, Material Science & Engineering, Dhanpat Rai.
7. B. K. Agrawal, Introduction to Engineering Materials, TMH.
AU/IP/ME-403 Theory of M/C and Mechanism
Unit 1:
Mechanisms and Machines: Mechanism, machine, plane and space mechanisms, kinematic pairs, kinematic chains and their classification, degrees of freedom, Grubler’s criterion, kinematic inversions of
four bar mechanism and slider crank mechanism, equivalent linkages, pantograph, straight line motion mechanisms, Davis and Ackermann’s steering mechanisms, Hooke’s joint.
Unit 2:
Kinematic analysis of plane mechanisms using graphical and Cartesian vector notations: Planar kinematics of a rigid body, rigid body motion, translation, rotation about a fixed axis, absolute general
plane motion. General case of plane motion, relative velocity method, velocity and acceleration analysis, instantaneous center and its application, Kennedy’s theorem, relative motion, Coriolis
component of acceleration; velocity and acceleration analysis using complex algebra (Raven’s) method.
Unit 3:
Gears: Classification of gears, nomenclature, involute and cycloidal tooth profile properties, synthesis of tooth profile for spur gears, tooth systems, conjugate action, velocity of sliding, arc of contact, path of contact, contact ratio, interference and undercutting; helical, spiral, bevel and worm gears.
Unit 4:
Cams: Classification of followers and cams, radial cam nomenclature, analysis of follower motion (uniform, modified uniform, simple harmonic, parabolic, cycloidal), pressure angle, radius of
curvature, synthesis of cam profile by graphical approach, cams with specified contours.
Gear Trains: Simple, compound, epicyclic gear trains; determination of gear speeds using vector, analytical and tabular method; torque calculations in simple, compound and epicyclic gear trains.
Unit 5:
Gyroscopic Action in Machines: angular velocity and acceleration, gyroscopic torque/ couple; gyroscopic effect on naval ships; stability of two and four wheel vehicles, rigid disc at an angle fixed
to a rotating shaft
1. Rattan SS; Theory of machines; TMH
2. Ambekar AG; Mechanism and Machine Theory; PHI.
3. Sharma CS; Purohit K; Theory of Mechanism and Machines; PHI.
4. Thomas Bevan; Theory of Machines; Pearson/ CBS PUB Delhi.
5. Rao JS and Dukkipati; Mechanism and Machine Theory; NewAge Delhi.
6. Dr.Jagdish Lal; Theory of Machines; Metropolitan Book Co; Delhi -
7. Ghosh,A,.Mallik,AK; Theory of Mechanisms & Machines, 2e,; Affiliated East West Press,
List of experiments (expandable)
1. To study all inversions of four-bar mechanisms using models
2. Draw velocity and acceleration polygons of all moving link joints in slider crank mechanism
3. Determination of velocity and acceleration in above using method of graphical differentiation
4. To study working of differential gear mechanism.
5. To study working of sun and planet epicycle gear train mechanism using models
6. To plot fall and rise of the follower versus angular displacement of cam and vice versa.
7. Study of universal gyroscope
8. Analytical determination of velocity and acceleration in simple mechanisms using Raven's method.
ME-404 Thermal Engineering and Gas Dynamics
Unit I Steam generators: classification, conventional boilers, high-pressure boilers (La Mont, Benson, Loeffler and Velox steam generators), performance and rating of boilers, equivalent evaporation, boiler efficiency, heat balance sheet, combustion in boilers, supercritical boilers, fuel and ash handling, boiler draught, overview of boiler codes.
Unit II Phase Change Cycles: Vapor Carnot cycle and its limitations, Rankine cycle, effect of boiler and condenser pressure and superheat on end moisture and efficiency of the Rankine cycle, modified Rankine cycle, reheat cycle, perfect regenerative cycle, ideal and actual regenerative cycles with single and multiple heaters, open and closed types of feed water heaters, regenerative-reheat cycle, supercritical pressure and binary-vapor cycles, work done and efficiency calculations.
Unit III (A) Gas dynamics: speed of sound in a fluid, Mach number, Mach cone, stagnation properties, one-dimensional isentropic flow of ideal gases through variable-area ducts: Mach number variation, area ratio as a function of Mach number, mass flow rate and critical pressure ratio, effect of friction, velocity coefficient, coefficient of discharge, diffusers, normal shock.
(B) Steam nozzles: isentropic flow of vapors, flow of steam through nozzles, condition for maximum discharge, effect of friction, super-saturated flow.
Unit IV Air compressors: working of reciprocating compressors, work input for single-stage compression, different compression processes, effect of clearance, volumetric efficiency, real indicator diagram, isentropic, isothermal and mechanical efficiency, multi-stage compression, inter-cooling, condition for minimum work done, classification and working of rotary compressors.
Unit V Steam condensers, cooling towers and heat exchangers: introduction, types of condensers, back pressure and its effect on plant performance, air leakage and its effect on performance of condensers, various types of cooling towers, design of cooling towers, classification of heat exchangers, recuperators and regenerators, parallel flow, counter flow and cross flow exchangers, fouling factor, introduction to the LMTD approach to design a heat exchanger.
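The LMTD approach listed in Unit V reduces to a one-line formula once the terminal temperature differences of the exchanger are known. A minimal sketch, with made-up counter-flow temperatures (all values are illustrative, not from the syllabus):

```python
import math

def lmtd(dT1, dT2):
    """Log-mean temperature difference between the two terminal
    temperature differences of a heat exchanger."""
    if abs(dT1 - dT2) < 1e-12:
        return dT1                       # limit as dT1 -> dT2
    return (dT1 - dT2) / math.log(dT1 / dT2)

# Counter-flow example: hot stream 150 -> 90 C, cold stream 30 -> 70 C.
dT1 = 150 - 70   # hot inlet vs cold outlet  = 80 C
dT2 = 90 - 30    # hot outlet vs cold inlet  = 60 C
print(round(lmtd(dT1, dT2), 2))   # -> 69.52
```

The required area then follows from Q = U·A·LMTD once the duty Q and overall coefficient U are known.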
1. Nag PK; Power plant Engineering; TMH
2. Thermodynamics by Gordon J. Van Wylen
3. P.K.Nag; Basic and applied Thermodynamics; TMH
4. Ganesan; Gas turbines; TMH
5. Heat Engines by V.P. Vasandani & D. S. Kumar
6. R. Yadav Steam and Gas Turbines
7. R.Yadav Thermal Engg.
8. Kadambi & Manohar; An Introduction to Energy Conversion – Vol II: Energy Conversion Cycles
List of Experiments (expandable) (Thermal Engineering and Gas Dynamics):
1. Study of working of some of the high pressure boilers like La Mont or Benson
2. Study of induced draft, forced draft and balanced draft by chimney
3. Determination of calorific value of a fuel
4. Study of different types of steam turbines
5. Determination of efficiencies of condenser
6. Boiler trial to chalk out heat balance sheet
7. Determination of thermal efficiency of steam power plant
8. Determination of airflow in ducts and pipes
9. To find out efficiencies of a reciprocating air compressor and study of multistage compressors
10. Find out heat transfer area of a parallel flow/counter flow heat exchanger
ME-405 Fluid Mechanics
Unit-I Review of Fluid Properties: Engineering units of measurement, mass density, specific weight, specific volume and specific gravity, surface tension, capillarity, viscosity, bulk modulus of elasticity, pressure and vapor pressure. Fluid Statics: Pressure at a point, pressure variation in a static fluid, absolute and gauge pressure, manometers, forces on plane and curved surfaces (problems on gravity dams and Tainter gates); buoyant force, stability of floating and submerged bodies, relative equilibrium.
Unit-II Kinematics of Flow: Types of flow: ideal & real, steady & unsteady, uniform & non-uniform, one-, two- and three-dimensional flow; path lines, streak lines, streamlines and stream tubes; continuity equation for one- and three-dimensional flow; rotational & irrotational flow, circulation, stagnation point, separation of flow, sources & sinks, velocity potential, stream function, flow nets: their utility & method of drawing flow nets.
Unit-III Dynamics of Flow: Euler’s equation of motion along a streamline and derivation of Bernoulli’s equation, application of Bernoulli’s equation, energy correction factor, linear momentum
equation for steady flow; momentum correction factor. The moment of momentum equation, forces on fixed and moving vanes and other applications. Fluid Measurements: Velocity measurement (Pitot tube,
Prandtl tube, current meters etc.); flow measurement (orifices, nozzles, mouth pieces, orifice meter, nozzle meter, venturi-meter, weirs and notches).
Unit-IV Dimensional Analysis and Dynamic Similitude: Dimensional analysis, dimensional homogeneity, use of Buckingham-pi theorem, calculation of dimensionless numbers, similarity laws, specific model
investigations (submerged bodies, partially submerged bodies, weirs, spillways, rotodynamic machines etc.)
Unit-V Laminar Flow: Introduction to laminar & turbulent flow, Reynolds experiment & Reynolds number, relation between shear & pressure gradient, laminar flow through circular pipes, laminar flow
between parallel plates, laminar flow through porous media, Stokes law, lubrication principles.
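Stokes' law from Unit V gives a closed-form terminal velocity for a small sphere, v = 2r²(ρs − ρf)g / (9μ), which is also the basis of the terminal-velocity experiment in the list below. The property values here are nominal illustration numbers, and the formula only holds in the creeping-flow regime (Re ≪ 1):

```python
def stokes_terminal_velocity(r, rho_s, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of radius r (m) and
    density rho_s in a fluid of density rho_f and viscosity mu (Pa.s),
    valid only for creeping flow (Reynolds number << 1)."""
    return 2.0 * r**2 * (rho_s - rho_f) * g / (9.0 * mu)

# 10-micron steel sphere (7850 kg/m^3) in water (1000 kg/m^3, mu = 1e-3 Pa.s)
v = stokes_terminal_velocity(r=1e-5, rho_s=7850.0, rho_f=1000.0, mu=1.0e-3)
# v is roughly 1.49e-3 m/s, i.e. about 1.5 mm/s

# Sanity check that the creeping-flow assumption actually holds here
Re = 1000.0 * v * (2 * 1e-5) / 1.0e-3
assert Re < 1
```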
References: -
1. Modi & Seth; Fluid Mechanics; Standard Book House, Delhi
2. Streeter VL, Wylie EB, Bedford KW; Fluid Mechanics; TMH
3. Som and Biswas; Fluid Mechnics and machinery; TMH
4. Cengal; Fluid Mechanics; TMH
5. White ; Fluid Mechanics ; TMH
6. Gupta; Fluid Mechanics; Pearson
7. JNIK DAKE; Essential of Engg Hyd; Afrikan Network & Sc Instt. (ANSTI)
8. R Mohanty; Fluid Mechanics; PHI
List of Experiments (Pl. expand it):
1. To determine the local point pressure with the help of pitot tube.
2. To find out the terminal velocity of a spherical body in water.
3. Calibration of Orifice meter and Venturi meter
4. Determination of Cc, Cv, Cd of Orifices
5. Calibration of Nozzle meter and Mouth Piece
6. Reynolds experiment for demonstration of stream lines & turbulent flow
7. Determination of meta-centric height
Grading IVth Semester w.e.f.2011-12
8. Determination of Friction Factor of a pipe
9. To study the characteristics of a centrifugal pump.
10. Verification of Impulse momentum principle.
ME-406 Dot Net
UNIT I Introduction: .NET framework, features of the .NET framework, architecture and components of .NET, elements of .NET.
UNIT II Basic Features Of C# Fundamentals, Classes and Objects, Inheritance and Polymorphism, Operator Overloading, Structures. Advanced Features Of C# Interfaces, Arrays, Indexers and Collections;
Strings and Regular Expressions, Handling Exceptions, Delegates and Events.
UNIT III Installing the ASP.NET framework, overview of the ASP.NET framework, overview of CLR, class library, overview of ASP.NET controls, understanding HTML controls, study of standard controls, validation controls, rich controls. Windows Forms: All about Windows Forms, MDI forms, creating Windows applications, adding controls to forms, handling events, and using various tools
UNIT IV Understanding and handling control events, ADO.NET: Component Object Model, ODBC, OLEDB, and SQL; connected mode, disconnected mode, dataset, data reader. Database controls: Overview of data access data controls, using grid view controls, using details view and form view controls, ADO.NET data readers, SQL data source control, object data source control, site map data source.
UNIT V XML: Introducing XML, structure and syntax of XML, document type definition (DTD), XML Schema, Document Object Model, presenting and handling XML; XML data source, using navigation controls, introduction of web parts, using JavaScript, Web Services
1. C# for Programmers by Harvey Deitel, Paul Deitel, Pearson Education
2. Balagurusamy; Programming in C#; TMH
3. Web Commerce Technology Handbook by Daniel Minoli, Emma Minoli , TMH
4. Web Programming by Chris Bates, Wiley
5. XML Bible by Elliotte Rusty Harold ,
6. ASP .Net Complete Reference by McDonald, TMH.
7. ADO .Net Complete Reference by Odey, TMH
List of Experiments/ program (Pl. expand it):
1. Working with call backs and delegates in C#
2. Code access security with C#.
3. Creating a COM+ component with C#.
4. Creating a Windows Service with C#
5. Interacting with a Windows Service with C#
6. Using Reflection in C#
7. Sending Mail and SMTP Mail and C#
8. Perform String Manipulation with the String Builder and String Classes and C#:
9. Using the System .Net Web Client to Retrieve or Upload Data with C#
10. Reading and Writing XML Documents with the XML Text-Reader/-Writer Class and C#
11. Working with Page and forms using ASP .Net.
12. Data Sources access through ADO.Net,
13. Working with Data readers , Transactions
14. Creating Web Application.
Stability and Bifurcation in a Delayed Holling-Tanner Predator-Prey System with Ratio-Dependent Functional Response
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 384293, 19 pages
Research Article
^1Department of Science, Bengbu College, Bengbu, Anhui 233030, China
^2School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu, Anhui 233030, China
Received 9 November 2012; Revised 21 November 2012; Accepted 21 November 2012
Academic Editor: C. Conca
Copyright © 2012 Juan Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We analyze a delayed Holling-Tanner predator-prey system with ratio-dependent functional response. The local asymptotic stability and the existence of the Hopf bifurcation are investigated. Direction
of the Hopf bifurcation and the stability of the bifurcating periodic solutions are studied by deriving the equation describing the flow on the center manifold. Finally, numerical simulations are
presented for the support of our analytical findings.
1. Introduction
Predator-prey dynamics has long been and will continue to be of interest to both applied mathematicians and ecologists due to its universal existence and importance [1]. Although the early
Lotka-Volterra model has given way to more sophisticated models from both a mathematical and biological point of view, it has been challenged by ecologists for its functional response suffers from
paradox of enrichment and biological control paradox. The ratio-dependent models are discussed as a solution to these difficulties and found to be a more reasonable choice for many predator-prey
interactions [2–4]. One type of the ratio-dependent models which plays a special role in view of the interesting dynamics it possesses is the ratio-dependent Holling-Tanner predator-prey system [5, 6
]. A ratio-dependent Holling-Tanner predator-prey system takes the form of where and represent the population of prey species and predator species at time . It is assumed that in the absence of the
predator, the prey grows logistically with carrying capacity and intrinsic growth rate . The predator growth equation is of logistic type with a modification of the conventional one. The parameter represents
the maximal predator per capita consumption rate, and is the half capturing saturation constant. The parameter is the intrinsic growth rate of the predator and is the number of prey required to
support one predator at equilibrium, when equals . All the parameters are assumed to be positive.
Liang and Pan [6] established the sufficient conditions for the global stability of positive equilibrium of system (1.1) by constructing Lyapunov function. Considering the effect of time delays on
the system, Saha and Chakrabarti [7] considered the following delayed system where is the negative feedback delay of the prey. Saha and Chakrabarti [7] proved that the system (1.2) is permanent under
certain conditions and obtained the conditions for the local and global stability of the positive equilibrium. It is well known that studies on dynamical systems not only involve a discussion of
stability and persistence, but also involve many dynamical behaviors such as periodic phenomenon, bifurcation, and chaos [8–10]. In particular, the Hopf bifurcation has been studied by many authors [
11–13]. Based on this consideration and since both species are growing logistically, we consider the Hopf bifurcation of the following system with two delays: where and represent the negative
feedbacks in prey and predator growth.
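Because the displayed equations did not survive in this copy, the following sketch simulates the model from its verbal description only. It uses the standard ratio-dependent Holling-Tanner form with delayed logistic feedback in each species; all parameter values and the constant initial history are hypothetical, and the fixed-step Euler scheme is a rough way to explore trajectories, not the paper's method.

```python
import math

# Assumed functional form (standard ratio-dependent Holling-Tanner, not
# quoted from the paper, whose displayed equations are missing here):
#   x'(t) = r*x*(1 - x(t - tau1)/K) - m*x*y/(x + A*y)
#   y'(t) = s*y*(1 - h*y(t - tau2)/x)
r, K, m, A = 1.0, 2.0, 0.6, 0.5   # hypothetical prey/predation parameters
s, h = 0.4, 1.0                   # hypothetical predator parameters
tau1, tau2 = 0.5, 0.5             # hypothetical feedback delays
dt, T = 0.001, 50.0

n1, n2 = int(tau1 / dt), int(tau2 / dt)
steps = int(T / dt)
x = [1.0] * (steps + 1)   # x(t) = 1.0 on the initial history interval
y = [0.5] * (steps + 1)   # y(t) = 0.5 on the initial history interval

# Fixed-step Euler; the state is held at its history value over the first
# delay interval, a crude but serviceable start-up rule for a DDE sketch.
for k in range(max(n1, n2), steps):
    xk, yk = x[k], y[k]
    dx = r * xk * (1 - x[k - n1] / K) - m * xk * yk / (xk + A * yk)
    dy = s * yk * (1 - h * y[k - n2] / xk)
    x[k + 1] = xk + dt * dx
    y[k + 1] = yk + dt * dy

# For these (stable-regime) choices both populations should stay positive.
assert all(math.isfinite(v) and v > 0 for v in x + y)
```

Increasing the delays past the critical values analyzed below should, per the paper's result, replace convergence with sustained oscillations; a proper study would use a dedicated DDE integrator rather than this Euler loop.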
Before proceeding further we nondimensionalize our model system (1.3) with the following scaling , , , , . Then we get the nondimensional form of system (1.3): where , , .
This paper is organized as follows. In the next section, we consider the local stability of the positive equilibrium and the existence of Hopf bifurcation of system (1.4). In Section 3, we determine the direction of the Hopf bifurcations and the stability of the bifurcating periodic solutions. Some numerical simulations are also given to illustrate the theoretical predictions in Section 4.
2. Local Stability and Hopf Bifurcation
Considering the ecological significance of system (1.4), we are interested only in the positive equilibrium of system (1.4). It is not difficult to verify that system (1.4) has a unique positive
equilibrium , where , if : holds.
Let , , and we still denote and by and , respectively, then system (1.4) can be rewritten as where Then the linearized system of (2.1) is The characteristic equation of (2.3) is where
Case 1 (). Equation (2.4) reduces to If the condition : and holds, it is clear that roots of (2.6) must have negative real parts.
Case 2 (). Equation (2.4) becomes Let be a root of (2.7). Then, we have From (2.8), we can get If the condition : holds, then (2.9) has a unique positive root The corresponding critical value of time
delay is Next, differentiating (2.7) with respect to and substituting , then we get From (2.10) and (2.12), we have Therefore, if the condition : holds, then . Thus, we have the following results.
Theorem 2.1. For system (1.4), if the conditions hold, then the positive equilibrium of system (1.4) is asymptotically stable for and unstable when , system (1.4) undergoes a Hopf bifurcation at when
Case 3 (). Equation (2.4) becomes Let be a root of (2.14). Then, we get It follows that If the condition holds, then . Thus, (2.16) has a unique positive root , The corresponding critical value of
time delay is Similar as in Case 2, we know that if the condition holds, then we have In conclusion, we have the following results.
Theorem 2.2. For system (1.4), if the condition holds, then the positive equilibrium of system (1.4) is asymptotically stable for and unstable when , system (1.4) undergoes a Hopf bifurcation at when
Case 4 (). Equation (2.4) becomes Multiplying on both sides of (2.20), we have Let be a root of (2.21). Then, we get Then, we can get where Thus, we can obtain with Let , then (2.25) can be
transformed into the following form Next, we suppose that : (2.27) has at least one positive root. Without loss of generality, we suppose that it has four positive roots which are denoted as , , ,
and . Thus, (2.25) has four positive roots , . The corresponding critical value of time delay is Let , , .
Differentiating (2.21) regarding and substituting , we obtain where Obviously, if the condition : holds, then . Namely, the transversality condition is satisfied if holds. From the above analysis, we
have the following theorem.
Theorem 2.3. For system (1.4), if the conditions , , and hold, then the positive equilibrium of system (1.4) is asymptotically stable for and unstable when , system (1.4) undergoes a Hopf bifurcation
at when .
Case 5 ( and , ). We consider (2.4) with in its stable interval and is considered as a parameter. Let be the root of (2.4). Then we have where It follows that where Suppose that : (2.33) has at least
finite positive roots.
If the condition holds, we denote the roots of (2.33) as . Then, for every fixed , the corresponding critical value of time delay is Let . The corresponding purely imaginary roots of (2.33) are
denoted as . Next, we give the following assumption. : . Hence, we have the following theorem.
Theorem 2.4. Suppose that the conditions , , and hold and . The positive equilibrium of system (1.4) is asymptotically stable for and unstable when , system (1.4) undergoes a Hopf bifurcation at when
3. Direction and Stability of Bifurcated Periodic Solutions
In this section, we will employ the normal form method and center manifold theorem introduced by Hassard [14] to determine the direction of Hopf bifurcation and stability of bifurcating periodic
solutions of system (1.4) at .
We denote as , , , Then is the Hopf bifurcation value of system (1.4). For convenience, we first rescale the time by , , and still denote , then system (1.4) can be transformed to the following form:
where and are given by where , and with
Hence, by the Riesz representation theorem, there exists a matrix function whose elements are of bounded variation such that In fact, we choose where is the Dirac delta function, then (3.3) is
For , we define Then system (3.1) can be transformed into the following operator equation The adjoint operator of is defined by associated with a bilinear form where .
From the above discussion, we know that that are eigenvalues of and they are also eigenvalues of .
We assume that are the eigenvectors of belonging to the eigenvalue and are the eigenvectors of belonging to . Thus, Then, we can obtain
Next, we get the coefficients used in determining the important quantities of the periodic solution by using a computation process similar to that in [15]: with where and can be computed as the
following equations, respectively with Therefore, we can calculate the following values: Based on the discussion above, we can obtain the following results.
Theorem 3.1. The direction of the Hopf bifurcation is determined by the sign of : if , then the Hopf bifurcation is supercritical (subcritical). The stability of bifurcating periodic solutions is
determined by the sign of : if , the bifurcating periodic solutions are stable (unstable).
4. Numerical Example
In this section, to illustrate the analytical results obtained in the previous sections, we present some numerical simulations. Let , , , then we have the following particular case of system (1.4):
Obviously, . Thus, the condition holds. Then we can get the unique positive equilibrium of system (4.1). By a simple computation, , . Namely, the condition holds.
Firstly, we can obtain that . Namely, the condition is satisfied for , . Further, we have , . By Theorem 2.1, we can know that the positive equilibrium is asymptotically stable for and unstable when
. Let , then the positive equilibrium is asymptotically stable, which can be seen from Figure 1. When , it can be seen from Figure 2 that the positive equilibrium is unstable and a Hopf bifurcation
occurs. Similarly, we have , . For , the positive equilibrium is asymptotically stable from Theorem 2.2 and this property can be shown in Figure 3. If , the positive equilibrium is unstable and a
Hopf bifurcation occurs, and the corresponding waveform and phase plots are shown in Figure 4.
Secondly, we can get , , and for . From Theorem 2.3, we know that is asymptotically stable for , which can be illustrated by Figure 5. As can be seen from Figure 5 that when the positive equilibrium
is asymptotically stable. However, if , then the positive equilibrium becomes unstable and a family of bifurcated periodic solutions occur, which is illustrated by Figure 6. In addition, from (3.18),
we get , . Thus, by Theorem 3.1, we know that the Hopf bifurcation is supercritical and the bifurcated periodic solutions are stable.
Lastly, regard as a parameter and let , we can obtain that . Further we have . Let , we can know that the positive equilibrium is asymptotically stable from Theorem 2.4, which can be shown by Figure
7. When then the positive equilibrium becomes unstable and a Hopf bifurcation occurs, which can be illustrated in Figure 8.
5. Conclusion
In the present paper, a Holling-Tanner predator-prey system with a ratio-dependent functional response and two delays is investigated. We prove that the system is asymptotically stable under certain conditions. Compared with the system considered in [7], we consider not only the feedback delay of the prey but also that of the predator. By choosing a delay as the bifurcation parameter, we show that Hopf bifurcations occur as the delay crosses certain critical values. Furthermore, we find that the two species can coexist for suitable delays of the prey and the predator, which is valuable from the viewpoint of biology. In addition, Saha and Chakrabarti [7] considered only the stability of the system, whereas dynamical systems can exhibit other behaviors as well. Based on this consideration, we investigate the Hopf bifurcation and the properties of the bifurcated periodic solutions of the system. The direction and stability of the bifurcated periodic solutions are determined by applying normal form theory and the center manifold theorem. If the bifurcated periodic solutions are stable, then the two species may coexist in an oscillatory mode. Some numerical simulations supporting the theoretical results are also included.
Acknowledgments
The authors are grateful to the referees and the editor for their valuable comments and suggestions on the paper. This work is supported by the Anhui Provincial Natural Science Foundation under Grant no.
References
1. A. A. Berryman, "The origins and evolution of predator-prey theory," Ecology, vol. 73, no. 5, pp. 1530–1535, 1992.
2. C. Cosner, D. L. Deangelis, J. S. Ault, and D. B. Olson, "Effects of spatial grouping on the functional response of predators," Theoretical Population Biology, vol. 56, no. 1, pp. 65–75, 1999.
3. S. B. Hsu, T. W. Hwang, and Y. Kuang, "Rich dynamics of a ratio-dependent one-prey two-predators model," Journal of Mathematical Biology, vol. 43, no. 5, pp. 377–396, 2001.
4. Y. Kuang, "Rich dynamics of Gause-type ratio-dependent predator-prey system," Fields Institute Communications, vol. 21, pp. 325–337, 1999.
5. M. Haque and B. L. Li, "A ratio-dependent predator-prey model with logistic growth for the predator population," in Proceedings of the 10th International Conference on Computer Modeling and Simulation, pp. 210–215, University of Cambridge, Cambridge, UK, April 2008.
6. Z. Liang and H. Pan, "Qualitative analysis of a ratio-dependent Holling-Tanner model," Journal of Mathematical Analysis and Applications, vol. 334, no. 2, pp. 954–964, 2007.
7. T. Saha and C. Chakrabarti, "Dynamical analysis of a delayed ratio-dependent Holling-Tanner predator-prey model," Journal of Mathematical Analysis and Applications, vol. 358, no. 2, pp. 389–402, 2009.
8. X.-Y. Meng, H.-F. Huo, and H. Xiang, "Hopf bifurcation in a three-species system with delays," Journal of Applied Mathematics and Computing, vol. 35, no. 1-2, pp. 635–661, 2011.
9. Y.-H. Fan and L.-L. Wang, "Periodic solutions in a delayed predator-prey model with nonmonotonic functional response," Nonlinear Analysis: Real World Applications, vol. 10, no. 5, pp. 3275–3284, 2009.
10. L. Zhang and C. Lu, "Periodic solutions for a semi-ratio-dependent predator-prey system with Holling IV functional response," Journal of Applied Mathematics and Computing, vol. 32, no. 2, pp. 465–477, 2010.
11. S. Yuan and Y. Song, "Stability and Hopf bifurcations in a delayed Leslie-Gower predator-prey system," Journal of Mathematical Analysis and Applications, vol. 355, no. 1, pp. 82–100, 2009.
12. G.-P. Hu and X.-L. Li, "Stability and Hopf bifurcation for a delayed predator-prey model with disease in the prey," Chaos, Solitons & Fractals, vol. 45, no. 3, pp. 229–237, 2012.
13. F. Lian and Y. Xu, "Hopf bifurcation analysis of a predator-prey system with Holling type IV functional response and time delay," Applied Mathematics and Computation, vol. 215, no. 4, pp. 1484–1495, 2009.
14. B. D. Hassard, N. D. Kazarinoff, and Y. H. Wan, Theory and Applications of Hopf Bifurcation, Cambridge University Press, Cambridge, UK, 1981.
15. T. K. Kar and A. Ghorai, "Dynamic behaviour of a delayed predator-prey model with harvesting," Applied Mathematics and Computation, vol. 217, no. 22, pp. 9085–9104, 2011.
General Objections
Observational Evidence Weak
Response to Professor Skiff
Einstein's Equation
Why not build on established work?
The use of the SED approach
Waves or Frequency?
Observational Evidence Weak
Comment: Observational evidence in favor of the proposition that the speed of light is time-variable is so weak as to be practically non-existent. I think the proposition can be safely ignored,
simply on the grounds of lack of evidence. The interpretation of quantization as a fundamental problem for standard cosmology is also in error. Tifft's argument is that galactic redshifts have a
superimposed periodicity. If the redshift is caused by a Doppler effect, and if the matter distribution is periodic, then the redshifts will be periodic too, just as Tifft argues that they are
(although Tifft's results are not all that strong either). So, even if we accept the "quantization" (poor semantics; "periodicity" is better and more descriptive of what is actually observed), it
works just fine in a modified Big Bang cosmology (one has to find a way to construct a periodic mass distribution on large scales, which should not be an onerous task).
Setterfield: A lot of cosmologists and science journal editors didn't think so. Neither did those editors who commissioned major articles on the topic.
There are in fact periodicities as well as redshift quantisation effects. The periodicities are genuine galaxy-distribution effects. However, they all involve high redshift differences such as
repeats at z = 0.0125 and z = 0.0565. The latter value involves 6,200 quantum jumps of Tifft's basic value and reflects the large-scale structuring of the cosmos at around 850 million light-years. The
smaller value is around 190 million light-years. This is the approximate distance between super-clusters.
The point is that Tifft's basic quantum states still occur within these large-scale structures and have nothing to do with the size of galaxies or the distances between them. The lowest observed
redshift quantisation that can reasonably be attributed to an average distance between galaxies is the interval of 37.6 km/s that Guthrie and Napier picked up in our local supercluster. This
comprises a block of 13 or 14 quantum jumps and a distance of around 1.85 million light-years. It serves to show that basic quantum states below the interval of 13 quantum jumps have nothing to do
with galaxy size or distribution. Finally, Tifft has noted that there are red-shift quantum jumps within individual galaxies. This indicates that the effect has nothing to do with clustering.
(November 16, 1999.)
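As a rough consistency check on the figures quoted above, the redshifts can be converted to distances with Hubble's law, d = cz/H0. The value H0 ≈ 65 km/s/Mpc is an assumption chosen here only because it reproduces the quoted distances; dividing 37.6 km/s into 13 or 14 jumps likewise just recovers a basic step near 2.7-2.9 km/s:

```python
# Hedged arithmetic check of the quoted numbers; H0 is an assumed value.
C_KM_S = 299792.458        # speed of light, km/s
H0 = 65.0                  # Hubble constant, km/s per Mpc (assumed)
MLY_PER_MPC = 3.262        # million light-years per megaparsec

def distance_mly(z):
    """Hubble-law distance d = c*z/H0, in millions of light-years."""
    return (C_KM_S * z / H0) * MLY_PER_MPC

d_large = distance_mly(0.0565)   # ~850 million ly, as stated
d_small = distance_mly(0.0125)   # ~188 million ly ("around 190")
step_13 = 37.6 / 13              # ~2.89 km/s per quantum jump
step_14 = 37.6 / 14              # ~2.69 km/s per quantum jump
```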
Response to Professor Skiff
In Douglas Kelly's book, Creation and Change: Genesis 1.1 - 2.4 in the light of changing scientific paradigms (1997, Christian Focus Publications, Great Britain) a changing speed of light is
discussed in terms of Genesis. Endeavoring to present both sides of the variable light speed argument, he asked for a comment from Professor Frederick N. Skiff. Professor Skiff responded with a
private letter which Kelly published on pp. 153 and 154 of his book. The letter is quoted below and, after that, Barry Setterfield responds.
From Professor Frederick N. Skiff:
I see that Setterfield does indeed propose that Planck's constant is also changing. Therefore, the fine structure constant 'a' could remain truly constant and the electron velocity in the atom
could then change in a fashion proportional to the speed of light. His hypothesis is plausible.
My concern was that if you say 1) The speed of light is changing. And 2) The electron velocity in the atom is proportional to the speed of light, then you will generate an immediate objection
from a physicist unless you add 3) Planck's constant is also changing in such a way as to keep the fine structure 'constant' constant.
The last statement is not a small addition. It indicates that his proposal involves a certain relation between the quantum theory (in the atom) and relativity theory (concerning the speed of
light). The relation between these theories, in describing gravity, space and time, is recognized as one of the most important outstanding problems in physics. At present these theories cannot be
fully reconciled, despite their many successes in describing a wide range of phenomena. Thus, in a way, his proposal enters new territory rather than challenging current theory. Actually, the idea
has been around for more than a decade, but it has not been pursued for lack of proof. My concerns are the following:
The measurements exist over a relatively short period of time. Over this period of time the speed changes by only a small amount. No matter how good the fit to the data is over the last few
decades, it is very speculative to extrapolate such a curve over thousands of years unless there are other (stronger) arguments that suggest that he really has the right curve. The fact is that
there are an infinite number of mathematical curves which fit the data perfectly (he does not seem to realize this in his article). On the other hand, we should doubt any theory which fits the
data perfectly because we know that the data contain various kinds of errors (which have been estimated). Therefore the range of potential curves is even larger, because the data contain errors.
There is clearly some kind of systematic effect, but not one that can be extrapolated with much confidence. The fact that his model is consistent with a biblical chronology is very interesting,
but not conclusive (there are an infinite number of curves that would also agree with this chronology). The fact that he does propose a relatively well-known and simple trigonometric function is
also curious, but not conclusive.
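Skiff's point that infinitely many curves fit a finite data set exactly can be made concrete: given any interpolating curve L(x) through the points, adding a·∏(x − xᵢ) for any coefficient a yields another curve through exactly the same points. The sketch below uses made-up measurement years and values, not the actual c data:

```python
# Sketch of the "infinitely many curves" objection. Any Lagrange
# interpolant through the data, plus a * prod(x - x_i), still passes
# through every data point exactly, for every choice of a.
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def curve_family(xs, ys, a, x):
    prod = 1.0
    for xi in xs:
        prod *= (x - xi)
    return lagrange(xs, ys, x) + a * prod

# Hypothetical (made-up) measurement years and c values in km/s:
years = [1740.0, 1860.0, 1940.0, 1980.0]
c_vals = [299800.0, 299795.0, 299793.0, 299792.5]
```

Every choice of `a` reproduces the four data points exactly, yet the family members diverge arbitrarily between and beyond them, which is why extrapolation needs an independent argument for the curve's form.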
The theoretical derivation that he gives for the variation of the speed of light contains a number of fundamental errors. He speaks of Planck's constant as the quantum unit of energy, but it is
the quantum unit of angular motion. In his use of the conversion constant b he seems to implicitly infer that the 'basic' photon has a frequency of 1Hz, but there is no warrant for doing this.
His use of the power density in an electromagnetic wave as a way of calculating the rate of change of the speed of light will not normally come out of a dynamical equation which assumes that the
speed of light is a constant (Maxwell's Equations). If there is validity in his model, I don't believe that it will come from the theory that he gives. Unfortunately, the problem is much more
complicated, because the creation is very rich in phenomena and delicate in structure.
Nevertheless, such an idea begs for an experimental test. The problem is that the predicted changes seem to be always smaller than what can be resolved. I share some of the concerns of the second
respondent in the Pascal Notebook article.* One would not expect to have the rate of change of the speed of light related to the current state-of-the-art measurement (the graph of page 4 of
Pascal's Notebook**) unless the effect is due to bias. Effects that are 'only there when you are not looking' can happen in certain contexts in quantum theory, but you would not expect them in
such a measurement as the speed of light.
These are my concerns. I think that it is very important to explore alternative ideas. The community which is interested in looking at theories outside of the ideological mainstream is small and
has a difficult life. No one scientist is likely to work out a new theory from scratch. It needs to be a community effort, I think.
* A reference to "Decrease in the Velocity of Light: Its Meaning For Physics" in The Pascal Centre Notebook, Vol One, Number one, July, 1990. The second respondent to Setterfield's theory was Dr.
Wytse Van Dijk, Professor of Physics and Mathematics, Redeemer College, who asked (concerning Professor Troistskii's model of the slowing down of the speed of light): 'Can we test the validity of
Troitskii's model? If his model is correct, then atomic clocks should be slowing compared to dynamic clocks. The model could be tested by comparing atomic and gravitational time over several
years to see whether they diverge. I think such a test would be worthwhile. The results might help us to resolve some of the issues relating to faith and science." (p. 5.)
** This graph consists of a correlation of accuracy of measurements of speed of light c with the rate of change in c between 1740 and 1980.
Setterfield: During the early 1980's it was my privilege to collect data on the speed of light, c. In that time, several preliminary publications on the issue were presented. In them the data list
increased with time as further experiments determining c were unearthed. Furthermore, the preferred curve to fit the data changed as the data list became more complete. In several notable cases, this
process produced trails on the theoretical front and elsewhere which have long since been abandoned as further information came in. In August of 1987, our definitive Report on the data was issued as
The Atomic Constants, Light and Time in a joint arrangement with SRI International and Flinders University. Trevor Norman and I spent some time making sure that we had all the facts and data
available, and had treated it correctly statistically. In fact the Maths Department at Flinders Uni was anxious for us to present a seminar on the topic. That report presented all 163 measurements of
c by 16 methods over the 300 years since 1675. We also examined all 475 measurements of 11 other c-related atomic quantities by 25 methods. These experimental data determined the theoretical approach
to the topic. From them it became obvious that, with any variation of c, energy is going to be conserved in all atomic processes. A best fit curve to the data was presented.
In response to criticism, it was obvious the data list was beyond contention - we had included everything in our Report. Furthermore, the theoretical approach withstood scrutiny, except on the two
issues of the redshift and gravitation. The main point of contention with the Report has been the statistical treatment of the data, and whether or not these data show a statistically significant
decay in c over the last 300 years. Interestingly, all professional statistical comment agreed that a decay in c had occurred, while many less qualified statisticians claimed it had not! At that
point, a Canadian statistician, Alan Montgomery, liaised with Lambert Dolphin and me, and argued the case well against all comers. He presented a series of papers which have withstood the criticism
of both the Creationist community and others. From his treatment of the data it can be stated that c decay (cDK) [note: this designation has since been changed to 'variable c' or Vc] has at least
formal statistical significance.
However, Zero Point Energy and the Redshift takes the available data right back beyond the last 300 years. In so doing, a complete theory of how cDK occurred (and why) has been developed in a way
that is consistent with the observational data from astronomy and atomic physics. In simple terms, the light from distant galaxies is redshifted by progressively greater amounts the further out into
space we look. This is also equivalent to looking back in time. As it turns out, the redshift of light includes a signature as to what the value of c was at the moment of emission. Using this
signature, we then know precisely how c (and other c-related atomic constants) has behaved with time. In essence, we now have a data set that goes right back to the origin of the cosmos. This has
allowed a definitive cDK curve to be constructed from the data and ultimate causes to be uncovered. It also allows all radiometric and other atomic dates to be corrected to read actual orbital time,
since theory shows that cDK affects the run-rate of these clocks.
A very recent development on the cDK front has been the London Press announcement on November 15th, 1998, of the possibility of a significantly higher light-speed at the origin of the cosmos. I have
been privileged to receive a 13 page pre-print of the Albrecht-Magueijo paper (A-M paper) which is entitled "A time varying speed of light as a solution to cosmological puzzles". From this
fascinating paper, one can see that a very high initial c value really does answer a number of problems with Big Bang cosmology. My main reservation is that it is entirely theoretically based. It may
be difficult to obtain observational support. As I read it, the A-M paper requires c to be at least 10^60 times its current speed from the start of the Big Bang process until "a phase transition in c
occurs, producing matter, and leaving the Universe very fine-tuned ...". At that transition, the A-M paper proposes that c dropped to its current value. By contrast, the redshift data suggests that
cDK may have occurred over a longer time. Some specific questions relating to the cDK work have been raised. Penny wrote to me that someone had suggested "that the early measurements of c had such
large probable errors attached, that (t)his inference of a changing light speed was unwarranted by the data." This statement may not be quite accurate, as Montgomery's analysis does not support this
conclusion. However, the new data set from the redshift resolves all such understandable reservations.
There have been claims that I 'cooked' or mishandled the data by selecting figures that fit the theory. This can hardly apply to the 1987 Report as all the data is included. Even the Skeptics
admitted that "it is much harder to accuse Setterfield of data selection in this Report". The accusation may have had some validity for the early incomplete data sets of the preliminary work, but I
was reporting what I had at the time. The rigorous data analyses of Montgomery's papers subsequent to the 1987 Report have withstood all scrutiny on this point and positively support cDK. However,
the redshift data in the forthcoming paper overcomes all such objections, as the trend is quite specific and follows a natural decay form unequivocally.
Finally, Douglas Kelly's book Creation and Change contained a very fair critique on cDK by Professor Fred Skiff. However, a few comments may be in order here to clarify the issue somewhat. Douglas
Kelly appears to derive most of his information from my 1983 publication "The Velocity of Light and the Age of the Universe". He does not appear to reference the 1987 Report which updated all
previous publications on the cDK issue. As a result, some of the information in this book is outdated. In the "Technical And Bibliographical Notes For Chapter Seven" on pp.153-155 several corrections
are needed as a result. In the paragraph headed by "1. Barry Setterfield" the form of the decay curve presented there was updated in the 1987 Report, and has been further refined by the redshift work
which has data back essentially to the curve's origin. As a result, a different date for creation emerges, one in accord with the text that Christ, the Apostles and Church Fathers used. Furthermore
this new work gives a much better idea of the likely value for c at any given date. The redshift data indicate that the initial value of c was (2.54 x 10^10) times the speed of light now. This
appears conservative when compared with the initial value of c from the A-M paper of 10^60 times c now.
Professor Skiff then makes several comments. He suggests that cDK may be acceptable if "Planck's constant is also changing in such a way as to keep the fine structure 'constant' constant." This is in
fact the case as the 1987 Report makes clear.
Professor Skiff then addresses the problem of the accuracy of the measurements of c over the last 300 years. He rightly points out that there are a number of curves which fit the data. Even though
the same comments still apply to the 1987 Report, I would point out that the curves and data that he is discussing are those offered in 1983, rather than those of 1987. It is unfortunate that the
outcomes of the more recent analyses by Montgomery are not even mentioned in Douglas Kelly's book.
Professor Skiff is also correct in pointing out that the extrapolation from the 300 years data is "very speculative". Nevertheless, geochronologists extrapolate by factors of up to 50 million to
obtain dates of 5 billion years on the basis of less than a century's observations of half-lives. However, the Professor's legitimate concern here should be largely dissipated by the redshift results
which take us back essentially to the origin of the curve and define the form of that curve unambiguously. The other issue that the Professor spends some time on is the theoretical derivation for
cDK, and a basic photon idea which was used to support the preferred equation in the 1983 publication. Both that equation and the theoretical derivation were short-lived. The 1987 Report presented
the revised scenario. The upcoming redshift paper has a completely defined curve, that has a solid observational basis throughout. The theory of why c decayed along with the associated changes in the
related atomic constants, is rooted firmly in modern physics with only one very reasonable basic assumption needed. I trust that this forthcoming paper will be accepted as contributing something to
our knowledge of the cosmos.
Professor Skiff also refers to the comments by Dr. Wytse Van Dijk who said that "If (t)his model is correct, then atomic clocks should be slowing compared to dynamical clocks." This has indeed been
observed. In fact it is mentioned in our 1987 Report. There we point out that the lunar and planetary orbital periods, which comprise the dynamical clock, had been compared with atomic clocks from
1955 to 1981 by Van Flandern and others. Assessing the evidence in 1984, Dr. T. C. Van Flandern came to a conclusion. He stated that "the number of atomic seconds in a dynamical interval is becoming
fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing with respect to dynamical phenomena ..." This is the observational evidence that Dr. Wytse Van
Dijk and Professor Skiff required. Further details of this assessment by Van Flandern can be found in "Precision Measurements and Fundamental Constants II", pp.625-627, National Bureau of Standards
(US) Special Publication 617 (1984), B. N. Taylor and W. D. Phillips editors.
In conclusion, I would like to thank Fred Skiff for his very gracious handling of the cDK situation as presented in Douglas Kelly's book. Even though the information on which it is based is outdated,
Professor Skiff's critique is very gentlemanly and is deeply appreciated. If this example were to be followed by others, it would be to everyone's advantage.
Einstein's Equation
Comment: As for the physical problems with the c-decay model, probably the easiest refutation for the layman to understand invokes probably the only science equation that is well known by all, e
= m c^2. Let us imagine, if you will, that we have doubled the speed of light [the c constant]. That would increase e by a factor of 4. The heat output of the sun would be 4 times as hot. And you
thought we had a global warming problem now. In other words, if the speed of light was previously higher (and especially if it was exponentially higher), the earth would've been fried a long time
ago and no life would have been able to exist.
Setterfield: In the 1987 Report which is on these web pages, we show that atomic rest masses "m" are proportional to 1/(c^2). Thus when c was higher, rest masses were lower. As a consequence the
energy output of stars etc from the (E = m c^2) reactions is constant over time when c is varying. Furthermore, it can be shown that the product Gm is invariant for all values of c and m. Since all
the orbital and gravitational equations contain G m, there is no change in gravitational phenomena. The secondary mass in these equations appears on both sides of the equation and thereby drops out
of the calculation. Thus orbit times and the gravitational acceleration g are all invariant. This is treated in more detail in General Relativity and the ZPE.
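The scaling claim in this response can be checked mechanically. The sketch below encodes the stated proportionalities (m ∝ 1/c², and G ∝ c² so that Gm stays fixed) purely as assumptions taken from the text and verifies that E = mc² then comes out constant; the numerical values are illustrative, and nothing here derives the scalings themselves.

```python
# Encoding the response's claimed scalings and checking the stated
# invariances. This only verifies their arithmetic consequences.
C_NOW = 2.99792458e8     # m/s, current speed of light
M_NOW = 9.109e-31        # kg (illustrative rest mass)
G_NOW = 6.674e-11        # m^3 kg^-1 s^-2

def rest_mass(c):
    """Claimed scaling: m proportional to 1/c^2."""
    return M_NOW * (C_NOW / c) ** 2

def grav_constant(c):
    """Claimed scaling: G proportional to c^2, so G*m is invariant."""
    return G_NOW * (c / C_NOW) ** 2

def rest_energy(c):
    return rest_mass(c) * c ** 2   # E = m c^2

# Doubling c quarters the rest mass, leaving E = m c^2 and G*m fixed.
```

Under these assumed scalings the commenter's "factor of 4" in E = mc² is exactly cancelled by the factor of 1/4 in m, which is the point of the response.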
Why not build on established work?
Comments: I quote the eminent gravitation theorist Charles Misner, intending no offense in quoting his terminology. He wrote:
"By correspondence principles, here, I mean limiting relationships between one theory and another--the fact that general relativity grew out of Newtonian mechanics and special relativity and
bears formal mathematical relations to them by which it can reproduce those theories in suitable circumstances and limits....
It is very characteristic of improper theories to be deficient in correspondence principles. Relativists see many `crackpot' theories; people write letters to relativists proposing why special
relativity is wrong because they can rethink the Michelson-Morley experiment, or the Lorentz contraction, from some other viewpoint. The reason one regards most such proposals as `crackpot' is
that they are not born within the milieu of evolving physical theory; they do not have roots and branches reaching out, securing them into the other, more firmly established, theories of physics.
One knows that if he begins working on such a theory he will have to reconstruct his whole world view, rather than just revise a current line of development. Every experiment or observation from
centuries past becomes a possible crucial test of the `crackpot' theory, because current standard theories, and the previous theories they improve upon, are not incorporated into the newly
proposed theory by correspondence principles. Thus one makes a demand that any theory be at most `conventionally revolutionary,' rather than `crackpot,' and this demand is based on a requirement
for thorough confrontation with experiment. The conventionally revolutionary theory (such as special relativity at its inception) may discard previously fundamental concepts and change the basic
laws, but it does so in a way that leaves its testing against all previously satisfactory experiments under the control of the previous theory whose domain of authority or validity it clarifies.
Thus most of the discussion of a conventionally revolutionary theory properly focuses on the small group of experiments where it differs from the previous theory. The `crackpot' theory usually
directs its attention to a similar small group of critical experiments, but this limitation is now unjustified since the crackpot theory is--in my use of the word here--by definition deficient in
correspondence principles and must, for an adequate testing, also discuss a host of experiments that were nonproblematic in standard theories with which the new theory has lost touch."
C. Misner, (pp. 84-85), "Cosmology and Theology", in Cosmology, History, and Theology, ed. W. Yourgrau and A. D. Breck, Plenum, 1977.
The task for Setterfield's notion of VSL [variable speed of light] is to acquire the sorts of correspondence principles that Misner discusses, and that will involve formulating the theory like a
relativistic field theory. Since people like Magueijo, Visser, and Moffat have shown ways to do that, why not use their work? At that point one would make contact with current experiments. The
rapid light speed decay that Setterfield posits would then, I expect, be empirically falsified, but that would still be progress.
Setterfield: The main thrust of these comments is that if new work does not build on the bases established by general and special relativity, then it must be an "improper" theory. I find this
attitude interesting as a similar view has been expressed by a quantum physicist. This physicist has accepted and taught quantum electrodynamics (QED) for many years and has been faithful in his
proclamation of QED physics. However, when he was presented with the results of the rapidly developing new interpretation of atomic phenomena based on more classical lines called SED physics, he
effectively called it an improper theory since it did not build on the QED interpretation. It did not matter to him that the results were mathematically the same, though the interpretation of those
results was much more easily comprehensible. It did not matter to him that the basis of SED was anchored firmly in the work of Planck, Einstein and Nernst in the early 1900's, and that many important
physicists today are seriously examining this new approach. It had to be incorrect because it did not build on the prevailing paradigm.
I feel that the above comments may perhaps be in a similar category. The referenced quote implies that this lightspeed work does "not have roots and branches reaching out, securing them into the
other, more firmly established, theories of physics." However, I have gone back to the basics of physics and built from there. But if by the basics of physics one means general and special
relativity, I admit guilt. However, there is a good reason that I do not build on special or general relativity and use the types of equations those formalisms employ. In most of the work using those
equations, the authors put lightspeed, c, and Planck's constant, h, equal to unity. Thus at the top of their papers or implied in the text is the equation: c=ħ=1. Obviously, in a scenario where both
c and h are changing, such equations are inappropriate. Instead, what the lightspeed work has done is to go back to first principles and basic physics, such as Maxwell's equations, and build on that
rather than on special or general relativity. This also makes for much simpler equations. Why complicate the issue when it can be done simply?
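The point about natural units can be made concrete. When a paper sets c = ħ = 1, those constants vanish from its formulas and only reappear when converting back to SI units, so such formulas leave no slot in which a varying c or h could show up. A minimal illustration using the electron's reduced Compton wavelength (standard constant values):

```python
# With c = hbar = 1, the electron's reduced Compton wavelength is
# simply 1/m; the constants only return on conversion to SI, which is
# why formulas written with c = hbar = 1 presuppose fixed c and hbar.
HBAR = 1.054571817e-34    # J*s
C = 2.99792458e8          # m/s
M_E = 9.1093837015e-31    # kg

lambda_si = HBAR / (M_E * C)        # SI formula: lambda = hbar/(m c)
m_inverse_length = M_E * C / HBAR   # mass expressed in 1/metres
lambda_natural = 1.0 / m_inverse_length   # "lambda = 1/m"
```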
There is a further reason. With significant changes to c and h, it may mean that general relativity should be re-examined. A number of serious scientists have thought this way. For example, SED
physics is providing a theory of gravity which is already unified with the rest of physics. This approach employs very different equations to those of general relativity. A second example is Robert
Dicke, who, in 1960, formulated his scalar-tensor theory of gravity as a result of observational anomalies. This Brans-Dicke theory became a serious contender against general relativity up until 1976
when it was disproved on the basis of a prediction. Note, however, that the original anomalous data that led to the theory still stand; the anomaly still exists in measurements today, and it is not
accounted for by general relativity. A third illustration comes from 2002. In this last year, over 50 papers addressing the matter of lightspeed and relativity have been accepted for publication by
serious journals. These facts alone indicate that the last word has not been spoken on this matter. It is true that the 2002 papers have been tinkering around the edges of relativity. Perhaps the
whole issue needs an infusion of new thinking in view of the changing values of c and this other anomalous data, despite the comments of Misner. For these three reasons I am reluctant to dance
with the existing paradigm and utilize those equations which may be an incomplete description of reality.
Therefore, I plead guilty in that I am not following the path dictated by relativity. But this does not necessarily prove that I am wrong, any more than SED physics is wrong, or that Brans and Dicke
were wrong in trying to find a theory to account for the observational anomalies. On the basis of common sense and the history of scientific endeavor, I therefore feel that the "requirement"
presented above may legitimately be ignored.
Comments: I suspect that your QED physicist is not fully convinced that they are mathematically equivalent. From my skimming of L. de la Peña and A. M. Cetto, "Quantum Theory and Linear
Stochastic Electrodynamics", Foundations of Physics v. 31 (#12): pp.1703-1731, December 2001, that question is controversial, and its answer varies between stochastic electrodynamics and their
new linear stochastic electrodynamics. If SED, LSED, or the like really is mathematically equivalent to QED, and if it does restore classical properties substantially, then 1. I will be happy,
and 2. the foundations of physics community will be eager to learn the consequences for resolving the QM measurement problem, explaining EPR correlations, and the like. I doubt that this
equivalence has been proved, and if it has, that fact has not become widely known. I found nothing on stochastic electrodynamics at www.arxiv.org, by the way, which is curious. SED advocates, who
might have trouble with journal peer review, should like the arxiv a great deal. I am acquainted with Bohm's work, which is catching on in some circles and has a credible claim on empirical
equivalence with quantum mechanics.
[In addition,] If you have gone back to, say, 1905, then you have missed almost a century of experiments.
The statements c=ħ=1 are just conventions, and so cannot possibly exclude the expression of physical claims that some ostensible constants are varying. The statement "the speed of light is
changing" (relative to measurements made with meter sticks and stopwatches, I assume) can be expressed as something like "the metric tensor appearing in the Lagrangian density for
electromagnetism is not conformally related to the metric tensor appearing in the matter Lagrangian density". Formalisms for doing this kind of thing are being published today in relativistic
classical field theoretic form--the language of the current state of this sort of physics--by Drummond, Magueijo, Visser, Moffat, and the like. Without expressing a theory in a form like this,
one fails to establish the correspondence principles that Misner wants. As a result everyone's time is wasted, especially yours, because you cannot be confident that your theory is consistent
with physical experiments, even old ones, that all standard theories satisfy as a matter of course.
You need to show that an empirically adequate story simpler than special and general relativity exists. But does it? I doubt it. I especially doubt that one can know that without working in the
language of these theories in order to express the alternative theories in a form manifestly comparable to them. One cannot even read Clifford Will's Theory and Experiment in Gravitational
Physics to know the current state of gravitational experiments, unless one uses a formalism that looks like classical relativistic field theory, general relativity, and the like.
Actually Brans-Dicke is still viable if you turn the knobs appropriately. The experiments in question had to do with solar oblateness measurements and models. But Brans-Dicke will not help here,
because it is, as Misner puts it, conventionally revolutionary, not crackpot. It is quite clear what the relation between Brans-Dicke gravity and general relativity is, and how to get Brans-Dicke
experiments to agree in some limit with GR. (One can't even express Brans-Dicke theory in a form that looks very different from general relativity!) That is just what I want to see for c-decay.
The following response was posted as part of a general discussion of the material by a third person:
Misner was quoted as stating "By correspondence principles, here, I mean limiting relationships between one theory and another--the fact that general relativity grew out of Newtonian mechanics
and special relativity and bears formal mathematical relations to them by which it can reproduce those theories in suitable circumstances and limits...."
As you know, the correspondence principle between General Relativity, QM and classical Newtonian mechanics is satisfied as h --> 0 & c --> infinity. So Barry's hypothesis of a varying 'c' is not
necessarily disjointed from solid physics.
Also, as long as Planck's constant 'h' is not constant but also varies "contravariantly" with 'c,' such that the product 'hc' is constant and as long as 'G' and 'e' don't vary then the total
mass-energy of any SSM type universe would remain conserved.
However there appears to be a difficulty with intra-system energy conservation across time. Perhaps a closer reading of Setterfield's material will shed some light on this matter.
I'm in sympathy with Setterfield regarding SED versus QED. Moreover, having known Dirac and his enormous antipathy toward some of the outrageous mathematical license taken in QED, I'm almost
positive that Dirac would concur.
However, much to my dismay, SED, the Casimir effect and so forth, appear to have been hijacked by new-agers and the rest of the free-energy-flying-saucer-extra-terrestrial-crowd based on the
left coast.
Setterfield: "Science must not neglect any anomaly but give nature's answers to the world humbly and with courage." [Sir Henry Dale, President of the Royal Society, 1981]
By neglecting the anomalies associated with the dropping speed of light values, and neglecting the anomalies associated with mass measurements of various sorts and the problems with quantum
mechanics, those adhering to relativistic theory have left themselves open to the charge that relativity has become theory-based rather than observationally-based.
Thanks [to the second correspondent] for your summation of the situation, which is largely correct. The evidence does indeed suggest that h tends to zero and c tends to infinity as one approaches
earlier epochs astronomically. At the same time, the product hc has been experimentally shown to be invariant over astronomical time, just as you indicated. These experimental results, the
theoretical approach that incorporated them, and their effects on the atom, were thrashed out in the 1987 Report. These ideas have been subsequently developed further in Exploring the Vacuum. Later,
Reviewing the Zero Point Energy refined this further. In this way the correspondence principle has been upheld from its inception, and I had thought that this part of the debate was substantially
over. However, those unfamiliar with the 1987 Report would not be expected to know this and may wonder about its validity as a result.
As far as intra-system energy conservation is concerned, that issue was partly addressed in the 1987 Report where it was shown that atomic processes were such that energy was conserved in the system
over time, and in more detail in the main 2001 paper. The outcome was hinted at in the Vacuum paper in the context of a changing zero-point energy and a quantized redshift. However, another paper
dealing with these specific matters is proposed.
Finally, I, too, am dismayed by the hijacking of SED work by the new-agers, but that should not cloud the valid physics involved.
Comment: Setterfield wrote: "Thanks for your summation of the situation, which is largely correct. The evidence does indeed suggest that h tends to zero and c tends to infinity as one approaches
earlier epochs astronomically."
How does this square with the work of (I think it was) Webb et al, that get a fraction of a percent change in alpha=e^2/(ħ*c)? What are the consequences of a changing e, instead of a changing c?
Furthermore, as already said:
"Also, as long as Planck's constant 'h' is not constant but also varies 'contravariantly' with 'c,' such that the product 'hc' is constant and as long as 'G' and 'e' don't vary then the total
mass-energy of any SSM type universe would remain conserved."
in response to this: if alpha is varying (given the evidence above, which I am not yet comfortable asserting to be absolutely true), then it is possible that e is in fact varying.
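For reference, the constraint being discussed can be written out explicitly (a sketch using the comment's own form, alpha = e^2/(ħc), so factors of 4πε₀ are absorbed into the units):

```latex
\alpha = \frac{e^{2}}{\hbar c}
\quad\Longrightarrow\quad
\frac{\Delta\alpha}{\alpha} = 2\,\frac{\Delta e}{e} - \frac{\Delta(\hbar c)}{\hbar c}.
```

If the product hc is invariant, the second term vanishes, and any measured change in alpha must then be carried entirely by e (or, in SI form, by the permittivity), which is exactly the possibility raised above.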
From another person:
Comment: I would be interested in getting references to the evidence that suggests that "h tends to zero and c tends to infinity as one approaches earlier epochs astronomically." The latter would
indicate that high energy physics is governed by classical rather than quantum mechanics at extreme temperatures and densities. Some time ago I developed a model of particle structure where the
statistics governing the dynamics was classical. I successfully applied this model to the study of the very early universe.
Setterfield: The key issue which is raised here concerns the work of Webb et al that indicated there was a change in alpha, the fine structure constant, by about one part in 100,000. A couple of
points. The first problem is that this result is very difficult to disentangle from redshifted data. One first has to be sure that this change is separate from anything the redshift has produced. The
second potential difficulty is that all the data has been collected from only one observatory and may be an artifact of the instrumentation. This latter difficulty will soon be overcome as other
instruments are scheduled to make the observations as well. That will be an important test. The third point is that observational evidence, some of which is listed in the 1987 Report, indicates that
the product hc is invariant with time. This only leaves the electronic charge, e, or the permittivity of free space, epsilon, to be the quantities giving any actual variation in alpha, unless alpha
itself is changing. However, this whole situation appeals to my sense of humor. Physicists are getting excited over a suspected change of 1 part in 100,000 in alpha over the lifetime of the universe,
but ignore a change of greater than 1% in c that has been measured by a variety of methods over 300 years.
It was noted that the results of the variable c (Vc) research applied to the early universe might “indicate that high energy physics is governed by classical rather than quantum mechanics at extreme
temperatures and densities.” In response, it is fair to say that I have not investigated that possibility. What this research is showing is that the basic cause of all the changes in atomic constants
can be traced to an increase with time in the energy density of the Zero-Point Energy (ZPE). Thus the ZPE was lower at earlier epochs. This has a variety of consequences which are being followed
through in this series of papers, of which the Vacuum paper is the first. One consequence of a lower energy density for the ZPE is a higher value for c in inverse proportion as the permittivity and
permeability of space are directly linked with the ZPE. Another consequence concerns Planck’s constant h. Planck, Einstein, Nernst and others have shown mathematically that the value of h is a
measure of the strength of the ZPE. Therefore, any change in the strength of the ZPE with time also means a proportional change in the value of h. The systematic increase in h which has been measured
over the last century as outlined in the 1987 Report implies an increase in the strength of the ZPE. Thus the invariance of hc is also explicable. But a lower value for h means that quantum
uncertainty is also less in those epochs. This in turn means that atomic particle behaviour was more classical in the early days of the cosmos. This result seems to be independent of the temperature
and density of matter, but does not deny the possibility of other effects. The final matter that the Vacuum paper addresses is the cause for the increasing strength of the ZPE. The work of Gibson
allows it to be traced to the turbulence in the ‘fabric of space’ at the Planck length level induced by the expansion of the cosmos.
The use of the SED approach
Comments: Having read your recent paper for the Journal of Theoretics, and noticing the strong reliance of your thesis upon the tenets of Stochastic Electrodynamics, any of the following may be
of interest:
R. Ragazzoni, M. Turatto, & W. Gaessler, "Lack of observational evidence for quantum structure of space-time at Planck scales,"http://www.arxiv.org/ e-Print archive, astro-ph/0303043;
R. Lieu & L. Hillman, "The phase coherence of light from extragalactic sources - direct evidence against first order Planck scale fluctuations in time and space," http://www.arxiv.org/ e-Print
archive, astro-ph/0301184;
R. Lieu & L. W. Hillman, "Stringent limits on the existence of Planck time from stellar interferometry," http://www.arxiv.org/ e-Print archive, astro-ph/0211402.
It appears that observational evidence precludes the Planck-scale "partons" on which H. Puthoff based his rationale for the conservation of electron orbitals (and therefore on which you base your
hypothesis of a time dependent decrease in c).
One consequence of these results is that the speed of light in vacuo must have been constant to one part in 10^32. It may be that your notion of "stretching out the heavens" is incorrect and
should be modified. Perhaps one of the "superfluid aether" models would offer a better foundation for your hypothesis than the SED approach to which you seem to have become enamored.
Setterfield: Many thanks for bringing these papers to my attention. However, they do not pose the problem to the SED approach and/or the Variable c (Vc) model that the questioner supposes. Basically,
on the Vc model, quantum uncertainty is less at the inception of the cosmos because Planck's constant times the speed of light is invariable. Therefore, when the speed of light was high, quantum
uncertainty was lower. But other issues are also raised by the papers referred to above. Therefore, let us take this a step at a time.
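The inverse scaling invoked here follows in one line from the invariance claim (a sketch of the model's own relation, not new physics):

```latex
h c = k \;(\text{constant}) \quad\Longrightarrow\quad h = \frac{k}{c},
```

so on this model a higher value of c at early epochs entails a proportionally smaller h, and hence less quantum uncertainty.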
In the Lieu and Hillman paper of 18th November 2002 entitled “Stringent limits on the existence of Planck time from stellar interferometry” they specifically state in the Abstract that they “present
a method of directly testing whether time is ‘grainy’ on scales of … [about] 5.4 x 10^-44 seconds, the Planck time.” They then use the techniques of stellar interferometry to “place stringent limits
on the graininess of time.” Elucidation of the rationale behind their methodology comes in the first sentence, namely “It is widely believed that time ceases to be exact at intervals [less than or
equal to the Planck time] where quantum fluctuations in the vacuum metric tensor renders General Relativity an inadequate theory.” They then go on to demonstrate that if time is ‘grainy’ or
quantised, then the frequencies of light must also be quantised since frequency is a time-dependent quantity. Furthermore, they point out that quantum gravity demands that “the time t of an event
cannot be determined more accurately than a standard deviation of [a specific form]…” This form is then plugged into their frequency equations which indicate that light photons from a suitably
distant optical source will have their phases changed randomly. But interferometers take two light rays from a distant astronomical source along different paths and converge them to form interference
fringes. They then conclude “By Equ. (11), however, we see that if the time quantum exists the phase of light from a sufficiently distant source will appear random – when [astronomical distance] is
large enough to satisfy Equ. (12) the fringes will disappear.” This paper, and their subsequent one, both point out that the fringes still exist even with very distant objects. The conclusion is that
time is not ‘grainy’, in contradiction to quantum gravity theories. This result is a serious blow to all quantum gravity theories and a major re-appraisal of their validity is needed as a
consequence. Insofar as these results also call into question the very existence of space-time, upon which all metric theories of gravity are based, then considerable doubt must be expressed as to
the reality of this entity.
However, this is not detrimental to the SED approach, since gravity is already a unified force in that theory. It is in an attempt to unify gravity with the other forces of nature, including quantum
phenomena, that quantum gravity was introduced. By contrast, SED physics presents a whole new view of quantum phenomena and gravity, pointing out that both arise simply as a result of the “jiggling”
of subatomic particles by the electromagnetic waves of the Zero-Point Energy (ZPE). Since this ZPE jiggling is giving rise to uncertainty in atomic events, this uncertainty is not traceable to either
uncertainty in other systems or to an intrinsic property of space or time. This point was made towards the close of my Journal of Theoretics article “Exploring the Vacuum”. As a consequence, it
becomes apparent that time is not quantised on this Vc approach.
Ragazzoni, Turatto and Gaessler use more recent data to reinforce the original conclusions of Lieu and Hillman. These latter two then expand on their 2002 approach in their 27th January 2003 paper
“The phase coherence of light from extragalactic sources – direct evidence against first order Planck scale fluctuations in time and space.” They take some Hubble Space Telescope results from very
distant galaxies to reinforce their earlier conclusions. In the Abstract of this 2003 paper they also state that “According to quantum gravity, the time t of an event cannot be determined more
accurately than a standard deviation of [a specific form]…likewise distances are subject to an ultimate uncertainty…” They then use this distance uncertainty relationship with light from astronomical
sources to demonstrate that there is no ‘graininess’ to space at the Planck length.
Here is the key point. In order to obtain an uncertainty in distance, they multiply the uncertainty in time by the speed of light. If there is no uncertainty in time, as the Vc model indicates, then
the equations used by Lieu and Hillman cannot be employed to discover if there is any uncertainty in distance at the Planck length. Furthermore, the final statement in their 2003 Abstract, namely
that “The same result may be used to deduce that the speed of light in vacuo is exact to a few parts in 10^32”, is also incorrect for the same reason. Nevertheless, insofar as they are using quantum
gravity equations and the resulting concept of the graininess of space-time, these results indicate that space-time is not grainy, and therefore quantum gravity is conceptually in error.
However, there are other ways of determining whether or not space itself is ‘grainy’ at the Planck length level. If metric theories of gravity have any validity at all, and the work of Lieu and
Hillman has cast serious doubt on this, then an approach suggested by Y. Jack Ng and H. van Dam may soon provide observational evidence for the existence of the graininess of space-time. They write in
their Abstract “We see no reason to change our cautious optimism on the detectability of space-time foam with future refinements of modern gravitational-wave interferometers like LIGO/VIRGO and
LISA.” [arXiv:gr-qc/9911054 v2, 28th March 2000]. Their metric equations indicate that over the size of the whole observable universe, an expected fluctuation of only 10^-15 metres would manifest as
quantised gravitational waves. Upcoming refinements in gravitational wave interferometers will soon allow this degree of uncertainty to be detected. If these refinements do not detect quantised
gravitational waves, then there is further trouble for some metric theories of gravity. Indeed, as at the moment of writing, no gravitational waves have been detected at all by these expensive
interferometers. If this situation continues to exist with the proposed refinements, then the validity of General Relativity may be called into question and the SED option become more attractive.
A different approach has been adopted by Baez and Olson which suggests that wave fluctuations the size of the Planck length are the only ones expected to exist if the fabric of space exhibits
graininess at that scale. As a result, such graininess will be undetectable to gravitational wave interferometers [January 2002].
The outcome from this discussion is that the granular structure of space is still a very viable option when the SED approach is followed through, as it is in the Vc model. The anonymous reader’s
final two paragraphs therefore draw incorrect conclusions. However, if, as on the Vc model, a decrease in the value of Planck's constant can also be construed as meaning a decrease in the uncertainty
of time, then part of the problem that has been raised by these Hubble telescope observations may be overcome. If the decrease in the uncertainty of time at the inception of the cosmos is followed
through, then this may provide an answer for the problem that these observations pose to theories of quantum gravity. Thus the graininess of space is not called into question in the Vc approach.
(April 3, 2003 updated)
In response to a request for a slightly simpler response, Barry wrote the following:
The theoretical basis for these experiments is the expected fuzziness or granularity of space and time that emerges from those theories that attempt to meld general relativistic concepts of gravity
with those of quantum mechanics. The respective papers by Lieu and Hillman, as well as Ragazzoni et al, have concentrated on the expected fuzziness or graininess of time. They deduced that if such a
quantum fuzziness or granularity for time really exists, then there will be a smearing of light photons from a sufficiently distant source which will give slightly blurry pictures of very distant
astronomical objects, the blurriness increasing with distance. As it turns out, the Hubble Space Telescope pictures of the most distant objects are sharp, not blurry. This may call into question the
whole concept of quantum gravity. However, the newly developing branch of SED physics has a completely consistent approach to gravity that is already unified to other physical concepts, and therefore
does not need “unifying” in the way that quantum gravity theories attempt to do. On this basis, the HST images of distant objects support the SED approach rather than the quantum gravity approach.
Furthermore, these results are not detrimental to the variable speed of light (Vc) model. On this approach, quantum uncertainty was getting less the further back into the past we go. This uncertainty
is given by Planck’s constant, h. At the inception of the cosmos, h was very much smaller than it is now. Since the units of Planck’s constant are energy multiplied by time, this means that the
uncertainty in time was very much less (of the order of 1/10^7) for those distant astronomical objects. On that basis, the results from the Hubble Space Telescope are entirely explicable on the Vc model.
Waves or Frequency?
Comment: Light emitted from atoms is frequency-driven, not wavelength-driven. As light enters a denser medium, it is the frequency which remains constant and the wavelength which varies. By
contrast, Setterfield has the wavelength constant and the frequency varying, and claims a redshift on a different basis to that which is conventionally followed.
Setterfield: This question raises an important issue. Consider the behaviour of an infinitely long beam of light from an object at the frontiers of the cosmos, or a wavetrain associated with a single
photon of light, entering a medium such as air or water from a less dense medium when compared with a cosmologically changing ZPE. In the first instance, imagine the beam or wavetrain going from air
into glass in such a way that the light ray is moving perpendicularly to the glass. In this case, “every point on a given wavefront enters the glass slab simultaneously and, hence, experiences a
simultaneous retardation, since the velocity of light is less in glass than in air. The wave fronts in the glass are therefore parallel to those in the air but closer together…” [Martin & Connor,
Basic Physics, Vol.3, p.1193-194]. Thus the wave fronts bunch up in the glass as the waves behind approach the glass with higher speed, and so crowd together in the denser medium. The same effect can
be seen on a highway with cars when an obstacle in the path slows the traffic stream, and cars bunch up near the obstacle. What is causing the effect is that this example has two concurrent values
for lightspeed; one in air, the other in glass.
This situation does not apply to emitted light traveling through a cosmos where the ZPE is changing. In this case, an infinitely long beam or a photon wavetrain is traveling through the vacuum. The
energy density of the vacuum is smoothly increasing simultaneously throughout the universe. This means that the infinitely long beam and the wavetrain have all parts slowing SIMULTANEOUSLY. In other
words there is no bunching up effect because all parts of the beam or wavetrain are traveling with the same velocity. A similar situation would exist with cars on a highway if all cars were
simultaneously slowing at the same rate. The distance between the cars would remain constant, but the number of cars passing any given point per unit of time would be lessening proportional to the
speed of the traffic stream. For that reason, in the lightspeed case, wavelengths remain fixed in transit and the frequency, the number of waves passing a given point per unit time, drops in a manner
proportional to the rate of travel. Therefore in a situation with cosmologically changing ZPE, the frequency of light is lightspeed dependent, while the wavelength remains fixed. It was the
experimental proof of this very fact that was being seriously discussed by Raymond T. Birge in Nature in the 1930’s.
Another consideration applies here also. The equation for the energy E of a photon of light is given as E = hf where h is Planck’s constant, and f is frequency. In the situation which applies here,
the energy density of the ZPE is uniformly increasing, and h is a measure of the energy density of the ZPE. Thus, as the ZPE strength increases, so does h. But it has been suggested that frequency f
should remain constant for light in transit with these changes. If that is so, it means that every photon in transit through the universe must be gaining energy as it travels. In other words, energy
is not conserved. However, with the formulation that has been adopted in the paper under review, energy is conserved as would be expected, and so it is frequency that varies, not wavelength.
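The conservation argument in this paragraph can be checked term by term with the relations the text itself supplies — c = fλ, wavelength λ fixed in transit, and hc invariant (a sketch):

```latex
E = h f = h\,\frac{c}{\lambda} = \frac{hc}{\lambda} = \text{constant},
```

whereas holding the frequency f fixed instead would give E = hf increasing as h increases, which is precisely the non-conservation objected to.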
Comment: Because Setterfield considers the frequency to be varying instead of the wavelength, a redshift of spectral lines is obtained. However, Maxwell’s equations show it is the frequency of
light that is constant in any situation with changing light speed, while wave length is the variable factor. Under these circumstances, there will be no redshift of spectral lines.
Setterfield: The approach that the reviewer has given in item 2 dictates the response that he sees as appropriate to item 3. He seems to have mistakenly applied the results obtained from light
traveling in an inhomogeneous medium to those in which there are simultaneous cosmos-wide changes in the medium. This is inappropriate as noted above and does not agree with experiment. If the
approach is adopted that deals with a situation with simultaneous cosmological changes, then the redshift originates in the manner I have indicated in my work, and the reviewer's basic objection has already
been answered.
However, the applicability of Maxwell’s equations is also called into question here. It is occasionally mentioned that these equations imply a constant speed of light in the vacuum, and any variation
elsewhere is treated on the basis of a changing refractive index of the medium concerned. As noted above, this approach is inappropriate for the situation considered in this paper. Since Maxwell’s
equations seem to imply a constant value for c in a vacuum, this condition can only apply to a changing c scenario when seen from the atomic point of view. Let us explore this a little.
Since all atomic processes are linked to the behaviour of the ZPE as is lightspeed, then as c declines with increasing ZPE, so too does the rate of atomic processes, including atomic clocks and
atomic frequencies. Therefore, as seen from the atom, lightspeed is constant and frequencies are constant. Thus Maxwell’s equations apply to an atomic frame of reference when c is varying
cosmologically. This means that for Maxwell’s equations to apply in our dynamical or orbital time frame, we have to correct the atomic time that is used inherently in those equations to read dynamical
time instead. When that is done, it is the frequency which varies, not the wavelength. In order to see this in a simple way, we note that the basic equation for lightspeed is c = f w where f is
frequency and w is wavelength. The units of c are, for example, metres per second while the frequency is events per second. Thus it is the “per second” part of this equation that needs to be altered.
Since wavelengths, w, will be in metres, and these have no time dependence, then all the “per second” changes can only occur in c and f. Thus it is the frequency that will vary under these conditions
with varying c, not wavelength.
There is a problem which needs to be mentioned in closing; a problem which underlies much of the difficulty some are having with the work presented on these pages. Physics currently seems to have
reversed a sequence which should not have been reversed, and in doing so has made several wrong choices in the latter part of the twentieth century. Those that underlie the reviewer's criticisms
have to do with the permeability of space, a mistaken idea about frequency in terms of the behavior of light, and the equations of Lorentz and Maxwell. As mentioned in point 1, permeability was
related to the speed of light early in the twentieth century, but divorced from it later and declared invariant. It was invariant by declaration, not by data, and this is the first backwards move
which has influenced the reviewer's thinking here. Secondly, it has become accepted that the frequency of light is the basic quantity and that it is the wavelength which is subsidiary. Until about
1960 it was the wavelength that was considered the basic quantity for measurement. However since it had become easier to measure frequency with a greater degree of accuracy, the focus shifted from
choosing wavelength as the basic quantity to using frequency in its stead, thus relegating wavelength to a subsidiary role. The data dictates something else, however. It is wavelength which remains
constant and the frequency which varies when the speed of light changes. This latter point was made plain by experimental data from the 1930’s, and was commented on by Birge himself.
In a similar way, although both Lorentz and Maxwell formulated their equations before Einstein adopted and worked with them, it has become almost required to derive the formulas of both Lorentz and
Maxwell in terms of Einstein’s work. Properly done, it should be the other way around, and the work of both earlier men should be allowed to stand alone without Einstein’s imposed conditions.
One final note: In the long run, it is the data which must determine the theory, and not the other way around. There are five anomalies cosmology cannot currently deal with in terms of the reigning
paradigm. These are easily dealt with, however, when one lets the data go where it will. The original data are in the Report. As given in my lectures, the anomalies concern measured changes in
Planck’s constant, the speed of light, changes in atomic masses, the slowing of atomic clocks, and the quantized redshift. Modern physics seems to be showing a preference for ignoring much of this in
favor of current theories. That is not the way I wish to approach the subject.
The common factor for solving all five anomalies is increase through time of the zero point energy, for reasons outlined in “Exploring the Vacuum.” The material has also been updated in Reviewing the
Zero Point Energy.
Why is this not printing to screen?
03-07-2011 #1
Registered User
Join Date
Feb 2011
Why is this not printing to screen?
this is my code:
#include <stdio.h>
#include <stdlib.h>
int main (void)
int n;
FILE * input;
input = fopen("input.txt", "r");
fscanf(input, "%d", &n);
if(n == 2)
printf("%d is prime", n);
else if(n % 2 == 0 || n < 2)
printf("no, %d is prime!", &n);
int x;
for(x = 0; x < (int)sqrt((double)n); x++)
if(n % x == 0)
printf("no, %d is not prime", &n);
printf("yes, %d is prime!", &n);
if(n % 7 == 0)
printf("Yes, %d is a multiple of 7",&n);
if(n % 11 == 0)
printf("Yes, %d is a multiple of 11",&n);
if(n % 13 == 0)
printf("Yes, %d is a multiple of 13",&n);
printf("No, %d is not a multiple of 7, 11, or 13",&n);
if(n % 2 == 0)
printf("%d is even", &n);
printf("%d is odd", &n);
return 0;
Let me know what you think thanks
You failed to initialize n with a value?
You aren't checking that the file was opened at all, and you don't check fscanf() actually put a value into n. fopen and fscanf both have return values that should be checked.
You need to work the logic into loops. It's OK to explicitly test a few values, but after that, have it all done within the loops.
Last edited by Adak; 03-07-2011 at 02:28 PM.
Read printf man page. You don't need &.
printf is usually line buffered. You need to either print newline char or use fflush.
Last edited by Bayint Naung; 03-07-2011 at 02:13 PM.
| {"url":"http://cboard.cprogramming.com/c-programming/135451-why-not-printing-screen.html","timestamp":"2014-04-21T01:01:12Z","content_type":null,"content_length":"46950","record_id":"<urn:uuid:f4459de4-b774-4e12-9c1b-a91a41b33929>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
st: -dpplot- now on SSC
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: -dpplot- now on SSC
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: -dpplot- now on SSC
Date Thu, 4 Jul 2002 18:42:15 +0100
Thanks to Kit Baum, a package -dpplot- has been added to SSC.
This kind of plot may be as unfamiliar to you as it was to me
a short while ago. I remain agnostic on how useful it is,
but the program having been written, some others may wish to play.
A fairly long explanation is appended, which is all in the
help file; but as usual,
Stata 7 is required.
To install, type
ssc inst dpplot
in an up-to-date Stata. If that last sentence is obscure,
please consult the -findit- FAQ cited below my signature.
<start of longer explanation>
-dpplot- plots density probability plots for varname given a reference distribution, by default normal (Gaussian).
To establish notation, and to fix ideas with a concrete example: consider an observed variable Y, whose distribution we wish to compare with a normally distributed variable X. That variable has density function f(X), distribution function P = F(X) and quantile function X = Q(P). (The distribution function and the quantile function are inverses of each other.) Clearly, this notation is fairly general and also covers other distributions, at least for continuous variables.
The particular density function f(X | parameters) most pertinent to comparison with data for Y can be computed given values for its parameters, either estimates from data on Y, or parameter values chosen for some other good reason. In the case of a normal distribution, these parameters would usually be the mean and the standard deviation. Such density functions are often superimposed on histograms or other graphical displays. In Stata, -graph, histogram- has a normal option which adds the normal density curve corresponding to the mean and standard deviation of the data shown.
The density function can also be computed indirectly via the quantile function as f(Q(P)). For example, if P were 0.5, then f(Q(0.5)) would be the density at the median. In practice P is calculated as so-called plotting positions p_i attached to values y_(i) of a sample of Y of size n which have rank i: that is, the y_(i) are the order statistics y_(1) <= ... <= y_(n). One simple rule uses p_i = (i - 0.5) / n. Most other rules follow one of a family (i - a) / (n - 2a + 1) indexed by a.
Plotting both f(X | parameters) and f(Q(P = p_i)), calculated using plotting positions, versus observed Y gives two curves. In our example, the first is normal by construction and the second would be a good estimate of a normal density if Y were truly normal with the same parameters. In terms of Stata functions, the two curves are based on -normden((X - mean)/SD)- and -normden(invnorm(p_i))-. The match or mismatch between the curves allows graphical assessment of goodness or badness of fit. What is more, we can use experience from comparing frequency distributions, as shown on histograms, dot plots or other similar displays, in comparing or identifying location and scale differences, skewness, tail weight, tied values, gaps, outliers and so forth.

Such density probability plots were suggested by Jones and Daly (1995). They are best seen as special-purpose plots, like normal quantile plots and their kin, rather than general-purpose plots, like histograms or dot plots.
Extending the discussion in Jones and Daly (1995), the advantages (+) and limitations (-) of these plots include:

+1. No choices of binning or origin (cf. histograms, dot plots, etc.) or of kernel or of degree of smoothing (cf. density estimation) are required.

+2. Some people find them easier to interpret than quantile-quantile plots.

+3. They work well for a wide range of sample sizes. At the same time, as with any other method, a sample of at least moderate size is preferable (one rule of thumb is >= 25).

+4. If X has bounded support in one or both directions, then this should be clear on the plot.

-1. Results may be difficult to decipher if observed and reference distributions differ in modality. For example, if the reference distribution is unimodal but the observed data hint at bimodality, nevertheless f(Q(P)) must be unimodal even though f(Y) may not be. Similarly, when the reference distribution is exponential, then f(Q(P)) must be monotone decreasing whatever the shape of f(Y).

-2. It may be difficult to discern subtle differences in one or both tails of the observed and reference distributions.

-3. Comparison is of a curve with a curve: some people argue that graphical references should where possible be linear (and ideally horizontal). (A linear reference is a clear advantage of quantile plots.)

-4. There is no simple extension to comparison of two samples with each other.
Programmers may wish to inspect the code and add code for other distributions. If parameters are not estimated, then naturally their values must be supplied: the order of parameters should seem natural or at least conventional.

Jones, M.C. and F. Daly. 1995. Density probability plots. Communications in Statistics, Simulation and Computation 24: 911-927.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2002-07/msg00089.html","timestamp":"2014-04-19T01:59:12Z","content_type":null,"content_length":"9734","record_id":"<urn:uuid:d7cae6de-ae5c-4376-959b-1f560afea57c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
i cant understand the exact meaning of ..
I have no idea what "smallest inf ever" could mean! I suspect you have a completely wrong idea of what lim inf and lim sup are.
"lim inf" of a sequence is the inf of all subsequential limits.
"lim sup" of a seque nce is the sup of all subsequential limits
why in this drawing is it located on the right of inf(Xn)
and they say that lim inf >= point d
how could it be
the subsequence converges to d. lim inf cannot be larger than d (it is d)
and the same thing goes for the lim sup
Since I can't see the drawing you are talking about, I have no idea.
If a sequence converges to anything, the all subsequences converge to the same thing so both lim inf and lim sup must be equal to that.
For example, the sequence (-1)^n/n converges to 0 so both lim sup and lim inf are 0. That has nothing to do with the fact that there are members of the sequence both above and below 0.
If the sequence were a_n = (-1)^n (n+1)/n, then there are two convergent subsequences: a_n with n even is (n+1)/n, which converges to 1, and a_n with n odd is -(n+1)/n, which converges to -1. lim inf = -1 and lim sup = 1. However, all a_n with n even are larger than 1 and decrease to 1, while all a_n with n odd are less than -1 and increase to -1.
Newton Center Algebra 2 Tutor
Find a Newton Center Algebra 2 Tutor
...I emphasize the importance of connecting material to the bigger picture to enhance understanding, and ensuring the student is proficient in the fundamental math skills required to excel in
Chemistry. I obtained my Bachelor's degree in Biopsychology and have extensive coursework and research experience in this subject. I emphasize the importance of study skills to retain course
10 Subjects: including algebra 2, chemistry, geometry, biology
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College.
13 Subjects: including algebra 2, chemistry, calculus, geometry
...I majored in chemistry at Boston University, and worked in organic, inorganic, and materials labs. During my college years, I also grew a fondness for tutoring. I continued to tutor privately
after graduation, and now have more than six years' experience.
15 Subjects: including algebra 2, chemistry, writing, geometry
...What the Science section of the ACT actually tests is mostly interpreting the data in charts and tables, understanding experiments, and evaluating viewpoints. While the subject matter may be science, it is actually more reading comprehension than anything else. When the several students that I have tutored in this realized this, they found the section much easier to do.
28 Subjects: including algebra 2, reading, English, writing
...I’ve completed testing for state licensure and begun classroom teaching with a local school system. My areas of expertise are math and business. I’ve instructed at all levels – from
professionals down to early elementary students.
22 Subjects: including algebra 2, calculus, geometry, GRE | {"url":"http://www.purplemath.com/Newton_Center_Algebra_2_tutors.php","timestamp":"2014-04-17T08:04:30Z","content_type":null,"content_length":"24360","record_id":"<urn:uuid:208a0c76-4e6a-4aa6-a9d7-3b59c2b3c089>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
A man stands on his balcony, 120 feet above the ground. He looks at the ground, with his sight line forming an angle of 50° with the building, and sees a bus stop. The function d = 120 sec(θ) models the distance from the man to any object given his angle of sight θ. How far is the bus stop from the man? Round your answer to the nearest foot.
| {"url":"http://openstudy.com/updates/50c01520e4b0231994ecd601","timestamp":"2014-04-21T16:15:04Z","content_type":null,"content_length":"25506","record_id":"<urn:uuid:4bf460d7-5c47-47fe-b849-769676e268fa>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Beads on a Necklace (Convex Hulls)
Problem 1456. Beads on a Necklace (Convex Hulls)
We may describe a convex hull as a rubber band stretched around a list of points. Some of the points will be inside the rubber band, and some will define vertices of (or lie along the edge of) a
convex polygon.
Given an n-by-2 list of points xy where each row describes a point, your job is to determine if all the points belong to the convex hull for those points. In the matrix xy, the x coordinate is the
first column and the y coordinate is the second column.
So, for example, the points below form a single convex hull.
* *
* *
In contrast, the points below do not, since the convex hull is a triangle that includes an interior point. Any polygon that includes all the points must necessarily be concave.
  *
  *
*   *
xy = [1 1;1 2;2 2;2 1]
allConvex(xy) => true
xy = [1 1;3 1;2 2;2 3]
allConvex(xy) => false
Problem Comments | {"url":"http://mathworks.com/matlabcentral/cody/problems/1456-beads-on-a-necklace-convex-hulls","timestamp":"2014-04-16T16:07:07Z","content_type":null,"content_length":"25147","record_id":"<urn:uuid:1a6419ff-1649-41cd-b300-e92a3d589450>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jamul Geometry Tutor
Find a Jamul Geometry Tutor
I have a bachelor's degree in Electrical Engineering from UCSD. My favorite subjects are math and science, and I have been tutoring for over 15 years to students of all ages, K-12 and college. My
tutoring sessions are very effective.
35 Subjects: including geometry, reading, English, physics
...I really love helping students overcome the hardships they are having in their classes. When I was younger I had a relatively hard time paying attention and completing assignments on time so I
know firsthand how hard it can be to catch up after how easy it is to fall behind. It's my goal to tak...
43 Subjects: including geometry, reading, English, chemistry
...I am a 5-time CIF champion (2006-2007), nationally ranked in the 100 & 200 breaststroke, recruited to NCAA Div1 school. I've been working with Photoshop for about 3 years now-- am currently
running CS5. I have plenty of experience editing and retouching photographs, as well as building 2D media for publication.
34 Subjects: including geometry, English, Spanish, SAT math
...I have an undergraduate Engineering degree from the University of Pennsylvania, with a masters degree in Education from Mercy College. I'm a newly certified Middle-High School teacher from the
state of New York. I've worked as a substitute teacher in the Bedford, Mt Kisko school district for the last 2 years.
9 Subjects: including geometry, Spanish, calculus, algebra 1
...There are few jobs more rewarding than tutoring, and I have been lucky enough to tutor a diverse group of students throughout my career. I specialize in tutoring English at all levels,
particularly in writing skills and reading comprehension. I also specialize in tutoring for GRE, SAT, and ACT ...
29 Subjects: including geometry, reading, English, biology | {"url":"http://www.purplemath.com/Jamul_geometry_tutors.php","timestamp":"2014-04-19T07:28:52Z","content_type":null,"content_length":"23657","record_id":"<urn:uuid:4607b7c0-ee8b-4028-8829-efc56e3f456e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
EQUAL, a term of relation between different things, but of the same kind, magnitude, quantity, or quality.——Wolfius defines Equals to be those things that may be substituted for each other, without any
alteration of their quantity.—It is an axiom in mathematics &c, that two things which are equal to the same third, are also equal to each other. And if Equals be equally altered, by Equal addition,
subtraction, multiplication, division, &c, the results will be also Equal.
Equal Circles, are those whose diameters are equal.
Equal Angles, are those whose sides are equally inclined, or which are measured by similar arcs of circles.
Equal Lines, are lines of the same length.
Equal Plane Figures, are those whose areas are equal; whether the figures be of the same form or not.
Equal Solids, are such as are of the same space, capacity, or solid content; whether they be of the same kind or not.
Equal Curvatures, are such as have the same or equal radii of curvature.
Equal Ratios, are those whose terms are in the same proportion.
, in Optics, is said of things that are seen under equal angles. | {"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/e/equal.html","timestamp":"2014-04-19T22:10:43Z","content_type":null,"content_length":"6301","record_id":"<urn:uuid:40a1df41-25e1-4dd5-99bc-b69300e1ac65>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tricks of the Trade
Can you name a mathematical theorem that is simple to state and relatively simple to prove, was essential to your research or to a work you found interesting and significant, has the potential to be
applied in a wide variety of fields, and is not part of the curriculum of what "every mathematician should know"?
soft-question big-list
4 Why are people voting to close? This seems like a perfectly reasonable question... – Igor Rivin Apr 27 '11 at 15:04
2 I voted to close because I dislike the kind of soft question where the OP expects everyone else to put more effort in answering the question than it was spent in asking it. – José
Figueroa-O'Farrill Apr 27 '11 at 15:10
2 Answers will be a lot more useful if they come together with explanations for why the theorem satisfies (at least some of) the criteria in the question. – Dan Ramras Apr 27 '11 at 15:11
2 I would vote to close (if I could) as this is way too broad (as well as subjctive and argumentative); every mildly specialized technical result that is easy to state and not too complicated to
prove seems to qualify as an answer. – quid Apr 27 '11 at 15:21
5 You might try looking at the Math Tricki: tricki.org – Ben Linowitz Apr 27 '11 at 15:42
closed as not constructive by José Figueroa-O'Farrill, Daniel Moskovich, Charles Siegel, Zev Chonoles, Bruce Westbury Apr 27 '11 at 15:27
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate,
arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.If this question can be reworded to fit the rules
in the help center, please edit the question.
1 Answer
active oldest votes
The nine lemma.
| {"url":"http://mathoverflow.net/questions/63178/tricks-of-the-trade","timestamp":"2014-04-21T12:56:25Z","content_type":null,"content_length":"48332","record_id":"<urn:uuid:88089ba3-fdcd-4099-b2e5-5b1a4d14360f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Society of Undergraduate Mathematics Students - U of R
Why Math
Majoring in Math
Summer Opportunities
Semester Opportunities
SUMS Archives
Mathematics Department Home Page
University of Rochester
General Math Research Papers:
Past Research papers written by some U of R students: For more information on REUs, select "Summer Opportunities" to the left.
We are attempting to collect some of the REU and general research papers written by current and former U of R students who have participated in Math research. If you are a current or former U of R
student who would like to post your REU paper here, please send an email to Blair Germain. | {"url":"http://www.math.rochester.edu/undergraduate/sums/reu_papers.html","timestamp":"2014-04-20T11:33:18Z","content_type":null,"content_length":"4184","record_id":"<urn:uuid:32edc551-c8eb-40bf-bb64-11e2eed6a9f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: RE: Tobit coefficients
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: RE: Tobit coefficients
From "Austin Nichols" <austinnichols@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Tobit coefficients
Date Fri, 27 Jul 2007 11:34:55 -0400
Vitor <vfgate-demog@yahoo.com.br>:
As I have pointed out before on this list, e.g.
-tobit- is inappropriate if the zeros are not censored values (representing a negative y* that is observed as y=0). If the zeros are simply a point mass in the distribution of a nonnegative dep var, then -poisson- or -glm- are better options. In your case, the fact that many people receive no money from parents does not mean that their parents would like to take money from them (make a negative transfer), but are prevented by law from doing so. I would use -poisson- to estimate the response of ln(y) including zeros of y in the estimation, and then predict as usual. Why do you need a prediction conditional on y>0 anyway? Just to ensure predictions are nonnegative? Bad reason!

Though -poisson- is designed for count variables, it works well for any model where E(y|x)=exp(xb). See Wooldridge (http://www.stata.com/bookstore/cspd.html) p.651 and surrounding text: "A nice property of the Poisson QMLE is that it retains some efficiency for certain departures from the Poisson assumption."

In any case, the better way to get the predictions is to predict over the whole estimation sample, but first replace 1) AFTER=1, TREAT=1, TREAT*AFTER=1, then 2) AFTER=1, TREAT=0, TREAT*AFTER=0, etc. to generate predictions *for each observation* under different counterfactuals, then compare the means (the use of -poisson- is illustrative--any estimation command can be used, even -tobit-):
sysuse auto, clear
replace mpg=max(0,mpg-20)
g treat=rep78>3 if !mi(rep78)
la var treat "Fake Treatment Var"
ren for after
la var after "Fake After Var"
g ta=treat*after
poisson mpg len treat after ta
predict m if e(sample)
replace treat =1
replace after=1
replace ta=treat*after
predict m11 if e(sample)
replace treat =0
replace ta=treat*after
predict m01 if e(sample)
replace after=0
replace ta=treat*after
predict m00 if e(sample)
replace treat =1
replace ta=treat*after
predict m10 if e(sample)
su m01
local m01=r(mean)
su m11
local m11=r(mean)
di "Effect of treat given after=1 is " `m11'-`m01'
su m00
local m00=r(mean)
su m10
local m10=r(mean)
di "DD of treat, after switching 0->1, is " `m11'-`m01'-`m10'+`m00'
and so on. Then you could wrap all that in a rclass -program- and -bootstrap- it for standard errors on the estimated treatment effects, not that the diff-in-diff will really identify any true treatment effect.
On 7/27/07, vfgate-demog@yahoo.com.br <vfgate-demog@yahoo.com.br> wrote:
> Austin, thank you very much for your help.
> Yes, I was talking about -prvalue- from spost package.
> Let me try to explain my situation more clearly. I am
> estimating a classic triple-difference model. So I
> have three dummies: TREAT, AFTER and AFFECTED. I also
> have TREAT*AFTER, TREAT*AFFECTED, AFTER*AFFECTED and
> TREAT*AFTER*AFFECTED. And I also have a vector of
> variables (sex, education, etc.)
> My dependent variable (Y) is the amount of money
> received from parents. (many people receive zero). I
> want to have predicted values E(Y|Y>0), and respective
> Standard Errors, for every combination of the three
> dummies:
> 1) TREAT=1, AFTER=1, TREAT=1, TREAT*AFTER=1, ...
> 2) TRATE=0, AFTER=1, TREAT=0, TREAT*AFTER=0, ...
> 3) etc...
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-07/msg00848.html","timestamp":"2014-04-17T12:35:42Z","content_type":null,"content_length":"9465","record_id":"<urn:uuid:2ca65ae1-d967-4aca-bc37-a60d9cf9e902>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Marino Trigonometry Tutors
...I graduated college with a Space Physics degree with an overall 3.8 GPA at a private university. I love physics and have kept all of my text books from college. I was a teacher's assistant for
2 years in basic physics labs and advanced optics lab.
13 Subjects: including trigonometry, physics, calculus, geometry
...I would also be able to help in the English and Science areas. Because of my 19 years of experience teaching middle school (3 years) and high school math, I would be able to help review your
math content. I have taught Geometry for many years which has a logic element to it.
15 Subjects: including trigonometry, geometry, statistics, GRE
...I am also fluent in Spanish, and can tutor students who want to improve their speaking skills, or who just want to pass their next Spanish test! My teaching/tutoring style differs depending on
the needs of my students. If you goal is to build a mastery of math, I will help fill in the gaps in y...
23 Subjects: including trigonometry, Spanish, algebra 1, GRE
...I have been tutoring for this website for almost one year and had the pleasure of meeting all types of people. I've tutored subjects as low as third grade math, and as high as trignometry. I
love helping students out in math and forming a strong relationship with them to make them feel comfortable by creating a positive environment.
10 Subjects: including trigonometry, calculus, geometry, algebra 1
...In my years of working in widely varying fields, I've noticed that many people have a very hard time understanding mathematics. I hope to change that. Everyone that I have tutored has been able
to apply what I've helped them with and move on to more advanced levels of math.
10 Subjects: including trigonometry, geometry, algebra 2, elementary math | {"url":"http://www.algebrahelp.com/San_Marino_trigonometry_tutors.jsp","timestamp":"2014-04-21T00:43:04Z","content_type":null,"content_length":"25178","record_id":"<urn:uuid:5ba5d025-efff-430e-86f0-da1009372be1>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lakewood, WA Math Tutor
Find a Lakewood, WA Math Tutor
...Still other time, grammar patterns and changes are difficult. Lastly, reading comprehension skills and vocabulary skills play a big part in successfully navigating the reading tasks. I have
worked with new learners, students with dyslexic patterns and advanced students in reading skills.
12 Subjects: including algebra 1, algebra 2, SAT math, geometry
...I formally took an instructors course when I became an instructor for the US ARMY, which covered standard public speaking practices. Lastly, I have demonstrated software to US government
entities and have a fair knowledge of presentation tactics to get focused points across. The key for me is to keep complex information simple to reach and inform a larger audience.
53 Subjects: including algebra 1, chemistry, ACT Math, geometry
...I have a B.A. degree in Elementary Education as of May 1992 from Northwest University. I was a substitute teacher in grades K-6 in Kitsap County from September 1992 through June 1994. I
volunteered at my sons' elementary schools from 2002 through 2008.
16 Subjects: including SAT math, prealgebra, reading, writing
With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a
quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including discrete math, Mathematica, algebra 1, algebra 2
...In addition, we work on ear training to ensure pitch is right where it should be. I have been a soccer coach with Covington Community sports since 2008. When I was young, I played select
soccer for 6 years before I had a knee injury which took me out of the game.
46 Subjects: including ACT Math, trigonometry, SAT math, algebra 1
Related Lakewood, WA Tutors
Lakewood, WA Accounting Tutors
Lakewood, WA ACT Tutors
Lakewood, WA Algebra Tutors
Lakewood, WA Algebra 2 Tutors
Lakewood, WA Calculus Tutors
Lakewood, WA Geometry Tutors
Lakewood, WA Math Tutors
Lakewood, WA Prealgebra Tutors
Lakewood, WA Precalculus Tutors
Lakewood, WA SAT Tutors
Lakewood, WA SAT Math Tutors
Lakewood, WA Science Tutors
Lakewood, WA Statistics Tutors
Lakewood, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/Lakewood_WA_Math_tutors.php","timestamp":"2014-04-21T15:29:02Z","content_type":null,"content_length":"23679","record_id":"<urn:uuid:9d54e19a-306a-43a6-b04c-342229cc3618>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
DOCUMENTA MATHEMATICA, Vol. Extra Volume: Andrei A. Suslin's Sixtieth Birthday (2010), 197-222
DOCUMENTA MATHEMATICA
, Vol. Extra Volume: Andrei A. Suslin's Sixtieth Birthday (2010), 197-222
Eric M. Friedlander and Julia Pevtsova
Generalized Support Varieties for Finite Group Schemes
We construct two families of refinements of the (projectivized) support variety of a finite dimensional module $M$ for a finite group scheme $G$. For an arbitrary finite group scheme, we associate a
family of {\it non-maximal rank varieties} $\Gamma^j(G)_M$, $1 \leq j \leq p-1$, to a $kG$-module $M$. For $G$ infinitesimal, we construct a finer family of locally closed subvarieties $V^{\ul a}(G)_M$
of the variety of one parameter subgroups of $G$ for any partition $\ul a$ of $\dim M$. For an arbitrary finite group scheme $G$, a $kG$-module $M$ of constant rank, and a cohomology class $\zeta$ in
$\HHH^1(G,M)$ we introduce the {\it zero locus} $Z(\zeta) \subset \Pi(G)$. We show that $Z(\zeta)$ is a closed subvariety, and relate it to the non-maximal rank varieties. We also extend the
construction of $Z(\zeta)$ to an arbitrary extension class $\zeta \in \Ext^n_G(M,N)$ whenever $M$ and $N$ are $kG$-modules of constant Jordan type.
2010 Mathematics Subject Classification: 16G10, 20C20, 20G10
Keywords and Phrases:
Full text: dvi.gz 54 k, dvi 132 k, ps.gz 951 k, pdf 282 k.
Home Page of DOCUMENTA MATHEMATICA | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/DMJDMV/vol-suslin/friedlander_pevtsova.html","timestamp":"2014-04-20T18:29:54Z","content_type":null,"content_length":"2274","record_id":"<urn:uuid:1745b51e-7cdb-4028-9d74-68fd4a7ad58e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jersey Vlg, TX Math Tutor
Find a Jersey Vlg, TX Math Tutor
...I am a former professional engineer which requires 2 areas of state approval and testing, the FET portion on the 1st day, and the more detailed test on the 2nd day. I understand my engineering
basics well enough to teach and help student(s) wanting to improve their FET skills. I have experience with mechanical engineering, as well as chemical, electrical, and civil engineering.
36 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I taught Chip Design Methods to teams in the U.S., U.K., France, Sweden, Japan, and India. I can clarify complex concepts in ways anyone can understand. I also volunteer for Special Olympics
events and help with ARC/Texana activities.
18 Subjects: including differential equations, linear algebra, logic, TAKS
...Biology is also cool because it has been shaped by natural selection, a process that has resulted in a vast diversity of functional complexity. I personally find aspects of the living world
and the living world in its entirety fascinating and sometimes even beautiful in its simultaneous order ...
8 Subjects: including algebra 1, biology, chemistry, prealgebra
...This is where Algebra gets more involved but more exciting. Here is where I like to help students really become algebra experts so they are better prepared for higher levels of math. In
addition to their regular assignments, I like to give students supplemental problems that provide extra but fun challenges, so they can strengthen their problem-solving skills.
7 Subjects: including algebra 1, algebra 2, biology, trigonometry
...I have been playing guitar for well over forty years. Although I'm mostly self-taught on the guitar, I have been classically trained on piano and trumpet. I can help students learn how to read
music, form chords to accompany themselves or others, play in a band, and learn songs by ear and from sheet music.
27 Subjects: including discrete math, algebra 1, algebra 2, ACT Math
Wolfram Demonstrations Project
Diffusion Coefficients for Multicomponent Gases
In the analysis of combustion problems and other multicomponent reacting systems, it is important to account for the relevant transport properties of the reacting mixture. This Demonstration is
concerned with computing the ordinary diffusion coefficients of a mixture. The thermal diffusion coefficients, which describe the Soret effect, are not considered.
For an ideal gas mixture, the general expression for the ordinary diffusion flux of species in a mixture of species is:
where is the molecular weight of species , is the mixture molecular weight, are the ordinary diffusion coefficients for the mixture, and are the species mole fractions. The ordinary diffusion
coefficients for the mixture can be determined from the kinetic theory of gases ([1] and [2]), and are given by
where the components are elements of the matrix . The components of the matrix are given by
where are the binary diffusion coefficients. These coefficients can be predicted from the kinetic theory of gases, using the Chapman–Enskog theory with a Lennard–Jones (6-12) potential [2] and [3].
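The binary-coefficient step described above can be sketched in Python. This is an illustration, not the Demonstration's actual code: the Neufeld et al. fit for the collision integral and the Lennard–Jones parameters for N2 and O2 are assumptions, values of the kind tabulated in [3].

```python
import math

def collision_integral(t_star):
    """Neufeld et al. empirical fit for the diffusion collision
    integral Omega_D as a function of reduced temperature
    T* = kT/epsilon_AB."""
    return (1.06036 / t_star**0.15610
            + 0.19300 / math.exp(0.47635 * t_star)
            + 1.03587 / math.exp(1.52996 * t_star)
            + 1.76474 / math.exp(3.89411 * t_star))

def binary_diffusion(T, P, M_a, M_b, sigma_a, sigma_b, eps_a, eps_b):
    """Chapman-Enskog binary diffusion coefficient in cm^2/s.

    T in K, P in atm, molecular weights in g/mol, sigma in Angstrom,
    epsilon/k in K. Uses the usual combining rules: arithmetic mean
    for sigma, geometric mean for epsilon.
    """
    sigma_ab = 0.5 * (sigma_a + sigma_b)
    eps_ab = math.sqrt(eps_a * eps_b)
    omega = collision_integral(T / eps_ab)
    return (0.0018583 * T**1.5 * math.sqrt(1.0 / M_a + 1.0 / M_b)
            / (P * sigma_ab**2 * omega))

# Illustrative Lennard-Jones parameters for an N2-O2 pair at 300 K, 1 atm.
D_N2_O2 = binary_diffusion(300.0, 1.0, 28.013, 31.999,
                           3.798, 3.467, 71.4, 106.7)  # about 0.2 cm^2/s
```

A full multicomponent calculation would then assemble these binary coefficients into the matrix described above and invert it.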
In this Demonstration you can select a gas mixture of up to molecular species. You can specify the molar composition of the mixture (moles of each species) as well as the mixture temperature and
pressure . From the species popup menus you can select distinct molecular species for the mixture. The Demonstration then computes the multicomponent diffusion coefficients, which are displayed in
tabular form. The diagonal terms are not computed, as these components (known as the mutual diffusion coefficients) require a separate calculation beyond the scope of this Demonstration [2].
From the pull-down menu, you can also view the species molecular parameters and the binary diffusion coefficients used in the computation. If the molecular species chosen are not distinct, the
inverse of is not defined, and you need to update the selected molecular species.
[1] S. R. Turns, An Introduction to Combustion: Concepts and Applications, 3rd ed., New York: McGraw–Hill, 2012.
[2] G. Dixon-Lewis, "Flame Structure and Flame Reaction Kinetics. II. Transport Phenomena in Multicomponent Systems," Proceedings of the Royal Society of London, Series A, 1968, pp. 111–135.
[3] R. B. Bird, W. E. Stewart, and E. N. Lightfoot, Transport Phenomena, 2nd ed., New York: John Wiley & Sons, 2002.
Beltsville SAT Math Tutor
I have been tutoring for over two years now and enjoying every moment. I just received word that one of my students who had previously been unsuccessful in passing the GED has passed after
learning with me. I am a college graduate from the University of MD, College Park with a BS in Decision Information Sciences and Statistics.
31 Subjects: including SAT math, English, reading, writing
...I can also tutor in additional subjects ranging from elementary school through college level, with a focus in biology and chemistry.I have been a competitive swimmer since the age of six. I was
the captain of my high school swim team and a varsity team member in college. I have taught multiple ...
39 Subjects: including SAT math, Spanish, chemistry, writing
...I believe students' success in geometry comes from fully understanding why each theorem or property works, not from memorizing countless formulas. Students often struggle in geometry when they
don't have a solid foundation in the basics of lines and angles or have not yet become comfortable with...
12 Subjects: including SAT math, geometry, ASVAB, GRE
...Examples are worked on with the learner, and the learner will be assisted to do similar exercises until she/he is able to do them independently. My approach is flexible to meet the needs of the
learner. ALGEBRA II The contents of Algebra II include, solving equations and inequalities involving...
7 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I'm one of those nerdy people who loves to talk and think about numbers. I sincerely enjoy sharing an understanding of math with others. I know how to make your writing clear and concise.
38 Subjects: including SAT math, English, reading, chemistry
Re: st: Mixed logit estimation with mixlogit
Re: st: Mixed logit estimation with mixlogit
From Tunga Kantarcı <tungakantarci@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Mixed logit estimation with mixlogit
Date Wed, 15 Jun 2011 11:14:01 +0200
Thank you. I will follow your suggestion. I might update the thread
for how it worked.
Kind regards,
>This is the sort of specification I had in mind. In response to your questions:
>Q1: Yes, X should be specified to have a random coefficient and the
>X*Y interaction a fixed coefficient
>Q2: Gamma is estimated (as the mean parameter) along with n (the SD
>parameter) when you specify X to be random.
>I hope this helps.
>On 14 June 2011 16:03, Tunga Kantarci <tungakantarci@gmail.com> wrote:
>> Please let me express what I understand from your suggestion.
>> U = alfa + beta X + e is the random utility model where beta is a
>> random coefficient and I assume e is normally distributed.
>> beta = gamma + lambda Y + n where Y is observed and n is unobserved
>> and assumed to be normally distributed with mean of zero and variance
>> to be estimated.
>> I plug beta in the first equation to get
>> U = alfa + gamma X + lambda Y X + n X + e is the new random utility
>> model where n is unobserved.
>> Question 1: Would I indicate X as the variable with a random
>> coefficient, which is e, in rand(varlist)?
>> Question 2: I guess I should get rid of the gamma then?
>> Tunga
>> PS. Thanks for the quick reply... and how lucky one can be to get a
>> reply from the author of mixlogit.
>>> Tunga
>>> If I understood your question correctly it seems to me that you can
>>> handle this by interacting X with the observed characteristics driving
>>> the heterogeneity in beta.
>>> Arne (author of -mixlogit-)
>>>> On 14 June 2011 14:27, Tunga Kantarci <tungakantarci@gmail.com> wrote:
>>>> Hello,
>>>> I have a random utility model where the coefficients are treated
>>>> random. That is, U = alfa + beta * X + e is a random utility model
>>>> where alfa and beta are treated as "random" coefficients which depend
>>>> on "observed" and "unobserved" characteristics. This leads to a mixed
>>>> logit model that needs to be estimated using maximum simulated
>>>> likelihood. I have read Arne Risa Hole's "Fitting mixed logit models
>>>> using maximum simulated likelihood" in The Stata Journal, 2007, 7 (3),
>>>> 388-401. It seemed to me that the mixlogit package can handle my
>>>> estimation. However, a first question I have is the following: In the
>>>> article, the random coefficient is treated "unobserved". In my model,
>>>> the random coefficient (beta above) depends on observed as well as
>>>> unobserved characteristics. It looks like I cannot specify that the
>>>> random coefficient depends on observed characteristics in the mixlogit
>>>> syntax.
>>>> Would it be possible to specify that my random coefficients depend on
>>>> observed and unobserved characteristics prior and still make use of
>>>> the mixlogit procedure?
>>>> Thanks,
>>>> Tunga
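Putting the thread together: under the reparameterization above, the model can be fitted by interacting X with Y and declaring X random, roughly as follows. This is a sketch only; the variable names for the choice indicator, choice-set identifier, and panel identifier are hypothetical.

```stata
* Reparameterized model: U = alfa + gamma*X + lambda*(X*Y) + n*X + e
generate XY = X*Y
mixlogit choice XY, group(caseid) id(personid) rand(X)
* The reported mean coefficient on X estimates gamma, and its SD estimates
* the standard deviation of n; the fixed coefficient on XY estimates lambda.
```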
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Notation Question
October 28th 2010, 03:24 PM #1
I'm having some confusion with the notation for linear maps. b and c are bases of V. Do I read this as the linear map from b to c?
For example I write elements of c using b as the basis:
$c_i = b_1(v_1) + b_2(v_2) + ... + b_n(v_n)$
Or do I write elements of b using c as a the basis:
$b_i = c_1(v_1) + c_2(v_2) + ... + c_n(v_n)$
I was working on it when trying to solve:
$\Phi^{c}_{c}(D) = \Phi^{c}_{b}(1_V) \Phi^{b}_{b}(D) \Phi^{b}_{c}(1_V)$
Where D is the derivative map.
Calculating Effect Size, Part II
In the previous post, I introduced effect size (more specifically, Cohen’s d) as a statistical tool that can answer whether a CME activity was effective, as well as quantify the magnitude of this
effectiveness and allow for comparisons of effectiveness across CME activities. Using Cohen’s d, a CME provider can report the effectiveness of an annual meeting in affecting, for example,
participant competency (Level 4 outcomes) and then compare the magnitude of effect to previous year’s meetings and/or other CME activities of similar format or topic focus. Ultimately, a CME
provider can determine benchmarks for effectiveness at each outcome level (or for each educational format) to quickly diagnose the performance of each CME activity. That sort of info comes in real
handy for accreditation review and for communicating with sponsors (but that will be the focus of the next post).
So, all that being said, it’s now time to discuss how to actually calculate a Cohen’s d. One caution: you will not need a statistician, an advanced grasp of mathematics, or any specialty
certification…if you can calculate (or more likely, use MS Excel to calculate) an average, standard deviation and have access to the Internet, you’re good.
I’ll set the stage with a common example: assume that you are a CME provider who just produced a 2-hour, mixed didactic-interactive case discussion regarding advances in the detection, evaluation and
treatment of high blood cholesterol in adults. You used a paper-based survey (administered both pre- and post-activity) to measure participants' self-reported utilization (on a 5-point scale) of
clinical tasks related to the CME activity content. Each survey consisted of eight assessment items (i.e., clinical tasks). Now you want to summarize this pre- vs. post-activity data into a single
effect size. The steps for such are as follows:
1. Calculate a mean rating and standard deviation for each assessment item in the pre-survey.
2. Calculate a mean rating and standard deviation for each assessment item in the post-survey.
3. Type “effect size calculator” into Google and click any of the identified links (I like to use this one).
4. Enter the data from items #1 and 2 (above) into the effect size calculator.
5. Behold the effect size for your activity!
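If you prefer to skip the website in step 3, the computation can be sketched in a few lines of Python. This uses the pooled-standard-deviation form of Cohen's d, which is what online effect size calculators typically implement; the ratings below are made up for illustration.

```python
import math
from statistics import mean, stdev

def cohens_d(pre, post):
    """Cohen's d from raw pre- and post-activity ratings.

    Pools the (sample) standard deviations of the two groups, then
    divides the change in mean rating by that pooled SD.
    """
    n1, n2 = len(pre), len(post)
    s1, s2 = stdev(pre), stdev(post)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(post) - mean(pre)) / pooled

# Illustrative 5-point ratings for one assessment item
pre_ratings = [2, 3, 3, 4, 2, 3]
post_ratings = [3, 4, 4, 5, 3, 4]
print(round(cohens_d(pre_ratings, post_ratings), 2))  # prints 1.33
```

Run this once per assessment item (or on the pooled item data) and you have the same number the calculator would return from the means and SDs.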
There is one more step…interpretation. For that, you need to be aware of the following:
1. Cohen’s d is expressed in standard deviation units. Accordingly, a Cohen’s d of 1.0 indicates that one standard deviation separates the pre-activity average rating vs. the post-activity average
rating (with the post-activity rating being greater).
2. Cohen’s d is proportional. Therefore, a Cohen’s d of 1.0 is twice the magnitude of a Cohen’s d of .5 (or half the magnitude of a 2.0).
3. There is no upper or lower bound to the possible range of Cohen’s d. The maximum expected range of Cohen’s d is from -3 to +3, but the majority is expected to fall within -1 to +1.
4. Benchmarks are used to assess the magnitude of a Cohen’s d. Based on repeated measurement, benchmarks (or expected ranges of Cohen’s d) can be established in a given area (e.g., mixed,
didactic-interactive CME). In areas where benchmarks remain to be established, the following preliminary benchmarks can be used to assessed magnitude of effect: 0.2 (small), 0.5 (medium) and 0.8
(large) (Cohen 1988).
5. You can compare the Cohen’s d from one activity to the d from any other activity that used a similar outcome assessment method (i.e., case-based survey).
6. You can aggregate Cohen’s d across activities (i.e., take an average d across all of your eLearning activities, or all of your cholesterol-focused CME – assuming you used the same outcome
assessment method for these activities [see item #5 above]).
And just like that, you are now proficient in calculating and interpreting effect size in CME. I told you this would be easy. Now go forth and make this look hard to all of your competition.
Reference: Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, 2nd edition. Erlbaum, Hillsdale, NJ.
Filed under CME, Cohen's d, Effect size, Methodology, Statistics
OCR for page 19
Global Perspectives for Local Action: Using TIMSS to Improve U.S. Mathematics and Science Education

CHAPTER TWO

What Does TIMSS Say about Student Achievement?

TIMSS provided a wealth of information
on the knowledge and skills of students in mathematics and science. In each of the three student groups studied by TIMSS, the achievement tests included questions on different topics in mathematics
and science, so that particular strengths and weaknesses could be measured. In addition, for populations 1 and 2, TIMSS tested students in adjacent grades, providing a measure of gains achieved
between those two grades (third and fourth grades and seventh and eighth grades in the United States). As described in the previous chapter, the achievement test results were just one of many kinds
of data produced by TIMSS. Taken together, these data provide an unprecedented amount of information about the teaching practices, educational policies, school characteristics, student attitudes, and
other factors that contribute to academic strengths and weaknesses in each participating country. However,
as might be expected, the achievement scores have garnered the most public attention. Much of this
attention has focused on the "horserace" aspects of TIMSS—how did U.S. students do compared with students in other countries? This emphasis on the bottom line of the achievement scores can obscure
potentially more interesting results. For instance, in what subjects did U.S. students perform well and in which did they perform poorly, and how are these areas aligned with common U.S. mathematics
and science curricula? Do U.S. students learn as much from grade to grade as students in other countries? How are student scores linked to the characteristics of the schools they attend? The
questions in the TIMSS achievement tests were based on the curricula in participating countries, and to the extent that these curricula reflected national standards in science and mathematics, the
tests provide a general indication of how well students are meeting those standards. However, the TIMSS achievement tests were not aligned with the standards of any one country, such as those of the
United States (Beatty, 1997, pp. 27–28; National Research Council, 1997, p. 3). The TIMSS results therefore do not provide a direct measure of whether students are achieving the standards and
benchmarks specified by national organizations (National Council of Teachers of Mathematics, 1989; American Association for the Advancement of Science, 1993; National Research Council, 1996) or the
standards in place at the state, national, or local levels. Nevertheless, an important message from the achievement results is that there is considerable room for improvement in U.S. education (Table
2-1). While U.S. fourth graders scored considerably above the international average in both science and mathematics, U.S. eighth graders scored just above the average in science and below it in
mathematics. U.S. high school seniors performed even less well overall in tests of general mathematical and scientific knowledge and had particularly low mean scores on the assessments of advanced
mathematics and physics. On an international scale, U.S. students, particularly in the upper grades tested, are not achieving high standards. Furthermore, many students are not achieving even at the
level indicated by the average U.S. score. While the variability of U.S. scores was not markedly greater than in other countries (Stedman, 1997), variability among student scores in the United States
was strongly linked to the specific classes a student took (for example, regular mathematics versus algebra in middle school or junior high) and to differences among schools (Schmidt et al., 1999,
pp. 163–180). These findings suggest that many students are not being given the educational opportunities needed to achieve at high levels. This chapter looks first at the achievement results in
mathematics and then at those in science. It applies a somewhat different analysis to each discipline, partly to reveal particularly noteworthy results and partly to demonstrate different ways of
using the achievement results. Much more extensive analyses of the achievement results, along with sample problems, can be found in the reports from the TIMSS International Study Center (Beaton et
al., 1996a, 1996b; Harmon et al., 1997; Martin et al., 1997; Mullis et al., 1997, 1998) and in the summary reports from the U.S. Department of
Education (1996, 1997b, 1998). The publicly released test items for populations 1, 2, and 3 also
can be ordered from the TIMSS International Study Center or can be downloaded from the World Wide Web at http://www.csteep.bd.edu/TIMSS1/TIMSSPublications.html#International

MATHEMATICS ACHIEVEMENT
In mathematics the population 1 assessment asked students 102 questions overall. Each student tested answered just a subset of questions, but by combining student responses it is possible to
calculate "student scores" for the entire set of achievement items. Using this method, U.S. fourth graders answered 64 of the 102 questions correctly on average, which is 10 to 13 items below the
average performance of students in the top four countries and in a band of performance comparable with that found in the Czech Republic, Iceland, and Canada (Table 2-2a). In the population 2
assessment, U.S. eighth graders answered a mean of 80 questions out of 151 correctly (Table 2-2b). Students in the four top-scoring countries—Singapore, Japan, Korea, and Hong Kong—answered an
average of between 105 and 119 questions correctly.

The questions on the population 1 assessment were grouped into six areas:

- whole numbers
- data representation, analysis, and probability
- geometry
- patterns, relations, and functions
- fractions and proportionality
- measurement, estimation, and number sense

U.S. students at grade four achieved above the international mean performance in the first four of the content areas listed above. (This analysis considers just the students in the upper grades of both populations 1 and 2.) They did less well in the area of fractions and proportionality (though still near the international mean) and less well than that in the area of measurement, estimation, and number sense.

The population 2 assessment was divided into six somewhat different topic areas:

- data representation, analysis, and probability
- fractions and number sense
- geometry
- algebra
- measurement
- proportionality

Only in the first two areas listed above—data representation, analysis, and probability, and fractions and number sense—did U.S. eighth graders score near the international mean. They scored below the international mean in geometry, algebra, measurement, and
proportionality. In the final year of secondary school the performance of U.S. students is even farther below international standards (U.S. Department of Education, 1998, pp. 17–18). The population 3
results can be difficult to evaluate because of sampling issues and other problems mentioned in Chapter 1. For example, of the 21 countries that participated in the general
mathematics and science literacy assessment, only 8 met the TIMSS guidelines for sample participation, and the United States was not among those 8 (Mullis et al., 1998, p. 3).

Table 2-1 Overview of Student Achievement Results from TIMSS

TABLE 2-2a Mean Number of Questions Answered Correctly by Upper-Grade Students in Population 1 for Countries Participating in Both the Population 1 and Population 2 TIMSS Assessments (102 items total)

Country           Mean items correct
Singapore         77.52
Korea             77.52
Japan             75.48
Hong Kong         74.46
Czech Republic    67.32
United States     64.26
Iceland           64.26
Canada            61.20
England           58.14
Cyprus            55.08
New Zealand       54.06
Norway            54.06
Portugal          48.96
Iran              38.76

TABLE 2-2b Mean Number of Questions Answered Correctly by Upper-Grade Students in Population 2 for Countries Participating in Both the Population 1 and Population 2 TIMSS Assessments (151 items total)

Country           Mean items correct
Singapore         119.29
Japan             110.23
Korea             108.72
Hong Kong         105.70
Czech Republic     99.66
Canada             89.09
New Zealand        81.54
Norway             81.54
United States      80.03
England            80.03
Iceland            75.50
Cyprus             72.48
Portugal           64.93
Iran               57.38

Source for Tables 2-2a and 2-2b: John Dossey, 1998, "Some Implications of the TIMSS Results for Mathematics Education," paper commissioned by the Continuing to Learn from TIMSS Committee.
Nevertheless, if potential difficulties with the data are kept in mind, the test scores still reveal much about the mathematical abilities of U.S. high school seniors. On the assessment of general
knowledge in mathematics—the level of mathematics deemed necessary to function effectively in society as adults—14 countries outperformed the United States, 4 countries were not significantly
different, and 2 countries were below. On the assessment of advanced mathematics—which was given to students who had taken or were taking precalculus, calculus, or Advanced Placement calculus in the
United States—11 countries outperformed the United States and no countries performed worse. The data reveal that U.S. eighth graders performed at a lower level compared with other countries than did
U.S. fourth graders, and relative performance declined again between the eighth and twelfth grades. For example, student performance in the area of measurement, which was already below average at
grade four, was the lowest recorded area of U.S. performance across the two populations in grade eight. In the areas of geometry and data representation, analysis, and probability, student
performance started above the international mean in grade four and moved to below it in grade eight. Mathematical literacy was not broken into subareas at the population 3 level.
Despite the often-expressed concern that the basics are slighted in U.S. education, U.S. students
did not falter on items calling for straightforward algorithmic work relative to their international peers. For example, U.S. fourth graders performed at or above the international mean on the
following questions:

- selecting the largest of 2735, 2537, 2573, and 2753
- selecting the answer to 6000-2369
- selecting what part of a figure was shaded
- finding the solution to a word problem involving decimal subtraction

At the same time, fourth graders were below the international mean in solving a number problem for a missing addend and using a ratio to calculate a larger proportional value, which are both considered more advanced skills in the United States.

At the grade eight level, U.S. students performed at or above the international mean in:

- selecting the answer to 6000-2369
- writing a fraction larger than 2/7
- writing a weight that might have rounded to a given number
- selecting the correct ratio of red to total paint in a mixture

However, eighth graders fell below the mean in
determining the portion of a purchase that belonged to one individual and in determining the number of one part of a proportion given the ratio of parts and the total. Overall, student performance in
grade eight in the areas of number and operation-based computations was at or above the international level. In other mathematical content areas, however, U.S. performance was much weaker. At the
grade eight level, several of the items indicated that U.S. students have a weak ability to conceptualize measurement relationships. For example, when asked which of four students had the longest
pace given a table of paces it took each student to measure a room's width, only 48 percent of U.S. students selected the student who used the fewest paces, versus the international average of 74
percent. Geometry performance showed perhaps the greatest relative change between grades four and eight. At grade four, U.S. student performance was over one-half of a standard deviation above the
international mean for the countries that participated in both the population 1 and population 2 assessments. By grade eight, it had decreased to almost one standard deviation beneath the mean for
this set of countries. At grade four, U.S. performance showed that students were near or above the mean in locating objects on a grid and in dealing with visual perception and line reflections. These
items were in large part items dependent on following simple directions and knowing the names of figures. By grade eight, U.S. students had fallen behind in identifying a rotated figure, identifying
necessary properties of a parallelogram, and selecting congruent triangles based on angle measurement and figure reflection properties. However, they remained at the international average in
determining which of five given points fell on a line determined by two other points when the points were given as ordered
pairs. At the eighth-grade level, the differences seemed to fall along the lines of being able to
use definitions and properties to reason about geometric figures and actions in the plane. At grade four the emphasis in the TIMSS assessments was on name recognition, where U.S. students did
relatively well. At grade eight, the emphasis was on understanding the properties of mathematical objects and the consequences of actions on those objects, where more U.S. students faltered. A
related observation about the skills conveyed in mathematics classes came from the TIMSS videotape study (Stigler and Hiebert, 1997; Stigler et al., 1999). Researchers used the tapes of eighth-grade
mathematics classes to compare the kinds of mathematical reasoning evident in the lessons. Using a reasonably generous definition of deductive reasoning, in which conclusions are drawn from axioms or
premises through explicit logical steps, no examples of such reasoning were found in the U.S. lessons. In contrast, there were instances of deductive reasoning in 53 percent of Japanese lessons and
10 percent of German lessons. This feature of U.S. lessons seems to point toward an emphasis on fact and definition and a lack of emphasis on deductive reasoning. The national standards in
mathematics and many sets of state standards call for students to achieve proficiency in exploring mathematical ideas, conjecturing, using logical reasoning, and solving nonroutine problems. The
relative weaknesses of U.S. students in areas of the TIMSS assessments related to these abilities indicate that many students are not yet achieving the standards' objectives.

SCIENCE ACHIEVEMENT

As in mathematics, the scores of U.S. students in science were relatively high on an international scale at the population 1 level and declined at the population 2 and 3 levels. U.S. third and fourth
graders scored among the highest of students in all TIMSS countries. At the population 2 level, U.S. students ranked with those in a band of countries close to the international mean. During the
final year of secondary school, a much greater number of nations scored significantly higher than did the United States. According to TIMSS, U.S. students are leaving high school with substantially
less proficiency in science than are students in many other countries. The calculated gains in student learning between adjacent grades also point to declining achievement compared to other
countries. As explained in the previous chapter, TIMSS sampled from the two adjacent grades with the most 9 year olds for population 1 and with the most 13 year olds for population 2. Therefore, it
is possible to look at how much students "gained" in learning between grades three and four and between grades seven and eight, even though the students tested actually were in successive grades
rather than being the same set of students tested in two successive years. For population 1 the United States ranked eleventh in achievement gain between grades three and four out of the 17 countries
following all of the sampling procedures (Martin et al., 1997, p. 29). This relatively modest gain from grade to grade compared to other countries foreshadows the relative decline in the U.S.
standing between populations 1 and 2. For
population 2, U.S. students ranked 26th in gain between grades seven and eight out of the 27
countries following all of the sampling procedures (Beaton et al., 1996b, p. 29). As with the mathematics scores, the science scores were broken down into a number of subject areas and subareas. For
population 1 the four main content areas were: earth science; life science; environmental issues and the nature of science; and physical science. One notable aspect of performance in these four areas
involves the early appearance of weaknesses in the physical sciences among U.S. students (Schmidt et al., 1999, p. 120). Even in population 1, where only Korean students scored significantly better
than U.S. students overall in science, the deficit of learning in the physical sciences among U.S. students is apparent. U.S. population 1 students did not score significantly above average in any of
the four subareas within the physical sciences, whereas Korean and Japanese students scored significantly higher in all four and Dutch students in three of the four. Another measure of the relative
weakness of U.S. students in the physical sciences involves the 12 performance tasks given at both the population 1 and population 2 levels (Harmon et al., 1997). All but one of the five science
tasks dealt with physical science topics, and U.S. students scored at or below the international average on all of these. For example, U.S. students did particularly poorly with a task involving
batteries at the eighth-grade level, where they scored 11 percentage points below the international average and 20 percentage points (or more) behind Singapore, England, Romania, and Switzerland, the
highest-scoring countries. At the population 2 level, the performance tests were broken down into five broad categories: earth science; life science; environmental issues and the nature of science; chemistry; and physics. Again, eighth-grade students in the United States notably lagged in their performance in physics. Population 2 students scored near the bottom of the distribution of 22 countries in
four of the six subareas within the physical sciences (Schmidt et al., 1999, pp. 125–127). At the population 3 level, the measured level of overall U.S. science performance was very low. Even
countries that explicitly track their students into different streams in upper secondary school—for example, academic, technical, vocational, and general—demonstrated higher student achievement for
mathematics and science literacy in the latter three streams than the United States does for its academic students (Mullis et al., 1998, p. 83). And for the physics test, which measured the
proficiency in physics of students who were completing or had completed a physics or advanced physics course, U.S. student achievement was the lowest of the 16 countries
participating. Even comparing the best U.S. students—the 1 percent of U.S. seniors taking Advanced
Placement physics courses—versus all of the students taking the advanced physics test in other countries (representing 10 to 15 percent of all students in their final year of secondary school), U.S.
students could do no better than low average (U.S. Department of Education, 1998, p. 52). These results clearly demonstrate that in the United States a considerably smaller percentage of students
meet high performance standards in science than do students in other countries. And even the small percentage of "elite" U.S. students do not excel compared to the larger proportion of "elites" in
other countries. One notable aspect of the U.S. science performance at all three levels is the relative lack of gender differences. Even at the population 3 level, which is the only level with a
statistically significant difference between genders, this difference is the lowest (along with that of Cyprus) among the 21 participating countries (Mullis et al., 1998, p. 52). Historically in the
United States, gender differences favoring males in science achievement have been considerably greater than is the case for the TIMSS results. Perhaps the results reflect the considerable attention
given to involving and supporting female students in the sciences. Indeed, TIMSS data for the United States show equal numbers of male and female students taking science in the twelfth grade,
although the specific courses taken are not indicated (Mullis et al., 1998, p. 90).

CONCLUSION

The 1998 draft revision of the mathematics standards issued by the National Council of Teachers of
Mathematics reaffirms the NCTM's commitment "to providing the highest-quality mathematics instructional program for all students." Similarly, the National Science Education Standards issued by the
National Research Council (1996) describe standards as "criteria to judge progress toward a national vision of learning and teaching science in a system that promotes excellence." By these measures
the results of TIMSS suggest that U.S. students are falling short. Although U.S. fourth graders compare favorably to their international peers, U.S. eighth graders and high school seniors achieve at
a lower level than do students in many other countries. The next three chapters of this report examine factors related to student learning in mathematics and science. Chapter 3 looks at selected
qualities of science and mathematics curricula. Chapter 4 discusses instructional practices, including examples of representative classrooms in different countries. Chapter 5 considers the support
systems available to teachers and students in seeking to achieve high standards. | {"url":"http://www.nap.edu/openbook.php?record_id=9605&page=19","timestamp":"2014-04-18T05:34:33Z","content_type":null,"content_length":"60510","record_id":"<urn:uuid:ff74f834-95e2-4df3-9f0c-c9d9f5b401db>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hingham, MA Algebra 2 Tutor
Find a Hingham, MA Algebra 2 Tutor
...After that, I'll show you how to find out and apply the rules for yourself. I was a modern European History major at Harvard, graduating magna cum laude in that field. I wrote an undergraduate
thesis involving original research that was much admired.
55 Subjects: including algebra 2, English, reading, algebra 1
...I have a good grasp of algorithms, data structures, and object-oriented programming principles, as well as general proficiency in software implementation. I am familiar with a few Java IDEs as
well, so I am able to tutor from a versatile standpoint. I received excellent scores in all areas on my first and only attempt at the SAT.
38 Subjects: including algebra 2, reading, English, physics
I am recent graduate of UMass Dartmouth who is looking to start his own tutoring business. I got interested in tutoring after I tutored 4 days a week at Global Learning Charter Public School in
New Bedford where I did my teaching practicum. I was very successful as a tutor at Global and I would like to continue tutoring on a part-time basis.
8 Subjects: including algebra 2, calculus, geometry, algebra 1
...My concentration in college, as well as my areas of expertise now, are in all areas of Algebra, Geometry, and Calculus. I will tutor at my house or at the residence of the student, whichever
is the most comfortable environment for the student and most conducive for learning. My tutoring style is pretty basic.
21 Subjects: including algebra 2, calculus, geometry, biology
I received my PhD in Chemistry from the University of Massachusetts, Amherst and am currently a research fellow at Mass General Hospital/ Harvard Med. I have taught Bio 101 at a local community
college recently and have experience teaching Chemistry from my graduate studies as well. I began tutori...
7 Subjects: including algebra 2, chemistry, biology, algebra 1
| {"url":"http://www.purplemath.com/hingham_ma_algebra_2_tutors.php","timestamp":"2014-04-21T13:10:51Z","content_type":null,"content_length":"24100","record_id":"<urn:uuid:65a15a6f-e182-4ded-a3ec-3816dfc00ad1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Frobenius Numbers by Lattice Points: Applications of Integer Optimization to a Classic Computational Problem -- from Wolfram Library Archive
Frobenius Numbers by Lattice Points: Applications of Integer Optimization to a Classic Computational Problem
Organization: Macalester College
Organization: Wolfram Research, Inc.
2006 Wolfram Technology Conference
Champaign IL
The Frobenius number f(A) of a finite set A of positive integers is the largest integer that is not representable as a nonnegative integer combination of entries in A. For example, Chicken McNuggets® come in serving sizes of 6, 9, and 20. One cannot buy 43 McNuggets, but any larger target is representable. Thus f(6, 9, 20) = 43. Finding the Frobenius number, and solving Frobenius instances (finding a solution x to A·x = t in nonnegative integers), are closely related to more general problems in integer optimization. For example, minimizing t - A·x subject to the constraints that this quantity is nonnegative and each entry of x is nonnegative will solve the instance problem, since the target t is representable iff 0 is the minimum.
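The McNuggets claim is easy to check mechanically. Here is a hedged brute-force sketch in Python (nothing like the lattice method the talk describes); it relies on the fact that once min(A) consecutive integers are representable, every larger integer is too:

```python
def frobenius_number(coins):
    """Largest integer not representable as a nonnegative integer
    combination of `coins` (assumes gcd(coins) == 1)."""
    step = min(coins)
    reachable = [True]            # 0 is representable (empty combination)
    run, n, last_gap = 1, 0, -1   # run = count of consecutive representables
    while run < step:             # `step` consecutive hits => done forever
        n += 1
        ok = any(n >= c and reachable[n - c] for c in coins)
        reachable.append(ok)
        run = run + 1 if ok else 0
        if not ok:
            last_gap = n
    return last_gap

print(frobenius_number([6, 9, 20]))  # -> 43, matching the text
```

This is fine for toy inputs; the point of the talk is handling sets whose entries are astronomically larger.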
In this talk we will show how a study of a certain lattice of integer vectors leads to a geometric object—the fundamental domain—that encodes enough information to find the Frobenius number. Getting this information about the domain requires developing several algorithms, many of which have had an impact on some of Mathematica's kernel functions. Letting n be the number of integers in A, we can find f(A) (with rare exceptions) when n is not too large, but with no restriction on the size of the entries in A. The algorithm seems to have complexity that, when n is fixed, is close to quadratic on average. The previous best algorithm for this problem was polynomial-time in the input size, but with a very large polynomial degree. Here is an example with n = 5 (in Mathematica 5.2).

Needs["NumberTheory`"]
A = {10000000001, 20100000000, 30000000017, 40000000039, 50999000101};
g = FrobeniusF[A]
  529710001010385
FrobeniusInstance[A, g + 100]
  {122, 900, 3, 8, 10000}

The main points are best illustrated when n = 3. Letting Z_{a1} be the integers mod a1 (the first entry of A), there is a natural map φ : Z³ → Z_{a1} given by φ(v) = v·A mod a1. The kernel of φ is the lattice L consisting of triples v such that v·A is divisible by a1. The classic relationship |Z³/L| = a1 raises the question of finding a geometric form for Z³/L; by this we mean choosing a representative vector for each equivalence class. We do this in a natural way, and call the resulting set of a1 vectors D, for fundamental domain; it sits in the first octant and includes the origin. There is then a tiling of Z³ induced by translating copies of D by vectors in L.
Here is a picture of the domain—the gray cubes—for a small three-element example. The trick then is to capture concisely the geometric structure of the domain, so that it can be easily described even when it has a1 points. We do this by getting the elbows of D, by which we mean vectors v that are not in D, but such that subtracting 1 in any nonzero coordinate yields a point that is in D; these are the nine tetrahedra in the diagram. It follows that D is obtained by deleting the cones defined by all the elbows. Having the elbows in hand allows us to get the corners of D, and one of the corners yields the Frobenius number of A.
Our algorithm for doing this is complicated, with many steps. But it appears that, when n is fixed, the average complexity is close to quadratic. Other consequences of our work are a very fast algorithm for n = 3 (it can handle million-digit inputs) and some new theoretical results about the Frobenius number of special sequences. An important part of our algorithm involves solving linear Diophantine equations to give important information about the domain D. Such problems are also of importance in their own right, with applications in computational number theory, operations research, and elsewhere. A particular application, closely related to the problem at hand, is solving a specific Frobenius instance: Given our set A and a target value t, we find a nonnegative integer vector x that solves A·x = t. We will discuss how this is done efficiently in Mathematica, and mention how similar Diophantine equations relate to finding Frobenius numbers.
Mathematics > Number Theory
Frobenius numbers, integer linear programming, lattices
Frobenius Numbers.nb (3.3 MB) - Mathematica Notebook [for Mathematica 5.2]
TechConf2006_ilp_talk_v5.nb (935.4 KB) - Mathematica Notebook [for Mathematica 5.2] | {"url":"http://library.wolfram.com/infocenter/Conferences/6449/","timestamp":"2014-04-18T15:51:45Z","content_type":null,"content_length":"39209","record_id":"<urn:uuid:2d015b0c-c2d1-4d6a-bf45-d9dc0a529706>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Semantics of Computation
There are two main aspects to a programming language - its syntax and its semantics. The syntax defines the correct form for legal programs and the semantics determines what they compute (if anything).
Programming Language = Syntax + Semantics
The treatment of syntax in programming languages has been very successful. Backus Naur Form (or Backus Normal Form) BNF was first used to describe the grammar of Algol-60. BNF is a meta language for
syntax. Once a good notation exists it is natural to use it mathematically to describe classes of grammar (e.g., regular, context free, context sensitive), to investigate parsing algorithms for
various classes and so on. Most programming languages from BNF's inception onwards have had "nice" syntax whereas most of those from before it have not - e.g., Fortran. Computer programs to
manipulate BNF were written and could check various properties of a grammar (e.g., LL(1)) or translate it into a parser, giving us parser generators.
The other popular meta language for grammars is the syntax diagram which is equivalent to BNF and there are easy translations from one to the other.
Models for semantics have not caught-on to the same extent that BNF and its descendants have in syntax. This may be because semantics does seem to be just plain harder than syntax. The most
successful system is denotational semantics which describes all the features found in imperative programming languages and has a sound mathematical basis. (There is still active research in type
systems and parallel programming.) Many denotational definitions can be executed as interpreters or translated into "compilers" but this has not yet led to generators of efficient compilers which may
be another reason that denotational semantics is less popular than BNF.
On Mathematical Models of Computation.
Finite State Automata and Turing Machines.
A finite state automata (FSA) consists of (i) a finite set of states S, one of which is the start state, (ii) a finite alphabet A, (iii) a transition function t:S×A->S.
In addition depending on how you want to define FSA's, either there is an accepting state and a rejecting state, or some of the states are labelled as accepting and some as rejecting.
The notion of FSA is very simple yet it clearly captures the essence of some things in computing. First of all a lot of hardware devices are FSAs - e.g., think of a lift controller. Secondly a lot of
software devices are FSAs - e.g., think of lexical analysis in a compiler.
Once we have a definition of a mathematical concept, such as FSA, we can do mathematics on it, e.g., the class of language that FSAs accept is exactly the regular languages. We can compose two or
more FSAs and form their product etc.
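The triple (S, A, t) translates directly into code. A minimal sketch, where the particular machine (accepting binary strings with an even number of 1s) is an invented illustration, not from the text:

```python
# States S, alphabet A = {"0", "1"}, transition function t : S x A -> S.
S = {"even", "odd"}
start = "even"
accepting = {"even"}
t = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

def accepts(word):
    state = start
    for symbol in word:
        state = t[(state, symbol)]   # one step of the transition function
    return state in accepting

print(accepts("110"), accepts("1101"))  # True False
```

The language accepted here is regular, consistent with the remark that FSAs accept exactly the regular languages.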
A Turing machine is an FSA with an unbounded tape that it can read from and write to; so now the transition function t:S×A->S×A×{fwd,rev}. This captures much of the essence of a computer: The FSA is
the control unit and we can only build FSAs with digital devices (gates etc.). The tape is the store but note that it is unbounded. Real computers have a finite, if large, store, i.e. still an FSA.
However we can count backing tapes, disc cassettes and other off-line media and in principle we could manufacture more if the machine needed. In other words, the tape or store is well described for
many purposes as being unbounded, although the maximum amount of tape that can be read or written within t steps is limited to t.
The Turing machine is very simple and is easily shown to be equivalent to an actual physical computer in the sense that they can compute exactly the same set of functions. A Turing machine is much
easier to analyse and prove things about than any real computer. However, Turing machines are unpleasant to program, to say the least. Therefore they do not provide a good basis for studying
programming and programming languages.
Lambda Calculus.
Lambda Calculus is a simple mathematical system:-
<exp> ::= <identifier> |
          λ <identifier>.<exp> | --abstraction
          <exp> <exp> |          --application
          ( <exp> )

-- Syntax of Lambda Calculus --
It turns out that the Lambda Calculus' four lines of syntax, plus conversion rules, are sufficient to define booleans, integers, conditional expressions (if), arbitrary data structures and
computations on them. It is therefore a good candidate for being considered the prototype programming language and has inspired Lisp and the modern functional programming languages.
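To give a flavour of how booleans and integers can be defined from abstraction and application alone, here is a sketch of the standard Church encodings written with Python lambdas (the names TRUE, SUCC, etc. are conventional, not from the text):

```python
TRUE  = lambda a: lambda b: a                  # λa.λb.a
FALSE = lambda a: lambda b: b                  # λa.λb.b
IF    = lambda c: lambda a: lambda b: c(a)(b)  # selection by application

ZERO = lambda f: lambda x: x                   # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)       # decode for printing only

TWO = SUCC(SUCC(ZERO))
print(to_int(TWO), IF(TRUE)("yes")("no"))  # 2 yes
```

Everything above is built from the grammar's two constructs, abstraction and application; the decoder to_int is just a convenience for observing results.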
Lambda Calculus was designed to study function definition, function application, parameter passing and recursion. As originally presented, its semantics are defined by its conversion rules and these
are purely syntactic - they could be implemented in a decent text editor or macro-processor. This is not considered abstract enough.
A potential problem with syntactic definitions is that they can be meaningless (no meaning) or ambiguous (many possible meanings). This is one reason why we would prefer programs to stand for or to
denote some abstract objects which we could reason about.
Problems of Circularity.
Here are few well worn paradoxes:-
• This statement is false.
• 1: Statement 2 is true. 2: Statement 1 is false.
• The barber is a man who shaves all men who do not shave themselves. Who shaves the barber?
• Let S = the set of all sets that are not members of themselves. S = {T | not(T in T)}. Is S a member of S or not?
Paradoxes are a bit of fun, but recall the proof of the halting problem. Suppose that we can write a program `halts', then we can write another program `paradox':-
procedure paradox(q);
   procedure halts(p, d):boolean;
   {returns true if program p will
    halt when run on data d, and
    returns false otherwise}
   ... impossibly cunning code ...
begin
   while halts(q, q) do skip
end
A description of paradox is:- Paradox is a program which halts when applied to programs which loop when applied to themselves. Now carry out the following substitutions:-
1. Replace `loop' by `do not halt':- Paradox is a program which halts when applied to programs which do not halt when applied to themselves.
2. Replace `halt(s) when applied to' by `shave(s)':- Paradox is a program which shaves programs which do not shave themselves.
3. Replace `program(s)' by `man (men)' and `which' by `who':- Paradox is a man who shaves men who do not shave themselves.
4. Replace `paradox' by `the barber':- The barber is a man who shaves men who do not shave themselves.
Sound familiar?
Semantics Based on Abstract Sets.
We prefer to have semantics based on abstract objects rather than on purely syntactic rules which might be meaningless or ambiguous, or on concrete objects which might be implementation dependent or
superseded. An almost too familiar example is the language of Numerals which stand for integers:-
V : Numerals -> Int --V is a valuation function
<Numeral> ::= <Numeral><Digit> | <Digit>
<Digit> ::= 0|1|2|3|4|5|6|7|8|9
V[0] = 0
V[1] = 1
V[2] = 2
V[3] = 3
V[4] = 4
V[5] = 5
V[6] = 6
V[7] = 7
V[8] = 8
V[9] = 9
n :Numeral
d :Digit
V[nd] = V[n]*10 + V[d]
Semantics of Decimal Numerals
Note that the numerals `123', `0123' and `00123' all stand for the integer 123. `123' is a string and 123 is an integer. Have you ever seen an integer?
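The valuation function V transcribes almost line for line into code. A sketch that maps numeral strings to the integers they denote:

```python
DIGIT = {str(d): d for d in range(10)}       # V[0] = 0, ..., V[9] = 9

def V(numeral: str) -> int:
    if len(numeral) == 1:                    # <Numeral> ::= <Digit>
        return DIGIT[numeral]
    n, d = numeral[:-1], numeral[-1]         # <Numeral> ::= <Numeral><Digit>
    return V(n) * 10 + DIGIT[d]              # V[nd] = V[n]*10 + V[d]

print(V("123"), V("0123"), V("00123"))  # 123 123 123
```

The three distinct numeral strings all denote the same abstract integer, which is exactly the point made above.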
Potential Problems with Semantics based Naively on Set Theory.
We would like our programming languages to manipulate some set of values which includes at least integers and booleans say:-

Value = Int + Bool + ...

We can model arrays, records, lists, and trees etc as products and sums of other values:-

Value = Int + Bool + Value×Value + ...

We also want functions, Value->Value, to be "first class values":-

Value = Int + Bool + Value×Value + (Value->Value) + ...

However there is a well known result in set theory that if Value->Value is the set of all mappings from Value to Value then Value->Value is strictly larger than Value and so the equality above cannot hold.
The Solution.
The collection of computable functions, which is what Value->Value must be in the context of programming languages, must be much smaller than the set of all mapping from Value to Value. There are
some well known mappings that are not computable, for example, halts :(Value->Value)×Value->Bool. This turns out to be our salvation.
Scott developed a theory of lambda calculus in which (computable) functions (->) turn out to be monotonic and continuous under a certain partial order. (This has little to do with the usual numerical
notions of continuity and monotonicity.)
From a mathematical point of view a function is just a special kind of relation. A binary relation on sets A and B is a subset of A×B. A function, f, from A to B is a relation such that for each `a'
in A, there is at most one entry <a,b> in f, but for each b' there could be zero, one or many entries <a',b'>, <a",b'> etc., i.e. functions are in general many to one.
{<0,1>, <1,1>, <2,2>, <3,6>, <4,24>, ...}
-- e.g., The Factorial Function --
The problem is to relate an equation or program for factorial to this abstract object.
1. let factorial n = if n=0 then 1
else n*factorial(n-1)
2. let F f n = if n=0 then 1
else n*f(n-1)
let factorial' = Y F
-- e.g., Factorial Programs --
The first program is a self-referential equation which might have 0, 1, 2, several, or even infinitely many solutions. It turns out that this equation does have infinitely many solutions! Fortunately
it has a special one, the least fixed-point, and this is the natural meaning of the equation.
f[0] = λ n.undefined --e.g., loop
= { } the undefined function
f[1] = F f[0] = λ n.if n=0 then 1 else n*f[0](n-1)
= {<0,1>}
f[2] = F f[1] = λ n.if n=0 then 1 else n*f[1](n-1)
= {<0,1>, <1,1>}
f[3] = F f[2]
={<0,1>, <1,1>, <2,2>}
f[4] = F f[3]
={<0,1>, <1,1>, <2,2>, <3,6>}
None of the functions f[0], f[1], ... is the factorial function, but note that they all approximate it, in the sense that they agree with it where they are defined. The factorial function is the
limit of this sequence of functions. This is where the continuity mentioned above is needed; we do not want any "jumps" at the limit.
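The approximation sequence above can be computed by representing each partial function by its graph (here a Python dict) and iterating the functional F; a sketch:

```python
def F(f):
    """One application of the functional F: the result is defined at 0
    (with value 1), and at n+1 wherever f was defined at n."""
    g = {0: 1}
    for n in f:
        g[n + 1] = (n + 1) * f[n]
    return g

f = {}                     # f[0], the wholly undefined function
for _ in range(5):
    f = F(f)               # builds f[1], f[2], ..., f[5]
print(f)  # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24}
```

Each iterate is a strictly better approximation to factorial; the factorial function itself is the limit of the chain, never any finite iterate.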
Incidentally, the infinitely many solutions for factorial come from the ability to specify an arbitrary result for f(-1), say. This fixes the results for other negative inputs. This is an arbitrary
choice and should not be part of the factorial function which is the least defined function that agrees with all f[i] and is undefined on negative inputs.
L. Allison. A Practical Introduction to Denotational Semantics. Cambridge University Press, 1986 [programs].
D. S. Scott. Logic and programming languages. CACM, 20(9), pp.634-641, Sept 1977.
R. D. Tennent. The denotational semantics of programming languages. CACM, pp.437-453, Aug 1976.
© L.A., 1996, 1997, 2001 | {"url":"http://www.csse.monash.edu.au/~lloyd/tilde/Semantics/","timestamp":"2014-04-20T13:22:27Z","content_type":null,"content_length":"28562","record_id":"<urn:uuid:e52f5be6-6927-4b75-af8b-9254ede8366f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can anyone help me with this?
June 2nd 2012, 06:16 PM #1
Jun 2012
san diego
Can anyone help me with this?
A publishing project will require a calculation of the book's spine width. The formula reads thusly: "Total page count divided by paper thickness." A more precise mathematical spine width formula is: "page count/2 x paper gram weight x paper volume / 1000 - rounded up to nearest whole number."
Re: Can anyone help me with this?
What is it that you need help with? Is there some question you are trying to answer, or problem you are trying to solve with this information?
Re: Can anyone help me with this?
1. A publishing project will require a calculation of the book's spine width. The formula reads thusly: "Total page count divided by paper thickness." A more precise mathematical spine width
formula is: "page count/2 x paper gram weight x paper volume /1000 - rounded up to nearest whole number."
So a 100-page book using paper which is 80 gsm with a volume of 1.8 would require a spine width of 8 mm. I am trying to figure out how this answer was achieved.
Re: Can anyone help me with this?
$\frac{100}{2}\times80\times\frac{1.8}{1000}=7.2$, and 7.2 rounded up is 8.
Re: Can anyone help me with this?
Just substitute the given quantities into the formula (assuming that the volume is given in the correct units):
$\frac{\mathrm{pages}}2\times\mathrm{grammage} \times\frac{\mathrm{volume}}{1000}$
$= \frac{100}2\cdot80\cdot\frac{1.8}{1000} = 7.2\ \mathrm{mm}$
If we're rounding to a whole number, we'll need a spine width of 8 mm (7 is closer, but it isn't wide enough).
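For completeness, the calculation as a small function; note that the rounding-up step needs ceil, not ordinary rounding (a sketch):

```python
import math

def spine_width_mm(pages, grammage_gsm, volume):
    """pages/2 * paper gram weight * paper volume / 1000, rounded up."""
    return math.ceil(pages / 2 * grammage_gsm * volume / 1000)

print(spine_width_mm(100, 80, 1.8))  # -> 8
```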
Re: Can anyone help me with this?
Thank you so much for your help with this.
| {"url":"http://mathhelpforum.com/algebra/199582-can-anyone-help-me.html","timestamp":"2014-04-18T14:02:32Z","content_type":null,"content_length":"40705","record_id":"<urn:uuid:49cee45b-294b-4a7a-8ceb-20da47807d08>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
If EF = 5x + 15, FG = 53, and EG = 143, find the value of x. The drawing is not to scale. http://tinypic.com/r/fbczn/6
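Assuming the drawing shows F between E and G (the linked image is unavailable), the Segment Addition Postulate gives EF + FG = EG, and the arithmetic is one line:

```python
# (5x + 15) + 53 = 143  =>  5x = 143 - 53 - 15  =>  x = 15
x = (143 - 53 - 15) / 5
print(x)  # 15.0
```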
| {"url":"http://openstudy.com/updates/5081f7f4e4b0dab2a5ebbba4","timestamp":"2014-04-16T19:59:05Z","content_type":null,"content_length":"44082","record_id":"<urn:uuid:e45bf195-44b7-4457-ad06-5df4dc546e5d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] logarithmic question
April 7th 2009, 11:59 AM #1
Nov 2008
[SOLVED] logarithmic question
I have used base 10
and I'm at x = (log 5 - log 2) over (2 log 3 - log 5/2)
I expanded the bottom and cancelled -log 2
so I have log 5 over 2 log 3 - log 5
I can't cancel the log 5 because you can't make them have the same sign (neg. pos.), correct?
please help
thank you.
Multiply out that two to give 6^{2x}
Taking logs of both sides (I will choose base e) and using the rule
$k ln(x) = ln(x^k)$
$(x+1)ln(5) = 2x ln(6)$ (using base 10 would give you $(x+1)log_{10}5 = 2xlog_{10}6$ and you can solve it in the same way)
Can you solve it from there? I don't see why you'd use the change of base rule for this problem
Last edited by e^(i*pi); April 7th 2009 at 12:13 PM. Reason: Completing Answer inside spoiler tags
I thought you couldn't multiply something out if it has a different power?
i.e. 2*2^2 = 8, not 16
You can multiply the base but not the exponent. In the example you just put the base is 2 which would give 8. It is the same as 2^2 + 2^2 = 4 + 4 = 8.
If I were to do 3(2^2) which is obviously 12 it would be 2^2+2^2+2^2 = 4+4+4 = 12.
If it were a(b^n) then I'd get a·b^n = b^n + b^n + ... + b^n, with a terms in the sum
hm. my answer comes out close to yours. But the answer they give is
x= log2-log5 over log5-2log3
$<br /> 5^{x+1}=2(3^{2x})<br />$
taking logs of both sides
(x+1)log(5) = log(2(3^{2x}))
we can simplify the right hand side by using the law $log{(ab)} = log(a) + log(b)$ with a = 2 and b = 3^{2x}. Also expanding the left:
$xlog(5) + log(5) = log(2) + 2xlog(3)$
combine x terms:
$-2xlog(3) + xlog(5) = log(2)-log(5)$
$x(-2log(3) + log(5)) = log(2)-log(5)$
and so $x = \frac{log(2)-log(5)}{log(5)-2log(3)}$
Last edited by e^(i*pi); April 7th 2009 at 12:32 PM. Reason: sign error
Thank you, I appreciate it
I did it a (slightly different way) and got
log5-log2 over 2log3-log5
is that the same thing? I know sometimes with fractions you can swap positions and negative signs and have the same value in a different order.
your way makes sense though so thanks again.
Yeah, that's the same answer. It's the same as taking the negative of top and bottom which would cancel.
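As a numerical sanity check of the closed form derived above (a Python sketch added here; it is not part of the original thread):

```python
import math

# x = (log 2 - log 5) / (log 5 - 2 log 3), the closed form derived in the thread
x = (math.log(2) - math.log(5)) / (math.log(5) - 2 * math.log(3))

# The rearrangement mentioned afterwards: negate top and bottom -- same value
x_alt = (math.log(5) - math.log(2)) / (2 * math.log(3) - math.log(5))

print(x)                                             # roughly 1.559
print(math.isclose(x, x_alt))                        # True
print(math.isclose(5 ** (x + 1), 2 * 3 ** (2 * x)))  # True: both sides agree
```

Any log base works, as the thread notes, since the base cancels in the ratio.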
| {"url":"http://mathhelpforum.com/algebra/82737-solved-logrithimic-question.html","timestamp":"2014-04-17T09:11:22Z","content_type":null,"content_length":"60154","record_id":"<urn:uuid:e4325453-169e-4fa3-8112-927ea1f67abb>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alpha Omega Publications Horizons Math, Grade 2, Complete Set
No more hours of boring, endless repetition - your kids will actually thank you for this interactive math curriculum! Each level has two student workbooks, but the teacher handbook is the main
component of the program. All instruction is provided through one-on-one teacher instruction, which is the biggest difference between Horizons and the self-guided Lifepacs.
Another big focus in Horizons is hands-on learning using a variety of manipulatives, which can mostly be found around the house: pencils, pipe cleaners, play or real money, thread, a timer, clock,
yardstick, etc.
Novice home-educators should not be intimidated by the amount of teacher involvement as it is carefully laid out in the beginning of each year's teacher handbook. The teacher handbook also provides a
variety of teaching suggestions and supplemental activities for additional practice.
The concepts covered in Horizons 2 include:
• Two-digit Addition and Subtraction with Carrying and Borrowing
• Multiplication Facts for 1-10
• Place Value
• Number Order
• Sets
• Correspondence
• Cardinal and Ordinal Numbers
• Graphs
• Fractions
• Measurement
• Temperature
• Estimation
• Ratio
• Perimeter
• Volume
• Decimals in Money
| {"url":"http://answers.christianbook.com/answers/2016/product/20060/alpha-omega-publications-horizons-math-grade-2-complete-set-questions-answers/questions.htm","timestamp":"2014-04-20T11:26:41Z","content_type":null,"content_length":"80421","record_id":"<urn:uuid:d433bf31-274a-4667-a676-2bfaaacba77b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edgemoor, DE SAT Math Tutor
Find an Edgemoor, DE SAT Math Tutor
...Although I can adjust to any curriculum and will help students meet the requirements of their teachers, I use my own well-tested methods to give students different perspectives that may enable
them to understand the material more easily. Students must do their own homework and write their own pa...
32 Subjects: including SAT math, English, chemistry, writing
Well hello there! My name is Christie. I'm a 24-year-old college graduate from Arcadia University, and I am currently pursuing a Master's in Business Administration with the University of
35 Subjects: including SAT math, reading, writing, English
...I thought back on my first teaching experience back when I was in college. I took part in this program where students from my university taught a group of public school children how to make a
model rocket and how it worked. I remember I got the chance to instruct a small group of children on the names of all the parts of the rocket and a basic explanation of how they functioned.
16 Subjects: including SAT math, Spanish, calculus, physics
...Most people are able to fall back on these tactics in test scenarios so that they not only understand the method to solve the problems, but have a back up plan in case they suddenly blank. Of
course there will always be some memorization involved, but I try to keep that to a minimum! I am flexi...
10 Subjects: including SAT math, geometry, ASVAB, algebra 1
After studying to be an engineer, I fell in love with the intellectual rigor and challenge of analytic philosophy, with its heavy emphasis on logic and reasoning. I taught full time for five
years at the College of William and Mary, leaving to join my wife in Delaware. Since moving, I have been writing extensively while working part time as an SAT tutor.
22 Subjects: including SAT math, reading, English, writing
Related Edgemoor, DE Tutors
Edgemoor, DE Accounting Tutors
Edgemoor, DE ACT Tutors
Edgemoor, DE Algebra Tutors
Edgemoor, DE Algebra 2 Tutors
Edgemoor, DE Calculus Tutors
Edgemoor, DE Geometry Tutors
Edgemoor, DE Math Tutors
Edgemoor, DE Prealgebra Tutors
Edgemoor, DE Precalculus Tutors
Edgemoor, DE SAT Tutors
Edgemoor, DE SAT Math Tutors
Edgemoor, DE Science Tutors
Edgemoor, DE Statistics Tutors
Edgemoor, DE Trigonometry Tutors
Nearby Cities With SAT math Tutor
Bellefonte, DE SAT math Tutors
Boothwyn SAT math Tutors
Carneys Point Township, NJ SAT math Tutors
Carneys Point, NJ SAT math Tutors
Feltonville, PA SAT math Tutors
Greenville, DE SAT math Tutors
Lower Chichester, PA SAT math Tutors
Minquadale, DE SAT math Tutors
Talleyville, DE SAT math Tutors
Twin Oaks, PA SAT math Tutors
Upper Chichester, PA SAT math Tutors
Village Green, PA SAT math Tutors
West Bradford, PA SAT math Tutors
West Deptford, NJ SAT math Tutors
Wilmington, DE SAT math Tutors | {"url":"http://www.purplemath.com/Edgemoor_DE_SAT_math_tutors.php","timestamp":"2014-04-17T21:45:37Z","content_type":null,"content_length":"24312","record_id":"<urn:uuid:1157070c-6709-46b4-a5e2-7fcc4d7698de>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Surveys in Mathematics and its Applications
Surveys in Mathematics and its Applications is a free electronic journal. It is open to all mathematical fields (including Statistics and mathematical applications to Computer Science, Economics,
Physics or Engineering).
The journal publishes both surveys/reviews and research articles. The survey papers should present an accessible, mature and clear review/overview of recent advances in a well-specified topic.
Research articles accompanied by an accessible and well-constructed review of the corresponding topic are strongly welcomed.
We expect enlightening expositions on quite technical themes. We hope that these surveys will help the readers of this journal to reinterpret results from areas outside their own in a new perspective, to learn techniques developed in other fields, to apply them and to find out new connections.
In order to maintain high standards all papers will be peer reviewed.
Abstracting and Indexing: The articles of this journal are indexed/reviewed in: MathSciNet (Mathematical Reviews), Zentralblatt MATH, Open J-Gate, Genamics JournalSeek, Directory of Open Access
Journals (SPARC Europe Seal for Open Access Journals), Intute, EMIS ELibM, AMS Digital Mathematics Registry and EBSCO.
All Volumes : 1(2006), 2(2007), 3(2008), 4(2009),
5(2010), 6(2011), 7(2012), 8(2013) (current) | {"url":"http://www.utgjiu.ro/math/sma/","timestamp":"2014-04-18T23:15:13Z","content_type":null,"content_length":"7651","record_id":"<urn:uuid:b875b9f4-7c03-498b-8e7b-dd0e8c225e6b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric decomposition of J(11)
Let $N$ be a prime number. Let $J(N)$ be the Jacobian of $X_\mu(N)$, the moduli space of elliptic curves with $E[N]$ symplectically isomorphic to $Z/NZ \times \mu_N$. Over the complex numbers, $J(N)$ is isogenous to a product of irreducible abelian varieties. Is there a way of describing these abelian varieties using $J_1(M)$ and $J_0(M)$? Specifically, what can we say about the decomposition of $J(11)$?
Note that $X_\mu(N)$ is birationally isomorphic as a curve to the fibre product $X_0(N^2) \times_{X_0(N)} X_1(N)$. (This is because $\Gamma(N)$ is conjugate to $\Gamma_0(N^2) \cap \Gamma_1(N)$, and
the group generated by $\Gamma_0(N^2)$ and $\Gamma_1(N)$ is $\Gamma_0(N)$.) Therefore, $J_1(N)$ and $J_0(N^2)$ both appear among the factors of $J(N)$. In fact, we know that $J(7)$ is three copies of $J_0(49)$. For N=11, the above fibre product to $X_0(121)$ is an unramified covering. If I were going to make a guess on what $J(11)$ is going to decompose as, I would guess that it is five
copies of $J_0^{new}(121)$ and six copies of $J_1(11)$. Is that reasonable? Is there a geometric way of arguing this?
Also, I'm guessing that the question about the $SL_2(F_N)$ decomposition of the space of cusp forms is related to this, and Jared Weinstein's thesis will come into play here, but I'm not sure how.
nt.number-theory modular-forms ag.algebraic-geometry
The curve $X(N)$ in its standard definition as a moduli space is not geometrically connected over $\mathbf{Q}$. You must mean to use the variant $X_{\mu}(N)$ classifying "twisted" full level-$N$
structures of type $\mathbf{Z}/(N) \times \mu_N$ as symplectic spaces? And are you interested in the simple isogeny factors over $\mathbf{Q}$, or working geometrically (which might come to the same
thing, but only after the fact)? Also, the "fiber product" description looks non-smooth near the cusps. It is at best birational, no? Anyway, representation theory should be better than geometry
here. – BCnrd Sep 28 '10 at 1:50
You are definitely right. I meant X(N) to be the twisted full level N structure, and I changed the wording to reflect that. On the other hand, I think the fiber product is actually smooth at the
cusps, since the covering of X_0(N) by X_1(N) is unramified at the cusps. I think for primes N larger than 5, that is an actual isomorphism. I also agree with you that representation theory is
probably better than geometry for this type of problems. However, I was trying to figure out what's happening geometrically, and there seems to be something there. – Soroosh Sep 29 '10 at 19:39
Also, I'm mostly interested in this geometrically. So, I want to know the simple isogeny factors over C. – Soroosh Sep 29 '10 at 21:10
Could you give some details on how you get $X_{\mu}(N)$ as a fibered product ? – François Brunault Sep 30 '10 at 1:32
Thanks for the details on the fibered product. I agree that $J_1(N)$ is a factor of $J(N)$, but I am a little bit confused about $J_0(N^2)$ because for $N=11$ not all elliptic curves of conductor
$121$ appear in $J(11)$. Maybe this is because we are looking things over $\mathbf{C}$ and not over $\mathbf{Q}$ ? – François Brunault Oct 2 '10 at 15:17
2 Answers
The decomposition of $J(11)$ was known (at least over $\mathbf{C}$) to Hecke. It turns out that the Jacobian of the compactification of $\Gamma(11) \backslash \mathfrak{h}$ is isogenous to a product of 26 elliptic curves. All this is very well explained in the following article:

MR0463118 (57 #3079) Ligozat, Gérard. Courbes modulaires de niveau $11$. (French) Modular functions of one variable, V (Proc. Second Internat. Conf., Univ. Bonn, Bonn, 1976), pp. 149--237. Lecture Notes in Math., Vol. 601, Springer, Berlin, 1977. http://www.springerlink.com/index/6722kj1764m8g50t.pdf

The idea is to look at the natural representation of the group $\mathrm{PSL}_2(\mathbf{F}_p)$ on the space of cusp forms $S_2(\Gamma(p))$. So, you're right that there is a geometric way to see this.

If I remember well, there are, among the factors of $J(11)$, elliptic curves of conductor $121$ which are $11$-isogenous to themselves. These can be seen as rational points of the modular curve $X_0(11)$ which are not cusps (there are three such points).

EDIT: I remembered somewhat incorrectly. The three non-cuspidal points of $X_0(11)(\mathbf{Q})$ correspond to the elliptic curves 121B1, 121C1 and 121C2. The subgroups of order $11$ of these curves are described as follows: the elliptic curve 121B1 has CM by $\mathbf{Z}[\frac{1+i\sqrt{11}}{2}]$, so it is $11$-isogenous to itself, whereas 121C1 and 121C2 are $11$-isogenous to each other. Using the notations of Cremona's tables, the Jacobian of the compactification of $\Gamma(11)\backslash \mathfrak{h}$ is then isogenous to $(11A)^{11} \times (121B)^5 \times (121C)^{10}$.
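A quick numerical cross-check (a Python sketch added here, not part of the original thread): for a prime p >= 5 the genus of X(p) is g = 1 + (p^2 - 1)(p - 6)/24, and the dimension of J(p) equals this genus, so the 26 elliptic-curve factors and the exponents 11 + 5 + 10 in the decomposition quoted above should all match.

```python
def genus_X(p):
    # Genus of the modular curve X(p), valid for primes p >= 5.
    return 1 + (p * p - 1) * (p - 6) // 24

print(genus_X(7))                  # 3: X(7) is the Klein quartic
print(genus_X(11))                 # 26: the 26 elliptic curves mentioned above
print(genus_X(11) == 11 + 5 + 10)  # True: exponents of (11A), (121B), (121C)
```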
Ernst Kani was very interested in this and related questions around 2000. I remember implementing an algorithm for him in around 2000 when I visited Essen to compute a basis of $S_2(\Gamma(p))$ in terms of $\Gamma_1(p^2)$. I'm sure Kani knows the decomposition of $J(N)$ for small $N$, since I vaguely remember talking about it with him, but I didn't explicitly see it in a cursory glance through the papers at http://www.mast.queensu.ca/~kani/. You may want to look at the papers up there from around 2000, since many mention X(11) explicitly. You might also just email Kani.
I posted the code mentioned above that I wrote here: wstein.org/tmp/kani.m – William Stein Sep 28 '10 at 14:13
| {"url":"http://mathoverflow.net/questions/40241/geometric-decomposition-of-j11/40547","timestamp":"2014-04-18T18:56:31Z","content_type":null,"content_length":"65020","record_id":"<urn:uuid:c812a24c-441d-4615-8390-e0e323d2d7cd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
Between Level Covariance Matrix is NPD
Anonymous posted on Tuesday, March 29, 2005 - 8:38 am
First, let me congratulate you all on keeping this great resource up and running. I realize it requires a substantial time investment on your part and it is greatly appreciated!
I am a relative novice at multi-level SEM and wanted to get your thoughts on an error message that I am getting when running a model in Mplus.
The model I am running is as follows (Type=twolevel; estimator=muml):
Within level: 3 constructs each measured by 3 continuous indicators (the 3 constructs are allowed to covary but no directional relationships are specified).
Between level: No between-level factors are specified—3 observed independent variables are utilized to predict the between level random intercepts (my focus is on testing the relationship between the
observed independent variables and the cluster means).
Number of level 2 (between) units=30 (unbalanced groups)
Number of level 1 (within) units=300
After running the model, it converges and I get the needed parameter estimates. The problem is that I am also getting an error message which reads: "WARNING: THE RESIDUAL COVARIANCE MATRIX (PSI) ON THE BETWEEN LEVEL IS NOT POSITIVE DEFINITE."
So, my questions are:
1. Is this NPD error message a result of my small between level sample size (n=30)?
2. What are the implications of this message as it pertains to the validity of the parameter estimates I get? (In other words, what statistically valid conclusions can I reach if I do nothing to
correct this problem?).
3. Is there any way of correcting this problem that does not involve constraining my between level covariances to 0?
I’ve run multiple variations of the model and cannot seem to circumvent this problem. Any help you could offer would be greatly appreciated.
Linda K. Muthen posted on Saturday, April 02, 2005 - 8:40 pm
It does not sound like you are using the most recent version of Mplus. If you were, you would be getting an error message that would point you more directly to the problem. You should check for
negative residual variances of your factors. If you do not have these, you should ask for TECH4 to see if you have variables with correlations greater than one. You need to take this message
seriously as it points to a problem with your model. It is often the case that there is less variation on the between level and that you may be able to obtain fewer factors on the between level. 30 clusters should be sufficient, although it is at the minimum.
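Outside of Mplus, the check behind this warning is generic linear algebra: a valid covariance matrix must be positive definite, i.e. all leading principal minors positive (Sylvester's criterion). A minimal Python sketch, with made-up 2x2 matrices for illustration (this is not Mplus syntax):

```python
def det(m):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j, a in enumerate(m[0]))

def is_positive_definite(cov):
    # Sylvester's criterion: every leading principal minor must be positive.
    return all(det([row[:k] for row in cov[:k]]) > 0
               for k in range(1, len(cov) + 1))

ok  = [[1.0, 0.3], [0.3, 1.0]]   # correlation 0.3: admissible
bad = [[1.0, 1.2], [1.2, 1.0]]   # implied "correlation" 1.2: not positive definite

print(is_positive_definite(ok))   # True
print(is_positive_definite(bad))  # False -- the situation the warning flags
```

A correlation greater than one in TECH4 output, as mentioned above, produces exactly the second kind of matrix.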
| {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=12&page=603","timestamp":"2014-04-18T03:01:54Z","content_type":null,"content_length":"19208","record_id":"<urn:uuid:26b6a9e9-9c54-4839-85c3-779bc69d5934>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modular dynamics in double cones
Brunetti, Romeo and Moretti, Valter (2010) Modular dynamics in double cones. UNSPECIFIED. (Unpublished)
We investigate the relation between the actions of Tomita-Takesaki modular operators for local von Neumann algebras in the vacuum for free massive and massless bosons in four dimensional Minkowskian
spacetime. In particular, we prove a longstanding conjecture that says that the generators of the mentioned actions differ by a pseudo-differential operator of order zero. To get that, one needs a
careful analysis of the interplay of the theories in the bulk and at the boundary of double cones. After introducing some technicalities, we prove the crucial result that the vacuum state for massive
bosons in the bulk of a double cone restricts to a K.M.S. state at its boundary, and that the restriction of the algebra at the boundary does not depend anymore on the mass. The origin of such result
lies in a careful treatment of classical Cauchy and Goursat problems for the Klein-Gordon equation as well as the application of known general mathematical techniques, concerning the interplay of
algebraic structures related with the bulk and algebraic structures related with the boundary of the double cone, arising from QFT in curved spacetime. Our procedure gives explicit formulas for the
modular group and its generator in terms of integral operators acting on symplectic space of solutions of massive Klein-Gordon Cauchy problem.
| {"url":"http://eprints.biblio.unitn.it/1896/","timestamp":"2014-04-19T01:48:30Z","content_type":null,"content_length":"18146","record_id":"<urn:uuid:1b6b368b-77d8-46dd-b14a-218bf317e894>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
okay I took this test I got these questions wrong I was wondering if any of you could help me better understand what it is I am supposed to do
here is one of them: find the percent increase from 125 to 130

You must be careful with these kinds of problems: because it increases from 125 to 130, you'll have to compare the increase of 5 with the original number (125). So 125 stands for 100%. Some find this formula handy: \[change=\frac{ new-old }{ old }\cdot100\%\]

wow okay thank you. how about how to write 0.347 as a percent?

Always convert a fraction to a percentage by multiplying with 100

like 0.347*100

Yes, so 34.7%

and how would I write 65% as a fraction or mixed number in the simplest form?

Do the opposite: divide by 100, getting \[\frac{ 65 }{ 100 }\] then simplify...

what about when something asks: 159 is 30% of what number? is that like when you said multiply by one hundred? sorry, by the way, I struggle in math and I am so happy you are helping me understand it

If 159 is 30% of a number and you want to know what that number is: divide 159 by 30 to get 1%. Then multiply by 100.

ok. Jackets are on sale for 25% off. If the original price of a jacket is $120, what is the sale price? how would I do this?

So $120 stands for 100%. If there is 25% off, the price goes down by a quarter, which is $30. New price: $90. Another way of doing this: look at what will be left: 25% off means there is only 75% left, or three quarters (0.75). So 0.75*120=90. New price is $90.

oh okay, thank you. hope next time I do better
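The rules used in the exchange above can be collected into a few one-line helpers (a Python sketch added for illustration; the function names are mine, not from the thread):

```python
def percent_change(old, new):
    return (new - old) / old * 100          # 125 -> 130 gives 4.0

def fraction_to_percent(x):
    return x * 100                          # 0.347 -> about 34.7

def whole_from_part(part, percent):
    return part / percent * 100             # "159 is 30% of what?" -> 530.0

def sale_price(price, percent_off):
    return price * (1 - percent_off / 100)  # $120 at 25% off -> $90.0

print(percent_change(125, 130))    # 4.0
print(fraction_to_percent(0.347))  # about 34.7
print(whole_from_part(159, 30))    # 530.0
print(sale_price(120, 25))         # 90.0
```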
| {"url":"http://openstudy.com/updates/5102f081e4b03186c3f8e9cb","timestamp":"2014-04-20T21:04:07Z","content_type":null,"content_length":"56720","record_id":"<urn:uuid:222f92dc-315e-47c2-b322-ca2f27f7546e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plz help..
The internal and external diameters of a hollow hemispherical vessel are 14 cm and 21 cm respectively. The cost of silver plating of 1 sq. cm of its surface is Rs 1.60. Find the total cost of silver
plating the vessel all over.
The hemispherical vessal has two surfaces. Internal and external.
Let R and r be the eadius of the internal and external radius. Also radius is half the diameter.
The outer surface area of the hemisphere = (1/2)( 4*pi*R^2 ) =2 pi*(21/2)^2 = 2770.88472/4 sq cm = 692.72118
The area of the internal hemisphere = (1/2)*(4pir^2) = 2pi*r^2 = 2pi*(14/2)^2 = 1231.50432/4 sq cm = 307.8760801 sqcm
The ring or edge area = pi *R^2-Pir^2 = pi*(R^2-r^2) = pi*(21^2-14^2)/4 =769.6903001/4 = 192.422575 .
Thererefore the total area = 692.72118+307.87608+192.422575 = 1193.019835 sq cm
Therefore the plating cost @ Rs 1.60 for the above total area = 1193.019835 sq cm * Rs1.6sqcm = Rs 1908.83
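The arithmetic above can be checked directly (a Python sketch added here, not part of the original answer):

```python
import math

R = 21 / 2        # external radius in cm (half the external diameter)
r = 14 / 2        # internal radius in cm (half the internal diameter)

outer = 2 * math.pi * R ** 2         # curved outer surface, ~692.72 sq cm
inner = 2 * math.pi * r ** 2         # curved inner surface, ~307.88 sq cm
rim   = math.pi * (R ** 2 - r ** 2)  # flat ring at the top,  ~192.42 sq cm

total = outer + inner + rim          # ~1193.02 sq cm
cost  = total * 1.60                 # plating at Rs 1.60 per sq cm

print(round(total, 2))  # 1193.02
print(round(cost, 2))   # 1908.83
```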
Hi, for any homework help, assignment help, dissertation help, visit our website http://www.helpwithassignment.com. We offer online tutoring, assignment help, homework help, dissertation help and
thesis help in almost all the subjects. We offer services for a range of subjects for K-12, College and University.
This is a basic math problem if you are facing trouble solving this than you should hire one of the online math tutors available on internet.At this students can submit their questions online to get
the answers. They can also get elaborate explanations on particular topics required from the tutors. The students can observe the solved examples online and solve similar problems. They can also
clarify their doubts regarding the concepts involved from the tutors immediately.
| {"url":"http://www.enotes.com/homework-help/plz-help-126087","timestamp":"2014-04-19T16:21:17Z","content_type":null,"content_length":"29764","record_id":"<urn:uuid:03db8598-cfc6-4c4c-b488-1d507de77ca0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimal representative of the elements of the fundamental group of a negatively curved manifold
Let (M,g) be a negatively curved manifold, let p be any point of M, and denote by G = π1(M,p). The minimal representative (by minimal I mean the smallest-length representative) of every α in G is a simple closed geodesic loop at p. My question is: why should it be simple?
If by "simple" you mean "has no self-intersections", that statement is false. – Igor Rivin Nov 25 '11 at 12:45
Yeah, that's what I mean. What is true, or special if you want, for negatively curved manifolds concerning this subject (representatives of a based-point class, length of a representative, ...)? A reference would also be most welcome. Thank you. – student Nov 25 '11 at 12:54
1 Answer
For a negatively curved manifold, there is a unique geodesic in a free homotopy class, and a unique geodesic broken loop in a homotopy class, and it is the shortest curve. In neither case is the curve necessarily simple. For references, almost any book on differential geometry will work (Cheeger/Ebin, Ballmann/Gromov/Schroeder, Bridson/Haefliger are all good candidates for having a discussion). For curves on surfaces, you might want to check out the little paper of McShane/Rivin in IMRN, which talks about minimal representatives in homology classes...
| {"url":"http://mathoverflow.net/questions/81877/minimal-representative-of-the-elements-of-the-fundamental-group-of-a-negatively?sort=oldest","timestamp":"2014-04-21T16:01:17Z","content_type":null,"content_length":"51128","record_id":"<urn:uuid:0cec1702-ca19-4b9f-8027-12116d2ba276>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00036-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complexity Thinking
Complexity theory - as it's applied to organisations - is a way of understanding how a community, an organisation or any group of
living things actually behaves in the real world.
Viewing organisations as complex adaptive/evolving systems, complexity thinking discourages approaches that are linear, bounded and
overtly analytical. It also discourages any attempt to find simple solutions or, indeed, solutions of any kind - preferring instead a
process of continual engagement and adaptation.
Triarchy authors Patrick Beautement and Christine Broenner identify four types of complexity:
"We all know that the world is complex but we have to acknowledge that the term ‘complexity’ causes difficulties. It means different
things to different people and has also become rather a fashionable word to use - you will find it interspersed here and there in
speeches or articles.
So, what is complexity? We differentiate four different ways of talking about complexity: as it is naturally; as academics see it
generally in theory; as it is seen objectively when in some context (contextual); and as experienced subjectively by people as
follows:

1. We use the term natural complexity to refer to complexity as it is in the real world - an expression of the phenomena that arise, unadorned by any particular set of abstractions or terminology. Natural complexity exists regardless of the presence of human beings, yet it provides the backdrop to people’s activities - the medium in which people must function.

2. We have coined the term academic complexity to refer to the descriptions and explanations of natural complexity that are provided by complexity scientists using the abstractions of scientific terminology (e.g., emergence, co-evolution etc).

The following two ways of talking about complexity are necessary because, without them, we cannot reflect sensibly on the activities of practitioners. The first we call contextual complexity as it provides an objective perspective on the realities of the context. The second we call experienced complexity as it describes the realities from the subjective view of practitioners themselves. For practitioners the starting point is not complexity science and its terminology but the natural context and people’s experiences and perceptions of it.

3. So, contextual complexity refers to the types of phenomena manifested in the particular situation with which practitioners are concerned, and describes in objective terms, as far as possible, the context in which they are arising. In practice it has become a kind of myth that complexity on complexity begets only further complexity. If this were true, and complexity was additive in this way, then people would have no choice but to deal with this overwhelming ‘über-complexity’ in their lives. Yet, self-evidently, for practitioners in a natural context, the underlying causes and complexities are hidden and can, for all intents and purposes, be largely ignored.

4. Lastly then, experienced complexity. This concerns the real-world realities that are experienced subjectively by individuals, or by communities or institutions in a context and are described in terms that make sense to the subject given their abilities, experience and viewpoint. These descriptions can range from common-sense observations (sadly, an undervalued, yet powerful natural ability) of the phenomena to ones that can be highly complicated and contrived. Many are ‘co-constructed’ realities built up in a social context into a set of prejudices or ‘habits’ of thought.

Let’s summarise the ways of talking about complexity with an example. A child is pouring out milk for a cat and does this perfectly competently because, for the child, the discernable features in this natural context are ‘simple’ and self-evident (experienced complexity). Yet for complexity scientists there is turbulent flow in the milk and massive underlying complexity in the bodies of the child and the cat and a myriad interactions with bacteria in their environment and so on (academic complexity)."

For a fuller version of this summary, look at What is Complexity? in the Library of Thoughts.

Follow the links below to Triarchy books in this field. Complexity entries in The Idioticon and Complexity Thoughts entries and papers are also available to read online, free of charge.

Complexity Demystified: A Guide for Practitioners (2011; 268 pages)
Drawing on the authors' wide experience of handling projects at a national and international level, this guide presents a clear and systematic framework for how to work with complexity to bring about sustainable change in practice.

Adventures in Complexity: For Organisations Near the Edge of Chaos (2009, 146 pages)
Offers a more introductory approach to show how complexity theory can be applied in/to organisations. Shows you how to understand things in a complex situation. The focus of Adventures in Complexity is not so much organisations as the ‘life of organisations’; author Lesley Kuhn sees organisations as ‘collectives of human activity’ and describes how complexity theory can be applied in and to organisations.
Center for Philosophy of Science ::: Wagner 3-16-04
Tuesday, 16 March 2004
Defining Fitness: A Measurement Theoretical Approach
Günter P. Wagner
Yale University
12:05 pm, 817R Cathedral of Learning
Abstract: Fitness is one of the fundamental concepts of biology, but yet its definition is still controversial. In this talk I will present a solution to the problem of how to define fitness by
utilizing tools and concepts from measurement theory. Measurement theory is a branch of applied mathematics that deals with the relationship between empirical structures and the numerical structures
that represent them, i.e. quantitative concepts or scales. The basic idea is that fitness is a measure of competitive ability with certain projectability properties. From that it is argued that
fitness can be defined in terms of a pair comparison system based on an operational definition of competitive ability. I will present a new metrization theorem to accommodate this definition and show
that from that metrization theorem the basic equation of population genetic theory, the Wright selection equation, can be derived. | {"url":"http://www.pitt.edu/~pittcntr/Events/All/Lunchtime_talks/lunchtime_2003_04/abstracts_2003_04/Mar_2004/Abstract_Wagner_16_Mar_2004.htm","timestamp":"2014-04-18T03:39:03Z","content_type":null,"content_length":"8992","record_id":"<urn:uuid:a648e50c-7b0c-418a-abd3-90321b6799ec>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Project Euler — Problem 187
http://projecteuler.net/index.php?section=problems&id=187
A composite is a number containing at least two prime factors. For example, 15 = 3 × 5; 9 = 3 × 3; 12 = 2 × 2 × 3. There are ten composites below thirty containing precisely two, not necessarily distinct, prime factors: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26. Read...
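The problem statement's example is easy to verify with a few lines of Python (a sketch added here, not part of the original post): count the numbers below thirty with exactly two prime factors, counted with multiplicity.

```python
# Quick check of the example in the problem statement: count numbers below
# thirty with exactly two (not necessarily distinct) prime factors.
def num_prime_factors(n):
    # total prime factors with multiplicity, by trial division
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

semiprimes = [n for n in range(2, 30) if num_prime_factors(n) == 2]
print(len(semiprimes), semiprimes)  # 10 [4, 6, 9, 10, 14, 15, 21, 22, 25, 26]
```

The same function scales to the full problem (counting such composites below 10^8) once the trial division is replaced by a sieve.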
Root finding
Numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards a limit which is a root. This post focuses on four basic root-finding algorithms: the bisection method, the fixed-point method, the Newton-Raphson method, and the secant method. Read More: 1886 Words Totally
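Two of the four methods the post covers (bisection and Newton-Raphson) can be sketched in a few lines; the Python below uses an illustrative function, f(x) = x^2 - 2, and starting values chosen by me, not taken from the post.

```python
# Minimal sketches of bisection and Newton-Raphson, finding the positive
# root of f(x) = x^2 - 2 (i.e. sqrt(2)).
def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x  # derivative of f, needed by Newton-Raphson

def bisect(f, lo, hi, tol=1e-12):
    # requires f(lo) and f(hi) to bracket a root (opposite signs);
    # halve the bracket until it is narrower than tol
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def newton(f, fprime, x, tol=1e-12):
    # iterate x <- x - f(x)/f'(x) until the step size drops below tol
    step = tol + 1.0
    while abs(step) > tol:
        step = f(x) / fprime(x)
        x -= step
    return x

print(bisect(f, 1.0, 2.0), newton(f, fprime, 1.0))  # both near 1.414213...
```

Bisection converges slowly but is guaranteed once a root is bracketed; Newton-Raphson converges quadratically near the root but needs the derivative and a decent starting point.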
bubble chart by using ggplot2
The visualization presented in Hans Rosling’s TED talk was very impressive. FlowingData provides a tutorial on making a bubble chart in R. I tried to create the bubble chart using ggplot2. With the dataset provided by FlowingData, the bubble chart was made by the following code. Read More: 548 Words Totally
The avalanche of publications mentioning GO
Gene Ontology is the de facto standard for annotation of gene products. It has been widely used in biological data mining, and I believe it will play a more central role in the future. Publications mentioning GO were collected and deposited in the GO ftp site, and can be accessed at ftp://ftp.geneontology.org/go/doc/. Read More: 454 Words Totally
GOSemSim redesign in terms of S4 classes
I started to develop the GOSemSim package two years ago when I was not quite familiar with R. I am very happy to see that people use it and find it helpful. I tried to learn S4 and redesign GOSemSim with S4 classes and methods in the past two weeks, and the very first version was implemented. As I’m...
| {"url":"http://www.r-bloggers.com/author/ygc/page/5/","timestamp":"2014-04-18T13:11:30Z","content_type":null,"content_length":"35943","record_id":"<urn:uuid:d36754a9-9236-4334-acd5-6dfb46054eb1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can you view objects that moved beyond the event horizon?
T. guy, this is an opportunity to get clear about some standard terminology. The observable universe (aka visible) includes all the matter that we are getting light or other signals from. That
includes matter which is today about 46 billion ly from us.
That is distance in the "proper distance" or freeze-frame sense: if you could pause the expansion process to give yourself time to measure, a radar beep would take 46 billion years to reach that most distant material.
That 46 Gly (giga for billion) is about how far the matter is that in early times emitted the cosmic microwave background radiation that we are now detecting. So we are in effect LOOKING AT matter
that is now 45.5+ Gly from here. It has by now formed galaxies and stars etc. We see it as it was in early days: a hot gas.
That distance is called "particle horizon" to distinguish it from "cosmic event horizon".
The cosmic event horizon (CEH) is only about 16 Gly. It is the proper distance today of the most distant galaxy we can expect to reach with a signal we send TODAY.
If an event happens today in a galaxy that is more than 16 Gly from us, like a supernova explosion, we will never see it no matter how long we wait.
If a supernova explodes today in a galaxy that is LESS than 16 Gly from us (today, freeze-frame i.e. proper distance) then we WILL eventually see it if we wait long enough.
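The two distances quoted above (a ~46 Gly particle horizon and a ~16 Gly event horizon) can be checked with a rough numerical integration. The flat-LCDM parameters below (H0 = 67.9 km/s/Mpc, Omega_m = 0.31) are illustrative assumptions on my part, not figures from the post.

```python
# Rough numerical check of the particle horizon and cosmic event horizon,
# assuming flat LCDM with H0 = 67.9 km/s/Mpc and Omega_m = 0.31
# (illustrative parameters, not from the post).
OM, OL = 0.31, 0.69

# Hubble rate in 1/Gyr, then the Hubble radius c/H0 in Gly (with c = 1 ly/yr)
H0_per_gyr = 67.9 / 3.0857e19 * 3.156e16
hubble_radius_gly = 1.0 / H0_per_gyr  # about 14.4 Gly

def integrand(a):
    # comoving-distance element: da / (a^2 H(a)/H0) = da / sqrt(OM*a + OL*a^4)
    return 1.0 / (OM * a + OL * a ** 4) ** 0.5

def integrate(f, lo, hi, n=200_000):
    # simple midpoint rule; good enough for a two-digit answer
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# particle horizon: integrate from the big bang (a=0) to today (a=1)
particle = hubble_radius_gly * integrate(integrand, 0.0, 1.0)
# event horizon: integrate from today into the far future (a=1 to a>>1)
event = hubble_radius_gly * integrate(integrand, 1.0, 100.0)
print(round(particle, 1), round(event, 1))  # roughly 46-47 and 16-17 Gly
```

Both come out close to the 46 Gly and 16 Gly figures in the post, which is the point: most galaxies inside the particle horizon are already outside the event horizon.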
Most of the objects we can see today are well beyond today's event horizon.
That is, most of the galaxies we can observe are today more than 16 Gly from us.
So the answer to your stated question is definitely YES. We certainly can continue to observe galaxies which have moved beyond the event horizon. Indeed most of the galaxies we do observe are beyond
the event horizon. | {"url":"http://www.physicsforums.com/showthread.php?s=b8d19007b5733d2ddc83ea67d195021d&p=4648722","timestamp":"2014-04-25T00:31:45Z","content_type":null,"content_length":"54356","record_id":"<urn:uuid:0a3716a9-8740-4f3f-85ae-17bfc0d65681>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: qnorm
RE: st: qnorm
From Nick Cox <n.j.cox@durham.ac.uk>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject RE: st: qnorm
Date Mon, 5 Mar 2012 15:23:52 +0000
The approach in Maarten's program is to generate a number of random samples and show the lot as replicates.
Here as an alternative is some example code for individual 95% confidence intervals for each plotted point. -qplot- used at the end is from SJ. The code isn't smart about missing values, but it could easily be made smarter. I also guess the code could be shortened in the middle.
sysuse auto, clear

mata:
y = sort(st_data(., "mpg"), 1)
mean = mean(y)
sd = sqrt(variance(y))
n = rows(y)
compare = J(n, 0, .)
for (j = 1; j <= 100; j++) {
    compare = compare, sort(rnormal(n, 1, mean, sd), 1)
}
envelope = J(n, 2, .)
for (i = 1; i <= n; i++) {
    x = sort(compare[i,]', 1)
    envelope[i,] = ((x[2] + x[3])/2, (x[97] + x[98])/2)
}
names = tokens("_lower _upper")
(void) st_addvar("float", names)
st_store(., names, envelope)
end

qplot mpg _lower _upper, ms(O i i) c(. J J) legend(off) ytitle("`: var label mpg'")
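The same simulation-envelope idea can be sketched outside Stata. The Python below uses made-up stand-in data (not the auto dataset): it simulates 100 sorted normal samples with the observed mean and sd, then takes a pointwise ~95% band per order statistic, mirroring the (x[2]+x[3])/2 and (x[97]+x[98])/2 averages in the Mata code.

```python
# Pointwise 95% envelope for a normal Q-Q plot via simulation
# (hypothetical data standing in for mpg).
import random
import statistics

random.seed(1)
n = 74
y = sorted(random.gauss(20, 5) for _ in range(n))  # observed sample (made up)
m, s = statistics.mean(y), statistics.stdev(y)

# 100 simulated samples with the observed mean and sd, each sorted
sims = [sorted(random.gauss(m, s) for _ in range(n)) for _ in range(100)]

lower, upper = [], []
for i in range(n):
    col = sorted(sim[i] for sim in sims)   # 100 draws of the i-th order statistic
    lower.append((col[1] + col[2]) / 2)    # ~2.5th percentile
    upper.append((col[96] + col[97]) / 2)  # ~97.5th percentile

inside = sum(lower[i] <= y[i] <= upper[i] for i in range(n))
print(inside, "of", n, "order statistics inside the envelope")
```

Because the stand-in data really are normal, nearly all order statistics should fall inside the band; with genuinely non-normal data, systematic excursions outside the band are the signal the envelope is designed to expose.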
Maarten Buis
On Mon, Mar 5, 2012 at 10:03 AM, Nick Cox wrote:
> 1. Simulate several samples from a distribution with the same mean and
> standard deviation (or more generally an appropriate mean and standard
> deviation) and use the resulting portfolio of plots in assessing what
> kind of variability is to be expected.
An easy way to do so is to use the -margdistfit- package, which you
can install by typing in Stata -ssc install margdistfit-. The default
is actually to first sample the mean and the standard deviation from
their sampling distribution and then sample a new variable with those
sampled values. I suspect that this makes sense
in most cases, though I also suspect that it won't matter much. If you
want to do exactly what Nick proposes you can add the -noparsamp-
option. Here is an example of what such a graph would look like:
*-------- begin example -----------
sysuse auto, clear
reg mpg
margdistfit, qq name(qq)
margdistfit, pp name(pp)
margdistfit, hangroot name(hangr)
margdistfit, cumul name(cumul)
*--------- end example ------------
(For more on examples I sent to the Statalist see:
http://www.maartenbuis.nl/example_faq )
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
| {"url":"http://www.stata.com/statalist/archive/2012-03/msg00202.html","timestamp":"2014-04-17T04:48:44Z","content_type":null,"content_length":"11428","record_id":"<urn:uuid:b6929563-e306-4616-bb82-0dbc33ac1c39>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |