Introduction to Probability and Its Applications
ISBN: 9780534386719 | 0534386717
Edition: 3rd
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 8/21/2009
Institute for Mathematics and its Applications (IMA)
- Previous Annual Program Special Seminars
Modelling Uncertainty of Dependence in Risk Aggregation
Ruodu Wang, University of Waterloo
Wednesday, May 1, 2013 10:30 a.m. - 12:00 p.m. Lind Hall 305
Model risk is the risk of inappropriate modelling and misused quantitative methods in financial risk management. One of the most challenging model risks lies in modelling the dependence between individual risks. In this talk I will give a general introduction to this challenge. To give a proper mathematical framework for studying the model risk in dependence, I will introduce the admissible risk class as the set of all possible risk aggregations when the marginal distributions of individual risks are given but the dependence structure among them is unspecified. The concept provides flexibility to analyze the model risk of dependence. I will also discuss some theoretical results on convex ordering bounds of this class, which can be used to identify extreme scenarios for risk aggregation and calculate bounds on convex risk measures and other quantities of interest.
Special PDE Seminar: Euler's equation as a limit of the general geodesic equation 'EPDiff' used in image analysis
David Mumford, Brown University
Thursday, February 28, 2013 11:15 a.m. Lind Hall 305
NSF Division of Mathematical Sciences Programs
Sastry Pantula, National Science Foundation
Monday, July 9, 2012 2:30 - 3:30 p.m. Lind Hall 305
Dr. Pantula is the Director of the Division of Mathematical Sciences at the National Science Foundation and will give a presentation about the programs in the Division of Mathematical Sciences.
Bernardo Cockburn Topics at the IMA MATH 8994 Discontinuous Galerkin Methods: An Introduction
Screening of "Top Secret Rosies: The Female Computers of WWII"
Douglas N. Arnold 2010 Special Course
Clint Dawson Tutorial Lectures: Modeling Hurricane Storm Surges
Ronald H.W. Hoppe (University of Houston), An Introduction to the A Posteriori Error Analysis of Elliptic Optimal Control Problems, 2:00-3:30pm, Lind Hall 305, November 15 and 17, 2010
Slides: Hoppe_IMA1.pdf Hoppe_IMA2.pdf Paper: HHH_MFO_Adaptive.pdf
Uday Banerjee (Syracuse University), Short Course: The Generalized Finite Element Method (GFEM), 11:15am, Lind Hall 305, November 10 and 12, 2010
Slides: IMA-GFEM1and2.handout.pdf
Tutorial: Guido Kanschat (Texas A&M University), deal.II Finite Element Library, 2:00pm-5:00pm, Lind Hall 305, October 11 and 13, 2010
Clawpack Tutorial, Randall J. LeVeque (University of Washington), October 4-6, 2010
Working seminar: Probabilistic methods in bioinformatics
AEM 8551 - Theory of the Structure of Viruses
School of Mathematics, Applied Mathematics Seminar
School of Mathematics, Partial Differential Equations Seminar
Variational/PDE based Image Reconstruction and Processing Seminar
Image Processing and Analysis Working Seminar (IPAWS)
Alexandria, VA Prealgebra Tutor
Find an Alexandria, VA Prealgebra Tutor
...I love, love, love to travel. That is what I invest in. So far I have been to 9 countries, including the U.S.
24 Subjects: including prealgebra, English, reading, physics
I am a former Air Force engineer and pilot. I have taught a number of different topics including engineering, mathematics, physics, flying, and survival. I spent four years teaching undergraduate
electrical engineering, general engineering, engineering mathematics, and physics.
22 Subjects: including prealgebra, physics, geometry, calculus
...This is especially true for those of us who teach students labeled EDLD. One of the core differences in teaching special education is the necessity of providing direct instruction in how to be
a successful student. A large proportion of the students who are given these labels suffer from some form of Executive Function disorder.
15 Subjects: including prealgebra, reading, geometry, algebra 1
...I have several years of experience teaching all aspects of English. I have worked with high school and college students studying English literature, as well as students from elementary through
graduate school who were struggling with reading, writing, and grammar. I currently teach three ESL classes focusing on writing and reading.
46 Subjects: including prealgebra, English, Spanish, algebra 1
...I have experience working with children ages 10 and up, so I can work with those who are younger or those who are older and need assistance with more advanced coursework. I do work full time,
but would be available during evening hours and sometimes on the weekends on a case by case basis. I'd ...
25 Subjects: including prealgebra, chemistry, physics, calculus
Dynamic Steiner tree problem
Results 11 - 20 of 118
- Discrete and Computational Geometry , 1993
"... Suppose we are given a sequence of n points in the Euclidean plane, and our objective is to construct, on-line, a connected graph that connects all of them, trying to minimize the total sum of
lengths of its edges. The points appear one at a time, and at each step the on-line algorithm must construc ..."
Cited by 50 (4 self)
Suppose we are given a sequence of n points in the Euclidean plane, and our objective is to construct, on-line, a connected graph that connects all of them, trying to minimize the total sum of
lengths of its edges. The points appear one at a time, and at each step the on-line algorithm must construct a connected graph that contains all current points by connecting the new point to the
previously constructed graph. This can be done by joining the new point (not necessarily by a straight line) to any point of the previous graph, (not necessarily one of the given points). The
performance of our algorithm is measured by its competitive ratio: the supremum, over all sequences of points, of the ratio between the total length of the graph constructed by our algorithm and the
total length of the best Steiner tree that connects all the points. There are known on-line algorithms whose competitive ratio is O(log n) even for all metric spaces, but the only lower bound known
is of [IW] for some con...
, 1996
"... The Generalized Steiner Problem (GSP) is defined as follows. We are given a graph with non-negative weights and a set of pairs of vertices. The algorithm has to construct minimum weight subgraph
such that the two nodes of each pair are connected by a path. Off-line generalized Steiner problem ap ..."
Cited by 40 (5 self)
The Generalized Steiner Problem (GSP) is defined as follows. We are given a graph with non-negative weights and a set of pairs of vertices. The algorithm has to construct minimum weight subgraph such
that the two nodes of each pair are connected by a path. Off-line generalized Steiner problem approximation algorithms were given in [AKR91, GW92]. We consider the on-line generalized Steiner
problem, in which pairs of vertices arrive on-line and are needed to be connected immediately. We give a simple O(log² n) competitive deterministic on-line algorithm. The previous best algorithm was
O(√n log n) competitive [WY93]. We also consider the network connectivity leasing problem which is a generalization of the GSP. Here edges of the graph can be either bought or leased for different
costs. We provide simple randomized O(log² n) competitive algorithm based on the on-line generalized Steiner problem result.
- ACM Transactions on Algorithms , 2004
"... ..."
- IEEE/ACM Transactions on Networking , 1995
"... Establishing a multicast tree in a point-to-point network of switch nodes, such as a wide-area ATM network, can be modeled as the NP-complete Steiner problem in networks. In this paper, we
introduce and evaluate two distributed algorithms for finding multicast trees in point-to-point data networks. ..."
Cited by 39 (2 self)
Establishing a multicast tree in a point-to-point network of switch nodes, such as a wide-area ATM network, can be modeled as the NP-complete Steiner problem in networks. In this paper, we introduce
and evaluate two distributed algorithms for finding multicast trees in point-to-point data networks. These algorithms are based on the centralized Steiner heuristics, the shortest path heuristic
(SPH) and the Kruskal-based shortest path heuristic (K-SPH), and have the advantage that only the multicast members and nodes in the neighborhood of the multicast tree need to participate in the
execution of the algorithm. We compare our algorithms by simulation against a baseline algorithm, the pruned minimum spanning-tree heuristic, which is the basis of many previously published
algorithms for finding multicast trees. Our results show that the competitiveness (the ratio of the sum of the heuristic tree's edge weights to that of the best solution found) of both of our
algorithms was on the average ...
- IN PROCEEDINGS OF THE 7TH ACM CONFERENCE ON ELECTRONIC COMMERCE , 2006
"... We consider a multicast game with selfish non-cooperative players. There is a special source node and each player is interested in connecting to the source by making a routing decision that
minimizes its payment. The mutual influence of the players is determined by a cost sharing mechanism, which in ..."
Cited by 36 (2 self)
We consider a multicast game with selfish non-cooperative players. There is a special source node and each player is interested in connecting to the source by making a routing decision that minimizes
its payment. The mutual influence of the players is determined by a cost sharing mechanism, which in our case evenly splits the cost of an edge among the players using it. We consider two different
models: an integral model, where each player connects to the source by choosing a single path, and a fractional model, where a player is allowed to split the flow it receives from the source between
several paths. In both models we explore the overhead incurred in network cost due to the selfish behavior of the users, as well as the computational complexity of finding a Nash equilibrium. The
existence of a Nash equilibrium for the integral model was previously established by the means of a potential function. We prove that finding a Nash equilibrium that minimizes the potential function
is NP-hard. We focus on the price of anarchy of a Nash equilibrium resulting from the best-response dynamics of a game course, where the players join the game sequentially. For a game with n players,
we establish an upper bound of O(√n log² n) on the price of anarchy, and a lower bound of Ω(log n / log log n). For the fractional model, we prove the existence of a Nash equilibrium via a
potential function and give a polynomial time algorithm for computing an equilibrium that minimizes the potential function. Finally, we consider a weighted extension of the multicast game, and prove
that in the fractional model, the game always has a Nash equilibrium.
- IEEE Journal on Selected Areas in Communications , 2005
"... In a sensor network, data routing is tightly coupled to the needs of a sensing task, and hence the application semantics. This paper introduces the novel idea of information-directed routing, in
which routing is formulated as a joint optimization of data transport and information aggregation. The ro ..."
Cited by 35 (2 self)
In a sensor network, data routing is tightly coupled to the needs of a sensing task, and hence the application semantics. This paper introduces the novel idea of information-directed routing, in
which routing is formulated as a joint optimization of data transport and information aggregation. The routing objective is to minimize communication cost while maximizing information gain, differing
from routing considerations for more general ad hoc networks. The paper uses the concrete problem of locating and tracking possibly moving signal sources as an example of information generation
processes, and considers two common information extraction patterns in a sensor network:routing a user query from an arbitrary entry node to the vicinity of signal sources and back, or to a
prespecified exit node, maximizing information accumulated along the path. We derive information constraints from realistic signal models, and present several routing algorithms that find
near-optimal solutions for the joint optimization problem. Simulation results have demonstrated that information-directed routing is a significant improvement over a previously reported greedy
algorithm, as measured by sensing quality such as localization and tracking accuracy and communication quality such as success rate in routing around sensor holes.
, 2000
"... We consider a network design problem, where applications require various levels of Quality-of-Service (QoS) while connections have limited performance. Suppose that a source needs to send a
message to a heterogeneous set of receivers. The objective is to design a low cost multicast tree from the sou ..."
Cited by 34 (1 self)
We consider a network design problem, where applications require various levels of Quality-of-Service (QoS) while connections have limited performance. Suppose that a source needs to send a message
to a heterogeneous set of receivers. The objective is to design a low cost multicast tree from the source that would provide the QoS levels (e.g., bandwidth) requested by the receivers. We assume
that the QoS level required on a link is the maximum among the QoS levels of the receivers that are connected to the source through the link. In accordance, we define the cost of a link to be a
function of the QoS level that it provides. This definition of cost makes this optimization problem more general than the classical Steiner tree problem. We consider several variants of this problem
all of which are proved to be NP-hard. For the variant where QoS levels of a link can vary arbitrarily and the cost function is linear in its QoS level, we give a heuristic that achieves a multicast
tree with cost ...
- in FOCS, 2004
"... Real-world networks often need to be designed under uncertainty, with only partial information and predictions of demand available at the outset of the design process. The field of stochastic
optimization deals with such problems where the forecasts are specified in terms of probability distribution ..."
Cited by 33 (9 self)
Real-world networks often need to be designed under uncertainty, with only partial information and predictions of demand available at the outset of the design process. The field of stochastic
optimization deals with such problems where the forecasts are specified in terms of probability distributions of future data. In this paper, we broaden the set of models as well as the techniques
being considered for approximating stochastic optimization problems. For example, we look at stochastic models where the cost of the elements is correlated to the set of realized demands, and
risk-averse models where upper bounds are placed on the amount spent in each of the stages. These generalized models require new techniques, and our solutions are based on a novel combination of the
primal-dual method truncated based on optimal LP relaxation values, followed by a tree-rounding stage. We use these to give constant-factor approximation algorithms for the stochastic Steiner tree and
single sink network design problems in these generalized models. 1.
- In SODA ’08 , 2007
"... In a network with selfish users, designing and deploying a protocol determines the rules of the game by which end users interact with each other and with the network. We study the problem of
designing a protocol to optimize the equilibrium behavior of the induced network game. We consider network co ..."
Cited by 32 (4 self)
In a network with selfish users, designing and deploying a protocol determines the rules of the game by which end users interact with each other and with the network. We study the problem of
designing a protocol to optimize the equilibrium behavior of the induced network game. We consider network cost-sharing games, where the set of Nash equilibria depends fundamentally on the choice of
an edge cost-sharing protocol. Previous research focused on the Shapley protocol, in which the cost of each edge is shared equally among its users. We systematically study the design of optimal
cost-sharing protocols for undirected and directed graphs, single-sink and multicommodity networks, different classes of cost-sharing methods, and different measures of the inefficiency of equilibria.
One of our main technical tools is a complete characterization of the uniform cost-sharing protocols—protocols that are designed without foreknowledge of or assumptions on the network in which they
will be deployed. We use this characterization result to identify the optimal uniform protocol in several scenarios: for example, the Shapley protocol is optimal in directed graphs, while the optimal
protocol in undirected graphs, a simple priority scheme, has exponentially smaller worst-case price of anarchy than the Shapley protocol. We also provide several matching upper and lower bounds on
the best-possible performance of non-uniform cost-sharing protocols.
- In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2006
"... Consider the following network design problem: given a network G = (V, E), source-sink pairs {si, ti} arrive and desire to send a unit of flow between themselves. The cost of the routing is
this: if edge e carries a total of fe flow (from all the terminal pairs), the cost is given by ∑ e ℓ(fe), wher ..."
Cited by 31 (8 self)
Consider the following network design problem: given a network G = (V, E), source-sink pairs {s_i, t_i} arrive and desire to send a unit of flow between themselves. The cost of the routing is this: if edge e carries a total of f_e flow (from all the terminal pairs), the cost is given by ∑_e ℓ(f_e), where ℓ is some concave cost function; the goal is to minimize the total cost incurred. However, we want the routing to be oblivious: when terminal pair {s_i, t_i} makes its routing decisions, it does not know the current flow on the edges of the network, nor the identity of the other pairs in the system. Moreover, it does not even know the identity of the function ℓ, merely knowing that ℓ is a concave function of the total flow on the edge. How should it (obliviously) route its one unit of
My research
Broadly speaking, my research focuses on theory and implementation of high-performance optimization algorithms.
I am currently concentrating on four topics:
• Using optimization techniques for diagnosing hidden vulnerabilities in electrical power grids. Several national-scale blackouts took place in the recent past, with significant consequences. More
such blackouts can be expected. Even though large-scale blackouts are quite rare, when they do happen the impact can be dire. Blackouts are rare because modern power grids are robust; yet
vulnerabilities do exist. The search for hidden vulnerabilities, roughly speaking, embodies finding an instance where a limited amount of damage can cause a grid to collapse. It is important to
frame the search for such vulnerabilities in a completely agnostic manner. We are using techniques reminiscent of robust optimization in order to develop computational tools that effectively
incorporate the physics of power networks and scale well to large cases. This work is currently funded by the U.S. Department of Energy. Further material available here.
• Fundamental methodologies for integer programming. We are studying techniques for provably tightening formulations of mixed-integer programs. Here, 'provably' means that we seek guarantees (e.g.
error bounds). Thus, in an ideal setting an improved formulation is polynomially large and guarantees a 'small' approximation error. Our primary technique in this context is the use of carefully
selected combinatorial disjunctions. Another focal area involves the use of eigenvalue techniques in order to tighten formulations for convex objective, nonconvex (but possibly noncombinatorial)
optimization problems. Problems of this type abound in engineering applications, where the convex objective is often a measure of "error" or "energy", while the underlying physics impose
significant nonconvexities. This work is funded by ONR.
• Robust epidemic models. An important application we are currently investigating involves 'gaming' an epidemic: classical models for the evolution of an epidemic are based on several noisy
parameters, in particular the infection rate. A decision-maker who wishes to allocate scarce resources so as to manage the impact of an epidemic would have to take difficult-to-define uncertainty
into account. In our work we take the robust optimization outlook; we have developed computationally practicable versions of Benders' algorithm, which in this context can be interpreted as a
game between an optimizer and the adversary.
• Efficient solutions of massive linear programs related to constrained scheduling problems. Motivated by problems arising in the mining industry, we consider precedence-constrained, multi-period
production scheduling problems with side constraints. The side constraints arise from the need to assign production to a given set of facilities with general constraints. The precedence constraints specify a partial order among the jobs to be processed. In practice this gives rise to optimization problems with hundreds of millions of constraints and variables. Without the side constraints the problem can be modeled as a maximum weight closure problem and thus solved as a min-cut problem; the side constraints, however, render the problem strongly NP-complete. We have developed a new solution technique for the LP relaxation of the problem which enables us to solve the largest practical instances, to proven optimality, in a few iterations and short CPU time. This work is partially funded by a gift from BHP Billiton, Inc., and the ONR.
106398 - Prerequisite tree
Tree for 106398
max prerequisite depth: 2
max prerequisite-to depth: 2
mark by (*) if 106398 is a necessary prerequisite: yes
Data for 106398:
Prerequisite tree:
106398 (UG) - ALGEBRAIC TOPOLOGY 2
106383 (UG) - ALGEBRAIC TOPOLOGY
104142 (UG) - INT. TO METRIC AND TOPOLOGICAL SPACE
104142 (UG) - INT. TO METRIC AND TOPOLOGICAL SPACE
104172 (UG) - INTRODUCTION TO THE THEORY OF GROUPS
Prerequisite-to tree:
Prerequisite tree (with linked courses):
Prerequisite-to tree (without linked courses):
Inf A= 0 <=> 0 is an accumulation point
Let A be a subset of the positive reals. Prove that inf A = 0 <=> 0 is an accumulation point of A.
Is it still true if A is the set of non-negative reals? Give a proof or a counterexample.
So I think I understand why this is true... since inf A = 0, A is bounded below by 0, and we can have elements of A as close as we want to 0, but I don't know how to write it out?
Notice that if $c>0$ then $c$ is not a lower bound for $A$.
So $\left( {\exists x \in A} \right)\left[ {0 < x < c} \right]$.
In this way construct a decreasing sequence of distinct terms from $A$ converging to $0$.
Consider this set of nonnegative numbers: $\left\{ 0 \right\} \cup \left\{ { 2 + \frac{1}{n}:n \in \mathbb{Z}^ + } \right\}$.
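A sketch of how both directions can be written out, filling in the construction above (one standard way, not the only one):
Forward: suppose $\inf A = 0$. Since $A \subseteq (0,\infty)$, we have $0 \notin A$. For any $\varepsilon > 0$, $\varepsilon$ is not a lower bound of $A$, so there is $x \in A$ with $0 < x < \varepsilon$. Thus every neighborhood of $0$ contains a point of $A$ different from $0$, so $0$ is an accumulation point of $A$.
Converse: if $0$ is an accumulation point of $A$, then every interval $(-\varepsilon,\varepsilon)$ contains a point of $A$, necessarily lying in $(0,\varepsilon)$, so no $c > 0$ is a lower bound of $A$; since $0$ is a lower bound, $\inf A = 0$.
For the non-negative case the equivalence fails: in the set above, $0 \in A$ forces $\inf A = 0$, yet $0$ is an isolated point of the set, hence not an accumulation point.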
4-point weight distribution? (self.math)
submitted by wasserkraft
hey /r/math,
I have a problem, maybe you can help...
• There is a table with 4 weighing scales, 1 per leg (see graphic).
• On this table are several objects (example: graphic with 2 objects X and Y).
• I know the position of each object and the weight on each leg (weight of table itself can be ignored)
I want to determine the weights of each object.
I can't figure out how the weight is distributed on 4 points.
Are there situations where you can't tell the weights unambiguously?
I'm glad for any help!
[–] nerkbot
To answer the last question, the amount of weight on each scale is going to be linear in the size of each of the weights on the table. If you have n different weights on the table, you'll have 4
linear equations in n unknowns. If n > 4 then you won't be able to solve for a unique solution (the system is underdetermined). For n <= 4, there may still be configurations where the weights can't
be determined unambiguously. I suspect that this can't happen if the weights have distinct positions on the table, but you'd have to work out the physics part to show this.
Edit: Here's the physics part. Let the side lengths of the table be 1. If the weight is at a distance y from the bottom edge, then the split between the bottom pair of legs and the top pair is 1-y
and y respectively. Similarly for the right pair versus the left pair based on x. So the fraction on each leg is (1-x)(1-y), x(1-y), (1-x)y, xy.
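A minimal numpy sketch of this linear-system view (the positions and weights below are made up for illustration):

import numpy as np

def leg_fractions(x, y):
    # Fractions of an object's weight carried by legs at (0,0), (1,0),
    # (0,1), (1,1) of a unit-square table, for an object at (x, y).
    return np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])

def solve_weights(positions, readings):
    # readings = A @ weights, with one column of leg fractions per object;
    # the weights are unambiguous only if A has full column rank.
    A = np.column_stack([leg_fractions(x, y) for x, y in positions])
    w, _, rank, _ = np.linalg.lstsq(A, np.asarray(readings, dtype=float), rcond=None)
    return w, rank

positions = [(0.2, 0.3), (0.7, 0.8)]   # objects X and Y (hypothetical)
true_w = np.array([3.0, 5.0])
readings = np.column_stack([leg_fractions(x, y) for x, y in positions]) @ true_w
w, rank = solve_weights(positions, readings)
print(w, rank)  # [3. 5.] 2 -> unique, since rank equals the number of objects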
find ratio x:y if 3x = 4y and y does not equal zero
Can you help? How do you find the ratio x:y?
If 3x=4y, and y does not =0
Could you explain where the 3y came from?
You are needing to find x/y. You are given 3x = 4y.
To get "(x/y) = (something)", you need to get rid of the "3" and get a denominator of "y".
So divide both sides of "3x = 4y" by "3y", to get what you need.
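Worked out step by step (standard algebra):
3x = 4y
3x/(3y) = 4y/(3y)
x/y = 4/3
So x : y = 4 : 3.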
barrel oil per day to tonne
Amount: 1 barrel oil per day (bbl/d) of flow rate
Equals: 0.16 tonnes (water mass) per day (ton/d) in flow rate
TOGGLE : from tonnes (water mass) per day into barrels oil per day in the other way around.
CONVERT : between other flow rate measuring units - complete list.
Flow rate
This unit-to-unit calculator is based on conversion for one pair of two flow rate units. For a whole set of multiple units for volume and mass flow on one page, try the Multi-Unit converter tool
which has built in all flowing rate unit-variations. Page with flow rate by mass unit pairs exchange.
Convert flow rate measuring units between barrel oil per day (bbl/d) and tonnes (water mass) per day (ton/d), or in the reverse direction from tonnes (water mass) per day into barrels oil per day.
conversion result for flow rate:
From Symbol Equals Result To Symbol
1 barrel oil per day bbl/d = 0.16 tonnes (water mass) per day ton/d
Converter type: flow rate units
This online flow rate from bbl/d into ton/d converter is a handy tool not just for certified or experienced professionals.
First unit: barrel oil per day (bbl/d) is used for measuring flow rate.
Second: tonne (water mass) per day (ton/d) is unit of flow rate.
0.16 ton/d is converted to 1 of what?
The tonnes (water mass) per day unit number 0.16 ton/d converts to 1 bbl/d, one barrel oil per day. It is the EQUAL flow rate value of 1 barrel oil per day but in the tonnes (water mass) per day flow
rate unit alternative.
How to convert 2 barrels oil per day (bbl/d) into tonnes (water mass) per day (ton/d)? Is there a calculation formula?
Multiply the conversion factor by the number of barrels - for example, for 2 bbl/d:
0.158987294928 * 2 = 0.317974589856 ton/d (or, equivalently, divide the factor by 1/2 = 0.5)
1 bbl/d = ? ton/d
1 bbl/d = 0.16 ton/d
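As a quick sketch, the same conversion as a small Python helper (built on the exact factor quoted above; the names are just for illustration):

BBL_OIL_M3 = 0.158987294928  # volume of 1 oil barrel in cubic metres
TON_PER_BBL = BBL_OIL_M3     # 1 m^3 of water has a mass of about 1 tonne

def bbl_per_day_to_ton_per_day(bbl_d):
    # Oil barrels per day -> tonnes (water mass) per day.
    return bbl_d * TON_PER_BBL

print(round(bbl_per_day_to_ton_per_day(1), 2))  # 0.16
print(round(bbl_per_day_to_ton_per_day(2), 3))  # 0.318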
Other applications for this flow rate calculator ...
Beyond one-to-one conversions, this flow rate converter can also be useful as a teaching tool:
1. for practicing exchanges between barrels oil per day and tonnes (water mass) per day (bbl/d vs. ton/d) values.
2. for conversion-factor training exercises between unit pairs.
3. for working with flow rate values and properties.
International unit symbols for these two flow rate measurements are:
Abbreviation, or unit symbol, for barrel oil per day: bbl/d
Abbreviation, or unit symbol, for tonne (water mass) per day: ton/d
One barrel oil per day of flow rate converted to tonne (water mass) per day equals to 0.16 ton/d
How many tonnes (water mass) per day of flow rate are in 1 barrel oil per day? The answer is: The change of 1 bbl/d ( barrel oil per day ) unit of flow rate measure equals = to 0.16 ton/d ( tonne
(water mass) per day ) as the equivalent measure for the same flow rate type.
With any measuring task, professionals need precise conversion results every time; a good approximation is often not a sufficient solution. Given an exact measure in bbl/d - barrels oil per day - of flow rate, the barrel oil per day value converts exactly into ton/d - tonnes (water mass) per day - or any other flow rate unit.
[Numpy-discussion] Ufunc memory access optimization
Pauli Virtanen pav@iki...
Tue Jun 15 13:15:39 CDT 2010
ti, 2010-06-15 kello 11:35 -0400, Anne Archibald kirjoitti:
[clip: Numpy ufunc inner loops are slow]
> This is a bit strange. I think I'd still vote for including this
> optimization, since one hopes the inner loop will get faster at some
> point. (If nothing else, the code generator can probably be made to
> generate specialized loops for common cases).
Ok, apparently I couldn't read. Numpy already unrolls the ufunc loop for
C-contiguous arrays, that's the reason there was no difference.
This should be fairer:
x = np.zeros((2,)*20).transpose([1,0]+range(2,20))
y = x.flatten()
%timeit x+x
10 loops, best of 3: 122 ms per loop
%timeit y+y
10 loops, best of 3: 19.9 ms per loop
And with the loop order optimization:
%timeit x+x
10 loops, best of 3: 20 ms per loop
To get an idea how things look without collapsing the loops, one can
engineer nastier arrays:
x = np.zeros([50*5,50*3,50*2])[:-23:5, :-29:3, 31::2]
%timeit x+x
100 loops, best of 3: 2.83 ms (3.59 ms) per loop
[clip: minimize sum of strides]
> I suspect that it isn't optimal. As I understand it, the key
> restriction imposed by the cache architecture is that an entire cache
> line - 64 bytes or so - must be loaded at once. For strides that are
> larger than 64 bytes I suspect that it simply doesn't matter how big
> they are. (There may be some subtle issues with cache associativity
> but this would be extremely architecture-dependent.) So I would say,
> first do any dimensions in which some or all strides are less than 64
> bytes, starting from the smallest. After that, any order you like is
> fine.
This is all up to designing a suitable objective function. Suppose
`s_{i,j}` is the stride of array `i` in dimension `j`. The
order-by-sum-of-strides rule reads
f_j = \sum_i |s_{i,j}|
and an axis permutation which makes the sequence `f_j` decreasing is
chosen. Other heuristics would use a different f_j.
One metric that I used previously when optimizing the reductions, was
the number of elements that fit in a cache line, as you suggested. The
sum of this for all arrays should be maximized in the inner loops. So
one could take the objective function
f_j = - C_1 C_2 \sum_i CACHELINE / (1 + C_2 |s_{i,j}|)
with some constants C_1 and C_2, which, perhaps, would give better
results. Note that the stride can be zero for broadcasted arrays, so we
need to add a cutoff for that. With a bias toward C order, one could
then have
f_j = - C_3 j - C_1 C_2 \sum_i CACHELINE / (1 + C_2 |s_{i,j}|)
Now the question would be just choosing C_1, C_2, and C_3 suitably.
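For concreteness, a small self-contained sketch of the simplest heuristic above
(rank axes by decreasing f_j = \sum_i |s_{i,j}|, so the smallest total stride
becomes the innermost loop); the cache-line and C-biased variants would only
swap in a different scoring function:

import numpy as np

def axis_order_by_stride_sum(*arrays):
    # Score each axis by the summed absolute strides over all operands,
    # then sort axes so the largest score ends up outermost.
    ndim = arrays[0].ndim
    f = [sum(abs(a.strides[j]) for a in arrays) for j in range(ndim)]
    return sorted(range(ndim), key=lambda j: -f[j])

x = np.zeros((20, 30, 40))
y = x.transpose(2, 0, 1)
print(axis_order_by_stride_sum(x, x))                  # [0, 1, 2]: plain C order
print(axis_order_by_stride_sum(y, np.zeros(y.shape)))  # [1, 0, 2]: adapted to y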
Another optimization could be flipping negative strides. This sounds a
bit dangerous, though, so one would need to think if it could e.g. break
a += a[::-1]
> I'm more worried this may violate some users' assumptions. If a user
> knows they need an array to be in C order, really they should use
> ascontiguousarray. But as it stands it's enough to make sure that it's
> newly-allocated as the result of an arithmetic expression. Incautious
> users could suddenly start seeing copies happening, or even transposed
> arrays being passed to, C and Fortran code.
Yes, it's a semantic change, and we have to make it consciously (and
conscientiously, to boot :).
I believe f2py checked the contiguity of intent(inout) arguments?
The same should go for most C code that accepts Numpy arrays, at least
PyArray_IsContiguous should be checked before assuming it is true. But
this would still mean that code that previously worked, could start
throwing up exceptions.
Personally, I believe this change is worth making, with suitable mention
in the release notes.
> It's also worth exploring common usage patterns to make sure that
> numpy still gravitates towards using C contiguous arrays for
> everything. I'm imagining a user who at some point adds a transposed
> array to a normal array, then does a long series of computations on
> the result. We want as many of those operations as possible to operate
> on contiguous arrays, but it's possible that an out-of-order array
> could propagate indefinitely, forcing all loops to be done with one
> array having large strides, and resulting in output that is stil
> out-of-order.
I think, at present, non-C-contiguous arrays will propagate.
> Some preference for C contiguous output is worth adding.
Suggestions for better heuristics are accepted, just state it in the
form of an objective function :)
> It would also be valuable to build some kind of test harness to track
> the memory layout of the arrays generated in some "typical"
> calculations as well as the ordering of the loop used.
> More generally, this problem is exactly what ATLAS is for - finding
> cache-optimal orders for linear algebra. So we shouldn't expect it to
> be simple.
Yes, I do not think any of us is expecting it to be simple. I don't
think we can aim for the optimal solution, since it is ill-defined, but
only for one that is "good enough in practice".
Pauli Virtanen
Heating Effects of Electricity by Vijays1
Heating effect of electricity
Energy exists in various forms such as mechanical energy, heat energy, chemical energy, electrical energy, light energy and nuclear energy. According to the law of conservation of energy, energy can be transformed from one form to another. In our daily life we use many devices where electrical energy is converted into heat energy, light energy, chemical energy or mechanical energy. When an electric current is passed through a metallic wire, like the filament of an electric heater, oven or geyser, the filament gets heated up: here electrical energy is converted into heat energy. This is known as the 'heating effect of current'.

It is a matter of common experience that a wire gets heated up when electric current flows through it. Why does this happen? A metallic conductor has a large number of free electrons in it. When a potential difference is applied across the ends of a metallic wire, the free electrons begin to drift from the low potential to the high potential region. These electrons collide with the positive ions (the atoms which have lost their electrons). In these collisions, energy of the electrons is transferred to the positive ions and they begin to vibrate more violently. As a result, heat is produced. The greater the number of electrons flowing per second, the greater the rate of collisions and hence the more heat is produced.

1. Mathematical Expression for Heat Produced
2. Application of the Heating Effect of Current
Mathematical Expression for Heat Produced
Potential difference is a measure of the work done in moving a unit charge across a circuit. Current in a circuit is equal to the amount of charge flowing in one second. Therefore, the work done in moving a charge 'Q' through a potential difference 'V' in a time 't' is given by:
Work done = potential difference × current × time
W = VIt
The same can be expressed differently using ohm's law.
According to Ohm's law, V = IR.
Therefore work can be expressed as
W = ...
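The preview is cut off above; the standard continuation of this derivation (basic circuit theory, supplied for completeness rather than recovered from the source) is:
W = VIt = (IR) × I × t = I²Rt
Since this work appears as heat, the heat produced is H = I²Rt, which is Joule's law of heating: heat is proportional to the square of the current, to the resistance, and to the time.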
Differential Topology, Foliations, and Group Actions
Edited by:
Paul A. Schweitzer, S. J.
Pontificia Universidade Catolica, Rio de Janeiro, Brazil
Steven Hurder
University of Illinois, Chicago
Nathan Moreira dos Santos
Universidade Federal Fluminense, Niteroi, Brazil
, and
José Luis Arraut
USP-ICMSE, Sao Carlos, Brazil
             
Contemporary Mathematics, Volume 161
1994; 287 pp; softcover
ISBN-10: 0-8218-5170-5
ISBN-13: 978-0-8218-5170-8
List Price: US$43
Member Price: US$34.40
Order Code: CONM/161

This volume contains the proceedings of the Workshop on Topology held at the Pontifícia Universidade Católica in Rio de Janeiro in January 1992. Bringing together about one hundred mathematicians from Brazil and around the world, the workshop covered a variety of topics in differential and algebraic topology, including group actions, foliations, low-dimensional topology, and connections to differential geometry. The main concentration was on foliation theory, but there was a lively exchange on other current topics in topology. The volume contains an excellent list of open problems in foliation research, prepared with the participation of some of the top world experts in this area. Also presented here are two surveys on group actions--finite group actions and rigidity theory for Anosov actions--as well as an elementary survey of Thurston's geometric topology in dimensions 2 and 3 that would be accessible to advanced undergraduates and graduate students.

Readership: Researchers and graduate students in topology and related fields.

• Foliations
• J. Cantwell and L. Conlon -- Topological obstructions to smoothing proper foliations
• O. Calvo-Andrade -- Deformations of holomorphic foliations
• S. Hurder and Y. Mitsumatsu -- Transverse Euler classes of foliations on non-atomic foliation cycles
• N. M. d. Santos -- Foliated cohomology and characteristic classes
• R. Langevin -- A list of questions about foliations
• De Rham theory and singularities
• A. G. Aleksandrov -- Duality and De Rham complex on singular varieties
• J.-P. Brasselet -- De Rham theorems for singular varieties
• M. A. S. Ruas -- On the equisingularity of families of corank \(1\) generic germs
• Two surveys on actions
• A. Adem -- Cohomology and actions of finite groups
• S. Hurder -- A survey of rigidity theory for Anosov actions
• Low-dimensional topology
• N. C. Saldanha -- An introduction to geometric topology: Geometric structures on manifolds of dimensions \(2\) and \(3\)
• D. Randall -- On \(4\)-dimensional bundle theories
• D. Randall and P. A. Schweitzer S. J. -- On foliations, concordance spaces, and the Smale conjectures
• J. N. Ballesteros and M. C. R. Fuster -- Generic \(1\)-parameter families of closed space curves
• Characteristic classes
• J. L. Dupont and F. W. Kamber -- Dependence relations for Cheeger-Chern-Simons invariants of locally symmetric spaces
• N. E. Barufatti -- Obstructions to immersions of projective Stiefel manifolds
Find the product of (3x + 7y)(3x − 7y).
9x^2 − 42xy + 49y^2
9x^2 + 42xy + 49y^2
9x^2 − 49y^2
9x^2 + 49y^2
if you know the answer could you please explain it to me? I would like to know how..
9x^2 − 49y^2
(3x + 7y)(3x − 7y):
First: 3x times 3x = 9x^2
Outside: 3x times −7y = −21xy
Inside: 7y times 3x = 21xy
Last: 7y times −7y = −49y^2
9x^2 − 21xy + 21xy − 49y^2 = 9x^2 − 49y^2
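(This is the standard difference-of-squares pattern: (a + b)(a − b) = a^2 − b^2, so the outside and inside terms always cancel. Here a = 3x and b = 7y, giving 9x^2 − 49y^2.)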
thank you!(:
Mol Syst Biol. 2006; 2: 70.
Robustness and modular design of the Drosophila segment polarity network
Biomolecular networks have to perform their functions robustly. A robust function may have preferences in the topological structures of the underlying network. We carried out an exhaustive
computational analysis on network topologies in relation to a patterning function in Drosophila embryogenesis. We found that whereas the vast majority of topologies can either not perform the
required function or only do so very fragilely, a small fraction of topologies emerges as particularly robust for the function. The topology adopted by Drosophila, that of the segment polarity
network, is a top ranking one among all topologies with no direct autoregulation. Furthermore, we found that all robust topologies are modular—each being a combination of three kinds of modules.
These modules can be traced back to three subfunctions of the patterning function, and their combinations provide a combinatorial variability for the robust topologies. Our results suggest that the
requirement of functional robustness drastically reduces the choices of viable topology to a limited set of modular combinations among which nature optimizes its choice under evolutionary and other
biological constraints.
Keywords: Drosophila, evolution, function and topology, modularity, robustness
Biological systems are evolved to function robustly under complex and changing environments (Waddington, 1957). At the cellular level, the interactions of genes and proteins define biomolecular
networks that reliably execute various functions despite fluctuations and perturbations. Functional robustness as a systems property may have preferences in and constraints on the wiring diagram of
the underlying networks (Barkai and Leibler, 1997; Li et al, 2004; El-Samad et al, 2005; Wagner, 2005). It has been demonstrated in a computational study that a robust oscillator has a strong
preference for a certain type of network topology (Wagner, 2005). Preferred network motifs in biological networks were identified (Milo et al, 2002) and were attributed to their robust dynamical
properties (Prill et al, 2005). It was argued through a comparative study of a few networks that a bacteria signaling network is optimally designed for its function (Kollmann et al, 2005). To clearly
lay out the relationship between the functional robustness and the topological constraints, we carried out an exhaustive computational analysis on the network topologies that perform the same
patterning function as the segment polarity gene network in Drosophila (Martinez Arias, 1993; DiNardo et al, 1994; Perrimon, 1994). We found that only a small fraction of topologies can perform
this patterning function robustly. This information can be used in combination with mutant phenotypes to discriminate biological models. We show that the topology of the Drosophila network is among
this small group of robust topologies and is optimized within certain biological constraints. We further found that all robust topologies can be classified into families of core topologies. Each
family is a particular combination of three kinds of network modules, which originate from the three subfunctions of the patterning function. We argue that the modular combinations also facilitate
flexibility and evolvability in this case.
The segmentation process in the embryogenesis of the fruitfly Drosophila is characterized by a sequential cascade of gene expression, with the protein levels of one stage acting as the positional
cues for the next (Wolpert et al, 2002). The successive transient expression of the maternal, the gap and the pair-rule genes divide the embryo into an ever finer pattern. After cellularization, the
segment polarity genes stabilize the pattern, setting up the boundaries between the parasegments and providing positional ‘readouts' for further development (Martinez Arias, 1993; DiNardo et al, 1994
; Perrimon, 1994). We are concerned here only with the network that is in action during the extended and the segmented germband stage, which is characterized by the interdependency of the expression
of en and wg (DiNardo et al, 1994; Perrimon, 1994), and we focus on its function of stabilizing a periodic pattern of sharp boundaries defined by the en- and the wg-expressing cells (Vincent and
O'Farrell, 1992). As depicted in Figure 1, the core network in Drosophila consists of the hedgehog (Hh) (Lum and Beachy, 2004) and the wingless (Wg) (Klingensmith and Nusse, 1994) signal transduction
pathways. Previous studies demonstrated that this network is a very robust patterning module. Differential equation models of the network can stabilize and maintain the required patterns of en and wg
expression with a remarkable tolerance to parameter changes (von Dassow et al, 2000; von Dassow and Odell, 2002; Ingolia, 2004). A simple Boolean model was shown to capture the main feature of the
network's dynamics (Albert and Othmer, 2003). These findings have led to the hypothesis that the segment polarity gene network is a very robust developmental module that is adopted in a wide range of
developmental programs (von Dassow et al, 2000). Indeed, the striped expression patterns of the segment polarity genes in the segmented germband stage are remarkably conserved among all insects,
perhaps among all arthropods (Peel et al, 2005). On the other hand, it was argued that the conservation of this gene network is not due to robustness but rather to pleiotropy (high connectivity with
other modules/networks) (Sander, 1983; Raff, 1996; Galis et al, 2002). Pleiotropic effects may constrain the network's evolution, ‘freezing' its topology early on during evolution and making it
conserved among developmental programs that later diverged. In this study, we investigate the relationship between the functional robustness and the network's topology. Specifically, we ask the
following questions: (1) How many network topologies can perform the given patterning function and how many can do so robustly? (2) Can a robust topology also satisfy certain topological constraints
imposed by, for example, pleiotropic effects, and if so how is this achieved? (3) Where does the Drosophila network stand in this analysis? (4) Are there any organization principles emerging from the
robust topologies for the given function?
Segment polarity network and expression pattern of wg and en. (A) The segment polarity gene network model of Ingolia (2004). Ellipses represent mRNAs and rectangles proteins. Lines ending with an
arrow and a dot denote activation and repression, respectively. ...
Coarse-graining the biological network
Instead of analyzing the full biological network, we focus on its core topology. The core topology is derived from the full network and is the minimal set of nodes and links that represent the
underlying topology of the full network. This reduction in degrees of freedom enables us to perform a much more comprehensive computational and theoretical analysis and at the same time to preserve
key functional properties. The topology of the Drosophila segment polarity network can be represented by a network of three nodes. The network represented in Figure 1A can be simplified into the
topology of Figure 1B. As we are mainly concerned with the steady-state behavior, certain ‘intermediate steps' in the network can be combined. First, we combine the mRNA node with its corresponding
protein node if there is no post-transcriptional regulation for the mRNA, because the time delay between the mRNA and the protein production does not play any role in our steady-state analysis. We
then combine the node hh/Hh with en/En, because the expression of hh depends solely on En. The expression patterns of these two genes, hh and en, are highly correlated at this stage of the
development (Tabata et al, 1992). We thus use a single node ‘E' in Figure 1B representing the four nodes en, En, hh and Hh in Figure 1A. Extracellular Hh signaling activates wg by regulating the
amount of Ci and Cn, which are parts of the Hh signal transduction pathway. Both Ci and Cn are the products of the gene ci. In the absence of Hh signaling, Ci goes through a process of proteolysis
and the remaining fragment functions as a repressor, Cn. The Hh signaling blocks the proteolysis of Ci, resulting in the accumulation of Ci in the nucleus, which acts as a transcriptional activator
for wg (Alexandre et al, 1996; Lum and Beachy, 2004). Thus, Ci and Cn function like a transcriptional switch in response to the Hh signaling. This regulation is simplified as a direct (intercellular)
link from ‘E' to ‘W' in the coarse grained topology (Figure 1B). The repression of ci by En in Figure 1A is represented in Figure 1B as ‘E' repressing ‘W', as the function of ci is to control the
expression of wg. The simplified model Figure 1B has similar dynamic properties with the more detailed model Figure 1A; in particular, they can both stabilize the wild-type pattern and sharpen the
parasegment boundary.
In coarse-graining the network of Figure 1C to that of Figure 1D, the two negative regulations, from Slp to mid and from Mid to wg, are replaced with a direct positive regulation from ‘S' to ‘W'.
Again, the simplified model Figure 1D has similar dynamic properties and patterning function with the full model Figure 1C.
Enumerating three-node networks
We then proceed to enumerate topologies of three-node networks with intra- and intercellular interactions. Every node may regulate itself and the other two nodes, both intracellularly and
intercellularly, resulting in 3 × 3 × 2=18 directed links. Each link has three possibilities: the regulation can be positive, negative or absent. So the total number of possible topologies for the
three-node network is 3^18=387 420 489. Enumerating all of them is beyond our computational power. Thus, we make the following restrictions on the topology: only two out of the three nodes can
possibly go outside of the cell to signal. This restriction reduces the total number of topologies to 3^15=14 348 907, all of which we enumerate. For each topology, we use a model of ordinary
differential equations (ODEs) to quantitatively assess its ability to perform the required function, which is to stabilize the pattern of Figure 1F given the initial condition of Figure 1E (Materials
and methods). The functional robustness of a topology is measured by the quantity Q (=the fraction of the parameter space that can perform the function) (von Dassow et al, 2000; Ingolia, 2004). We
estimate Q by randomly sampling the parameter space: Q≈m/n, where n is the number of random parameter sets used in the sampling and m the number of those sets that can perform the function. We first
sampled each and every topology with n=100 random parameter sets. We found that about 1% of the topologies can perform the function with at least one of the 100 parameter sets (m>0). However, their Q
values differ drastically. As shown in Figure 2, the distribution of Q values among this 1% of topologies is highly skewed: although the majority have very small Q values, there is a long tail in the distribution.
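The sampling estimate of Q can be sketched as follows (illustrative code; the two callables are placeholders for the parameter sampler and the ODE-run-plus-pattern-test described under Materials and methods, not the authors' actual code):

import numpy as np

def estimate_Q(performs_function, sample_parameters, n_sets=100, seed=0):
    """Monte Carlo estimate Q ~ m/n: the fraction of random parameter
    sets for which the network performs the patterning function."""
    rng = np.random.default_rng(seed)
    m = sum(performs_function(sample_parameters(rng)) for _ in range(n_sets))
    return m / n_sets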
Figure 2 (caption, truncated): The histogram of Q values for three-node networks. Each of the 14 348 907 networks is sampled with 100 random parameter sets (black bars). Each of the resulting 156 016 networks with Q>0 is resampled with 1000 random parameter sets (only data ...)
Biological network
The topology (Figure 1B) of the network constructed in previous studies (von Dassow and Odell, 2002; Ingolia, 2004) (Figure 1A) scored very high but is not the top ranking one. However, there may be
some biological constraints on the selection of topologies. Indeed, a group of topologies consisting of only two nodes (with the ‘S' node left unlinked) come close to the top (see Figure 3A),
suggesting that if Drosophila were only presented with the function defined in our study, the best design would be to just use two mutually activating signaling pathways (‘E' and ‘W') and nothing
else. But both the Hh and the Wg signaling pathways are utilized in at least several other functions besides stabilizing the parasegment boundaries (Galis et al, 2002), which may impose pleiotropic
constraints on the topology of the networks that utilize these pathways. In general, these constraints may be hard to decipher. Here, we simply note that there is no sound biological evidence for any
direct positive autoregulation loops on the two signaling pathways and on the slp genes. If we exclude topologies with any direct positive autoregulation on the ‘E' and the ‘S' nodes, Figure 1B
stands up as the most robust topology (Q=0.47). In this topology, there is still a direct autoregulation loop on ‘W', which originates from the Wg → wg autoregulation in Figure 1A. This
autoregulation has no basis in biological evidence, but was added by previous authors to ensure the correct patterning of the model—without this added link, their models cannot reproduce the correct
biological pattern (von Dassow et al, 2000; von Dassow and Odell, 2002; Ingolia, 2004). We then ask whether, without adding this autoregulation, we can identify a robust topology that has biological
evidence for every link. There are eight topologies with Q>0.1 that have no direct autoregulation on any of the three nodes. A top ranking one (Q=0.36) is shown in Figure 1D. Instead of a direct
autoregulation on ‘W', this topology accomplishes the positive feedback indirectly through the node ‘S'. This would suggest that the Wg signaling pathway regulates the slp gene whose product in turn
regulates wg. Indeed, there is ample biological evidence for these regulations (Lee and Frasch, 2000; Buescher et al, 2004), suggesting a biological network of Figure 1C. The role of slp in
regulating wg was also discussed in previous computation models (Meir et al, 2002; Albert and Othmer, 2003).
Figure 3 (caption, truncated): Skeletons and functional modules. (A) The four skeletons in robust two-node topologies (black lines). The green, orange and red links are neutral, bad and very bad links, respectively. The numbers below each skeleton are its Q value and the size of its ...
To further determine which of the two topologies, Figure 1B or D, is closer to the true biological one, we subject both to the mutant test. We model two kinds of mutants corresponding to
perturbations in the two signaling pathways and compare the computed phenotypes with the experimental observations. The first is the zw3 (a protein kinase in Wg signaling pathway) mutant—the mutation
results in a ubiquitous Wg signaling with a phenotype of an expanded en-expressing region ended by an ectopic wg-expressing stripe (Figure 1G) (Siegfried et al, 1994). The second is the mutation of
the Hh receptor patched (ptc), which results in a ubiquitous Hh signaling and has a phenotype of an expanded wg-expressing region ended by an ectopic en-expressing stripe (Figure 1H) (DiNardo et al,
1988). We found that although both topologies, Figure 1B and D, can produce the wild-type patterning robustly, only the network of Figure 1D can also produce the two mutant phenotypes. Specifically,
for Figure 1D, about 1/3 of the parameter sets that produced the wild-type pattern can also produce the two mutant patterns. For Figure 1B, none of the parameter sets that produced the wild-type
pattern can also produce either of the two mutant patterns. We also used the more detailed models Figure 1A and C to carry out the mutant test and obtained similar results (see details in
Supplementary information). This suggests that the network of Figure 1C (and its corresponding topology of Figure 1D) is a better model for the Drosophila network than that of Figure 1A (and Figure 1B).
Two-node topologies
In our enumeration study of the three-node topologies, some two-node topologies (with the ‘S' node unlinked) scored very high. This indicates that the simplest ‘irreducible' topology for the required
patterning function consists of only two nodes and that it would be instructive to study two-node topologies. There are 45 two-node topologies with Q>0.1. A close examination of these topologies
revealed that all of them come from four core topologies, which we call skeletons (Figure 3A). In other words, the 45 topologies can be classified into four families. In each family, all the members
come from a skeleton by adding extra links to the skeleton. These links are either ‘neutral' (have no effect on the Q value) or ‘bad' (will reduce the Q value). The number of neutral links a skeleton
can accommodate and the number of bad links it can tolerate (so that the reduced Q value is still larger than 0.1) depend on the structure and the robustness of the skeleton. As shown in Figure 3A,
the first skeleton can accommodate and tolerate combinations of two neutral links and four bad links, whereas the fourth skeleton can accommodate or tolerate none. Furthermore, the four skeletons all
contain the following three topological features: positive feedback on ‘E' (either intra- or intercellularly), positive feedback on ‘W' (either intra- or intercellularly) and intercellular mutual
activation between ‘E' and ‘W'. These three topological features can be traced back to three subfunctions, which the required patterning function can be decomposed into. Note that cells adjacent to
an ‘E'-expressing cell can have two different fates: expressing ‘W' or none (Figure 1F). So the network should be bistable in ‘W'. Similarly, cells adjacent to ‘W' can express either ‘E' or none,
implying bistability in ‘E'. Thus, the positive feedback loops on ‘E' and ‘W' follow the functional requirement of bistability on ‘E' and ‘W' (Ingolia, 2004). The mutual intercellular activation
between ‘E' and ‘W' arises from the functional requirement of maintaining a sharp patterning boundary. In order to sharpen a wide boundary (Figure 1E), it is necessary to have ‘E' expressed only
right next to a ‘W' cell, and vice versa, leading to the interdependency of ‘E' and ‘W' in the network topology. Therefore, the three functional requirements lead to the three kinds of topological
features, or modules. The combination of the three kinds of modules, with one from each kind, results in the four skeletons of the robust topologies (Figure 3A). Note that for the second, third and
fourth skeletons in Figure 3A, there are necessary repressive links (which are neutral in the first skeleton) in addition to the three modules. When the positive feedback module is intercellular, it
is necessary to have an intracellular repression on the node to prevent 'E' and 'W' from being expressed in the same cell, which would further blur the boundary (see details in Supplementary
information). Also note that some bad links are just redundant modules, for example, the intercellular autoactivation of E or W in the first skeleton.
Families of three-node topologies
Having identified the three essential kinds of modules for the patterning function and the rules of their combination in two-node topologies, we turn our attention to the robust three-node topologies
and ask if similar organization principles exist there. With one extra node ‘S', there are multiple new ways to form each kind of module (Figure 3B). (Note that for the E and W modules, the positive
feedback does not have to act on E/W directly. If E/W is dependent on S, positive feedback on S is also a viable choice. Also note that as we have excluded from our enumeration the intercellular
regulation from S, there are no modules with this regulation.) We then checked all three-node topologies with Q>0.1 to see if they contained these modules. Intriguingly, every topology in this pool
(37 580 of them) contains at least one module of each kind. Therefore, it is a necessary condition for a robust topology to include at least one module of each kind. On the other hand, the reverse is
not true. From the modules in Figure 3B, one can form 108 combinations that include one and only one module of each kind and that have no conflicting regulations (see details in Supplementary
information). Only 44 of them are robust enough to be the skeletons of networks with Q>0.1. In other words, we found that all topologies with Q>0.1 can be classified into 44 distinct families
corresponding to 44 modular combinations (skeletons). In most families, the Q value of the skeleton is either the highest or close to the highest in the family, implying that other members in the
family have extra non-beneficial (neutral and bad) links compared to the skeleton. There are a few cases where the skeleton's Q value is not close to the top within the family, implying that some
extra links in addition to the modular combination are beneficial.
As shown in Figure 4A, the family size roughly scales exponentially with the skeleton's Q value. This means that the larger the skeleton's Q value, the more non-beneficial links it can accommodate
and tolerate. The exponential dependence of the family size on the Q value suggests that family members arise as combinatorial additions to the core topology, although in general the effects of
adding links to the core may be correlated. Although the non-beneficial links do not improve the Q values, they may facilitate variability and plasticity that can be useful in adapting to new
environments and functional tasks (Schuster et al, 1994). We found that certain neutral links and redundant modules are beneficial when the system is faced with noisy initial conditions (see details
in Supplementary information). The modular organization of the skeletons suggests that their Q values might be related to the Q values of the modules. Indeed, we found that for the 44 skeletons, the
Q value of a skeleton is well correlated with the product of the Q values of the three modules that make up the skeleton (Figure 4B).
Figure 4 (caption): The number of networks in a family (A) and the skeleton's estimated Q value (the product of the modules' Q values) (B) versus the Q value of the skeleton.
In summary, our study of the relationship between function and topology revealed certain design principles that may be applicable to a broader class of biological systems. We found that the
requirement of functional robustness drastically reduces the choices of viable topology. Similar findings were reported in models of circadian oscillators (Wagner, 2005) and, in a broader sense,
protein folding (Li et al, 1996), suggesting that the constraint may be general. The approach and method developed here may be applicable in analyzing other networks and in designing novel functional networks.
In our case, the robust topologies are a set of modular combinations. Here, modularity arises from the decomposability of the function into relatively independent subfunctions. Combinations of
modules provide a combinatorial variability—each subfunction has a multiple choice of modules. Although only a subset of these combinations is robust, this flexibility may be crucial for the network
to evolve and adapt in a wide range of situations (Kirschner and Gerhart, 2005). On the other hand, the fact that each module in the network can be traced back to a simpler subfunction suggests that
new and more complex functions can be built from the bottom up via combinations of simpler functional modules. Similar principles have been seen in other biological systems, for example, the
transcriptional control (Carroll, 2005) and protein interactions (Bhattacharyya et al, 2006), suggesting a hierarchical modular design toward an increasing complexity.
Optimality and pleiotropy
Another insight gained from our study is that the topology adopted by nature may not necessarily be the most robust per se, but may nonetheless be optimized within certain biological constraints.
Here, the constraint seems to be that no direct positive loops can be used on the three nodes: ‘E', ‘S' and ‘W'. Direct positive autoregulation may result in a less flexible system, which may impair
the other functional abilities of the Hh and Wg pathways. Given the multiple tasks carried out by the two major signaling pathways (Galis et al, 2002), it is plausible that when a positive loop is
needed for a specific function, it is best done with another mediator (here ‘S') that is only involved in that function. Intriguingly, in the segment polarity gene network, the ‘S' node is part of
the positive loops of both ‘E' and ‘W' (Figure 1C and D). The ‘S' loops with ‘E' through mutual repression and with ‘W' through mutual activation. This design ensures that ‘E' and ‘W' cannot be
switched on in the same cell. Our study suggests that modular design not only provides robustness but can also facilitate variability to accommodate a variety of pleiotropic constraints.
The distribution of Q values (Figure 2) may have quantitative implications in the early history of evolution. One may ask whether nature picked a robust topology in the first place or a fragile one
and then improved upon it. The argument for the former is that a robust topology has a very large working parameter space and thus is easy to be ‘hit' by random parameter sets. The argument for the
latter is that although each particular fragile topology has a tiny working parameter space and is hard to be ‘hit', there are so many of them that the chance of hitting any is high. This question
can be phrased quantitatively by asking where the weight Q × P(Q) concentrates, where P(Q) is the probability density of the Q distribution (inset of Figure 2). We found that P(Q) ≈ c/Q^α with
α = 1.37, which implies that Q × P(Q) ≈ c/Q^0.37 puts more weight on smaller Q's, favoring the fragile topologies as nature's first pick.
Functional versus robustness constraint
We have sampled each of the 14 348 907 three-node topologies with N=100 random sets of parameters. We found that there are M=156 016 (about 1%) networks that are ‘functional' with at least one
parameter set. Among these ‘functional networks', 96% of them contain at least one module of each kind. Thus, it appears that the function alone (without robustness) is a primary constraint on
topology. However, note that the number of ‘functional networks' M can increase with the sampling number N. We have sampled all two-node networks with N=100, 1000 and 10 000 (Supplementary
information) and found that M=75, 100 and 120, respectively. Furthermore, we found that the percentage of ‘functional networks' that are modular (containing at least one skeleton) decreases with N:
it is 92% for N=100, 74% for N=1000 and 63% for N=10 000. Thus, ‘functional network' cannot be defined unambiguously without a minimal robustness (Q) requirement. There would be more and more
non-modular ‘functional networks' if we sample the parameter space more and more thoroughly. These networks ‘function' with some special arrangements of parameters. On the other hand, if we focus on
robust functional networks (the ones with Q larger than a minimal value), all the statistical properties converge with the sampling number and the conclusions are robust.
Materials and methods
The ODE model
For a fixed topology, every cell has the same set of nodes and links. Each node A has a half-life time τ[A]. Each link is modeled with a Hill function. ‘A link from A' has either the form A^n/(A^n+k^
n) (positive regulation) or k^n/(A^n+k^n) (negative regulation). After proper normalization, each node has one parameter (half-life time) and each link has two parameters (n and k). Multiple
regulations to the same node are modeled as the product of the regulations. For example, for the topology of Figure 5, the equations in each cell are
Figure 5 (caption): An example of a two-node topology. Intracellular regulations (solid lines) act on nodes within the cell; intercellular regulations (dashed lines) act on target nodes in nearby cells.
where A[out] is the average concentration of A in the neighboring cells. We use the multiplication rule to model multiple regulations because in the full biological network (Figure 1A and C) the
negative links are dominant, implying an ‘AND'-like logic when a negative link appears together with other regulations. In the simplified model, a positive link can be a result of two negative links
(e.g., the link S → W in Figure 1D comes from the two negative regulations of Figure 1C, Slp to mid and Mid to wg). In this case, the positive link should also have the 'AND'-like logic. For simplicity, we implement the multiplication rule
uniformly whenever there are multiple regulations. In our case, we have tested that the simplified models (Figure 1B and D) have the same steady-state pattern as the full models (Figure 1A and C).
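A runnable sketch in this spirit (illustrative only; the topology used here, mutual intercellular activation between two nodes A and B on a periodic eight-cell row, is an assumption chosen for simplicity and is not the actual Figure 5 system):

import numpy as np
from scipy.integrate import solve_ivp

def hill_act(x, k, n):
    # positive regulation: x^n / (x^n + k^n)
    return x**n / (x**n + k**n)

def hill_rep(x, k, n):
    # negative regulation: k^n / (x^n + k^n); shown for the negative form,
    # unused in this particular toy topology
    return k**n / (x**n + k**n)

def rhs(t, y, k, n, tau, ncells=8):
    A, B = y[:ncells], y[ncells:]
    B_nbr = 0.5 * (np.roll(B, 1) + np.roll(B, -1))  # average B in neighboring cells
    A_nbr = 0.5 * (np.roll(A, 1) + np.roll(A, -1))  # average A in neighboring cells
    # One regulation per node here; multiple inputs to a node would be
    # combined by multiplying their Hill terms (the multiplication rule).
    dA = (hill_act(B_nbr, k, n) - A) / tau
    dB = (hill_act(A_nbr, k, n) - B) / tau
    return np.concatenate([dA, dB])

y0 = np.zeros(16)
y0[0] = y0[15] = 1.0          # an arbitrary initial stripe
sol = solve_ivp(rhs, (0.0, 800.0), y0, args=(0.1, 4.0, 20.0), method="RK45")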
We use the GNU Scientific Library for ODE simulation (Galassi et al, 2002). The function used for the integration is rkf45. Calculation time is set to 800 min (virtual simulation time). In most
calculations, we randomly sample 100–10 000 parameter sets using the LHS method (McKay et al, 1979), which minimizes the correlation between different parameter dimensions. The ranges of the
parameters used in the sampling are as follows: k=(0.001–1), n=(2–10) and τ=(5–100 min). They are similar to the ranges used in previous studies (von Dassow et al, 2000; Ingolia, 2004). k is evenly
sampled on the log scale and both τ and n are evenly sampled on the linear scale. The ODEs are simulated on an eight-cell segment (one row of the parasegment in Figure 1F). Periodic boundary
conditions are used in both directions (x and y).
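The sampling step can be sketched with a standard permutation-based Latin hypercube over the three parameter types and their stated ranges (an illustration, not the authors' code):

import numpy as np

def lhs(n_samples, dims, rng):
    # McKay et al. (1979): one stratified draw per interval [j/n, (j+1)/n)
    # in each dimension, with the strata shuffled independently per dimension.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dims))) / n_samples
    for d in range(dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

rng = np.random.default_rng(1)
u = lhs(1000, 3, rng)
k_par   = 10.0 ** (-3.0 + 3.0 * u[:, 0])   # k in (0.001, 1), even on the log scale
n_hill  = 2.0 + 8.0 * u[:, 1]              # n in (2, 10), even on the linear scale
tau_min = 5.0 + 95.0 * u[:, 2]             # tau in (5, 100) min, linear scale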
Patterning function and judgment of pattern
We judge whether or not a network with a given parameter set can perform the required patterning function in the following way. Let x(I;n) be the value of node I in cell n. I can be E, S or W and x
is a real number between 0 and 1. The patterning function is defined as follows: given the initial condition (Figure 1E) x(E;1,2)=1, x(E;3–8)=0, x(S;1–4)=0, x(S;5–8)=1, x(W;1–6)=0, x(W;7,8)=1, the
network should reach the target steady state (Figure 1F) x(E;1)=1, x(E;2–8)=0, x(W;1–7)=0, x(W;8)=1 within a given time. We use a similar criterion as the one in previous studies (von Dassow et al,
2000; Ingolia, 2004) to judge if a pattern is acceptable to be the target pattern. Specifically, for node I in cell n, a score T is given to evaluate if its expression level is consistent with the
target pattern:
where x(I,n) is the concentration of node I in cell n, x[t] the threshold for x (we use 10% here) and α[max] the worst-possible score (0.5 here). T[off] is used when the target state requires that
node I has a low value (0) in cell n. T[on] is used when node I should have a high value (1) in cell n. All the individual scores are combined to give the total score:
If the total score is lower than 0.0125, the pattern is acceptable. This threshold is more stringent than that in the previous work (von Dassow et al, 2000; Ingolia, 2004). We check the pattern
twice, at 600 and 800 min. If the scores are smaller than 0.0125 at both times, we accept the pattern.
Supplementary Material
Supplementary information
We thank Nicholas Ingolia, Morten Kloster, Edo Kussell, Patrick O'Farrell, Wendell Lim, Andrew Murray, Leslie Spector and members of the Center for Theoretical Biology at PKU for discussion, comments
and/or critical reading of the manuscript. This work was supported by National Key Basic Research Project of China (2003CB715900) and National Natural Science Foundation of China. CT acknowledges
support from the Sandler Family Supporting Foundation.
• Albert R, Othmer HG (2003) The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster. J Theor Biol 223: 1–18. [PubMed]
• Alexandre C, Jacinto A, Ingham PW (1996) Transcriptional activation of hedgehog target genes in Drosophila is mediated directly by the cubitus interruptus protein, a member of the GLI family of
zinc finger DNA-binding proteins. Genes Dev 10: 2003–2013. [PubMed]
• Barkai N, Leibler S (1997) Robustness in simple biochemical networks. Nature 387: 913–917. [PubMed]
• Bhattacharyya RP, Reményi A, Yeh BJ, Lim WA (2006) Domains, motifs, and scaffolds: the role of modular interactions in the evolution and wiring of cell signaling circuits. Annu Rev Biochem 75:
655–680. [PubMed]
• Buescher M, Svendsen PC, Tio M, Miskolczi-McCallum C, Tear G, Brook WG, Chia W (2004) Drosophila T box proteins break the symmetry of hedgehog-dependent activation of wingless. Curr Biol 14:
1694–1702. [PubMed]
• Carroll SB (2005) Evolution at two levels: on genes and form. PLoS Biol 3: e245. [PMC free article] [PubMed]
• DiNardo S, Heemskerk J, Dougan S, O'Farrell PH (1994) The making of a maggot: patterning the Drosophila embryonic epidermis. Curr Opin Genet Dev 4: 529–534. [PMC free article] [PubMed]
• DiNardo S, Sher E, Heemskerk-Jongens J, Kassis JA, O'Farrell PH (1988) Two-tiered regulation of spatially patterned engrailed gene expression during Drosophila embryogenesis. Nature 332: 604–609.
[PMC free article] [PubMed]
• El-Samad H, Kurata H, Doyle JC, Gross CA, Khammash M (2005) Surviving heat shock: control strategies for robustness and performance. Proc Natl Acad Sci USA 102: 2736–2741. [PMC free article] [PubMed]
• Galassi M, Davies J, Theiler J, Gough B, Jungman G, Booth M, Rossi F (2002) GNU Scientific Library Reference Manual. 2nd edn, 290–298.
• Galis F, van Dooren TJ, Metz JA (2002) Conservation of the segmented germband stage: robustness or pleiotropy? Trends Genet 18: 504–509. [PubMed]
• Ingolia NT (2004) Topology and robustness in the Drosophila segment polarity network. PLoS Biol 2: e123. [PMC free article] [PubMed]
• Kirschner M, Gerhart J (2005) The Plausibility of Life. New Haven and London: Yale University Press.
• Klingensmith J, Nusse R (1994) Signaling by wingless in Drosophila. Dev Biol 166: 396–414. [PubMed]
• Kollmann M, Lovdok L, Bartholome K, Timmer J, Sourjik V (2005) Design principles of a bacterial signalling network. Nature 438: 504–507. [PubMed]
• Lee HH, Frasch M (2000) Wingless effects mesoderm patterning and ectoderm segmentation events via induction of its downstream target sloppy paired. Development 127: 5497–5508. [PubMed]
• Li F, Long T, Lu Y, Ouyang Q, Tang C (2004) The yeast cell-cycle network is robustly designed. Proc Natl Acad Sci USA 101: 4781–4786. [PMC free article] [PubMed]
• Li H, Helling R, Tang C, Wingreen N (1996) Emergence of preferred structures in a simple model of protein folding. Science 273: 666–669. [PubMed]
• Lum L, Beachy PA (2004) The Hedgehog response network: sensors, switches, and routers. Science 304: 1755–1759. [PubMed]
• Martinez Arias A (1993) In Development and Patterning of the Larval Epidermis of Drosophila. Bate M, Hartenstein V (eds) pp 517–608. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
• McKay MD, Beckman RJ, Conover WJ (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21: 239–245.
• Meir E, Munro EM, Odell GM, von Dassow G (2002) Ingeneue: a versatile tool for reconstituting genetic networks, with examples from the segment polarity network. J Exp Zool 294: 216–251. [PubMed]
• Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovski D, Alon U (2002) Network motifs: simple building blocks of complex networks. Science 298: 824–827. [PubMed]
• Peel AD, Chipman AD, Akam M (2005) Arthropod segmentation: beyond the Drosophila paradigm. Nat Rev Genet 6: 905–916. [PubMed]
• Perrimon N (1994) The genetic basis of patterned baldness in Drosophila. Cell 76: 781–784. [PubMed]
• Prill RJ, Iglesias PA, Levchenko A (2005) Dynamic properties of network motifs contribute to biological network organization. PLoS Biol 3: e343. [PMC free article] [PubMed]
• Raff RA (1996) The Shape of Life. Chicago: University of Chicago Press.
• Sander K (1983) Development and Evolution. Goodwin BC, Holder N, Wylie CC (eds) pp 137–154. Cambridge: Cambridge University Press.
• Schuster P, Fontana W, Stadler PF, Hofacker I (1994) From sequences to shapes and back: a case study in RNA secondary structures. Proc R Soc London B 255: 279–284. [PubMed]
• Siegfried E, Wilder EL, Perrimon N (1994) Components of wingless signalling in Drosophila. Nature 367: 76–80. [PubMed]
• Tabata T, Eaton S, Kornberg TB (1992) The Drosophila hedgehog gene is expressed specifically in posterior compartment cells and is a target of engrailed regulation. Genes Dev 6: 2635–2645. [PubMed]
• Vincent JP, O'Farrell PH (1992) The state of engrailed expression is not clonally transmitted during early Drosophila development. Cell 68: 923–931. [PMC free article] [PubMed]
• von Dassow G, Odell GM (2002) Design and constraints of the Drosophila segment polarity module: robust spatial patterning emerges from intertwined cell state switches. J Exp Zool 294: 179–215. [PubMed]
• von Dassow G, Meir E, Munro EM, Odell GM (2000) The segment polarity network is a robust developmental module. Nature 406: 188–192. [PubMed]
• Waddington CH (1957) The Strategy of the Genes. London: George Allen & Unwin.
• Wagner A (2005) Circuit topology and the evolution of robustness in two-gene circadian oscillators. Proc Natl Acad Sci USA 102: 11775–11780. [PMC free article] [PubMed]
• Wolpert L, Beddington R, Jessell T, Lawrence P, Meyerowitz E, Smith J (2002) Principles of Development. New York: Oxford University Press.
Intrinsic versus extrinsic voltage sensitivity of blocker interaction with an ion channel pore
Many physiological and synthetic agents act by occluding the ion conduction pore of ion channels. A hallmark of charged blockers is that their apparent affinity for the pore usually varies with
membrane voltage. Two models have been proposed to explain this voltage sensitivity. One model assumes that the charged blocker itself directly senses the transmembrane electric field, i.e., that
blocker binding is intrinsically voltage dependent. In the alternative model, the blocker does not directly interact with the electric field; instead, blocker binding acquires voltage dependence
solely through the concurrent movement of permeant ions across the field. This latter model may better explain voltage dependence of channel block by large organic compounds that are too bulky to fit
into the narrow (usually ion-selective) part of the pore where the electric field is steep. To date, no systematic investigation has been performed to distinguish between these voltage-dependent
mechanisms of channel block. The most fundamental characteristic of the extrinsic mechanism, i.e., that block can be rendered voltage independent, remains to be established and formally analyzed for
the case of organic blockers. Here, we observe that the voltage dependence of block of a cyclic nucleotide–gated channel by a series of intracellular quaternary ammonium blockers, which are too bulky
to traverse the narrow ion selectivity filter, gradually vanishes with extreme depolarization, a predicted feature of the extrinsic voltage dependence model. In contrast, the voltage dependence of
block by an amine blocker, which has a smaller “diameter” and can therefore penetrate into the selectivity filter, follows a Boltzmann function, a predicted feature of the intrinsic voltage
dependence model. Additionally, a blocker generates (at least) two blocked states, which, if related serially, may preclude meaningful application of a commonly used approach for investigating
channel gating, namely, inferring the properties of the activation gate from the kinetics of channel block.
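For reference, the standard forms behind these two predictions (textbook formulations, not equations taken from this paper): in the intrinsic (Woodhull-type) model, a blocker of charge $z$ binding at electrical distance $\delta$ across the field has

$$ K_d(V) = K_d(0)\,e^{-z\delta FV/RT} $$

(sign depending on the side of block), so the blocked fraction $1/(1 + K_d(V)/[B])$ follows a Boltzmann function of voltage; in the extrinsic model, $\delta \approx 0$ for the blocker itself and voltage dependence enters only through the movement of permeant ions across the field, a contribution that saturates at extreme voltages, which is why block there becomes voltage independent.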
Where does the splitting principle come from and does it generalize
Basically, I'm aware of "splitting principles" for the following three objects (which are all isomorphic modulo torsion).
1. The Chow group a la Fulton.
2. The classical Grothendieck group of vector bundles or coherent sheaves.
3. The $\gamma$-graded Grothendieck group.
I was just wondering where the idea of "the splitting principle" comes from. I'm guessing somewhere in topology when one wanted to define Chern classes and show some properties. But I don't know.
And above that, is there some more general way of looking at this? I know there is a theorem that connects higher K-groups with Chow groups in a sense. So I ask, is there a way of deducing the
splitting principle for one of the above objects from the other? (It's easy if we want to do this modulo torsion, of course.)
ag.algebraic-geometry chern-classes
3 Answers
We can think of the splitting principle as a condition on a "cohomology theory" (of some sort) $E^*$, coming about when working with Chern classes for instance, and then ask: When does
$E^*$ satisfy this condition? First, let's make the condition more precise and reformulate it:
Condition 1: Given $X$ and a vector bundle $V$ on $X$, there exists $f: X' \to X$ such that $f^* V$ has a filtration with subquotients line bundles, and $f^*: E^*(X) \to E^*(X')$ is injective.
But there is a universal choice for $X'$, namely the flag variety of $V$: $p: Fl(V) \to X$. Any $f: X' \to X$ with $f^* V$ filtered with line bundle subquotients will factor through
$p$, and so we're really just asking if $p^*: E^*(X) \to E^*(Fl(V))$ is injective.
Condition 1': For all $X$ and $V$, $p^*: E^*(X) \to E^*(Fl(V))$ is injective.
At this point there are two ways this answer can go, depending on ones tastes:
1. $Fl(V)$ is a very geometric object over $X$, so we might as well ask that we actually have a formula for $E^*(Fl(V))$ in terms of $E^*(X)$. If $E^*$ is "reasonable" (i.e., has Chern
classes giving rise to a "projective bundle formula") then iteratively applying the projective bundle formula will give such a thing, and in fact show that $E^*(X)$ is a direct
summand of $E^*(Fl(V))$ (the formula is recalled below).
2. (My favorite:) There's a nice way of strengthening Condition 1' that also holds in all reasonable cases, and that looks rather natural. You can ask that $Fl(V) \to X$ behave like a
"covering", i.e. that (Condition 2:) $$ E^*(X) \to E^*(Fl(V)) \to E^*\left(Fl(V) \times_X Fl(V)\right) $$ is an equalizer diagram. (So not only is pullback injective, but you can
identify its image...) (In fact, in reasonable cases it'll be a split equalizer diagram, related to the direct summand thing above.)
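(For reference, the projective bundle formula invoked in point 1, in its standard form for a complex-oriented theory and a rank-$n$ bundle $V$; this statement is standard background, not part of the original answer. With $t = c_1(\mathcal{O}_{\mathbb{P}(V)}(1))$,

$$ E^*(\mathbb{P}(V)) \cong E^*(X)[t]\big/\bigl(t^n + c_1(V)\,t^{n-1} + \cdots + c_n(V)\bigr), $$

up to sign conventions on the $c_i$, so $E^*(\mathbb{P}(V))$ is a free $E^*(X)$-module on $1, t, \dots, t^{n-1}$; iterating over a full flag gives the same conclusion for $E^*(Fl(V))$.)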
If your question is one of proof + generalization (which I think it is), rather than vague motivation, then I haven't addressed it yet:
In topology. one can show that any complex-oriented cohomology theory (i.e., one with Chern classes for line bundles) $E^*$ has a projective bundle formula, satisfies all the
conditions, etc.
In more-algebro-geometric contexts, you could deduce the Chow + K-theory (I don't know anything about the $\gamma$-filtration) statements by either
1. Constructing $c_1$ + proving a projective bundle formula, and then feeding this into a general argument using these to prove the rest.
2. Going to the universal example of algebraic cobordism and then deducing the results for Chow + K-theory from the known relationships between them and algebraic cobordism. (Though
this second approach is not so great, since those relationships hold under much more stringent hypotheses than are necessary to run the argument.)
One could also ask to generalize this in another direction, replacing vector bundles and $Fl(V)$ by more general $G$-bundles and their associated $G/B$-bundles. In general, that's a
more complicated story...
For a nice description of this picture in the classical case of vector bundles, I suggest Hatcher's notes (math.cornell.edu/~hatcher/VBKT/VBpage.html). His proof of the projective
bundle theorem relies on his very nice proof of the Leray-Hirsch Theorem (in his Algebraic Topology book), and avoids the temptation to use Mayer-Vietoris sequences. (Mayer-Vietoris
works well in the compact case and not so well in general.) I believe Fulton's Intersection Theory book has a nice description of the analogue in algebraic geometry. – Dan Ramras Apr
14 '10 at 20:12
I'm not sure if this is really an answer to your question, but I like to think about the splitting principle as the statement that if you want to check a formula for all bundles, it
usually suffices to check it for sums of line bundles.
As Anatoly explained, this "principle" works because you can always pull back your bundle $E\to X$ so that it becomes a sum of line bundles, and moreover you can do so using a map $f: Y\to
X$ that's injective on cohomology. So to check some (cohomological) formula involving $E$ in the ring $H^*(X)$, it's enough to check it in the larger cohomology ring $H^*(Y)$, and back in
$Y$ you get to work with the sum of line bundles $f^* (E)$.
Often the real work will come in translating a formula that works for sums of line bundles into a formula that makes sense for arbitrary bundles. In the case of the Chern Character, for
example, one has to introduce the Newton polynomials for precisely this purpose.
In algebraic topology, there are two slightly different ways of thinking about this: one bundle at a time (the more usual way) and universally (via bundles between classifying spaces). The
second approach is surprisingly simple and general, as shown in the very brief paper "A note on the splitting principle", http://www.math.uchicago.edu/~may/PAPERS/Split.pdf, #109 on my web
page. A key point is to notice that the splitting principle concerns the reduction of the structural group of a pullback of a given (complex) vector bundle from $U(n)$ to its maximal torus
$T^n$. Replacing $U(n)$ by other compact Lie groups leads to splitting principles for other kinds of bundles, e.g. symplectic and real with cohomology away from $2$. Replacing $T^n$ by a
maximal $2$-torus gives a splitting principle for real bundles and cohomology at $2$.
Time Value of Money - Six Functions of a Dollar
Using Assessors’ Handbook Section 505 (Capitalization Formulas and Tables)
Appraisal Training: Self-Paced Online Learning Session
Lesson 9: Frequency of Compounding
This lesson discusses the frequency of compounding and its effect on present and future values using the compound interest functions presented in Assessors' Handbook Section 505 (AH 505),
Capitalization Formulas and Tables. The lesson:
• Explains compounding frequency and intra-year compounding,
• Demonstrates calculation of FW$1 and PW$1 factors given monthly compounding, and
• Concludes with generalizations with respect to frequency of compounding and future and present value.
Intra-Year Compounding
Up to this point, we generally have assumed that interest was calculated at the end of each year, based on the principal balance at the beginning of the year and the annual interest rate. That is, we
have assumed that interest was compounded (or discounted) on an annual basis, and in solving problems we have used the annual compounding pages in AH 505.
Compounding interest more than once a year is called "intra-year compounding". Interest may be compounded on a semi-annual, quarterly, monthly, daily, or even continuous basis. When interest is
compounded more than once a year, this affects both future and present-value calculations.
With intra-year compounding, the periodic interest rate, instead of being the stated annual rate, becomes the stated annual rate divided by the number of compounding periods per year. The number of
periods, instead of being the number of years, becomes the number of compounding periods per year multiplied by the number of years.
As shown in the following table:

Compounding frequency   Periodic interest rate   Number of periods
Annual                  annual rate ÷ 1          years × 1
Semiannual              annual rate ÷ 2          years × 2
Quarterly               annual rate ÷ 4          years × 4
Monthly                 annual rate ÷ 12         years × 12
Daily                   annual rate ÷ 365        years × 365
With monthly compounding, for example, the stated annual interest rate is divided by 12 to find the periodic (monthly) rate, and the number of years is multiplied by 12 to determine the number of
(monthly) periods.
Calculating a FW$1 Factor Given Monthly Compounding
In lesson 2, we calculated the annual FW$1 factor at a stated annual rate of 6% for 4 years with annual compounding. The resulting factor was 1.262477.
Now let’s calculate the FW$1 for an annual rate of 6% for 4 years, but with monthly compounding. In this case, the periodic monthly rate is 0.5% (one-half of one percent per month, 6% ÷ 12), and the
number of monthly compounding periods is 48 (12 periods/year × 4 years).
In order to calculate the FW$1 factor for 4 years at an annual interest rate of 6%, with monthly compounding, use the formula below:
• FW$1 = (1 + i)^n
• FW$1 = (1 + 0.5%)^48
• FW$1 = (1 + 0.005)^48
• FW$1 = (1.005)^48
• FW$1 = 1.270489
The FW$1 factor with monthly compounding, 1.270489, is slightly greater than the factor with annual compounding, 1.262477. If we had invested $100 at an annual rate of 6% with monthly compounding we
would have ended up with $127.05 four years later; with annual compounding we would have ended up with $126.25.
AH 505 contains separate sets of compound interest factors for annual and monthly compounding. Factors for annual compounding are on the odd-numbered pages; factors for monthly compounding are on the
even-numbered pages.The FW$1 factor for 4 years at an annual interest rate of 6%, with monthly compounding, is in AH 505, page 32 (monthly page).
Calculating a PW$1 Factor Given Monthly Compounding
In lesson 3, we calculated the PW$1 factor at an annual rate of 6% for 4 years with annual compounding. The resulting factor was 0.792094.
Let’s calculate the PW$1 factor for 4 years at an annual interest rate of 6%, with monthly compounding. In this case, the periodic monthly rate is 0.5% (one-half of one percent per month, 6% ÷ 12),
and the number of monthly compounding periods is 48 (12 periods/year × 4 years).
In order to calculate the PW$1 factor for 4 years at an annual interest rate of 6%, with monthly compounding, use the formula below:
• PW$1 = 1 / (1 + i)^n
• PW$1 = 1 / (1 + 0.5%)^48
• PW$1 = 1 / (1.005)^48
• PW$1 = 0.787098
The PW$1 factor for 4 years at an annual interest rate of 6%, with monthly compounding, can be found in AH 505, page 32. The amount of the factor is 0.787098.
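These factor calculations translate directly into a few lines of code for checking the numbers above (a sketch; Python is used for illustration and is not part of AH 505):

def fw1(annual_rate, years, periods_per_year=1):
    """Future worth of $1: (1 + i)^n with the periodic rate and period count."""
    i = annual_rate / periods_per_year
    n = years * periods_per_year
    return (1 + i) ** n

def pw1(annual_rate, years, periods_per_year=1):
    """Present worth of $1: the reciprocal of FW$1."""
    return 1 / fw1(annual_rate, years, periods_per_year)

print(round(fw1(0.06, 4), 6))      # 1.262477  (annual compounding)
print(round(fw1(0.06, 4, 12), 6))  # 1.270489  (monthly compounding)
print(round(pw1(0.06, 4, 12), 6))  # 0.787098  (monthly compounding)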
The following two generalizations can be made with respect to frequency of compounding and future and present values:
• When interest is compounded more than once a year, a future value will always be higher than it would have been with annual compounding, all else being equal.
• When interest is compounded more than once a year, a present value will always be lower than it would have been with annual compounding, all else being equal.
Thus, with our examples for the FW$1 and the PW$1:
• Given FW$1, at a rate of 6%, for a term of 4 years: 1.270489 (compounded monthly) > 1.262477 (compounded annually)
• Given PW$1, at a rate of 6%, for a term of 4 years: 0.787098 (compounded monthly) < 0.792094 (compounded annually)
We would have obtained similar results with FW$1/P and PW$1/P, respectively.
Most appraisal problems involve annual payments and require the use of annual factors. Monthly factors are also useful because most mortgage loans are based on monthly payments, and it is often
necessary to make mortgage calculations as part of an appraisal problem.
For other compounding periods, the factors for which are not included in AH 505, the appraiser can calculate the desired factor from the appropriate compound interest formula. As noted, AH 505
contains factors for annual and monthly compounding only. | {"url":"http://www.boe.ca.gov/info/tvm/lesson9.html","timestamp":"2014-04-21T04:45:09Z","content_type":null,"content_length":"44248","record_id":"<urn:uuid:c42b9e1b-a3b8-4e44-b7bf-e93d33fddb57>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Co-ordinate geometry of a circle
Co-ordinate geometry of a circle resources
The geometry of a circle
In this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also
look at some problems involving tangents to circles.
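For reference, the two standard forms treated in units like this one (standard results, stated here for convenience): a circle with centre $(a, b)$ and radius $r$ has equation

$$ (x - a)^2 + (y - b)^2 = r^2, $$

which expands to the general form

$$ x^2 + y^2 + 2gx + 2fy + c = 0, $$

a circle with centre $(-g, -f)$ and radius $\sqrt{g^2 + f^2 - c}$.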
Analysis - IRR, NPV, FV and PV
Internal Rate of Return (IRR)
If you have an investment that requires and produces a number of cash flows over time, the internal rate of return is defined to be the discount rate that makes the net present value of those cash
flows equal to zero. This article discusses computing the internal rate of return on periodic payments, which might be regular payments into a portfolio or other savings program, or payments against
a loan. Both scenarios are discussed in some detail.
We’ll begin with a savings program. Assume that a sum "P" has been invested into some mutual fund or like account and that additional deposits "p" are made to the account each month for "n" months.
Assume further that investments are made at the beginning of each month, implying that interest accrues for a full "n" months on the first payment and for one month on the last payment. Given all
this data, how can we compute the future value of the account at any month? Or if we know the value, what was the rate of return?
The relevant formula that will help answer these questions is:
F = -P(1+i)^n – [p(1+i)((1+i)^n - 1)/i]
In this formula, "F" is the future value of your investment (i.e., the value after "n" months or "n" weeks or "n" years–whatever the period over which the investments are made), "P" is the present
value of your investment (i.e., the amount of money you have already invested), "p" is the payment each period, "n" is the number of periods you are interested in, and "i" is the interest rate per
period. Note that the symbol ‘^’ is used to denote exponentiation (2 ^ 3 = 8).
Very important! The values "P" and "p" should be negative. This formula and the ones below are devised to accord with the standard practice of representing cash paid out as negative and cash received
(as in the case of a loan) as positive. This may not be very intuitive, but it is a convention that seems to be employed by most financial programs and spreadsheet functions.
The formula used to compute loan payments is very similar, but as is appropriate for a loan, it assumes that all payments "p" are made at the end of each period:
F = -P(1+i)^n – [p((1+i)^n - 1)/i]
Note that this formula can also be used for investments if you need to assume that they are made at the end of each period. With respect to loans, the formula isn’t very useful in this form, but by
setting "F" to zero, the future value (one hopes) of the loan, it can be manipulated to yield some more useful information.
To find what size payments are needed to pay-off a loan of the amount "P" in "n" periods, the formula becomes this:
         -Pi(1+i)^n
p = -----------------
       (1+i)^n - 1
If you want to find the number of periods that will be required to pay-off a loan use this formula:
      log(-p) - log(-Pi - p)
n = --------------------------
            log(1+i)
Keep in mind that the "i" in all these formula is the interest rate per period. If you have been given an annual rate to work with, you can find the monthly rate by adding 1 to annual rate, taking
the 12th root of that number, and then subtracting 1. The formula is:
i = (r + 1)^(1/12) – 1
where "r" is the rate.
Conversely, if you are working with a monthly rate–or any periodic rate–you may need to compound it to obtain a number you can compare apples-to-apples with other rates. For example, a 1 year CD
paying 12% in simple interest is not as good an investment as an investment paying 1% compounded per month. If you put $1000 into each, you’ll have $1120 in the CD at the end of the year but $1000*
(1.01)^12 = $1126.82 in the other investment due to compounding. In this way, interest rates of any kind can be converted to a "simple 1-year CD equivalent" for the purposes of comparison.
You cannot manipulate these formulas to get a formula for "i," but that rate can be found using any financial calculator, spreadsheet, or program capable of calculating Internal Rate of Return or
Technically, IRR is a discount rate: the rate at which the present value of a series of investments is equal to the present value of the returns on those investments. As such, it can be found not
only for equal, periodic investments such as those considered here but for any series of investments and returns. For example, if you have made a number of irregular purchases and sales of a
particular stock, the IRR on your transactions will give you a picture of your overall rate of return. For the matter at hand, however, the important thing to remember is that since IRR involves
calculations of present value (and therefore the time-value of money), the sequence of investments and returns is significant.
Here’s an example. Let’s say you buy some shares of Wild Thing Conservative Growth Fund, then buy some more shares, sell some, have some dividends reinvested, even take a cash distribution. Here’s
how to compute the IRR.
You first have to define the sign of the cash flows. Pick positive for flows into the portfolio, and negative for flows out of the portfolio (you could pick the opposite convention, but in this
article we’ll use positive for flows in, and negative for flows out).
Remember that the only thing that counts are flows between your wallet and the portfolio. For example, dividends do NOT result in cash flow unless they are withdrawn from the portfolio. If they
remain in the portfolio, be they reinvested or allowed to sit there as free cash, they do NOT represent a flow.
There are also two special flows to define. The first flow is positive and is the value of the portfolio at the start of the period over which IRR is being computed. The last flow is negative and is
the value of the portfolio at the end of the period over which IRR is being computed.
The IRR that you compute is the rate of return per whatever time unit you are using. If you use years, you get an annualized rate. If you use (say) months, you get a monthly rate which you’ll then
have to annualize in the usual way, and so forth.
On to actually calculating it…
We first have the net present value or NPV:
NPV(C, t, d) = Sum C[i]/(1+d)^t[i]
C[i] is the i-th cash flow (C[0] is the first, C[N] is the last).
d is the assumed discount rate.
t[i] is the time between the first cash flow and the i-th. Obviously, t[0]=0 and t[N]=the length of time under consideration. Pick whatever units of time you like, but remember that IRR will end
up being rate of return per chosen time unit.
Given that definition, IRR is defined by the equation: NPV(C, t, IRR) = 0.
In other words, the IRR is the discount rate which sets the NPV of the given cash flows made at the given times to zero.
In general there is no closed-form solution for IRR. One must find it iteratively. In other words, pick a value for IRR. Plug it into the NPV calculation. See how close to zero the NPV is. Based on
that, pick a different IRR value and repeat until the NPV is as close to zero as you care.
Note that in the case of a single initial investment and no further investments made, the calculation collapses into:
(Initial Value) – (Final Value)/(1+IRR)^T = 0 or
(Initial Value)*(1+IRR)^T – (Final Value) = 0
Initial*(1+IRR)^T = Final
(1+IRR)^T = Final/Initial
And finally the quite familiar:
IRR = (Final/Initial)^(1/T) – 1
Contributed-By: Christopher Yost (cpy at world.std.com), Rich Carreiro (rlcarr at animato.arlington.ma.us)
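A minimal sketch of the iterative search described above, using bisection on the discount rate (illustrative only, not from the original article; the example cash flows are made up and the search assumes NPV changes sign exactly once on the bracket):

def npv(flows, times, d):
    """Net present value under the article's sign convention (flows in positive)."""
    return sum(c / (1 + d) ** t for c, t in zip(flows, times))

def irr(flows, times, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection for the rate d with npv(flows, times, d) = 0."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(flows, times, lo) * npv(flows, times, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Example: portfolio worth 1000 at the start, a 100 deposit after half a year,
# and a final value of 1250 (a flow out, hence negative) at one year.
print(irr([1000.0, 100.0, -1250.0], [0.0, 0.5, 1.0]))  # annualized rate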
The Time Value Of Money
NPV – Net Present Value
An approach used in capital budgeting where the present value of cash inflows is subtracted by the present value of cash outflows. NPV is used to analyze the profitability of an investment or
NPV analysis is sensitive to the reliability of future cash inflows that an investment or project will yield.
NPV compares the value of a dollar today versus the value of that same dollar in the future, after taking inflation and return into account.
If the NPV of a prospective project is positive, then it should be accepted. However, if it is negative, then the project probably should be rejected because cash flows are negative.
FV : Future Value
= Original Amount * ( 1 + Interest Rate Per Period ) ^ Number of Periods
= P * (1+i)^n
PV : Present Value (of a Future Payment)
= Original Amount * ( 1 + Interest Rate Per Period ) ^ – Number of Periods
= P * (1+i)^(-n)
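As a quick illustration (function names are invented for this sketch, not from the article):

def fv(p, i, n):  # future value of a single amount
    return p * (1 + i) ** n

def pv(f, i, n):  # present value of a future payment
    return f * (1 + i) ** (-n)

assert abs(pv(fv(100, 0.01, 12), 0.01, 12) - 100) < 1e-9  # round trip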
MathFiction: The Maxwell Equations (Anatoly Dnieprov)
a list compiled by Alex Kasman (College of Charleston)
The Maxwell Equations (1969)
Anatoly Dnieprov
The math in this story seems very real, though the specifics of it are inconsequential to the plot. A mathematical physicist in an isolated city needs help finding a solution to a linearized version
of Maxwell's equations, and he finds an offer for such help in a surprising ad in a newspaper. Going to the address listed in the paper, he finds that he is at the local insane asylum. Although this
worries him, he is very pleased with the results: he shortly receives a package containing an absolutely brilliant, handwritten solution to the problem he posed. His theory that one of the residents
at the asylum happens to be a great mathematician is shattered when he submits another question which is similarly answered, but in another person's handwriting. Eventually, he finds the truth, that
a Nazi war criminal is working in the asylum on a method for taking ordinary people and making them into brilliant mathematicians (for the rest of their shortened lives) through electro-magnetic stimulation.
Originally published in Russian Science Fiction (edited by Robert Magidoff, 1969), this story was also reprinted in Mathenauts.
Works Similar to The Maxwell Equations
According to my `secret formula', the following works of mathematical fiction are similar to this one:
1. What Dead Men Tell by Theodore Sturgeon
2. The Pre-Persons by Philip K. Dick
3. The Non-Statistical Man by Raymond F. Jones
4. Habitus by James Flint
5. The Library of Babel by Jorge Luis Borges
6. The Feeling of Power by Isaac Asimov
7. The Mathenauts by Norman Kagan
8. The Extraordinary Hotel or the Thousand and First Journey of Ion the Quiet by Stanislaw Lem
9. Ms Fnd in a Lbry by Hal Draper
10. Left or Right by Martin Gardner
Ratings for The Maxwell Equations:
Mathematical Content: 3/5 (5 votes)
Literary Quality: 4.6/5 (5 votes)
Genre: Science Fiction
Motif: Genius
Topic: Analysis/Calculus/Differential
Medium: Short Stories
(Maintained by Alex Kasman, College of Charleston)
Horizontal Circular
It is not often that one finds the need to twirl a mass at the end of a string above one's head in a horizontal plane. For that
reason, this animation will examine a couple of "real-life" applications. Everything discussed in the
Introduction Of Circular Motion still applies but the factors of friction and weight must be considered.
The force diagram in the animation consists of three forces. F[N ]is the normal force which is the perpendicular force that
pushes the two surfaces together, F[W ]needs no introduction, and F[cp] is the centripetal force. All the forces are expressed
in newtons and F[cp] is perpendicular to both F[W] and F[N]. The force diagram is best visualized as an astute observer such
as the one in the animation who is not the least bit animated.
Important notes to keep in mind when using the simulator:
If there is no friction in the problem, the input, μ, must be left blank. The last input which is unlabeled serves two purposes:
(i) The options for this input are H (horizontal) and V (vertical). The H and the V refer to the direction of the
frictional force. For example in Problem 1, the F[f] is horizontal so you can let the field default to H. When doing a
problem similar to Problem 2, the F[f] is vertical, so you must enter a V.
(ii) When doing a problem similar to Problem 3 pertaining to banked curves, you must enter a -1 for the μ and the
last unlabeled field will be used to display the angle.
(iii) When determining μ for a flat road, you must enter a 0 for μ and that field will display the computed μ.
The simulator accepts simple factors for entries. For example, if a problem gives the weight of an object (685 N) rather
than its mass, simply enter 685/9.80 for its mass.
Unlike some of the other animations, you must enter your inputs before pressing Play.
1) A 2.0 kg box is placed at the edge of a merry-go-round of radius of 6.0 m. The coefficient of friction between the box
and the merry-go-round is 0.30.
(a) Draw a free-body diagram of the box.
(b) Determine the speed at which the box slides off the edge.
(c) How would your answer to (b) change if the mass of the box was doubled? Support your answer.
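As a check on part (b), here is a sketch in Python (not part of the original page): at the point of slipping, the maximum static friction supplies the entire centripetal force, so μmg = mv²/r and v = √(μgr). The mass drops out, which also settles part (c).

import math

mu, g, r = 0.30, 9.8, 6.0       # givens for Problem 1
v = math.sqrt(mu * g * r)       # friction alone supplies the centripetal force
print(round(v, 1))              # 4.2 m/s; the 2.0 kg mass never enters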
2) The Rotor-Ride, a ride found at many amusement parks, consists of a hollow cylindrical room approximately 2.5 m in radius. After the room rotates at a certain speed, the floor drops away. A rider is left stationary because they are supported by static friction. Assume the rider has a mass of 75 kg and the coefficient of friction is 0.35.
Calculate the:
(a) speed required to prevent the rider from slipping down the wall
(b) frequency of rotation corresponding to this speed
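For reference, the relations behind Problem 2 (a sketch, not from the page): the wall's normal force is the centripetal force and static friction must balance the weight, so

\[ F_N = \frac{m v^2}{r}, \qquad \mu F_N \ge m g \;\Rightarrow\; v \ge \sqrt{\frac{g r}{\mu}} = \sqrt{\frac{(9.8)(2.5)}{0.35}} \approx 8.4\ \text{m/s}, \qquad f = \frac{v}{2\pi r} \approx 0.53\ \text{Hz}. \]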
3) A car with a mass of 850. kg traveling at 18 m/s approaches a curve with a radius of 75.0 m.
(a) Determine the banking angle such that no friction is required between the car's tires and the banked curve.
(b) Does this angle apply for all cars? Justify your answer.
(c) Does the banking angle increase or decrease with an increase of speed? Justify your answer.
(d) Does the banking angle increase or decrease with a decrease of radius? Justify your answer.
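A standard derivation for Problem 3(a), added for reference: on a frictionless banked curve the horizontal component of the normal force is centripetal,

\[ N\sin\theta = \frac{m v^2}{r}, \quad N\cos\theta = m g \;\Rightarrow\; \tan\theta = \frac{v^2}{g r} = \frac{(18)^2}{(9.8)(75.0)} \approx 0.44, \quad \theta \approx 24^\circ. \]

The mass cancels, which bears directly on part (b), and the dependence of tan θ on v² and on 1/r settles parts (c) and (d).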
4) A 1.54 kg mass is attached to a 0.60 m rope and is swung in a horizontal circle. The string makes an angle of 12.0°
with the vertical and the speed remains constant. Determine the:
(a) tension in the string
(b) speed of the mass | {"url":"http://mmsphyschem.com/horCM.htm","timestamp":"2014-04-16T21:51:38Z","content_type":null,"content_length":"21923","record_id":"<urn:uuid:5d691f67-88b8-49c3-bc0e-71a27595b9b1>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
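For Problem 4 (a conical pendulum), resolving the tension gives, as a sketch for checking answers:

\[ T\cos\theta = m g \;\Rightarrow\; T = \frac{(1.54)(9.80)}{\cos 12.0^\circ} \approx 15.4\ \text{N}, \qquad T\sin\theta = \frac{m v^2}{r},\ \ r = L\sin\theta \;\Rightarrow\; v = \sqrt{g\,L\sin\theta\,\tan\theta} \approx 0.51\ \text{m/s}. \]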
Kind Inference
Brief Explanation
Haskell 98 lacks kind polymorphism, and performs kind inference over dependency groups with polymorphic kinds defaulted to *. The Report gives the following example of illegal declarations:
data Tree a = Leaf | Fork (Tree a) (Tree a)
type TreeList = Tree [] -- illegal
(because Tree has been assigned the kind * -> *).
A more meaningful example is the following attempt to compose monad transformers:
newtype StateT s m a = S (s -> m (a,s))
newtype ReaderT r m a = R (r -> m a)
newtype Compose f g m a = C (f (g m) a)
type SR s r = Compose (StateT s) (ReaderT r) -- illegal
GHC users have been able to work around this by adding KindAnnotations to the above definition of Compose.
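For concreteness, a sketch of such a workaround (my own rendering, using GHC's KindSignatures extension, not quoted from the Report): annotating g pins down its kind, after which defaulting gives f and m the intended kinds and the composition type-checks.

newtype Compose f (g :: (* -> *) -> * -> *) m a = C (f (g m) a)
type SR s r = Compose (StateT s) (ReaderT r) -- now legal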
more liberal kind inference
Listed in order of decreasing need for KindAnnotations (even if you think those are useful in their own right):
GHC 6.4
Kind inference is performed across all the data, newtype, type and class declarations of a module before defaulting, and so the above examples are accepted.
Monomorphic kinds, using all available information
Kind inference is performed across the whole module, using all occurrences of type constructors and classes, i.e. the above sources plus instance declarations, type signature declarations and
expression type signatures.
Polymorphic kinds
Type constructors and classes have polymorphic kinds, inferred over dependency groups. KindAnnotations would never be required. | {"url":"https://ghc.haskell.org/trac/haskell-prime/wiki/KindInference?version=4","timestamp":"2014-04-19T22:30:37Z","content_type":null,"content_length":"10538","record_id":"<urn:uuid:889e4043-26af-4e79-94ec-06ca4f1dacc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
What makes Langlands for n=2 easier than Langlands for n>2?
I must confess a priori that I haven't read the proof of Taniyama-Shimura, and that my familiarity with Langlands is at best tangential.
As I understand it Langlands for $n=1$ is class field theory. Not an easy theory, but one that was known for a long time.
Langlands for $n=2$ is the Taniyama-Shimura conjecture, proven recently by Andrew Wiles and others (some of whom participate in this forum).
Clearly Taniyama-Shimura required new ideas. What special property of the $n=2$ case made the proof of Taniyama-Shimura possible, that doesn't exist for Langlands with $n\geq 3$?
nt.number-theory langlands-conjectures
One reason is that the automorphic forms relevant to Galois representations for $n = 2$ are much easier to understand than the corresponding automorphic forms (conjecturally) associated to Galois
representations when $n = 3$. See, for example, this question: mathoverflow.net/questions/43377/automorphic-forms-on-gl3 – Michael Sep 4 '11 at 2:15
Dear Nicole, You should probably register, and ask the moderators (at the appropriate place on meta) to consolidate your various accounts. This will give you a unified profile for your questions
and answers on MO. Regards, Matthew – Emerton Sep 4 '11 at 4:22
One could also quibble a little with the premise of the question. It's taken almost 100 years (more or less) to go from the case $n = 1$ to the case $n = 2$, and the latter still has some
unresolved problems which seem very hard. It's not impossible that the even Artin conjecture for $n = 2$ requires tools powerful enough to do everything. – Michael Sep 4 '11 at 18:02
$SL(2) \simeq SO(2,1) \simeq Sp(2)$ (isogenies) – David Hansen Sep 4 '11 at 22:50
3 Answers
"Langlands for $n = 2$", to the extent that such a notion is defined, is more than just Shimura--Taniyama, and for even Galois representations/Maass forms, it is still very much open. (See
here for more on this.) For odd Galois representations of dimension $2$, though, it is completely (or almost completely, depending on exactly what you mean by "Langlands") resolved at this
point, with the proof of Serre's conjecture (by Khare, Wintenberger, and Kisin) playing a pivotal role.
Much is known for $n > 2$ (see the web-pages of e.g. Michael Harris, Richard Taylor, and Toby Gee). A key point is that it is hard to say anything outside the essentially self-dual case
(and this is a condition which is automatic for $n = 2$). A second is that Serre's conjecture is not known in general.
If one restricts to the regular (corresponding to weight $k \geq 2$ when $n = 2$), essentially self-dual case (automatic when $n = 2$), then basically everything for $n = 2$ carries over to
$n > 2$, with the exception of Serre's conjecture. (See e.g. the recent preprint of Barnet-Lamb--Gee--Geraghty--Taylor.)
So really, what is special for $n = 2$ is that Serre's conjecture was able to be resolved. And the reason that this has (so far) been possible only for $n = 2$ is that the proof depends on certain special facts about $2$-dimensional Galois representations.
More specifically:
In the particular case of Shimura--Taniyama, the Langlands--Tunnell theorem allowed Wiles to resolve a particular case of Serre's conjecture (for $p = 3$). To then get all the necessary
cases of Serre's conjecture, Wiles introduced the $3$-$5$ switch.
The general proof of Serre's conjecture uses a massive generalization of the $3$-$5$ switch (along with many other techniques), and although (unlike with Wiles's argument) it doesn't build
specifically on Langlands--Tunnell, it does build on a result of Tate which is a special fact about $2$-dimensional representations of $G_{\mathbb Q}$ over a finite field of characteristic $2$.
what is the $3-5$ switch? Is there a layman's exposition? Can we do $p-q$ switch where $p$ and $q$ are primes? Is there a easy reading on this? – J.A Aug 10 '13 at 23:25
@Arul: Dear Arul, You can find a discussion in the introduction to Wiles's 1994 Annals paper (the one on FLT), and a discussion of similar such prime switches in the Pacific Journal
paper of Richard Taylor from around the same time. Basically, it is a method in which one has a mod $p$ Galois representation for some particular prime $p$, and one finds a compatible
family of $\ell$-adic representations such that the given representation is the mod $p$ reduction of the $\ell = p$ member of the family, and one then argues at a different prime $q$ to
deduce properties (e.g. modularity) of ... – Emerton Aug 11 '13 at 0:22
... the entire compatible family, and so in particular of the original mod $p$ representation. Regards, – Emerton Aug 11 '13 at 0:22
Most (if not all) results in the Langlands program use, at some point, the cohomology of Shimura varieties. The linear group gives rise to Shimura varieties when $n=2$ (modular curves), but not for $n>2$. However, unitary groups furnish Shimura varieties in any dimension, and this makes it possible to obtain the automorphy of self-dual Galois representations.
One could (and sometimes would) argue the opposite of your claim/question. Namely, that "Langlands for $n = 2$" is more difficult than "Langlands for $n > 2$". Or that more specifically
(since I really can't agree with my previous sentence in full generality), "Langlands for $n = 2$" is more difficult than "Langlands for $n > 2$ an odd prime". Here are two key reasons :
1) Suppose $F$ is a $p$-adic field, $n$ is prime, and $p$ doesn't divide $n$. Then the part of the local Langlands correspondence of $GL(n,F)$ dealing with supercuspidal representations
(proven by Bushnell/Henniart) is given by $$Ind_{W_E}^{W_F}(\chi) \mapsto \pi(\chi \Delta_{\chi})$$ where $W_F$ is the Weil group of $F$, $E/F$ is a degree $n$ separable extension, $\chi :
W_E \rightarrow \mathbb{C}^*$ is a certain type of character ("admissible" to be precise, see Bushnell/Henniart papers/book), $\Delta_{\chi} : W_E \rightarrow \mathbb{C}^*$ is a "twisting
character" associated to $\chi$ (again see Bushnell/Henniart), and $\pi(\chi \Delta_{\chi})$ is the supercuspidal representation of $GL(n,F)$ attached to $\chi \Delta_{\chi}$ via the "Howe construction". If $n$ is an odd prime, then $\Delta_{\chi}$ is either trivial or the unramified quadratic character of $W_E^{ab} \cong E^*$ (note that any character of $W_E$ factors through the abelianization $W_E^{ab}$). If $n = 2$, then $\Delta_{\chi}$ is much more complicated (see page 217 of Bushnell/Henniart's recent book on $GL(2)$).
2) Let $\pi$ be a supercuspidal representation of $GL(n,F)$ (same assumptions as in the beginning of 1) above). If $n$ is an odd prime, then the distribution character $\theta_{\pi}$ of $\pi$
is much simpler to write down on elliptic tori than if $n = 2$. In particular, if $n = 2$, a non-trivial Gauss sum arises in $\theta_{\pi}$, whereas no such term arises if $n$ is an odd
prime. So the theory for $GL(2,F)$ contains some nontrivial arithmetic information that just doesn't arise for $GL(n,F)$, $n$ an odd prime.
Our users:
My son has struggled with math the entire time he has been in school. Algebrator's simple step by step solutions made him enjoy learning. Thank you!
Waylon Summerland, TX.
WOW!!! This is AWESOME!! Love the updated version, it makes things go SMOOTH!!!
Diane Flemming, NV
Your new release is so much more intuitive! You team was quick to respond. Great job! I will recommend the new version to other students.
G.O., Kansas
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2011-01-07:
• c++ code to generate polynomial graph
• polynomial samples
• printable prime numbers worksheets
• draw a picture prentice hall math
• 8th grade algebra math problems printable worksheets
• purple math calculator simplify (-32x^4y^3)/(4x^-5y^8)
• squares and square roots, cubes and cube roots multiple choice question
• how to factor trinomials on Ti-92
• freestarfallwork
• online partial fraction calculator
• simplify cube root worksheet
• calculator that divides polynomials
• algebra math software
• nth term applications in real world
• 6th grade Math solver
• solve for x samples
• simplifying variable exponential expressions
• Algebra Elimination Calculator
• Quadratic Equations.ppt
• a<0 b<0 then ab<0 natural number
• free math games f0r 10th grade
• algebra 2 book online summation
• balance chemical equation representing electron affinity
• free 8th grade worksheets
• free algebra solver scientific notation
• modular arithmetic examples
• answers to aventa learning
• mixed number fraction to decimal calculator
• fraction line
• wronskian calculator online
• convert to decimal notation 8.84*10^7
• convertion ques/ans
• Free ebook download pdf books on simple interest
• 7th grade math charts
• free MCQ worksheets for 9th grade math
• work sheet for dumies in math
• multiplying and dividing integers worksheet
• algebra help programs
• holt algebra 2 linear programming ansers
• exponents with multiple expressions ti 89
• mathematica calculus solver intermediate steps
• dividing unlike terms
• math practice on intersection, complement, and union testing
• "Prentice Hall Algebra 1" First Semester Study Guide
• latest printable math trivia
• dividing whole numbers by decimals free worksheets
• multiplying and dividing integers worksheets
• writing equations of a line worksheets
• sample puzzles and games in multiplying polynomials
• algebrator.com
• printable 9th grade algebra
• kuta software inequalities
• basic of financial statement/ ratio analysis _exercise_answer
• area of a circle
• wwyear 5 optional sats papers
• convert mized radicals to entire radicals
• fractions on a number line
• free 9th grade math worksheets
• locate Inx key on TI 83 clculator
• solution books for abstract algebra pdf
• free math problem solvers
• furmula of multiplication and division
• Math LCM +Practical examples
• Fitting Linear Relationships: A History of the Calculus pdf free downlod
• Division of non-like integers examples
• MATHS FOR 7TH STANDARD
• softmath algebrator for free
• how to use +logarthim in scientific calculator
• algebra gender tree diagram
• 9th Grade Algebra 1 Worksheets Free
• year 5 printible optional sats maths paper
• nth power calculator
• Highest Common Factor real life examples
• percentage circle graph
• primery 3 worksheets printout
• college algebra tutor software
• Free 8Th Grade Math Worksheets
• solve simultaneous equations using identities
• top algebra software
• Permutation Combination Problems Practice
• "tensor for dummies" free
• square root calculator java
• less common multiple for expressions
• adding subtracting exponents handouts | {"url":"http://www.mhsmath.com/math-problem-solving/function-domain/test-of-genius-math-questions.html","timestamp":"2014-04-17T03:48:39Z","content_type":null,"content_length":"19584","record_id":"<urn:uuid:257b8fd8-3b4e-4857-8843-7be8c2080c8f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Group Axioms G4 Associative
Could anyone help!
I need to prove G4 associativity for the following group and I know it exists
(Q*, o) where x o y = xy/7.
This group is closed, so G1 holds.
This group has an identity, e = 7, so G2 holds.
This group has inverses: the inverse of x is 49/x, so G3 holds.
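For reference, G4 follows in one line from the ordinary associativity and commutativity of multiplication in $\mathbb{Q}^*$:

$(x \circ y) \circ z = \dfrac{(xy/7)\,z}{7} = \dfrac{xyz}{49} = \dfrac{x\,(yz/7)}{7} = x \circ (y \circ z)$.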
Fall River, MA Math Tutor
Find a Fall River, MA Math Tutor
...I tell my students that education is not about right and wrong, it is about finding solutions to problems, checking for accuracy and correcting for inaccuracy. I also tell them that success
does not depend on being an honors student, but on a strong work ethic and perseverance. One of the most ...
30 Subjects: including prealgebra, geometry, reading, English
...Additionally, I have teaching experience at Syracuse University (Computer Science), Lemoyne College (Management Information Systems), and at RPI (Graduate Level Systems Engineering). While at
G.E. I also taught an entry level course in MATLAB. While the specifics of most of the systems that I ...
22 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I have been teaching full-time for 9 years with an additional 5 years substitute teaching. This experience covers grades K-8. After my 1-6 elementary education license, I continued my
education to receive a Master's degree in Special Education (K-12) and a CAGS Reading Specialist degree (all levels...
14 Subjects: including prealgebra, algebra 1, discrete math, reading
...Some very capable people do not learn well in a classroom alone. Understanding, patient one on one tutoring can break down the barriers to success and build skills that last long after the
tutoring sessions. I am an experienced, licensed High School math teacher working in the classroom every day.
17 Subjects: including statistics, computer programming, Microsoft Outlook, probability
...I also did home-based therapy for students with autism part-time for 5 years. I am an ELA resource teacher. At the elementary level, one of our main focuses is helping students with their ...
20 Subjects: including geometry, reading, English, writing
Infinite Series Help
May 9th 2010, 02:51 PM #1
Mar 2010
Hello All,
I'm trying to figure out if I'm doing this problem correctly.
I've rewritten the top as the cubed root of 1 and am using the root test. Am I on the right track?
My answer is 1 / (n^2 + 1) and that converges.
Last edited by mr fantastic; May 9th 2010 at 05:19 PM. Reason: Merged posts.
May 9th 2010, 03:16 PM #2
[The reply in this post was not captured in this extract.]
May 9th 2010, 03:19 PM #3
Mar 2010
Thank you. I was going about it the more difficult way.
Consecutive Seven
Why do this problem?
This problem has several different solutions. The problem can be solved using an experimental / trial-and-error approach, but some consideration of the structure can lead to more efficient solution techniques. There will be no need for students to feel 'stuck' on this problem: they will always be able to experiment with new combinations.
Possible approach
Students could all write down the numbers $0$ to $20$. One student could be asked to select a first triple and everyone writes that down. All students search for a second triple, whose sum is one
more than the first's sum. One such triple is chosen, and everyone writes it down and starts to search for the next - until the task of finding triples whose sums are consecutive is fully understood
by the group, at which point, they can work alone or in pairs to find a solution.
With the whole group, ask students to describe what problems occur and how they are dealing with them. Ask them to share any observations, or inspirations they have had. Check that the points in the
key questions have been covered in the students comments.
Students who wish to continue to work experimentally could be encouraged to devise a clear recording system for the combinations they are trying. For example, starting with the 20 and 19, what are
the possibilities for the other two cards. Students who want to work analytically may choose to use algebra to determine the smallest consecutive number.
Key questions
• Why have you run out of possibilities? Can you change anything to avoid that problem?
• What are the biggest and smallest sums we could get from a triple?
• Can you work out what the seven consecutive numbers will have to add up to? (A short worked note follows this list.)
• Can you select your triples in a logical/symmetrical way?
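For reference, that key question yields to a line of algebra: the cards $0$ to $20$ sum to $0 + 1 + \cdots + 20 = 210$, and seven consecutive totals $s, s+1, \ldots, s+6$ sum to $7s + 21$, so $7s + 21 = 210$ gives $s = 27$. The seven sums must therefore be $27, 28, \ldots, 33$.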
Possible extension
• How many different solutions do you think that there might be? Can you work out how many might be possible?
• If the numbers $0$ to $20$ were changed to different sets of $20$ numbers, would solutions still be possible? (specifically designed sets e.g. $\{10, 11, .., 30\}$ or $\{1000, 1001, ...1020\}$ or
$\{0, n, 2n, 3n, 4n, ..., 20n\}$, or totally random sets)
Possible support
It might be helpful to provide students with cards labelled $0$ to $20$ to allow them to make their arrangements. You could also provide calculators so that students can focus on the structure of the
problem, rather than getting stuck with the additions.
Alternative questions include:
Can you find seven pairs of numbers which add up to consecutive numbers?
Can you find seven sets of three cards which add up to the SAME number?
Students could write such questions for each other, working in pairs to ensure that these are phrased as accurately as possible. | {"url":"http://nrich.maths.org/2661/note?nomenu=1","timestamp":"2014-04-20T23:38:10Z","content_type":null,"content_length":"6173","record_id":"<urn:uuid:ddb05a90-48c6-4329-b6ba-1fe5ebd3e33e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rectangle rotation
August 18th 2008, 11:35 AM #1
Aug 2008
Rectangle rotation
Heya all,
I've got a problem where I know the four corners of a rectangle that is rotated about its center by an unknown angle. I need to know the angle of rotation, and also the width and height of the
rectangle if it was not rotated at all.
For the life of me, I can't see how I can determine that information with just the four corners of the rectangle. Any help would be greatly appreciated.
August 26th 2008, 01:31 PM #2
MHF Contributor
Apr 2005
I did not see this before.
Post the coordinates of the four corners so that we can play on your question.
August 26th 2008, 04:30 PM #3
Rotating about the origin by angle $\theta$ counterclockwise is the same as multiplying by the matrix $\begin{pmatrix}\cos\theta & -\sin\theta\\\sin\theta & \cos\theta\end{pmatrix}$.
So if the centre of your rectangle is the origin, and one corner of the rectangle has co-ordinates $(a,b)$, its co-ordinates after the rotation would be $(a\cos\theta-b\sin\theta,a\sin\theta+b\cos\theta)$.
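Building on the rotation-matrix idea in post #3, here is a small sketch in Python (mine, not from the thread). It assumes the four corners are supplied in order around the rectangle; the returned angle is the orientation of the first edge, which determines the rectangle's rotation up to its 90° symmetry.

import math

def rect_params(corners):
    # corners: four (x, y) vertices listed in order around the rectangle
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    (x0, y0), (x1, y1), (x2, y2) = corners[0], corners[1], corners[2]
    width = math.hypot(x1 - x0, y1 - y0)    # length of the first edge
    height = math.hypot(x2 - x1, y2 - y1)   # length of the adjacent edge
    angle = math.atan2(y1 - y0, x1 - x0)    # orientation of the first edge, radians
    return (cx, cy), width, height, angle

# sanity check with an unrotated 4 x 2 rectangle:
# rect_params([(0, 0), (4, 0), (4, 2), (0, 2)]) -> ((2.0, 1.0), 4.0, 2.0, 0.0)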
The interpretation of stereo-disparity information: the computation of surface orientation and depth
- Computer Vision, Graphics, and Image Processing, 1990
"... Obtaining exact depth from binocular disparities is hard if camera calibration is needed. We will show that qualitative information can be obtained from stereo disparities with little
computation, and without prior knowledge (or computation) of camera parameters. First, we derive two expressions tha ..."
Cited by 16 (2 self)
Add to MetaCart
Obtaining exact depth from binocular disparities is hard if camera calibration is needed. We will show that qualitative information can be obtained from stereo disparities with little computation,
and without prior knowledge (or computation) of camera parameters. First, we derive two expressions that order all matched points in the images by depth in two distinct ways from image coordinates
only. Using one for tilt estimation and point separation (in depth) demonstrates some anomalies observed in psychophysical experiments, most notably the "induced size effect". We apply the same
approach to detect qualitative changes in the curvature of a contour on the surface of an object, with either x- or y-coordinate fixed. Second, we develop an algorithm to compute axes of
zero-curvature from disparities alone. The algorithm is shown to be quite robust against violations of its basic assumptions for synthetic data with relatively large controlled deviations. It
performs almost as well on real i...
, 1992
"... An efficient algorithm for the estimation of the 2-d disparity between a pair of stereo images is presented. Phase based methods are extended to the case of 2-d disparities and shown to
correspond to computing local correlation fields. These are derived at multiple scales via the frequency domain an ..."
Cited by 16 (9 self)
An efficient algorithm for the estimation of the 2-d disparity between a pair of stereo images is presented. Phase based methods are extended to the case of 2-d disparities and shown to correspond to
computing local correlation fields. These are derived at multiple scales via the frequency domain and a coarse-to-fine `focusing' strategy determines the final disparity estimate. Fast implementation
is achieved by using a generalised form of wavelet transform, the multiresolution Fourier transform (MFT), which enables efficient calculation of the local correlations. Results from initial
experiments on random noise stereo pairs containing both 1-d and 2-d disparities, illustrate the potential of the approach. 1 Introduction Estimating the disparity between a pair of binocular images
in order to determine depth information from a scene has received considerable attention for many years. Essentially a problem of finding corresponding points in the two views of the scene, the
complexity of t...
"... The geometry of binocular projection is analyzed in relation to the primate visual system. An oculomotor parameterization that includes the classical vergence and version angles is defined. It
is shown that the epipolar geometry of the system is constrained by binocular coordination of the eyes. A l ..."
Cited by 14 (9 self)
The geometry of binocular projection is analyzed in relation to the primate visual system. An oculomotor parameterization that includes the classical vergence and version angles is defined. It is
shown that the epipolar geometry of the system is constrained by binocular coordination of the eyes. A local model of the scene is adopted in which depth is measured relative to a plane containing
the fixation point. These constructions lead to an explicit parameterization of the binocular disparity field involving the gaze angles as well as the scene structure. The representation of visual
direction and depth is discussed with reference to the relevant psychophysical and neurophysiological literature. © 2008 Optical Society of America OCIS codes: 330.1400, 330.2210. 1.
, 2009
"... The geometry of binocular projection is analyzed in relation to the primate visual system. An oculomotor parameterization, which includes the classical vergence and version angles, is defined.
It is shown that the epipolar geometry of the system is constrained by binocular coordination of the eyes. ..."
The geometry of binocular projection is analyzed in relation to the primate visual system. An oculomotor parameterization, which includes the classical vergence and version angles, is defined. It is
shown that the epipolar geometry of the system is constrained by binocular coordination of the eyes. A local model of the scene is adopted, in which depth is measured relative to a plane containing
the fixation point. These constructions lead to an explicit parameterization of the binocular disparity field, involving the gaze angles as well as the scene structure. The representation of visual
direction and depth is discussed, with reference to the relevant psychophysical and neurophysiological literature. 1
"... Physiologically based models of binocular depth perception ning qian and yongjie li We perceive the world as three-dimensional. The inputs to our visual system, however, are only a pair of
two-dimensional projections on the two retinal surfaces. As emphasized by Marr and Poggio (1976), it is general ..."
Physiologically based models of binocular depth perception ning qian and yongjie li We perceive the world as three-dimensional. The inputs to our visual system, however, are only a pair of
two-dimensional projections on the two retinal surfaces. As emphasized by Marr and Poggio (1976), it is generally impossible to uniquely determine the three-dimensional world from its two-dimensional
, 1998
"... The slant of a stereoscopically defined surface cannot be determined solely from horizontal disparities or from derived quantities such as horizontal size ratio (HSR). There are four other
signals that, in combination with horizontal disparity, could in principle allow an unambiguous estimate of sla ..."
The slant of a stereoscopically defined surface cannot be determined solely from horizontal disparities or from derived quantities such as horizontal size ratio (HSR). There are four other signals
that, in combination with horizontal disparity, could in principle allow an unambiguous estimate of slant: the vergence and version of the eyes, the vertical size ratio (VSR), and the horizontal
gradient of VSR. Another useful signal is provided by perspective slant cues. The determination of perceived slant can be modeled as a weighted combination of three estimates based on those signals:
a perspective estimate, a stereoscopic estimate based on HSR and VSR, and a stereoscopic estimate based on HSR and sensed eye position. In a series of experiments, we examined human observers ’ use
of the two stereoscopic means of estimation. Perspective cues were rendered uninformative. We found that VSR and sensed eye position are both used to interpret the measured horizontal disparities.
When the two are placed in conflict, the visual system usually gives more weight to VSR. However, when VSR is made difficult to measure by using short stimuli or stimuli composed of vertical lines,
the visual system relies on sensed eye position. A model in which the observer’s slant estimate is a weighted average of the slant estimate based on HSR and VSR and the one based on HSR and eye
position accounted well for the data. The weights varied across viewing conditions because the informativeness of the signals they employ vary from one | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1994899","timestamp":"2014-04-18T07:21:22Z","content_type":null,"content_length":"28249","record_id":"<urn:uuid:41845aef-58eb-4e78-9969-4410c9b25914>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Atmospheric Chemistry and Physics (Atmos. Chem. Phys.), ISSN 1680-7324, Copernicus GmbH, Göttingen, Germany
doi:10.5194/acp-10-6749-2010; Atmos. Chem. Phys. 10 (14), 6749–6763; published 23 July 2010
Probabilistic description of ice-supersaturated layers in low resolution profiles of relative humidity
N. C. Dickson^1, K. M. Gierens^2, H. L. Rogers^1, and R. L. Jones^1
^1 Centre for Atmospheric Science, Department of Chemistry, University of Cambridge, Cambridge, UK
^2 Deutsches Zentrum für Luft- und Raumfahrt, Institut für Physik der Atmosphäre, Oberpfaffenhofen, Germany
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This article is available from http://www.atmos-chem-phys.net/10/6749/2010/acp-10-6749-2010.html and the full text is available as a PDF file from http://www.atmos-chem-phys.net/10/6749/2010/acp-10-6749-2010.pdf
The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensation trails (contrails) is to
be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. Given their small scales compared to typical atmospheric model grid sizes,
statistical representations of the spatial scales of ISSR are required, in both horizontal and vertical dimensions, if global occurrence of ISSR is to be adequately represented in climate models.
This paper uses radiosonde launches made by the UK Meteorological Office, from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006, to
investigate the probabilistic occurrence of ISSR. Each radiosonde profile is divided into 50- and 100-hPa pressure layers, to emulate the coarse vertical resolution of some atmospheric models. Then
the high resolution observations contained within each thick pressure layer are used to calculate an average relative humidity and an ISS fraction for each individual thick pressure layer. These
relative humidity pressure layer descriptions are then linked through a probability function to produce an s-shaped curve which empirically describes the ISS fraction in any average relative humidity
pressure layer. Using this empirical understanding of the s-shaped relationship a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Two models
were developed to represent both 50- and 100-hPa pressure layers, with each reconstructing its respective s-shape within 8–10% of the empirical curves. These new models can be used to represent the small-scale structure of ISS events in modelled data where only low vertical resolution is available. This will be useful in understanding, and in improving the global distribution of, both observed and forecast ice super-saturation.
Sample size and power analysis in medical research Zodpey SP - Indian J Dermatol Venereol Leprol
Year : 2004 | Volume : 70 | Issue : 2 | Page : 123-128
Sample size and power analysis in medical research
Zodpey SP
Clinical Epidemiology Unit, Department of Preventive and Social Medicine, Government Medical College, Nagpur, Maharashtra
Correspondence Address:
A/303, Amar Enclave, Prashant Nagar, Ajni, Nagpur - 440015
Among the questions that a researcher should ask when planning a study is "How large a sample do I need?" If the sample size is too small, even a well conducted study may fail to answer its research
question, may fail to detect important effects or associations, or may estimate those effects or associations too imprecisely. Similarly, if the sample size is too large, the study will be more
difficult and costly, and may even lead to a loss in accuracy. Hence, optimum sample size is an essential component of any research. When the estimated sample size can not be included in a study,
post-hoc power analysis should be carried out. Approaches for estimating sample size and performing power analysis depend primarily on the study design and the main outcome measure of the study.
There are distinct approaches for calculating sample size for different study designs and different outcome measures. Additionally, there are also different procedures for calculating sample size for
two approaches of drawing statistical inference from the study results, i.e. confidence interval approach and test of significance approach. This article describes some commonly used terms, which
need to be specified for a formal sample size calculation. Examples for four procedures (use of formulae, readymade tables, nomograms, and computer software), which are conventionally used for
calculating sample size, are also given.
How to cite this article:
Zodpey SP. Sample size and power analysis in medical research. Indian J Dermatol Venereol Leprol 2004;70:123-8
How to cite this URL:
Zodpey SP. Sample size and power analysis in medical research. Indian J Dermatol Venereol Leprol [serial online] 2004 [cited 2014 Apr 17];70:123-8. Available from: http://www.ijdvl.com/text.asp?2004/
Medical researchers primarily consult bio-statisticians for two reasons. Firstly, they want to know how many subjects should be included in their study (sample size) and how these subjects should be
selected (sampling methods). Secondly, they desire to attribute a p value to their results to claim significance of results. Both these bio-statistical issues are interrelated. If a study does not
have an optimum sample size, the significance of the results in reality (true differences) may not be detected. This implies that the study would lack power to detect the significance of differences
because of inadequate sample size.[1] Whatever outstanding results the study produces, if the sample size is inadequate their validity would be questioned.
If the sample size is too small (less than the optimum sample size), even the most rigorously executed study may fail to answer its research question, may fail to detect important effects or
associations, or may estimate those effects or associations too imprecisely. Similarly, if the sample size is too large (more than the optimum size), the study will be more difficult and costly, and
may even lead to a loss in accuracy, as it is often difficult to maintain high data quality. Hence, it is necessary to estimate the optimum sample size for each individual study.[1] For these
reasons, in recent years, medical literature has focused increasing attention on sample size requirements in medical research[2] and peer reviewed journals seriously look for the appropriateness of
sample size in their manuscript review process.
Basically, the issue of sample size can be addressed at two stages of the actual conduct of the study. Firstly, one can calculate the optimum sample size required during the planning stage, while
designing the study, using appropriate approaches and information on some parameters. Secondly, the issue of sample size can be addressed through post-hoc power analysis at the stage of
interpretation of the results. In practice, the size of a study is often restricted because of limited financial resources, availability of cases (rare diseases) and time limitation. In these
situations the researcher completes the study using the available samples and performs post-hoc power analysis.[1]
It is also important to note that the requirement for estimating the sample size depends primarily on the study design and the main outcome measure of the study. There are various study design
options available for conducting medical research. A medical researcher needs to select an appropriate study design to answer the research question. There are many different approaches for
calculating the sample size for different study designs. For example, the procedure of calculating the sample size is different for a case-control design than for a cohort design. Similarly, there
are different approaches for calculating the sample size for cross-sectional studies, clinical trials, diagnostic test studies, etc. Moreover, within each study design there could be more sub-designs
and the sample size calculation approach would vary accordingly. For case-control studies, the approach for calculating the sample size is distinct for matched and un-matched designs. Hence, one must
use the correct approach for computing the sample size appropriate to the study design and its subtype.[1]
The second important issue that should be considered while computing the sample size is the primary outcome measure. The primary outcome measure is usually reflected in the primary research question
of the study and also depends on the study design. For estimating the risk in a case-control study the primary outcome measure would be the odds ratio, but while estimating the risk in a cohort study
it would be the relative risk. In a case-control study, the primary outcome measure could be the difference in means/proportions of exposure in cases and controls, crude odds ratio, adjusted odds
ratio, attributable risk, population attributable risk, prevented fraction, etc. While calculating the sample size, one of these primary outcome measures has to be specified since there are distinct
approaches for calculating the sample size for each of these outcomes.[3] Similarly, for each study design there could be many outcomes and a researcher needs to specify the main outcome measure of
the study.
For drawing a statistical inference from the study results two approaches are used: estimation (confidence interval approach) and hypothesis testing (test of significance approach). The procedures
for calculating the sample size for these two approaches differ and are available in the literature.[1],[2],[4],[5] A researcher needs to select the appropriate procedure for computing the sample
size and accordingly use the approach of drawing a statistical inference subsequently.
Moreover, one also needs to specify some additional parameters depending upon the approach chosen for calculating the sample size. They are hypothesis (one or two tailed), precision, type I error,
type II error, power, effect size, design effect, etc. For understanding the principles of sample size calculation and power analysis, one should have an understanding of these commonly used terms.
Description of some commonly used terms[1]
Random error
It describes the role of chance, particularly when the effects of explanatory or predictive factors have already been taken into account. Sources of random error include sampling variability, subject
to subject differences and measurement errors. It can be controlled and reduced to acceptably low levels by averaging, increasing the sample size and by repeating the experiment.
Systematic error (Bias)
It describes deviations that are not a consequence of chance alone. Several factors, including the patient selection criteria, might contribute to it. These factors may not be amenable to
measurement, but can usually be removed or reduced by good design and conduct of the experiment. A strong bias can yield an estimate very far from the true value, even in the wrong direction.
Precision (Reliability)
It describes the degree to which a variable has the same value when measured several times. It is a measure of consistency. Sometimes it simply refers to the width of the confidence interval. It is a
function of random error (the greater the error, the less precise the measurement), the sample size, the confidence interval required and the variance of the outcome variable. A larger sample size
would give precise estimates.
Accuracy (Validity)
It indicates the degree to which the variable actually represents what it is suppose to represent. It is a function of systematic error or bias.
Null hypothesis
This is a hypothesis which states that there is no difference among groups or that there is no association between the predictor and the outcome variables. This hypothesis needs to be tested.
Alternative hypothesis
This is a hypothesis that in some sense contradicts the null hypothesis. It assumes that there is a difference among the groups or there exists an association between the predictor and outcome
variable. If an alternative hypothesis cannot be tested directly, it is accepted by exclusion if the test of significance rejects the null hypothesis. There are two types of alternative hypothesis:
one-tailed (one-sided) hypothesis and two-tailed (two-sided) hypothesis. One-tailed hypothesis specifies the difference (or effect or association) in one direction only. For example, patients with
pancreatic cancer will have a higher rate of coffee drinking as compared to control subjects. Two-tailed hypothesis specifies the difference (or effect or association) in either direction. For
example, patients with pancreatic cancer will have a different rate of coffee drinking - either higher or lower - as compared to control subjects. A one-tailed approach leads to a smaller sample
size. However, the decision to use the one- or two-tailed approach depends on the clinical or biological importance or relevance of the research question and prior knowledge about effect or
association. This decision should not be based on sample size considerations.
Type I (α) error
It occurs if an investigator rejects a null hypothesis that is actually true in the population. It is the error of falsely stating that two drug effects are significantly different when they are actually equivalent. This is the probability of erroneously finding a disease-exposure association when none exists in reality. The probability of making a type I error is called the level of significance and is conventionally taken as 0.05 (5%). For computing the sample size, its specification in terms of Zα is required. The quantity Zα is a value from the standard normal distribution corresponding to α. For a one-sided test of the hypothesis, Zα is taken to be the value of the standard normal distribution corresponding to α. For a two-sided test, Zα is taken to be the value that is exceeded with probability α/2. The sample size is inversely proportional to the type I error.
Type II (β) error
It occurs if the investigator fails to reject a null hypothesis that is actually false in the population. It is the error of falsely stating that two drug effects are equivalent when they are actually different. This is the probability of erroneously failing to find a disease-exposure association when one exists in reality. For computing the sample size, its specification in terms of Zβ is required. The quantity Zβ is a value from the standard normal distribution corresponding to β. For either a one-sided or a two-sided test, Zβ is taken to be the value that is exceeded with probability β. The values of Zα and Zβ for selected values of α and β are presented in [Table - 1]. The sample size is inversely proportional to the type II error.
Power (1−β)
This is the probability that the test will correctly identify a significant difference, effect, or association in the sample should one exist in the population. It is expressed as 1−β. The sample size is directly proportional to the power of the study: the larger the sample size, the greater the power to detect the significance of a difference, effect, or association.
Effect size
The effect size refers to the magnitude of the effect under the alternative hypothesis. It should represent the smallest difference that would be of clinical or biological significance. It varies
from study to study. For example, a treatment effect that reduces mortality by 1% might be clinically important, while a treatment effect that reduces transient asthma by 20% may be of little
clinical interest. It is also variable from one statistical procedure to the other. It could be the difference in cure rates, or a standardized mean difference or a correlation coefficient. If the
effect size is increased, the type II error decreases. Power is a function of an effect size and the sample size. For a given power, 'small effects' require larger sample size than 'large effects'.
[Table - 2] shows the sample size for various effect sizes at a fixed power and level of significance.
Design effect
Geographic clustering is generally used to make the study easier and cheaper to perform. The effect on the sample size depends on the number of clusters and the variance between and within clusters.
In practice this is determined from previous studies or from studies of a similar type in literature, and is expressed as a constant called 'design effect', often between 1.0 and 2.0. It is the ratio
of the variance when cluster sampling is used to the variance when simple random sampling is used. The sample sizes for simple random samples are multiplied by the design effect to obtain the sample
size for the clustered sample.
Procedures for calculating the sample size
There are four procedures that could be used for calculating sample size: use of formulae, readymade tables, nomograms, and computer software.
Use of formulae for sample size calculation and power analysis
There are more than one hundred formulae for calculating the sample size and power in different situations for different study designs.[1] The following are two examples of their use in medical research.
To investigate the role of oral contraceptives (OC) in the etiology of cutaneous malignant melanoma in women, an unmatched case-control study is to be undertaken. For calculating the sample size for this study using formulae,[3] the following parameters have to be specified:
p0 (prevalence of exposure in the control population) = 0.30 (approximately 30% of women in the population use OC; information obtained from the literature).
α = 0.05 (two-sided), Zα = 1.96
β = 0.10, Zβ = 1.28 (power = 90%)
Odds ratio (OR) = 2 (information obtained from the literature or from earlier studies in other population settings)
n = 2p′q′ (Zα + Zβ)² / (p1 − p0)²
p1 = p0 OR / (1 + p0 (OR − 1))
p′ = ½ (p1 + p0), q′ = 1 − p′, q1 = 1 − p1, q0 = 1 − p0
Solution (by putting the above specified values in the formula): n = 188 in each group.
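For readers who prefer to compute the formula directly, a minimal sketch in Python (function and variable names are mine, not the article's). Carried at full precision the formula yields n ≈ 190 per group; the published 188 reflects intermediate rounding.

import math

def sample_size_cc(p0, odds_ratio, z_alpha=1.96, z_beta=1.28):
    # per-group n for an unmatched case-control study (formula above)
    p1 = p0 * odds_ratio / (1 + p0 * (odds_ratio - 1))
    p_bar = 0.5 * (p1 + p0)
    q_bar = 1 - p_bar
    n = 2 * p_bar * q_bar * (z_alpha + z_beta) ** 2 / (p1 - p0) ** 2
    return math.ceil(n)

print(sample_size_cc(0.30, 2))   # 190 per group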
If we decide to study only 50 cases and 50 controls, then with the other specifications unchanged, the power of the study would be as follows.
Formula:[3] Zβ = [√(n (p1 − p0)²) − Zα √(2p′q′)] / √(p1q1 + p0q0)
The power is determined from tables of the normal distribution by finding the probability with which the calculated value of Zβ is not exceeded.
Solution (by putting the above specified values in the formula): Zβ = −1.13.
From tables of the normal probability function, one finds Power = P(Z ≤ −1.13) = 0.13.
Thus if the odds ratio in the target population is 2, a case-control study of n = 50 per group has only a 13% chance of finding that the sample estimate will be significantly (α = 0.05) different from unity.
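A companion sketch for the power formula (again with my own hypothetical names). As a consistency check, it returns roughly 0.90 at the n ≈ 190 found above. With n = 50, however, the formula as printed evaluates to Zβ ≈ −0.30 (power ≈ 0.38); the −1.13 and 13% quoted above could not be reproduced from these inputs and may rest on differently rounded intermediate values.

from math import sqrt, erf

def norm_cdf(z):
    # standard normal cumulative distribution function
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_cc(p0, odds_ratio, n, z_alpha=1.96):
    # power of an unmatched case-control study with n subjects per group
    p1 = p0 * odds_ratio / (1 + p0 * (odds_ratio - 1))
    p_bar = 0.5 * (p1 + p0)
    z_beta = (sqrt(n) * abs(p1 - p0) - z_alpha * sqrt(2 * p_bar * (1 - p_bar))) / \
             sqrt(p1 * (1 - p1) + p0 * (1 - p0))
    return norm_cdf(z_beta)

print(round(power_cc(0.30, 2, 190), 2))  # ~0.90, matching the 90% design power
print(round(power_cc(0.30, 2, 50), 2))   # ~0.38 from the formula as printed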
Use of readymade tables for sample size calculation[1],[2],[3],[4],[5]
How large a sample of patients should be followed up if an investigator wishes to estimate the incidence rate of a disease to within 10% of its true value with 95% confidence? Here, the relative
precision (e) is 10% and the confidence level is 95%. [Table - 3] shows that for e = 0.10 and a confidence level of 95% a sample size of 385 would be needed. This table can be used to calculate the
sample size making the desired changes in the relative precision and confidence level, e.g. if the level of confidence is reduced to 90%, then the sample size required would be 271. Such tables that
give ready made sample sizes are available for different designs and situations.
Use of nomograms for sample size calculation[6],[7]
For using a nomogram to calculate the sample size, one needs to specify the study (treatment/group 1) and the control groups (placebo/group 2). This could be arbitrary or based on the study design;
the nomogram will work either way. The researcher should then decide the effect size that is clinically important to detect. This should be expressed in terms of the percentage change in the response
rate compared with that of the control group. For example, if 40% of patients treated with the standard therapy are cured and one wants to know whether a new drug can cure 50%, one is looking for a
25% increase in the cure rates [((50% - 40%)/40%) = 25%].
The desired percentage change is located on a horizontal axis of the nomogram (x line, [Figure - 1]. A vertical line is extended to intersect with the diagonal line corresponding to the response rate
in the control group. If the appropriate diagonal line does not extend far enough to intersect with this vertical line, one can try using the other treatment group as the control group. The
symmetrical design of the nomogram allows an arbitrary designation of control group. Finally, a horizontal line is extended from this point to the vertical axis, showing the sample size required for
both the treatment and control groups.
A study randomly allocates patients with an infectious disease to treatment with drug A or drug B. The study reports a 40% cure rate using drug A, the current standard therapy, and a 45% cure rate
using drug B, a new drug. The study concludes that there is no statistically significant difference in response rates between the two drugs. There are 150 patients in each treatment group.
A researcher, who is reading this study, believes that previous studies suggest a better response rate in patients treated with drug B. He decides that a 25% improvement in the usual response rate
from drug A, from 40% to 50%, would be important for him. He does not consider a smaller difference to be clinically important. Using the nomogram, he finds that the sample size required to detect a
25% difference in cure rate between drug A and drug B, assuming a control group cure rate of 40%, is about 370 (line x, [Figure - 1]. This is the sample size that ensures an 80% chance of detecting
this difference if it exists, assuming a of 0.05. Because there are only 150 patients in each treatment group, the sample size is clearly inadequate; it is not large enough to be sure that a
clinically important 25% difference in cure rates does not exist. The researcher, therefore, feels justified in continuing to prescribe drug B since previous evidence suggests that it is more
effective and the new study, despite its negative results, is too small to refute this evidence.
A separate nomogram is available for continuous variables.[6] Both these nomograms are intended to provide the clinician with a handy and easy-to-use reference for ascertaining whether an apparently
negative study has a sample size adequate to detect reliably any important difference between treatment groups.
Use of computer software for sample size calculation and power analysis
The following software can be used for calculating sample size and power: Stata, Epi Info, Sample, Power and Precision, and nQuery Advisor.
1. Zodpey SP, Ughade SN. Workshop manual: Workshop on Sample Size Considerations in Medical Research. Nagpur: MCIAPSM; 1999.
2. Arkin CF, Wachtel MS. How many patients are necessary to assess test performance? JAMA 1990;263:275-8.
3. Schlesselman JJ. Case-control studies - Design, conduct and analysis. 1st ed. New York: Oxford University Press; 1982.
4. Bach LA, Sharpe K. Sample size for clinical and biological research. Aust NZ J Med 1989;19:64-8.
5. Lwanga SK, Lemeshow S. Sample size determination in health studies - A practical manual. 1st ed. Geneva: World Health Organization; 1991.
6. Young MJ, Bresnitz EA, Strom BL. Sample size nomograms for interpreting negative clinical studies. Ann Int Med 1983;99:248-51.
7. Altman DG, Gore SM. Statistics in Practice. Harrow, Middlesex: BMJ; 1982.
Fortran Wiki
Determines the maximum value of the elements in an array, or, if the dim argument is supplied, determines the maximum value along each row of the array in the dim direction. If mask is present,
only the elements for which mask is .true. are considered. If the array has zero size, or all of the elements of mask are .false., then the result is the most negative number of the type and kind of
array if array is numeric, or a string of nulls if array is of character type.
Fortran 95 and later
Transformational function
• result = maxval(array, dim [, mask])
• result = maxval(array [, mask])
• array - Shall be an array of type integer, real, or character.
• dim - (Optional) Shall be a scalar of type integer, with a value between one and the rank of array, inclusive. It may not be an optional dummy argument.
• mask - Shall be an array of type logical, and conformable with array.
Return value
If dim is absent, or if array has a rank of one, the result is a scalar. If dim is present, the result is an array with a rank one less than the rank of array, and a size corresponding to the size of
array with the dim dimension removed. In all cases, the result is of the same type and kind as array.
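Example
An illustrative program (expected output shown in comments):

    program demo_maxval
      implicit none
      integer :: a(5) = (/ 1, 7, 3, 9, 5 /)
      integer :: b(2,3)
      b = reshape((/ 1, 2, 3, 4, 5, 6 /), (/ 2, 3 /))
      print *, maxval(a)               ! 9
      print *, maxval(a, mask=a < 9)   ! 7 (largest element below 9)
      print *, maxval(b, dim=1)        ! 2 4 6 (maximum over dimension 1)
    end program demo_maxval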
See also
max, maxloc | {"url":"http://fortranwiki.org/fortran/show/maxval","timestamp":"2014-04-20T23:27:41Z","content_type":null,"content_length":"9917","record_id":"<urn:uuid:2cca2656-4a84-4ef7-b930-ea0e54e91afe>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector Geometry
I don't know how to approach the problem. Draw triangle ABC, with P the midpoint of AB, Q the midpoint of BC, and R the midpoint of CA. If X is any point, show that XA + XB + XC = XP + XQ + XR.
Re: Vector Geometry
This is a nice question. What have you done towards solving?
It can teach you a great deal.
So post what you have done?
Re: Vector Geometry
So I wrote that XP = XA + AP and XP = XB - PB.
AP = PB, so I combined the two equations to get 2XP = XA + XB,
and similarly 2XR = XA + XC and 2XQ = XB + XC.
Adding the three equations together, I got 2XA + 2XB + 2XC = 2XP + 2XR + 2XQ, and then simplified.
Am I doing it right, or did I miss something?
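A quick numerical check of the identity with arbitrary coordinates (a Python/NumPy sketch, not part of the original thread):

    import numpy as np

    # Arbitrary triangle ABC and an arbitrary point X
    A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])
    X = np.array([-2., 5.])

    P, Q, R = (A + B) / 2, (B + C) / 2, (C + A) / 2   # midpoints

    lhs = (A - X) + (B - X) + (C - X)   # vector XA + XB + XC
    rhs = (P - X) + (Q - X) + (R - X)   # vector XP + XQ + XR
    print(np.allclose(lhs, rhs))        # True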
Re: Vector Geometry
I think I would be inclined to start with the obvious: AB + BC = AC. Then divide both sides by 2.
Re: Vector Geometry
Edit: Whoops, I might have misinterpreted vectors as lengths of segments (if XA referred to the length of XA, as opposed to vector XA, the problem would be clearly flawed).
Last edited by richard1234; January 15th 2013 at 08:44 PM.
Re: Vector Geometry
Did you figure out the answer? I am trying to do the same problem... some help and clarification would be much appreciated!!
Summary: BULK UNIVERSALITY AND CLOCK SPACING OF ZEROS FOR ERGODIC JACOBI MATRICES WITH A.C. SPECTRUM
ARTUR AVILA, YORAM LAST, AND BARRY SIMON
Abstract. By combining some ideas of Lubinsky with some soft analysis, we prove that universality and clock behavior of zeros for OPRL in the a.c. spectral region is implied by convergence of (1/n) K_n(x, x) for the diagonal CD kernel and boundedness of the analog associated to second kind polynomials. We then show that these hypotheses are always valid for ergodic Jacobi matrices with a.c. spectrum and prove that the limit of (1/n) K_n(x, x) is ρ(x)/w(x), where ρ is the density of zeros and w is the a.c. weight of the spectral measure.
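For reference (standard background notation, not part of the summary itself), the Christoffel-Darboux (CD) kernel built from the orthonormal polynomials is

    K_n(x, y) = \sum_{j=0}^{n} p_j(x) \, p_j(y),

so the quantity above is its diagonal, K_n(x, x).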
1. Introduction
Given a finite measure, dµ, of compact and not finite support on R,
one defines the orthonormal polynomials, pn(x) (or pn(x, dµ) if the µ- | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/833/1820995.html","timestamp":"2014-04-16T18:07:07Z","content_type":null,"content_length":"8013","record_id":"<urn:uuid:ee22ef06-e856-42fc-b29f-174834b80fa8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: logistic regression and dropped variable (or error) messages
st: logistic regression and dropped variable (or error) messages
From "Wendell Joice" <wjoice@erols.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: logistic regression and dropped variable (or error) messages
Date Wed, 11 Dec 2002 13:58:38 -0500
Dear statalist,
I am running a series of stepwise logistic regressions using the xi command
and stata 7 (but using stata 6 manuals)
xi: sw logistic psycther i.race i.ment i.mentlrec i.new i.seenbfr i.acchron
yearnum i.complain if sample==1, pr (.25)
Can anyone explain or direct me to the explanations for the following messages, received in different runs using the same basic command as above (the messages appeared with different variables)? I have tried the manuals, findit, etc., but can't find the explanations; that is, to what specific problems do the messages refer:
- "between-term collinearity, variable _Ivar1name_3" (Red error message
followed by r(498) in blue)
- (_Ivar2name_1 dropped due to estimability)
- note: _Ivar3name_2 dropped due to collinearity
- (_Ivar4name_2 dropped because constant)
I think I understand the third one due to collinearity but want to be sure
about the others.
Wendell Joice
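As a minimal numerical sketch of the collinearity that such messages flag (hypothetical data, checked with NumPy rather than Stata; this illustrates the linear algebra, not Stata's internal algorithm):

    import numpy as np

    # Hypothetical 3-level factor expanded into three 0/1 dummies
    level = np.array([1, 2, 3, 1, 2, 3])
    d1 = (level == 1).astype(float)
    d2 = (level == 2).astype(float)
    d3 = (level == 3).astype(float)
    const = np.ones_like(d1)

    # d1 + d2 + d3 equals the constant column, so the design matrix is
    # rank-deficient; an estimator must drop one column, which is the
    # situation that "dropped due to collinearity" reports.
    X = np.column_stack([const, d1, d2, d3])
    print(X.shape[1], np.linalg.matrix_rank(X))  # 4 columns, rank 3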
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2002-12/msg00247.html","timestamp":"2014-04-21T15:22:37Z","content_type":null,"content_length":"5648","record_id":"<urn:uuid:38f548cf-de7b-40fc-aeb7-2570dc9b5e16>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Agresti, A. (1990), Categorical Data Analysis, New York: John Wiley & Sons, Inc.
Agresti, A. (1992), "A Survey of Exact Inference for Contingency Tables," Statistical Science, 7(1), 131 -177.
Agresti, A. (1996), An Introduction to Categorical Data Analysis, New York: John Wiley & Sons, Inc.
Agresti, A., Mehta, C.R. and Patel, N.R. (1990), "Exact Inference for Contingency Tables with Ordered Categories," Journal of the American Statistical Association, 85, 453 -458.
Agresti, A., Wackerly, D., and Boyett, J.M. (1979), "Exact Conditional Tests for Cross-Classifications: Approximation of Attained Significance Levels," Psychometrika, 44, 75 -83.
Birch, M.W. (1965), "The Detection of Partial Association, II: The General Case," Journal of the Royal Statistical Society, B, 27, 111 -124.
Bishop, Y., Fienberg, S.E., and Holland, P.W. (1975), Discrete Multivariate Analysis: Theory and Practice, Cambridge, MA: MIT Press.
Bowker, A.H. (1948), "Bowker's Test for Symmetry," Journal of the American Statistical Association, 43, 572 -574.
Breslow, N.E. and Day, N.E. (1993), Statistical Methods in Cancer Research, Volume I: The Analysis of Case-Control Studies, IARC Scientific Publications, No. 32, New York: Oxford University Press, Inc.
Breslow, N.E. and Day, N.E. (1994), Statistical Methods in Cancer Research, Volume II: The Design and Analysis of Cohort Studies, IARC Scientific Publications, No. 82, New York: Oxford University Press, Inc.
Bross, I.D.J. (1958), "How to Use Ridit Analysis," Biometrics, 14, 18 -38.
Brown, M.B. and Benedetti, J.K. (1977), "Sampling Behavior of Tests for Correlation in Two-Way Contingency Tables," Journal of the American Statistical Association, 72, 309 -315.
Cicchetti, D.V. and Allison, T. (1971), "A New Procedure for Assessing Reliability of Scoring EEG Sleep Recordings," American Journal of EEG Technology, 11, 101 -109.
Cochran, W.G. (1950), "The Comparison of Percentages in Matched Samples," Biometrika, 37, 256 -266.
Cochran, W.G. (1954), "Some Methods for Strengthening the Common χ² Tests," Biometrics, 10, 417-451.
Collett, D. (1991), Modelling Binary Data, London: Chapman & Hall.
Cohen, J. (1960), "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement, 20, 37 -46.
Drasgow, F. (1986), "Polychoric and Polyserial Correlations" in Encyclopedia of Statistical Sciences, Volume 7 , eds. S. Kotz and N. L. Johnson, New York: John Wiley & Sons, Inc., 68 -74.
Fienberg, S.E. (1980), The Analysis of Cross-Classified Data, Second Edition, Cambridge, MA: MIT Press.
Fleiss, J.L. (1981), Statistical Methods for Rates and Proportions, Second Edition, New York: John Wiley & Sons, Inc.
Fleiss, J.L. and Cohen, J. (1973), "The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability," Educational and Psychological Measurement, 33, 613 -619.
Fleiss, J.L., Cohen, J., and Everitt, B.S. (1969), "Large-Sample Standard Errors of Kappa and Weighted Kappa," Psychological Bulletin, 72, 323 -327.
Freeman, G.H. and Halton, J.H. (1951), "Note on an Exact Treatment of Contingency, Goodness of Fit and Other Problems of Significance," Biometrika, 38, 141 -149.
Gail, M. and Mantel, N. (1977), "Counting the Number of r ×c Contingency Tables with Fixed Margins," Journal of the American Statistical Association, 72, 859 -862.
Gart, J.J. (1971), "The Comparison of Proportions: A Review of Significance Tests, Confidence Intervals and Adjustments for Stratification," Review of the International Statistical Institute, 39(2),
148 -169.
Goodman, L.A. and Kruskal, W.H. (1979), Measures of Association for Cross Classification, New York: Springer-Verlag.
Greenland, S. and Robins, J.M. (1985), "Estimators of the Mantel-Haenszel Variance Consistent in Both Sparse Data and Large-Strata Limiting Models," Biometrics, 42, 311 -323.
Haldane, J.B.S. (1955), "The Estimation and Significance of the Logarithm of a Ratio of Frequencies," Annals of Human Genetics, 20, 309 -314.
Hollander, M. and Wolfe, D.A. (1973), Nonparametric Statistical Methods, New York: John Wiley & Sons, Inc.
Kendall, M. (1955), Rank Correlation Methods, Second Edition, London: Charles Griffin and Co.
Kendall, M. and Stuart, A. (1979), The Advanced Theory of Statistics, Volume 2, New York: Macmillan Publishing Company, Inc.
Kleinbaum, D.G., Kupper, L.L., and Morgenstern, H. (1982), Epidemiologic Research: Principles and Quantitative Methods, Research Methods Series, New York: Van Nostrand Reinhold.
Landis, R.J., Heyman, E.R., and Koch, G.G. (1978), "Average Partial Association in Three-way Contingency Tables: A Review and Discussion of Alternative Tests," International Statistical Review, 46,
237 -254.
Leemis, L.M. and Trivedi, K.S. (1996), "A Comparison of Approximate Interval Estimators for the Bernoulli Parameter," The American Statistician, 50(1), 63 -68.
Lehmann, E.L. (1975), Nonparametrics: Statistical Methods Based on Ranks, San Francisco: Holden-Day, Inc.
Liebetrau, A.M. (1983), Measures of Association, Quantitative Application in the Social Sciences, Vol. 32, Beverly Hills: Sage Publications, Inc.
Mack, G.A. and Skillings, J.H. (1980), "A Friedman-Type Rank Test for Main Effects in a Two-Factor ANOVA," Journal of the American Statistical Association, 75, 947 -951.
Mantel, N. (1963), "Chi-square Tests with One Degree of Freedom: Extensions of the Mantel-Haenszel Procedure," Journal of the American Statistical Association, 58, 690 -700.
Mantel, N. and Haenszel, W. (1959), "Statistical Aspects of the Analysis of Data from Retrospective Studies of Disease," Journal of the National Cancer Institute, 22, 719 -748.
Margolin, B.H. (1988), "Test for Trend in Proportions," in Encyclopedia of Statistical Sciences, Volume 9, eds. S. Klotz and N.L. Johnson, New York: John Wiley & Sons, Inc., 334 -336.
McNemar, Q. (1947), "Note on the Sampling Error of the Difference between Correlated Proportions or Percentages," Psychometrika, 12, 153 -157.
Mehta, C.R. and Patel, N.R. (1983), "A Network Algorithm for Performing Fisher's Exact Test in r ×c Contingency Tables," Journal of the American Statistical Association, 78, 427 -434.
Mehta, C.R., Patel, N.R., and Senchaudhuri, P. (1991), "Exact Stratified Linear Rank Tests for Binary Data," Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, ed.
E.M. Keramidas, 200 -207.
Mehta, C.R., Patel, N.R., and Tsiatis, A.A. (1984), "Exact Significance Testing to Establish Treatment Equivalence with Ordered Categorical Data," Biometrics, 40, 819 -825.
Narayanan, A. and Watts, D. (1996), "Exact Methods in the NPAR1WAY Procedure," in Proceedings of the Twenty-First Annual SAS Users Group International Conference, Cary, NC: SAS Institute Inc., 1290
Olsson, U. (1979), "Maximum Likelihood Estimation of the Polychoric Correlation Coefficient," Psychometrika, 44, 443-460.
Pirie, W. (1983), "Jonckheere Tests for Ordered Alternatives," in Encyclopedia of Statistical Sciences, Volume 4 , eds. S. Kotz and N.L. Johnson, New York: John Wiley & Sons, Inc., 315 -318.
Radlow, R. and Alf, E.F. (1975), "An Alternate Multinomial Assessment of the Accuracy of the Chi-Square Test of Goodness of Fit," Journal of the American Statistical Association, 70, 811 -813.
Robins, J.M., Breslow, N., and Greenland, S. (1986), "Estimators of the Mantel-Haenszel Variance Consistent in Both Sparse Data and Large-Strata Limiting Models," Biometrics, 42, 311 -323.
Snedecor, G.W. and Cochran, W.G. (1989), Statistical Methods, Eighth Edition, Ames, IA: Iowa State University Press.
Somers, R.H. (1962), "A New Asymmetric Measure of Association for Ordinal Variables," American Sociological Review, 27, 799 -811.
Stokes, M.E., Davis, C.S., and Koch, G.G. (1995), Categorical Data Analysis Using the SAS System, Cary, NC: SAS Institute Inc.
Theil, H. (1972), Statistical Decomposition Analysis, Amsterdam: North-Holland Publishing Company.
Thomas, D.G. (1971), "Algorithm AS-36. Exact Confidence Limits for the Odds Ratio in a 2 × 2 Table," Applied Statistics, 20, 105-110.
Valz, P.D. and Thompson, M.E. (1994), "Exact Inference for Kendall's S and Spearman's Rho with Extensions to Fisher's Exact Test in r × c Contingency Tables," Journal of Computational and Graphical Statistics, 3(4), 459-472.
van Elteren, P.H. (1960), "On the Combination of Independent Two-Sample Tests of Wilcoxon," Bulletin of the International Statistical Institute, 37, 351 -361.
Woolf, B. (1955), "On Estimating the Relationship between Blood Group and Disease," Annals of Human Genetics, 19, 251 -253.
Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved. | {"url":"http://v8doc.sas.com/sashtml/stat/chap28/sect43.htm","timestamp":"2014-04-20T16:19:06Z","content_type":null,"content_length":"12140","record_id":"<urn:uuid:5596354c-cfa5-45e5-bd5c-bab2d3c1d847>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
A bunch of interpreters using CPP and Haskell
This text is about the code distribution that you find at the corresponding SourceForge project.
Released: 24 Aug 2010
Last updated: 31 Aug 2010
We walk through the evolution of a simple interpreter. This exercise is meant to explore functional programming in the context of language interpreters. We start from a trivial language which can
generate natural numbers. Step by step, we add some language constructs that make us deal with partiality of interpretation, type tests, a binding mechanism, and different forms of recursion.
None of these evolution steps are particularly advanced. The contribution of this code distribution lies more in the pedagogical style of making simple, well-understood extensions, and reflecting on
the functional programming issues that pop up as well as the code-level organization and reusability that is achievable. (For the actual reflection, I recommend the upcoming Channel9 lecture on
language interpretation that will use some part of the code base.)
The interpreters are programmed in a naive denotational style. We do not use monads for modularization until somewhat late in the exercise. Also, we do not make any effort to add language constructs
in a type-safe, modular way (c.f., the Expression Problem). Instead, we use CPP to incrementally extend the interpreter; we also subdivide the interpreters into chunks of code so that we can reuse
and replace chunks (files) selectively along evolution. Arguably, we use low level techniques for setting up a kind of product line for simple interpreters.
Each interpreter is subdivided into code units as follows:
• Imports: any imports needed to use Haskell's library.
• Syntax: the algebraic datatype for the abstract syntax.
• Value: the designated domain for the types of results.
• Domains: other types used by the interpreter.
• Interpreter: the semantic function (the interpreter).
• Algebra: composition functions for semantic meanings.
• Library: reusable, auxiliary functions for the interpreter.
• Locals: not so reusable auxiliary functions.
• Test: any code needed for testing the interpreter.
• Main: the main function for testing the interpreter.
Running the stuff
Contributions are welcome.
All interpreters are readily cached in subdirectory Haskell/src/cache.
You can rebuild and test all this stuff very easily if you have:
- ghc(i) (tested with version 6.10.4)
- gcc cpp (tested with version 4.2.1)
- make (tested with GNU Make 3.81)
Go to subdirectory "Haskell/src" and type in "make".
CPP is used to glue together the interpreters from the code units.
Description of the versions of the interpreter
• 0-ConstAdd: This is an evaluator for Const/Add expressions. (If you have seen the C9 lecture on the Expression Problem, this option only serves as a reminder that we have already dealt with
interpreters back then.)
• 1-ZeroSucc: Let's pick a trivial starting point for interpretation. This language has a constant operation, Zero, and the successor operation, Succ. This is the basis for Peano-like natural numbers. Semantically, we represent natural numbers through Haskell's Ints. Hence, this tiny interpreter allows us to build Haskell Ints from Peano's Zero and Succ. That's nothing short of impressive. :-) (A minimal sketch of this interpreter appears after this list.)
• 2-ZeroSuccPred: So far, our interpreter was a total function (assuming non-bottom arguments). We add a predecessor construct to the abstract syntax. Since our (intended) semantic domain is natural numbers, our interpreter function gets partial in this manner. That is, we need to rule out that we compute the predecessor of zero. We need to rewrite existing equations and use "partial function application" all over the place. (See the second sketch after this list.)
• 3-NB: We add a few more primitives to the interpreter: operations for Booleans. In fact, we arrive at the NB language of Pierce's TAPL (except that we leave out False and True (because they can
be expressed already (as you may want to show))). This step of evolution adds something new insofar that we start to consider different result types. Hence, we need to perform partial projections
on results, if some operation of the interpreter requires a certain type. (For instance, the successor operation is only applicable to natural numbers.) In principle, NB also brings up typing,
but we do not get into issues of static semantics/typing here.
• 4-Lambda: We add the lambda calculus to NB. Adopting common terminology, our composed beast is best called an applicative lambda calculus. We are ready for functional programming now. (We are
Turing complete, too, for what it matters.) For instance, we can define recursive, arithmetic operations. To this end, we leverage the CBV fixed-point operator which is representable as a lambda term.
• 5-Letrec: We add a recursive let construct to the applicative lambda calculus. This is a minor extension in terms of interpreter design. That is, it is a straightforward data extension---we do
not leave the ground of our current semantic model. Conceptually, this is an interesting extension though because we can now define proper nested, recursive bindings. Our Turing-complete language
gets closer to a proper functional programming language. In fact, this is as much expressiveness as we get. The two remaining versions of the interpreter only vary style of its definition.
• 6-Monads: The signature of the interpreter function involves partiality (c.f., Maybe Value) and environment passing (Env -> ...). The actual interpreter involves considerable plumbing to deal
with partiality and environment passing. This makes it hard to focus on the core meaning of the interpreter's equations. We migrate to monadic style to raise the level of abstraction, and to make
parts of the interpreter more oblivious to partiality and environment passing, where possible. We compose the required monad through monad transformation---using a Maybe and a Reader transformer
on top of the identity monad. (Arguably, one may object to the use of the reader monad for environment passing. The resulting semantics of lambda abstraction shows the difficulty of using the reader monad in this way.)
• 7-Bananas: Finally, we separate recursion of the interpreter from the actual algebra of meanings of constructs. In this manner, we clarify the intention of compositionality for such a denotational-style interpreter. Also, this step presents the generalization of folding that is established for lists, which however can also be used for domain-specific algebraic datatypes such as types of abstract syntaxes. (This is the last sketch after this list.)
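To give a flavor of version 1, here is a minimal sketch in the spirit of the code units listed above (the names are illustrative, not the distribution's actual code):

    -- Syntax
    data Expr = Zero | Succ Expr

    -- Value
    type Value = Int

    -- Interpreter
    eval :: Expr -> Value
    eval Zero     = 0
    eval (Succ e) = eval e + 1

    -- Test
    main :: IO ()
    main = print (eval (Succ (Succ (Succ Zero))))  -- prints 3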
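For version 2, partiality shows up as Maybe plumbing in every equation (again an illustrative sketch; it deliberately avoids monadic combinators, as the text does until version 6):

    data Expr = Zero | Succ Expr | Pred Expr

    eval :: Expr -> Maybe Int
    eval Zero     = Just 0
    eval (Succ e) = case eval e of
                      Just n  -> Just (n + 1)
                      Nothing -> Nothing
    eval (Pred e) = case eval e of
                      Just n | n > 0 -> Just (n - 1)  -- rule out pred of zero
                      _              -> Nothing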
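And for version 7, the separation of recursion from the algebra of meanings, sketched for the Zero/Succ fragment as a fold over the syntax functor:

    data ExprF r = ZeroF | SuccF r        -- one layer of syntax

    instance Functor ExprF where
      fmap _ ZeroF     = ZeroF
      fmap f (SuccF r) = SuccF (f r)

    newtype Expr = In (ExprF Expr)        -- tie the recursive knot

    foldExpr :: (ExprF a -> a) -> Expr -> a
    foldExpr alg (In e) = alg (fmap (foldExpr alg) e)

    evalAlg :: ExprF Int -> Int           -- compositional algebra of meanings
    evalAlg ZeroF     = 0
    evalAlg (SuccF n) = n + 1

    -- eval = foldExpr evalAlg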
Comments and questions and contributions welcome.
PS: This is not rocket science, but it may be pretty useful in teaching interpretation and executable semantics and functional programming, no? | {"url":"http://professor-fish.blogspot.com/2010/08/bunch-of-interpreters-using-cpp-and.html","timestamp":"2014-04-20T04:28:20Z","content_type":null,"content_length":"71727","record_id":"<urn:uuid:1e0bcead-130a-4396-92f2-a2bbf17ef8b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fractions, Decimals, And Percentages Ppt Presentation
What do you know about the relationship between fractions, decimals, and percents? :
What do you know about the relationship between fractions, decimals, and percents?
Objective :
Objective Explain the numerical relationships between percents, decimals, and fractions.
Slide 3:
With your table, discuss everything you know about … Fractions, Decimals, Percents
Converting Fractions :
Converting Fractions To Decimal … Divide the numerator by the denominator. Show your answer as a decimal. To Percents … Convert the fraction into a decimal. Multiply the decimal by 100 or move the decimal point two places to the right. Add a percentage sign.
Converting Decimals :
Converting Decimals To Fractions … Say the number aloud and write it as a fraction: 0.2 = two tenths = 2/10. Reduce to simplest form. To Percents … Multiply the decimal by 100 or move the decimal point two places to the right. Add a percentage sign.
Converting Percents :
Converting Percents To Fractions … Write the percent as the numerator and use 100 for the denominator. Reduce to simplest form. To Decimals … Divide the percent by 100 or move the decimal point two places to the left.
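These three conversions are mechanical enough to sketch in a few lines of Python (an illustration using the standard fractions module, not part of the original slides):

    from fractions import Fraction

    def fraction_to_decimal(frac):      # divide numerator by denominator
        return frac.numerator / frac.denominator

    def decimal_to_percent(dec):        # move the decimal two places right
        return dec * 100

    def percent_to_fraction(pct):       # put the percent over 100, reduce
        return Fraction(pct, 100)

    print(fraction_to_decimal(Fraction(5, 8)))   # 0.625
    print(decimal_to_percent(0.625))             # 62.5
    print(percent_to_fraction(75))               # 3/4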
Slide 7:
What fraction of the figure is shaded? What decimal represents the shaded portion? What percent represents the shaded portion?
Slide 8:
What fraction of the figure is shaded? What decimal represents the shaded portion? What percent represents the shaded portion?
Slide 9:
Michael Beasley shot 25 times and made 23 shots. How could you represent his made baskets as a fraction, decimal, and percent?
Slide 10:
Given the fraction 5/8, what is the decimal equivalent, and what is the percent equivalent? 0.625 62.5%
Slide 11:
Given the percent 75%, how would you convert it to a fraction and to a decimal? ¾ 0.75
Slide 12:
Given the decimal 0.40, what are the percent and fraction equivalents? 40% 2/5
Slide 13:
Given the fraction 6/15, what is the decimal equivalent and what is the percent equivalent? 0.40 40%
Slide 14:
Given the percent 60%, how would you convert it to a fraction and to a decimal? 3/5 0.6
Slide 15:
Given the decimal 0.8, what are the percent and fraction equivalents? 80% 4/5
FDP Basketball :
FDP Basketball One lucky contestant will win the chance to shoot eight shots today. As a class we will determine how to express the number of shots made as a fraction, decimal, and percent.
Slide 17:
The Chiefs won two games and lost fourteen games last season. How could you represent their record as a fraction, decimal, and percent?
Slide 22:
CSM Workouts http://www.commonsensemath.com/school/lessonsoverview/lessons.php?id=95 http://www.commonsensemath.com/school/lessonsoverview/lessons.php?id=20
Slide 23:
Practice Activities: Percent Goodies Matching Game
Objective :
Objective Explain the numerical relationships between percents, decimals, and fractions.
Bipartite Nim-Geography
Two players are playing a game on a bipartite graph where all of the edges are nim-heaps of various sizes. A token starts on one of the vertices, and on your turn you must move the token over an edge
and pick up some of the matchsticks in the nim heap corresponding to that edge. If all of the edges meeting the vertex containing the token are empty when you start your turn, you lose.
Is there a polynomial time algorithm to decide this game?
I am interested in a multiplayer combinatorial game played as follows: between every pair of people sits a combinatorial game. One player starts with a "hot potato" token. Whoever has the token must
make a move on one of the games incident to him to pass the token to the other player playing that game. If he can't make a move in any of the games adjacent to him, he loses and everybody else wins.
Since multiplayer games are hard to analyze, we can simplify the problem by splitting the players into two teams, where each team wants the potato to end up in the hands of the other team. To
simplify the game further, we might also assume that no two players on the same team have a game between them. Even then, if all of the games are numbers then this problem is directed bipartite
geography which I'm pretty sure is PSPACE-complete, so I'm assuming all of the games are impartial as well.
If all of the nim-heaps have size one or zero, though, then this problem becomes undirected bipartite geography, which can be solved in polynomial time, so I strongly suspect that bipartite
nim-geography should have a polynomial time solution as well.
combinatorial-game-theory computer-science co.combinatorics
1 Answer
I guess you already know this, but I just stumbled on this paper that studies the exact same game you described:
M. Fukuyama, A Nim game played on graphs, Theoret. Comput. Sci. 304 (2003), 387–399.
Section 3 of this paper studies the Nim game on a simple bipartite graph in which all vertices on one side are of degree (at most) $2$. At the start of the game, the token is
assumed to be on a vertex on the side without the degree condition. I put the term "at most" in parentheses because this is proved to be irrelevant in the paper so that you can safely assume
that all vertices on the restricted side are of degree $2$. The author gives necessary and sufficient conditions for the first player to have a winning strategy.
To decide if the first player can win by using the necessary and sufficient conditions given in this paper, if my understanding through skimming the paper is correct, you check if the
following three conditions hold for each vertex $u$ on the side with the degree restriction:
1. The number $h$ of matchsticks in the nim heap on one edge with $u$ as its endpoint is different from that of the other edge with $u$ as its endpoint. (In the following, $h$ is assumed to be the smaller.)
2. Split the vertex $u$ into $u_1$ and $u_2$, and w.l.o.g. assume that the unique edge of which $u_1$ is an endpoint after splitting is of weight $h$. The minimum capacity of $u_1$-$u_2$ cuts is equal to $h$.
3. For any minimum $u_1$-$u_2$ cut, the vertex the token is currently at is connected to $u_2$ (i.e., the one with more matchsticks on its edge).
Checking the first condition is trivial. The second one only needs to compute the minimum capacity, which can be done in polynomial time. For the third condition, while checking if two
vertices are connected in a graph can be done quickly, we may have to list all minimum $u_1$-$u_2$ cuts. Google got me this paper that mentions an algorithm for listing all minimum $s$-$t$
cuts for fixed $s$ and $t$, where the run time depends on the total number of minimum $s$-$t$ cuts for all pair $s, t$ in the vertex set, which can be exponential... The details are
relegated to another technical report by the same authors (Ref. [GN1] in the linked PDF), and there seems to be an improvement on this in said technical report. But I couldn't find it online.
I looked for a similar paper that doesn't impose the degree condition, but my google-fu failed me.
Whoa, thanks for finding this! (I didn't already know it.) – zeb Oct 22 '13 at 19:13
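Conditions 1 and 2 are easy to check in polynomial time; here is a sketch using NetworkX (the encoding and the function below are my own illustration, not code from the paper, and they assume a simple graph with heap sizes stored in the 'capacity' edge attribute):

    import networkx as nx

    def first_two_conditions(G, u):
        # u is a degree-2 vertex on the restricted side; returns True iff
        # conditions 1 and 2 hold. (Condition 3 additionally requires
        # enumerating all minimum u1-u2 cuts.)
        a, b = list(G.neighbors(u))
        h1, h2 = G[u][a]['capacity'], G[u][b]['capacity']
        if h1 == h2:                      # condition 1 fails
            return False
        if h1 > h2:                       # relabel so u1 keeps the lighter edge
            a, b, h1, h2 = b, a, h2, h1
        H = G.copy()
        H.remove_node(u)
        H.add_edge('u1', a, capacity=h1)  # split u into u1 and u2
        H.add_edge('u2', b, capacity=h2)
        cut_value, _ = nx.minimum_cut(H, 'u1', 'u2')
        return cut_value == h1            # condition 2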
Basic Subtraction
Date: 06/08/2001 at 19:09:04
From: nicholas
Subject: Basic Subtraction
I need help with basic subtraction, like 15 - 9 =_____ ?
Date: 06/08/2001 at 22:35:25
From: Doctor Peterson
Subject: Re: Basic Subtraction
Hi, Nicholas.
The main thing you need is just a lot of practice, until you've
memorized the whole addition table. If you know that 9 + 6 = 15, you
can remember that 15 - 9 = 6, because that's just another way to say
the same thing. If you don't know that fact, then you have to work a
lot harder.
There are ways to work these things out, though, until you've learned
it all well enough. For example, you can picture this problem on a
number line:
      9   10  11  12  13  14  15
... --+---+---+---+---+---+---+-- ...
I know that 9 is less than 10; and in particular, I know that it is 1
less than 10, because I've made sure to learn all the "_ + _ = 10"
facts. I also know that 15 is 5 more than 10, because that's what 15
means. So to get from 9 to 15, I can first take 1 step to 10, and then
5 more to 15, making a total of 6.
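Written out in one line, that's:

    15 - 9 = (10 - 9) + (15 - 10) = 1 + 5 = 6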
That's a lot of thinking for one subtraction, so it won't be too fast.
But knowing you can work out the answer when you're not sure can make
you a lot more confident about math! It also gives you a chance to get
to know numbers better, by playing little games like this with them,
and that's good for your understanding of math.
You might find it interesting to take a set of subtraction flash
cards, and put in one pile the facts that are easy (like 15 - 10), and
in another the ones you have to memorize. You'll be surprised how many
easy ones there are. You can also look for ways to use the easy ones
to do the hard ones, as I just did. Then you'll probably be left with
a few you'll just have to memorize; do that and you're on your way.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/59044.html","timestamp":"2014-04-19T08:09:40Z","content_type":null,"content_length":"6765","record_id":"<urn:uuid:24e23d7b-cbc1-4705-8615-be6f884449f8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
h2g2 - Pi - Edited Entry
Created | Updated Apr 22, 2012
No, this isn't an entry about apple pie, or cream pie, or any other sort of pie you can think of. It's an entry about pi^1, the irrational number which is a vital part of mathematics. Pi is defined
as the circumference of any circle divided by its diameter.
Irrational Number?
An irrational number is one that cannot be written in the form p/q, where both p and q are whole numbers. For example, 1/2, 3/4, and 10971/182936 are all rational numbers, whereas pi, otherwise known
as π, is not. All irrational numbers have an infinite number of digits after the decimal point, and these digits never form a repeating pattern^2.
How do I Calculate Pi?
From the definition, π can be found by taking a circle (any circle at all) and dividing the length of its circumference^3 by the length of its diameter^4. This will give you a value close to the
number stated below, depending on how accurate your measurements are.
What Exactly is the Value of π?
π, to 2,000 decimal places, is:
Of course, that's not where it ends. π continues for an infinite number of decimal places, and there is no pattern to the order of the digits. Any pattern that you may think is emerging is just an
illusion, and will quickly disappear to be replaced by another 'pattern'. For ease of mathematical calculations, π is usually reduced to 3.142, or 3.14159 for a more accurate result.
The value of π has been calculated to 1.24 trillion decimal places so far, by Professor Yasumasa Kanada, his team, and some amazing computing. However, the digits of π still seem to occur randomly,
meaning that we are no nearer predicting what the next digit will be than were the Ancient Greeks.
The History of Pi
One well-known reference to the value of π is in the Bible:
And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it about.
- I Kings 7:23
This refers to a list of specifications for the Temple of Solomon, which was built around 950 BC, and gives the value of π = 3. Even for its day, this was quite an inaccurate figure, as the ancient
Egyptians and Mesopotamians are believed to have calculated π as 25/8 = 3.125 before this date.
The first attempt to actually calculate, rather than measure, π seems to have been by Archimedes in the 3rd Century BC. He came up with the inequality 223/71 < π < 22/7, by working with many-sided
polygons rather than circles. Estimating the average of these two boundaries, we can come up with the value of π = 3.1418.
After Archimedes' success, others used the same method to calculate π to more and more decimal places. By 1600 AD, Van Ceulen had calculated the first 36 digits of π. Through this time, no changes to
the method were made, just more and more calculations carried out.
During the Renaissance in Europe, more algorithms^5 were proposed for calculating the value of π, the most well-known of these being π/4 = 1 - 1/3 + 1/5 - 1/7 + ... This is attributed variously to
Leibniz or Gregory, and was proposed at some point during the late 17th Century, and is an accurate method of calculating π to as many decimal places as you can be bothered to work out; however it is
hugely labour-intensive as you need about 10,000 terms of the series to work out π to just four decimal places.
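To make that slow convergence concrete, here is a small Python sketch (an illustration, not part of the original entry):

    def leibniz_pi(terms):
        # Partial sum of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
        total = 0.0
        for k in range(terms):
            total += (-1) ** k / (2 * k + 1)
        return 4 * total

    print(leibniz_pi(10000))  # about 3.14149: only four decimal places
                              # correct even after 10,000 terms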
Gregory later came up with a more useful series, using properties of tan to calculate π, and using this result the number of known decimal places of π shot up. It was proved in 1761 that π was
irrational, and in 1873 π was calculated to 707 decimal places. It was later discovered, however, that the last 180 were incorrect^6.
In 1949, π was calculated to 2,000 decimal places with the help of one of the first computers, and since then computers have been used to increase the known digits of π into the trillions.
What Use is Pi?
π, to a few decimal places, is highly useful in mathematics, and in construction, and anywhere else that accurate measurements of circles are needed. Increasing numbers of decimal places are used
depending on the accuracy needed, but really accurate values of π are not needed for any real-world applications, and more than 10 decimal places would be unlikely to be necessary. Working out the
value of π is more useful for developing computer systems capable of the task, which could then be used for other purposes.
π is also used when measuring angles in radians^7. There are 360/(2π) degrees in one radian, and so measurements in degrees can be converted to radians and vice versa. Radians are more commonly used
than degrees for advanced mathematics, with 2π radians being a full circle, and π/2 radians being a right-angle.
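The conversion is a one-line formula in each direction; a quick Python sketch:

    import math

    def degrees_to_radians(d):
        return d * (2 * math.pi) / 360

    def radians_to_degrees(r):
        return r * 360 / (2 * math.pi)

    print(degrees_to_radians(90))        # 1.5707..., i.e. pi/2, a right angle
    print(radians_to_degrees(math.pi))   # 180.0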
π has applications in cryptography, as it is useful for generating random keys for ciphers such as the Vigenère cipher.
A game that can be played with π is finding a book or website that gives π to as many decimal places as possible, and then trying to find your phone number, birth date, and interesting combinations
of numbers (like 123456) within the digits. There is even a website called PiSearch which will do this for you.
Memorising π to as many decimal places as possible has become a hobby for some people, and the current official record for memorising π is 42,195 digits, set by Hiroyuki Goto in 1995.
The Reciprocal of Pi
The reciprocal of π is the number y, such that yπ = 1, or to put it another way, y = 1/π. This is useful in calculations that include π as the denominator of a fraction - rather than dividing by π we
just multiply by the reciprocal. This reciprocal is 0.318310 to 6 decimal places.
So How Do I Memorise Pi?
There are many different memory techniques (otherwise known as mnemonics) used, most of which can be used to remember any sequence at all, from your shopping list to phone numbers to π. These include
taking an imaginary walk around your house, and seeing various objects that have a connection with the thing you're trying to remember. For example, you could relate each digit with something that
rhymes with it, like a nun for one, a shoe for two, a tree for three, etc. This can be a very effective method for remembering sequences.
There are other methods that relate strictly to π. One of these is that the first 13 digits seem to rhyme (three-point-one-four-one-five-nine, two-six-five-three-five-eight-nine) but there is only so
far you can get with that technique. Another method is using any of the following sayings:
How I wish I could determine
In circle round
The exact relation
Lindemann^8 found.
How I wish I
Could calculate Pi
How I want a drink,
alcoholic of course,
after all those chapters
involving quantum mechanics.
in which the number of letters in each word indicates the digit, but again this method is limited.
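Decoding such a saying is itself mechanical: count the letters in each word. A quick Python sketch:

    phrase = ("How I want a drink, alcoholic of course, "
              "after all those chapters involving quantum mechanics.")
    digits = [sum(ch.isalpha() for ch in word) for word in phrase.split()]
    print(digits)   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]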
Another phrase can be used to memorise the reciprocal of π:
Can I remember the reciprocal?
where again, the number of letters in each word relates to the digit.
In the end, if you really want to memorise π, it's up to you to find the technique which suits you and allows you to remember the most digits. Good luck!
^1 Pronounced 'pie' in English, and 'pee' in most other languages.
^2 Although irrational numbers with non-repeating patterns can be constructed, such as 0.12345678910111213...
^3 All the way around the circle.
^4 From one side of the circle to the other, passing through the center.
^5 An algorithm is a set of rules or processes to follow in order to solve a problem.
^6 It is worth pointing out at this stage that all calculations were done by hand, without the aid of calculators and computers, which hadn't yet been invented.
^7 A radian is defined by the Merriam-Webster dictionary as being a unit of plane angular measurement that is equal to the angle at the center of a circle subtended by an arc equal in length to the radius.
^8 Lindemann proved in 1882 that π was transcendental, meaning that it isn't a root of any polynomial equation with integer (whole number) coefficients. A consequence of this is that it is impossible to construct a square equal in area to a given circle using only compass and straight-edge, one of the most sought-after constructions in history.
This tutorial introduces you to the concepts and practice of statistics in Analytical Chemistry, by presenting problems in data analysis and demonstrating the steps to solve them. It is modular, so
that you can either start at the beginning and continue sequentially, or skip material you are already familiar with. The tutorial includes the use of Microsoft Excel™ to perform the various
operations, and is illustrated with numerous screen captures and problem-solving assignments.
The tutorial begins with an introduction to the use of Microsoft Excel™ to perform statistical analyses. It then covers Basic Statistics, which discusses concepts like mean and standard deviation,
but also introduces you to other important statistical concepts. The section on Linear Regression explains the use of calibration curves, and finally, the section on Data Comparison and Evaluation
discusses the t-tests and F-tests.
To begin the tutorial, start with the basics of Microsoft Excel™
© 2006 Dr. David C. Stone & Jon Ellis, Chemistry, University of Toronto | {"url":"http://www.chem.utoronto.ca/coursenotes/analsci/StatsTutorial/","timestamp":"2014-04-17T09:49:39Z","content_type":null,"content_length":"4055","record_id":"<urn:uuid:3b61934f-ed91-4cb5-b553-66579b255493>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Fractions Puzzles Algebra - Saxon, Adventures with Logic, Painless Fractions, Math Magic, Workbooks by Ideal / Instructional Fair / Hayes
From: Lurlee in MO (75.221.133.9)
Subject: Math Fractions Puzzles Algebra - Saxon, Adventures with Logic, Painless Fractions, Math Magic, Workbooks by Ideal / Instructional Fair / Hayes
Date: October 7, 2012 at 9:04 am PST
From a non smoking home. No writing or tears unless indicated.
I take paypal, money orders, and checks.
Email lurlee@gmail.com if you are interested.
Math The Easy Way by Barron’s Educational Series, Answers in the Back
All the Essential in One Clear Volume
~ Basic Arithmetic, Fractions, Decimals, Percents, Tables & Graphs, Word Problems
~ Elementary Algebra, Geometry, Statistics & Probability
~ Diagrams, Study Tips, Cartoon Illustrations
232p oversized sb, 3rd Edition, vg, edgewear, no writing, $7.50 ppd
Math Magic by “The Human Calculator” Scott Flansburg
How to Master Everyday Math Problems with answers in the back
Don’t live in fear of math any longer. Makes math easy and fun!
342p sb, Revised Edition, vg, edgewear, $6.50 ppd
Cliffs Quick Review of Basic Math and Pre-Algebra
The essential fast from the experts at CliffsNotes
Complete coverage of core concepts
Accessible topic by topic organization
Pocket guide for easy reference
168p sb, vg, shelf/edgewear, $4.50 ppd
~ Saxon 65 Student, 1st edition Hardback
G-, tight binding, pencil answers Lessons 1-30, torn out pgs 115-126, $6.50 ppd
Grade 5-7
Adventures with Logic, Feron Teacher Aids – Workbook with Answer Key in back
Master critical & creative logic skill with challenging exercises
Develop skills – critical thinking, inference, classification, sequencing, and creative thinking
Word analogies and building sets tickle the funny bone as well as the brain
60p workbook, 11pg torn out, $4.50 ppd
Grade 7
Hayes Mastery Drills in Mathematics for Grade 7 - Workbook with Answers in back
96p workbook, almost like new, name inside, slight edgewear, $4.50
Grades 7-8
Math Discoveries about Fractions & Decimals Workbook with Solutions for Teachers by Ideal
To demystify fractions, decimals, and percents - Based on NCTM Standards
54p workbook, vg, name inside, $5.50 ppd
100 Math Practice Activities with Answers by Instructional Fair
Provides thousands of practice problems and addresses only one basic skill on each page
Place value, rounding whole numbers, addition, subtraction, multiplication, division, fractions
Decimals, metric, scientific notations, prime factors, ratios, percents, sales tax, integers, more
128p workbook like new, $7.50 ppd
Grades 8
Hayes Mastery Drills in Mathematics for Grade 8 - Workbook with Answers in back
96p workbook, almost like new, name inside, slight edgewear, $4.50
Middle School
Painless Fractions by Alyece B. Cummings
If you think fractions are both dull and difficult, open this book and think again
Guides you through the steps of working out fractions and do it painlessly!
Word Problems and Answers at the end of each chapter
210p sb, vg, creases on cover, some dog-eared pages, $6.50 ppd
High School
Algebra 3 -Contemporary’s Number Power Workbook with Answers by Robert Mitchell
Building Number Power ~ Intro to Algebra, Signed Numbers, Powers & Roots,
Algebraic Expressions, Equations, Rectangular Coordinates, Polynomials, Review Test
Using Number Power ~ Apply algebraic skills in real-life situations
170p workbook, g, slight shelf/edgewear, sticker on cover, no writing, $4.50 ppd
Integrated Arithmetic & Basic Algebra, 2nd Ed, by Bill Jordan & William Palow
Recommended for remediation in mathematics
808p oversized sb, vg, shelf/edgewear, marks on pg edges & back cover, $18 ppd
Basic College Mathematics, 4th Edition
The author of Basic College Mathematics, 4th Edition is Ignacio Bello. Basic College Mathematics, 4e will be a review of fundamental math concepts for some students and may break new ground for others. Nevertheless, students of all backgrounds will be delighted to find a refreshing book that appeals to all learning styles and reaches out to diverse students. Through down-to-earth explanations, patient skill-building, and exceptionally interesting and realistic applications, this worktext will empower students to learn and master mathematics in the real world. Bello has written a textbook with math-anxious students in mind to combat the issue of student motivation, something that instructors face with each class. The addition of Green Math examples and applications expands Bello's reach into current, timely topics. This title is available at BookMoving in Ignacio Bello's eBooks.
Collective Monte Carlo updating for spin systems
Results 1 - 10 of 87
, 1996
"... For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov
chain whose stationary distribution is the desired distribution on this set; after the Markov chain has ..."
Cited by 406 (13 self)
For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain
whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain
approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and
that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled
chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm.
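As a toy illustration of this 'from the past' idea (my own sketch, not code from the paper): exact sampling from the uniform distribution on {0, 1, 2} using a monotone coupling, doubling the starting time until the extreme chains coalesce:

    import random

    def update(x, u):
        # One monotone step: up if u < 0.5, else down, clamped to {0, 1, 2}
        return min(x + 1, 2) if u < 0.5 else max(x - 1, 0)

    def cftp():
        us = []          # shared randomness, reused as we restart further back
        T = 1
        while True:
            while len(us) < T:
                us.append(random.random())
            hi, lo = 2, 0
            for t in range(T - 1, -1, -1):   # run from time -T up to time 0
                hi = update(hi, us[t])
                lo = update(lo, us[t])
            if hi == lo:                     # coalesced: an exact sample
                return hi
            T *= 2

    print([cftp() for _ in range(10)])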
- PAMI , 2005
"... Many vision tasks can be formulated as graph partition problems that minimize energy functions. For such problems, the Gibbs... ..."
, 2006
"... Abstract. The class of random-cluster models is a unification of a variety of stochastic processes of significance for probability and statistical physics, including percolation, Ising, and
Potts models; in addition, their study has impact on the theory of certain random combinatorial structures, an ..."
Cited by 43 (18 self)
Abstract. The class of random-cluster models is a unification of a variety of stochastic processes of significance for probability and statistical physics, including percolation, Ising, and Potts
models; in addition, their study has impact on the theory of certain random combinatorial structures, and of electrical networks. Much (but not all) of the physical theory of Ising/Potts models is
best implemented in the context of the random-cluster representation. This systematic summary of random-cluster models includes accounts of the fundamental methods and inequalities, the uniqueness
and specification of infinite-volume measures, the existence and nature of the phase transition, and the structure of the subcritical and supercritical phases. The theory for two-dimensional lattices
is better developed than for three and more dimensions. There is a rich collection of open problems, including some of substantial significance for the general area of disordered systems, and these
are highlighted when encountered. Amongst the major open questions, there is the problem of ascertaining the exact nature of the phase transition for general values of the cluster-weighting factor q,
and the problem of proving that the critical random-cluster model in two
- Oxford Master Series in Physics, 2006
"... The author provides this version of this manuscript with the primary intention of making the text accessible electronically—through web searches and for browsing and study on computers. Oxford
University Press retains ownership of the copyright. Hard-copy printing, in particular, is subject to the s ..."
Cited by 31 (2 self)
The author provides this version of this manuscript with the primary intention of making the text accessible electronically—through web searches and for browsing and study on computers. Oxford
University Press retains ownership of the copyright. Hard-copy printing, in particular, is subject to the same copyright rules as they would be for a printed book. CLARENDON PRESS. OXFORD
"... We consider the mixing properties o the Swendsen-Wang process or the 2-state Potts model or Ising model, on the complete n vertex graph Kn and for the Q-state model on an a x n grid where a is
bounded as n -- . ..."
Cited by 28 (0 self)
We consider the mixing properties of the Swendsen-Wang process for the 2-state Potts model, or Ising model, on the complete n vertex graph Kn and for the Q-state model on an a x n grid where a is bounded as n tends to infinity.
- J. Stat. Phys , 1996
"... In this paper we examine a number of models that generate random fractals. The models are studied using the tools of computational complexity theory from the perspective of parallel computation.
Diffusion limited aggregation and several widely used algorithms for equilibrating the Ising model ar ..."
Cited by 16 (6 self)
In this paper we examine a number of models that generate random fractals. The models are studied using the tools of computational complexity theory from the perspective of parallel computation.
Diffusion limited aggregation and several widely used algorithms for equilibrating the Ising model are shown to be highly sequential; it is unlikely they can be simulated efficiently in parallel.
This is in contrast to Mandelbrot percolation that can be simulated in constant parallel time. Our research helps shed light on the intrinsic complexity of these models relative to each other and to
different growth processes that have been recently studied using complexity theory. In addition, the results may serve as a guide to simulation physics. Keywords: Cluster algorithms, computational
complexity, diffusion limited aggregation, Ising model, Metropolis algorithm, P-completeness
- Rev. E , 1996
"... The invaded cluster algorithm, a method for simulating phase transitions, is described in detail. Theoretical, albeit nonrigorous, justification of the method is presented and the algorithm is
applied to Potts models in two and three dimensions. The algorithm is shown to be useful for both first-ord ..."
Cited by 14 (3 self)
The invaded cluster algorithm, a method for simulating phase transitions, is described in detail. Theoretical, albeit nonrigorous, justification of the method is presented and the algorithm is
applied to Potts models in two and three dimensions. The algorithm is shown to be useful for both first-order and continuous transitions and evidently provides an efficient way to distinguish between
these possibilities. The dynamic properties of the invaded cluster algorithm are studied. Numerical evidence suggests that the algorithm has no critical slowing for Ising models.
[S1063-651X(96)09208-2] PACS number(s): 05.50.+q, 64.60.Fr, 75.10.Hk, 02.70.Lq I.
- Phys. Rev. Lett , 1995
"... A new cluster algorithm based on invasion percolation is described. The algorithm samples the critical point of a spin system without a priori knowledge of the critical temperature and provides
an efficient way to determine the critical temperature and other observables in the critical region. The m ..."
Cited by 11 (4 self)
A new cluster algorithm based on invasion percolation is described. The algorithm samples the critical point of a spin system without a priori knowledge of the critical temperature and provides an
efficient way to determine the critical temperature and other observables in the critical region. The method is illustrated for the two- and three-dimensional Ising models. The algorithm equilibrates
spin configurations much faster than the closely related Swendsen-Wang algorithm. Enormous improvements in simulating systems near critical points have been achieved by using cluster algorithms [1,2]. In the present paper we describe a new cluster method which has the additional property of 'self-organized criticality.' In particular, the method can be used to sample the
cluster algorithms [1,2]. In the present paper we describe a new cluster method which has the additional property of ‘self-organized criticality. ’ In particular, the method can be used to sample the
critical region of various spin models without the need to fine tune any parameters (or know them in advance). Here, as in other cluster algorithms, bond clusters play a pivotal role in a Markov
process where successive spin configurations | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1802873","timestamp":"2014-04-16T08:55:28Z","content_type":null,"content_length":"35275","record_id":"<urn:uuid:0962f90b-1e84-48d3-9a2a-1afd51da5048>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solution of Discrete Mathematics PDF by Rosen, 7th Edition
Discrete Mathematics and Its Applications
English | 1072 pages | ISBN-10: 0073383090 | PDF | 9.66 MB
Discrete Mathematics and its Applications, Seventh Edition, is intended for one- or two-term introductory discrete mathematics courses taken by students from a wide variety of majors, including
computer science, mathematics, and engineering. This renowned best-selling text, which has been used at over 500 institutions around the world, gives a focused introduction to the primary themes in a
discrete mathematics course and demonstrates the relevance and practicality of discrete mathematics to a wide variety of real-world applications, from computer science to data networking, to
psychology, to chemistry, to engineering, to linguistics, to biology, to business, and to many other important fields. | {"url":"http://www.allcandl.org/avax/solution-of-discrete-mathematics-pdf-by-rosen-th-edition","timestamp":"2014-04-17T00:55:19Z","content_type":null,"content_length":"64809","record_id":"<urn:uuid:231c4dcb-ffa1-493d-859c-0822b0524eb9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stochastic process questions....
April 10th 2010, 09:23 AM #1
Does anyone know how to solve the below?
A tv controller needs 2 batteries to be operational. Suppose that, in addition to the tv controller, we have a set of 12 functioning batteries (battery 1, battery 2, and so forth). Initially,
we put batteries 1 and 2 in the tv controller, leaving 10 spare batteries.
Whenever a battery (in the tv controller) fails, we immediately replace the failed battery by the lowest-numbered functioning battery that has not yet been put in use. Suppose that the batteries
remain like new until they are installed in the tv controller. Suppose that the lifetimes of
the different batteries (in use in tv controller) are independent random variables, each with an exponential distribution having mean 4 months (independently of how the tv controller is used,
even though that may be unrealistic).
Let T be the time that the tv controller ceases to work, i.e., the time that a working battery fails, causing the tv controller not to work, and Olga's stockpile of spares is empty. At that moment,
exactly one of the 12 original batteries (which we will call battery N) will not yet have failed. (It will be the one working battery in the tv controller, even though the tv controller no longer works.)
(a) What is the expected value of T?
(b) What is the variance of T?
(c) What is P(N = 12)?
(d) What is P(N = 1)?
(e) What is the probability that exactly 5 batteries have failed during the first 8 months?
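(Not from the thread: a minimal Monte Carlo sketch in Python, under the stated assumption of i.i.d. Exponential(mean 4) lifetimes; the function name and parameters are my own. It estimates E[T], Var(T), and P(N = 12) by direct simulation.)

import random

def simulate(n_batteries=12, mean_life=4.0, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = total_sq = wins_12 = 0.0
    for _ in range(trials):
        # lifetime of battery k, measured from its installation time
        lives = [rng.expovariate(1.0 / mean_life) for _ in range(n_batteries)]
        in_use = {0: lives[0], 1: lives[1]}   # battery index -> failure time
        next_spare, t = 2, 0.0
        while True:
            k = min(in_use, key=in_use.get)   # next battery to fail
            t = in_use.pop(k)                 # time of that failure
            if next_spare < n_batteries:
                in_use[next_spare] = t + lives[next_spare]
                next_spare += 1
            else:
                break                          # no spare left: T = t
        (survivor,) = in_use                   # battery N (0-based index)
        total += t
        total_sq += t * t
        wins_12 += (survivor == n_batteries - 1)
    mean = total / trials
    return mean, total_sq / trials - mean ** 2, wins_12 / trials

print(simulate())  # roughly (22.0, 44.0, 0.5) under these assumptions

(By memorylessness, while two batteries are in use the time to the next failure is Exponential with mean 2 months, and T is the sum of 11 such interarrival times; the simulated mean and variance reflect that.)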
| {"url":"http://mathhelpforum.com/advanced-statistics/138320-stochastic-process-questions.html","timestamp":"2014-04-17T15:56:28Z","content_type":null,"content_length":"32616","record_id":"<urn:uuid:b9813790-160e-49c5-b239-0b70c8d5882b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Breakdown of linear response in the presence of bifurcations
Seminar Room 1, Newton Institute
(Joint with: M. Benedicks and D. Schnellmann) Many interesting dynamical systems possess a unique SRB ("physical") measure, which behaves well with respect to Lebesgue measure. Given a smooth
one-parameter family of dynamical systems f_t, it is natural to ask whether the SRB measure depends smoothly on the parameter t. If the f_t are smooth hyperbolic diffeomorphisms (which are structurally
stable), the SRB measure depends differentiably on the parameter t, and its derivative is given by a "linear response" formula (Ruelle, 1997). When bifurcations are present and structural stability
does not hold, linear response may break down. This was first observed for piecewise expanding interval maps, where linear response holds for tangential families, but where a modulus of continuity t
log t may be attained for transversal families (Baladi-Smania, 2008). The case of smooth unimodal maps is much more delicate. Ruelle (Misiurewicz case, 2009) and Baladi-Smania (slow recurrence case,
2012) obtained linear response for fully tangential families (confined within a topological class). The talk will be nontechnical and most of it will be devoted to motivation and history. We also aim
to present our new results on the transversal smooth unimodal case (including the quadratic family), where we obtain Hölder upper and lower bounds (in the sense of Whitney, along suitable classes of parameters).
| {"url":"http://www.newton.ac.uk/programmes/MFE/seminars/2013111210001.html","timestamp":"2014-04-17T21:34:02Z","content_type":null,"content_length":"6847","record_id":"<urn:uuid:2580cd46-b213-4aa0-9991-c5350e64d02a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
The Gemini Nebula
Posted 31 January 2005, 13:38
A variation on La Monte Young’s The Prime Time Twins in the Ranges 576 to 448; 144 to 112; 72 to 56; 36 to 28; with the Range Limits 576, 448, 288, 224, 144, 56, and 28. An electronic piece written
and realized with Scala, blue, and Csound.
Dedicated to La Monte Young in his 70th year.
Duration: 7 minutes 30 seconds.
Background & Technical Details
Young’s Prime Time Twins is one of his continuous sine-tone installations. The twins of the title refer to pairs of numbers called “twin primes”: prime numbers that have a difference of two, such as
137 and 139. Young treats the set of twin primes listed in his title as overtones above a subsonic fundamental at 7.5 Hz. The piece consists of these ten pairs of pitches, which cover a five octave
range, combined with seven other pitches (multiples of the seventh and ninth partials, the “range limits” of the title). The fundamental does not appear in any octave, but is implied by the resulting
combination tones.
In preparing for my piece, I converted the PTT numbers into ratios, essentially reducing them to intervals within a single octave. Then I used Scala to gather these ratios, along with 9/8 and 7/4,
into a “scale” (linked below). The fascinating thing when one considers the notes in this way is that it reveals very clearly that the PTTs are grouped into two tight clusters or ranges at the high
and low ends of an octave: five pairs are located between 1/1 and 9/8, and the other five pairs are located between 7/4 and 2/1. Here is a table of the PTT “scale”, in ascending pitch order:
Ratio Cents Interval
1/1 0.000 unison
521/512 30.167 521-523 twins
523/512 36.801
269/256 85.755 269-271 twins
271/256 98.579
137/128 117.638 137-139 twins
139/128 142.729
281/256 161.312 281-283 twins
283/256 173.590
569/512 182.742 569-571 twins
571/512 188.816
9/8 203.910 major whole tone
7/4 968.826 harmonic seventh
227/128 991.858 227-229 twins
229/128 1007.045
461/256 1018.348 461-463 twins
463/256 1025.842
29/16 1029.577 bottom of 29-31 twins
59/32 1059.172 bottom of 59-61 twins
239/128 1081.040 239-241 twins
241/128 1095.467
61/32 1116.885 top of 59-61 twins
31/16 1145.036 top of 29-31 twins
2/1 1200.000 octave
In my piece, I use all of these pitches within the octave that starts at 240 Hz. The 1/1 and 2/1 are used as drones, as are 9/8 and 7/4, together serving as what Young calls range limits. The other
tones enter gradually from low to high within the limits, and then gradually leave. As the texture thickens, the beating between tones forms a complex rhythmic pattern. Each pair of twins is played
in stereo, with the pair members on opposite sides, which adds the element of binaural beating. All of the tones are simple sine waves, and no effects are used. The piece was composed using Steven
Yi’s excellent program blue, which allowed me to work directly with the PTT scale I made in Scala.
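(A small check I added, not part of the original realization: the Python script below reproduces the cents column in the table above and reports the frequency difference between the members of each twin pair in the 240 Hz octave used here, the source of the beating patterns described above. All names are my own.)

from fractions import Fraction
from math import log2

BASE = 240.0  # the piece uses the octave starting at 240 Hz

TWINS = [(521, 523), (269, 271), (137, 139), (281, 283), (569, 571),
         (227, 229), (461, 463), (29, 31), (59, 61), (239, 241)]

def ratio_in_octave(p):
    # reduce overtone number p by octaves into [1, 2)
    r = Fraction(p)
    while r >= 2:
        r /= 2
    return r

def cents(r):
    return 1200 * log2(float(r))

for lo, hi in TWINS:
    a, b = ratio_in_octave(lo), ratio_in_octave(hi)
    diff = (float(b) - float(a)) * BASE  # rate of beating between pair members, Hz
    print(f"{lo}-{hi}: {cents(a):8.3f} / {cents(b):8.3f} cents, {diff:.4f} Hz apart")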
The title The Gemini Nebula has several derivations. Gemini, of course, is a reference to twins. I used the word nebula because one of the effects produced by the piece reminds me of the “clouds” in
the piano music of Young and Michael Harrison, but since I recently used the word “cloud” for another piece, I decided to use a related word. (By the way, it turns out that there really is an
astronomical object called the Gemini Nebula.)
As with my previous piece that takes off from one of La Monte Young’s sine-tone works, I relied on Kyle Gann’s article “The Outer Edge of Consonance: Snapshots from the Evolution of La Monte Young’s
Tuning Installations”.
Copyright & Licensing
Copyright © 2005, Dave Seidel. Some rights reserved. This work is licensed under a Creative Commons Attribution License.
MP3 (18MB)
blue project (text, 11KB)
Csound unified score file (text, 2KB)
Scala Prime Time Twins scale (text, 307B)
drones, just intonation, la monte young
| {"url":"http://mysterybear.net/article/9/the-gemini-nebula","timestamp":"2014-04-20T18:47:04Z","content_type":null,"content_length":"12279","record_id":"<urn:uuid:8d76c949-1809-4782-b6b6-7dc614f2048e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Center for the Mathematics of Information and the Institute for Quantum Information Seminar
Fourier sparsity, spectral norm, and the Log-rank conjecture
Shengyu Zhang, Department of Computer Science and Engineering, The Chinese University of Hong Kong
We study Boolean functions with sparse Fourier coefficients or small spectral norm, and show their applications to the Log-rank Conjecture for XOR functions $f(x \oplus y)$, a fairly large class of functions including well-studied ones such as Equality and Hamming Distance. The rank of the communication matrix $M_f$ for such functions is exactly the Fourier sparsity of $f$. Let $d$ be the $\mathbb{F}_2$-degree of $f$ and $D(f)$ stand for the deterministic communication complexity of $f(x \oplus y)$. We show that 1. $D(f) = O(2^{d^2/2} \log^{d-2} \|\hat f\|_1)$. In particular, the Log-rank conjecture holds for XOR functions with constant $\mathbb{F}_2$-degree. 2. $D(f) = O(d \|\hat f\|_1) = O(\sqrt{\mathrm{rank}(M_f)} \log \mathrm{rank}(M_f))$. We obtain our results through a degree-reduction protocol based on a variant of polynomial rank, and actually conjecture that its communication cost is already $\log^{O(1)} \mathrm{rank}(M_f)$. The above bounds also hold for the parity decision tree complexity of $f$, a measure that is no less than the communication complexity (up to a factor of 2).

Along the way we also show several structural results about Boolean functions with small $\mathbb{F}_2$-degree or small spectral norm, which could be of independent interest. For functions $f$ with constant $\mathbb{F}_2$-degree: 1) $f$ can be written as the summation of quasi-polynomially many indicator functions of subspaces with $\pm$-signs, improving the previous doubly exponential upper bound by Green and Sanders; 2) being sparse in the Fourier domain is polynomially equivalent to having a small parity decision tree complexity; 3) $f$ depends only on $\mathrm{polylog}\, \|\hat f\|_1$ linear functions of the input variables. For functions $f$ with small spectral norm: 1) there is an affine subspace with co-dimension $O(\|\hat f\|_1)$ on which $f$ is a constant; 2) there is a parity decision tree with depth $O(\|\hat f\|_1 \log \|\hat f\|_0)$. | {"url":"http://www.caltech.edu/content/center-mathmatics-information-and-institute-quantum-information-seminar","timestamp":"2014-04-17T00:51:02Z","content_type":null,"content_length":"52506","record_id":"<urn:uuid:1ba95522-a1f1-44bf-bafa-7cd29b3699ba>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
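(Not from the talk: a concrete note on the quantities in the abstract. The Fourier sparsity $\|\hat f\|_0$ and spectral norm $\|\hat f\|_1$ refer to the expansion of $(-1)^{f(x)}$ over the characters $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$. A brute-force Python sketch for small n, with all names my own:)

import numpy as np
from itertools import product

def fourier_coefficients(f, n):
    # Coefficients fhat(S) of (-1)^f(x) = sum_S fhat(S) * chi_S(x),
    # computed by brute force over all 2^n inputs and all 2^n sets S.
    xs = list(product((0, 1), repeat=n))
    F = np.array([(-1.0) ** f(x) for x in xs])
    coeffs = {}
    for S in product((0, 1), repeat=n):  # S as an indicator vector
        chi = np.array([(-1.0) ** sum(a & b for a, b in zip(S, x)) for x in xs])
        coeffs[S] = float(F @ chi) / 2 ** n
    return coeffs

coeffs = fourier_coefficients(lambda x: x[0] & x[1], 2)  # 2-bit AND
sparsity = sum(c != 0 for c in coeffs.values())          # ||fhat||_0 -> 4
spectral_norm = sum(abs(c) for c in coeffs.values())     # ||fhat||_1 -> 2.0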
Morton Grove ACT Tutor
Find a Morton Grove ACT Tutor
...There is nothing more rewarding as a teacher than instilling a confidence in a student which they never thought they could have in math. My expertise is tutoring any level of middle school,
high school or college mathematics. I can also help students who are preparing for the math portion of the SAT or ACT.
12 Subjects: including ACT Math, calculus, geometry, algebra 1
...I also tutored officially after school for students on campus, again in various subjects. I have been tutoring for WyzAnt for over 4 years now and very much enjoy it! I have both lived and
taught (English as foreign language) in France, and enjoy helping students from all different backgrounds.
16 Subjects: including ACT Math, English, chemistry, French
...Algebra 2 builds on the skills learned in Algebra 1 and digs further into variable mathematics. Topics include: functions and graphing (linear, quadratic, logarithmic, exponential), complex
numbers, systems of equations and inequalities, and relations. This can also include beginning trigonometry and probability and statistics.
11 Subjects: including ACT Math, calculus, geometry, algebra 1
...In addition, my background in psychology including a master's degree in it and my own research make me well-versed in different techniques and research-based approaches to improving
organization and study strategies. Currently, our school implements a structured executive functioning program for...
20 Subjects: including ACT Math, Spanish, English, writing
...With directed practice, a student can definitely improve his/her test results in a reasonable amount of time. My methods have proven to be very successful. I have a Masters degree in applied
mathematics and most coursework for a doctorate.
18 Subjects: including ACT Math, physics, GRE, calculus | {"url":"http://www.purplemath.com/Morton_Grove_ACT_tutors.php","timestamp":"2014-04-18T01:18:34Z","content_type":null,"content_length":"23826","record_id":"<urn:uuid:2fa2906b-07a9-492a-83e6-cbd446fbdb70>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Took GMATPrep CAT Practice Test 1 today. Scored a 710 which
Took GMATPrep CAT Practice Test 1 today. Scored a 710 which I was surprised because of how many problems I missed. I missed 9 in Quant and 9 in Verbal. Projection was a 49 Quant and 38 Verbal. I have
heard that people say that this test is most accurate. I have been scoring about a 680 on the
tests. I missed 5 in a row in quant and still got a 710!!! I just can't actually believe that they would let me get away with that. If anyone can verify that these projections are relatively
semi-accurate, that would really calm my nerves.
Anyways, I found this problem and, although I have found the solution, I am sure that the steps I took to get there are not the most efficient. Here is the problem.
Data Sufficiency-
If n is a positive integer and r is the remainder when (n-1)(n+1) is divided by 24, what is the value of r?
1. n is not divisible by 2
2. n is not divisible by 3
My Solution: I finally found that the numbers that are not divisible by 2 or 3 end up being primes or products of primes greater than 3. Then, I finally realized through trial and error that for
all such n, the product (n-1)(n+1) is divisible by 2x2x2x3 = 24. I have no clue why this phenomenon works and really don't care to. Is there some way to approach a problem of this nature more efficiently?
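(A quick sketch of why this works: since n is odd, n-1 and n+1 are consecutive even numbers, so one of them is divisible by 4 and their product by 8; and since exactly one of the three consecutive integers n-1, n, n+1 is divisible by 3 and it is not n, the product also picks up a factor of 3. Hence 24 | (n-1)(n+1) and r = 0. Neither statement alone suffices, e.g. n = 3 gives r = 8 and n = 4 gives r = 15, but together they force r = 0.)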
I also struggled with this one...
Q: If n and y are both positive integers and 450y = n^3, which of the following MUST be an integer?
I. y/(3x2x2x5)
II. y/(3x3x2x5)
III. y/(3x2x5x5)
I missed this and still don't know how to solve. Apparently the answer is Only "I" but I don't know why.
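(A sketch of why only I must hold: 450 = 2 · 3^2 · 5^2, and in 450y = n^3 every prime exponent on the left must be a multiple of 3, so y must carry at least 2^2 · 3 · 5 = 60; hence y/(3x2x2x5) = y/60 is always an integer. Taking y = 60, so that 450 · 60 = 27000 = 30^3, shows II and III can fail, since 60/90 and 60/150 are not integers.)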
PS: If this is posted in the wrong place... please let me know. It's my first post. | {"url":"http://gmatclub.com/forum/took-gmatprep-cat-practice-test-1-today-scored-a-710-which-83963.html?fl=similar","timestamp":"2014-04-16T10:13:36Z","content_type":null,"content_length":"169356","record_id":"<urn:uuid:6cde9293-a652-4b31-a034-11a19bc40acd>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
My recent Factor travels in Light of Haskell
Submitted by metaperl on Sat, 02/24/2007 - 3:46am.
Factor is a modern stack-based language. It has a very interactive and easy-to-use GUI and is fun to work with.
Something was itching me about using this language and it was not until I picked up The Haskell Road to Logic, Maths and Programming that I knew what was bothering me. Its author says that _Haskell_ is
a type of descriptive programming very different from the prescriptive programming that you see in Java or C.
And that's it. In Factor, one is very mechanical. putting things on the stack, duplicating stack items. Hiding things in a retain stack, etc.
I notice that a lot of Factor functions for stack shuffling scale up to 3 stack elements. Is there something magic about this number of args to a word that it would never have to shuffle on more?
A very telling example of the difference between Haskell and Factor has to do with
this page on the Factor website discussing how to determine if all elements of a sequence satisfy a predicate:
The word "all"
Tests if all elements in the sequence satisfy the predicate.
The implementation makes use of a well-known logical identity:
P[x] for all x <==> not ((not P[x]) for some x)
Let's compare the Factor and the Haskell:
: all? ( seq quot -- ? )
swap [ swap call not ] contains-with? not
all p = and . map p
The author of Factor is a very strong mathematician... what provoked him to involve himself in stackrobatics (acrobatics with a stack)?
Slava shows how Factor can be similar to the Haskell:
: all? map t [ and ] reduce ;
slava: if haskell was strict, and . map p and the above all? would be inefficient
metaperl: slava - why would someone with your depth in mathematics use a prescriptive stack-based language like Factor?
slava: why would someone with your depth in perl chat on irc?
metaperl: instead of a language more similar to mathematical notation
slava: its an illogical question
slava: because i like programming in a stack language
slava: i don't think there is any language which is like mathematical notation, and if there were, it would be unusable because mathematical notation is ambiguous, complex and only suitable for human to human communication
5:10 AM
metaperl: why was Factor developed even though Forth already existed?
slava: because they're very different languages
slava: factor is high level
slava: factor is to lisp what forth is to c, roughly | {"url":"http://sequence.complete.org/node/264","timestamp":"2014-04-17T18:54:21Z","content_type":null,"content_length":"19297","record_id":"<urn:uuid:c0bddf6d-943a-49b4-8652-3c2f611125f4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
CSE 596, Fall 2009
Introduction to the Theory of Computation
Mondays, Wednesdays, and Fridays, 1 p.m. to 1:50 p.m., 322 Clemens
Instructor. Alan L. Selman. My office is located at 223 Bell Hall; telephone 645-3180, ext. 104; e-mail: selman@cse.buffalo.edu. I will announce office hours as soon as I know my full schedule for
the semester.
Please be certain that you are registered for one of the recitation sections and attend regularly.
Text. The required text is "Computability and Complexity Theory" by Steven Homer and Alan L. Selman, Springer, NY. My Web page contains a link with much information about the text, and, therefore,
about this course.
Theory of Computation is concerned with the following kinds of questions: What is computation? What features are necessary in general models of computation? What are the limitations of computers?
What problems can computers not solve?
Complexity Theory is concerned with providing quantitative measures on the resources needed to solve computational problems. Computational complexity addresses questions about which problems can be
solved efficiently and which cannot be solved efficiently.
In this course we will study several of the more important and exciting intellectual achievements of the twentieth century, including the important theory of NP-completeness.
Prerequisites. Here is some personal advice on appropriate background for this course. You must have taken a course in discrete mathematics (or in some manner know the material that such a course
contains). Most students have difficulty with this course unless they have taken an undergraduate course in theory of computing (such as CSE 396). If you have not taken an undergraduate course in
theory of computing, I think that it would be the rare student who will do well in CSE 596 without some other prior courses that develop analytical thinking. For example, prior courses in
mathematical logic, algebra (at the level of groups, rings, and fields), or number theory, would all serve this purpose. The Preface of the text contains a similar statement.
You will need to write proofs of theorems in this course.
Study Habits. Study the text. You need to study the text outside of class in order to reinforce your understanding of the material in class. You might want to read material before coming to class and
then again in greater detail after class. Students frequently tell me that they "understand" the material even though they do not perform well on exams. The problem here is that "understanding" is
different from "learning." The former is passive and means that the student follows what I say in class. Learning is active. I recommend the following technique. Do not read as you do in other
subjects. Instead, as you read, write everything you read in your own hand. This activity forces you to dwell on and digest the details. If you get stuck on a detail and cannot figure it out, get
help. I will be glad to help as will the TA.
Homework. You will hand in homework assignments. There are extensive homework assignments, because you learn the material primarily by working at it.
Homework assignments must be legible–the TA cannot grade what they cannot read. Use a sharp pencil. Erase neatly. Do not use a pen. Staple the pages together. Please remember to print your name on
every page that you hand in.
Hand in homework to a TA by the following protocol. Homework assigned during week n (n ≥ 1) is due Friday of week n + 1. Do not attempt to hand in homework after this deadline. (If there is no class
on a Friday, then the homework is due the following Monday.) The TAs will review homework solutions during recitation sections.
What homework is due on Friday? This is the question we receive most often. Do not ask this question because it is your responsibility to know the answer. I will assign homework exercises as I cover
the material. Your job is to write the assignments down, keep a record, and hand in the solutions the following week.
Academic Honesty. All of the work that you hand in for this class must be the result of your own independent effort. All work that you claim to be yours must be yours. You must not accept solutions
to homework problems, or assistance with homework, from other students or from any other sources (other than the TA and me).
You may talk about homework problems, but when you do, you may not write solutions together. You must write your own solutions by yourself. If you are uncertain about this policy, it is safest not to
talk about homework solutions with other students. Most importantly, please seek the help you need from the TA and/or from me.
All instances of academic dishonesty will result in an F in this course. There are no minor infractions.
Once again, seek help only from the TA and from me.
Course Grade. The grade at the end of the course will be in accordance with the following table:
Homework assignments 40%
Midterm 20% Monday, October 19
Final Exam 40%
Attendance. I require regular attendance. You must come to class, and you must come on time. The class does not start at 1:10 p.m. It begins at 1:00 p.m. Students who do not attend class regularly
risk receiving a failing grade in the course. You are not required to attend class on days listed in the university calendar as major religious holy days (although I assume that you practice at most
one religion).
EaGL. The second Eastern Great Lakes Theory of Computation Workshop will meet at the Center for Tomorrow, October 3 and 4. All students are invited to participate. Please refer to the URL
for details.
End of Semester. Do not plan to travel home at the end of the semester until final exam week is over. I do not know when the final exam for this course will be scheduled, and I will not give an early
final exam to any student.
Incompletes (the grade of "I") will not in general be given. This is reserved for the rare circumstance that prevents a student from completing the work in the course. University and Department
policy dictates that an "I" can be given only if both of the following conditions are met: (i) Only a small amount of work remains, such as the final exam and one or two assignments, and (ii) the
student has a passing average in the work completed. In such a circumstance, the student will be given instructions and a deadline for completing the work, which is usually no more than 30 days past
the end of the semester.
Incompletes cannot be given as a shelter for poor grades. It is the student's responsibility to resign from the course in a timely manner if doing poorly. The last day to resign with a grade of "R"
is November 13, 2009.
Newsgroup. We will regularly post information about the course on the news group sunyab.cse.596. Check this frequently.
Course Web page. Look for solutions to the homework problems on the course Web page www.cse.buffalo.edu/courses/cse596/ | {"url":"http://www.cse.buffalo.edu/~selman/cse596/courseintro2009.htm","timestamp":"2014-04-18T10:33:41Z","content_type":null,"content_length":"19510","record_id":"<urn:uuid:3fb22e82-77b7-4d4b-a22e-e999943b09aa>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Power Control for Distributed MAC Protocols in Wireless Ad Hoc Networks
October 2008 (vol. 7 no. 10)
pp. 1169-1183
In centralized wireless networks, reducing the transmission power normally leads to higher network transport throughput. In this paper, we investigate power control in a different scenario, where the network adopts distributed MAC layer coordination mechanisms. We first consider widely adopted RTS/CTS based MAC protocols. We show that an optimal power control protocol should use higher transmission power than the "just enough" power in order to improve spatial utilization. The optimal protocol has a minimal transmission floor area of $\Theta(d_{ij} d_{\max})$, where $d_{\max}$ is the maximal transmission range and $d_{ij}$ is the link length. This surprisingly implies that if a long link is broken into several short links, then the sum of the transmission floors reserved by the short links is still comparable to that reserved by the long link. Thus, using short links does not necessarily lead to higher throughput. Another consequence of this is that, with the optimal RTS/CTS based MAC, rate control can at best provide a factor of 2 improvement in transport throughput. We then extend our results to other distributed MAC protocols which use physical carrier sensing or a busy tone as the control signal. Our simulation results show that the optimal power controlled scheme outperforms other popular MAC layer power control protocols.
Index Terms:
Wireless communication, Distributed networks
Wei Wang, Vikram Srinivasan, Kee-Chaing Chua, "Power Control for Distributed MAC Protocols in Wireless Ad Hoc Networks," IEEE Transactions on Mobile Computing, vol. 7, no. 10, pp. 1169-1183, Oct.
2008, doi:10.1109/TMC.2008.40
| {"url":"http://www.computer.org/csdl/trans/tm/2008/10/ttm2008101169-abs.html","timestamp":"2014-04-19T07:33:16Z","content_type":null,"content_length":"58820","record_id":"<urn:uuid:69adccc5-2b55-413d-a8a5-aa83b6201779>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Here is a (periodically updated) list of books and sources that I have referred to, or plan to in the future, sorted by field:
• Representation Theory: A First Course, by William Fulton and Joseph Harris
• Algebra, by Serge Lang
• Commutative Algebra: With a View Towards Algebraic Geometry, by David Eisenbud.
• Commutative Algebra, by Nicolas Bourbaki
• An introduction to Homological Algebra, by Charles Weibel
• Introduction to Commutative Algebra, by Michael Atiyah and Ian Macdonald
• Linear Representations of Finite Groups, by Jean-Pierre Serre
• Lie Groups and Lie Algebras, by Jean-Pierre Serre
• Introduction to Lie Algebras and Representation Theory, by James Humphreys
• Complex Semisimple Lie Algebras, by Jean-Pierre Serre
Algebraic geometry
• Algebraic Geometry, by Robin Hartshorne
• Elements de Géometrie Algébrique, by Alexandre Grothendieck and Jean Dieudonné
• FGA Explained, by Barbara Fantechi
• The Geometry of Schemes, by David Eisenbud and Joe Harris
• Basic Algebraic Geometry, by Igor Shafarevich
• Topologie algébrique et théorie des faisceaux, by Roger Godement
• Algebraic curves and Riemann surfaces, by Rick Miranda
• Introduction to Algebraic Geometry and Algebraic Groups, by Michel Demazure and Peter Gabriel
Differential geometry
• Differential Geometry, Lie Groups, and Symmetric Spaces, by Sigurdur Helgason
• Morse Theory, by John Milnor
• A Comprehensive Introduction to Differential Geometry, by Michael Spivak
• Foundations of Differential Geometry, by Shoshichi Kobayashi and Katsumi Nomizu
• Riemannian Geometry, by Manfredo do Carmo
• Functional Analysis, by Peter Lax
• Real and Complex Analysis, by Walter Rudin
• Singular Integrals and Differentiability Properties of Functions, by Elias Stein
• Riemann Surfaces, by Hershel Farkas and Irwin Kra
• Partial Differential Equations, by Michael Taylor
• Introduction to Partial Differential Equations, by Gerald Folland
Number theory
• Algebraic Number Theory, by Serge Lang
• Local Fields, by Jean-Pierre Serre
• Algebraic Number Theory, by John Cassels and Albrecht Frohlich, eds.
• The Arithmetic of Elliptic Curves, by Joseph Silverman
• A Course in Arithmetic, by Jean-Pierre Serre
• Introduction to Cyclotomic Fields, by Lawrence Washington
Logic and computer science
• Introduction to the Theory of Computation, by Michael Sipser
• Introduction to the Metamathematics of Algebra, by Abraham Robinson
• Lectures on the Hyperreals, by Robert Goldblatt
• Mathematical Logic, by H.D. Ebbinghaus, J. Flum, and W. Thomas
• Model Theory, an Introduction, by David Marker
• Computational Complexity, by Christos Papadimitriou
Dynamical systems and ergodic theory
• Introduction to Ergodic Theory, by Peter Walters
• Ergodic Theory, by Paul Halmos
• Introduction to the Modern Theory of Dynamical Systems, by Anatole Katok and Boris Hasselblatt
• Elements of Algebraic Topology, by James Munkres
• Algebraic Topology, by Allen Hatcher
• Algebraic Topology, by Edwin Spanier
• Algebraic Topology: Homotopy and Homology, by Robert Switzer
• Topology and Geometry, by Glen Bredon
• Topology, by James Dugundji
• General Topology, by Nicolas Bourbaki
• A Concise Course in Algebraic Topology, by Peter May
• Spectral sequences in Algebraic Topology, by Allen Hatcher
• Homotopy theory, by Sze-Tsen Hu
• Stable homotopy theory and generalized homology, by J. Frank Adams
• Characteristic Classes, by John Milnor
Online sources
Here are some online sources:
Here is a more current reading list:
• Goerss-Jardine, Simplicial Homotopy Theory
• SGA 1, 4, 4 1/2; EGA
• Kashiwara and Schapira, Categories and Sheaves
• Lurie, Higher Topos Theory
• Tamme, Introduction to Etale Cohomology
August 25, 2010 at 8:56 pm
Very impressive! I wish you the best at Harvard; please feel free to contact me (contact information generally available through the harvard facebook) if you have any questions about Harvard, or math
more generally.
August 26, 2010 at 9:45 pm
Thanks! I suppose I’ll see you around campus.
September 15, 2010 at 1:05 am
Akhil, you wrote that you read real and complex analysis. How did you find the exercises? I’ve had a crack at that book and the exercises in the first 2 chapters were pretty OK as was the 4th chapter
but the third chapter was ridiculously difficult. If you’ve done it could you offer me some hints on how to do the exercises in chapter 3 specifically exercise 5? Also what whas your insight on the
Riesz Fischer theorem?
September 26, 2010 at 9:03 am
Sorry that your comment took so long to appear—it got caught in the spam filter, and I only just saw it.
I think that, in general, Rudin’s book has nice exercises. For the one you asked about, geometrically the condition $\phi((x+y)/2) \leq (\phi(x)+\phi(y))/2$ says that on the graph of $\phi$,
given two points $(x, \phi(x)), (y, \phi(y))$, the point corresponding to the midpoint $(x+y)/2$ lies below the line drawn through the two initial points. Repeating this inductively, one can get
the desired inequality on $\phi(tx+(1-t)y)$ for $t$ a dyadic fraction, and continuity then implies it for all $t$ in the unit interval. If I am not mistaken, this is the idea of the argument
(which can also be done formally).
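(Spelled out: midpoint convexity gives $\phi(tx + (1-t)y) \le t\phi(x) + (1-t)\phi(y)$ for each dyadic $t = k/2^n$ by induction on $n$, since such a $t$ is the midpoint of two dyadic rationals with denominator $2^{n-1}$; dyadic rationals are dense in $[0,1]$, so continuity of $\phi$ extends the inequality to all $t \in [0,1]$.)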
The Riesz-Fischer theorem essentially says that each Hilbert space is isomorphic to some $\ell^2(A)$ for some $A$ (which is determined up to cardinality, and can be taken e.g. as an orthonormal
basis). Some people call different things by the same name, though.
September 29, 2010 at 11:10 pm
Hi Akhil,
Sorry I think I wasn't clear. Exercise 5 is the one about a measure $\mu$ with $\mu(X) = 1$, and the question asks to show that $\|f\|_r \le \|f\|_s$ when $r < s$. Do you have any ideas on how to
do it?
BTW, how far have you gotten into the book? Would you recommend the complex analysis portion or should I go to another book for that? Also would you recommend Rudin's functional analysis as
the next step in analysis or are there better functional analysis books on the market?
September 29, 2010 at 11:21 pm
My mistake. For this one, I believe you can apply the Hölder inequality with one of the functions being $f$ and the other being 1. The condition of measure one is necessary for one of the
factors to be one (in general, this argument shows that on a space of *finite* measure, $L^r \subset L^s$ when $r > s$ and the inclusion is continuous).
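(Spelled out: with $\mu(X) = 1$ and $0 < r < s$, Hölder's inequality with exponents $p = s/r$ and $q = s/(s-r)$ applied to $|f|^r \cdot 1$ gives

$\int |f|^r \, d\mu \le \left( \int |f|^s \, d\mu \right)^{r/s} \mu(X)^{1 - r/s} = \|f\|_s^r$,

and taking $r$-th roots yields $\|f\|_r \le \|f\|_s$.)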
I went through all of Rudin’s book at some point. I actually think the complex analysis part of the book is extremely cleanly done. It gets into some interesting material (e.g.
nontangential estimates on Poisson integrals, Hardy spaces), but also does the basic material (e.g. Cauchy’s theorem) very thoroughly and rigorously (in that case, by giving an argument
which is actually rather recent).
Rudin’s functional analysis book seemed comparatively dry to me, but it is thorough. I have enjoyed the book by Lax. Anyway, I would recommend talking to someone who actually knows
analysis for such advice, though.
September 30, 2010 at 3:05 am
Hi Akhil,
Firstly, sorry. I use a shared computer and had your website open and I think someone posted some kind of vulgarity on your website using the same computer (and I think the vulgarity
came under my name too). Hopefully the spam filter caught it or something but in case it didn’t, please ignore it.
Anyway, may I ask how long you took to do Rudin’s real and complex analysis? It’s taking me a while. I don’t want to be spending too much time on it since it might not be worth the
investment, but would something like 1 year be about par? Or should I be finishing the book quicker? What were your experiences?
When you did the book, could you do all the exercises and were they do-able? I mean, did it take you “not so long” to do most or all of them? Just interested. I’m kind of hoping I’m
not alone in my attacks at the problems that seem to come to no end! I know some very important results are there in the exercises so I want to do ‘em all so to speak but I don’t want
to be spending too much time on them. What’s your advice on how to approach them and how hard they are? I know they’re interesting but are there any exercises whose proofs go for a
page or longer?
Thanks Akhil!
September 30, 2010 at 3:08 am
BTW, you wrote “I’d recommend talking to someone who actually knows analysis for such advice, though”. Surely this is modesty! I mean I’ve heard professors in mathematics say that completing Rudin’s
book is more than what an average math student learns in his entire PhD in analysis. And I've also heard that once you've done Rudin's real and complex analysis, you could well begin reading
books that would get you to a point of research in the area. Is this talk of Rudin’s book being “so great” just over-emphasising the point or are you just being modest? ;)
September 30, 2010 at 8:57 am
I’m pretty sure a PhD student (especially in analysis) needs to and should know much more than what’s in Rudin, which is very much an introductory textbook. I’m afraid I can’t evaluate the last
claim, because I haven’t read too many research papers in analysis.
I usually read and re-read books, so I can’t answer your question on length. I also work on the exercises in a rather piecemeal fashion. I never took a formal course on this material, but a more
systematic approach would probably have been better. Some of the exercises are quite difficult.
October 2, 2010 at 4:41 am
At Harvard, the requirement for PhD students seems to be only a knowledge of Rudin’s text for the analysis portion of the qual. exams. I get that PhD students in analysis need to know more
but would it be fair to say Rudin is only an introductory book in analysis? I mean just by way of comparison and nothing more, no university in the world requires a knowledge of analysis for
the general qual. exams more than what’s covered in Rudin.
You say Rudin’s functional analysis is a good book. Would that be research level in functional analysis or would you need to know more to do research in functional analysis?
I'm just wondering, surely there's some standard of research. I mean reading Rudin takes a while and you don't have 10 years to do a PhD! So there must be some finite list of textbooks one
should be reading in analysis to get to a point of research, considering that not all undergraduates are as well-prepared as yourself.
I understand that you’re still a student but you seem to have done a bit of mathematics in several directions. Which direction are you most experienced in and can you tell how much one should
know before doing research in that direction for example? There’s this dilemma nearly all PhD students have: It’s near impossible to do research in topic X unless you have the “right”
background but what is the right background? What’s your experiences with this and how much someone should know to do research? Thanks Akhil …
October 2, 2010 at 8:29 am
I believe Rudin is considered an introductory book in analysis in the same sense that Hartshorne is an introductory algebraic geometry book, or Spanier an intro algebraic topology book.
I.e., the material is foundational, but the prior acquaintance assumed is at the level of a intro-level undergraduate course (e.g. point-set topology, elementary real analysis);
“introductory” does not necessarily mean “easy reading.”
I’m afraid that I simply don’t know enough to answer your question about the background in functional analysis to do research. My current interests tend to be reflected in the choice of
topics (i.e., algebraic topology as I write this), but I have not done research in it, and even had I done a little I would not be qualified to comment. Perhaps you might find this MO
thread interesting, though.
October 3, 2010 at 1:40 am
OK, I understand. That seems to make sense. But can you please clarify about Hartshorne? I mean R&C and algebraic topology don’t assume much background but Hartshorne does. I’ve heard
you need to have read the tome Commutative Algebra by Eisenbud. And surely that's grad. level background.
Actually that brings me to an interesting question which I’d like to ask you. I know you read Hartshorne earlier (or started reading it). Can you tell me based on what you’ve read so
far how much background is necessary to read Hartshorne? I mean, is Atiyah and Macdonald enough for the comm. algebra? Or do you need more like Eisenbud? Basically I'm interested in
the comm. algebra background necessary but it’d also be nice if you could tell me the necessary background in the other areas.
October 3, 2010 at 8:40 am
There are definitely points in Hartshorne that require more than what’s covered in Atiyah-Macdonald (e.g. the section on Kahler differentials). I’m pretty sure that Eisenbud
includes everything you need, though. On the other hand, if you’re willing to accept the properties of Kahler differentials without proof, you can just read Hartshorne straight
away I suppose. You also need to know basic general topology and homological algebra (e.g. some familiarity with derived functors). (Note that Hartshorne is generally *not* used
as a first course on algebraic geometry, despite the fact that it opens with a chapter on varieties over alg. closed fields.)
October 3, 2010 at 1:57 am
I am a Chinese student and would like to ask you a few questions; my English is not very good, so I do not know whether I can express what I mean. On differential geometry: 1. how to find a
Griffiths-positive metric on an ample vector bundle; 2. on moduli spaces, for example the moduli space of Calabi-Yau manifolds, how to find new abstract structures on them. I do not know if you have interest
in this area; which areas of mathematics are you interested in? Are you interested in physics? For example, string theory, quantum field theory, mirror symmetry. I hope you can suggest a way to learn. My
English is not good, and I am very worried about not properly expressing what I mean. I hope to communicate with you more about mathematics or physics. I am striving to learn English now. I hope to be friends with you.
Dear Wei, my background is insufficient for me to answer your questions. Perhaps you should try asking on MathOverflow.
October 3, 2010 at 3:42 am
Why does every pure sheaf have a unique Harder-Narasimhan filtration, while a semi-stable sheaf need not have a unique Jordan-Hölder filtration? The geometry of the moduli space of sheaves is very
difficult. Can one prove the existence of a symplectic structure on the moduli space of quasi-coherent sheaves? Are you interested in hyperbolic geometric flows? For example, hyperbolic Ricci flow,
hyperbolic Kähler-Ricci flow, hyperbolic mean curvature flow, hyperbolic Calabi flow, and so on.
October 3, 2010 at 8:57 pm
(previous thread was getting skinny!)
Thanks Akhil. I appreciate your advice. But can you tell me what book you used for a
first course in alg. geo.? If you search up "alg. geo. first course" you end up with this book
with that title by some guy called “Joe Harris”. How did you find this book by Joe Harris?
Did you read it and if so how much background do you need to tackle it?
Also, is MacLane a good book for homological algebra? I see you've used Weibel, but how's
Saunders MacLane's book? Both look good but I'd appreciate your views on which is good
in which respects.
Finally, did you read A&M and then Hartshorne, or did you read A&M, Eisenbud and then
Hartshorne? I'm sometimes uncomfortable with accepting facts as true without knowing
their proofs. Did you do this with Hartshorne, or did you know all the necessary comm. algebra before reading it? If not, how did you find accepting facts without proofs in Hartshorne?
Thanks Akhil.
October 3, 2010 at 9:59 pm
I think I have used Shafarevich and James Milne’s notes (cf. the link on the page) as intro sources. I haven’t read Harris’s book, I’m afraid, but it’s probably good.
MacLane's book is on general category theory, not homological algebra (I don't think it covers derived functors, for instance). I didn't really read these books in any kind of order, but it
probably makes sense to start at least with A/M before trying Hartshorne (or any other book on schemes). I don’t think reading all of Eisenbud (which is a long book) is necessary before one
starts Hartshorne; you can always refer back as necessary.
October 4, 2010 at 7:56 am
Sorry, I think I wasn’t clear. I was referring to “Maclane’s Homology” book
in the grundlehren math series. I’m pretty sure this book covers hom.
algebra. But I think you’re right about Maclane’s “Categories for the working
mathematician” which is about category theory.
Algebraic geometry is such an interconnected discipline. Do you need to
know things like algebraic topology or complex analysis (or even number theory) to read Hartshorne? Or are the prerequisites just restricted to comm.
algebra and a graduate algebra course?
I heard of this excellent book called "Principles of Algebraic Geometry" by Griffiths and Harris. Have you attacked this book yet? ;) It looks like a real killer in terms of background math
necessary but it also looks like the holy bible of alg. geo. I don’t think a knowledge of schemes is necessary for this book but it looks pretty good on the classical side of things. You
might want to have a look at it if you haven’t already. I think it might supplement Hartshorne well since you’ve already done diff. geometry complex analysis and the like which are the only
prereqs. for the book.
Thanks for being patient with me and sorry for asking so many questions! I’m
very much a beginner in alg. geo. I’m kind of scratching my head. The subject is like this vast theory of theories and it seems like there’s no clear path to take. How do you plan to approach
alg. geo.?
October 4, 2010 at 9:01 am
Ah. Right, I haven’t read _Homology._ Ditto for Griffiths and Harris. Hartshorne doesn’t invoke any algebraic topology, but Griffiths and Harris do, as well as some several complex
variables (which is sketched at the beginning). Lack of prerequisites prevented me from reading G+H in the past, but I should take another look sometime soon. I understand it’s standard.
For algebraic geometry, I’m currently not actively studying the subject outside of class, because I am distracted with algebraic topology. If I were, though, (and I probably will be soon)
I would probably be trying to a) read Hartshorne and EGA and b) looking at other scheme-theoretic books, e.g. Ueno’s and Liu’s.
For how to learn algebraic geometry, cf. my question on MO: http://mathoverflow.net/questions/1291/a-learning-roadmap-for-algebraic-geometry
You will find answers from people immensely more qualified than I.
October 15, 2010 at 11:50 pm
Akhil, how do you work efficiently? I hear Harvard has tonnes of work to do, so how do you manage your time? Do you get any sleep?
October 17, 2010 at 10:03 pm
I wish I knew a good answer to that question!
November 23, 2010 at 3:53 pm
Hi Akhil, I'm an undergraduate student at UFPI Brazil, and I want to study complex algebraic geometry, but first I know that I should learn the basics of algebraic geometry. What good references could
you give me for a first read? I'm in the second year and have a background in elementary algebra (I studied Dummit's book up to module theory), topology and real analysis. By the way, I'm building a
blog too; I hope to see your comments there! Thanks!
November 23, 2010 at 4:21 pm
Shafarevich’s book is what we are using for algebraic geometry in my (introductory) course, though I have not read it all that carefully. I can also vouch for Fulton’s _Algebraic curves_
(available at people.reed.edu/~davidp/332/CurveBook.pdf), which I found useful in preparing for a reading course on elliptic curves some time back. Also, I recommend Milne’s online notes on
algebraic geometry.
November 23, 2010 at 8:29 pm
Thanks for the references. By the way, could you give me a good introductory text in complexity theory? Thanks again.
November 24, 2010 at 12:33 am
I know very little complexity theory. But my understanding was that Papadimitriou was the standard introduction.
February 10, 2011 at 9:47 pm
Akhil, have you heard of the book “Partial differential equations” by Lawrence Evans? It’s a really good book on PDE’s and I noticed you listed some books on the topic above, so if you’re interested
in PDE’s, you might wish to have a look at this book.
Also might be worthwhile taking a look at Gunning and Rossi’s “Several Complex Variables” at some point because of its connections with algebraic geometry. The book is very good too.
February 11, 2011 at 9:38 am
Dear David, thanks! I do intend to read those books at some point in my undergraduate education, perhaps this summer: the problem is, during the academic year, I seem to get very little time for
sustained reading on a given topic because of coursework. (I tried to read Gunning-Rossi a while back but was unsuccessful; my current plan is to start with “Coherent analytic sheaves,” which
seemed more accessible when I flipped through it.)
February 10, 2011 at 9:49 pm
I got the wrong email address in my previous comment.
April 18, 2011 at 11:09 am
Hi Akhil, that’s one impressive list…
I was wondering how you obtained all of these books. Did you actually buy them all? It seems to me that’s quite expensive to do so. Or did you borrow them at a library?
April 18, 2011 at 11:20 am
Hi Max,
I was rather fortunate to be living, when in high school, close to several university libraries.
April 25, 2011 at 10:30 pm
Another suggestion regarding analysis. Have you looked at Loukas Grafakos' Classical Fourier Analysis and Modern Fourier Analysis? These are comprehensive and the material is presented in a more
“textbook” style unlike Stein’s monographs (which are also excellent, of course). You might already be familiar with the first book of Grafakos in which case the second book is ideal. (I’m not sure
if you’ve seen the proof of the Carleson-Hunt theorem but if you haven’t, Grafakos’ Modern Fourier Analysis is great.)
But as you say, these are good books to read when you're not busy with your courses…
April 26, 2011 at 5:09 pm
Thanks for the recommendation! I’ve never read Grafakos (or, for that matter, thought about analysis anytime recently). I’ve never actually worked through the proof of Carleson-Hunt, but would
like to someday — probably I’ll have the time to catch up on things like this once I don’t have problem sets due.
May 11, 2011 at 10:56 pm
Dear Akhil,
I really liked reading your algebraic geometry notes. But I unfortunately didn't have the background in category theory, so I didn't appreciate the categorical language. (I pretended everything was for abelian groups, rings, etc.) Can you please recommend some books on category theory that you think provide sufficient background to understand your AG notes and maybe EGA later on?
May 12, 2011 at 8:47 am
Dear Marcus, EGA 0 provides a fair bit of introductory material on categories (starting with Yoneda’s lemma, fibered products, etc.). I found it very helpful. Other sources on categories that
might be helpful are MacLane’s “Categories for the Working Mathematician” and Kashiwara-Schapira’s “Categories and Sheaves.” (I recommend the last one in particular.) At least for me, though, it
was easier to learn basic category theory by seeing lots of examples, and thinking about it in the context of algebraic geometry (and algebraic topology).
For EGA, you’ll also need basic sheaf theory. (There are even points in EGA III where Grothendieck uses the Cech-to-derived-functor spectral sequence.) You can find this in Godement’s “Theorie
des faisceaux,” or Chapter II of Hartshorne. Though I should note that you really have to do things differently for general sites (in which case you don’t have stalks to reason with, and you have
to define e.g. sheafification in a more general way). Cf. for instance Tamme’s book “Introduction to Etale Cohomology.”
May 16, 2011 at 7:58 pm
Thank you Akhil! I wanted to ask you another question about Hartshorne's book: I hear from a lot of people that Hartshorne's book is very difficult to read and that the exercises are also very difficult. How much truth is there in this? I know you can only speak for yourself, but I found the exercises in Chapter 2 of Hartshorne very routine (at least for the first few sections). Is it that these exercises are much easier than the ones in the other chapters? For example, how are the Chapter 1, 4 and 5 exercises? It seems plausible that these could be harder since they are on geometry, whereas exercises on sheaves, for example, are routine but tedious. What did you think of Hartshorne and his exercises?
May 16, 2011 at 7:58 pm
Also thanks for recommending “Categories and Sheaves”. This looks like a very good book!
May 16, 2011 at 10:13 pm
I’m not sure if I’d describe most of Hartshorne’s exercises as routine! I think they get a lot harder after the first few sections of chapter II and general sheaf theory. I should mention
that sheaf theory on general sites feels a bit different (because you don’t have stalks, for instance) and the proofs (e.g. that the category of abelian sheaves has enough injectives)
have to appeal to general principles. I’ve never really looked at chapters 4 or 5 (except for Riemann-Roch). Also, a lot of them turn out to be special cases of (often nicer) results in
EGA. (One example: in II.7, he asks you to prove using divisors that for a regular noetherian scheme $X$ and a vector bundle $E$ on $X$, the Picard group of the projectivization is that
of $X$ times $\mathbb{Z}$; but it’s possible to do this very cleanly using the formal function theorem as Grothendieck points out (without a full proof) in III.4, which I might blog about
sometime.) Anyway, regardless of this, I’ve certainly learned a lot from the exercises, which strike me as one of the best features of the book.
On the other hand, I still think that Hartshorne leaves things out far too often (possibly because it's such a short book): he omits the functorial characterization of smoothness/etaleness/unramifiedness (the latter two of which appear only briefly as exercises), the quasi-finite form of Zariski's Main Theorem, the Leray spectral sequence, etc. I
think this is actually one of the reasons it is such a hard book; EGA, where things are developed both more leisurely and in a much more general (and, at least to me, natural) setting,
gives a very different picture, and is certainly easier reading (at least if you don’t try to read it linearly!).
I would also note that there are plenty of successful algebraic geometers who have never read EGA. I suppose it’s a matter of taste, and as I am a beginning student, what I say should not
be taken too seriously.
November 25, 2012 at 6:50 pm
What are your thoughts on reading etale cohomology? I’m not sure as to whether it’s worth reading it thoroughly, as I’ve heard the advice that it’s better to just black-box all the etale cohomology
theorems at the start. Do you have any thoughts on this, and what are the standard references? I’ve heard SGA 4/4.5, which is workable, but I’m at least an hour away (by car) from any library which
has them, and I find it somewhat hard to read the scanned copies on a computer screen.
November 30, 2012 at 10:38 pm
Sorry for the slow response — it’s been a busy past week. I’m certainly no expert on etale cohomology. I did a reading course about it once, which was a lot of fun, but I don’t know a lot about
number theory. I do think that it definitely makes sense to black box a lot of it at the start. Tamme’s “Introduction to Etale Cohomology” is by far the friendliest introduction I’ve seen. SGA
4.5 is very helpful though terse, while SGA 4 seems to go on to no end (but which also has a lot of really important and useful foundational stuff on topoi which I’ve never properly learned).
Freitag-Kiehl’s book on etale cohomology and Kiehl-Weissauer “Weil conjectures, perverse sheaves, and the l-adic Fourier transform” are pretty inspirational (and hard) stuff. You might look at
Emerton’s advice at
Something I’m trying to learn more about recently — and which I didn’t study at all when I took the reading course, because I didn’t know homotopy theory — is the refinement of etale cohomology
to etale homotopy theory. Instead of getting cohomology groups (with torsion or l-adic coefficients), you get a pro-homotopy type, which is a lot more information. I’ve heard things to the effect
that you can get Poincaré duality in etale cohomology (which is very difficult to prove directly) using homotopy-theoretic methods, although I do not know the details. (I could suggest people
who know a lot about this that you could contact if you’re interested.)
Anyway, one reason it may make sense to black-box things is that modern technology not available in the days of SGA 4 enables simpler proofs!
Incidentally, I saw your blog — very impressive! You should, of course, feel free to contact me if you have further questions about math or other things (e.g., college applications).
December 1, 2012 at 8:37 am
Thanks for the suggestions!
I unfortunately don't really know normal topological homotopy theory. It seems to motivate a lot of the recent work on algebraic geometry, so I should probably get to reading it at some point.
January 2, 2013 at 1:55 pm
Hello Mr Akhil
I’m seeking advice and looking at your knowledge, your advice will really be useful.
I am new to abstract mathematics. By abstract I mean proving theorems and understanding what they mean in contrast to the computational side of mathematics.
I have read the book "How to Prove It: A Structured Approach" and the Book of Proof, and now I want to pick one branch of mathematics and practice theorem proving. Which one would you advise? Analysis, algebra, topology? I have tried analysis with the Rudin book, but it seems hard to me. So what would you advise, and which book(s)?
I equally hope your answer will be useful to someone else in the future.
January 3, 2013 at 12:05 am
Dear Toussaint,
Usually when starting out with math the goal is not so much to prove theorems, but to understand how mathematical proofs work and to practice with exercises, which you’ll find in any textbook,
and in any of the above three fields. I would only mention that learning general topology without knowing a little about metric spaces and real analysis is likely risky.
You might try math.stackexchange.com to look for lists of suggested textbooks (I myself found Rudin’s book very helpful, and I think Herstein’s algebra textbook; I’ve also heard that Munkres’s
book is a good introductory topology textbook but have never read it). The book “Proofs from the book” is a lot of fun but is written more for enjoyment than serious study. Terence Tao’s blog
(under “career advice”) is another helpful resource.
January 3, 2013 at 1:21 am
Dear Mr Akhil!
Thank you for the helpful advice! I was right about to give up Rudin’s book in desperation because I could not prove the theorems. But now, it is clear one must first emphasize understanding
of the proofs and doing the exercises (for the sake of repetition).
Indeed, I am on math.stackexchange.com and it is a wonderful resource for exercises, alternative proofs and references. And I did read Terence Tao's advice as well! | {"url":"http://amathew.wordpress.com/bibliography/","timestamp":"2014-04-20T05:43:05Z","content_type":null,"content_length":"142356","record_id":"<urn:uuid:fdad4d82-dcb6-4643-86f3-78a20d367f2e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Grothendieck foundations: Zariski and coherent cohomology
Colin McLarty colin.mclarty at case.edu
Tue Nov 29 09:07:10 EST 2011
I can confirm one conjecture from my previous post, but the proof is
more involved than I expected so it leaves me a bit less sure of the
other conjecture.
Harvey's suggestion of using ZF[0] (with suitable choice principle) is
working really well. In the present case I began with a quick and
dirty idea for n=1 or 2, and when it shook out it was n=0. So this is
at the strength of 2nd order arithmetic.
Specifically, the case is derived functor cohomology for all sheaves
of modules on any Noetherian scheme. This includes coherent
cohomology of Noetherian schemes, which is the tool of Hartshorne's
book _Algebraic Geometry_ and the central tool of all cohomological
number theory. It would be nice to also get etale cohomology on this
foundation, and I still suspect that can be done. But I do not know.
On the other hand, this foundation does not prove existence of some
important examples: real, complex and p-adic numbers. It proves the
theorems hold for them if they exist. That is another issue. This
foundation does prove existence of all the "arithmetic schemes"
(schemes of finite type over the integers).
I sketch the key issues in the proof, as I now have it. It brings coherent cohomology to the ambit of reverse math, where I have not seen it before.
1) It currently takes global choice (over ZF[0]) to prove every
module over any Noetherian ring has an injective embedding. We use a
kind of Zorn's lemma argument for submodules of a given module,
without supposing there is a set of all those submodules. I tried
hard to get by with less choice, but I have no evidence now that it
*cannot* be done with less.
2) The natural context for this foundation is the Noetherian case.
That is the most important case in practice, especially for coherent
cohomology. The point for us is that ZF[0] even without choice proves
every set has a set of all its finite subsets, so every ring has a set
of all its finitely generated ideals -- which is all ideals, for a
Noetherian ring. An obvious inner model of countable sets shows that
this foundation does not prove all countable rings have sets of all
their ideals.
3) The chief result now is that every sheaf of modules on a Noetherian
scheme has an injective embedding, not only quasi-coherent modules.
This goes by proving the structure sheaf of rings on any Noetherian scheme has a set of all sheaf ideals, not only quasi-coherent ideals.
And that works by proving that every sheaf of ideals on the sheaf of
rings R is determined by finite data relative to R, so there is a set
of all those data. The simplest proof I can find uses Krull's
Principal Ideal Theorem, roughly saying any single equation on an
algebraic space defines a subspace of codimension 1 or 0. It serves
to sharply limit how far a sheaf of ideals can deviate from being
quasi-coherent, in the Noetherian case.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2011-November/016005.html","timestamp":"2014-04-19T22:20:36Z","content_type":null,"content_length":"5698","record_id":"<urn:uuid:ba35905e-dfae-4b3c-a44a-5aba4c2b6c7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can we construct rational functions with prescribed ramification on an algebraic curve over $\bar{\mathbf{Q}}$?
Let $C$ be a smooth projective connected curve of genus $g$ over $\bar{\mathbf{Q}}$. Fix a finite non-empty set of closed points $S$ in $C$ and let $U$ be the complement of $S$ in $C$.
Q1. (Algebraic formulation) Does there exist a finite (surjective) morphism $\pi:C\longrightarrow \mathbf{P}^1_{\bar{\mathbf{Q}}}$ such that $\pi|_{U}$ is etale?
Equivalently, let $X$ be a compact connected Riemann surface of genus $g$ which can be defined over $\bar{\mathbf{Q}}$ and let $B$ be a finite set of closed points in $X$ with complement $Y$.
Q1. (Analytic formulation) Does there exist a finite topological cover $Y\longrightarrow \mathbf{P}^1(\mathbf{C})-\{0,1,\infty\}$?
The equivalence of these two questions follows from the proof of Belyi's theorem and Riemann's existence Theorem.
If the answer to Question 1 is positive, I would be very interested in knowing if the degree of $\pi$ can be bounded effectively.
Q2. Does there exist a finite (surjective) morphism $\pi:C\longrightarrow \mathbf{P}^1$ such that $\pi|_{U}$ is etale and $\deg \pi \leq c$, where $c$ is a constant depending only on $S$ and $g$?
Example. Suppose that $g=0$. Then, following Belyi's proof of his theorem, the answer to Question 1 is yes. The answer to Question 2 is also positive and an explicit upper bound for such a rational
function is given by Khadjavi in An effective version of Belyi's Theorem.
I don't expect the answer to Question 1 to be easy. In fact, what I'm asking is to prove the existence of a Belyi morphism $\pi:C\longrightarrow \mathbf{P}^1_{\bar{\mathbf{Q}}}$ with prescribed
ramification. Now, that's probably very hard but definitely very interesting to find out.
Trivial Remark. Suppose that $g>1$. Then the automorphism group of $C$ is finite. Choose a Belyi morphism $\pi:C\longrightarrow \mathbf{P}^1_{\bar{\mathbf{Q}}}$ and let $U_0\subset C$ be the
complement of the ramification points of $\pi$. Then we see that Question 1 has a positive answer if we take $U$ to be $\sigma(U_0)$ with $\sigma$ an automorphism of $C$. But that's only finitely
many examples.
1 This is clearly not possible if $S$ is empty (and $g>0$). – Torsten Ekedahl Jun 21 '11 at 10:39
Of course. I'll edit the question. – Ari Jun 21 '11 at 11:12
1 Answer
No, it is easy to construct examples where this is not possible (aside from trivial ones with $|S| < 3$). For example, if $g(C)>0$ one can find $S$ arbitrarily large so that the points of $S$ give linearly independent elements in $Pic(C)$. For such an $S$ there can be no map of the kind you want, since the elements of $S$ must be mapped to at least $2$ distinct points of $\mathbb{P}^1$, which would give a non-trivial relation on the classes of elements of $S$ in $Pic(C)$.
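To spell out the mechanism (my own elaboration; the multiplicities $e_i$, $f_j$ below are notation not in the original answer): for a finite morphism $\pi: C \to \mathbb{P}^1$ any two points satisfy $p \sim q$ on $\mathbb{P}^1$, hence pulling back,
$$\pi^{*}(p) = \sum_i e_i\,[s_i] \;\sim\; \sum_j f_j\,[t_j] = \pi^{*}(q) \quad \text{in } Pic(C).$$
Whenever both fibers are supported in $S$, this is a non-trivial $\mathbb{Z}$-linear relation among the classes of points of $S$, contradicting their assumed linear independence.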
So if I understand correctly, for every integer n there exists a finite set of closed points S of cardinality n such that the points of S give linearly independent elements in Pic
(C). For such an S, we can never find a morphism C-->P^1 of the kind as described in the question, right? – Ari Jun 21 '11 at 11:15
So this raises another interesting question (in my opinion). Assume g(C) >0. Let n>2 be an integer. Does there exist a finite set of closed points S of cardinality n and a morphism
pi:C--->P^1 satisfying the conditions of the question? Moreover, it would be nice to know the cardinality of the set I(C)={n: n>2 and there is a finite set of closed points S of
cardinality n and pi:C--->P^1 as in question}. By Belyi's theorem, I is non-empty. Is it infinite? Finite? – Ari Jun 21 '11 at 11:27
To answer your first comment, that is exactly what I claim. The inverse images of the points of $\mathbb{P}^1$, considered as divisors on $C$, are all linearly equivalent, so if $S$
is the union of the inverse images of more than one point then we get a non-trivial relation. – ulrich Jun 21 '11 at 12:09
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry arithmetic-geometry riemann-surfaces rational-functions ramification or ask your own question. | {"url":"http://mathoverflow.net/questions/68360/can-we-construct-rational-functions-with-prescribed-ramification-on-an-algebraic","timestamp":"2014-04-18T11:19:31Z","content_type":null,"content_length":"60476","record_id":"<urn:uuid:79e4ffc0-da9c-49be-a23a-2bfe53a0ec6f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
Investigating Fire Environments
Students consider the meaning of the term variable, both in a mathematical and an everyday sense, by considering a text‑based "equation." Data regarding flame length, vegetation and fire speed will
be organized in a table, and students will investigate correlations between variables using scatterplots and lines of best fit.
Have students read the "Why We’re Worried About Wildfire" equation found on the Wildfire Equation handout. Note that the equation on the second page of the handout was written specifically for the
western Nevada area. A more general version of this equation is used on the first page.
Wildfire Equation Activity Sheet
After reading the equation, have students answer the following questions:
• How do you know that this is an equation? [The behaviors on one side of the equation are equivalent to an unsafe fire environment on the other.]
• Why is this information important to you? [Answers will vary.]
• How is this equation like others that you use in mathematical contexts? [An equals sign represents equivalence, which is essential for understanding algebraic expressions. Students often perceive
the equal sign to indicate an answer rather than equivalence.]
Distribute the Fire Behavior activity sheet. Have students complete the chart at the top of the first page, using the information contained in "Examples of Local Fire Behavior," found on the last
page of the activity sheet. You may wish to allow students to work in pairs to complete the chart.
Fire Behavior Activity Sheet
Introduce students to scatterplots by graphing flame length versus fire speed. Work with the class to complete this scatterplot. Call on five different students; each of them should be asked to graph
one point on the scatterplot. All students should complete the graph on the activity sheet as the points are plotted on the chalkboard or overhead projector. The completed scatterplot should look like this: [completed scatterplot of flame length vs. fire speed omitted]
Ask students if they notice a relationship between flame length and fire speed. That is, as flame length increases, does fire speed change in a predictable way? [Generally, as flame length increases,
the fire speed also increases, so there is a correlation. However, if the correlation were stronger, the plotted points would be closer to lying along a straight line.]
Explain to students that a "line of best fit," which is formally known as a regression line, can be used to approximate a relationship between two variables if there is a strong correlation. A line
of best fit for this data will approximate the points slightly but not perfectly. To demonstrate this, ask students to draw a straight line that lies close to most of the points. Students should
notice that, while they are able to draw a line that is close to some of the points, it will be far away from others.
You might also ask students to approximate how well their line of best fit would approximate the data if the point furthest to the right—the point (55, 8.5), which represents the data for big sage
and bitterbrush—were removed. To some extent this point is an outlier, because the flame length greatly exceeds the flame length of the other four points. When this point is removed, the remaining
four points do not approximate a line very well. On the other hand, it may be that there is a strong correlation between flame length and fire speed, and the relationship might be more obvious if
additional data were considered. For instance, some sources estimate the following:
The burn rate doubles for every 2 mph increase in fire speed, while flame length increases 50%.
Do students’ estimated lines of best fit seem to agree with this statement?
On the second page of the handout, allow students to plot flame length versus burn rate on the top graph. Ask students to discuss the relationship between these two variables. [The scatterplot will
reveal that there is a moderate correlation between flame length and burn rate. As with the previous graph, however, it may be that the point (55, 5900) is an outlier, but with only five data points,
it is difficult to tell. This is a good opportunity to discuss the sample size needed to draw a reasonable conclusion.]
As with the previous graph, you may want to have students do additional research to find other data points. The general statement above relating burn rate, fire speed, and flame length can again be
used by students to assess the validity of their estimated line of fit.
On the bottom graph of the second page, students will plot fire speed versus burn rate. Ask students to discuss the relationship between these two variables. [The scatterplot will reveal that there
is a strong correlation between fire speed and burn rate. The points are very close to forming a straight line.]
To estimate a line of best fit, students should use a straight edge to draw a line that lies close to most of the points. There should be roughly the same number of points above and below the line.
After students draw a line of best fit, have them use the line to make predictions. With the third graph, for example, you might ask the following questions:
• If a fire moves at a speed of 5½ mph, approximately how many acres would burn in one hour? [A reasonable guess would be about 3200 acres.]
• If a fire burns 4500 acres in one hour, what is the approximate speed of the fire? [Approximately 7 miles per hour.]
• Write an equation that relates fire speed to burn rate. That is, write the equation for your line of best fit. [When a line of best fit is drawn by hand, the equation must be approximated. If r is the burn rate and s is the fire speed, a reasonable approximation is r = 750s – 500. A more exact equation is r = 802s – 1196, which can be obtained using the linear regression feature of a graphing calculator. A minimal computational sketch follows this list.]
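For teachers who want a quick computational check of the line of best fit, here is a minimal sketch in Python (not part of the published lesson; the five data points are placeholders standing in for the values on the Fire Behavior handout):
# Least-squares line of best fit for fire speed (mph) vs. burn rate (acres/hr).
# The data points below are illustrative placeholders; substitute the values
# from the "Examples of Local Fire Behavior" chart.
import numpy as np
speed = np.array([1.0, 2.5, 4.0, 6.0, 8.5])         # fire speed s, mph
burn_rate = np.array([300, 800, 2000, 3600, 5900])  # burn rate r, acres/hr
m, b = np.polyfit(speed, burn_rate, 1)   # fit r = m*s + b
print(f"line of best fit: r = {m:.0f}s + ({b:.0f})")
# Use the fitted line to answer the lesson's prediction questions.
print("predicted burn rate at s = 5.5 mph:", m * 5.5 + b)
print("predicted fire speed for r = 4500 acres/hr:", (4500 - b) / m)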
• Computer and Internet connection
1. Ask students to write a reflection that compares the benefits of analyzing data in a table as opposed to analyzing data with a scatterplot.
2. Collect the student activity sheets and evaluate their scatterplots that compare fire speed to burn rate. In addition, require students to submit the equation of the line of best fit for this
3. Have students ask their classmates for two pieces of information—for instance, shoe size and height; age of mother and number of siblings; or, number of hours playing video games per week and
number of hours doing homework per week. Then, have students organize the data in a table, represent it in a scatterplot, and find the equation for an estimated line of best fit. (Note that a
line of best fit will fit some sets of data better than others. A discussion about when it is and is not appropriate to draw a line of best fit could occur.)
1. Have students use the On Fire applet. For each probability, have them run five trials and determine the average of the five trials. Then, have them plot probability along the horizontal axis and the average percent of the forest that burned on the vertical axis. Is there a correlation? Is it linear? Can the correlation be represented by a line of best fit?
2. Have students use information from the chart in the Defensible Space handout to create a scatterplot that compares steepness of slope to the recommended distance for defensible space. (This
handout is the basis for the next lesson, How Steep Can You Be?) Students will need to determine how to deal with the slope; specifically, because a range is given in the chart, such as 0 to 20%,
students will need to decide if they should use one of the extreme values (0 or 20) or the average value (10). Then have students determine if there is a correlation between these variables and,
if there is, write an equation that relates them.
Questions for Students
1. How did organizing the data in a table help you see and understand the relationships among fire speed, flame length, and burn rate? How did translating this data to a scatterplot show
relationships? Describe the differences between these two representations.
[A scatterplot shows a visual representation of the numbers that appear in a table. Scatterplots often show patterns that are more difficult to detect using a table of data.]
2. For what purposes would you consider using a scatterplot to show data? Are there times that using a scatterplot would be useful in providing a convincing argument to your parents or a friend?
[As shown in this lesson, a scatterplot will often make patterns obvious.]
Teacher Reflection
• What relationships did students articulate and use when transferring information from the table to the graph?
• What level of expertise did students demonstrate in preparing and using scatterplots? What additional experiences do they need with this type of graphical representation?
• Did students have a clear idea of what pieces to include in the graph, such as labels on the axes, a title, and so forth? What additional knowledge and skills do they need that might be presented
in a mini-lesson on creating graphs?
In this lesson, students will discover the havoc that wildfire can cause. After learning about the factors that contribute to the spreading of a wildfire, students will use a probability model to
determine the portion of a forest that might be destroyed by fire.
Students will learn about creating defensible space around a home using a series of fire protection zones. Students will then draw the zones that surround a house and estimate the area of each zone.
In this lesson, students consider the construction of a tool that will measure percent slope. They then use ideas about percent slope to determine recommended defensible space distances near a home.
This is the culminating lesson for this unit. Students use information from a quiz, along with what they learned in the previous four lessons, to write a summary and finalize a property sketch.
Students measure their own progress in a final activity by Testing Their Firewise IQ.
Learning Objectives
By the end of this lesson, students will be able to:
• Recognize the impact that variables have on results.
• Draw conclusions based on numerical information organized in a table.
• Make predictions using a graph of data and a line of best fit.
Common Core State Standards – Mathematics
Grade 8, Stats & Probability
• CCSS.Math.Content.8.SP.A.1
Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive or
negative association, linear association, and nonlinear association.
Grade 8, Stats & Probability
• CCSS.Math.Content.8.SP.A.2
Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and
informally assess the model fit by judging the closeness of the data points to the line.
Grade 8, Stats & Probability
• CCSS.Math.Content.8.SP.A.3
Use the equation of a linear model to solve problems in the context of bivariate measurement data, interpreting the slope and intercept. For example, in a linear model for a biology experiment,
interpret a slope of 1.5 cm/hr as meaning that an additional hour of sunlight each day is associated with an additional 1.5 cm in mature plant height. | {"url":"http://illuminations.nctm.org/Lesson.aspx?id=2226","timestamp":"2014-04-20T05:42:22Z","content_type":null,"content_length":"87978","record_id":"<urn:uuid:5946d9ef-1af7-4a00-ae29-5ba2f6bf7b00>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with trigonometry question
November 28th 2010, 05:47 AM #1
Nov 2010
Help with trigonometry question
Hi, i need help with this trigonometry question and i don't quite understand how to solve it. this question is dealing with the equation:
$\frac{y-k}{b} = \sin\left(\frac{x-h}{a}\right)$ (or the same with cosine)
just in case you're not sure
k = vertical shift
b = amplitude
h = horizontal shift
a = period
here is the question:
a tire with a diameter of 63 cm coasts at 1.6 m/s over a nail at exactly 12:00 noon. the nail is stuck in the tire but does not deflate
a. how high above the ground will the nail be at:
1. 12:00:08
2. 12:00:15
3: 12:01:00
4: 12:15:00
b. Find the first four times the nail is at 10 cm above the ground
thanks! i really need to understand this question
November 28th 2010, 06:18 AM #2
the first thing you need to determine is the period of up and down motion for the nail.
$\omega = \frac{v}{r} = \frac{160 \, cm/s}{31.5 \, cm} = \frac{320}{63} \, rad/s$
period, $T = \frac{2\pi}{\omega} = \frac{63\pi}{160} \approx 1.24 \, s$
at $t = 0$, the nail is on the ground and rises to a height of 63 cm at $t = \frac{63\pi}{320} \approx 0.62 \, s$
since the cycle starts at a low point (h = 0) , goes up to a height of h = 63 cm and back down, a negative cosine curve is probably the best model to simulate the motion of the nail up and down
over time.
$h = -31.5 \cos\left(\frac{320}{63} \cdot t\right) + 31.5$
where h is height above the ground in cm and t is time in seconds.
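A quick numerical check of this model (my own script, not part of the original thread), covering both parts of the question:
# Height of the nail: h(t) = -31.5*cos(omega*t) + 31.5 (cm), omega = 320/63 rad/s.
import math
omega = 320 / 63                 # angular speed of the wheel, rad/s
T = 2 * math.pi / omega          # period of one revolution, about 1.24 s
def h(t):
    return -31.5 * math.cos(omega * t) + 31.5
# Part (a): height at 8 s, 15 s, 60 s and 900 s after noon.
for t in (8, 15, 60, 900):
    print(f"t = {t} s: h = {h(t):.1f} cm")
# Part (b): first four times with h = 10 cm. Solve cos(omega*t) = (31.5-10)/31.5;
# each period has two solutions, t0 and T - t0, repeating every T seconds.
t0 = math.acos((31.5 - 10) / 31.5) / omega
print("first four times at 10 cm (s):", [round(t, 3) for t in (t0, T - t0, t0 + T, 2*T - t0)])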
November 28th 2010, 06:38 AM #3
Nov 2010
thanks for the reply but i am still very confused about the whole thing
what about the
1. 12:00:08
2. 12:00:15
3: 12:01:00
4: 12:15:00
how high?
im sorry but i'm just confused by the whole work
i need to identify the periods, amplitude, horizontal and vertical shift but then i can't see it and am overall confused
November 28th 2010, 06:44 AM #4
If you cannot determine this information given the height equation and graph provided, then you need to have a talk with your instructor.
| {"url":"http://mathhelpforum.com/trigonometry/164598-help-trigonometry-question.html","timestamp":"2014-04-18T04:24:45Z","content_type":null,"content_length":"43257","record_id":"<urn:uuid:6debb941-bb41-4b81-b2ee-ccb6f0eecec0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Ratio problem
November 19th 2007, 04:36 PM #1
Oct 2007
Ratio problem
The measures of two supplementary angles are in the ratio 3:15. What are the measures of the angles? And can you explain what method you used to do this? At the moment I have 1/5 = ?, and I don't know what to plug in on the other side along with an x to evaluate this.
November 19th 2007, 04:39 PM #2
Let $\alpha,\,\beta$ be the angles. So $\alpha=k,\,\beta=5k.$ ( $k$ is a positive constant.) Besides, you know that $\alpha+\beta=180^\circ.$
Go on from there.
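For completeness, one way to finish from the hint (my own continuation, not part of the original reply): $k+5k=180^\circ$ gives $6k=180^\circ$, so $k=30^\circ$ and the angles measure $30^\circ$ and $150^\circ$. (Check: $30:150 = 3:15$ and $30^\circ + 150^\circ = 180^\circ$.)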
November 19th 2007, 04:52 PM #3
Oct 2007
wow, I'm dumb; can't believe I never thought of that. thanks
| {"url":"http://mathhelpforum.com/geometry/23150-ratio-problem.html","timestamp":"2014-04-16T17:26:26Z","content_type":null,"content_length":"34815","record_id":"<urn:uuid:31e186c0-8e4c-437a-8628-1ef4f661fed3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: We study a shallow water equation derivable using the Boussinesq approximation, which includes as two special cases, one equation discussed by Ablowitz et al. and one by Hirota and Satsuma.
A catalogue of classical and non-classical symmetry reductions, and a Painlevé analysis, are given. Of particular interest are families of solutions found containing a rich variety of qualitative
behaviours. Indeed we exhibit and plot a wide variety of solutions all of which look like a two-soliton for $t>0$ but differ radically for $t<0$. These families arise as nonclassical symmetry
reduction solutions and solutions found using the singular manifold method. This example shows that nonclassical symmetries and the singular manifold method do not in general, yield the same solution
set. We also obtain symmetry reductions of the shallow water equation solvable in terms of solutions of the first, third and fifth Painlevé equations.
We give evidence that the variety of solutions found which exhibit ‘nonlinear superposition’ is not an artefact of the equation being linearizable since the equation is solvable by inverse
scattering. These solutions have important implications with regard to the numerical analysis for the shallow water equation we study, which would not be able to distinguish the solutions in an
initial value problem since an exponentially small change in the initial conditions can result in completely different qualitative behaviours.
35Q35 PDEs in connection with fluid mechanics
35B40 Asymptotic behavior of solutions of PDE
58J70 Invariance and symmetry properties
35C05 Solutions of PDE in closed form
35Q58 Other completely integrable PDE (MSC2000) | {"url":"http://zbmath.org/?q=an:0803.35111","timestamp":"2014-04-20T13:33:35Z","content_type":null,"content_length":"22261","record_id":"<urn:uuid:76a5ed83-02ae-44a9-b73a-ee5091de3b9d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elliptical Integral.
Re: Elliptical Integral.
Hi jacks;
As far as I know they are done using tables. Judging by the answer I would say it is very unlikely that a general hand method exists.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=18833","timestamp":"2014-04-17T10:21:09Z","content_type":null,"content_length":"12077","record_id":"<urn:uuid:49833b02-18a2-4696-b594-0401cb56fbee>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Right Move home-moving company has a variety of cardboard packing boxes available for use. The packing boxes shown here are similar figures. What is the volume of the larger box? Show your work
and explain how you arrived at your answer by applying the scale factor rule of volume.
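[Figure of the two similar packing boxes not shown.] Without the figure only the general rule can be stated: for similar solids with linear scale factor k, the volume of the larger box is k^3 times the volume of the smaller one. For example, if the larger box's edges are twice as long (k = 2), its volume is 2^3 = 8 times that of the smaller box.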
| {"url":"http://openstudy.com/updates/4e32ef870b8ba7b2da418cc3","timestamp":"2014-04-18T00:28:04Z","content_type":null,"content_length":"38116","record_id":"<urn:uuid:d5df0193-a765-4f1a-8ceb-d4d78f23d57d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Gaussian distribution
November 28th 2006, 10:23 AM #1
Nov 2006
Gaussian distribution
I'm having a hard time solving this problem: evaluate
$\int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}}x^2\exp \left( - \frac{(x-\mu)^2}{2\sigma^2} \right) dx$
using the hint $u=\frac{x-\mu}{\sqrt{2} \sigma}$.
I hope somebody can give me a little help. I've been struggling with this for quite a while now.
November 28th 2006, 10:50 AM #2
Global Moderator
Nov 2005
New York City
You want to find,
$\int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}}x^2\exp \left( - \frac{(x-\mu)^2}{2\sigma^2} \right) dx$
Let us use the hint,
$u=\frac{x-\mu}{\sqrt{2} \sigma}$
$u'=\frac{1}{\sigma \sqrt{2}}$
$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} x^2 \exp (-u^2) u' dx$
$x=\sqrt{2} \sigma u+\mu$
Thus, by substitution rule,
$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} (\sqrt{2} \sigma u+\mu)^2 \exp (-u^2) du$
Thus, expanding,
$\frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} (2\sigma^2u^2 +2\sqrt{2} \sigma \mu u+\mu^2) \exp(-u^2)du$
Maybe, I will return to it later.
But that is the idea.
Also note that,
$\int_{-\infty}^{\infty} e^{-x^2}dx=\sqrt{\pi}$
This is why $\pi$ will cancel out in the end.
Hope that helps.
November 28th 2006, 11:19 AM #3
Nov 2006
Thanks a lot, I'll give it another try with this.
November 28th 2006, 12:01 PM #4
Grand Panjandrum
Nov 2005
$I=\frac{1}{\sigma \sqrt{2 \pi}} \int_{-\infty}^{\infty} x^2 \exp \left( -\frac{(x-\mu)^2}{2\sigma ^2}\right) dx$
Now let $u=\frac{x-\mu}{\sqrt{2}\sigma}$, so $x=\sqrt{2}\sigma u+\mu$. Hence:
$I=\frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} (\sqrt{2}\sigma u+\mu)^2 \exp (-u^2) du = \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} [2\sigma^2 u^2+2 \sqrt{2}\sigma u \mu+\mu^2] \exp (-u^2) du$
as the middle term in the integral is odd it integrates up to $0$ so:
$I=\frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} [2\sigma^2 u^2+\mu^2] \exp (-u^2) du$
The last term integrates up to $\mu^2$ as
$\frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} \exp(-u^2) du=1$
and the first term integrates up to $\sigma^2$ as:
$\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}x^2 \exp(-x^2/2) dx=1$
$I=\sigma^2+\mu^2$
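A quick numerical sanity check of this result (my own addition, not from the thread):
# Monte Carlo check that E[X^2] = sigma^2 + mu^2 for X ~ N(mu, sigma^2).
import numpy as np
mu, sigma = 1.7, 2.3                      # arbitrary test values
x = np.random.default_rng(0).normal(mu, sigma, 1_000_000)
print("sample mean of X^2:", (x**2).mean())    # about 8.18
print("sigma^2 + mu^2    :", sigma**2 + mu**2) # exactly 8.18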
November 28th 2006, 12:30 PM #5
Nov 2006
Cool, thanks..
Many thanks. I see now
| {"url":"http://mathhelpforum.com/advanced-statistics/8128-gaussian-distribution.html","timestamp":"2014-04-16T19:08:44Z","content_type":null,"content_length":"47249","record_id":"<urn:uuid:0fb36e91-396c-4eed-ab1c-f9420774019c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - extract the rows except the specific index
Date: Dec 28, 2012 12:56 PM
Author: uny gg
Subject: extract the rows except the specific index
Hello, I am a newbie to MATLAB.
I would like to extract the rows except those at specific indices..
For example,
A = [1,2,3;4,5,6;7,8,9;10,11,12;13,14,15;16,17,18] % 6 by 3 matrix.
What I want to do is..
divide this matrix randomly into a 3 by 3 matrix and another 3 by 3 matrix ...
Here is what I try.
bottom =1;
top =6;
ran_idx = bottom + (top - bottom)*rand([1,3]);
ran_idx = ceil(ran_idx);
train = A(ran_idx,:);
test = A(~ran_idx,:); % this one is not working, it returns an empty matrix..
I can extract the train matrix.. but I could not make test..
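One standard fix (my own sketch, not from the thread): ~ran_idx applies logical NOT to the index vector, turning every nonzero index into 0, which is why test comes back empty; also, ceil of scaled rand can repeat row numbers. Drawing indices with randperm avoids both problems:
% Split the 6 rows of A randomly into 3 training rows and 3 test rows.
A = [1,2,3;4,5,6;7,8,9;10,11,12;13,14,15;16,17,18];
idx = randperm(size(A,1));    % random permutation of 1..6, no duplicates
train = A(idx(1:3), :);       % first 3 shuffled rows
test  = A(idx(4:6), :);       % the remaining 3 rows
% Equivalently: test = A(setdiff(1:size(A,1), idx(1:3)), :);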
Could somebody please answer this one? | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7944669","timestamp":"2014-04-18T13:45:34Z","content_type":null,"content_length":"1633","record_id":"<urn:uuid:2effc5d6-33f1-42f8-bd22-72508e4813ec>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Puzzle No. 6 - world cup medallions
September 1998
If you're one of those people who dropped a grade in your Maths GCSE because you were too busy watching the World Cup then here's your chance to make amends...
This issue's puzzle is quite hard, so we've provided a hint. Thanks to Don Kite and the pupils at The Netherhall School, Cambridge, for bringing it to our attention.
The problem
During the World Cup, PASS Maths' local supermarket was running a World Cup medallions promotion. Each time you spent a certain amount of money you got a free coin embossed with the face of one of
the members of the England squad.
There were 22 players in the squad, so there were 22 medallions to collect. If we assume that each time you get a medallion it is equally likely to represent any one of the 22 players the question
is: on average how many will you need to collect before you get a full set?
This problem is quite hard to do all in one go, so if you need help, look at the hint
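If you want to check your eventual answer empirically before the next issue, here is a small simulation sketch (ours, not part of the puzzle):
# Estimate the average number of medallions needed to complete a set of 22,
# assuming each medallion is equally likely to show any of the 22 players.
import random
def draws_to_complete(n=22):
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws
trials = 100_000
print(sum(draws_to_complete() for _ in range(trials)) / trials)  # average draws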
We will publish the best explanation in the next issue, along with the answer to the problem itself. | {"url":"http://plus.maths.org/content/puzzle-no-6-world-cup-medallions","timestamp":"2014-04-18T13:20:17Z","content_type":null,"content_length":"22997","record_id":"<urn:uuid:97ed5364-cfae-43f4-b321-b84a8d3028b5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Curve Fitting
I stole this from the "Triple A-S" (the American Association for the Advancement of Science). They know how to do it right so why change it.
Because of its abstractness, mathematics is universal in a sense that other fields of human thought are not. It finds useful applications in business, industry, music, historical scholarship,
politics, sports, medicine, agriculture, engineering, and the social and natural sciences. The relationship between mathematics and the other fields of basic and applied science is especially
I see eleven different fields mentioned in this single sentence. I only deal with one of them on a daily basis, but I agree that all of them are beholden to math. If you make decisions about what
you'll devote your life to (or at least what you'll study professionally) based on some notion of "importance" then abandon everything other than mathematics. It's one of the oldest branches of human
knowledge. The only one that's older is agriculture … um … yeah … and then there are tools and fire and … well, language is pretty important and math is right up there. Yeah math!
What does this mean for you right now?
The usual way it works. The assumption is that there is some mathematical relationship between the quantities being graphed. The data aspires toward this mathematical ideal, but because of the
limitations of human beings and their instruments it only approximates it. The data points of a graph form a cloud around the curve of a function. If only we had better "vision". If only our devices
were better at recording the actual values. If only we really knew what the essence of nature was so that we could assign these devices to their intended task. Then. Then we would see that every data
point fell precisely on a perfect analytical curve. Ah, what a beautiful world that would be. Unfortunately, real data never looks exactly like the ideal curves of mathematics.
horizontal axis (x) | vertical axis (y)
independent variable | dependent variable
explanatory variable | response variable
• categorical variables — represented by different symbols on the same coordinate system
• lurking variables or hidden variables — the source of some experimental errors
In search of relationships.
Your friend the proportionality symbol.
Take a look at the curve to the right. No matter what value the x variable takes on the curve, the y variable stays the same. This is a classic example of a relationship called independence. Two
quantities are independent if one has no effect on the other. The curve is a horizontal, straight line represented by the general form equation …
y = k
where k is a constant.
A suitable conclusion statement from such a relationship would be that …
• y is independent of x.
• y does not depend on x.
• y is constant for all values of x.
• y is not affected by x.
• y and x are independent.
For example …
• Free fall acceleration is independent of mass. Heavy objects fall just as fast as light objects in the absence of air resistance.
• The period of a simple pendulum does not depend on its mass. Simple pendulums that are identical in all respects except for the weight of the mass on the end will swing back and forth in an
identical manner.
• The speed of light in a vacuum c is constant for all values of v, the speed of the reference frame. No matter how I move around, the speed of light in a vacuum always stays the same.
• The force of dry friction is not affected by the area of the two surfaces in contact. Dragging a box on its bottom or its side results in the same friction force.
• Mass and location are independent. If a frozen turkey has a mass of 10 kg in New York it'll have a mass of 10 kg in New Jersey, in New Delhi, on Mount Everest, in an airplane, in orbit, on the
surface of the moon, in the Andromeda Galaxy, in …. Well, you get the idea.
Independent relationships can be both boring and profound. Boring when we realize there's no link between the two quantities. Profound when we realize we've identified a fundamental principle or
underlying concept of great significance. The independence of the speed of light and the speed of a reference frame is one of these statements. The speed of light is a fundamental constant — one of
three or four in physics.
Now take a look at this curve. As the x variable increases, the y variable increases too. But there are a lot of curves that do this. What makes this one unique? What distinguishes it from all the
other curves that increase (as the mathematicians say) monotonically? The key is in the shape — a straight, non-horizontal line that runs through the origin. With this particular shape, something
special happens.
Pick a point on the line and note its coordinates. Double the value of the x variable and see how the y variable responds. The new value of y should also have doubled. Try it again. Only this time,
cut the x variable in half. The y variable should have responded in the same manner; that is, it too should be cut in half. Whatever x does, y does the same. This illustrates the simplest, nontrivial
form of proportionality — direct proportionality. Two quantities are directly proportional if their ratio is a constant.
Rearranging this definition gives us the general form equation …
y = kx
where k is the constant of proportionality, which everyone should recognize as the slope of a straight line in the xy plane.
A suitable conclusion statement from such a relationship would be that …
• y is directly proportional to x.
• y varies directly with x
• y and x are directly proportional.
• y ∝ x
For example
• Regular wages are directly proportional to the number of hours worked. Forty hours of work pays four times as much as ten hours of work. One hour of work pays one-tenth as much as ten hours of
• Weight varies directly with mass. Three times more mass means three times more weight, too. Likewise, half the mass means half the weight.
• Distance and time are directly proportional when speed is constant. Driving for two hours gets you twice as far away as one hour would, but only half as far as four hours.
Warning! Don't think that directly proportional means "when one increases, the other increases" or "when one decreases, the other decreases". It's a more specific kind of relationship than that.
Here's a contrary example. A worker who puts in 60 hours on the job works 1.5 times as much as one who puts in 40 hours.
But workers working more than 40 hours a week in the US are supposed to be paid at an overtime rate, which is typically one and a half times their regular wage. Thus the 60 hour-a-week worker earns
1.75 times as much as the 40 hour-a-week worker.
(1 × 40 regular hours + 1.5 × 20 overtime hours) ÷ (1 × 40 regular hours) = 1.75
Since the changes are not the same …
1.75 ≠ 1.5
the wages earned in this example are not directly proportional to the hours worked. A direct relationship is much more special than the general statement, "when one increases, the other increases".
It's more like, "when one changes by a certain ratio, the other changes by the same ratio".
Moving on. Take a look at this curve. This shape is called a rectangular hyperbola — a hyperbola since it has asymptotes (lines that the curve approaches, but never crosses) and rectangular since the
asymptotes are the x and y axes (which are at right angles to one another).
Some say that this curve shows the opposite behavior of the previous one; that is, as the x variable increases, the y variable decreases and as the x variable decreases, the y variable increases. But
like the previous curve there's a more specific kind of change that takes place. Check it out for yourself. Pick a convenient point on the curve. Note the coordinate values at this point. Now double
the x coordinate and see what happens to the y coordinate. It's cut in half. Now try the reverse. Pick a point on the curve and cut its x coordinate in half. The y coordinate is now double its
original value. Triple x and you get one-third of y. Reduce x to one-fourth and watch y increase by four. However you change one of the variables the other changes by the inverse amount. This
illustrates another simple kind of proportionality — inverse proportionality. Two quantities are said to be inversely proportional if their product is a constant.
xy = k
Rearranging this definition gives us the general form equation …
where k is the constant of proportionality.
A suitable conclusion statement from such a relationship would be that …
• y is inversely proportional to x.
• y varies inversely as x.
• y and x are inversely proportional.
• y ∝ 1/x or y ∝ x^−1
For example …
• The time needed to finish a job varies inversely as the number of workers. More workers means less time to finish a job. (Twice as many means it takes half the time.) Fewer workers means it takes
longer. (If only one-third of the normal number of workers show up, the job will take three times longer.)
• The volume of a mass of gas is inversely proportional to the pressure acting on it. Place a balloon in a hyperbaric chamber and double the pressure — the balloon will squash to half its original
volume. Place the balloon in a vacuum chamber and decrease the pressure to one-tenth atmospheric — the balloon will expand ten times in volume (assuming it doesn't break first).
What do we have here? Why it's a parabola with its vertex at the origin. You get this kind of curve when one quantity is proportional to the square of the other. Since this parabola is symmetric
about the y-axis that makes it a vertical parabola and we know that it's the horizontal variable that gets the square. Here's the general form equation for this kind of curve …
y = kx^2
A suitable conclusion statement from such a relationship would be that …
• y is proportional to the square of x.
• y ∝ x^2
For example …
• The distance traveled by an object dropped from rest is proportional to the square of time. How long does it take to fall one meter? Double that time and you'll fall 4 m, triple it and you'll
fall 9 m, and so on.
• The rate at which heat is produced by an electric circuit is proportional to the square of the current. Doubling the current in a toaster oven quadruples its heat output. Reduce the current in
the CPU of a computer to half its previous value and you'll reduce the heat ouput to one-quarter its previous value.
square root
Here's another parabola with its vertex at the origin. This one's tipped on its side and is symmetric about the x-axis. For a horizontal parabola like this one, it's the vertical variable that gets
the square. The general form equation for this kind of curve is …
y = k √ x
A suitable conclusion statement from such a relationship would be that …
• y is proportional to the square root of x.
• y ∝ √x or y ∝ x^½
For example …
• Speed is proportional to the square root of distance for freely falling objects. How fast is an object moving after it has fallen one meter? At four meters it'll have double that speed; at nine
meters, triple; sixteen, quadruple; and so on.
Something to remember — the square root is not an explicit function. It isn't single-valued. Every number has two square roots: one positive and one negative. Typical curve fitting software
disregards the negative root, which is why I only drew half a parabola on the diagram above. Something else to remember — the domain of the square root is restricted to non-negative values. That's a
fancy way of saying you can't find the square root of a negative number (not without expanding your concept of "number", that is).
So far we have five curves and five general form equations …
independent: y = k
direct:      y = kx
inverse:     y = k/x
square:      y = kx^2
square root: y = k √x
They have three common components …
x = an independent variable (or explanatory variable)
y = a dependent variable (or response variable)
k = a constant of proportionality
and one component that varies …
n = power of the independent variable
We could rewrite these general equations with two variables, a constant of proportionality and a power like this …
independent: y = kx^0
direct:      y = kx^1
inverse:     y = kx^−1
square:      y = kx^2
square root: y = kx^½
We could even go so far as to write a general form equation for a whole family of equations …
y = kx^n
Any two variables that are related to one another by an equation of this form are said to have a power relation between them.
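A practical way to identify the power n and the constant k from data (a sketch of my own, with made-up numbers): take logarithms, since y = kx^n means log y = log k + n log x, a straight line on log-log axes.
# Recover n and k of y = k*x^n by fitting a line to log(y) vs. log(x).
import numpy as np
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([3.1, 11.8, 48.5, 193.0, 767.0])   # made-up data near y = 3x^2
n, log_k = np.polyfit(np.log(x), np.log(y), 1)
print(f"n = {n:.2f}, k = {np.exp(log_k):.2f}")  # expect roughly 2 and 3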
Power relationships summarized
power  general form  description     appearance
0      y = k         independent     horizontal, straight line
1      y = kx        direct          non-horizontal straight line through the origin
2      y = kx^2      square          vertical parabola with vertex at the origin
3      y = kx^3      cube
−1     y = k/x       inverse         rectangular hyperbola
−2     y = k/x^2     inverse square
−3     y = k/x^3     inverse cube
½      y = k √x      square root     horizontal parabola with vertex at the origin
⅓      y = k ∛x      cube root
Description: A combination of constant and direct. A fixed amount is added (or subtracted) at regular intervals.
General form.
y = ax + b
A suitable conclusion statement from such a relationship would be that …
• y is linear with x.
• y varies linearly with x.
• y is a linear function of x.
Appearance: any straight line, regardless of slope or y-intercept
Example(s): utility bills (there's always a service charge)
Description: A combination of square, direct, and constant.
General Form
y = ax^2 + bx + c
A suitable conclusion statement from such a relationship would be that …
• y is quadratic with x.
• y varies quadratically with x.
• y is a quadratic function of x.
Appearance: A vertical parabola when graphed. It's vertex can be anywhere. It could also be flipped upside down.
Example(s): distance during uniform acceleration
Description: A combination of a constant, direct, square, cube, …. Keep going as far as you wish.
General form.
y = a + bx + cx^2 + dx^3 + …
A suitable conclusion statement from such a relationship would be that …
• y can be approximated by an nth order polynomial of x.
• An nth order polynomial of x was fit to y.
Appearance: any non-periodic function without asymptotes
Example(s): Polynomial functions can be used to approximate many continuous, single-valued curves
Polynomial relationships summarized
order  general form                                name
0      y = a                                       constant
1      y = a + bx                                  linear
2      y = a + bx + cx^2                           quadratic
3      y = a + bx + cx^2 + dx^3                    cubic
4      y = a + bx + cx^2 + dx^3 + ex^4             quartic
5      y = a + bx + cx^2 + dx^3 + ex^4 + fx^5      quintic
⋮      ⋮                                           ⋮
n      y = a[0]x^0 + a[1]x^1 + a[2]x^2 + … + a[n]x^n = Σ a[i]x^i (sum from i = 0 to n)      nth order polynomial
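As a concrete illustration (my own sketch; the data are invented to roughly follow d = 4.9t^2, the free-fall distance formula mentioned above):
# Fit a quadratic (2nd order polynomial) to data and use it for prediction.
import numpy as np
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # invented sample times, s
d = np.array([0.0, 1.3, 4.8, 11.2, 19.6])     # invented distances, m
coeffs = np.polyfit(t, d, 2)      # [c, b, a] for d = c*t^2 + b*t + a
fit = np.poly1d(coeffs)           # callable polynomial
print("coefficients (highest power first):", coeffs)
print("predicted d at t = 1.25 s:", fit(1.25))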
exponential growth
General form.
y = a·n^(bx)
A suitable conclusion statement from such a relationship would be that …
• y increases exponentially with x.
• y grows exponentially with x.
• y ∝ n^x
The ratio of successive iterations is a constant. The quantity is multiplied by a fixed amount at regular intervals.
Appearance: asymptotic with negative x-axis, followed by runaway expansion
Example(s): unrestricted population growth, the magic of compound interest
exponential decay
General form.
y = a·n^(−bx)
A suitable conclusion statement from such a relationship would be that …
• y decreases exponentially with x.
• y decays exponentially with x.
• y ∝ n^−x
The ratio of successive iterations is a constant. The quantity is divided by a fixed amount at regular intervals.
Appearance: large initial value followed by abrupt collapse, approaches positive x-axis asymptotically
Example(s): radioactive decay, discharging a capacitor, de-energizing an inductor
exponential approach
General form.
y = a·(1 − n^(−bx)) + c
A suitable conclusion statement from such a relationship would be that …
• y approaches a final value exponentially.
Appearance: asymptotically approaches a horizontal line
Example(s): charging a capacitor, energizing an inductor, teaching (half the students get it, then half of the remaining students get it, then half of the remaining students get it, and so on …)
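A hedged sketch (my addition) of fitting the exponential-approach form with SciPy. The base e is used without loss of generality, since n^(−bx) = e^(−(b ln n)x) merely rescales b; the data are synthetic:

    import numpy as np
    from scipy.optimize import curve_fit

    def approach(x, a, b, c):
        return a * (1.0 - np.exp(-b * x)) + c   # y approaches a + c as x grows

    x = np.linspace(0.0, 10.0, 50)
    y = approach(x, 5.0, 0.8, 1.0)        # e.g. voltage on a charging capacitor
    (a, b, c), _ = curve_fit(approach, x, y, p0=(1.0, 1.0, 0.0))
    print(a, b, c)                        # ~5.0, ~0.8, ~1.0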
periodic
General form.
y = a sin (bx + c)
A suitable conclusion statement from such a relationship would be that …
• y varies periodically with x.
• y is periodic with x.
Appearance: A sine curve is the prototypical example, not the only example. Any curve that repeats itself is periodic.
Example(s): Any daily (diurnal), monthly (lunar), yearly (annual, seasonal), or other periodic change. | {"url":"http://physics.info/curve-fitting/","timestamp":"2014-04-18T15:39:04Z","content_type":null,"content_length":"57579","record_id":"<urn:uuid:c8ec6b24-bbd3-41a6-9531-8fdafd0edbaa>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coordinates of a point
from Latin: coordinare "to set in order, arrange"
A pair of numbers defining the position of a point on a two-dimensional plane.
The coordinates of a point are a pair of numbers that define its exact location on a two-dimensional plane. Recall that the coordinate plane has two axes at right angles to each other, called the x
and y axis. The coordinates of a given point represent how far along each axis the point is located.
Ordered Pair
The coordinates are written as an "ordered pair" as shown below. The letter P is simply the name of the point and is used to distinguish it from others.
The two numbers in parentheses are the x and y coordinate of the point. The first number (x) specifies how far along the x (horizontal) axis the point is. The second is the y coordinate and specifies
how far up or down the y axis to go. It is called an ordered pair because the order of the two numbers matters - the first is always the x (horizontal) coordinate.
The sign of the coordinate is important. A positive number means to go to the right (x) or up(y). Negative numbers mean to go left (x) or down (y). (The figure at the top of the page has the values
of the axes labelled with the appropriate sign).
The abscissa is another name for the x (horizontal) coordinate of a point. Pronounced "ab-SISS-ah" (the 'c' is silent). Not used very much. Most commonly, the term "x-coordinate" is used.
The ordinate is another name for the y (vertical) coordinate of a point. Pronounced "ORD-inet". Not used very much. Most commonly, the term "y-coordinate" is used.
Things to try
In the figure at the top of the page, first press 'reset'. If you prefer it, you can drag the origin into any corner to display just one quadrant.
• The point A is in the top right quadrant (first quadrant). Note how both x and y coordinates are positive because the point is up and to the right of the origin.
• Drag the point into the top left quadrant (second quadrant). Note now that the x-coordinate is negative because it is to the left of the origin, where x values are negative.
• Drag the point to the lower right quadrant (fourth quadrant). The x-coordinate is positive again because it is to the right of the origin, but now the y coordinate is negative, due to it being
below the origin.
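A tiny sketch (my addition, not from the page) turning the sign rules above into code:

    def quadrant(x, y):
        """Quadrant number of the point (x, y); 0 if it lies on an axis."""
        if x > 0 and y > 0: return 1      # up and to the right of the origin
        if x < 0 and y > 0: return 2
        if x < 0 and y < 0: return 3
        if x > 0 and y < 0: return 4
        return 0

    print(quadrant(3, 2), quadrant(-3, 2), quadrant(3, -2))   # 1 2 4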
(C) 2009 Copyright Math Open Reference. All rights reserved | {"url":"http://www.mathopenref.com/coordpoint.html","timestamp":"2014-04-20T18:23:03Z","content_type":null,"content_length":"14449","record_id":"<urn:uuid:8432d79f-1c7a-4a7c-ab14-c0ef7299850c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
√(2x-1)=-3 Help?
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/505733ebe4b02986d3711b7e","timestamp":"2014-04-18T19:00:43Z","content_type":null,"content_length":"66351","record_id":"<urn:uuid:2a6ac204-ec9a-4fa5-a8f7-8bcd84297b4c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
presentations of Sp(n, Z) and cocycles.
Question: where can I find explicit presentations of the group $Sp(n, \mathbb Z)$, for small $n$?
It is known that $Sp(n, \mathbb Z)$ admits a $2$-cocycle $h$ with values in $\mathbb Z/2\mathbb Z$ which I'd like to view in the following way. If we fix a presentation with relations forming a set
$R$, then $h$ is a function $R \to \mathbb Z/2\mathbb Z$ (which fulfils some condition).
Question: where can I find an explicit description of this cocycle, in the form of what values does it take on elements of some presentation?
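(Editorial gloss, not part of the original question: in the standard group-cohomology normalization, a $\mathbb Z/2\mathbb Z$-valued 2-cocycle on a group $G$ — here $G = Sp(n, \mathbb Z)$ — satisfies $h(g_1, g_2) + h(g_1 g_2, g_3) = h(g_2, g_3) + h(g_1, g_2 g_3)$ for all $g_1, g_2, g_3 \in G$; describing $h$ by its values on the relations $R$ of a presentation, as above, is one way of encoding such a class.)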
gr.group-theory homological-algebra
3 Answers
There is a simple and explicit presentation of $SP(2n,\mathbb{Z})$ in Theorem 9.2.13 of Hahn-O’Meara's book "The classical groups and K-theory". In some sense, this presentation is related
to the ideas of Klingen mentioned by Rivin; however, just following Klingen's method leads to an enormous presentation (something on the order of 5 families of generators and 67 families of
relations), as is shown in the paper
MR0280606 (43 #6325) Birman, Joan S. On Siegel's modular group. Math. Ann. 191 1971 59–68.
Another simple presentation can be found in the paper
MR1152494 (93b:11054) Lu, Ning(1-RICE) A simple presentation of the Siegel modular groups. Linear Algebra Appl. 166 (1992), 185–194.
The basic idea is that there exist nice presentations for the mapping class group, which surjects onto the symplectic group.
Nice references (though does not the N. Lu paper precede the REALLY simple Wajnryb presentation of the MCG?) – Igor Rivin Oct 24 '11 at 18:43
@Igor : Wajnryb's presentation is contained in MR0719117 (85g:57007) Wajnryb, Bronislaw(IL-TECH) A simple presentation for the mapping class group of an orientable surface. Israel J.
Math. 45 (1983), no. 2-3, 157–174. – Andy Putman Oct 25 '11 at 4:30
(though he also has a 1999 paper called "An elementary approach to the mapping class group of a surface" which derives a similar presentation without using Cerf theory, which appeared in
work of Hatcher and Thurston that he quoted in his first paper). – Andy Putman Oct 25 '11 at 4:32
(also, there is an error in the statement of the main theorem of Wajnryb's 1983 paper which is corrected in Birman, Joan S.; Wajnryb, Bronislaw, Presentations of the mapping class group.
Errata: "3-fold branched coverings and the mapping class group of a surface'' [in Geometry and topology (College Park, MD, 1983/84), 24--46, Lecture Notes in Math., 1167, Springer,
Berlin, 1985] and "A simple presentation of the mapping class group of an orientable surface'' [Israel J. Math. 45 (1983), no. 2-3, 157--174] by Wajnryb. Israel J. Math. 88 (1994), no.
1-3, 425–427. ) – Andy Putman Oct 25 '11 at 4:34
@Andy: Actually, I was talking about the '96 Wajnryb paper (two-generator), which is slightly tweaked in Korkmaz '06. Presumably this can be processed as per N. Lu to get a reasonably
simple two-generator symplectic group presentation. – Igor Rivin Oct 25 '11 at 7:52
An explicit presentation was given by Behr in MR0369562 (51 #5795) Behr, Helmut Eine endliche Präsentation der symplektischen Gruppe Sp4(Z). (German) Math. Z. 141 (1975), 47–56.
And a slightly shorter one by P. Bender in Bender, Peter, Eine Präsentation der symplektischen Gruppe Sp(4,Z) mit 2 Erzeugenden und 8 definierenden Relationen. (German) J. Algebra 65 (1980), no. 2, 328–331.
General $Sp(2n, \mathbb Z)$ can be reduced to $Sp(4, \mathbb Z)$; the argument is in MR0133303 (24 #A3137) Klingen, Helmut, Charakterisierung der Siegelschen Modulgruppe durch ein endliches System
definierender Relationen.
For $Sp_4(\mathbf{Z})$, there is a presentation in H. Behr "Eine endliche Präsentation der symplektischen Gruppe $Sp_4(\mathbf{Z})$", Math. Z. 141 (1975), 47--56.
Not the answer you're looking for? Browse other questions tagged gr.group-theory homological-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/78995/presentations-of-spn-z-and-cocycles?sort=oldest","timestamp":"2014-04-18T20:53:48Z","content_type":null,"content_length":"62821","record_id":"<urn:uuid:9bd1ff33-2773-4e4f-9036-11ae6e552631>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
Characteristic of Subfield of Complex Numbers is Zero
From ProofWiki
The characteristic of any subfield of the field of complex numbers is $0$.
Suppose to the contrary.
Let $K$ be a subfield of $\C$ such that $\operatorname{Char} \left({K}\right) = n$ where $n \in \N, n > 0$.
$\exists a \in K, a \ne 0: n \cdot a = 0$ (for instance $a = 1_K$, the unity of $K$)
But as $K$ is a subfield of $\C$ it follows that $K \subseteq \C$ which means:
$\exists a \in \C, a \ne 0: n \cdot a = 0$
Thus, by definition of characteristic:
$0 < \operatorname{Char} \left({\C}\right) \le n$
But in $\C$ we have $n \cdot a \ne 0$ whenever $n > 0$ and $a \ne 0$, and so $\operatorname{Char} \left({\C}\right) = 0$.
From that contradiction follows the result. | {"url":"http://www.proofwiki.org/wiki/Characteristic_of_Subfield_of_Complex_Numbers_is_Zero","timestamp":"2014-04-18T05:31:55Z","content_type":null,"content_length":"24558","record_id":"<urn:uuid:db3ccb56-36fa-419c-bb38-bb55ec3bf95e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Godel's First Incompleteness Theorem as it possibly relates to Physics
Brian Hart hart.bri at gmail.com
Mon Oct 13 13:39:15 EDT 2008
On Sat, Oct 11, 2008 at 9:43 AM, Alasdair Urquhart
<urquhart at cs.toronto.edu> wrote:
> On Thu, 9 Oct 2008, Brian Hart wrote:
>> Why doesn't Godel's 1st Incompleteness Theorem imply the
>> incompleteness of any theory of physics T, assuming that T is
>> consistent and uses arithmetic? Shouldn't the constructors of the
>> Theory of Everything be alarmed? I know this suggestion of
>> application of Godel's theorem was made decades ago but why didn't it
>> make a bigger impact? Is it because it is wrong or were there some
>> sociological reasons for mainstream ignorance of it?
> The basic problem with this idea is that it is consistent with
> current knowledge (as far as I know) that there could be a Theory of
> Everything that is in some sense complete in its physical implications,
> though remaining incomplete in its mathematical foundations.
Well, that's the thing: how do you connect the mathematical
incompleteness to its physical analog? If all of physics is
mathematics, then couldn't an axiomatized physics be completely
founded upon ZFC somehow if one were to follow Hilbert's (other)
Program outlined in his 6th problem? The axioms of physics would be
relatively primitive, basic physical truths only decomposable within
mathematical axiomatics like ZFC. If physics were completely founded
upon ZFC then it wouldn't be prone to some form of incompleteness
because incompleteness only rears its head when one considers some
kind of extension of ZFC like ZFC + some large cardinal, correct?
> Of course, it remains rather unclear what we mean by "complete in its
> physical implications." But I would guess that physicists would be
> very happy with a fundamental theory that predicts all of the
> basic properties of the elementary particles, including the
> constants that currently have to be "put in by hand."
> Of course, gravity would have to be included as well, and that
> seems at the moment to be a very intractable problem.
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2008-October/013094.html","timestamp":"2014-04-20T21:02:19Z","content_type":null,"content_length":"5461","record_id":"<urn:uuid:b45b99ed-cbc6-4f0d-bc1b-be9caa14002d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Global Alliances and Independent Domination in Some Classes of Graphs
A dominating set $S$ of a graph $G$ is a global (strong) defensive alliance if for every vertex $v\in S$, the number of neighbors $v$ has in $S$ plus one is at least (greater than) the number of
neighbors it has in $V\setminus S$. The dominating set $S$ is a global (strong) offensive alliance if for every vertex $v\in V\setminus S$, the number of neighbors $v$ has in $S$ is at least (greater
than) the number of neighbors it has in $V\setminus S$ plus one. The minimum cardinality of a global defensive (strong defensive, offensive, strong offensive) alliance is denoted by $\gamma_a(G)$ ($\
gamma_{\hat a}(G)$, $\gamma_o(G)$, $\gamma_{\hat o}(G))$.
We compare each of the four parameters $\gamma_a, \gamma_{\hat a}, \gamma_o, \gamma_{\hat o}$ to the independent domination number $i$. We show that $i(G)\le \gamma ^2_a(G)-\gamma_a(G)+1$ and $i(G)\
le \gamma_{\hat{a}}^2(G)-2\gamma_{\hat{a}}(G)+2$ for every graph; $i(G)\le \gamma ^2_a(G)/4 +\gamma_a(G)$ and $i(G)\le \gamma_{\hat{a}}^2(G)/4 +\gamma_{\hat{a}}(G)/2$ for every bipartite graph; $i(G)
\le 2\gamma_a(G)-1$ and $i(G)=3\gamma_{\hat{a}}(G)/2 -1$ for every tree and describe the extremal graphs; and that $\gamma_o(T)\le 2i(T)-1$ and $i(T)\le \gamma_{\hat o}(T)-1$ for every tree.
We use a lemma stating that $\beta(T)+2i(T)\ge n+1$ in every tree $T$ of order $n$ and independence number $\beta(T)$.
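To make the definitions concrete, here is a hedged Python sketch (my addition, not from the paper) that checks the global defensive alliance conditions for a graph given as an adjacency list; all names are my own:

    def deg_in(G, v, S):
        """Number of neighbors of v that lie in the vertex set S."""
        return sum(1 for u in G[v] if u in S)

    def is_global_defensive_alliance(G, S, strong=False):
        V, S = set(G), set(S)
        dominating = all(v in S or deg_in(G, v, S) > 0 for v in V)
        margin = 1 if strong else 0   # "greater than" for strong alliances
        defensive = all(deg_in(G, v, S) + 1 >= deg_in(G, v, V - S) + margin
                        for v in S)
        return dominating and defensive

    # Example: a path a-b-c-d; {b, c} dominates the path and defends itself.
    G = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(is_global_defensive_alliance(G, {"b", "c"}))   # True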
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v15i1r123/0","timestamp":"2014-04-18T01:09:16Z","content_type":null,"content_length":"16207","record_id":"<urn:uuid:5b9a6c44-a9f6-41cb-a68e-af89bef54707>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: [Help-gsl] Does anybody know how to use FFT to compute numerical integration?
From: Michael
Subject: Re: [Help-gsl] Does anybody know how to use FFT to compute numerical integration?
Date: Thu, 28 Jun 2007 04:41:34 -0700
No this is not an option. I don't have FT[f](v) available. Thanks!
On 6/28/07, Evgeny Kurbatov <address@hidden> wrote:
Or you can calculate the expression
Integrate( g(t) * FT[F](t), t )
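For what it's worth, a hedged sketch (my addition, not from the thread) of one standard workaround when FT[f] has no closed form: approximate it on a grid with an FFT, correcting for the grid origin. The Gaussian here is only a stand-in for f; the convention is FT[f](v) = ∫ f(t) e^(−2πivt) dt.

    import numpy as np

    f = lambda t: np.exp(-t**2)                   # stand-in integrand
    N, dt = 4096, 0.01
    t0 = -N * dt / 2.0                            # grid start
    t = t0 + dt * np.arange(N)
    v = np.fft.fftfreq(N, dt)                     # frequencies v_k
    F = dt * np.exp(-2j * np.pi * v * t0) * np.fft.fft(f(t))
    # F[k] ~ FT[f](v[k]); for this Gaussian, FT[f](v) = sqrt(pi) exp(-(pi v)^2)
    print(abs(F[0] - np.sqrt(np.pi)))             # small (the v = 0 term)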
[Prev in Thread] Current Thread [Next in Thread] | {"url":"http://lists.gnu.org/archive/html/help-gsl/2007-06/msg00045.html","timestamp":"2014-04-20T04:20:44Z","content_type":null,"content_length":"6339","record_id":"<urn:uuid:2821ae28-855f-497c-9f2b-fb786c49431e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
The central theme of this book is the interaction between the curvature of a complete Riemannian manifold and its topology and global geometry.
The first five chapters are preparatory in nature. They begin with a very concise introduction to Riemannian geometry, followed by an exposition of Toponogov's theorem--the first such treatment in a
book in English. Next comes a detailed presentation of homogeneous spaces in which the main goal is to find formulas for their curvature. A quick chapter of Morse theory is followed by one on the
injectivity radius.
Chapters 6-9 deal with many of the most relevant contributions to the subject in the years 1959 to 1974. These include the pinching (or sphere) theorem, Berger's theorem for symmetric spaces, the
differentiable sphere theorem, the structure of complete manifolds of non-negative curvature, and finally, results about the structure of complete manifolds of non-positive curvature. Emphasis is
given to the phenomenon of rigidity, namely, the fact that although the conclusions which hold under the assumption of some strict inequality on curvature can fail when the strict inequality is relaxed to a weak one, the failure can happen only in a restricted way, which can usually be classified up to isometry.
Much of the material, particularly the last four chapters, was essentially state-of-the-art when the book first appeared in 1975. Since then, the subject has exploded, but the material covered in the
book still represents an essential prerequisite for anyone who wants to work in the field.
Graduate students and research mathematicians interested in Riemannian manifolds.
"... this is a wonderful book, full of fundamental techniques and ideas."
-- Robert L. Bryant, Director of the Mathematical Sciences Research Institute
"Cheeger and Ebin's book is a truly important classic monograph in Riemannian geometry, with great continuing relevance."
-- Rafe Mazzeo, Stanford University
"Much of the material, particularly the last four chapters, was essentially state-of-the-art when the book first appeared in 1975. Since then, the subject has exploded, but the material covered in
the book still represents an essential prerequisite for anyone who wants to work in the field. To conclude, one can say that this book presents many interesting and recent results of global
Riemannian geometry, and that by its well composed introductory chapters, the authors have managed to make it readable by non-specialists."
-- Zentralblatt MATH | {"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-365-H","timestamp":"2014-04-19T19:43:10Z","content_type":null,"content_length":"17210","record_id":"<urn:uuid:f1cfbfb3-06cf-4414-add9-c82b85dae48a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modern galaxy formation theory is set within the larger scale cold dark matter cosmological model (Blumenthal et al. 1984). The success of that model in explaining the cosmic microwave background
(CMB) (Komatsu et al. 2009) and large scale structure (Seljak et al. 2005, Percival et al. 2007a, Ferramacho et al. 2008, Sanchez et al. 2009) of the Universe makes it the de facto standard. However,
it is important to recognize that the scales on which the simplest cold dark matter (CDM) model has been most precisely tested are much larger than those that matter for galaxy formation ^1. As such,
we cannot fully rule out that dark matter is warm or self-interacting, although good constraints exist on both of these properties (Markevitch et al. 2004, Boehm and Schaeffer 2005, Ahn and Shapiro
2005, Miranda and Macciò 2007, Randall et al. 2008, Yüksel et al. 2008, Boyarsky et al. 2008).
More extensive reviews of the cold dark matter cosmological model can be found in, for example, Narlikar and Padmanabhan (2001), Frenk (2002) and Bertone et al. (2005).
A combination of experimental measures, including studies of the CMB (Dunkley et al. 2009), large scale structure (Tegmark et al. 2004; Cole et al. 2005; Tegmark et al. 2006; Percival et al. 2007a, b
), the Type Ia supernovae magnitude-redshift relation (Kowalski et al. 2008) and galaxy clusters (Mantz et al. 2009, Vikhlinin et al. 2009, Rozo et al. 2010), have now placed strong constraints on
the parameters of the cold dark matter cosmogony. The picture that emerges (Komatsu et al. 2010) is one in which the energy density of the Universe is shared between dark energy (Ω_Λ = 0.728 +0.015/−0.016), dark matter (Ω_c = 0.227 ± 0.014) and baryonic matter (Ω_b = 0.0456 ± 0.0016), with a Hubble parameter of 70.4 +1.3/−1.4 km/s/Mpc. Perturbations on the uniform model seem to be well described by a scale-free primordial power spectrum with power-law index n_s = 0.963 ± 0.012 and amplitude σ_8 = 0.809 ± 0.024.
Given such a cosmological model, the Universe is 13.75 ± 0.11 Gyr old. Galaxies probably began forming at z ~ 20-50 when the first sufficiently deep dark matter potential wells formed to allow gas to
cool and condense to form galaxies (Tegmark et al. 1997; Gao et al. 2007).
The formation of structure in the Universe is seeded by minute perturbations in matter density expanded to cosmological scales by inflation. The dark matter component, having no pressure, must
undergo gravitational collapse and, as such, these perturbations will grow. The linear theory of cosmological perturbations is well understood and provides an accurate description of the early
evolution of these perturbations. Once the perturbations become nonlinear, their evolution is significantly more complicated, but simple arguments (e.g. spherical top-hat collapse and developments
thereof; Gunn 1977, Shaw and Mota 2008) provide insight into the basic behavior. Empirical methods to determine the statistical distribution of matter in the nonlinear regime exist (Hamilton et al.
1991; Peacock and Dodds 1996; Smith et al. 2003; Heitmann et al. 2009). These, together with N-body simulations (e.g. Klypin and Shandarin 1983, Springel et al. 2005b, Heitmann et al. 2008) show that
a network of halos strung along walls and filaments forms, creating a cosmic web. This web is consistent with measurements of galaxy and quasar clustering on a wide range of scales.
The final result of the nonlinear evolution of a dark matter density perturbation is the formation of a dark matter halo: an approximately stable, near-equilibrium state supported against its own
self-gravity by the random motions of its constituent particles. In a hierarchical universe the first halos to form will do so from fluctuations on the smallest scales. Later generations of halos can
be thought of as forming from the merging of these earlier generations of halos. For the purposes of galaxy formation, two fundamental properties of the dark matter halos are of primary concern: (i)
the distribution of their masses at any given redshift and (ii) the distribution of their formation histories (i.e. the statistical properties of the halos from which they formed).
The insight of Press and Schechter (1974) was that halos could be associated with peaks in the Gaussian random density field of dark matter in the early universe. Using the relatively simple
statistics of Gaussian random fields they were able to derive the following form for the distribution of dark matter halo masses such that the number of halos per unit volume in the mass range M to M
+ M is M (dn / dM) where ^2:
where [0] is the mean density of the Universe, M) is the fractional root variance in the density field smoothed using a top-hat filter that contains, on average, a mass M, and [c](t) is the critical
overdensity for spherical top-hat collapse at time t (Eke et al. 1996).
While the Press and Schechter (1974) expression is remarkably accurate given its simplicity, it does not provide a sufficiently accurate description of modern N-body measures of the halo mass
function. Several attempts have been made to "fix" the Press and Schechter (1974) theory by appealing to different filters and barriers (e.g. Sheth et al. 2001) although to date none are able to
accurately predict the measured form without adjusting tunable parameters. The most accurate fitting formulae currently available are those of Tinker et al. (2008; see also Reed et al. 2007,
Robertson et al. 2009). Specifically, the mass function is given by

dn/dM = f(σ) (ρ_0/M) (d ln σ⁻¹/dM),   with   f(σ) = A [(σ/b)⁻ᵃ + 1] exp(−c/σ²),

and where A, a, b and c are parameters determined by fitting to the results of N-body simulations. The mass variance σ²(M) is determined from the power spectrum of density fluctuations,

σ²(M) = (1/2π²) ∫₀^∞ k² P(k) T²(k) W²_M(k) dk,

where P(k) is the primordial power spectrum (usually taken to be a scale-free power spectrum with index n_s), T(k) is the cold dark matter transfer function (Eisenstein and Hu 1999) and W_M(k) is the Fourier transform of the real-space top-hat window function. Tinker et al. (2008) give values for these parameters as a function of the overdensity, Δ, used to define a halo. The mass function had long been expected to be universal when expressed in terms of σ(M) and δ_c. While this is approximately true, the
work of Tinker et al. (2008) demonstrates that universality does not hold when high precision results are considered.
2.3.2. Halo Formation Distribution
A statistical description of the formation of halos, specifically the sequence of merging events and the masses of halos involved in those events, can be extracted using similar arguments as the
original Press and Schechter (1974) approach (Lacey and Cole 1993). These show that the distribution of halo progenitor masses, M_1, at redshift z_1 for a halo of mass M_2 at later redshift z_2 is given by

dN/dM_1 = (1/√(2π)) (M_2/M_1) (δ_c1 − δ_c2)/(σ_1² − σ_2²)^(3/2) exp[−(δ_c1 − δ_c2)²/2(σ_1² − σ_2²)] |dσ_1²/dM_1|,

where σ_1 = σ(M_1), σ_2 = σ(M_2), δ_c1 = δ_c(z_1), δ_c2 = δ_c(z_2). With a zero time-lag (i.e. as z_1 → z_2 and therefore δ_c1 → δ_c2) this can be interpreted as a merging rate (although see
Neistein and Dekel 2008 for a counter argument). Repeated application of this merging rate can be used to build a merger tree. Finding a suitable algorithm is non-trivial and many attempts have been
made (Kauffmann and White 1993; Somerville and Primack 1999; Cole et al. 2000; Parkinson et al. 2008). A recent examination of alternative algorithms is given by Zhang et al. (2008). Current
implementations of merger tree algorithms are highly accurate and can reproduce the progenitor halo mass distribution over large spans of redshift (Parkinson et al. 2008).
A fundamental limitation of any Press and Schechter (1974) based approach is that the merger rates are not symmetric, in the sense that switching the masses M[1] and M[2] results in two different
predictions for the rate of mergers between halos of mass M[1] and M[2]. Benson et al. (2005) and Benson (2008) showed that a symmetrized form of the Parkinson et al. (2008) merger rate function
could be made to approximately solve Smoluchowski's coagulation equation (Smoluchowksi 1916) and thereby provide a solution free from ambiguities. Other empirical determinations of merger rates have
been made (Cole et al. 2008; Fakhouri and Ma 2008a, b).
In addition to these purely analytic approaches, numerical studies utilizing N-body simulations have lead to the development of an empirical understanding of halo formation histories (Wechsler et al.
2002, van den Bosch 2002) and halo-halo merger rates (Fakhouri and Ma 2008a, 2009, Stewart et al. 2009, Fakhouri et al. 2010, Hopkins et al. 2010). Merger trees can also be extracted directly from
N-body simulations (e.g. Helly et al. 2003a, Springel et al. 2005b) which sidesteps these problems but incorporates any limitations of the simulation (spatial, mass and time resolution), and
additionally provides information on the spatial distribution of halos. Such N-body merger trees also serve to highlight some, perhaps fundamental, limitations of the Press and Schechter (1974) type
approach. For example, halos in N-body simulations undergo periods of mass loss, which is not expected in pure coagulation scenarios. The existence of systems of substructures within halos (see
Section 4.1) can even lead to three-body encounters which cause subhalos to be ejected (Sales et al. 2007).
Dark matter halos are characterized by their large overdensity with respect to the background density. Spherical top-hat collapse models (e.g. Eke et al. 1996) show that the overdensity of a newly collapsed halo is Δ_vir ≈ 18π² ≈ 178 for an Einstein–de Sitter universe, with corrections for other cosmologies; the virial radius r_v is then defined as the radius within which the mean density exceeds the background by this factor. Studies utilizing N-body simulations show that halos approximately obey the virial theorem within this radius and that r_v is a characteristic radius for the transition from ordered inflow of matter
(on larger scales) to virialized random motions (on smaller scales) (Cole and Lacey 1996). Dark matter halos have triaxial shapes and the distribution of axial ratios has been well characterized from
N-body simulations (Frenk et al. 1988; Jing and Suto 2002; Bett et al. 2007).
Recent N-body studies (Navarro et al. 2004; Merritt et al. 2005; Prada et al. 2006) find that halo density profiles are better described by the Einasto profile (Einasto 1965) than the Navarro-Frenk-White (NFW) profile (Navarro et al. 1996, 1997). The Einasto density profile is given by

ρ(r) = ρ_{-2} exp{−(2/α)[(r/r_{-2})^α − 1]},

where r_{-2} is a characteristic radius at which the logarithmic slope of the density profile equals −2, ρ_{-2} is the density at that radius, and α controls how rapidly the slope varies. The parameter α correlates with the peak height ν = δ_c(z)/σ(M), as has been shown by Gao et al. (2008) who provide a fitting formula, α ≈ 0.155 + 0.0095 ν², which is a good match to halos in the Millennium Simulation ^3 (Springel et al. 2005b). The value of r_{-2} for each halo is determined from the known virial radius, r_v, and the concentration, c_{-2} ≡ r_v / r_{-2}. The NFW profile has a significantly simpler form and is good to within 10-20% making it still useful. It is given by

ρ(r) = ρ_s / [(r/r_s)(1 + r/r_s)²],

where r_s is a characteristic scale radius and ρ_s is a characteristic density (the density at r = r_s is ρ_s/4). For NFW halos, the concentration is defined as c_NFW = r_v / r_s.
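A brief sketch (my addition) evaluating the two profiles above with made-up parameters, mainly to pin down the conventions:

    import numpy as np

    def rho_nfw(r, rho_s, r_s):
        x = r / r_s
        return rho_s / (x * (1.0 + x) ** 2)

    def rho_einasto(r, rho_m2, r_m2, alpha):
        return rho_m2 * np.exp(-(2.0 / alpha) * ((r / r_m2) ** alpha - 1.0))

    r = np.logspace(-2, 0, 5)            # radii in units of the virial radius
    print(rho_nfw(r, 1.0, 0.2))          # both normalizations are arbitrary here
    print(rho_einasto(r, 1.0, 0.2, 0.17))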
Concentrations are found to depend weakly on halo mass and on redshift and can be predicted from the formation history of a halo (Wechsler et al. 2002). Simple algorithms to approximately determine
concentrations have been proposed by Navarro et al. (1997), Eke et al. (2001) and Bullock et al. (2001b). More accurate power-law fits have also been determined from N-body simulations (Gao et al.
2008; Zhao et al. 2009).
Integrals over the density and mass distribution are needed to compute the enclosed mass, velocity dispersion, gravitational energy and so on for the halo density profile. For NFW halos the integrals
are mostly straightforward, although some require numerical calculation. For the Einasto profile some of these may be expressed analytically in terms of incomplete gamma functions (Cardone et al.
2005). Specifically, expressions for the mass and gravitational potential are provided by Cardone et al. (2005), other integrals (e.g. gravitational energy) must be computed numerically.
^1 Although measurements of the Lyman-α forest (Slosar et al. 2007, Viel et al. 2008) and weak-lensing (Mandelbaum et al. 2006) give interesting constraints on the distribution of dark matter on small scales
they typically require modeling of either the nonlinear evolution of dark matter or the behavior of baryons (or both) which complicate any interpretation. They are therefore not as "clean" as CMB and
large scale structure constraints. Back.
^2 The original derivation by Press and Schechter (1974) differed by a factor of 2, resulting in only half of the mass of the Universe being locked up in halos. Later derivations placed the method on
a firmer mathematical basis and resolved this problem, a symptom of the "cloud-in-cloud" problem (Bond et al. 1991; Bower 1991; Lacey and Cole 1993). Back.
^3 We have truncated this fit at large peak heights, since Gao et al. (2008) were not able to probe the behavior of α in that regime. Back.
A Review of the Euler Mathematics Toolbox
The Euler Math Toolbox
Euler is yet another freely available matrix math language. Euler provides a very capable environment for exploring complex mathematical functions and examining data. It is available for both Windows
and Linux, and provides a minimal GUI environment in both. The Windows version is more complete, including an integrated Maxima package allowing algebraic expressions to be evaluated.
Below is an illustration of a Euler 3-D plot created with the Linux version of Euler. This review will concentrate on the Linux version that is part of the Debian distribution.
From Debian and its derivative operating systems, the installation of Euler is easy. I installed with the Debian apt-get utility as follows:
As root:
apt-get install euler
apt-get install euler-doc
To run Euler, simply type in euler in a terminal window. That will bring up the GUI work environment for Euler. From the environment, one can interactively graph functions, load programs, load data,
and manipulate matrix data with the many tools. To test out Euler, I converted a program to Euler that I'd written in PDL and converted to Yorick, Octave, R, and Scilab.
The program exercised many but certainly not all of Euler's extensive capabilities. Particularly the program exercised a number of matrix routines, the Fourier transform routines, and both line
plotting and 3D plotting routines. While not the fastest of the languages in which I ran the program, I'd estimate Euler to be comparable in speed to Octave.
In matrix syntax, Euler is somewhat similar to MATLAB and Octave. Enough so that MATLAB or Octave users could easily adapt. Sequences of numbers are generated in the same way, and indexing matrices
is the same. In fact, Euler allows matrix indices to be expressed within parentheses or brackets to provide more compatibility with other languages. I found it best to use brackets for matrix
indexing, because Euler sometimes confuses parenthetical indexing with function calls. A number of matrix operations are different, and somewhat unique to the Euler language. For example:
To combine matrices X and Y by columns:
A = X | Y;
To combine matrices X and Y by rows:
A = X _ Y;
To perform a matrix multiply:
A = X . Y;
As you can see with these examples, Euler isn't intended to be a MATLAB clone, just a language that is similar in capability and appearance. Functions are declared in a way unique to Euler. Similar
to other languages, but not exactly like any of those I'm experienced with. To declare a function that accepts 3 arguments and returns 3 results:
function test(x, y, z)
  ...
  return {a, b, c};
endfunction
One nice thing about functions in Euler is that they can return multiple arguments. Note the curly bracket nomenclature for returning multiple arguments. Again similar looking to MATLAB and Octave
users, but not exactly the same as those languages. The same curly bracket nomenclature is used to receive multiple arguments:
{u, v, w} = test(x, y, z);
I was unable to find any form of data structuring in Euler beyond the matrix. That is, to my knowledge there is no way to hold, for example, a stack of images other than as individual matrices, or to
combine different data structures into some kind of single record. However, Euler does have an impressive collection of problem solving, linear algebra, and statistics functions.
One thing to note about the FFT routine in Euler is that it only accepts matrices whose dimensions are a power of 2. If you have some other sized matrix or vector, you must pad with zeros to get
dimensions up to the next power of 2, or trim down to a power of 2.
For help, one can click on the help menu at the top of the GUI and Euler will bring up a browser to display the HTML help documentation. The user can set which browser to use in the .euler/euler.cfg
file. Typing help followed by an Euler function will bring up help on that particular function directly to the GUI window.
Also in the .euler/euler.cfg file is a list of the libraries automatically loaded every time Euler is invoked. A user can create his or her own libraries and have them auto-loaded by adding them to
the euler.cfg file.
File I/O operations are limited compared to many matrix languages. It's no problem if you work exclusively within the Euler system, but makes reading files made from other utilties difficult.
In the Windows version, one can simply write out a matrix with the writematrix, and read a formally written matrix with the readmatrix command. In Linux, there is the writematrix command, but reading
in a matrix one has to use getmatrix, which must be supplied with the size of the matrix in the call function.
The Linux version of Euler can read and write data ASCII files, but assumes that the files are purely numbers. There is little support for reading files that might have other kinds of information,
such as data labels mixed with the data. There is a getchar command, and I was able to write an Euler function that could read past a label to a line feed, and then read in the data as a vector.
Euler has only primitive binary file support, being able to read or write single or double word integers, and can only write from vectors, not matrices.
The main confusing thing I found was that the documentation listed a pair of hatch symbols ## to indicate the start of a comment line. It turns out that the comment indicator is actually a pair of
..This is a comment line
Euler Graphics
Euler has many familiar graphic routines that do what you might expect. The plot function will plot a vector or an x,y pair. Mesh will plot a matrix as a 3D filled wire model. Other routines can do
contours, color contours, and wire model 3D presentations with Euler's own unique arguments.
Euler can even display images, though one has to convert an image into one of the simple file forms that Euler can import.
Here you can see an example of a multiple-line plot created with Euler. Rather than provide extensive optional arguments to plot functions as some languages do, Euler plot functions accept only the
data to be plotted.
Other function calls allow the user to select the plot color, plot scale, etc. To plot multiple functions in different colors as was done on this plot, I used the Euler holding command to allow
additional functions to be plotted on the same window.
In addition to line plots and 3D plots, Euler can do line contours and colored contours, as in this image. For the color filled contours, the user can select the hue color.
In summary, I find Euler to be very useful, with some solution methods not commonly found in math languages. It has a smaller resource requirement than many matrix languages, is easy to use, and easy to program. It probably isn't the best choice for large scale projects or data reduction because of its limited file I/O support and lack of large scale data structures.
What Euler is good for is listed at the Sourceforge Euler Site.
The following pros and cons are my subjective views of Euler:
Freely available for both Windows and Linux. The Windows version includes more features, such as the integrated use of Maxima.
Loads quickly.
A GUI environment for controlling the Euler environment.
An easy to learn Basic style programming language.
Quite fast when solutions are optimized to use intrinsic matrix operators.
Easy to use 2D and 3D graphics utilities.
The Linux version is only the matrix language portion, no Maxima interface.
File handling utilities are minimal, making it difficult to work with files made with other utilities.
String handling features are limited.
Designed to be used interactively -- not in batch mode. | {"url":"http://www.linuxgoodies.com/review_euler.html","timestamp":"2014-04-17T03:48:05Z","content_type":null,"content_length":"20899","record_id":"<urn:uuid:44baa087-5b5c-4a62-be41-878395b643d2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Five Robbers; Need a Computer Aid
Hello all,
I have an interesting problem I want a computer program to solve.
Problem. Once upon a time, there were 5 robbers (A, B, C, D, E). One day, they got 100 gold pieces and decided to carve them up. The way was very weird.
The first robber (A) proposed a scheme to carve up the gold, and the rest of the robbers voted to decide whether they accepted the scheme. If and only if the number of votes in favor was more than half the number of the remaining robbers, the scheme was adopted (i.e. more than — not equal to — 2 of the other robbers had to accept the scheme).
Otherwise, the robber who raised the scheme would be killed.
After robber A was killed, it was robber B's turn to give a scheme, and the rest of the robbers voted. As in the previous case, the scheme was adopted if and only if the votes in favor were more than half the number of the remaining robbers; since there were 3 robbers voting, at least 2 of them had to accept the scheme for it to pass. Otherwise, robber B would be killed, and it was robber C's turn to give a scheme ... the rest may be deduced by analogy.
Suppose all the robbers were clever enough to make sure they wouldn't be killed while getting the most gold. What was the scheme? Who raised it?
(I have no exact answer. I think the scheme was raised by A and the scheme was
A 97
B 0
C 1
D 1
E 1
Can anyone give a computer programming to solve the problem? | {"url":"http://www.dreamincode.net/forums/topic/2412-five-robbers%3b-need-a-computer-aid/page__pid__14027__st__0","timestamp":"2014-04-18T10:21:07Z","content_type":null,"content_length":"75312","record_id":"<urn:uuid:19bc3b4f-6e42-41f1-ace5-2ac1cecc77c6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
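A hedged backward-induction sketch (my addition) in Python, answering the request above. Two assumptions the puzzle leaves open: a robber votes yes only when the offer is strictly better than his expected fallback (survival outranks gold, and with equal gold he prefers the proposer dead), and a proposer buys exactly the cheapest votes needed.

    GOLD, DEAD = 100, -1          # DEAD: payoff sentinel, worse than 0 gold

    def solve(robbers):
        """Outcome {robber: payoff} when `robbers` remain and robbers[0] proposes."""
        if len(robbers) == 1:
            return {robbers[0]: GOLD}
        proposer, others = robbers[0], robbers[1:]
        fallback = solve(others)           # what happens if this proposal fails...
        fallback[proposer] = DEAD          # ...namely, the proposer is killed
        needed = len(others) // 2 + 1      # strictly more than half of the rest
        # each vote costs (fallback + 1) gold, or 0 for a robber facing death
        costs = sorted((max(fallback[r] + 1, 0), r) for r in others)[:needed]
        total = sum(c for c, _ in costs)
        if total > GOLD:                   # votes cannot be bought: proposer dies
            return fallback
        out = {r: 0 for r in robbers}
        for c, r in costs:
            out[r] = c
        out[proposer] = GOLD - total
        return out

    print(solve(list("ABCDE")))   # {'A': 97, 'B': 0, 'C': 1, 'D': 1, 'E': 1}

Under these tie-breaking rules the solver returns the (97, 0, 1, 1, 1) split guessed above; if robbers instead accept when indifferent, the outcome changes, so the answer hinges on that assumption.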
Teachers and Teaching: Theoretical Perspectives and Issues Concerning Classroom Implementation
This chapter analyses and compares various theoretical frameworks that illuminate the teacher's role in technology-integrated learning environments and the inter-relationship between factors
influencing teachers' use of digital technologies. The first section of the chapter considers three frameworks drawing on instrumental genesis, zone theory, and complexity theory, and examines their
relevance by interpreting lesson excerpts from alternative theoretical perspectives. This section also outlines research on relationships between teachers' beliefs, attitudes, mathematical and
pedagogical knowledge, and institutional contexts and their use of digital technologies in school and university mathematics education. The second section considers classroom implementation issues by
asking what we can learn from teachers who use, or have tried to use, digital technologies for mathematics teaching. Issues arising here concern criteria for effective use and the nature of what
counts as “progress” in technology integration. The final section of the chapter identifies work that needs to be done to further develop, test, and apply useful theoretical frameworks and
Within this Chapter
1. Theoretical Perspectives
2. Classroom Implementation
3. Future Visions
4. References
Chapter Title: Teachers and Teaching: Theoretical Perspectives and Issues Concerning Classroom Implementation
Book Subtitle: The 17th ICMI Study
Pages: 311-328
Publisher: Springer US
Copyright Holder: Springer-Verlag US
Keywords: Theoretical frameworks; Teachers and teaching; Instrumental genesis; Zone theory; Complexity theory; Affordances; Teacher beliefs; Teacher knowledge; Institutional context; Technology integration
Editor Affiliations: Inst. Education, University College London; IUFM de Reims
Author Affiliations: 1. University of Queensland, Brisbane, QLD, Australia; 2. University Joseph Fourier and IUFM of Grenoble, Grenoble, France
To view the rest of this content please follow the download PDF link above. | {"url":"http://link.springer.com/chapter/10.1007%2F978-1-4419-0146-0_14","timestamp":"2014-04-19T14:32:02Z","content_type":null,"content_length":"56235","record_id":"<urn:uuid:d77736cf-1220-4398-9aeb-c9761e17598b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem involving slits, diffraction, and incident angle.
I am a bit confused with this question:
A plane wave of 400-nm light is incident on a 25-µm slit in a screen, as shown in the figure below. At what incident angle will the first null of the diffraction pattern be on a line
perpendicular to the screen? (picture is attached).
Do I use the formula wavelength = (d sin A)/n? I am a bit unsure what the "first null" refers to and how this should be taken into account in my formula. Is the first null referring to "n"?
Help would be appreciated. Thank you. | {"url":"http://www.physicsforums.com/showthread.php?t=130121","timestamp":"2014-04-17T12:48:50Z","content_type":null,"content_length":"23545","record_id":"<urn:uuid:2a2b638a-2c31-4d65-b2b4-d2cb49336f4f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
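A hedged reading (my note, not an answer from the thread): the "first null" is the first diffraction minimum — m = 1 in the single-slit condition — not the n in your formula. Tilting the incident beam by θi adds a path difference d·sin θi across the slit, so the minima satisfy d(sin θ − sin θi) = mλ, and putting the first null on the screen-perpendicular (θ = 0) gives sin θi = λ/d:

    from math import asin, degrees

    wavelength = 400e-9          # m, from the problem statement
    slit = 25e-6                 # m
    print(degrees(asin(wavelength / slit)))   # ~0.917 degrees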
Torricelli's Law
December 12th 2013, 02:24 PM #1
Junior Member
Feb 2009
Torricelli's Law
I am working through Penney and Edwards Differential Equations and Boundary Value Problems 4th edition.
On section 1.4 Problem 60, I am not getting what they have for an answer, and I am not completely certain that it is my fault as they make their fair share of errors.
The question reads:
A cylindrical tank with length 5 ft and radius 3 ft is situated with its axis horizontal. If a circular bottom hole with a radius of 1 in. is opened and the tank is initially half full of xylene,
how long will it take for the liquid to drain completely?
The answer they get is 6 min 3 sec.
I get 8 min 32 sec.
You are supposed to use Torricelli's Law which states: A(y) dy/dt = -a * sqrt(2gy) where g is 32 ft/sec^2.
To do this, I observed that the area of a cross section A(y) is a constant 9 * pi. a = (1/12) ^ 2 * pi.
After this, it is a routine differential equation. And solving for t in the equation y(t) = 0 yields 162 * sqrt(10) sec or 8 min 32 sec.
Last edited by grandunification; December 12th 2013 at 02:35 PM.
Re: Torricelli's Law
If the cylinder is on its side, which I think is what is meant by its axis being horizontal, then the cross-sectional area is going to vary as the tank drains.
Re: Torricelli's Law
I think that is certainly what is going on. However, I am not sure what the cross sectional area should be.
Re: Torricelli's Law
When $y=0$, we can say the cylinder is empty. When $y = 3$, the cylinder is half full. So, drop a line from the center of one of the end circles so that it makes a right angle with the horizontal
line that is at the height of the liquid, then to the edge of the circle where the liquid intersects the circle, then back to the center of the circle. This makes a triangle with one side $3-y$
and hypotenuse 3. The other side is then $\sqrt{3^2-(3-y)^2} = \sqrt{6y-y^2}$. The whole length of the liquid along the circle is twice that length, or $2\sqrt{6y-y^2}$. The cross-sectional area
is that length times the length of the cylinder: $10\sqrt{6y-y^2}$. So, $A(y) = 10\sqrt{6y-y^2}$.
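A hedged numerical check (my addition) using the A(y) above, with g = 32 ft/s² and hole radius 1 in = 1/12 ft; it reproduces the book's 6 min 3 s:

    from math import pi, sqrt
    from scipy.integrate import quad

    g, a = 32.0, pi * (1.0 / 12.0) ** 2        # ft/s^2, hole area in ft^2
    A = lambda y: 10.0 * sqrt(6.0 * y - y * y) # free-surface area at depth y
    # Separating A(y) dy/dt = -a sqrt(2 g y) gives T = integral_0^3 A(y)/(a sqrt(2 g y)) dy
    T, _ = quad(lambda y: A(y) / (a * sqrt(2.0 * g * y)), 0.0, 3.0)
    print(T, "s =", int(T // 60), "min", round(T % 60), "s")   # ~363 s = 6 min 3 s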
Nov 2010 | {"url":"http://mathhelpforum.com/differential-equations/225031-toricelli-s-law.html","timestamp":"2014-04-21T10:51:04Z","content_type":null,"content_length":"36852","record_id":"<urn:uuid:a9728d59-6cdc-4a12-8f7b-90364a56e263>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: December 2004 [00289]
[Date Index] [Thread Index] [Author Index]
Re: problem getting the area under a parametric curve
• To: mathgroup at smc.vnet.net
• Subject: [mg52775] Re: problem getting the area under a parametric curve
• From: "Roger L. Bagula" <rlbtftn at netscape.net>
• Date: Mon, 13 Dec 2004 04:22:19 -0500 (EST)
• References: <cpauof$ipf$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
This distribution isn't unique either: I get the same type of result with
a power five, but the plot isn't complete in this case.
I should mention that an attempt to solve for pair functions of this
type failed in Mathematica (gave the wrong functional results).
(* power five pair and rotated distribution function*)
y0=2^(1/5)*(5 )^(1/5)*t*(1+2 t^10+t^20/5)^(1/5)/(1+t^5)
Roger Bagula wrote:
> I worked on this late last night.
> I had trouble even having the curve well defined
> finding the area under it.
> I used the symmetry to integrate the side
> that was easiest. There is only one real zero for the x parametric
> in t which helps.
> An Infinite integral does appear to exist for the distribution.
> (* cubic pair and rotated distribution function*)
> Clear[x0,y0,ang,x,y,z]
> x0=(1-t^3)/(1+t^3)
> y0=2^(1/3)*t*(3+t^6)^(1/3)/(1+t^3)
> ParametricPlot[{x0,y0},{t,-2*Pi,2*Pi}]
> Simplify[x0^3+y0^3]
> ang=4
> x=Cos[Pi/ang]*x0-Sin[Pi/ang]*y0
> y=Cos[Pi/ang]*y0+Sin[Pi/ang]*x0
> NSolve[x==0,t]
> N[y/.t->0.486313]
> N[x/.t->0.486313]
> ParametricPlot[{x,y},{t,-2*Pi,2*Pi}]
> f[t_]=x
> norm=Integrate[-y*f'[t],{t,0.486313,Infinity}]
> a0=N[2*norm]
> g1=ParametricPlot[{x,y}/(a0),{t,-2*Pi,2*Pi}]
> g2=Plot[Exp[-t^2/2]/Sqrt[2*Pi],{t,-Pi,Pi}]
> Show[{g1,g2}]
> Respectfully, Roger L. Bagula
> tftn at earthlink.net, 11759Waterhill Road, Lakeside,Ca 92040-2905,tel: 619-5610814 :
> alternative email: rlbtftn at netscape.net
> URL : http://home.earthlink.net/~tftn | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Dec/msg00289.html","timestamp":"2014-04-18T15:51:24Z","content_type":null,"content_length":"36372","record_id":"<urn:uuid:0d1b7a53-0525-4377-b44b-be1619f3a869>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computer Arithmetic
Lecture Notes
CSCI 301: Great Ideas in Computer Science
(c) 1993 - 2007 John E. Howland, All Rights Reserved
Department of Computer Science
Trinity University
One Trinity Place
San Antonio, Texas 78212-7200
Internet: jhowland@ariel.cs.trinity.edu
Most computers use complement arithmetic for integer representations. The reason for this is mostly to simplify the circuitry required to perform integer arithmetic operations. We will see in Section
3.1.1 that negative numbers may be represented in complement form and that the operation of subtraction may be accomplished by adding the complement of a number. We will show that the complement of a
number is very easy to calculate and both addition and subtraction can be accomplished by adding! To explain the basic idea behind complement arithmetic we will use base 10 representations for our
numbers. However, complement arithmetic is independent of the number base used. Our choice of base 10 is one of convenience since most of us have better arithmetic skills using base 10 numbers than
base 2 numbers.
First, note that ordinary mathematical notation for numbers uses positional notation. That is, the rightmost digit has a place value of 1 (10^0), the next rightmost digit has a place value of 10 (10^
1), the next digit has a place value of 100 (10^2), etc. Hence, 123 is 1x100 + 2x10 + 3x1.
To simplify our discussion we limit our arithmetic to 4 digit integers. Limiting the size of numbers we deal with is reasonable since we need to store numbers in the memory of a computer and we do
not want to use a representation which wastes valuable memory space.
Consider the following subtraction problem:
4221 - 0063
which is the same problem as
4221 - 0063 + (10000 - 10000)
which is the same problem as
4221 + (9999 - 0063) + 1 - 10000
We first combine
9999 - 0063 = 9936
9936 is called the 9's complement of 0063. Next we add 1 to obtain the 10's complement.
9936 + 1 = 9937
The 10's complement is then added to 4221 yielding
4221 + 9937 = 14158
Recall that we still need to subtract 10000, however, notice that this problem will always have a carry of 1 out of the 4th digit and since we are just representing 4 decimal digits, we accomplish
the subtraction of 10000 by ignoring the carry. This process yields a result of 4158 which is the correct answer for the original problem. Note also, that we have accomplished the operation of
subtraction by adding the 10's complement!
The 10's complement of a number acts like a negative representation for that number, that is, adding the 10's complement of x is the same as subtracting x. A little experimentation shows that if the
left most digit is 5 or greater, then that number represents a negative. For example, (using 4 digit 10's complement representations) 9999 is -1 and 5000 is -5000 and 5001 is -4999.
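A minimal Python sketch of the 4-digit 10's-complement subtraction just described (the function name and structure are mine, not from the notes):

def tens_complement_sub(a, b, digits=4):
    mod = 10 ** digits                 # 10000 for 4 digits
    nines = (mod - 1) - b              # 9's complement of b
    tens = nines + 1                   # 10's complement of b
    return (a + tens) % mod            # adding; % mod drops the carry

print(tens_complement_sub(4221, 63))   # 4158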
Complement arithmetic is independent of the radix and works in a similar fashion for radix 2 (base 2) as the following example will illustrate.
The reason we wish to use base 2 rather than base 10 is that electronics technology has not been able to produce inexpensive, reliable and fast circuits which have 10 stable states (used to represent
the 10 base 10 digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9). However, it is possible to build inexpensive, reliable and fast circuits which have 2 stable states (used to represent the 2 base 2 digits 0
and 1). A popular family of circuits, TTL, (Transistor Transistor Logic) uses 0 volts to represent the digit 0 and +5 volts to represent the digit 1.
For simplicity, we limit our example to base 2 numbers having 6 bits (binary digits). The problem of
31 - 26
is, written in binary notation,
011111 - 011010
which is the same as the problem:
011111 - 011010 + (1000000 - 1000000)
which is the same problem as:
011111 + (111111 - 011010) + 1 - 1000000
We first combine
111111 - 011010 = 100101
100101 is called the 1's complement of 011010. Notice that the 1's complement is obtained by simply changing 0's to 1's and 1's to 0's. This is easily implemented with a very simple and very basic
TTL circuit element called a complementer or a not circuit. Next we add 1 to the 1's complement.
100101 + 1 = 100110
We call 100110 the 2's complement of 011010. Next we add the 2's complement to our original number.
011111 + 100110 = 1000101
As before, since we are representing just 6 bits, we accomplish the subtraction of 1000000 by ignoring the carry out of the 6th bit yielding a result of
000101
which is the answer (5 base 10).
2's complement representations may be used to represent negative values. A 2's complement representation is negative if the leftmost bit is 1 otherwise the representation is positive. Using 6 bit 2's
complement representations, 111111 = -1, 100000 = -32, 100001 = -31 and 011111 = 31.
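The same mechanics in binary, as a Python sketch of 6-bit 2's-complement subtraction; masking with 111111 plays the role of ignoring the carry out of the 6th bit (names are mine):

def twos_complement_sub(a, b, bits=6):
    mask = (1 << bits) - 1             # 111111 for 6 bits
    ones = ~b & mask                   # 1's complement: flip the bits
    twos = (ones + 1) & mask           # 2's complement
    return (a + twos) & mask           # & mask discards the carry

print(format(twos_complement_sub(0b011111, 0b011010), "06b"))   # 000101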
We need to be able to convert integers from base 10 representation to representations in another base, such as base 2 or base 16. Before we describe this conversion process, notice that in the
positional notation for an integer n, base b, the rightmost digit (one's digit) is multiplied by b^0. The next digit is multiplied by b^1; the next digit by b^2 , etc. The value of the number is
determined as the sum of these respective powers. For example,
124 = 1x10^2 + 2x10^1 + 4x10^0
Also, notice that the ratio of the place value of any two consecutive digits is always b. In Example 3.1.4, b = 10.
There are other commonly used number systems where the ratio of two consecutive digits is not always the same. For example, when we keep time in the form centuries, years, days, hours, minutes and
seconds, the ratios between the consecutive digits are 100, 365, 24, 60 and 60. That is, there are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, 365 days in a year and 100 years
in a century. If we want to convert time, measured in seconds to time measured in centuries, years, days, hours, minutes and seconds we would use the following procedure:
a. Divide the number of seconds by 60 (the number of seconds per minute). The remainder of this division is the number of seconds. The quotient is the number of minutes.
b. Divide the number of minutes (the quotient of the previous division) by 60 (the number of minutes per hour). The remainder of this division is the number of minutes. The quotient of this division
is the number of hours.
c. Divide the number of hours (the quotient of the previous division) by 24 (the number of hours per day). The remainder of this division is the number of hours. The quotient of this division is the
number of days.
d. Divide the number of days (the quotient of the previous division) by 365 (the number of days per year). The remainder of this division is the number of days. The quotient of this division is the
number of years.
e. Finally, divide the number of years (the quotient of the previous division) by 100 (the number of years per century). The remainder of this division is the number of years. The quotient of this
division is the number of centuries.
Suppose that we represent the ratios of the place values of each digit in a number system as a list of numbers. We call such a list a base list. The base list for a time number system might be 100
365 24 60 60. This list is determined by the fact that there are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, 365 days in a year and 100 years in a century. Algorithm 3.1.5,
which converts time in seconds to time measured in centuries, years, days, hours, minutes and seconds uses the elements of the base list 100 365 24 60 60 in reverse order, dividing each quotient by
the next (in reverse order) base list value. The remainders produced by this process are the digits in the new number system from least significant to most significant. This process is modeled by the
following J verb:
rep =: #:
Suppose we wish to convert 1000000 seconds to centuries, years, days, hours, minutes and seconds. Then we would write:
100 365 24 60 60 rep 1000000 ==> 0 11 13 46 40
That is, 1000000 seconds is 0 years, 11 days, 13 hours, 46 minutes and 40 seconds. If you carry out the individual steps of the algorithm on this example, you can see that 1000000 divided by 60 is
16666 with a remainder of 40, 16666 divided by 60 is 277 with a remainder of 46, etc.
Notice that even though this algorithm divides by 100 (the number of years in a century), the result does not contain an item for the number of centuries. This is because the number of centuries is
the quotient of the division by 100 whereas the number of years is the remainder of the division by 100. The rep verb contains a provision to obtain the quotient as well as the remainder of the
division by the last element (in reverse order). If the first element of the base list (last element in reverse order) is zero, no division is performed and the rep verb terminates, returning the
quotient as its answer.
To compute the 6 digit base 2 representation of 25 you would use a base list of 2 2 2 2 2 2.
2 2 2 2 2 2 rep 25 ==> 0 1 1 0 0 1
To compute the 4 digit base 3 representation of 29 you would write:
3 3 3 3 rep 29 ==> 1 0 0 2
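For readers without J, here is a Python rendering of the algorithm that rep implements — repeated division by the base-list elements taken in reverse order, collecting remainders (a sketch; the name rep and the handling of a leading zero follow the description above):

def rep(base_list, n):
    digits = []
    for b in reversed(base_list[1:]):              # divide by each divisor in reverse
        n, r = divmod(n, b)
        digits.append(r)
    # if the first element is zero, return the final quotient unreduced
    digits.append(n % base_list[0] if base_list[0] else n)
    return list(reversed(digits))

print(rep([100, 365, 24, 60, 60], 1000000))   # [0, 11, 13, 46, 40]
print(rep([2, 2, 2, 2, 2, 2], 25))            # [0, 1, 1, 0, 0, 1]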
The J verb base performs conversions from a given number system which is determined by a base list to base 10.
base =: #.
For example, to compute the number of seconds in 0 years, 11 days, 13 hours, 46 minutes and 40 seconds you would write:
100 365 24 60 60 base 0 11 13 46 40 ==> 1000000
Notice that the base verb un-does the calculation done by the rep verb. The relationship between these two verbs is that base is the inverse of rep. A close examination of Agorithm 3.1.5 reveals a
repeated process of division and subtraction to get the remainders. To un-do this calculation (which base does) we have to un-do (in reverse order) the operations of division and subtraction. This
means we have to add and multiply. Other examples:
2 2 2 2 2 2 2 2 base 0 1 1 1 1 1 1 1 ==> 127
3 3 3 3 base 1 0 0 2 ==> 29
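And a matching Python sketch of base, the inverse operation: multiply-and-add in forward order, never using the first element of the base list, exactly as the next paragraph notes:

def base(base_list, digits):
    n = digits[0]                      # most significant digit is the first summand
    for b, d in zip(base_list[1:], digits[1:]):
        n = n * b + d                  # un-do rep's divide-and-subtract
    return n

print(base([100, 365, 24, 60, 60], [0, 11, 13, 46, 40]))   # 1000000
print(base([3, 3, 3, 3], [1, 0, 0, 2]))                    # 29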
Finally, we should notice that the first element (last element in reverse order) of a base list is not used in the rep verb to produce a quotient for any successive divisions (there are none since this is the last divisor); it is only used to produce the last remainder (most significant digit of the result). This means that the base verb should not use the first element of the base list either, and the first element of the representation list (most significant digit) is the first summand of a sequence of add followed by multiplication. A close examination of the base verb algorithm will reveal these facts.

Most modern computer systems represent several different sized integers in memory. The most common size for an individually addressable memory cell is 8 bits (one byte). These cells are used singly to represent very small integers and in pairs or groups of 4 to represent progressively larger integer values.
One byte integers, n, satisfy:
unsigned 0 <= n <= 255 = -1 + 2^8.
signed -128 <= n <= 127 = -1 + 2^7
16 bit integers (short integers), n, satisfy:
unsigned 0 <= n <= 65535 = -1 + 2^16
signed -32768 <= n <= 32767 = -1 + 2^15
32 bit integers (long integers), n, satisfy:
unsigned 0 <= n <= 4294967295 = -1 + 2^32
signed -2147483648 <= n <= 2147483647 = -1 + 2^31
Recently, some machines are being produced which operate directly on 64 bit integers.

To summarize, given circuitry for addition (later we will see how such circuitry might be built) and circuitry for computing the 1's complement (we have to have complementing circuitry to add anyway) we can accomplish both addition and subtraction. Also, since multiplication is repeated addition and division is repeated subtraction, we can accomplish these operations as well.

When we wish to view 32 bit or 64 bit memory images of various types, the human eye has trouble dealing with 32 or 64 bits of information. Hence, groups of 4 bits at a time are used to solve this problem. So a 32 bit quantity requires only 8 symbols. This notation, hexadecimal (radix 16), uses the following symbols:
0000 - 0 1000 - 8
0001 - 1 1001 - 9
0010 - 2 1010 - A
0011 - 3 1011 - B
0100 - 4 1100 - C
0101 - 5 1101 - D
0110 - 6 1110 - E
0111 - 7 1111 - F
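In a language with integer bit operations these hexadecimal views are easy to produce; the following Python lines (an illustration, not part of the original notes) mask a 2's-complement value down to a fixed width and format it in radix 16:

# masking with 2^bits - 1 yields the unsigned bit pattern of a negative value
print(format(-1 & 0xFFFFFFFF, "08X"))   # FFFFFFFF  (32-bit -1)
print(format(2 & 0xFFFF, "04X"))        # 0002      (16-bit  2)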
Hence, the 32 bit 2's complement value of -1 has the following hexadecimal representation: FFFFFFFF
The 16 bit 2's complement value of 2 has the following hexadecimal representation: 0002, and so on.

Some computers support yet another representation for integers, called decimal arithmetic or BCD arithmetic.
BCD integers are variable length strings of bytes in memory where groups of 4 bits are used to represent decimal digits. This means that 2 decimal digits may be packed into each byte. Some systems
support use of unused bits in the leftmost digit to represent the sign of the number. For example, 329 would require two bytes of memory and would look like (using hexadecimal notation) 03 29.
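A small Python sketch of the packing just described (unsigned only; the optional sign nibble is not handled here):

def to_bcd(n):
    s = str(n)
    if len(s) % 2:
        s = "0" + s                    # pad to a whole number of bytes
    # two decimal digits per byte, one per 4-bit nibble
    return bytes(int(s[i]) << 4 | int(s[i + 1]) for i in range(0, len(s), 2))

print(to_bcd(329).hex())               # 0329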
Scientific notation uses a two part representation for a number which consists of a signed fraction (mantissa) and a signed exponent (characteristic).
An example might be +1.234 x 10^23
Numbers written in scientific notation are usually written in a normal form (for example the value of the fraction being written so that it is between 1 and 10). If the result of some calculation is
.0001 x 10^15, then it must be normalized as
1.000 x 10^11
When scientific notation is used to represent inexact values on a computer it is usually referred to as floating point notation.
When graphed, adjacent floating point numbers close to zero are very closely spaced, however, the gaps between adjacent floating point numbers far from zero are more than astronomically large!
The Institute for Electrical and Electronics Engineers is an engineering professional society which has been concerned with establishing various standards in the field of electrical engineering,
electronics and computer engineering. During the early years of computer design, each manufacturer designed its own method of using scientific notation to represent inexact computational values. Not
only were these representations not compatible, but they also used different operational methods to handle conversions and rounding of inexact values. This meant that the same sequence of inexact
operations on different computers may often produce different results!
In an effort to try to standardize inexact computation, the IEEE formed a committee in the mid 1980's which produced the IEEE Standard 754 for binary floating point arithmetic. Since then, most
computer manufacturers have begun to use this standard for representations which is given, in part, in Section 3.5.1.
Listed below are three IEEE data types. All three are also SANE (Standard Apple Numeric Environment) data types as well. In fact, Apple's SANE package provides the most complete and accurate
implementation of the IEEE standard to date. Each of the diagrams in the following pages is followed by a table that gives the rules for evaluating the number. In each field of each diagram, the
leftmost bit is the msb (most significant bit) and the rightmost is the lsb (least significant bit). Symbols used in the diagrams are defined in the following table.
Symbol Description
v value of the number
s sign bit
e biased exponent
i explicit one's bit (extended type only)
f fraction
The 32-bit single format is divided into three fields as shown below:
| s (1 bit) | e (8 bits) | f (23 bits) |
The value v of the number is determined by these fields as shown in the following table:
Values of single-format numbers (32 bits)
e f v class of v
0<e<255 (any) v=(-1)^s x 2^(e-127) x (1.f) normalized
e=0 f!=0 v=(-1)^s x 2^(e-126) x (0.f) denormalized
e=0 f=0 v=(-1)^s x 0 zero
e=255 f=0 v=(-1)^s x infinity infinity
e=255 f!=0 v is a NaN NaN
The 64-bit double format is divided into three fields as shown below:
| s (1 bit) | e (11 bits) | f (52 bits) |
The value v of the number is determined by these fields as shown in the following table:
Values of double-format numbers (64 bits)
e f v class of v
0<e<2047 (any) v=(-1)^s x 2^(e-1023) x (1.f) normalized
e=0 f!=0 v=(-1)^s x 2^(e-1022) x (0.f) denormalized
e=0 f=0 v=(-1)^s x 0 zero
e=2047 f=0 v=(-1)^s x infinity infinity
e=2047 f!=0 v is a NaN NaN
For example, the double representation (in hex notation) of 1.5 is 3FF8000000000000.
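This can be verified mechanically; the following Python sketch (my illustration) reinterprets the bits of a double and splits out the s, e, and f fields:

import struct

bits = struct.unpack(">Q", struct.pack(">d", 1.5))[0]
print(format(bits, "016X"))            # 3FF8000000000000
s = bits >> 63                         # sign bit
e = (bits >> 52) & 0x7FF               # biased exponent
f = bits & ((1 << 52) - 1)             # fraction bits
print(s, e - 1023, f)                  # 0 0 2251799813685248 (f = 0.5 * 2^52)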
The 80-bit extended format is divided into four fields as shown below:
| s (1 bit) | e (15 bits) | i (1 bit) | f (63 bits) |
The value v of the number is determined by these fields as shown in the following table:
Values of extended-format numbers (80 bits)
e i f v class of v
0<=e<=32766 1 (any) v=(-1)^s x 2^(e-16383) x (1.f) normalized
0<=e<=32766 0 f!=0 v=(-1)^s x 2^(e-16383) x (0.f) denormalized
0<=e<=32766 0 f=0 v=(-1)^s x 0 zero
e=32767 (any) f=0 v=(-1)^s x infinity infinity
e=32767 (any) f!=0 v is a NaN NaN
Finally, we need some discussion about the kinds of numbers which can be represented. When we talk about the real numbers from a mathematical point of view, we sometimes classify the reals into two
groups; those which are rational and those which are not rational (irrational).
The rationals are those which can be represented in the form
a/b where a and b are integers and b not zero.
An equivalent formulation is that the rationals are exactly those numbers whose representation in any radix is eventually repeating.
The irrational numbers can be characterized as those numbers whose representation in any radix never repeat.
This means that an infinite amount of memory would be required to exactly represent an irrational number. Since this is impossible, we are forced to cut off the representation after a fixed number of
digits. As soon as this is done, we are no longer representing the irrational exactly, but rather, we have substituted a rational (which approximates the irrational) whose radix representation
repeats in the digit zero.
This means that all computer numeric representations (called machine numbers) are necessarily rational numbers. | {"url":"http://www.cs.trinity.edu/About/The_Courses/cs301/03.computer.arithmetic/03.comp.arith.j.html","timestamp":"2014-04-17T10:20:13Z","content_type":null,"content_length":"21875","record_id":"<urn:uuid:dd7331c3-b8f8-462b-b1ac-565b9bfc6070>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00326-ip-10-147-4-33.ec2.internal.warc.gz"} |
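A concrete demonstration of this last point: in Python, the Fraction class can display the exact rational value that a float actually stores.

from fractions import Fraction

print(Fraction(0.1))      # 3602879701896397/36028797018963968, not 1/10
print(0.1 + 0.2 == 0.3)   # False: each literal is only a rational approximation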
Music Package
The functions defined in allow you to make conversions between cents and hertz, and play scales in one of the common tuning systems, or in a user-specified tuning system. In addition, a set of
equal-tempered pitch/frequency equivalents is defined.
When you try the examples in this documentation, your computer display may not look exactly the same, since the graphic displays accompanying Mathematica's sound generation vary from platform to
MusicScale[ilist,freq,dur] create a Sound object that is a sequence of pitches corresponding to ilist, a list of intervals measured in cents, starting at freq hertz and lasting dur seconds
MusicScale creates a pitch sequence from a predefined interval list or an arbitrary list of numbers interpreted as intervals measured in cents.
JustMajor is an interval list. This plays a major scale in just intonation that starts at 440 Hz and lasts for 3 seconds.
The list of intervals does not have to be in ascending or descending order. Here the starting frequency is 880 Hz.
QuarterTone, PythagoreanMajor, PythagoreanChromatic, MeanMajor, MeanMinor, MeanChromatic, SixthTone, JustMajor, JustMinor, TemperedMajor, TemperedMinor, TemperedChromatic
Predefined interval lists measured in cents.
HertzToCents[flist] convert a list of frequencies measured in Hertz to a list of intervals measured in cents
CentsToHertz[ilist] convert a list of intervals measured in cents to a list of frequencies measured in Hertz, beginning at frequency 440 Hertz
CentsToHertz[ilist,f] convert a list of intervals measured in cents to a list of frequencies measured in Hertz, beginning at frequency f
Converting between Hertz and cents.
The two functions HertzToCents and CentsToHertz convert a list of one type to its complementary type.
This takes a list of frequencies in Hertz and gives the distance from one frequency to the next in cents.
Here is a list consisting of the frequencies in a one-octave, equal-tempered chromatic scale starting at 440 Hertz.
This gives the frequency that is 600 cents above the default frequency, 440 Hertz, or in musical terminology, one-half octave above the pitch A4.
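The conversions themselves are simple logarithms. The following Python stand-ins (illustrations of the underlying formulas, not the Mathematica functions) reproduce the example above:

import math

def hertz_to_cents(f1, f2):
    return 1200 * math.log2(f2 / f1)

def cents_to_hertz(cents, f0=440.0):
    return f0 * 2 ** (cents / 1200)

print(cents_to_hertz(600))        # ~622.25 Hz, half an octave above A4
print(hertz_to_cents(440, 880))   # 1200.0 cents = one octave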
The package provides a list of equal-tempered pitch/frequency equivalents. Pitches are named in pitch class/octave notation, where the pitch class is given by a letter from A to G, and the octave is an integer from 0 to 7. The names of flat and sharp notes are written as Aflat3, Asharp2, and so on.
Most chromatic equivalences are available; for example, C-flat is the same as B, and E-sharp is the same as F. Double-flats and double-sharps are not defined.
In equal temperament, the difference between a pitch and the pitch a perfect fifth above it (for example, A4 and E5) is 700 cents.
Fulton Gonzalez
Office: Bromfield-Pearson Building, Room 203
Phone: 72368
Research Interests
Noncommutative harmonic analysis, representations of Lie groups, integral geometry, and Radon transforms.
Selected Publications
On the Range of the Radon Transform on Grassmann Manifolds
Support Theorems for Radon Transforms on Higher Rank Symmetric Spaces (with E. T. Quinto)
Proc. Amer. Math. Soc. 122 (1994), pp. 1045-1052
On the Range of the Radon Transform and Its Dual
Trans. Amer. Math. Soc. 327 (1991), pp. 601-619
Invariant Differential Operators and the Range of the Radon D-Plane Transform
Math. Ann. 287 (1990), pp. 627-635
Radon Transforms on Grassmann Manifolds
J. Funct. Anal. 71 (1987), pp. 339-362
Invariant Differential Operators on Grassmann Manifolds, (with S. Helgason)
Advan. Math 60 (1986), pp. 81-91
Preprints (available Upon Request)
John's Equation and the plane-to-line transform on R^3
(with T. Kakehi) Radon Transforms on Affine Grassmann Manifolds | {"url":"http://www.tufts.edu/as/math/gonzalez.html","timestamp":"2014-04-16T13:08:42Z","content_type":null,"content_length":"2047","record_id":"<urn:uuid:c0be9b7d-b4bd-44f9-b551-777f60235e7a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimally 2-vertex-connected graphs?
A class of "minimally 2-vertex-connected graphs" - that is, 2-vertex-connected graphs which have the property that removing any one vertex (and all incident edges) renders the graph no longer
2-connected - have come up in my research.
Dirac wrote a paper on "minimally 2-connected graphs" (G. A. Dirac, Minimally 2-connected graphs, J. Reine Angew. Math. 228 (1967), pp. 204-216), which gives quite a detailed description of the
structure of such graphs. However, in his sense, minimal 2-connectivity means that deleting any EDGE leaves a graph which is not 2-connected, which is not an equivalent property to the
vertex-deletion one. Does anyone know anything about graphs with the latter property?
In the hope of stimulating some discussion, here is a wildly speculative and vague conjecture: The only graphs satisfying this property are simple cycles, and certain cycles with chords.
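One way to probe the conjecture empirically (my own sketch, not part of the original thread) is a brute-force test with networkx: check that the graph is 2-connected and that deleting any single vertex leaves a graph that is not.

import networkx as nx

def minimally_2_vertex_connected(G):
    # 2-connected, and no longer 2-connected after any single vertex deletion
    if G.number_of_nodes() < 4 or not nx.is_biconnected(G):
        return False
    for v in list(G.nodes):
        H = G.copy()
        H.remove_node(v)
        if nx.is_biconnected(H):
            return False
    return True

print(minimally_2_vertex_connected(nx.cycle_graph(6)))   # True: a simple cycle

G = nx.cycle_graph(6)
G.add_edge(0, 3)                                         # a cycle with one chord
print(minimally_2_vertex_connected(G))                   # also True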
2 Answers
Here is a more general family:
Draw your favorite tree in the plane, with circles for the nodes and "thick" lines for the edges. Now turn every circle into a cycle, and every thick line into a pair of parallel
paths $p_1, \ldots, p_m$ and $q_1, \ldots, q_n$ with various crossbraces. The crossbraces just have to follow the rule that if $p_i$ is connected to $q_\ell$ and $p_j$ is connected to
$q_k$, for $i<j$ and $k<\ell$, then $j=i+1$ or $\ell=k+1$.
This is probably still not close to a complete characterization, but at least shows that the class is a lot broader than the small class you posited to promote discussion.
add comment
As the edited question mentions, Dirac did something similar, although minimality is with respect to edge deletion while connectivity is with respect to vertices (hence some confusion arose). The link is here.
I will mention that Chaty and Chein (1979) solved the problem where minimality is with respect to edge deletion, and connectivity is with respect to edges.
Also, I don't think that all such graphs are cycles with some chords, since subdividing any edge of a minimally 2-connected graph preserves minimal 2-connectivity. So, I can certainly
destroy a chord by subdividing it.
Regarding "subdividing any edge of a minimally 2-connected graph preserves minimal 2-connectivity": Consider a cycle with one chord $(u,v)$. Then subdivide the chord: add a new node $t$
and replace $(u,t)$ with the path $(u,t,v)$. The resulting graph is no longer minimally 2-connected: you can remove the node $t$ and you are left with a cycle which is 2-connected.
However, if you add two nodes $s$ and $t$ and replace $(u,v)$ with the path $(u,s,t,v)$, then I think the new graph is minimally 2-connected (provided that in the cycle the distance
between $u$ and $v$ is at least 3). – Jukka Suomela Aug 14 '10 at 10:58
can you help me on this question
Ringwood, IL Statistics Tutor
Find a Ringwood, IL Statistics Tutor
...I am able to help in pre-algebra, algebra, Geometry, College-algebra, Trigonometry and Calculus. I am also helping students who are planning to take the AP Calculus, ACT and SAT exams. Many
students who hated math started liking it after my tutoring.
12 Subjects: including statistics, calculus, geometry, algebra 1
...I’ve been teaching and/or tutoring math for almost 20 years now, and in that time, I’ve helped all ages and abilities to achieve their goals in a wide variety of topics in mathematics. My
primary goal as a tutor is to adapt to each individual's learning style in order to make learning as efficie...
25 Subjects: including statistics, calculus, geometry, ESL/ESOL
...I have a Minor in Math/Computer Science and the equivalent of a Minor in English. I have started working on my Master's degree in English. I currently teach GED classes through Rock Valley
38 Subjects: including statistics, reading, English, writing
...While I specialize in high school and college level mathematics, I have had success tutoring elementary and middle school students as well. I have experience working with ACT College Readiness
Standards and have been successful improving the ACT scores of students. In first tutoring sessions wi...
19 Subjects: including statistics, calculus, geometry, algebra 1
...I continued applying trig skills through high school, where I was a straight A student and completed Calculus as a junior. I tutored math through college to stay fresh. Finally, trigonometry
always finds its way into my day-to-day work, from teaching college-level physics concepts to building courses for professional auditors.
13 Subjects: including statistics, calculus, geometry, algebra 1
Applying the Model
Assume we have a fully optimized conditional model of the form p(y|x), a mixture of M conditional Gaussian experts. During runtime, we need to quickly generate an output ŷ for a given input x. Once x is observed, the conditional Gaussians which were the original experts become ordinary Gaussians, and the marginal density that results is an ordinary sum of M Gaussians in the space of y.

Observe the 1D distribution in Figure 7.12. At this point, we would like to choose a single candidate output ŷ from this density.

Sampling will often return a value which has a high probability; however, it may sometimes return low values due to its inherent randomness. The average, i.e. the expectation, is a more consistent estimate, but if the density is multimodal with more than one significant peak, the expectation may fall in a low-probability valley between the peaks (as is the case in Figure 7.12). Thus, if we consistently wish to have a response that lies on a peak of the density, the arg max is the more appropriate estimate (see the next section).
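A toy numeric illustration of the dilemma (mine, not from the thesis): for a bimodal sum of Gaussians, the expectation lands in the low-probability valley while the arg max sits on a genuine peak.

import numpy as np
from scipy import stats

ys = np.linspace(-5, 5, 2001)
p = 0.5 * stats.norm.pdf(ys, -2, 0.5) + 0.5 * stats.norm.pdf(ys, 2, 0.5)

expectation = np.trapz(ys * p, ys)   # ~0, between the peaks
arg_max = ys[np.argmax(p)]           # ~ -2 (or 2): on a peak
print(expectation, arg_max)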
Tony Jebara
what is the whole number in 5,902.87?
Formula Weight
Topic Page: formula weight

formula weight, in chemistry, a quantity computed by multiplying the atomic weight (in atomic mass units) of each element in a chemical formula by the number of atoms of that element present in the formula, and then adding all of these products together. For example, the formula weight of water (H2O) is two times the atomic weight of hydrogen plus one times the atomic weight of oxygen. Numerically, this is (2×1.00797)+(1×15.9994)=2.01594+15.9994=18.01534. If the formula used in computing the formula weight is the molecular formula, the formula weight computed is the molecular weight. The percentage by weight of any atom or group of atoms in a compound can be computed by dividing the total weight of the atom (or group of atoms) in the formula by the formula weight and multiplying by 100. For example, the weight percentage of hydrogen in water is de...
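The computation the entry describes is easy to mirror in code; the following Python lines (an illustration, using the entry's own atomic weights for H and O) reproduce the water figures:

weights = {"H": 1.00797, "O": 15.9994}
water = {"H": 2, "O": 1}

formula_weight = sum(weights[el] * n for el, n in water.items())
print(formula_weight)                                     # 18.01534
print(100 * water["H"] * weights["H"] / formula_weight)   # ~11.19% hydrogen by weight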
Momentum in R: Part 2
Many of the sites I linked to in the previous post have articles or papers on momentum investing that investigate the typical ranking factors: 3, 6, 9, and 12 month returns. Most (not all) of the
articles seek to find which is the “best” look-back period to rank the assets. Say that the outcome of the article is that the 6 month look-back has the highest returns. A trading a strategy that
just uses a 6 month look-back period to rank the assets leaves me vulnerable to over-fitting based on the backtest results. The backtest tells us nothing more than which strategy performed the best
in the past, it tells us nothing about the future… duh!
Whenever I review the results from backtests, I always ask myself a lot of “what if” questions. Here are 3 “what if” questions that I would ask for this backtest are:
1. What if the strategy based on a 6 month look-back under performs and the 9 month or 3 month starts to over perform?
2. What if the strategies based on 3, 6, and 9 month look-back periods have about the same return and risk profile, which strategy should I trade?
3. What if the assets with high volatility are dominating the rankings and hence driving the returns?
The backtests shown are simple backtests meant to demonstrate the variability in returns based on look-back periods and number of assets traded.
The graphs below show the performance of a momentum strategy using 3, 6, 9, and 12 month returns and trading the Top 1, 4, and 8 ranked assets. You will notice that there is significant volatility and variability in returns when trading only 1 asset. With more assets, the variability between look-back periods is reduced, but there is still no one clear "best" look-back period. There are periods of under performance and over performance for all look-back periods in the test.
Here is the R code used for the backtests and the plots. Leave a comment if you have any questions about the code below.
RankRB <- function(x){
# Computes the rank of an xts object of ranking factors
# ranking factors are the factors that are ranked (i.e. asset returns)
# args:
# x = xts object of ranking factors
# Returns:
# Returns an xts object with ranks
# (e.g. for ranking asset returns, the asset with the greatest return
# receives a rank of 1)
r <- as.xts(t(apply(-x, 1, rank, na.last = "keep")))
return(r)
}
MonthlyAd <- function(x){
# Converts daily data to monthly and returns only the monthly close
# Note: only used with Yahoo Finance data so far
# Thanks to Joshua Ulrich for the Monthly Ad function
# args:
# x = daily price data from Yahoo Finance
# Returns:
# xts object with the monthly adjusted close prices
sym <- sub("\\..*$", "", names(x)[1])
Ad(to.monthly(x, indexAt = 'lastof', drop.time = TRUE, name = sym))
}
CAGR <- function(x, m){
# Function to compute the CAGR given simple returns
# args:
# x = xts of simple returns
# m = periods per year (i.e. monthly = 12, daily = 252)
# Returns the Compound Annual Growth Rate
x <- na.omit(x)
cagr <- apply(x, 2, function(x, m) prod(1 + x)^(1 / (length(x) / m)) - 1, m = m)
return(cagr)
}
SimpleMomentumTest <- function(xts.ret, xts.rank, n = 1, ret.fill.na = 3){
# returns a list containing a matrix of individual asset returns
# and the comnbined returns
# args:
# xts.ret = xts of one period returns
# xts.rank = xts of ranks
# n = number of top ranked assets to trade
# ret.fill.na = number of return periods to fill with NA
# Returns:
# returns an xts object of simple returns
# trade the top n asset(s)
# if the rank of last period is less than or equal to n,
# then I would experience the return for this month.
# lag the rank object by one period to avoid look ahead bias
lag.rank <- lag(xts.rank, k = 1, na.pad = TRUE)
n2 <- nrow(lag.rank[is.na(lag.rank[,1]) == TRUE])
z <- max(n2, ret.fill.na)
# for trading the top ranked asset, replace all ranks above n
# with NA to set up for element wise multiplication to get
# the realized returns
lag.rank <- as.matrix(lag.rank)
lag.rank[lag.rank > n] <- NA
# set the element to 1 for assets ranked <= to rank
lag.rank[lag.rank <= n] <- 1
# element wise multiplication of the
# 1 period return matrix and lagged rank matrix
mat.ret <- as.matrix(xts.ret) * lag.rank
# average the rows of the mat.ret to get the
# return for that period
vec.ret <- rowMeans(mat.ret, na.rm = TRUE)
vec.ret[1:z] <- NA
# convert to an xts object
vec.ret <- xts(x = vec.ret, order.by = index(xts.ret))
f <- list(mat = mat.ret, ret = vec.ret, rank = lag.rank)
return(f)
}
# packages assumed by this script (the load calls were dropped in the original post)
library(quantmod)
library(PerformanceAnalytics)
library(FinancialInstrument)   # provides currency() and stock()

symbols <- c("XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLK", "XLB", "XLU", "EFA")#, "TLT", "IEF", "SHY")

currency("USD")
stock(symbols, currency = "USD", multiplier = 1)
# create new environment to store symbols
symEnv <- new.env()
# getSymbols and assign the symbols to the symEnv environment
getSymbols(symbols, from = '2002-09-01', to = '2012-10-20', env = symEnv)
# xts object of the monthly adjusted close prices
symbols.close <- do.call(merge, eapply(symEnv, MonthlyAd))
# monthly returns
monthly.returns <- ROC(x = symbols.close, n = 1, type = "discrete", na.pad = TRUE)
# rate of change and rank based on a single period for 3, 6, 9, and 12 months
roc.three <- ROC(x = symbols.close , n = 3, type = "discrete")
rank.three <- RankRB(roc.three)
roc.six <- ROC(x = symbols.close , n = 6, type = "discrete")
rank.six <- RankRB(roc.six)
roc.nine <- ROC(x = symbols.close , n = 9, type = "discrete")
rank.nine <- RankRB(roc.nine)
roc.twelve <- ROC(x = symbols.close , n = 12, type = "discrete")
rank.twelve <- RankRB(roc.twelve)
num.assets <- 4
# simple momentum test based on 3 month ROC to rank
case1 <- SimpleMomentumTest(xts.ret = monthly.returns, xts.rank = rank.three,
n = num.assets, ret.fill.na = 15)
# simple momentum test based on 6 month ROC to rank
case2 <- SimpleMomentumTest(xts.ret = monthly.returns, xts.rank = rank.six,
n = num.assets, ret.fill.na = 15)
# simple momentum test based on 9 month ROC to rank
case3 <- SimpleMomentumTest(xts.ret = monthly.returns, xts.rank = rank.nine,
n = num.assets, ret.fill.na = 15)
# simple momentum test based on 12 month ROC to rank
case4 <- SimpleMomentumTest(xts.ret = monthly.returns, xts.rank = rank.twelve,
n = num.assets, ret.fill.na = 15)
returns <- cbind(case1$ret, case2$ret, case3$ret, case4$ret)
colnames(returns) <- c("3-Month", "6-Month", "9-Month", "12-Month")
charts.PerformanceSummary(R = returns, Rf = 0, geometric = TRUE,
main = "Momentum Cumulative Return: Top 4 Assets")
CAGR(returns, m = 12)
6 thoughts on “Momentum in R: Part 2”
1. Nice post. Your ranking code from prior post was helpful. Glad to see you are back on blog.
2. How would you take account commission in the above code?
3. Hey Ross,
I have a quick question for you, but let me begin by saying, excellent post! In fact, excellent series on Momentum with R.
On with my question. So, in this exercise, you’ve constructed portfolios based on past returns; 3, 6, 9 and 12 month returns. Then, from what I understand, you take a long position in the top
asset (or in two other cases, the top 4 and top 8 assets) and hold this position for 1 month only. What I am wondering is what sort of results you’d get if you were to, say, hold the position for
3, 6, 9, or 12 months, instead of just 1 month. This kind of idea may be of interest to the investor with a longer term perspective.
Is there any chance you could do another post using momentum with R taking into account this slight variant of the exercise?
□ Hi GW,
This is something I could do in a later post. It is not easily doable the way I wrote the functions.
4. nice post,
I’ve been trying to replicate this for the Brazilian markets (Ibovespa), but unfortunately yahoo data lacks treatment for stock splits and/or dividends.
I guess I have lots of work treating the database before testing the models…
□ getSymbols has an argument for adjusting the data. Also, if you’d like to do things manually, I suppose you can just divide the close by the adjusted close of the first time period, then
divide the rest of the data by that same number. | {"url":"http://rbresearch.wordpress.com/2012/10/20/momentum-in-r-part-2/","timestamp":"2014-04-18T19:52:47Z","content_type":null,"content_length":"81333","record_id":"<urn:uuid:dfe62632-d1d6-446e-a9df-2d2291bfbe7d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: RE: new package margdistfit available on SSC
From Maarten Buis <maartenlbuis@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: new package margdistfit available on SSC
Date Fri, 18 Nov 2011 17:31:04 +0100
On Fri, Nov 18, 2011 at 4:43 PM, Austin Nichols wrote:
> This is an interesting exercise, though I would think only relevant
> for ML since no theoretical distribution is assumed for OLS etc.
Sure, I initially wrote it for -betafit- (available from SSC), which
fits a beta distribution with ML. The main reason for including linear
regression is didactic, more people are familiar with linear
regression and the normal distribution than with beta regression and
the beta distribution. As you remarked, there is a risk attached to
that strategy in that users may over-interpret the graphs in case of
linear regression. Though I do believe that even with linear
regression it allows for a useful view on the data and the model in
that the normal distribution is a useful baseline. Deviations from it
can point to interesting, unusual, disturbing or puzzling patterns in
the data. I find it often useful to know that such patterns exist in
my data even though I do not need to do anything about it.
> Minor points:
> 1. A parametric regression typically does not allow parameters to
> change as X changes, contrary to your text describing the command:
I see how what I wrote could be interpreted in the way you interpreted
it. However, when wrote that I was thinking in terms of a distribution
rather than regression, and the parameter in that case is the mean or
standard deviation or some other parameter e.g. the scale or shape
parameters in the beta-distribution and not the regression parameters.
I need to make that more clear in my text.
> 2. What effect do heteroskedasticity or clustering of errors have on
> your examples? Must you assume i.i.d. errors?
Heteroskedasticity can be accommodated if it is explicitly modeled.
For example, in -betafit- one can let the variance depend on
covariates by adding those covariates in the -phivar() option. When
one has asked for robust standard errors (and thus also in case of
clustered standard errors), one has already relaxed the distribution
assumptions. So in that case the theoretical distribution with which
the empricial distribution is compared only represents a useful
baseline instead of a hard assumption. I have to think a bit on the
consequences of clustering.
> 3. The link to "helpfile" at http://www.maartenbuis.nl/software/margdistfit.html
> pointing to http://repec.org/bocode/m/margdistfit.html
> seems to be broken.
thanks, I will look into it.
Thanks for your comments,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
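The baseline-comparison idea described above can be sketched outside Stata as well. The following Python lines illustrate the underlying idea only — this is not -margdistfit- itself, and the simulated data and all names are invented:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

# fit a linear regression by least squares
b1, b0 = np.polyfit(x, y, 1)
sigma = (y - (b0 + b1 * x)).std(ddof=2)

# model-implied marginal of y: a mixture of N(b0 + b1*x_i, sigma^2) over the
# observations; evaluate its CDF at each observed y and test for uniformity
# (a probability integral transform check)
u = stats.norm.cdf(y[:, None], loc=b0 + b1 * x[None, :], scale=sigma).mean(axis=1)
print(stats.kstest(u, "uniform"))   # small statistic -> marginal fits well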
Aberration of Forces and Waves
Consider an illuminated charged sphere resting at the origin of a system x,y,z of inertial coordinates, and a small test particle moving with speed v in the positive x direction at a fixed y
coordinate as shown below.
If the distance between the two objects is sufficiently great, the light (electromagnetic waves) emanating from the sphere will consist of essentially planar horizontal waves when it reaches the test
particle. Since the particle is moving tangentially with speed v, the angle of the incoming light will be affected by aberration, such that the apparent source of the light (from the point of view
of the test particle as it crosses the y axis) is at an angle a = arcsin(v/c) ahead of the actual position of the sphere. However, the direction of the electrical force exerted by the sphere on the
test particle points directly toward the actual position of the sphere. Thus, the incoming electromagnetic waves from the sphere experience aberration, but the electromagnetic force of attraction to
the sphere does not. This sometimes misleads people into thinking that the force somehow propagates instantaneously (to account for the absence of aberration).
Of course we can just as well consider the test particle to be at rest, and the charged sphere to be moving in the negative x direction with speed v. From this point of view, if D denotes the
distance from the sphere to the particle, then at any time t the particle "sees" the sphere at the location it occupied at the time t - D/c, because D/c is how long it takes for light to travel the
distance D. This is illustrated in the figure below.
Just as before, the light arrives at the (stationary) particle P from a direction differing from the true current direction of the source at time t by the angle a. Also, since we have simply changed
coordinate systems, which can have no effect on any physical attributes, we know the electromagnetic force on the particle at the time t points directly toward the sphere’s actual (not apparent)
position at that time.
The absence of aberration in the direction of the electromagnetic force does not indicate that the force propagates infinitely fast. (In fact, the concept of a “moving force” is not even well
defined.) The force on a test particle at any given instant is due to the electromagnetic field in the immediate vicinity of the particle at that instant. In general the field at any given place
and time consists of contributions from multiple sources at a variety of distances. The number of sources and their distances matter only insofar as they determine the electromagnetic field. The
field of the charged sphere with respect to the rest frame of the sphere is an electro-static configuration (no magnetic field) with spherical symmetry centered on the source. A uniformly moving
charged test particle in this field is subjected to a force proportional to (and therefore pointing in the same direction as) the electric field vector at its present location, so the force obviously
points directly towards the source at all times. On the other hand, in terms of the rest frame of the test particle the charged sphere is in uniform motion and the electromagnetic field has both
electric and magnetic components. However, since the test particle is at rest with respect to these coordinates, it does not experience any magnetic force, so again the force on the particle is
proportional to the electric field vector. To determine the direction of this force we need to know how the components of the electric field transform from one system of inertial coordinates to
another. As explained in Force Laws and Maxwell’s Equations, if E[x], E[y], and E[z] are the components of the electric field at a given point with respect to the x,y,z,t coordinates, then the
components with respect to a similarly oriented system of inertial coordinates x’,y’,z’,t’ moving with speed v in the positive x direction are
Of course we also have
In the unprimed coordinates (the rest frame of the charged sphere) we know the electric field components at the location of the test particle point directly toward the origin, which means E[x], E[y],
and E[z] are proportional to the coordinates x, y, and z of the test particle. Also, since the magnetic field is zero with respect to the unprimed coordinates, and since the origins of the two
coordinate systems coincide at t = 0, we have
Similarly it follows that E[x’]/E[z’] = x’/z’ and E[y’]/E[z’] = y’/z’, confirming that the electric field vector at every location points directly toward the instantaneous source with respect to the
rest frame of the test particle (relative to which the source is moving with a speed v). Thus the absence of “force aberration” for objects in fully developed inertial motion is an immediate
consequence of Lorentz covariance.
The qualifier “fully developed” is necessary, because every object is instantaneously at rest with respect to some inertial frame, but the object’s field in its current rest frame is spherical and
satisfies the steady-state relations only out to a distance D = cDt where Dt is the length of time the object has been unaccelerated. This highlights the fact that although the field of an object
exists and acts at a distance from the object, changes in the field propagate at the finite speed c. When an object changes its state of motion, the field must change accordingly, and these changes
propagate outward from the source at the speed c. In the far field these changes propagate in the form of waves.
One thing that sometimes puzzles people about the lack of force aberration is that they tend to regard the electric field as the gradient of a potential, and they know the equi-potential surfaces for a uniformly moving charged particle are contracted in the direction of motion so they form ellipsoids instead of spheres, and clearly the spatial gradient of this potential does not point towards the center (except for lines parallel or perpendicular to the axis of motion). The explanation is that the electric field vector equals the (negative) spatial gradient of the potential field only if the field is stationary, i.e., unchanging with time. If the field is changing with time, the full expression for the electric field must include an additional term to account for this, i.e., we have

E = -grad(f) - (1/c) ∂A/∂t

where A signifies the vector potential of the electromagnetic field. The second term on the right hand side represents the effect of the changing potential with time. Using the Lorentz gauge

div(A) + (1/c) ∂f/∂t = 0

the field equations for the electromagnetic potentials are

∇²f - (1/c²) ∂²f/∂t² = -4πρ          ∇²A - (1/c²) ∂²A/∂t² = -(4π/c) J

It follows that, if v is constant (and has been for a sufficiently long time), and we are given a solution f(x,y,z,t) for the scalar potential, we can multiply this solution by v/c to give a solution of the vector potential

A = (v/c) f

Therefore, under these conditions, the time-dependent term in the last equation for E can be written as

-(1/c) ∂A/∂t = -(v/c²) ∂f/∂t

Now, by definition, the total derivative of f along any incremental path dx,dy,dz,dt is

df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz + (∂f/∂t) dt

Dividing by dt and solving for the partial of f with respect to t gives

∂f/∂t = df/dt - [(∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt)]

Taking dx/dt etc. as the components of the sphere's velocity v, the total derivative df/dt represents the change in f with time along a co-moving worldline, and since the (fully developed) field is stationary with respect to the rest frame of the sphere, we have df/dt = 0. Therefore the partial of f with respect to t equals the negative of the dot product of the spatial gradient of f with the velocity v, so the previous expression for the time-dependent electric field is

E = -grad(f) + (v/c²) [grad(f) · v]

We are considering the case when the sphere's motion is in the positive x direction, so we have v = (v,0,0) and the above expression becomes

E = ( -(1 - v²/c²) ∂f/∂x,  -∂f/∂y,  -∂f/∂z )

A surface of constant f is a stationary sphere in the rest frame of the source, so it transforms to an ellipsoid due to contraction in the direction of motion as shown below.

The equation of this cross-section is

x²/(1 - v²) + y² = r²

where we have chosen units so that c = 1. Taking the differential of both sides gives

2x dx/(1 - v²) + 2y dy = 0

The slope of the normal to the ellipse at the point (x,y) is the negative reciprocal of dy/dx, which is

(1 - v²) y / x

According to our expression for E(r,t) we begin with the gradient of f and then reduce the x component by the factor (1 - v²), where we still have c = 1. Thus we have

E[y]/E[x] = (∂f/∂y) / [(1 - v²)(∂f/∂x)] = [(1 - v²) y/x] / (1 - v²) = y/x
This confirms that the electromagnetic force exerted by the field of the moving charged sphere on the test particle at time t is directed toward the position of the sphere at the same time t. This
is a natural consequence of Lorentz covariance, and does not imply any instantaneous transfer of energy or information.
It’s true that, in quantum theory, the electromagnetic force can be considered to be mediated by photons, but these are virtual photons, which are actually just analytical components of the field.
In effect these virtual photons form a cloud around the source particle, and they “exist” only within the uncertainty envelope. An electromagnetic interaction between two electrons, for example, is
modeled as an exchange of photons between the overlapping fields of two particles. It is not represented by a photon traversing from one particle to the other. Virtual photons don’t even possess
definite trajectories through space and time. They are conceptual entities arising in the quantization of the electromagnetic field.
Another point that sometimes puzzles people is why an equi-potential sphere transforms to an ellipsoid under a Lorentz transformation, whereas a spherical wave of light transforms to a spherical wave
under the same transformation. The reason is that an equi-potential sphere is stationary, whereas a wave of light is expanding, as illustrated below.
A side view showing the intersections of these two surfaces with two different planes of simultaneity is shown below.
The source of the light pulse and the potential field moves along the t axis. With respect to the x,t coordinate system the expanding spherical shell of light coincides with an equi-potential sphere
at the time t = k with the diameter AB. However, with respect to the x’,t’ coordinate system the left-most point of the expanding sphere of light just touches the left-most point of the
equi-potential sphere at the point A and time t’ = k’. At this time the right-most point of the light sphere is at C, whereas the right-most point of the equi-potential surface is at D. The
“center” of the expanding light sphere (with respect to the primed coordinates) moves along the t’ axis, whereas the center of the potential still moves along the t axis. This illustrates why the
coincidence of the light and the equi-potential spheres (at a particular instant) with respect to one frame of reference does not imply that they ever coincide with respect to another frame of
reference. The wavefront of the light pulse is always spherical with respect to both systems of inertial coordinates, whereas the equi-potential surfaces are spherical only with respect to the rest
frame of the source.
the first resource for mathematics
Thermodynamic formalism. The mathematical structures of classical equilibrium. Statistical mechanics. With a foreword by Giovanni Gallavotti.
(English) Zbl 0401.28016
Encyclopedia of Mathematics and its Applications. Vol. 5. Reading, Massachusetts: Addison-Wesley Publishing Company. XIX, 183 p. $ 21.50 (1978).
In keeping with the aims of the editors of the encyclopedia, the mathematical framework of the title problem, here formulated for classical lattice spin systems, is presented in a pure and rigorous manner. Physical
motivations are only shortly sketched and must be taken from elsewhere. – In the introduction some basic ideas of the thermodynamic formalism are reviewed and related to the main context. The first
two chapters develop the structure of Gibbs states, their relation to thermodynamic limits, and their behaviour under lattice morphisms as well as conditioning. Chapter three introduces equilibrium
states under assumption of translational invariance and contains some general results on phase transitions. The link between Gibbs states and equilibrium states is established in chapter four.
Application to one-dimensional lattice systems is given in chapter five. The last two chapters extend the formalism to the case when the configuration space is a general compact metrizable space with
homeomorphic action of Z^ν. Some further background and open problems are collected in appendices. – Concise presentation is balanced by clear notation. The level is generally advanced, and in particular some knowledge of functional analysis is presupposed. Since widely scattered literature of the last two decades has been elaborated and is represented in a unified way, this book is a valuable tool for anybody working on
this and related fields.
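For orientation only (standard background, not part of the review; here $\varphi$ denotes an interaction function and $h(\mu)$ the mean entropy of a measure $\mu$), the variational principle linking the pressure to the equilibrium states of chapter three reads:

```latex
% Supremum over all translation-invariant probability measures \mu on
% the configuration space; the equilibrium states for \varphi are
% precisely the measures attaining it.
P(\varphi) \;=\; \sup_{\mu} \Big( h(\mu) + \int \varphi \, d\mu \Big) .
```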
82-01 Textbooks (statistical mechanics)
37D35 Thermodynamic formalism, variational principles, equilibrium states
37-01 Instructional exposition (Dynamical systems and ergodic theory)
28D20 Entropy and other measure-theoretic invariants
37D20 Uniformly hyperbolic systems (expanding, Anosov, Axiom A, etc.)
37A60 Dynamical systems in statistical mechanics
37C10 Vector fields, flows, ordinary differential equations
54H20 Topological dynamics
82B05 Classical equilibrium statistical mechanics (general)
82B20 Lattice systems (Ising, dimer, Potts, etc.) and systems on graphs
81T08 Constructive quantum field theory | {"url":"http://zbmath.org/?q=an:0401.28016&format=complete","timestamp":"2014-04-21T02:21:44Z","content_type":null,"content_length":"24336","record_id":"<urn:uuid:ef8fa597-2dd6-4b3f-851f-abc762ba95d5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00401-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recursive Functions
I need some help with an assignment.
I've got most of it figured out; I just need to code a part that tries every possible combination of numbers to see if it fits a certain pattern.
It needs to be able to handle any number of digits and to any depth.
E.g. it would have to be able to handle trying all combinations of 1 to 4 for 6 digits.
I have thought about this problem a lot, and the only way I can think of solving it is with recursive functions. If anybody has had to solve a similar problem or can be of any help, can you please reply?
You can find all combinations with a recursive or a non-recursive solution. Do a Google search; you'll get plenty of hits with explanations, source code and libraries.
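For what it's worth, here is a minimal recursive sketch in C++ of the kind of enumeration described above (names such as `enumerate`, `maxVal` and the sample `sumsToTen` test are illustrative, not from the original thread; substitute your own pattern check):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Recursively fill `digits` one position at a time; once every position
// is set, hand the complete combination to the caller's pattern test.
void enumerate(std::vector<int>& digits, std::size_t pos, int maxVal,
               bool (*testPattern)(const std::vector<int>&))
{
    if (pos == digits.size()) {            // base case: all positions filled
        if (testPattern(digits)) {
            for (int d : digits) std::cout << d << ' ';
            std::cout << '\n';
        }
        return;
    }
    for (int v = 1; v <= maxVal; ++v) {    // try every value at this position
        digits[pos] = v;
        enumerate(digits, pos + 1, maxVal, testPattern);
    }
}

// Example "pattern": accept combinations whose digits sum to 10.
bool sumsToTen(const std::vector<int>& d)
{
    int sum = 0;
    for (int x : d) sum += x;
    return sum == 10;
}

int main()
{
    std::vector<int> digits(6);            // 6 digits, each taking values 1 to 4
    enumerate(digits, 0, 4, sumsToTen);    // visits all 4^6 = 4096 combinations
    return 0;
}
```

Each call fixes one position and recurses on the rest, so the base case sees every complete combination exactly once; resizing the vector or changing `maxVal` handles any number of digits to any depth. | {"url":"http://cboard.cprogramming.com/cplusplus-programming/54109-recursive-functions-printable-thread.html","timestamp":"2014-04-16T20:42:43Z","content_type":null,"content_length":"6745","record_id":"<urn:uuid:4579417f-c6bf-4f03-986a-a17a988cd197>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00074-ip-10-147-4-33.ec2.internal.warc.gz"}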
Embry Hls, GA Math Tutor
Find an Embry Hls, GA Math Tutor
...I enjoy working with the students and receive many rewards when I see their successes. My hours of availability are Monday - Sunday from 8am to 9pm. My Bachelor's degree is in Applied Math, and I took one course in Differential Equations and received an A. I also took several other courses that included Differential Equations in the solution process.
20 Subjects: including calculus, discrete math, GRE, GMAT
...I love working with students to help them succeed and pursue their dreams. I played four years of varsity soccer throughout high school, receiving several all-star awards. I was then captain during my senior year and played intramurals through college. I have been playing chess for more than 15 years now.
11 Subjects: including algebra 1, SAT math, chemistry, physics
...In addition to taking a semester of differential equations and one of partial differential equations, I took several classes in Advanced Calculus, which are proof-based classes where you prove and construct the logical basis for Calculus and Differential Equations from the ground up. I have over 5 yea...
41 Subjects: including calculus, logic, linear algebra, differential equations
...I have experience tutoring family members in basic algebra. I use Excel regularly at work to generate reports on budgets. I also use it for data cleanup and list creation for our customer
14 Subjects: including algebra 2, writing, algebra 1, prealgebra
I have a B.S. degree in Chemical Engineering, a field for which math is fundamental. In middle and high school I always earned A's, and I later achieved the No. 1 score on the high school graduation examination, including a perfect score in Math. In college, I also earned A's in Math.
3 Subjects: including algebra 1, algebra 2, Chinese
Related Embry Hls, GA Tutors
Embry Hls, GA Accounting Tutors
Embry Hls, GA ACT Tutors
Embry Hls, GA Algebra Tutors
Embry Hls, GA Algebra 2 Tutors
Embry Hls, GA Calculus Tutors
Embry Hls, GA Geometry Tutors
Embry Hls, GA Math Tutors
Embry Hls, GA Prealgebra Tutors
Embry Hls, GA Precalculus Tutors
Embry Hls, GA SAT Tutors
Embry Hls, GA SAT Math Tutors
Embry Hls, GA Science Tutors
Embry Hls, GA Statistics Tutors
Embry Hls, GA Trigonometry Tutors
Nearby Cities With Math Tutor
Barrett Parkway, GA Math Tutors
Centerville Branch, GA Math Tutors
Chamblee, GA Math Tutors
Cumberland, GA Math Tutors
Farrar, GA Math Tutors
Fort Gillem, GA Math Tutors
Kelly, GA Math Tutors
North Corners, GA Math Tutors
North Metro Math Tutors
Overlook Sru, GA Math Tutors
Rockbridge, GA Math Tutors
Sandy Plains, GA Math Tutors
Shenandoah, GA Math Tutors
Snapfinger, GA Math Tutors
Westside, GA Math Tutors | {"url":"http://www.purplemath.com/Embry_Hls_GA_Math_tutors.php","timestamp":"2014-04-21T02:34:03Z","content_type":null,"content_length":"23922","record_id":"<urn:uuid:30dadd8e-5875-429e-9dea-0ea749f91243>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |