Wallington Trigonometry Tutor
Find a Wallington Trigonometry Tutor
...Throughout the years I have tutored various students in SAT math. Most of these students come to me having scored around a 500 on the math section on their first try, although a few were a little lower and a few a little higher. After working with me, each student increased their score by at least 100 points, and usually more.
11 Subjects: including trigonometry, Spanish, algebra 2, algebra 1
...But thanks to having a great teacher, I was able to understand every concept that was taught, and even better, I knew how to explain whatever problems there were to the other students. Now, I
am more than ready and able to explain all facets of math to any struggling student. My goal is to make whatever is challenging you most seem easy.
15 Subjects: including trigonometry, geometry, algebra 1, statistics
...I also read aloud to the children and, depending on the grade level, helped them with their reading skills. I am currently a Spanish major with a concentration in Linguistics. I have tutored students in both ESL and Spanish using phonetics.
13 Subjects: including trigonometry, English, Spanish, algebra 2
...I've found that for me, it takes about 6-9 weeks on average of working with a student to get to an 80-100 point improvement, and I can work with Quant, Verbal, or both. Background: I have a BS
in Electrical Engineering from MIT and an MBA with Distinction from the University of Michigan. Over ...
11 Subjects: including trigonometry, calculus, algebra 2, geometry
...That includes the following subjects: algebra I & II, geometry, trigonometry, pre-calculus, and calculus (including AP, AB and BC). I've helped students at many different public and private
high schools in New York, including Stuyvesant, LaGuardia, Bronx Science, Dalton, Horace Mann, Riverdale, a...
12 Subjects: including trigonometry, physics, MCAT, calculus
{"url":"http://www.purplemath.com/Wallington_Trigonometry_tutors.php","timestamp":"2014-04-19T05:07:07Z","content_type":null,"content_length":"24238","record_id":"<urn:uuid:1bd95d9f-b76a-467c-8d9e-25b1a2024293>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
A Tensor Framework for Multidimensional Signal Processing
Abstract (Summary)
This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "Normalized convolution". The method performs local expansion of a signal in a chosen filter basis which does not necessarily have to be orthonormal. A key feature of the method is that it can deal with uncertain data when additional certainty statements are available for the data and/or the filters. It is shown how false operator responses due to missing or uncertain data can be significantly reduced or eliminated using this technique. Perhaps the most well-known of such effects are the various 'edge effects' which invariably occur at the edges of the input data set. The method is an example of the signal/certainty philosophy, i.e. the separation of both data and operator into a signal part and a certainty part. An estimate of the certainty must accompany the data. Missing data are simply handled by setting the certainty to zero. Localization or windowing of operators is done using an applicability function, the operator equivalent of certainty, not by changing the actual operator coefficients. Spatially or temporally limited operators are handled by setting the applicability function to zero outside the window.

The use of tensors in estimation of local structure and orientation using spatiotemporal quadrature filters is reviewed and related to dual tensor bases. The tensor representation conveys the degree and type of local anisotropy. For image sequences, the shape of the tensors describes the local structure of the spatiotemporal neighbourhood and provides information about local velocity. The tensor representation also conveys information for deciding whether true flow or only normal flow is present. It is shown how normal flow estimates can be combined into a true flow using averaging of this tensor field description.

Important aspects of representation and techniques for grouping local orientation estimates into global line information are discussed. The uniformity of some standard parameter spaces for line segmentation is investigated. The analysis shows that, to avoid discontinuities, great care should be taken when choosing the parameter space for a particular problem. A new parameter mapping well suited for line extraction, the Möbius strip parameterization, is defined. The method has similarities to the Hough transform.

Estimation of local frequency and bandwidth is also discussed. Local frequency is an important concept which provides an indication of the appropriate range of scales for subsequent analysis. One-dimensional and two-dimensional examples of local frequency estimation are given. The local bandwidth estimate is used for defining a certainty measure. The certainty measure enables the use of a normalized averaging process, increasing the robustness and accuracy of the frequency statements.
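The normalized averaging described in the abstract (the special case of normalized convolution with a constant basis function) can be sketched in a few lines. This is my own illustration, not code from the thesis: the certainty-weighted data are filtered with the applicability function and divided by the filtered certainty, so samples with certainty zero simply drop out instead of biasing the result toward zero at borders and gaps.

```python
import numpy as np

def normalized_average(signal, certainty, applicability):
    """Normalized averaging: normalized convolution with a constant basis.

    signal        -- 1-D data array (values at certainty-0 samples are ignored)
    certainty     -- per-sample certainty in [0, 1]; 0 marks missing data
    applicability -- window/weighting function for the local average
    """
    num = np.convolve(signal * certainty, applicability, mode="same")
    den = np.convolve(certainty, applicability, mode="same")
    out = np.zeros_like(num)
    mask = den > 1e-12           # avoid dividing where no certain data exist
    out[mask] = num[mask] / den[mask]
    return out
```

With a constant signal whose middle sample is marked missing, the estimate at that position is recovered from the certain neighbours rather than dragged toward zero, which is exactly the "false operator response" suppression the abstract describes.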
Bibliographical Information:
School: Linköpings universitet
School Location: Sweden
Source Type: Doctoral Dissertation
Date of Publication: 01/01/1994
{"url":"http://www.openthesis.org/documents/Tensor-Framework-Multidimensional-Signal-Processing-599735.html","timestamp":"2014-04-18T18:17:39Z","content_type":null,"content_length":"10702","record_id":"<urn:uuid:7241a6d2-753c-4f2a-a0eb-ab5cf4dd7c88>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial Details: Introduction to FLUENT
Date: Thursday, August 14, 2008, 01:00 pm - 02:30 pm
Location: 402 Walter
Instructor(s): David Porter, MSI
FLUENT is an integrated software package for performing computational fluid dynamics. It supports simulation of flow with a wide variety of material properties in complicated geometries on unstructured meshes. Several different fluid solvers are available, which can take advantage of a variety of standard turbulence models. An extremely well-developed graphical user interface facilitates initializing, solving, post-processing, and visualizing flows.
In this introductory lecture we will cover the basic operation of FLUENT, including how and where to run FLUENT at MSI. We will cover how to use the graphical user interface. Examples will be given of how to import a mesh, select a fluid solver, set up an initial flow field, run the problem, and examine results. We will discuss solving for both steady-state and time-dependent flows.
Prerequisites: Basic knowledge of fluid dynamics
{"url":"https://www.msi.umn.edu/tutorial/328","timestamp":"2014-04-16T07:31:33Z","content_type":null,"content_length":"10110","record_id":"<urn:uuid:d66623b7-30bc-4a1b-ad87-a91708025518>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Is [mD] very ample if D is ample?
Let $D$ be an ample $\mathbb{R}$-divisor. Is the round-down $[mD]$ very ample for every sufficiently divisible number $m$?
I think it's true, but I do not know how to arrange an argument.
I am not sure if this is the shortest proof, but I think that it is a proof.
Let $A$ be a very ample line bundle. After replacing $D$ by a multiple, you may assume that $$C=D - K_X - (n+1) A$$ is ample, where $n=\dim X$.
By Angehrn and Siu, we know that $$K_X+(n+1)A + \text{(ample line bundle)}$$ is very ample.
Now $$[mD] = K_X+ (n+1) A + C + [mD] - D$$
and you just need to make sure that $[mD]+C-D$ is ample if $m\gg 0$. This is easy to check by Diophantine approximation.
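The final Diophantine-approximation step can be spelled out; here is a sketch (the only inputs are that the fractional part $\{mD\} := mD - [mD]$ has coefficients in $[0,1)$ and that the ample cone is open):

$$[mD] + C - D = mD - \{mD\} + C - D = (m-1)D + C - \{mD\}.$$

Since $\{mD\}$ ranges over a bounded set of divisors while $(m-1)D + C$ moves into the ample cone proportionally to $m$, openness of the ample cone gives that $[mD] + C - D$ is ample for all $m \gg 0$.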
At first a question: If I understand correctly, then the rounding down operation depends on your choice of basis for the Néron-Severi group, right?
So I am assuming you fix a basis $D_1, \ldots, D_k$ of $NS(X) \otimes_\mathbb{Z} \mathbb{R}$ and for each $D := \sum_j r_jD_j$, you define $[mD] := \sum [mr_j]D_j$.
If this is true, then doesn't your assertion follow from the following geometric fact?
Let $C$ be a full dimensional cone in $\mathbb{R}^k$ and $K$ be the standard cube of length $2$ in $\mathbb{R}^k$ centered at the origin, i.e.
$K := \lbrace\sum_{j=1}^k s_je_j: -1 \leq s_j \leq 1$ for all $j$, $1 \leq j \leq k \rbrace$,
where $e_1, \ldots, e_k$ are unit vectors along the axes. If $v$ belongs to the interior of a full dimensional cone $C$ in $\mathbb{R}^k$, then $mv + K$ also lies in the interior of $C$ for all
sufficiently large $m$.
If as your basis you choose ample divisors, then $K$ can be replaced by a cube of length one.
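The geometric fact above is easy to check numerically. The following sketch is mine, not part of the answer, and for concreteness takes $C$ to be the open positive orthant; by convexity it suffices to test the $2^k$ corners of the shifted cube $mv + K$.

```python
import numpy as np
from itertools import product

def in_open_orthant(p):
    # Interior of the cone C = nonnegative orthant: every coordinate > 0.
    return bool(np.all(p > 0))

def cube_in_cone_interior(v, m, half_width=1.0):
    # Does the cube m*v + K (side 2*half_width, centered at m*v) lie in
    # the interior of C?  By convexity it suffices to test the corners.
    v = np.asarray(v, dtype=float)
    corners = (np.array(s) * half_width
               for s in product([-1.0, 1.0], repeat=len(v)))
    return all(in_open_orthant(m * v + c) for c in corners)
```

For an interior point such as $v = (1, 1)$, the shifted cube enters the interior of the cone exactly once $m$ exceeds the cube's half-width, matching the claim that $mv + K \subset \operatorname{int}(C)$ for all sufficiently large $m$.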
Edit 3: This is my 3rd attempt to give an elementary proof. It is essentially the same proof as in Edits 1 and 2, but with some corrections, and hopefully will be clearer. I hope you see that the
idea is very simple and geometrically almost obvious. If it seems complicated, then the fault is in my exposition.
Set Up: Let $D_1, \ldots, D_k$ be ample divisors and $D := \sum_j r_jD_j$ for positive real numbers $r_1, \ldots, r_k$. Also, let $D_j = \sum_{i=1}^N a_{ji} C_i$, for irreducible divisors $C_i$ and
integers $a_{ji}$. We want to show that $[mD]$ is very ample for large $m$.
In the proof we will use the following fact about finite sums of integral points in a lattice:
Lemma: Let $v_1, \ldots, v_k \in \mathbb{Z}^N$ be such that the $\mathbb{Z}$-span of the $v_j$'s equals $\mathbb{Z}^N$. Let $P$ be the convex hull (over $\mathbb{R}$) of $\lbrace 0, v_1, \ldots, v_k \rbrace$. Then there exists a positive real number $c$ such that for all $n \geq 1$, if $v \in nP \cap \mathbb{Z}^N$ is such that the (Euclidean) distance of $v$ from both the origin and the boundary of $nP$ is greater than $c$, then $v$ is in fact a non-negative integral linear combination of $v_1, \ldots, v_k$.
The above statement (actually a more precise formulation of it) is due to Khovanskii. The proof is very elementary and beautiful, and is in Proposition 2 of this article.
Step 1: Without loss of generality we may assume that the $\mathbb{Z}$-span of the $D_j$'s equals the $\mathbb{Z}$-span of the $C_i$'s. Indeed, it follows from Kleiman's criterion and finite dimensionality of
$N_1(X)$ that for every $m \gg 1$ and $\epsilon := (\epsilon_1, \ldots, \epsilon_N) \in \lbrace 1, 0, -1 \rbrace^N$, $D_{m,\epsilon} := mD_1 + \sum_{i=1}^N\epsilon_i C_i$ is ample. Choosing different
values of $\epsilon$ and $m$ and adding $D_{m,\epsilon}$'s to the collection of $D_j$'s, we may ensure that $\mathbb{Z}$-span of $D_j$'s equals the $\mathbb{Z}$-span of $C_i$'s. Moreover, and this is
essential, choosing $D_{m,\epsilon}$'s to be sufficiently close to the ray generated by $D_1$, we may ensure that $D$ still lies in the interior of the cone generated by the $D_j$'s, i.e. $D = \sum_{j=1}^k r_jD_j$ with each $r_j$ being a positive real number.
Step 2: For each $j$, $1 \leq j \leq k$, let $v_j := (a_{j1}, \ldots, a_{jN}) \in \mathbb{R}^N$, i.e. $v_j$ is the "coordinate" vector of $D_j$ for each $j$ (and therefore $v_j \in \mathbb{Z}^N$ for
each $j$). Adding some big multiples of $D_j$'s to the existing collection of $D_j$'s if necessary, we may assume that $v := \sum r_j v_j$ is in the interior of the convex hull $P$ of $0, v_1, \ldots, v_k$.
Step 3: For each $j$, $1 \leq j \leq k$, there exists a positive integer $m_j$ such that $mD_j$ is very ample for all $m \geq m_j$. Indeed, there are $l_j, n_j$ such that $n_jD_j$ is very ample and $mD_j$ is globally generated for all $m \geq l_j$. Setting $m_j := l_j + n_j$ does the job (due to Exercise II.7.5(d) of Hartshorne).
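The appeal to Exercise II.7.5(d) of Hartshorne (the sum of a very ample divisor and a globally generated one is very ample) can be made explicit; as a sketch: for $m \geq m_j = l_j + n_j$ write

$$mD_j = n_jD_j + (m - n_j)D_j,$$

where $n_jD_j$ is very ample and $(m - n_j)D_j$ is globally generated because $m - n_j \geq l_j$; the cited exercise then gives that $mD_j$ is very ample.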
Step 4: There exists a positive integer $m_0$ such that $m_0(D_1 + \cdots +D_k) + \sum s_jD_j$ is very ample for all collections of non-negative integers $s_1, \ldots, s_k$. Indeed, set $m_0 := \max
\lbrace m_1, \ldots, m_k \rbrace$ and apply the same exercise of Hartshorne.
Step 5: Let $v, v_1, \ldots, v_k$ and $P$ be as in Step 2. Let $c$ be the constant we get from applying Khovanskii's lemma to $v_1, \ldots, v_k$. Let $v_0 := m_0(v_1 + \cdots + v_k)$, where $m_0$ is
as in Step 4. Since $v$ is in the interior of $P$, it follows that if $m$ is sufficiently large, then $[mv] - v_0$ is in the interior of $mP$ and the distance of $[mv] - v_0$ from the origin and the
boundary of $mP$ is bigger than $c$. Therefore, Khovanskii's lemma implies that $[mv] - v_0 = \sum a_j v_j$ for non-negative integers $a_j$. Consequently, if $m$ is sufficiently large, then
$$[mD] = m_0(D_1 + \cdots + D_k) + \sum a_j D_j$$
for non-negative integers $a_1, \ldots, a_k$. Step 4 then shows that $[mD]$ is very ample.
{"url":"http://mathoverflow.net/questions/76640/is-md-very-ample-if-d-is-ample?sort=oldest","timestamp":"2014-04-21T02:28:16Z","content_type":null,"content_length":"67095","record_id":"<urn:uuid:1b9100c8-6d1b-481f-a9b4-a3ff2555d1b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
OCR for page 3
1 Mathematics and Society

The recent history of mathematics has been like that of science generally. The number of investigators has grown rapidly and so have the pace and quality of the research. An intellectual leadership that used to reside in Western Europe is now shared by Americans and others. Today, society, especially through other sciences and technologies, makes unprecedented demands on mathematics and on the community of mathematical scientists. The purpose of this report is to identify the major demands, to assess the capabilities of the mathematical community to satisfy them, and to propose measures for preserving and extending these capabilities.

THE MATHEMATIZATION OF CULTURE

Mathematics has long played a central role in the intellectual and technological history of mankind. Yet this statement hardly begins to convey or account for the current explosive penetration of mathematical methods into other disciplines, amounting to a virtual "mathematization of culture." Mathematics can be described as the art of symbolic reasoning. Mathematics is present wherever chains of manipulations of abstract symbols are used; such chains may occur in the mind of a human being, as marks on paper, or in an electronic computer. Symbolic reasoning appears to have been first used in connection with counting. For this reason, mathematics is sometimes described,
though not completely accurately, as the science of numbers. In fact, it turns out that all symbolic reasoning may be reduced to manipulation of whole numbers; and it is this fact that makes the digital computer into the universal tool it is.

Mathematics as we know it originated more than 2,000 years ago with the Greeks. They transformed into a deductive science the collection of facts and procedures about numbers and geometric figures known to the older civilizations of Egypt and Babylon. The Greeks applied mathematics only to astronomy and statics. The possibility of applying it to other sciences, in particular to dynamics, was discovered in the sixteenth and seventeenth centuries. This discovery revolutionized mathematics; it led to the creation of calculus and thereby made modern physical science and technology possible. The development of the physical sciences continues to use mathematical techniques and concepts. So do the new technologies based on the discoveries of the physical sciences. Furthermore, these sciences and technologies use mathematical techniques of ever-increasing sophistication, so that the mounting role of physical sciences and technologies in the contemporary world is a dominant aspect of the mathematization of culture.

It is, however, not the only aspect. Mathematical methods are penetrating into fields of knowledge that have been essentially shielded from mathematics until not long ago; for instance, the life sciences. The twentieth-century penetration of mathematical methods into the biological sciences has come about in several ways, perhaps most importantly through the increasing study of biological phenomena by the methods of chemical physics. The development of statistics, which the needs of the biological sciences helped to stimulate, has led to the extensive field of biostatistics. This has connections with mathematical genetics, which has evolved out of the celebrated Mendelian laws of inheritance, and with mathematical ecology, which is concerned with such interactions as competition for food or the feeding of one species on another. Differential-equation models for the conduction of signals along nerve fibers have had notable success. Computer simulation of functions of living organisms is just one of the ways in which computers are becoming increasingly important in the biological sciences.

Mathematics is penetrating the social and behavioral sciences, too, and even traditionally humanistic areas. Mathematical economics is now a central part of economics. The field of econometrics has grown out of applications of probability and statistics in economics; and statistical techniques are important in anthropology, sociology, political science, and psychology. Analysis of mathematical models for various social phenomena has been greatly aided by computer simulation and data processing. The mathematical viewpoint has even found new application in linguistics.

The above remarks refer to the mathematization of various academic disciplines. Mathematics is also becoming an indispensable tool in the world of government, industry, and business. The terms "operations research" and "management science" are among those describing the rapidly growing use of mathematical methods to solve problems which arise in managing complicated systems involving the movement and allocation of goods and services. Computers have extended the possibility of applying mathematical methods to a degree that would have seemed fantastic a short time ago. Computer science, which deals with manifold problems of building and utilizing computers, contains, among other things, very important mathematical components.

THE NEED FOR MATHEMATICALLY TRAINED PEOPLE

The mathematization of our society brings with it an
increasing need for people able to understand and use mathematics. This need manifests itself at various levels.

We need people who can teach mathematics in grade school in a way that will not create a permanent psychological block against mathematics in so many of our fellow citizens. We need people who can understand a simple formula, read a graph, interpret a statement about probability. Indeed, all citizens should have these skills. We need people who are able to teach mathematics in high school and cope with the necessarily changing curriculum. We need people who know what computers can do, and also what they cannot do. We need computer programmers who can work with understanding and efficiency. We need engineers, physicists, chemists, geologists, astronomers, biologists, physicians, economists, sociologists, and psychologists who possess the mathematical tools used today in their respective disciplines and who have the mathematical literacy for learning the new skills that will be needed tomorrow. Equally we need, though in smaller numbers of course, people in these fields who are able to use mathematical tools creatively and if necessary to modify existing mathematical methods. The numbers are hard to estimate, but it is clear that our society needs many more mathematically literate and educated people than are available now. For instance, computer programmers are already more numerous than high school teachers of mathematics, and their numbers will continue to increase.

These demands call for massive amounts of mathematical education at all levels and so produce a mounting pressure on the mathematical community. To do all this training calls for a larger body of mathematics teachers. It is difficult to expand the supply rapidly enough. Since teachers of intermediate-level mathematics must themselves be trained by people of higher competence, these pressures also quickly transmit themselves to the relatively small community of mathematical scientists who do research. Many of the needs of mathematics for support come from a need for balanced growth meeting all these requirements. Since we recognize a rising level of mathematical literacy as a national objective, and since the community of mathematical scientists bears the primary responsibility for attaining this objective, our report cannot separate problems of research from problems of education, including those of undergraduate education. (See both Part III and the report¹ of our Panel on Undergraduate Education.)

¹ Superscript numbers refer to the list of references at the end of the report.

APPLIED MATHEMATICAL SCIENCES

There are now four major areas in the mathematical sciences that have particularly direct and important relationships with other sciences and technologies: computer science, operations research, statistics, and physical mathematics (classical applied mathematics). We shall ordinarily refer to these as applied mathematical sciences. For statistics and computer science there is a more accurate term, partly mathematical sciences, which we shall sometimes use in recognition of the individual character of these fields and their strong extramathematical components. These four major areas must each have special attention if we are to come close to meeting national needs. At the same time, there is a need for general support of applied mathematical sciences in a way, not closely tied to particular applications, that will encourage creative interaction between mathematics, science, and technology, and among the various applied mathematical sciences themselves.

The sciences and technologies associated with the computer - whether concerned with the nature of information and language, the simulation of cognitive processes, the computer programs that bring individual problems of all kinds
to today's computing systems, the software programs that convert cold hardware into a complex computing system, or the hardware itself - face an intense and growing challenge.

The field sometimes labeled operations research is now growing rapidly, though not so explosively as computer science. Its emphasis today is on solving problems of allocation (routing problems and scheduling problems are two major types) and on a broad class of operational applications of probability (inventory management and improving the service of queues and waiting lines, for example). Again there are national needs both for a substantial body of people who can apply the techniques effectively and for a leadership that can innovate, reshape, and transform.

The field of statistics and data analysis is older and more firmly established than the two just described. Yet there is still a shortage of statisticians who can bring mathematical techniques and insights to diverse applications. The development of computer techniques is also having a strong impact on this field.

Physical mathematics, also called classical applied mathematics, has evolved into various modern forms. In its traditional form, it emphasized the mathematics essential to classical physics and the established fields of engineering. Even more it emphasized the evolution of the mathematical models under study. Nowadays, the concepts of physics have been expanded to include the well-established aspects of quantum mechanics and the theory of relativity. New developments apply to an ever greater variety of subject-matter fields, and we must now look once again toward a closer collaboration and mutual stimulation between mathematics and all the other sciences.

Alongside the four main applied mathematical sciences there are still newer areas of application where no self-identifying community of mathematical scientists yet exists - areas of central importance to a variety of national objectives of great and growing concern. The interplay between mathematics and sciences and the mutual stimulation, cooperation, and transfer of ideas among applied mathematicians working in diverse areas all suggest that the applied mathematical sciences, because of their common features, constitute an area of study worthy of support in its own right.
CORE MATHEMATICS

The foundation of the manifold mathematical activities just discussed is the central core of mathematics: the traditional disciplines of logic, number theory, algebra, geometry, and analysis that have been the domains of the so-called "pure mathematician." The relationship between the core and applied areas is not one-sided; many of the essential ideas and concepts in the central core can be traced ultimately to problems arising outside of mathematics itself. In the central core, mathematical ideas and techniques, no matter what their origin, are analyzed, generalized, codified, and transformed into tools of wide applicability. In assessing the importance of the core, one should keep in mind that there is always an interplay and exchange of ideas between so-called "pure" mathematics, that is, mathematics pursued primarily for intrinsic intellectual and aesthetic reasons, and so-called "applied" mathematics, that is, mathematics consciously used as a tool for understanding various aspects of nature. Thus geometry, literally "earth measurement," originated as an applied art, presumably in the Nile delta. The Greeks transformed it into a
pure deductive science, the prototype of pure mathematics. Among the geometric objects studied by the Greeks were the curves (ellipses, parabolas, hyperbolas) obtained by intersecting a cone with a
plane. These "conic sections," though they may have been discovered by observing sundials, were then of interest to the pure mathematicians alone. Today conic sections are working tools of engineers,
physicists, and astronomers. On the other hand, calculus, which was developed by Newton as a mathematical tool for studying the motions of physical bodies, is also the foundation of a large part of
modern "pure" mathematics. The most spectacular uses of core mathematics are its direct applications in science and technology. Remarkably enough, it is impossible to predict which parts of
mathematics will turn out to be important in other fields. We have one guide: Widely useful mathematics, for the most part, has proved to be also the kind that mathematicians earlier characterized as
"profound" or "beautiful."
Important mathematical ideas have also been generated by people who were not professional mathematicians. The time lag of 2,000 years between the invention of the conic sections and their applications in astronomy is, of course, not typical of recent developments. But the unexpected character of the application is typical. The theory of Lie groups, for instance,
was developed for many years because of its intrinsic mathematical interest. It seems now to be the natural way of describing symmetries in elementary-particle physics. The theory of analytic functions of several complex variables has been undergoing a dramatic development during the last two decades. The experts in this theory were quite surprised to discover its usefulness in quantum field
theory. We stress once more the totally unpredictable nature of such applications. It is not the motivation of the mathematician who creates a new theory that determines its future relevance to other
fields of knowledge. In particular, one should not be repelled by the seemingly frivolous origins of many mathematical theories. A puzzle about the seven bridges in Königsberg led to the theory of graphs, a basic mathematical tool of computer science, and indirectly influenced the development of topology. A question raised by a professional gambler led Pascal and Fermat to the theory of
probability. But the application of a particular mathematical result or a specific mathematical concept is not the only way in which core mathematics is used. The total impact of mathematics on
science and technology is more difficult to document but probably even more important. In all such applications of mathematics (model building and mathematical reasoning about models, statistical
analysis, the use of computers), the investigators will use some of the concepts, methods, and results developed by core mathematicians. In a typical case, however, they will not find in the
storehouse of core mathematics the precise tools they need but will rather have to develop those tools either alone or in cooperation with mathematicians. How successful they will be depends to a
large extent on the general status of mathematics in the country, on the level of mathematical knowledge among the people involved, and on the number and quality of mathematically trained people. All
this depends ultimately on a healthy and vigorous development within the central core of mathematics. We are convinced that without this one cannot have efficient use of mathematical methods
in science and technology, imaginative mathematization of new fields, or spirited and effective teaching of mathematics at all levels. The central core of mathematics is not static. It is now undergoing rapid and in many ways revolutionary development. Many old and famous problems are being solved. The traditional boundaries between different mathematical subfields are disappearing. New unifying ideas are applied with great success. Though dynamic, the central core of mathematics preserves historical continuity and uncompromisingly high standards.

THE POSITION OF THE UNITED STATES IN MATHEMATICS

At the beginning of this century, mathematical research activity in this country was chiefly concentrated in a very few centers. Noteworthy was the center at The University of
Chicago under the leadership of E. H. Moore, himself trained in Europe. Among Moore's illustrious students were Oswald Veblen, G. D. Birkhoff, and Leonard Dickson, each a major figure in mathematics.
In the interval between World Wars I and II, mathematics in the United States became somewhat more important relative to world mathematics. Political developments in Europe in the 1930's led many European mathematicians to seek refuge in the United States and to become active members of the American mathematical community. This greatly stimulated mathematical research activity in this
country. The Institute for Advanced Study in Princeton became a world center of mathematical research. Until after the Second World War, however, financial support for mathematical activity was
extremely limited, and only a handful of undergraduates seriously considered careers in research mathematics. During World War II, the relevance of mathematics to the technological might of the nation and the critical shortage of mathematically trained people became apparent. After the war, the mathematical sciences became for the first time a concern of the federal government, initially
through the research departments of defense agencies and then also through the National Science Foundation. While the influx of federal money into the support of mathematics was very modest compared
with the funds poured
into such expensive fields as high-energy physics, its effect on the scientific life of the United States was stupendous. Before World War II, the United States was a
consumer of mathematics and mathematical talent. Now the United States is universally recognized as the leading producer of these. Moreover, graduate education in mathematical sciences at major
centers in this country is far superior to that in all but two or three centers in the rest of the world. Some more specific indicators of the position of the United States in the mathematical world are given below.

1. International congresses of mathematicians, meeting at roughly four-year intervals, were inaugurated around the turn of the century. The earliest such congress, in 1897 in Zurich, was attended by only about 200, whereas at the most recent congress in 1966 in Moscow, attendance was approximately 4,300. An invitation to a mathematician to address an international congress evidences worldwide recognition of his contributions to mathematical discovery at the very highest level. During the first four international congresses, those of 1897, 1900, 1904, and 1908, there were 26 invited addresses, of which only one was by an American.* During the four most recent congresses, those of 1954, 1958, 1962, and 1966, there were 274 invited addresses, of which 96, more than one third, were by U.S. mathematicians.

2. The Fields Medals, for recognition of distinguished achievement by younger mathematicians anywhere in the world, were established in 1932 by the late Professor J. C. Fields. Beginning with the Oslo congress of 1936, two Fields Medals have been awarded at each international congress of mathematicians, except that most recently at the Moscow congress of 1966 four were awarded. In all, 14 Fields Medals have been awarded, with the distribution of medalists by country as follows: France, four; the United States, four; England, two;
Finland, Japan, Norway, and Sweden, one each. Of the 12 Fields Medalists since 1945, three have been Americans trained in America: Paul Cohen (1966), John Milnor (1962), and Stephen Smale (1966). Three others are long-time residents of the United States and should now be considered to be members of the U.S. mathematical community.

* Simon Newcomb, who addressed the 1908 congress in Rome on the history and present status of the theory of lunar motion. Participants at this congress numbered 535, of whom 16 were from the United States.

Four or five others have been and are frequent visitors to the United States, each having spent
at least one academic year here. Some of these are active collaborators with various American mathematicians.

3. English has recently become the dominant language in world mathematical circles. For instance, in the German journal Mathematische Annalen, the percentage of papers in English rose from approximately 5 percent in the mid-1930's to nearly 20 percent in the mid-1950's, and to 55 percent in the mid-1960's.

4. In some representative issues of three of the leading mathematical journals of Europe, Acta Mathematica (Sweden), Commentarii Mathematici Helvetici (Switzerland), and Mathematische Annalen (Germany), the following percentages of references to papers in U.S. journals were found:

YEAR    Acta Math.    Comm. Math. Helv.    Math. Ann.
1935    3%            12%                  A%
1950    12%           24%                  4%
1965    42%           25%                  25%

5. There has been a significant increase in the number of foreign mathematicians visiting in this country. Figures assembled by the American Mathematical Society show that, in 1956, 73 foreign mathematicians spent at least a semester at a U.S. university; in 1960, the number came to 144; in 1965 there were 199 such visitors. In addition, other foreign mathematicians made briefer or more casual visits or more lengthy stays outside universities.
Gravitational stress-energy tensor
From Wikiversity
Gravitational stress-energy tensor is a symmetric tensor of the second valence (rank), which describes the energy and momentum density of gravitational field in the Lorentz-invariant theory of
gravitation. This tensor in the covariant theory of gravitation is included in the equation for determining the metric along with the stress-energy tensors of acceleration field, pressure field and
electromagnetic field. The covariant derivative of the gravitational stress-energy tensor determines the density of gravitational force acting on the matter.
Lorentz-invariant theory of gravitation (LITG)
In LITG the gravitational stress-energy tensor is determined through the gravitational tensor $~\Phi_{ik}$ and the metric tensor $~ \eta^{ik}$ in the Lorentzian metric: ^[1]
$~ U^{ik} = \frac{c^2_{g}} {4 \pi \gamma }\left( -\eta^{im}\Phi_{mr}\Phi^{rk}+ \frac{1} {4} \eta^{ik}\Phi_{rm}\Phi^{mr}\right) ,$
where $~ \gamma$ is the gravitational constant, $~ c_{g}$ is the speed of gravitation.
After replacing $~ \gamma$ with the strong gravitational constant $~ \Gamma$, the gravitational stress-energy tensor can be used to describe strong gravitation at the level of atoms and elementary particles in the gravitational model of strong interaction.
Components of the gravitational stress-energy tensor
Since the gravitational tensor in LITG consists of the components of the vectors of gravitational field strength $~ \mathbf{G}$ and gravitational torsion field $~ \mathbf{\Omega}$, and the tensor $~ \eta^{ik}$ in the 4-coordinates (ct, x, y, z) consists of the numbers 0, 1, -1 and does not depend on the coordinates and time, the components of the gravitational stress-energy tensor can be written explicitly in terms of the components of the mentioned vectors:
$~ U^{ik} = \begin{vmatrix} u & \frac {H_x}{c_{g}} & \frac {H_y}{c_{g}} & \frac {H_z}{c_{g}} \\ c_{g} P_{gx} & u+ \frac{G^2_x+c^2_g \Omega^2_x}{4\pi\gamma} & \frac{G_x G_y+c^2_g \Omega_x \Omega_y }{4\pi\gamma} & \frac{G_x G_z+c^2_g \Omega_x \Omega_z }{4\pi\gamma} \\ c_{g} P_{gy} & \frac{G_x G_y+c^2_g \Omega_x \Omega_y }{4\pi\gamma} & u+\frac{G^2_y+c^2_g \Omega^2_y }{4\pi\gamma} & \frac{G_y G_z+c^2_g \Omega_y \Omega_z }{4\pi\gamma} \\ c_{g} P_{gz} & \frac{G_x G_z+c^2_g \Omega_x \Omega_z }{4\pi\gamma} & \frac{G_y G_z+c^2_g \Omega_y \Omega_z }{4\pi\gamma} & u+\frac{G^2_z+c^2_g \Omega^2_z }{4\pi\gamma} \end{vmatrix}.$
The time-like components of the tensor denote:
1) The volumetric energy density of gravitational field, negative in value
$~ U^{00} = u= -\frac{1}{8 \pi \gamma }\left(G^2+ c^2_{g} \Omega^2 \right).$
2) The vector of momentum density of gravitational field $~\mathbf{P_g} =\frac{ 1}{ c^2_{g}} \mathbf{H},$ where $~\mathbf{H}$ is the vector of energy flux density of gravitational field, or the Heaviside vector:

$~\mathbf{H} =-\frac{ c^2_{g} }{4 \pi \gamma }[\mathbf{G}\times \mathbf{\Omega }].$
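As a quick numerical check of these two formulas, the sketch below (Python with NumPy; the field values are illustrative numbers chosen for this example, not physical data) computes the energy density $U^{00}$ and the Heaviside vector for given $\mathbf{G}$ and $\mathbf{\Omega}$:

```python
import numpy as np

# Illustrative constants and field values (chosen here for the example only).
gamma = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c_g = 2.998e8       # speed of gravitation, taken equal to the speed of light, m/s

G = np.array([1.0e-3, 0.0, 0.0])        # gravitational field strength, m/s^2
Omega = np.array([0.0, 2.0e-12, 0.0])   # gravitational torsion field, 1/s

# Energy density U^{00} = -(G^2 + c_g^2 Omega^2) / (8 pi gamma): negative by construction.
u = -(G @ G + c_g**2 * (Omega @ Omega)) / (8 * np.pi * gamma)

# Heaviside vector H = -(c_g^2 / 4 pi gamma) G x Omega, and the momentum density P_g = H / c_g^2.
H = -c_g**2 / (4 * np.pi * gamma) * np.cross(G, Omega)
P_g = H / c_g**2

print(u)   # negative energy density
print(H)   # energy flux, perpendicular to both G and Omega
```

The flux $\mathbf{H}$ comes out perpendicular to both field vectors, as the cross product requires.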
The components of the vector $~\mathbf{H}$ are part of the corresponding tensor components $~ U^{01}, U^{02}, U^{03}$, and the components of the vector $~\mathbf{P_g}$ are part of the tensor
components $~ U^{10}, U^{20}, U^{30}$, and due to the symmetry of the tensor indices $~ U^{01}= U^{10}, U^{02}= U^{20}, U^{03}= U^{30}$.
According to the Heaviside theorem, the relation holds:
$~ \nabla \cdot \mathbf{H} =-\frac{ \partial U^{00}}{\partial t} - \mathbf{G}\cdot \mathbf{J},$
where $~\mathbf{J}$ is the 3-vector of mass current density.
3) The space-like components of the tensor form a 3 x 3 submatrix, which is the 3-dimensional gravitational stress tensor taken with a minus sign. The gravitational stress tensor can be written as ^[1]
$~ \sigma^{p q} = \frac {1}{4 \pi \gamma} \left( -G^p G^q - c^2_g \Omega^p \Omega^q + \frac {1}{2} \delta^{pq} (G^2 + c^2_g \Omega^2 ) \right) ,$
where $~p,q = 1,2,3,$ $~G^1=G_x,$ $~G^2=G_y,$ $~G^3=G_z,$ $~\Omega^1=\Omega_x,$ $~\Omega^2=\Omega_y,$ $~\Omega^3=\Omega_z,$ and $~\delta^{pq}$ is the Kronecker delta: $~\delta^{pq}=1$ if $~p=q,$ and $~\delta^{pq}=0$ if $~p \neq q.$
The calculation of the three-dimensional divergence of the gravitational stress tensor gives:
$~ \partial_q \sigma^{p q} = f^p +\frac {1}{c^2_g} \frac{ \partial H^p}{\partial t},$
where $~ f^p$ denotes the components of the three-dimensional density of gravitational force, and $~ H^p$ the components of the Heaviside vector.
Gravitational force
The gravitational stress-energy tensor has such form that it allows us to find the 4-vector of the gravitational force density $~ f^\alpha$ by differentiation in four-dimensional space:
$~ f^\alpha = -\partial_\beta U^{\alpha \beta} = \Phi^{\alpha}_{i} J^i . \qquad (1)$
As we can see from formula (1), the 4-vector of gravitational force density can be calculated in a different way, through the gravitational tensor with mixed indices $\Phi^{\alpha}_{i}$ and the
4-vector of mass current density $~J^i$. This is due to the fact that in LITG the gravitational field equations have the form:
$~ \partial_n \Phi_{ik} + \partial_i \Phi_{kn} + \partial_k \Phi_{ni}=0,$
$~\partial_k \Phi^{ik} = \frac {4 \pi \gamma }{c^2_{g}} J^i .$
Expressing from the latter equation $~J^i$ in terms of $\Phi^{ik}$ and substituting in (1) and also using the definition of the gravitational stress-energy tensor, we can prove the validity of
equation (1). The components of the 4-vector of the gravitational force density are as follows:
$~ f^\alpha = (\frac {\mathbf{G} \cdot \mathbf{J} }{c_g}, \mathbf{f} ),$
where $~ \mathbf{f}= \rho \mathbf{G} + [\mathbf{J} \times \mathbf{\Omega} ]$ is the 3-vector of the gravitational force density, $~\rho$ is the density of the moving matter, $~\mathbf{J} =\rho \mathbf{v}$ is the 3-vector of the mass current density, and $~\mathbf{v}$ is the 3-vector of the velocity of the matter unit.
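The 3-vector part of this force density is straightforward to evaluate numerically; the sketch below (illustrative values chosen here) computes $\mathbf{f} = \rho\mathbf{G} + \mathbf{J}\times\mathbf{\Omega}$ and the power density $\mathbf{G}\cdot\mathbf{J}$ that appears in the time-like component:

```python
import numpy as np

rho = 1.0e3                           # matter density, kg/m^3 (illustrative)
v = np.array([10.0, 0.0, 0.0])        # matter velocity, m/s
G = np.array([0.0, -9.8, 0.0])        # field strength, m/s^2
Omega = np.array([0.0, 0.0, 1.0e-6])  # torsion field, 1/s

J = rho * v                           # mass current density, kg m^-2 s^-1

# 3-vector of the gravitational force density: the rho*G term plus the
# J x Omega term, in close analogy with the Lorentz force of electromagnetism.
f = rho * G + np.cross(J, Omega)

# Power density G.J (the time-like component of f^alpha is G.J / c_g).
power_density = G @ J

print(f)              # dominated by rho*G; J x Omega gives a small correction
print(power_density)  # zero here, since v is perpendicular to G
```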
The integral of (1) over the three-dimensional volume of a small particle or a matter unit, calculated in the reference frame co-moving with the particle, gives the gravitational four-force:
$~ F^\alpha = \Phi^{\alpha}_{i} M u^i= \Phi^{\alpha}_{i} p^i = (\frac {\mathbf{G} \cdot \mathbf{p} }{c_g}, \frac{E}{ c^2_g } \mathbf{G}+ [\mathbf{p} \times \mathbf{\Omega}]) .$
In the integration it was taken into account that $~J^i = \rho_0 u^i$, where $~ \rho_0$ is the mass density in the co-moving reference frame, $~ M$ is the invariant mass, $~ u^i$ is the 4-velocity of the particle, $~ p^i$ is the 4-momentum of the particle, $~\mathbf{p}$ is the relativistic momentum, and $~E$ is the relativistic energy of the particle. It is also assumed that the mass densities $~ \rho_0$ and $~ \rho$ include contributions from the mass-energy of the proper gravitational field and the electromagnetic field of the particle. The obtained 4-force acts on the particle from the gravitational field with the tensor $~ \Phi^{\alpha}_{i}$, and in some cases we can neglect the proper gravitational field of the particle and consider its motion only in the external field.
Relation to the 4-vector of energy-momentum
The gravitational stress-energy tensor contains the time-like components $~ U^{0k}$; integrating them over the moving volume, we can calculate the 4-vector of energy-momentum of the free gravitational field, separated from its sources:
$~ Q^k = \int {\frac { U^{0k}}{ c_g } dV} = (\frac {U}{c_g}, \mathbf {Q}) ,$
where $~ U = \int { U^{00} dV}$ is the total energy of gravitational field, $~ \mathbf {Q} = \int { \mathbf {P_g}dV}$ is the total momentum of the field.
If in this volume there is matter as the source of proper gravitational (electromagnetic) field, we should consider the total 4-vector of energy-momentum, which includes contributions from all the
fields in the given volume, including the acceleration field and the pressure field. In particular, for a uniform spherical body with the radius $~ R$, the 4-vector of energy-momentum, taking into account the proper gravitational field of the body, has the form: ^[2]
$~ p^k = (m_g + \frac {U_0}{2c^2_g}) u^k =M u^k ,$
where $~ m_g$ is the gravitational mass, which is equal to the mass $~ m_b$ defined through the mass density and volume, $~ U_0 = - \frac {3 \gamma m^2_g }{5R}$ is the total gravitational energy of the body in the reference frame in which the body is at rest, and $~ M$ is the invariant inertial mass, which includes the contributions of the mass-energy from all the fields.
It is assumed that the mass $~ M$ is equal to the mass $~ m'$ of the matter particles without their gravitational binding energy; when the particles combine into the whole body, the bulk of their gravitational energy is compensated by the internal energy of the particles' motion and the energy of the body's pressure. Since the energy $~ U_0$ is negative, the condition $~ m_b =m_g > M =m'$ holds; that is, as long as the scattered matter undergoes gravitational contraction into a body of finite size, the gravitational mass increases. We can also find the invariant energy of the physical system:
$~ E_0 =Mc^2= c \sqrt {g_{ik}p^i p^k } .$
This can be compared with the approach of general relativity, in which a mass $~ m_b$ is used such that, when the mass from the gravitational field energy is added to it, we obtain the relativistic mass: $~ M = m_b + \frac {U_0}{c^2}$, with the relation $~ m_g = M < m_b < m'$.
Covariant theory of gravitation (CTG)
CTG is the generalization of LITG to arbitrary reference frames and to phenomena that occur in the presence of fields and accelerations of the acting forces. In CTG all deviations from the LITG relations are described by the metric tensor $~ g^{ik}$, which becomes a function of the coordinates and time. In addition, in the equations the operator of the 4-gradient $~ \partial_k$ is replaced by the covariant derivative $~ \nabla_k$. After replacing $~ \eta^{ik}$ by $~ g^{ik}$, the gravitational stress-energy tensor takes the following form:
$~ U^{ik} = \frac{c^2_{g}} {4 \pi \gamma }\left( -g^{im}\Phi_{mr}\Phi^{rk}+ \frac{1} {4} g^{ik}\Phi_{rm}\Phi^{mr}\right) .$
Transforming the contravariant indices in the gravitational tensor $~ \Phi^{mr}$ into covariant indices with the help of the metric tensor, and interchanging some indices which are summed up, we obtain:
$~ U^{ik} = \frac{c^2_{g}} {4 \pi \gamma }g^{ms} \left( - g^{ir} g^{\mu k} + \frac{1} {4} g^{ik} g^{\mu r} \right) \Phi_{r m}\Phi_{s \mu}.$
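The index gymnastics here are easy to get wrong by hand; the NumPy sketch below builds $U^{ik}$ from an antisymmetric $\Phi_{ik}$ and a metric $g^{ik}$ (the flat Minkowski metric is used as a check case; the placement of $\mathbf{G}/c_g$ and $\mathbf{\Omega}$ inside $\Phi_{ik}$ follows the electromagnetic analogy and is an assumption of this example) and checks that the resulting $U^{00}$ reproduces the energy density quoted earlier:

```python
import numpy as np

gamma = 6.674e-11
c_g = 2.998e8

# Metric with upper indices; flat Minkowski here, a curved g^{ik} could be substituted.
g = np.diag([1.0, -1.0, -1.0, -1.0])

G_vec = np.array([1.0e-3, 0.0, 0.0])   # field strength, m/s^2 (illustrative)
Om = np.array([0.0, 2.0e-12, 0.0])     # torsion field, 1/s (illustrative)

# Antisymmetric gravitational tensor Phi_{ik}, filled by analogy with the
# electromagnetic field tensor: Phi_{0a} = G_a / c_g, spatial part from Omega.
Phi = np.zeros((4, 4))
Phi[0, 1:] = G_vec / c_g
Phi[1:, 0] = -G_vec / c_g
Phi[1, 2], Phi[2, 1] = -Om[2], Om[2]
Phi[1, 3], Phi[3, 1] = Om[1], -Om[1]
Phi[2, 3], Phi[3, 2] = -Om[0], Om[0]

Phi_up = np.einsum('ia,kb,ab->ik', g, g, Phi)   # Phi^{ik} = g^{ia} g^{kb} Phi_{ab}
invariant = np.einsum('rm,mr->', Phi, Phi_up)   # Phi_{rm} Phi^{mr}

# U^{ik} = (c_g^2 / 4 pi gamma) ( -g^{im} Phi_{mr} Phi^{rk} + (1/4) g^{ik} Phi_{rm} Phi^{mr} )
U = c_g**2 / (4 * np.pi * gamma) * (
    -np.einsum('im,mr,rk->ik', g, Phi, Phi_up) + 0.25 * g * invariant)

u_expected = -(G_vec @ G_vec + c_g**2 * (Om @ Om)) / (8 * np.pi * gamma)
print(np.allclose(U, U.T))              # the tensor is symmetric
print(np.isclose(U[0, 0], u_expected))  # U^{00} matches the energy density formula
```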
Since the gravitational tensor $~ \Phi_{mr}$ with covariant indices consists of the components of vector of gravitational field strength $~ \mathbf{G}$, divided by the velocity $~ c_g$, and the
components of vector of gravitational torsion field $~ \mathbf{\Omega}$, the formula shows that in the curved spacetime the gravitational stress-energy tensor is the sum of the products of the
components of these vectors with the corresponding coefficients of the components of the metric tensor. And it turns out that the energy density of gravitational field $~ U^{00}$ contains mixed
products of the form $~ G_x \Omega_y$, etc. There are no such products in the flat Minkowski space, which leads to the fact that in the spacetime of special relativity the energy associated with the
strength of gravitational field is not mixed with the energy of the torsion field. The same situation takes place in electromagnetism: in Minkowski space the energy of the electric field is
calculated separately from the energy of the magnetic field, but in the curved spacetime in the energy density of the electromagnetic field there is additional energy from the mixed components with
the products of the components of the strengths of the electric and magnetic fields.
Due to the use of covariant differentiation in four-dimensional space in CTG the gravitational field equations are changed as well as the expression for the 4-vector of gravitational force density
(1), while the expression for the gravitational 4-force remains the same: ^[3]
$~ \nabla_n \Phi_{ik} + \nabla_i \Phi_{kn} + \nabla_k \Phi_{ni}=0,$

$~\nabla_k \Phi^{ik} = \frac {4 \pi \gamma }{c^2_{g}} J^i ,$

$~ f^\alpha = -\nabla_\beta U^{\alpha \beta} = \Phi^{\alpha}_{i} J^i ,$
$~ F^\alpha = \Phi^{\alpha}_{i} M u^i= \Phi^{\alpha}_{i} p^i.$
Equation for the metric
In CTG the metric tensor is determined by solving the equation similar to Hilbert-Einstein equation. In covariant indices this equation can be written as follows: ^[4]
$~ R_{ik} - \frac{1} {4 }g_{ik}R = \frac{8 \pi \gamma \beta }{ c^4} \left( B_{ik}+ P_{ik}+ U_{ik}+ W_{ik} \right),$
where $~ R_{ik}$ is the Ricci tensor, $~ R=R_{ik}g^{ik}$ is the scalar curvature, $~ \beta$ is the coefficient to be determined, $~ B_{ik}$, $~ P_{ik}$, $~ U_{ik}$ and $~ W_{ik}$ are the
stress-energy tensors of the acceleration field, pressure field, gravitational and electromagnetic fields, respectively, and it is assumed that the speed of gravitation $~ c_g$ is equal to the speed
of light.
In contrast to the general theory of relativity, in this equation there is no cosmological constant $~ \Lambda$; instead there is an additional constant $~ \beta$, and the metric depends on the gravitational stress-energy tensor. The latter is a consequence of the fact that in CTG gravitation is an independent physical force, like the electromagnetic force, and therefore participates in determining the metric according to the principles of the metric theory of relativity.
Equation of motion
Using the principle of least action allows us to deduce not only the formula for the gravitational stress-energy tensor, but also the equation of motion, written in tensor form:

$~ \nabla_k \left( B^{ik}+ U^{ik} +W^{ik}+ P^{ik} \right)=0. \qquad (2)$
The covariant derivative of the stress-energy tensor of acceleration field defines up to the sign the density of the 4-force acting on the field from the matter. At the same time the operator of
proper-time-derivative is applied to the 4-velocity in the Riemannian space:
$~f^i = - g^{in}u_{nk} J^k = \nabla_k B^{ik}= \rho_0 \frac{ Du^i } {D \tau }= \rho_0 u^k \nabla_k u^i = \rho_0 \frac{ du^i } {d \tau }+ \rho_0 \Gamma^i_{ks} u^k u^s ,$

where $~ u_{nk}$ is the acceleration tensor, $~\tau$ is the proper dynamic time of the particle in the reference frame at rest, and $~\Gamma^i_{ks}$ is the Christoffel symbol.
The total density of the 4-force of the gravitational and electromagnetic fields and pressure field is determined by transfer of the stress-energy tensors of the fields to the right side of the
equation of motion (2) and then applying the covariant derivative:
$~f^i = -\nabla_k \left(U^{ik}+ W^{ik} + P^{ik} \right) = g^{in}\left(\Phi_{nk} J^k + F_{nk} j^k + f_{nk} J^k \right),$
where $~F_{nk}$ is the electromagnetic tensor, $~f_{nk}$ is the pressure tensor, $~j^k = \rho_{0q} u^k$ is the 4-vector of the electromagnetic current density, $~\rho_{0q}$ is the density of electric
charge of the matter unit in the reference frame at rest.
Conservation laws
In the weak field limit, when the covariant derivative can be replaced by the partial derivative, for the time-like component in (2), which has the index $~ i=0$, the local conservation law of
energy-momentum of matter and gravitational and electromagnetic fields can be written as follows: ^[5]
$~ \nabla \cdot (\mathbf{ K }+ \mathbf{H}+\mathbf{P}+ \mathbf{F} ) = -\frac{\partial (B^{00}+U^{00}+W^{00}+P^{00} )}{\partial t},$
where $~ \mathbf{ K }$ is the vector of the acceleration field energy flux density, $~ \mathbf{H}$ is the Heaviside vector, $~ \mathbf{P}$ is the Poynting vector, $~ \mathbf{F}$ is the vector of the
pressure field energy flux density, which are determined in the special relativity.
According to this law, the work of the field to accelerate the masses and charges is compensated by the work of the matter to create the field. As a result, the change in time of the total energy in
a certain volume is possible only due to the inflow of energy fluxes into this volume.
The analysis of equation (2) for the space-like components with the index $~ i=1,2,3$ in a weak field shows that, taking into account the equation of motion of the matter in the field, all the force densities and the rates of change of the momenta of the matter and fields cancel.
However, in the general case, when the spacetime is significantly curved by the existing fields and matter, in (2) we should consider the contributions of the additional non-zero components of the metric tensor and their derivatives, which are absent in the special theory of relativity. This follows from the fact that the covariant derivative is expressed through the partial derivative and the Christoffel coefficients. Since in CTG the purpose of using the metric tensor is to correct the equations of motion so as to take into account the dependence on the fields of the time intervals and spatial distances measured by electromagnetic (gravitational) waves, such a correction changes the notation of many physical quantities written as 4-vectors and tensors. In particular, if we consider equation (2) as the local conservation law of energy and momentum of the matter and the gravitational and electromagnetic fields, the appearance of additional contributions with the components of the metric tensor refines the theory for the case of curved space. The physical meaning of the results obtained for the flat spacetime and the weak field remains unchanged.
As another example we can consider the integral of the left side of (2) over the entire four-dimensional space. In flat Minkowski space the covariant divergence of the sum of the tensors becomes an
ordinary 4-divergence, to which we can apply the divergence theorem. By this theorem the integral of the 4-divergence of some tensor over the 4-space can be replaced by the integral of the tensor
over the hypersurface surrounding the 4-volume, over which the integration is done. If we choose a projection of this hypersurface on the hyperplane of the constant time in the form of a
three-dimensional volume, the integral of the left side of (2) is transformed into the integral of the sum of the time components of the tensors in (2) over the volume, which must equal the conserved 4-vector of the physical system under consideration:
$~ \mathbb{Q}^i= \int{ \left( B^{i0}+ U^{i0} +W^{i0}+P^{i0} \right) dV }.$
For these tensor components the 4-vector $~ \mathbb {Q}^i$ vanishes. ^[5] Vanishing of the 4-vector allows us to explain the 4/3 problem, according to which the mass-energy of the gravitational or electromagnetic field, as found from the field momentum of the moving system, is 4/3 times greater than the mass-energy found from the field energy of the fixed system.
General theory of relativity (GTR)
In GTR, there is a problem of determining the gravitational stress-energy tensor. The reason is that due to the applied principle of geometrization of physics, all manifestations of gravitation are
completely replaced by the geometric effect – the spacetime curvature. Thus, the gravitational field is reduced to the metric field, set by the metric tensor and its derivatives with respect to
coordinates and time. Since each reference frame has its own metric, gravitation is not an independent physical interaction but is regarded as a consequence of the metric of the reference frame. This leads to the fact that instead of the gravitational stress-energy tensor in GTR there is a pseudotensor which depends on the metric. The feature of this pseudotensor is that locally it can be made equal to zero at any point by choosing the corresponding reference frame. As a result, GTR has to give up the possibility of accurately determining the localization of the energy and momentum of the gravitational field in a physical system, which significantly impedes understanding the physics of gravitation at a fundamental level and describing the phenomena in classical form. The purpose of the Hilbert-Einstein equations in GTR is to find the metric:
$~ R_{ik} - \frac{1} {2 }g_{ik}R + g_{ik} \Lambda = \frac{8 \pi \gamma }{ c^4} \left( t_{ik} + W_{ik} \right).$
These equations do not contain the gravitational stress-energy tensor, which is present in the right side of the equations for the metric in CTG. The metric in GTR depends only on the matter and the electromagnetic field in the considered reference frame. After such a metric is found, it is used to find the pseudotensor of the energy-momentum of the gravitational field; this pseudotensor must depend only on the metric tensor, be symmetrical with respect to the indices, and, when added to the stress-energy tensors of matter $~ t_{ik}$ and of the electromagnetic field $~ W_{ik}$, give zero divergence so that the conservation law of energy-momentum of the matter and field is satisfied. In a weak field, that is, in the special theory of relativity, this pseudotensor must vanish in order to ensure the equivalence of free fall in the gravitational field and inertial motion.
The Landau-Lifshitz pseudotensor corresponds to the mentioned criteria. ^[6] There is also an Einstein pseudotensor, but it is not symmetrical with respect to the indices. ^[7] Note that, in contrast to GTR, in mathematics the term pseudotensor means something different: a tensor quantity which changes its sign when it is transformed into a coordinate reference frame with the opposite orientation of the basis.
Since in GTR there is no gravitational stress-energy tensor, the gravitational energy of a body is determined indirectly. For example, in one method, for the given mass density and body size the mass of the body is calculated, first in the absence of the metric's influence, and then taking into account the metric's influence and the corresponding change of the volume element in the integral for the mass under the influence of the gravitational field. The difference of these masses is equated to the mass-energy of the gravitational field, as a manifestation of the metric field. ^[8] In this case, the components of the metric tensor found in GTR from the Hilbert-Einstein equation for the metric are used.
Hexayurts and African Villages
As you may have guessed I am a fan of mathematics. Numbers themselves leave me cold, however. Even some of the great quests of modern mathematics can occasionally appear to me simply as sudoku for
geniuses. The thing that gets me excited about mathematics is the way of thought. The way a simple proof can take a very difficult question, twist it and make it easy. The way you can take an idea
and reduce it to its simplest possible form. Trying to find the smallest amount of structure one needs without the idea falling apart.
This process often takes you into incredibly abstract theoretical worlds. Yet by thinking in these terms we can often find ideas that were drowned out in the complexity of the “real” world. One of my
favourite examples of this is the fractal patterns found in African villages by Ron Eglash:
Once pointed out, the ideas are not hard to see, and have been used to help understand the cultures the buildings come from; yet without the right idea of what to look for, the structure is not apparent.
This example comes as much from anthropology as mathematics, and it can often be easiest to describe what I feel is mathematical thought away from mathematics. My second example, therefore, comes from
engineering. Consider a disaster like the one in Haiti: infrastructure and housing have been destroyed, and there is a need to quickly create shelter for a large number of people. The standard answer is the relief tent. This, however, is a very short-term solution; a tent may only last a year or two, by which time the attention of the world has drifted. A better idea is to find cheap, easy-to-assemble shelters, often called transitional housing.
Vinay Gupta took a different route, and I am going to accuse him of mathematical thought. He cut away all the needs a house had and thought just about the simplest way to make a structure. He was
actually thinking about how to make an easy geodesic dome. The result was the hexayurt, a building made of 12 sheets of (internationally ubiquitous) 4′x8′ plywood:
This is a building that can be made on site without specialist parts (a hammer, a saw, some offcuts of wood and the 12 sheets are all that is required), and one that is cheaper than the standard relief tent (just $100). It does not even need much skill, though a fair amount of manpower is needed; that is the one thing such disasters usually have in abundance. Furthermore, it is easy to adapt the building: a room can be divided off with 3 additional sheets, a taller building can be made by using two sheets to make square walls, and as hexagons the buildings can be easily clustered. In fact, taking advantage of this, I could not help but combine the two examples to make a fractal village of hexayurts:
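As a rough check on the floor space (my reading of the geometry, not a spec from the post: the six wall sheets stand on their 8′ edges, so the footprint is a regular hexagon of side 8 ft):

```python
import math

side_ft = 8.0  # a 4'x8' sheet stood on its long edge gives one 8 ft wall segment
# Area of a regular hexagon with side s is (3 * sqrt(3) / 2) * s^2
floor_area = 1.5 * math.sqrt(3) * side_ft ** 2
print(round(floor_area, 1))  # about 166.3 square feet
```

So for twelve sheets of plywood you get over 160 square feet of floor, considerably more than a typical relief tent.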
In all this however we have to be careful, malign forces are watching our every move and we never know when they might pounce:
One thought on “Hexayurts and African Villages”
1. Dear Dr. Gupta
We are considering incorporating the thinking behind the image ‘HEXAFRACTAL, AFRICAN VILLAGE’ into a design for an African American Museum. We should talk.
Reese Palley
Mathematical tea time
We engage in a leisurely discussion / exploration of exciting, accessible, and significant mathematical achievements and open problems. The mathematical tea times are open to anyone who is interested in higher mathematics (including faculty and students not formally affiliated with the Mathematics department). The meetings are teas and not formal classes or lectures. They are coordinated by Christopher Simons.
• Prime number races (Student leader: Lucas Willis)
• Projective planes and finite geometries (Student Leader: Laura Doot)
• The spectacular Green-Tao theorem on arithmetic progressions of primes (and a 2006 Fields Medal?)! (Student Leader: Ed Greve)
2004-2005 Topics
Morton Grove Trigonometry Tutor
...I have been in the Glenview area the past four years and have tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands Academy of the Sacred
Heart. So if you are really struggling with chemistry or math or just want to improve your grades I'm the ...
20 Subjects: including trigonometry, chemistry, calculus, physics
...For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. The course began with simple programming commands, progressed to logic and more
complicated problem solving, and culminated with object oriented programming. During my masters degree I was a TA for the intro to computer science course.
17 Subjects: including trigonometry, physics, calculus, GRE
Hello! I hold a Ph.D. in Neuroscience and a B.A. in Biology (with a Neuroscience concentration) and a History Minor (European concentration). I have extensive experience with teaching and tutoring
and have worked with students from first grade through college age. Recently, I conducted biomedical research at an Ivy League university in New York.
29 Subjects: including trigonometry, reading, chemistry, algebra 1
...One of my most important responsibilities is preparing students for placement exams. Many of the students I work with take the ISEE. As a result I have come to know this exam very well.
24 Subjects: including trigonometry, calculus, geometry, GRE
...I teach by asking the student prompting questions so the student can practice the thought processes that lead them to determining correct answers on their own. This increases the success rate
on examinations and enhances the critical thinking skills necessary in the 'real world'. I also provide...
13 Subjects: including trigonometry, chemistry, calculus, geometry
Nobel Laureates in Physics
The Nobel Prize in Physics is awarded annually by a committee of five members elected by the Royal Swedish Academy of Sciences to one or more individuals, who have made an outstanding contribution in
physics. Since the Institute opened, eight winners of the Nobel Prize in Physics have attended our programmes.
Ellipse problem
January 7th 2010, 07:19 PM #1
How will u find out that if the point $(\alpha, \beta)$ is outside, on or inside the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$?
Hence show that the triangle whose vertices are (1,2), (3,-1) and (-2,1) lies wholly inside the ellipse $x^2 + 2y^2 = 13.$
This entire problem can be solved by graphing.
Look at the standard form equation of the ellipse: $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$.
If $a > b$, then $a$ is the semi-major axis and $b$ the semi-minor axis; if $a < b$ the roles are reversed.
You can now sketch the ellipse, but you can also check without drawing a graph.
Substitute $(x,y)$ with $(\alpha,\beta)$ in the left-hand side: if the value is less than 1 the point is inside, if it equals 1 the point lies on the curve, and if it is greater than 1 the point is outside.
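A quick numerical check of this test for the ellipse in the question, $x^2 + 2y^2 = 13$ (so $a^2 = 13$, $b^2 = 13/2$), taking the third vertex as $(3,-1)$ — with $(3,-11)$ the point would lie far outside, so that reads like a typo:

```python
def ellipse_value(x, y, a2, b2):
    """x^2/a^2 + y^2/b^2: < 1 inside, = 1 on the curve, > 1 outside."""
    return x * x / a2 + y * y / b2

a2, b2 = 13.0, 13.0 / 2.0  # x^2 + 2y^2 = 13 in standard form
for vertex in [(1, 2), (3, -1), (-2, 1)]:
    assert ellipse_value(*vertex, a2, b2) < 1  # every vertex is inside
# The interior of an ellipse is convex, so if all three vertices are
# inside, the whole triangle lies inside.
```

The convexity remark is what turns the three point-checks into the statement about the whole triangle.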
A critical stress model for cell motility
A detailed theoretical model that combines the conventional viscoelastic continuum description of cell motion with a dynamic active stress is presented. The model describes the movement of ameboid cells, comprising protrusion and adhesion of the front edge followed by detachment and movement of the tail. Unlike previous viscoelastic descriptions, in which the cell movement is steady, the presented model describes the “walking” of the cell in response to specific active stress components acting separately on the front and rear of the cell. In this locomotive model, first the tail of the cell is attached to the substrate and active stress is applied to the front of the cell; consequently, the stress in the tail increases. When the stress in the tail exceeds a critical value, namely the critical stress, the conditions are updated so that the front is fixed and the tail of the cell is detached from the substrate and moves towards the front; consequently, the stress in the tail decreases. When the stress reaches zero, the starting conditions become active again and the process continues. At the start the cell is stretched and its length increases, as the front of the cell migrates more than the rear. However, after several steps the front and rear move equally and the cell length stays constant during the movement. In this manuscript we analyze this cell dynamics, including the length variation and moving velocity. Finally, considering the fact that at the single-cell level interactions with the extracellular environment occur on a nanometer length scale, the value of the critical stress is estimated.
Finite difference; Cell motility; Continuum model; Critical stress
Cell motility is involved in many biological events and pathological processes. In this regard, understanding the forces between cells and substrates responsible for cell motility not only allows insight into many pathological processes but also holds promise for designing novel engineered materials for tissue engineering and regenerative medicine [1-3]. Migration involves
different coordinated events such as protrusion of pseudopodia, formation of new adhesions, maturation of traction, and release of old adhesions [4]. To obtain suitable physiological effects, cell
motility must maintain a specific speed and direction in response to environment stimuli. As a challenging issue, migration control by gradients of dissolved and surface-attached chemicals has been
investigated for decades [5-8]. The motility of different cells involves some stages. According to Mitchison and Cramer [9] the motility of ameboid cells includes four different steps of protrusion,
attachment to substrate, translocation of cell body, and detachment of its rear. Cells first extend localized protrusions at the leading edge, which take the form of lamellipodia, filopodia or
pseudopodia. Most current models explain force generation at the leading edge by localized actin polymerization and crosslinking (or gelation) of actin filaments. In the second step, the protrusion
anchors to other cells or to the substrate [10]. A protrusion maintains its stability through the formation of new adhesive complexes, which act as sites for molecular signaling as well as transmitting mechanical force to the substrate. In the next step, actomyosin filaments pull the cell toward the protrusion; in fibroblasts this occurs by a contraction at the cell front, whereas in other kinds of cells contraction is at the rear and the cytoplasm is compressed from the front. Finally, in the last step, the cell disconnects the adhesive contact, which allows the tail of the cell to follow the main body [11,12].
During the last few decades, numerous models of cell motility have been reported. In 1989, Lauffenburger [13] studied the correlation between cell speed and receptor density and affinity. He also reported a one-dimensional model describing three regions: the lamellipod, the cell body, and the uropod. In 1991, DiMilla et al. [14] analyzed the interactions of the cell and the substrate using additional
Maxwell elements at the front and the rear. In their model the cells consisted of discrete subunits, each with a spring, dash-pot and contractile element connected to each other in parallel. They
showed that this bell-shaped distribution of the cells speed could be described by an asymmetry in adhesiveness from preferable binding at the cell front.
Recently, a method has been studied and applied to a two-dimensional model of nematode sperm by Bottino et al. [15]. They modeled the interactions of the cell and the substrate by a viscous drag
between the substrate and the cell. They also modeled the polymerization of actin network at the forward edge and its disassembly at the rear of the cell both for single and interacting cells. This
model was biochemically regulated and described the fixed continuous movements of the cell.
These models usually treat the cell body as a combination of dashpots and springs and solve the resulting force balance equations at each node. Although this approach gives qualitative insight into the features of cell motility, the cell body is more accurately described as a possibly multi-phase continuum. Therefore, modeling the cell by means of a continuum approach seems more appropriate.
More recently Gracheva and Othmer [16] developed a continuum model for the cell as a viscoelastic material. They studied the spatial variability of elasticity and viscosity coefficients in addition
to the gradient in physical characteristics of the substrate. This approach gave them the opportunity of modeling different kinds of cells. In 2010, Sarvestani [17] described a physical model to
study the motility of a contractile cell on a substrate. The model demonstrated that the motility of cells significantly depended on the rigidity of the substrate. This dependency was rooted in the
regulation of actomyosin contractile forces by substrate at different anchorage points. It suggested that on stiffer substrates, the traction forces required for cell translocation acquire larger
magnitude. However, this results in weaker asymmetry which causes slower cell motility. Also, on soft substrates, the model suggested a meaningful relationship between the rigidity of the substrate
and the speed of cell movement.
As we explained earlier, the motility of ameboid cells includes four steps of protrusion, adhesion to substrate, cell body movement and detachment of cell tail. In the previous studies, these steps
have not been considered for the cell motility modeling. Instead, a steady movement was attributed to the cell. Although the previous steady models were in agreement with the experimental data in
term of the length and the position of the cell, in order to study the stress generated in the cell during its motion a model based on the steps of ameboid cell motility, which is closer to the
motion of a real cell, is necessary.
In this study, we present a critical stress two-step walking model for cell motility, which is of great interest to scientists dealing with tissue engineering and nanomedicine. The boundary conditions in our model are closer to the actual motion of the cell [18-20], as shown schematically in Figure 1. As seen in this figure, the cell front moves while the rear is attached to the substrate. When the stress at the rear of the cell exceeds a critical value, the front stops and the rear starts migrating. At the beginning of the process, the front moves a longer distance than the rear at each step, resulting in a stretching of the cell. However, after several steps the stretch counteracts the front motion, so that the front and the rear move equal distances at each step. Therefore, the length of the cell reaches an equilibrium value, which depends on the cell properties.
Figure 1. The steps of cell motility considered in our model. (a) The cell rear is adhered to the substrate and the cell front is moving. (b) The length of the cell is increased to L[1]+Δx[f]. (c) After the stress reaches a certain value (the critical stress) the rear of the cell detaches and starts moving forward while the front of the cell is adhered to the substrate. (d) The rear of the cell moves by Δx[r]. (e) The process repeats with the new cell length L[1]+ΔL, where ΔL=Δx[f]−Δx[r]. Note that the length of the cell cannot stretch indefinitely; it will reach a point where the rear and front of the cell have the same displacement.
Equations of motion and boundary conditions
Generally, investigating cell functions such as migration, adhesion, and differentiation requires accurate mimicking of the in vivo microenvironment. This mimicking of the natural extracellular matrix requires biomaterials that are tunable down to the nanometer length scale. In this work, the one-dimensional simulation of cell motility is based on the classical continuum model for an ameboid cell, previously developed by Gracheva et al. in 2004 [16]. We extended the model by applying variable boundary conditions to describe cell motility in a critical stress two-step walking model. In brief, following ref. [16], the equation of motion for the cell is defined as:
In which x is position, t is time, u is the displacement of the cell, σ is the stress along the cell, and β is an effective drag coefficient or friction. σ is governed by the following equation:
E(x) is the elastic modulus, μ is the viscosity coefficient, and τ(x) is the active stress. β(x) in (1) changes along the cell length and is given by:
β[0] is a constant, k[s] is a coefficient for cell-substrate interaction, ψ[1]≥1 is the linear increase of dissociation rate towards the rear, r and f are the positions of the cell’s rear and front,
respectively, and n[f] is the density of free integrins.
Generally, integrin clustering is required to support cell locomotion, as cell motility is regulated by varying ligand spatial presentation at the nanoscale. Since the dynamics of actin network formation is detailed in ref. [21], it is not reproduced here. The spatial distribution of actin network density is assumed time-independent, with the following description derived in ref. [16], in which the dependence of the elastic modulus, E(x), on x can be expressed as:
Where E[0] is a constant.
By approximately matching the curve presented in the study of Gracheva et al. 2004 [16], the function a(x) is obtained as eq. 5:
Finally, τ(x) can be calculated from:
τ[0] is a constant, K[Reg]^+ and K[Reg]^− are the rate of activation and deactivation of bound myosin II, respectively,Ψ[2]≥1, [Reg][0] is the maximum level of regulatory protein, n[b] is the density
of integrins bound to the substrate, n[b0] is its typical value, α is a degree of coupling between regulatory protein and integrins, K[m]^+ and K[m]^− are the rate of myosin binding and decay of
bound myosin, respectively, and m[f] is the concentration of free myosins. Table 1 lists the cell parameters used in the calculations. It is further assumed that E[0]=0.42×10^-10 N/mm [22] and that the viscosity is constant, μ=0.0002 Ns/mm^2.
According to the free-body diagram of the cell shown in Figure 2, the equations of motion and the boundary conditions can be presented as follows:
There are two different boundary conditions in this model. First, the rear of the cell is fixed and active stress is applied to the front of the cell. During this time, the stress at the first point of the cell increases. When this stress exceeds a critical value (i.e., σ[c], the magnitude of the critical stress), the boundary condition changes: the front is fixed and the rear of the cell is released and starts to move toward the front. During this second course, the stress at the first point decreases, and when it reaches zero, the previous boundary condition becomes active again. The procedure repeats during the cell movement.
In the first course, when the stress at the first point is still below the critical value, i.e. σ[1] < σ[c], the B.C. is:
In the second course, when σ[1] exceeds σ[c], the B.C. changes to:
In all the equations, σ is defined as in eq. 2. The stress generated by the frontal applied load, σ[active], is defined by the following equation:
In which S[cell]=30 μm^2 and F[active]=1000 nN [23]. By inserting the strain, du/dx, into eq. 2, substituting eq. 2 into eqs. 7, 8 and 9, and discretizing with the finite difference method, the equation of motion becomes:
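The discretized equation itself is not reproduced in this excerpt; one plausible central-difference form, written as a sketch rather than the paper's exact stencil, is

$$\beta_i\,\frac{u_i^{j+1}-u_i^{j}}{\Delta t}=\frac{\sigma_{i+1}^{j+1}-\sigma_{i-1}^{j+1}}{2\Delta x},\qquad
\sigma_i^{j+1}=E_i\,\frac{u_{i+1}^{j+1}-u_{i-1}^{j+1}}{2\Delta x}+\frac{\mu}{\Delta t}\cdot\frac{\big(u_{i+1}^{j+1}-u_{i-1}^{j+1}\big)-\big(u_{i+1}^{j}-u_{i-1}^{j}\big)}{2\Delta x}+\tau_i.$$

Collecting the $u^{j+1}$ terms on the left-hand side yields a linear system of the form $A\,u^{j+1}=B\,u^{j}+C$, which is the structure referred to in eq. 24.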
The boundary conditions at the front and rear of the cell in the steady-state motion are obtained as follows:
Leading edge:
Trailing edge:
The superscripts i and j represent the cell node position and the time step, respectively. In this work, the cell is divided into 100 parts with 101 nodes. The time step dt has to be less than the time constant of the viscoelastic model, defined as the ratio of the viscosity to the elasticity in the Kelvin-Voigt model. Here the minimum time constant is 0.00066 minutes; therefore, dt=0.0001 was chosen.
Considering the first set of boundary conditions (eq. 9 and eq. 10) and using eq. 2, we will have σ[1] as:
When the second set of boundary conditions, eq. 11 and eq. 12, are applied, σ[1] becomes:
Therefore, a general method is derived for obtaining u[i-1], u[i], and u[i+1] in the (j+1)^th timestep from their values in the j^th timestep:
Multiplying both sides by A^-1 results in:
Therefore, using the finite difference method, the displacements of the cell nodes in each time step are calculated from the displacements in the previous time step. Since at t=0 min the cell is stationary, u^1=0 is the initial condition for eq. 24. The node displacements are calculated and x is updated; the matrices A, B, and C are regenerated accordingly. Then t is increased by one time step dt, and the process continues until t reaches the final time.
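The critical-stress switching that drives the two-step walking can be caricatured in a few lines of Python (a toy sketch only: the stress here grows and relaxes linearly in time instead of being solved from eq. 24, and every number except the critical stress value 1.4×10^-7 N/mm^2, taken from the Results section, is a placeholder):

```python
sigma_c = 1.4e-7        # critical stress, N/mm^2 (value estimated in the paper)
sigma1 = 0.0            # stress at the rear (first) node
front, rear = 10.0, 0.0 # edge positions in um (placeholder initial length)
dstress = 1e-8          # toy stress change per step while loading/unloading

state = "front_moving"
states_seen = set()
for step in range(1000):
    states_seen.add(state)
    if state == "front_moving":      # rear pinned, active stress on front
        front += 0.002               # front protrudes
        sigma1 += dstress            # rear stress builds up
        if sigma1 >= sigma_c:
            state = "rear_moving"    # detach rear, pin front
    else:                            # front pinned, rear released
        rear += 0.002                # tail retracts toward the front
        sigma1 -= dstress            # rear stress relaxes
        if sigma1 <= 0.0:
            state = "front_moving"   # reattach rear, load the front again

print(front - rear)  # cell length settles near its starting value
```

The alternation between the two boundary-condition states is what produces the step-like trajectories of the leading and trailing edges seen in Figure 7.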
Calculation results
In this study we first simulated 100 minutes of the steady movement of a cell. The results are shown in Figure 3 for the cell position and its length versus time. These results agree well with those
of Gracheva et al. 2004 [16].
Figure 3. (a)The cell position and(b)its length during the cell movement.
The average cell speed was also in agreement with the experimental result of Lo et al. 2000 [24], who estimated an average speed of 0.26±0.13 μm/min; our model calculation resulted in 0.2 μm/min. This shows that the model is reasonably accurate and can be used as the basis for the ultimate model by applying a variable boundary condition that is a function of the stress at the first node. Suppose that the rear part of the cell is fixed and the active stress is applied to its front part. The variation of the first node stress versus time is illustrated in Figure 4.
Figure 4. Generated stress in the first point of the cell (rear) while the rear of the cell is fixed, and the active stress is applied to its front.
As can be observed in Figure 4, σ[1] has a maximum at σ[1]=1.6×10^−7 N/mm^2, after which it decreases slowly. This maximum value was our first estimate for the critical stress parameter σ[c]. For a more accurate estimate, σ[c] was decreased to 1.2×10^−7 N/mm^2 in five equal steps, and in each step the results were compared with the steady results of the previous model. It was concluded that for σ[c]=1.4×10^−7 N/mm^2 the new model agrees with the results of the steady model. Figure 5 shows the cell length for different critical stress values. The similarity of the two models' results can be observed in Figure 6, where the critical stress model has the appropriate magnitude σ[c]=1.4×10^−7 N/mm^2.
Figure 5. The variation of the cell length during its movement for different values of σ[c].
Figure 6. Cell length in steady and critical stress model.
The average speed of the cell (previously estimated from Figure 3 in the steady model) can be calculated from Figure 7 in the critical stress model, which gives 0.2 μm/min. This value is in agreement with the reported experimental data for the cell velocity [24].
Figure 7. Calculated positions of the leading and trailing edges of the cell in critical stress model.
For future work, we suggest the introduction of a self-regulatory mechanism that would act on the boundaries as the stress goes up. For instance, something that would change the dissociation rate as
stress increases.
Predicting and evaluating cell movement, cell speed, and the stresses generated in the cell have been under consideration in recent decades. Mechanical models have gradually been developed that can give appropriate predictions of cell motility processes. Based on experimental observations, ameboid cell movement includes four steps: protrusion, adhesion to the substrate, cell body movement, and detachment of the cell tail. In previous studies based on the viscoelastic continuum description of cell motion, these steps were not included in the cell movement modeling, and a steady movement was attributed to the cell [16,17]. Here, we improved on the previous models by changing the boundary conditions to more realistic assumptions. We analyzed the dynamics of the cell in our model and compared
it with that of the previous models. In the new model the effect of adhesion to the substrate is considered through a cell-substrate interaction parameter along with the two-step boundary conditions
that offers an acceptable survey of cell movement in different environments. The results of our model agree with the overall results of the steady model and provide additional information on the cell
elongation and stress. The calculated cell velocity also agrees with the experimental value. The obtained results can assist nanoscale tissue engineering to achieve its main goal which is predicting
cellular behaviour and interactions between cells and the environment by engineering the nanoscale presentation of biologically relevant molecular signals.
Authors’ contributions
MM performed the initial modeling. FF, MM, DV and LT revised the results, modeling and discussions. All authors contributed to writing the manuscript, and agreed on its final contents. All authors
read and approved the final manuscript.
This work was partially supported by AFOSR under Grant no. FA9550-10-1-0010 and the National Science Foundation (NSF) under Grant no. 0933763.
Consecutive Squares
Copyright © University of Cambridge. All rights reserved.
'Consecutive Squares' printed from http://nrich.maths.org/
If we take any 8 consecutive numbers: $$n, n+1, n+2, n+3, n+4, n+5, n+6, n+7$$ then the sum of the squares of four of these numbers is equal to the sum of the squares of the other four.
This means that the terms in $n^2$, the terms in $n$, and the constant term must be split equally between the two sides.
If we sum the squares of each of the eight consecutive numbers and halve the total, this will equal the sum of each set of four terms needed. Adding all the squares we have: $$n^2 + (n+1)^2 + (n+2)^2 + (n+3)^2 + (n+4)^2 + (n+5)^2 + (n+6)^2 + (n+7)^2 = 8n^2 + 56n + 140$$ So each side of the equality must have the value $$4n^2 + 28n + 70$$ and one split that achieves this is $$(n+1)^2 + (n+2)^2 + (n+4)^2 + (n+7)^2 = n^2 + (n+3)^2 + (n+5)^2 + (n+6)^2$$
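A quick numerical check of this split (any range of $n$, positive or negative, works):

```python
def split_sums(n):
    """The two four-term sums of squares from the split above."""
    left = sum((n + k) ** 2 for k in (1, 2, 4, 7))
    right = sum((n + k) ** 2 for k in (0, 3, 5, 6))
    return left, right

for n in range(-5, 6):
    left, right = split_sums(n)
    assert left == right == 4 * n * n + 28 * n + 70
print(split_sums(1))  # → (102, 102)
```

Since the identity is a polynomial equality in $n$, checking it symbolically once (as above in the text) proves it for all integers; the loop is just a sanity check.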
Duality between MIMO Source and Channel Coding
Sandeep Pradhan^1
(Professor Kannan Ramchandran)
(DARPA) F30602-00-2-0538 and (NSF) CCR-0219722
We address duality in a variety of multiple-input-multiple-output (MIMO) source and channel coding problems of interest under different scenarios of "one-sided" inter-terminal collaboration at either
the transmitter or at the receiver, including certain cases of duality between (1) broadcast channel coding and distributed source coding, and (2) multi-access channel coding and
multiple-descriptions source coding. Our notion of duality in this project is in a functional sense, where the optimal encoder mapping for a MIMO source coding problem becomes identical to the
optimal decoder mapping for the dual MIMO channel coding problem, and vice versa. For ease of illustration we give the formulation only for two-input-two-output systems, which can be easily extended
to the MIMO case. We present the precise mathematical conditions under which these encoder-decoder mappings are swappable in the two dual MIMO problems, identifying the key roles played by the source
distortion and channel cost measures respectively in the MIMO source and channel coding problems in capturing this duality.
Since the first observation by Shannon in 1959 [1] that source and channel coding problems can be studied as information-theoretic duals of each other, a number of researchers have made significant contributions to furthering this understanding, as noted in excellent textbooks such as [2,3]. In [4], this duality has been studied in the context of quantization and modulation. As multiuser
information theory has gained significant interest recently, there has been a corresponding interest in extending this duality to these scenarios. Duality between source and channel coding with side
information was first reported in [5], and later studied in [6].
A mathematical formulation of the functional duality between conventional point-to-point source and channel coding problems was given in [7], where the important roles of distortion and cost measures
for these two problems were highlighted, inspired by seemingly unrelated work on the optimality of uncoded transmission in [8].
This concept was then extended to the case of source and channel coding with side information. The notion of duality addressed in [7] is in a functional sense, i.e., the optimal encoder-decoder
mappings for one problem become the optimal decoder-encoder mappings for the dual problem. The precise conditions under which such swappable mappings are feasible involve dual relationships between
distortion and cost measures in the source and channel coding problems respectively. Study of this functional duality serves two important purposes: (1) it provides new insights into these problems
from the different perspectives of source and channel coding, and allows for cross-leveraging of advances in the individual fields; (2) more importantly, it provides a basis for sharing efficient
constructions of the encoder and decoder functions in the two problems, e.g., through the use of structured algebraic codes, turbo-like codes, trellis-based codes, etc.
In this work we extend this notion of functional duality to more instances of MIMO source and channel coding problems. We study various MIMO structures admitting different scenarios of collaboration
among multi-terminal inputs and/or outputs. (To keep the exposition simple, in this work we describe only two-input-two-output systems.) The collaboration scenarios we consider involve those where
either the multi-terminal encoders or the multi-terminal decoders can collaborate, i.e., be joint, but not both. (The case of collaboration at both ends degenerates to point-to-point MIMO systems.)
Under this one-sided collaboration abstraction, we address four problems of interest in source and channel coding: (1) distributed source coding; (2) broadcast channel coding; (3) multiple
description source coding with no excess sum-rate; and (4) multiple access channel coding with independent message sets. In (1) and (4), the decoders collaborate, whereas in (2) and (3), the encoders
collaborate. These four problems have been studied in the literature extensively. In this project we point out that for a given distributed source coding problem, under certain cases, we can obtain a
specific dual broadcast channel coding problem and vice versa. Similarly, for a given multiple description coding problem, we can find a specific dual multiple access channel coding problem and vice
[1] C. E. Shannon, "Coding Theorems for a Discrete Source with a Fidelity Criterion," IRE Nat. Conv. Rec., Vol. 4, 1959.
[2] I. Csiszar and J. Korner, Information Theory: Coding Theorems for Discrete Memoryless Sources, New York, Academic Press, 1981.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, New York, John Wiley and Sons, 1991.
[4] M. V. Eyuboglu and G. D. Forney, "Lattice and Trellis Quantization with Lattice and Trellis-bounded Codebooks--High Rate Theory for Memoryless Sources," IEEE Trans. Information Theory, Vol. 39,
January 1993.
[5] J. Chou, S. S. Pradhan, and K. Ramchandran, "On the Duality between Distributed Source Coding and Data Hiding," Proc. Asilomar Conf. on Signals, Systems, and Computers, November 1999.
[6] M. Chiang and T. M. Cover, "Unified Duality between Channel Capacity and Rate Distortion with Side Information," Proc. Int. Symp. Information Theory, Washington, DC, June 2001.
[7] S. S. Pradhan, J. Chou, and K. Ramchandran, "Duality between Source and Channel Coding with Side Information," UC Berkeley Electronics Research Laboratory, Memorandum No. UCB/ERL M01/34, December 2001.
[8] M. Gastpar, B. Rimoldi, and M. Vetterli, "To Code or Not to Code," Proc. IEEE Int. Symp. Information Theory, Sorrento, Italy, June 2000.
^1Professor, University of Michigan
Power factor
From Wikiversity
The power factor of an AC electric power system is defined as the ratio of the real power to the apparent power, and is a number between 0 and 1. Real power is the capacity of the circuit for
performing work in a particular time. Apparent power is the product of the current and voltage of the circuit. Due to energy stored in the load and returned to the source, or due to a non-linear load
that distorts the wave shape of the current drawn from the source, the apparent power can be greater than the real power. Low-power-factor loads increase losses in a power distribution system and
result in increased energy costs.
In a purely resistive AC circuit, voltage and current waveforms are in step (or in phase), changing polarity at the same instant in each cycle. Where reactive loads are present, such as with
capacitors or inductors, energy storage in the loads result in a time difference between the current and voltage waveforms. This stored energy returns to the source and is not available to do work at
the load. A circuit with a low power factor will thus draw higher currents to transfer a given quantity of power than a circuit with a high power factor.
Circuits containing purely resistive heating elements (filament lamps, strip heaters, cooking stoves, etc.) have a power factor of 1.0. Circuits containing inductive or capacitive elements (lamp ballasts, motors, etc.) often have a power factor below 1.0. For example, in electric lighting circuits, normal power factor ballasts (NPF) typically have a power factor of 0.4 to 0.6. Ballasts with a power factor greater than 0.9 are considered high power factor ballasts (HPF).
The significance of power factor lies in the fact that utility companies supply customers with volt-amperes, but bill them for watts. Power factors below 1.0 require a utility to generate more than
the minimum volt-amperes necessary to supply the real power (watts). This increases generation and transmission costs. Good power factor is considered to be greater than 85%. Utilities may charge
additional costs to customers who have a power factor below some limit.
AC power flow has the three components: real power (P), measured in watts (W); apparent power (S), measured in volt-amperes (VA); and reactive power (Q), measured in reactive volt-amperes (VAr).
The power factor is defined as the ratio of the real power to the apparent power:

$\mathrm{power\ factor} = \frac{P}{S}$
In the case of a perfectly sinusoidal waveform, P, Q and S can be expressed as vectors that form a vector triangle such that:
$S^2 = P^2 + Q^2$
If φ is the phase angle between the current and voltage, then the power factor is equal to $\left|\cos\phi\right|$, and:
$P = S \left|\cos\phi\right|$
By definition, the power factor is a dimensionless number between 0 and 1. When the power factor is equal to 0, the energy flow is entirely reactive, and stored energy in the load returns to the source on each cycle. When the power factor is 1, all the energy supplied by the source is consumed by the load. Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle.
If a purely resistive load is connected to a power supply, current and voltage will change polarity in step, the power factor will be unity (1), and the electrical energy flows in a single direction
across the network in each cycle. Inductive loads such as transformers and motors (any type of wound coil) generate reactive power with current waveform lagging the voltage. Capacitive loads such as
capacitor banks or buried cable generate reactive power with current phase leading the voltage. Both types of loads will absorb energy during part of the AC cycle, which is stored in the device's
magnetic or electric field, only to return this energy back to the source during the rest of the cycle.
For example, to get 1 kW of real power if the power factor is unity, 1 kVA of apparent power needs to be transferred (1 kW ÷ 1 = 1 kVA). At low values of power factor, more apparent power needs to be
transferred to get the same real power. To get 1 kW of real power at 0.2 power factor 5 kVA of apparent power needs to be transferred (1 kW ÷ 0.2 = 5 kVA).
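These two relationships (S = P/pf and the power triangle) are easy to check numerically. The following is a small illustrative sketch in Python, not part of the original article:

```python
import math

def apparent_and_reactive(real_kw, power_factor):
    """Apparent power S (kVA) and reactive power Q (kVAr) needed to
    deliver real_kw of real power at the given power factor."""
    s = real_kw / power_factor            # pf = P / S  =>  S = P / pf
    q = math.sqrt(s ** 2 - real_kw ** 2)  # power triangle: S^2 = P^2 + Q^2
    return s, q

s, q = apparent_and_reactive(1.0, 0.2)
print(round(s, 2), round(q, 2))  # 5.0 4.9 -- the 1 kW at 0.2 pf example above
```

At unity power factor the reactive component vanishes and S equals P, matching the 1 kW ÷ 1 = 1 kVA case.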
It is often possible to adjust the power factor of a system to very near unity. This practice is known as power factor correction and is achieved by switching in or out banks of inductors or
capacitors. For example the inductive effect of motor loads may be offset by locally connected capacitors.
Energy losses in transmission lines increase with increasing current. Where a load has a power factor lower than 1, more current is required to deliver the same amount of useful energy. Power
companies therefore require that industrial and commercial customers maintain the power factors of their respective loads within specified limits or be subject to additional charges. Engineers are
often interested in the power factor of a load as one of the factors that affect the efficiency of power transmission.
Non-sinusoidal components
In circuits having only sinusoidal currents and voltages, the power factor effect arises only from the difference in phase between the current and voltage. This is narrowly known as "displacement
power factor". The concept can be generalized to a total, distortion, or true power factor where the apparent power includes all harmonic components. This is of importance in practical power systems
which contain non-linear loads such as rectifiers, some forms of electric lighting, electric arc furnaces, welding equipment, switched-mode power supplies and other devices.
A particularly important example is the millions of personal computers that typically incorporate switched-mode power supplies (SMPS) with rated output power ranging from 150W to 500W. Historically,
these very low cost power supplies incorporated a simple full wave rectifier that conducted only when the mains instantaneous voltage exceeded the voltage on the input capacitors. This leads to very
high ratios of peak to average input current, which also lead to a low distortion power factor and potentially serious phase and neutral loading concerns.
Regulatory agencies such as the EC have set harmonic limits as a method of improving power factor. Declining component cost has hastened acceptance and implementation of two different methods.
Normally, this is done either by adding a series inductor (so-called passive PFC) or by adding a boost converter that forces a sinusoidal input (so-called active PFC). For example, an SMPS with passive PFC can achieve a power factor of about 0.7 to 0.75, an SMPS with active PFC up to 0.99, while an SMPS without any power factor correction has a power factor of only about 0.55 to 0.65.
To comply with current EU standard EN61000-3-2 all switched-mode power supplies with output power more than 75W must include at least passive PFC.
A typical multimeter will give incorrect results when attempting to measure the AC current drawn by a non-sinusoidal load and then calculate the power factor. A true RMS multimeter must be used to measure the actual RMS currents and voltages (and therefore apparent power). To measure the real power or reactive power, a wattmeter designed to work properly with non-sinusoidal currents must be used.
English-language power engineering students are advised to remember: "ELI the ICE man" or "ELI on ICE"- the voltage E leads the current I in an inductor L, the current leads the voltage in a
capacitor C.
Or even shorter: CIVIL - in a Capacitor the I (current) leads Voltage, Voltage leads I (current) in an inductor L.
Math Help
September 25th 2006, 12:12 AM #1
Hi, please take a look at the following question.
Show that if a^3 | b^2, then a | b.
Thanks for helping
let p1^a1.p2^a2 ... pn^an be the prime decomposition of a, and
q1^b1.q2^b2. ... qm^bm be the prime decomposition of b.
That a^3 | b^2 means that, for each of the pj's in the prime factorisation of a, there is some qi which is equal to pj.
Also, pj occurs with multiplicity 3aj in the prime decomposition of a^3, and
with multiplicity 2bi in the prime decomposition of b^2.
Hence we must have 2bi >= 3aj, i.e. bi >= (3/2)aj, which (since aj >= 1) means that bi > aj.
That is, all the common primes in the prime decomposition of a and b
occur with greater multiplicity in b than in a, which implies that a | b.
And let d = gcd(a, b), with
xd = a and yd = b, so that gcd(x, y) = 1.
Then a^3 | b^2 becomes
x^3 d^3 | y^2 d^2, that is, x^3 d | y^2.
Since gcd(x, y) = 1, we cannot have that x has a non-trivial factor, i.e. besides for 1.
That means that x = 1, so a = d, and hence a | b.
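Whichever argument one prefers, the claim itself is easy to sanity-check by brute force; here is a quick Python sweep (my own check, not part of the thread):

```python
# Check: does a^3 | b^2 ever hold while a | b fails, for small a and b?
def counterexamples(a_max, b_max):
    return [(a, b)
            for a in range(1, a_max + 1)
            for b in range(1, b_max + 1)
            if b ** 2 % a ** 3 == 0 and b % a != 0]

print(counterexamples(30, 300))  # [] -- no counterexample in this range
```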
st: import excel - requiring restart occasionally
From Billy Schwartz <wkschwartz@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: import excel - requiring restart occasionally
Date Wed, 2 Nov 2011 20:31:35 -0400
Repeated runs of "import excel, describe" eventually lead to increasing numbers of error 603's when
running the import command. Restarting Stata seems to fix it.
Facts: I have 947 Excel workbooks, both .xls and .xlsx, averaging just
under 3 sheets per book, and I have to organize them. As a first step
I've written a Stata script to create a table of the sheet names in
each workbook. It loops through the *.xls* file names in my folder and
runs "import excel, describe" on each, putting the contents returned
in r() into a data set that accumulates. I open up Stata and run the
script, and it works as intended with 14 error 603s, which I catch and
handle. (So you don't have to look it up, error 603 means the file
could not be opened even though it was found. I don't know why these
error 603's occur since Excel opens the file just fine, but I'm
comfortable with a 1% error rate.) During the debugging process, I've
had to run the script repeatedly in a given instance of Stata. After
three to five runs, the number of error 603's I get goes from 14 to a
couple hundred. One or two more runs gives me an error 603 on each
iteration of the loop, leaving my dataset empty. After restarting
Stata, the problem goes away and I'm back to 14 error 603's. I'm doing
all this on a server, so I've tried reading the data from a different
server over the network and locally on this server. Same pattern both
Question: Has anyone had a similar problem? Can anyone replicate this?
The code below the fold is a skeleton of the algorithm I'm using in
case anyone wants to try. If everyone's out of ideas, I guess I'll
submit this to Stata as a bug report.
Technical details in case it matters: My copy of Stata 12/MP (4-core
license, born on 13oct2011) runs on Windows Server Enterprise 2007 SP1
(64bit, 32GB RAM, four 4-core Xeons @ 1.87Ghz).
*! stata
version 12
local directory "." //replace as appropriate
local files: dir "`directory'" files "*.xls*", respectcase
generate book_name = ""
generate int error = .
generate sheet_name = ""
generate sheet_range = ""
foreach file of local files {
    capture import excel using "`directory'/`file'", describe
    if c(rc) == 603 {
        set obs `=c(N)+1'
        replace book_name = "`file'" in `c(N)'
        replace error = c(rc) in `c(N)'
        continue // skip the sheet loop for files that failed to open
    }
    else error c(rc) // abort on any unexpected error code
    forvalues i = 1/`r(N_worksheet)' {
        set obs `=c(N)+1'
        replace book_name = "`file'" in `c(N)'
        replace error = c(rc) in `c(N)'
        replace sheet_name = r(worksheet_`i') in `c(N)'
        replace sheet_range = "`r(range_`i')'" in `c(N)' //range may be missing
    }
}
count if error //number of errors
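For comparison, here is a rough Python analogue of the same bookkeeping idea (my own sketch, not a translation of the script above): record per-file errors and keep going, while making sure each file handle is closed. It reads sheet names directly out of the .xlsx zip container using only the standard library, and does not handle legacy .xls files.

```python
import glob
import re
import zipfile

def xlsx_sheet_names(path):
    """Sheet names from an .xlsx file's xl/workbook.xml (stdlib only)."""
    with zipfile.ZipFile(path) as zf:  # handle is closed even on error
        xml = zf.read("xl/workbook.xml").decode("utf-8")
    return re.findall(r'<sheet[^>]*\bname="([^"]*)"', xml)

def inventory(directory):
    """(book, error, sheet) records for every .xlsx file in directory."""
    rows = []
    for path in sorted(glob.glob(directory + "/*.xlsx")):
        try:
            for name in xlsx_sheet_names(path):
                rows.append((path, None, name))
        except (OSError, KeyError, zipfile.BadZipFile) as exc:
            rows.append((path, repr(exc), None))  # log the error, keep going
    return rows
```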
Advanced Calculus Definitions 2
18 terms · Advanced Calculus definitions for Test 2.
a function whose domain is the natural numbers
a sequence (aₙ)→a (a real number) if, for every positive number ε, there exists an N∈N such that whenever n≥N it follows that |aₙ-a|<ε
ε-neighborhood of a
Given a real number a∈R and a positive number ε>0, the set V_ε(a)={x∈R: |x-a|<ε}
a sequence that does not converge
a sequence (aₙ) is _________ in a set A⊆R if there exists an N∈N such that aₙ∈A ∀n≥N
a sequence (aₙ) is __________ in a set A⊆R if, for every N∈N, there exists an n≥N such that aₙ∈A
a sequence (xₙ) is _______ if there exists a number M>0 such that |xₙ|≤M for all n∈N.
a sequence (aₙ) is ________ if aₙ<aₙ₊₁ for all n∈N
a sequence (aₙ) is ________ if aₙ>aₙ₊₁ for all n∈N
Let (aₙ) be a sequence of real numbers, and let n₁<n₂<n₃<... be an increasing sequence of natural numbers. Then the sequence aₙ₁, aₙ₂, aₙ₃, ... is a _______
Cauchy sequence
a sequence (aₙ) is called a ________ if, for every ε>0, there exists an N∈N such that whenever m,n≥N it follows that |aₙ - aₘ| < ε
A set O⊆R is ______ if for all points a∈O there exists an ε-neighborhood V_ε(a) ⊆ O.
limit point
A point x is a _____________ of a set A if every ε-neighborhood V_ε(x) of x intersects the set A in some point other than x.
isolated point
A point a∈A is an ____________ of A if it is not a limit point of A.
A set F⊆R is _______ if it contains its limit points
Given a set A⊆R, let L be the set of all limit points of A. The _________ of A is defined to be Ā = A ∪ L.
A set K⊆R is ________ if every sequence in K has a subsequence that converges to a limit that is also in K.
A set A⊆R is __________ if there exists M > 0 such that |a| ≤ M for all a∈A.
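The ε-N definition of convergence is concrete enough to compute with. As an illustration (my own sketch, using the sequence aₙ = 1/n, which converges to 0):

```python
import math

def smallest_n(eps):
    """Smallest N such that |1/n - 0| < eps for every n >= N."""
    return math.floor(1 / eps) + 1

eps = 0.01
N = smallest_n(eps)
print(N)  # 101
# Every term from index N onwards lies in the eps-neighborhood of 0:
assert all(abs(1 / n - 0) < eps for n in range(N, 10 * N))
```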
Learning monads in Clojure: a warning
Posted on 25 October 2011
I was inspired to learn about monads by Chris Ford recently; his description of encapsulating impurity safely within a pure language had me intrigued immediately. I decided that I wanted to learn
about monads in Clojure, a language I am currently diving into.
However, I found learning about monads in Clojure full of fake difficulty (or accidental complexity, if you will). Here I document the issues I found. And the key issue I came across was this:
Learning monads requires reasoning about types
You probably know where I’m going with this. Clojure is dynamically typed. Haskell, the spiritual home of monads, is statically typed. For me, the key to understanding monads was reasoning about
types — in particular, drawing a clear distinction between the ordinary type and the type of a monadic expression.
Drawing this distinction helped me reason about the behaviour of the monadic functions. By learning that m-bind must return a monadic expression and not a simple value, I learned a key fact about monads; but before that lesson sank in, the number of times I tried to write m-bind expressions which did not return monadic expressions was too many.
It’s quite possible to reason about types in a dynamically typed language, but it’s made much harder. If your reasoning is faulty, the program will try to carry on regardless, and in Clojure’s case,
give an incredibly cryptic error message. This is not an environment that makes learning easy. If I had been learning in Haskell, my failure to understand the distinction between monadic expression
and ordinary value would have immediately been set right by the type checker.
But it’s worse than just making learning hard: Clojure’s dynamic typing has led to a pervasive failure of type reasoning.
A key example of this is that Clojure’s implementation of the maybe monad, maybe-m, breaks the monad laws! It does this because it does not properly distinguish between the monadic expression and the
underlying type. The law in question is the first monad law, expressed here as a Midje test:
;;; given a monad which defines m-bind and m-result,
;;; f, an arbitrary function, and
;;; val, an arbitrary value
(fact "The first monad law"
(m-bind (m-result val) f)
=> (f val))
The failure of maybe-m to adhere to this law is demonstrated thus:
;;; failing midje test
(fact "maybe-m should adhere to the first monad law"
(with-monad maybe-m
(m-bind (m-result nil) not))
=> (not nil))
The reason that this law is violated is that the maybe-m monadic expression type is no different from the underlying value type. It is therefore possible to find a value such that (m-result val) is
nil, the maybe monad’s value for failure.
The Haskell Maybe monad is not so sloppy:
> let myNot x = Just (x == Nothing)
> (return Nothing :: Maybe (Maybe Char)) >>= myNot
Just True
> myNot (Nothing :: Maybe (Maybe Char))
Just True
This is because in Haskell, there is no value foo such that Nothing == return foo; in Clojure, there is such a value: (= nil (m-result nil)).
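The type confusion is easy to demonstrate outside Clojure as well. Here is a minimal Python rendition (my own illustration, with None standing in for nil, unit for m-result, and bind for m-bind):

```python
def unit(x):
    return x                               # maybe-m's m-result: the identity

def bind(m, f):
    return None if m is None else f(m)     # short-circuits on the failure value

def my_not(x):
    return x is None                       # analogue of Clojure's `not`

# First monad law: bind(unit(v), f) should equal f(v) for every v and f.
print(bind(unit(None), my_not))            # None -- bind short-circuits
print(my_not(None))                        # True -- the law fails at v = None
print(bind(unit(1), my_not) == my_not(1))  # True -- it holds for non-None v
```

A wrapper type that distinguishes "a present nil" from "failure" (as Haskell's Just Nothing is distinct from Nothing) removes the collision.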
The repercussions of maybe-m’s violation of the first monad law are relatively minor: it means that when using maybe-m, the value nil has been appropriated and given a new meaning; which means that
if you had any other meaning for it, you’re stuffed.
For example, suppose you wanted to implement a distributed hash table retrieval, where failure could be caused by a network outage. You want a function behaviour similar to (get {:a 1} :b), where if
the value is not in the table you return nil. If you use maybe-m to perform this calculation, you cannot tell the difference between failing to communicate with the DHT, and successfully determining
that the DHT does not contain anything under the key :b; both will result in the value nil. Worse, if you want to use this value later in the computation, the maybe-m will assume a value missing in
the DHT to be a failure, and cut your computation short — even if that’s not what you wanted.
If you want to learn monads, do it in Haskell.
If you must do it in Clojure, the key is to understand and distinguish the various types in play. The monadic type is distinct from the underlying type. m-result takes an underlying value and gives
you an equivalent value in the monadic type. m-bind takes a monadic value, and a function from an underlying value to a monadic value.
Public Function RectangularSolidCalc( _
Optional ByVal vArea As Variant _
, Optional ByVal vSideOne As Variant _
, Optional ByVal vSideTwo As Variant _
, Optional ByVal vSideThree As Variant _
, Optional ByVal vVolume As Variant _
) As Variant
"Rectangular Solid Calculator"
Calculate some property about rectangular solids given the value of THREE other properties.
RectangularSolidCalc(vSideOne:=1, vSideTwo:=2, vVolume:=10, vArea:="CALC") = 34
RectangularSolidCalc("CALC", 1, 2, Null, 10) = 34
RectangularSolidCalc(100, 2, "CALC", Null, 34) = 1.10419703602177
RectangularSolidCalc(100, 12, "CALC", Null, 200) = "1.38888888888889|-3.83896526696812" ' #4
See also:
CubeCalc Function
RectangleCalc Function
Summary: Three of the arguments should contain a numeric value--the given values for those properties. Pass the word "CALC" to the argument whose value is to be calculated and returned by this
function. The other argument should be missing or Null or non-numeric. Function returns Null if it could not calculate the requested property from the values provided. Otherwise, the function returns
the value of the property whose argument was passed the word "CALC".
vArea: Area of the rectangular solid.
vSideOne: Length of one of the sides of the rectangular solid.
vSideTwo: Length of another of the sides of the rectangular solid.
vSideThree: Length of another of the sides of the rectangular solid.
vVolume: Volume of the rectangular solid.
Note: Function may return a complex number in the form of a string if the given dimensions are not consistent with the shape, as in example #4.
Note: The match on the word "CALC" is case-insensitive, so for example, "CALC" and "Calc" match each other.
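To see where the first example's result of 34 comes from: with two sides and the volume given, the third side is c = V / (a × b) and the surface area is 2(ab + bc + ca). The following Python sketch (my own, not Entisoft's code) reproduces the arithmetic:

```python
def rect_solid_area(side_one, side_two, volume):
    """Surface area of a rectangular solid given two sides and the volume."""
    side_three = volume / (side_one * side_two)  # c = V / (a * b)
    a, b, c = side_one, side_two, side_three
    return 2 * (a * b + b * c + c * a)

print(rect_solid_area(1, 2, 10))  # 34.0, matching the first example above
```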
Copyright 1996-1999 Entisoft
Entisoft Tools is a trademark of Entisoft.
One Step Equations
Balancing Equations
Algebra Equations are used for working out unknown amounts in many real world situations. For example, we might need to know how much deposit we need to put on a new car so that we can pay it off at $150 a month in under five years. We might have an 8 point Basketball Average so far this season, and want to know what our average would go up to if we scored an incredible 28 points in our next game. These types of situations require the creation and solving of Algebra equations.

Interactive Online Activity
The following BBC Maths Animation gives a good introduction to this concept and shows how simple equations can be solved. It has a short introductory video, followed by some equations questions that you can try doing. http://www.bbc.co.uk/schools/ks3bitesize/maths/algebra/
Solving an Equation
• An equation is a mathematical statement that has two expressions separated by an equal sign. The expression on the left side of the equal sign has the same value as the expression on the right
• Solving an equation means manipulating the expressions and finding the value of the unknown. In math this is called a variable. Examples of variables are letters such as x,b,c,
• An equation might be: x = 4+8 . To solve this equation we would add 4 and 8 and find that x = 12.
• An equation has two expressions separated by an equal sign. The expression on the left side of the equal sign has the same value as the expression on the right side. For example 3 + 5 = 8. Both
sides equal 8.
The left side always equals the right side.
Here is a short video about solving a One Step Equation using a Balance Beam.
• In equations, one or both of the expressions may contain variables (a pronumeral). Solving an equation means manipulating the expressions and finding out the value of the variables.
• Example : x - 3 = 5. What is x?
An equation behaves like a pair of balanced scales. The scales remain balanced as long as we do the same thing to both scales.
• To keep an equation equal, we must do exactly the same thing to each side of the equation. If we add (or subtract) a number from one side, we must add (or subtract) that same number from the
other side.
• To get x on its own in the above equation we need to add +3. To solve the equation above we would add 3 to both sides. The equation would become:
• x - 3 + 3 = 5 + 3. This becomes x = 5 + 3 or x = 8
• The link below has a series of balancing exercises. Do these exercises in your math book. Here are the answers to the first three questions on the worksheet.
• Q1) 7, 6, 6, 2, 7, 5.5, 2, 7
• Q2) 11, 8, 13, 5
• Q3a) Circle = 6, Rectangle = 8
• Q3b) Circle = 5, Diamond = 2.
• Online Activities and Games
• Try the following games to get a better idea of this concept.Balancing game
• Poodle Weigh In
• This game involves putting number weights on the balance to match the weight of the strange looking Poodle.
• Hover the mouse over the bottom right hand corner “Help” button, to get instructions on how to play the game.
• Hover the mouse over the bottom left hand corner “Hint” button, to reveal the number equation which needs solving.
• Then click on the number weights to make them go onto the balance and add up to the required answer.
• To remove a number off the balance, simply click the number on the right hand side of the balance that we want to remove.
• The game can be played at the following link.
The following powerpoint has an explanation of simple Equations.
This suggests that to solve an equation, we can do the same thing to both sides of the equation.
But let's begin with something not so hard.
The following Slideshare presentation goes through everything we need to know about setting up and solving One Step Addition Equations.
• This is further explained in the following video
Try any of the following worksheets on addition equations
For equations which have a number ADDED to a letter, we SUBTRACT away that number to find out the value of the letter “variable”.
For equations which have a number SUBTRACTED from the letter, we ADD that same number to both sides of the equation. This will allow us to find out the number value of our letter “variable”.
One step equations require one “opposite” operation to be performed on them, which then allows us to obtain the value of their unknown variable.
The opposite of addition is subtraction
The opposite of subtraction is addition
The opposite of multiplication is division
The opposite of division is multiplication.
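These four inverse operations can be captured in a few lines of code. The following is an illustrative sketch (not part of the lesson) that solves a one-step equation by applying the opposite operation to the right-hand side:

```python
def solve_one_step(op, k, r):
    """Solve x (op) k = r for x by applying the inverse operation to r."""
    inverse = {
        '+': lambda: r - k,   # x + k = r  ->  x = r - k
        '-': lambda: r + k,   # x - k = r  ->  x = r + k
        '*': lambda: r / k,   # k * x = r  ->  x = r / k
        '/': lambda: r * k,   # x / k = r  ->  x = r * k
    }
    return inverse[op]()

print(solve_one_step('-', 3, 5))   # x - 3 = 5  ->  8
print(solve_one_step('*', 3, 12))  # 3x = 12    ->  4.0
print(solve_one_step('/', 2, 6))   # x / 2 = 6  ->  12
```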
The following video explains the steps involved in addition and subtraction equations.
Here is a reminder of how to do Addition and Subtraction Equations.
Subtract Example PPT Slide
Try the following subtraction exercises.
• This PDF file has a mixture of one step addition and subtraction equations to do.
One Step Equations Online Tests
The following question generator from Cool Math enables you to do as many practice questions as you like, and supplies answers at the click of a button.
http://www.coolmath.com/crunchers/algebra-problems-solving-equations-1.htm
http://www.coolmath.com/crunchers/algebra-problems-solving-equations-2.htm
Here is an online Quiz from BBC Maths on One Step Equations: BBC One Step Equations Online Quiz
Here is an online quiz from Khan Academy that involves working out fraction answers for multiplication equations. http://www.khanacademy.org/exercises?
Equations involve using backtracking. Watch the youtube video for an initial overview of the process of backtracking.
• Solving Multiplication Equations
• If our variable letter has a number directly in front of it, then this means it is multiplied by that number. Eg. 3m means 3 times m, or 3 x m, or 3.m
The Opposite of Multiplication is Division. To solve a multiplication equation, we DIVIDE BOTH SIDES by whatever number is in front of our variable letter. Here is an example of how to solve a typical Multiplication Equation.
Multiply Example PPT Slide
The inverse operation of × is ÷. So, to solve an equation involving multiplication, we divide both sides of the equation by the same number.
• The following short video shows how to do a Multiplication Equation.
• Here is a more comprehensive video, that includes using a balance beam to represent the equation and solve it.
• Solving Division Equations
If our variable letter has a number directly under it as a fraction, then this means it is divided by that number.
Eg. k/2 means k is divided by 2.
The Opposite of Division is Multiplication.
To solve a Division equation, we MULTIPLY BOTH SIDES by the number that our variable letter is being divided by.
Here is an example of how to solve a typical Division Equation.
Division Example PPT Slide
Here is a great video all about Division Equations.
• The inverse operation of ÷ is ×. So, to solve an equation involving division, we multiply both sides of the equation by the same number.
□ An equation is a statement that contains an equal sign.
□ To solve an equation, we do the same thing to both sides of the equation.
□ The same number can be subtracted from both sides of an equation.
□ The same number can be added to both sides of an equation.
□ Both sides of an equation can be divided by the same number.
□ Both sides of an equation can be multiplied by the same number.
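The summary rules above can be sketched in code. Here is a minimal illustration of my own (not part of the original lesson), assuming one-step equations of the form a·x = b or x/a = b:

```python
from fractions import Fraction

def solve_multiplication(a, b):
    """Solve a*x = b by dividing both sides by a."""
    return Fraction(b, a)

def solve_division(a, b):
    """Solve x/a = b by multiplying both sides by a."""
    return b * a

# 3m = 12  ->  divide both sides by 3  ->  m = 4
print(solve_multiplication(3, 12))
# k/2 = 5  ->  multiply both sides by 2  ->  k = 10
print(solve_division(2, 5))
```

Using Fraction keeps the answer exact even when the division does not come out evenly; for example, 4x = 6 gives x = 3/2.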
Click on the following links. There are various activities and explanations on equations for you to work through. The first activity, Introduction to Equations, is a great introduction activity.
BBC Introduction to equations
• Watch the following PowerPoint presentation. Complete each of the exercises in the slides. The answers are given for each. There are various types of equations for you to try, as well as two step equations. If you are feeling confident, continue to work through the other slides.
Equations Games
In this post we present a number of free Algebra Equations Games and Activities that students can use to reinforce their equation solving skills. Simply click on the image of the game, or the
provided text link, to open the game in a new window in your web browser. Since most of these games use Flash, Shockwave, or Javascript, they probably will not work on Apple devices, which do not support such applications, but the games should work fine on any normal netbook, laptop, or PC. Battleship One Step Equations
Battleship One Step Equations
This is played just like the classic Battleship game. We click on the opponent’s right hand side grid and get splash circles if there is not a ship on that grid square. However, when there is a ship
there, we are given a one step equation to solve. If we get it correct, we get a dot to confirm the hit. If we get it wrong, we can try again by clicking back on the dot and re-doing the same equation
on our next turn. Note that the game does use negative numbers, and so some questions will look like this: 15 = 5 – x . For this example equation, the correct answer from the multiple choice options
will be -10. The game can be played at the following link. http://www.quia.com/ba/36544.html Algebra Planet Blaster This game will not start unless you first click your mouse into the game area, then
the cursor movement and space bar shooter start functioning. The equations are one and two step equations involving both positive and negative numbers. The game only has one level, but restarting the
game gives a new set of equations to do. The game can be played at the following link. http://www.aplusmath.com/Games/PlanetBlast/index.html Balanced Equations In this game, we need to click and drag
numbers down from the top and into the right position to create a balanced equation. In a balanced equation, both sides of the equals sign generate the same number. Eg. 10 x 2 = 5 x 4
The game can be played at the following link. http://funschool.kaboose.com/formula-fusion/games/game_great_equations.html Equation Match Picture Puzzle
Equation Match Picture Puzzle
This game by BBC requires the free Adobe Shockwave player to be installed on your computer. The object of the game is to match up a pair of equations that both have the same Answer. Eg. We could
match x-5 = 2 (which has an answer of x=7) with 3x=21 which also has an answer of 7. When we match correctly, two more parts of the underlying image are revealed. The game has levels, where Level 1
appears to only give simple one step equations. Level 3 gives letters both sides and brackets equations. The game can be played at the following link. http://www.bbc.co.uk/education/mathsfile/
shockwave/games/equationmatch.html One Step Basketball Game One-step adding and subtracting game, as well as a one-step multiplication and division game. The equations are challenging, as they use
fractions, negative numbers and decimals. If you get a question correct, you get to aim your ball and have a shot at the basket. This game can be played at the following link. http://
www.math-play.com/One-Step-Equation-Game.html There is this exact same game, but as a Two Step Equations Game, at the following link: http://www.math-play.com/Two-Step-Equations-Game.html Equation
Buster Game There are four levels of this game, but each level always has the same equation to solve for that level. Level 1 is always the same single step equation, and Level 4 is always the
equation 4w + 2 = 2w – 4. However it is still worthwhile giving this game a go. The idea is to go through the solving steps one by one, and if we reach the answer in the least possible steps we get a
double tick on our answer. The main page where levels can be selected is at the following link. http://www.gamequarium.com/equations.html Equation Millionaire This game has a mixture of difficulties,
ranging from single step with negative numbers, through to brackets equations and fractions. It has a set of three “hints” that are like lifelines, and give clues such as “The answer is not D”. This
game can be played at the following link. http://www.quia.com/rr/4096.html Equation Solver This is more of an interactive online activity, where we can choose the reversing operation to do, type in
the value we want to apply the operation to, and then press enter to get to the next line. Note that we use the red ":" for division. We can also make up our own equation, type it in, and then
solve it. The activity can be found at the following link. http://www.mathsnet.net/algebra/balance.html Equation Substitution Match
Equation Substitution Match
This game required us to install the free “Adobe Shockwave Player” add-in to our browser before we could play the game. The game involves substituting into an equation and working out which is the
correct answer. It has three levels of difficulty. The game can be played at the following link. http://www.bbc.co.uk/education/mathsfile/shockwave/games/postie.html Interactive Equation Balancing
Interactive Equation Balancer
This activity is really cool. We can click on the purple buttons to add or remove x’s or ones. As we do this, the items are added or removed from both sides of the balance. The idea is to reduce the
items on the balance down until we just have one “x” on the balance. The remaining numbers on the other side of the balance tell us what the answer for the value of “x” is. This activity can be found
at the following link. http://www.mathsisfun.com/algebra/add-subtract-balance.html Solve Equations Time Trial
XP Eqns Online Time Trial
This game is more of a time trialled Online Test, rather than a game. It focuses on two step equations and includes negative numbers. The game can be played at the following link. http://
www.xpmath.com/forums/arcade.php?do=play&gameid=64 In addition, there are XP Math One Step Equations Time Trials activities at the web pages below. These cover One Step Addition, Subtraction,
Multiplication, and Division. http://www.xpmath.com/forums/arcade.php?do=play&gameid=69 http://www.xpmath.com/forums/arcade.php?do=play&gameid=68 http://www.xpmath.com/forums/arcade.php?do=play&
gameid=53 http://www.xpmath.com/forums/arcade.php?do=play&gameid=72 Addition Balance Game This one is really a basic primary school game, and involves working out missing values in an addition sum.
However it does train students to think about the concept of balancing, and it is good brain exercise when students push themselves against the timer. The game can be played at the following
link. http://www.softschools.com/math/addition/balance_equations/ That’s it for our selection of Equations Games. These games could be added individually to lessons, or used as a group item when
students are revising their work.
Now look at the following PowerPoint. Complete each activity.
• Math Game: Guess My Number
• Objective: To undo a sum to solve for the starting number.
• Number of Players: 2.
• Materials required: Two pens, two sheets of paper.
• How to Play: The two players take it in turn to state a sum. One player is stating the sum, the other is undoing it to solve it.
• Step 1:
• Pick a number between 1 and 10.
• Say out loud to the other player "I pick a number", but do not tell them what it is. Write that number on a sheet of paper.
• Step 2:
• Decide what to do with it. You could decide to double it; then say "I double it". Write the double of your previous number on your page (unseen by your partner) so you do not forget. You may then choose to add or subtract another number. For example, you decide to add 5: say "I now add 5." Work out the answer and write it down. Your partner writes down the operation performed on the number, eg +5. Do 5 separate math operations to your original number, calling out each one as you go so your partner can write it down.
• Your partner will have on their sheet of paper an equation without the starting number. You will have an equation with both a starting number and the solution.
• Step 3:
• Once you have finished performing all the operations on the original number, say "What was my starting number?" and tell your partner the final number.
• The job of your partner is to UNDO the equation until the starting number is revealed.
• The Winner: Score can be kept in this game. You can devise pretty much any system you like, but a simple point for each correct guess is the easiest way.
• The process being employed here is called backtracking and is an essential foundation for equation solving, which is also heavily linked to algebra.
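The undoing process in this game can be sketched as a short program. This is my own illustration (the operation encoding is invented for the example): the partner records each operation, then applies the inverse operations in reverse order.

```python
# Each recorded operation is a (symbol, number) pair; to backtrack we
# apply the inverse of each operation, in reverse order.
INVERSE = {'+': lambda x, n: x - n,
           '-': lambda x, n: x + n,
           '*': lambda x, n: x / n,
           '/': lambda x, n: x * n}

def backtrack(final_value, operations):
    x = final_value
    for op, n in reversed(operations):
        x = INVERSE[op](x, n)
    return x

# "I pick a number, I double it, I now add 5" with final answer 19:
print(backtrack(19, [('*', 2), ('+', 5)]))   # prints 7.0 — the starting number was 7
```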
Math Tutors
Smithtown, NY 11787
Certified Nursing Tutor ALL Courses & Pre-Req's 100% Success NCLEX
...Thank you for honoring and protecting our country. Allison I have been tutoring K-12th grade privately in all subjects, but mainly science,
math, and history. Hi, I've been teaching a Study Skills course at Molloy College for incoming freshmen and Graduate Students...
Offering 10+ subjects including algebra 1, geometry and prealgebra
MathGroup Archive: November 2008 [00558]
Re: FFT in Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg93820] Re: FFT in Mathematica
• From: Oliver <sch_oliver2000 at yahoo.de>
• Date: Wed, 26 Nov 2008 05:16:32 -0500 (EST)
Hello Nasser,
many thanks for your suggestions.
In 1/(4*(Pi*1*t)) it is supposed to be 1 and not I, but I just wrote 1 because the original equation is 1/(4*(Pi*Alpha*t)), and Alpha is supposed to be a constant which is almost equal to 1.
Well, I took your solution and then plotted the spectrum of it like this:
Plot[Cosh[(1 + I)*Sqrt[f]*Sqrt[3*Pi]]/(2*Sqrt[3]*Pi) // Abs, {f, 0,
But I still have the problem that the plot looks unexpected and weird.
Actually, my aim is to calculate the spectrum bandwidth, but I do not think I can calculate the bandwidth from the resulting plot because I do not see any peaks.
Or do you think the plot is correct?
Parabola Focal Point
Date: 7/3/96 at 17:55:57
From: Mr. Gil Kaelin Jr.
Subject: Parabola Focal Point
What are the rules for the focal point of a parabola? Is there a rule
or rules in parabola dimension (i.e. is H:W a 1:1, must a parabola be
circular, what arc is required, etc.)?
I would like to understand this better. Please help.
Date: 7/8/96 at 9:23:40
From: Doctor Brian
Subject: Re: Parabola Focal Point
Well, here's a little bit about parabolas, but not the definitive
treatise on the subject:
There are two equivalent ways of looking at them (a little algebra
will turn one method into the other):
1. The graph of a second-degree polynomial function in one variable
(known as the old y = ax^2 + bx + c rule usually seen in first or
second year algebra class).
2. The set of all points that at the same time are equidistant from
a given point and a given line (the point is the focus, and the line
is the directrix).
Now, if you know the vertex of the parabola, it's got to be exactly
midway between the focal point and the directrix line (equidistant).
One of the interesting things about a focal point is that the distance
across the parabola through the focus is equal to four times the
distance from the focus to the vertex. This isn't too tough to show,
those side points on the parabola must be twice that distance each to
the directrix, and therefore, to the focus. The arc isn't really
circular. It's more like an infinite valley or mountain (depending on
whether its vertex, or "turning point", is at the top or bottom). The curve gets steeper the further away you go from the vertex; near the vertex the curve *looks* more round before starting to straighten out... not that it actually straightens out, but it is less obviously curved further away from the vertex.
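Both facts above — the focus/directrix equidistance and the four-times rule — are easy to check numerically. Here is a quick sketch of my own (not part of the original answer), using the standard form y = x^2/(4p), which has focus (0, p) and directrix y = -p:

```python
import math

p = 2.0                        # an arbitrary focus-to-vertex distance
def on_parabola(x):
    return x * x / (4 * p)     # y = x^2 / (4p)

# Points level with the focus: y = p gives x^2 = 4p^2, so x = ±2p,
# and the width across the parabola through the focus is 4p.
x = 2 * p
y = on_parabola(x)
assert math.isclose(y, p)
assert math.isclose(2 * x, 4 * p)

# Each such point is equidistant from the focus and the directrix:
dist_focus = math.hypot(x - 0, y - p)   # distance to the focus (0, p)
dist_directrix = y + p                  # vertical distance to y = -p
assert math.isclose(dist_focus, dist_directrix)
```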
-Doctor Brian, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Decatur, GA Geometry Tutor
Find a Decatur, GA Geometry Tutor
...So you can tell I love to teach! I have been working with students in math since I was in high school myself. I tutored students throughout college and beyond and have worked with students in
all math subjects from 6th grade through calculus.
25 Subjects: including geometry, reading, calculus, GRE
...Over 100+ hours of tutoring service with immediate and satisfactory results. I have an Associate's degree in Mathematics from Georgia Perimeter College, where I was an outstanding member of the SGA. I
was a NASA Research Scholar and a member of the Math and Science Association (MESA) at Atlanta Metropolitan State College.
14 Subjects: including geometry, calculus, biology, algebra 1
...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom
to tutor from home so that I can be a stay at home mom.
10 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I have experience tutoring K-5th grades for 4 years at two different elementary schools. I have specifically focused on ESOL students but have also tutored regular classes. I helped students
improve their reading and reading comprehension skills, as well as mathematics.
29 Subjects: including geometry, chemistry, reading, Spanish
...Now that I'm back home, I'm looking forward to helping students in ATL reach their educational goals in terms of succeeding in economics or math classes, using my experiences in classes as
well as my experiences tutoring and breaking down topics so that they are easier to comprehend and apply. I...
14 Subjects: including geometry, Spanish, statistics, algebra 1
Related Decatur, GA Tutors
Decatur, GA Accounting Tutors
Decatur, GA ACT Tutors
Decatur, GA Algebra Tutors
Decatur, GA Algebra 2 Tutors
Decatur, GA Calculus Tutors
Decatur, GA Geometry Tutors
Decatur, GA Math Tutors
Decatur, GA Prealgebra Tutors
Decatur, GA Precalculus Tutors
Decatur, GA SAT Tutors
Decatur, GA SAT Math Tutors
Decatur, GA Science Tutors
Decatur, GA Statistics Tutors
Decatur, GA Trigonometry Tutors
Nearby Cities With geometry Tutor
Atlanta geometry Tutors
Avondale Estates geometry Tutors
Belvedere, GA geometry Tutors
Clarkston, GA geometry Tutors
College Park, GA geometry Tutors
Dunwoody, GA geometry Tutors
East Point, GA geometry Tutors
Johns Creek, GA geometry Tutors
Lawrenceville, GA geometry Tutors
Marietta, GA geometry Tutors
North Decatur, GA geometry Tutors
Sandy Springs, GA geometry Tutors
Scottdale, GA geometry Tutors
Smyrna, GA geometry Tutors
Tucker, GA geometry Tutors
This week, several colleagues and I were discussing problems facing people who run datacenters due to the increasingly high churn rates in hardware. Some customers prefer systems which don't change
every other month, so they can plan for longevity. Other customers want the latest and greatest bling. Vendors are stuck in the middle and have their own issues with stocking, manufacturing, plant
obsolescence, etc. So, how can you design your datacenter architecture to cope with these conflicting trends? Well, back in 1999 I wrote a paper for the SuperG conference which discusses this trend,
to some degree. It is called A Model for Datacenter.com. It is now nearly 10 years old, but seems to be holding its own, even though much has changed in the past decade. I'm posting it here, because
I never throw things away and the SuperG proceedings are not generally available.
The John von Neumann Lecture
Wednesday, July 15
On Some Descriptions of the Dynamics of Viscous Fluids
4:30 PM-5:30 PM
Chair: John Guckenheimer, President, SIAM; and Cornell University
Room: Convocation Hall
The lecturer will discuss some problems connected with description of the dynamics of viscous fluids by the Navier-Stokes and by the Modified Navier-Stokes equations with the coefficients of
viscosity depending on the strain velocity tensor.
For example: (1). Do the Navier-Stokes equations yield a deterministic description of the viscous fluids dynamics for all values of the Reynolds number or not? (2). What numerical experiments can
demonstrate the collapse of a solution to the Navier-Stokes equations? (3). There are classes of the Modified Navier-Stokes equations that give deterministic description of the dynamics of viscous
fluids for arbitrary velocity gradients. For them the results on a global unique solvability of the principal boundary-value problems, as well as the existence of compact minimal B-attractors will be
formulated. Some approximations to the Modified Navier-Stokes equations will be suggested and the problems of their smoothness will be discussed.
Olga A. Ladyzhenskaya
Steklov Mathematics Institute
Russian Academy of Sciences, Russia
MMD Created: 4/6/98 Updated: 4/6/98
Burbank, IL Trigonometry Tutor
Find a Burbank, IL Trigonometry Tutor
...I'm comfortable with just about any level of math from middle school through advanced undergraduate courses. The level is not important---I'm just here to help you learn! I'm also available to
tutor music theory.
13 Subjects: including trigonometry, calculus, geometry, statistics
...I just completed my student teaching experience (teaching Algebra I and Algebra II) and will be certified June 2014. I have gained a lot of experience learning how to help students through the
step by step process of thinking through Math problems. I love to work with students on algebra, geometry, trigonometry, and precalculus!I have a degree in Mathematics from Augustana College.
7 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...Control systems theory consists of many differential equations. In association with this I received an A in differential equations when I received my bachelor's degree in mechanical
engineering. I have my master's degree in mechanical engineering.
20 Subjects: including trigonometry, physics, statistics, calculus
...Come to the alchemist who will help you understand the language each of these disciplines speaks. In many cases, students fail to achieve the grades they're looking for because the material
being presented to them appears to be in a different language. As a professional tutor, it is my goal to understand where the language barrier is in a topic and help you/your child overcome that barrier.
26 Subjects: including trigonometry, chemistry, Spanish, reading
...I have worked at Flossmoor Country Club for 10 years, so I have met many of the South Suburbs' most influential people. I have also helped many kids become great caddies at this club, by
teaching them and helping them if they had any troubles. During college, I tutored the 13 year old daughter of the cook at our fraternity for two years, usually for two hours a week.
28 Subjects: including trigonometry, chemistry, calculus, geometry
Related Burbank, IL Tutors
Burbank, IL Accounting Tutors
Burbank, IL ACT Tutors
Burbank, IL Algebra Tutors
Burbank, IL Algebra 2 Tutors
Burbank, IL Calculus Tutors
Burbank, IL Geometry Tutors
Burbank, IL Math Tutors
Burbank, IL Prealgebra Tutors
Burbank, IL Precalculus Tutors
Burbank, IL SAT Tutors
Burbank, IL SAT Math Tutors
Burbank, IL Science Tutors
Burbank, IL Statistics Tutors
Burbank, IL Trigonometry Tutors
Biased Measurements
We turn attention here to the issue of biased measurement noise in the EKF and how it relates to representation of object structure.
We have assumed that features are identified in the first frame, that measurements are obtained by comparing new images to the previous images, and that our measurements are zero-mean or very close to zero-mean. This thinking leads to the zero-mean noise model.
It is common to use Kalman filters even when measurements are not truly zero-mean. Good results can be obtained if the biases are small. However, if the measurements are biased a great deal, results
may be inaccurate. In the case of large biases, the biases are observable in the measurements and can therefore be estimated by augmenting the state vector with additional parameters representing the
biases of the measurements. In this way, the Kalman filter can in principle be used to estimate biases in all the measurements.
However, there is a tradeoff between the accuracy that might be gained by estimating bias and the stability of the filter, which is reduced when the state vector is enlarged. When the biases are
large, i.e. compared to the standard deviation of the noise, they can be estimated and can contribute to increased accuracy. But if the biases are small, they cannot be accurately estimated and they
do not affect accuracy much. Thus, it is only worth augmenting the state vector to account for biases when the biases are known to be significant relative to the noise variance.
In the SfM problem, augmenting the state vector to account for bias adds two additional parameters per feature. This results in a geometry representation having a total of 7+3N parameters. Although
we do not recommend this level of state augmentation, it is interesting because it can be related to the large state vector used in [10,42] and others, where each structure point is represented using
three free parameters (X,Y,Z).
If we add noise bias parameters (b[u],b[v]), Equation 14 can be written
This relation is invertible, so the representations are analytically equivalent. Geometrically, however, our representation is preferable to (X,Y,Z) because it parameterizes structure along axes physically relevant to the measurement process. Thus, it allows us to more effectively tune the filter, ultimately reducing the dimensionality of the state space quite significantly.
It is clear that, in general, when the uncertainty in (b[u],b[v]) is small, the state space is essentially reduced because the system responds more stiffly in the direction of the biases, favoring instead to correct the depths. In the limit (zero-mean-error tracking) the biases can be removed completely, resulting in the strictly lower-dimensional formulation that we typically use in this paper. Our experimental results demonstrate that bias is indeed a second-order effect and is justifiably ignored in most cases.
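The observability trade-off described above can be illustrated with a filter far simpler than the SfM EKF. The sketch below is my own (not the paper's filter): a linear Kalman filter estimates a constant x plus a measurement bias b from a single sensor that only ever sees x + b, so the individual components are not identifiable — the same stiffness issue that arises when bias parameters are added to the state without enough excitation.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true, b_true = 5.0, 1.5
z = x_true + b_true + 0.1 * rng.standard_normal(500)   # biased measurements

s = np.zeros(2)                  # state [x, b]
P = np.diag([10.0, 10.0])        # prior covariance
H = np.array([[1.0, 1.0]])       # one sensor measuring x + b
R = 0.01                         # measurement noise variance
for zk in z:
    S = H @ P @ H.T + R          # innovation variance (1x1)
    K = P @ H.T / S              # Kalman gain (2x1)
    s = s + (K * (zk - H @ s)).ravel()
    P = P - K @ H @ P

# Only the sum x + b is observable, and that is what converges:
assert abs((s[0] + s[1]) - (x_true + b_true)) < 0.05
```

In the SfM setting the camera motion does excite the bias directions, which is why large biases become estimable there; but, as the text argues, small biases buy little accuracy at the cost of a larger, less stable state.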
Math Forum - Problems Library - Pre-Algebra, Discrete Math and Counting Principles
Discrete Math & Counting Principles
The basics of combinations, permutations, graph theory. These topics are often combined with probability, in which case the problems would then be listed in both categories.
Some of these problems are also in the following subcategories:
Related Resources
Interactive resources from our Math Tools project:
Math 7: Counting Principles
The closest match in our Ask Dr. Math archives:
Middle School Archive
NCTM Standards:
Data Analysis & Probability Standard for Grades 6-8
Access to these problems requires a Membership.
Morten Welinder
Rotating text
Everybody and their brother have their own model for dealing with
rotated text.
Postscript (as I understand it) simply rotates the coordinate
system. Fine.
Pango rotates the image and then translates the origin to be
the upper-left of the rotated image’s bounding box. (It used to be at the top-left of the first letter.)
Excel (for a positive angle) rotates the individual lines of
the image and positions them side-by-side (so the lower corners are all on a horizontal line)
in such a way that the first line is translated just enough to the
right to make the text be below a hypothetical line starting at the
origin and having the desired angle from the X-axis.
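Pango's rule — rotate, then translate the origin to the upper-left of the rotated image's bounding box — is simple to write down. A sketch of my own (assuming screen coordinates with y growing downward; the sign convention for the angle is one common choice, not necessarily Pango's):

```python
import math

def rotated_bbox_offset(w, h, theta_deg):
    """Rotate a w-by-h text box about the origin and return the translation
    that puts the origin at the upper-left of the rotated bounding box."""
    t = math.radians(theta_deg)
    corners = [(0, 0), (w, 0), (0, h), (w, h)]     # origin at top-left, y down
    rotated = [(x * math.cos(t) + y * math.sin(t),
                -x * math.sin(t) + y * math.cos(t)) for x, y in corners]
    min_x = min(p[0] for p in rotated)
    min_y = min(p[1] for p in rotated)
    return (-min_x, -min_y)
```

Under this rule the offset is zero at angle 0 and grows continuously with the angle — unlike Excel's model, which translates further and further as the angle approaches zero and then snaps back at exactly zero.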
Pango’s is weird, but the pixel image is at least continuous as a
function of angle. (Well, actually not, but that is because of a different
issue: hinting is turned off for non-zero angles.) But Excel’s?
As the angle goes towards zero, the image translates continously
to the right. If the angles were real numbers, the image would
translate all the way to infinity. Then, suddenly, at angle zero
everything snaps back to the origin.
The job then is to implement the latter in the world of the two former.
st: Fixing nlsur restrictive identification requirements
st: Fixing nlsur restrictive identification requirements
From Alex Olssen <alex.olssen@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: Fixing nlsur restrictive identification requirements
Date Mon, 9 May 2011 10:46:31 +1200
Dear Statalist,
I have posted a couple of messages to the list pointing out that
-nlsur- is using an
overly restrictive identification constraint that sometimes results in
failure to estimate
doe to error 2001 - fewer observations than parameters.
Who will be interested in this problem? One group is people
estimating systems off
annual time series data. For example in the AIDS demand model
estimation if we have
data on 5 expenditure shares (after dropping one equation) annually
for 40 years then
currently -nlsur- will not estimate a model with 8 regressors per
equation - any general
dynamic AIDS model will have at least this many regressors. However,
such a system
should be able to be estimated.
I have implemented a very simple fix to this problem. I would be very
happy to hear
your thoughts as to my solution.
Firstly I illustrate the problem and the solution using -sysuse
auto- and then I describe
the simple fix. You cannot run this all at once as it will error
after the -nlsur- command
which is exactly the problem I intend to fix - copying to the do-file
editor and running line
by line will work fine.
** demonstrating nlsur's restrictive identification requirements
** showing that nlsur2 fixes the problem
sysuse auto, clear
** the identification problem requires relatively few observations
keep in 1/10
** it is well known that sur in a linear model without coefficient
** restrictions produces OLS equation by equation
** if we put a linear model into nlsur it should produce the same
** results as OLS equation by equation
nlsur (length = {priceL}*price + {dispL}*disp + {turnL}*turn +
{headL}*head + {trunkL}*trunk + {consL}) ///
(weight = {priceW}*price + {dispW}*disp + {turnW}*turn +
{headW}*head + {trunkW}*trunk + {consW})
** nlsur will not estimate this model
nlsur2 (length = {priceL}*price + {dispL}*disp + {turnL}*turn +
{headL}*head + {trunkL}*trunk + {consL}) ///
(weight = {priceW}*price + {dispW}*disp + {turnW}*turn +
{headW}*head + {trunkW}*trunk + {consW})
** nlsur2 will estimate the model
** comparing with OLS equation by equation we see that coefficient
** estimates are the same but the standard errors differ
** a finite sample correction is probably needed - and this sample
** is very finite
reg length price disp turn head trunk
reg weight price disp turn head trunk
The solution is simply to adjust line 516 of the file nlsur.ado
On a windows computer the default location is
C:\Program Files\Stata11\ado\base\n\nlsur.ado
Change line 516 from
if r(N) < `np' {
to
if r(N)*`neqn' < `np' {
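The counting argument behind this one-character change can be sketched abstractly (an illustration of my own, not Stata code): a system with N observations per equation and neqn equations really supplies N*neqn data points to fit np parameters.

```python
def can_estimate(n_obs, n_eqn, n_params, per_system=True):
    """per_system=False mimics nlsur's original check (r(N) < np);
    per_system=True mimics the proposed fix (r(N)*neqn < np)."""
    effective = n_obs * n_eqn if per_system else n_obs
    return effective >= n_params

# The auto example above: 10 observations, 2 equations, 12 parameters.
assert not can_estimate(10, 2, 12, per_system=False)   # nlsur refuses
assert can_estimate(10, 2, 12, per_system=True)        # nlsur2 estimates
```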
Kind regards,
Alex Olssen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Posts by
Total # Posts: 69
2x^2 - 12x - 3y^2 - 24y +60 = 0
social studies
how was the south carolina constitution of 1868 different from earlier versions?
Physic A Level
A lift cable, which is 50 m long, is made from high-tensile steel of Young modulus 2.0 × 10^11 Pa. The cable is made from 100 wires, each of radius 2.0 mm. Calculate the extra extension in the cable when 10 passengers pile into the lift; the total mass of the passengers is 500 kg.
yes they are all correct (:
A bicycle and its rider together has a mass of 95 kg. What power output of the rider is required to maintain a constant speed of 4.8 m/s (about 10.7 mph) up a 5.0% grade (a road that rises 5.0 m for
every 100 m along the pavement)? Assume that frictional losses of energy are n...
the solution set for the following 1. x^3 + x^2 - 6x - 10 = 0 2. 2x^2 - 4x - 4 = 0 3. x^4 - x^3 - 4x^2 + x + 1 = 0
a ball is thrown at a vertical velocity to an angle of 30 degree above the horizontal and rises to a maximum height of 50 meters.if the angle is tripled find the maximum height and the horizontal
range of a projectile for the same intial velocity
Organic chemistry plz help me out (thanks)
1) a) What is the normail boiling point (760 mm Hg) for a compound that boils at 150 degree celsius at 10 mm Hg pressure? b) At which temperature would the compund in (a) boil if the pressure were 40
mm Hg? c) A compound was distilled at atmospheric pressure and had a boiling ...
Yes the answers are b and c :)
Ms.Sue can u please answer my question.
I need to interview an adult about how health care changed in their lifetime. I don't have anyone to interview so can someone that is an adult can you please tell me how health care have changed.
Medical Advances Changes in Cost Changes in the relationship with health care...
Thank you :)
What is the momentum of a wavelength= 0.015nm X-ray photon? p= in kg*m/s Please help I am confused. Thank you.
Photons of energy 12eV are incident on a metal. It is found that current flows from the metal until a stopping potential of 8.0V is applied. If the wavelength of the incident photons is doubled, what
is the maximum kinetic energy of the ejected electrons? What is KEmax in eV. ...
Physics Please COULD SOMEONE HELP ME ASAP PLZ
When an object is placed 55.0cm from a certain converging lens, it forms a real image. When the object is moved to 45.0cm from the lens, the image moves 6.00 cm farther from the lens. Find the focal
length of this lens? Please someone help me out I tried so many times i keep g...
When an object is placed 55.0cm from a certain converging lens, it forms a real image. When the object is moved to 45.0cm from the lens, the image moves 6.00 cm farther from the lens. Find the focal
length of this lens? Please someone help me out I tried so many times i keep g...
When an object is placed 55.0cm from a certain converging lens, it forms a real image. When the object is moved to 45.0cm from the lens, the image moves 6.00 cm farther from the lens. Find the focal
length of this lens? Please someone help me out I tried so many times i keep g...
I found this equation 5380di^2+600di+33000=0 and when i solve i got an imaginary number I had a hard time solving it so could you please help me out
When an object is placed 55.0cm from a certain converging lens, it forms a real image. When the object is moved to 45.0cm from the lens, the image moves 6.00 cm farther from the lens. Find the focal
length of this lens? Please someone help me out I tried so many times i keep g...
pre calculus
for the given function f and g, find the specified value of the function. State the domain: f(x)=2x-5; g(x)=7x-9 a. (f-g)(x)= b.(f/g)(x)= I need help!
math pre calculus
find the following for the function f(x)=(x+5)^2(x-2)^2 a.find the x and y intercepts, b.find the power function that the graph ressembles for large values of x c.determine the maximum number of
turning points on the graph of f d.determine the behavior of the graph of f near e...
chemistry HELP plz!
chemistry HELP plz!
The species __________ contains 16 neutrons. a. 16O b. 31P c. 34S2- d. 80Br- e. 36Cl
We have to write a paper about seashells,sunglasses,and sunscreen My cousins and I went to the beach last summer and we brought things top lay with there like our beach ball and water guns then we
put on our suncreen and sunglasses before going in the water.We played games in ...
find a square root of -7-24i
I'm doing a health project on family. My question is how does your dad support you? I need ideas thank you
help for homework
can you correct that for me pleas? Have you ever sat in a classroom and observed the conversational styles of the students? Nowadays many people flying to another language because several reasons
such as study , business, or even Tourism. However, When they travel to face diff...
A poll estimates that 43% of likely voters are in favor of additional restrictions on teenage drivers. The poll has a margin of error of plus or minus 4%. Write and solve an absolute value equation
to find the minimum and maximum percent of voters that actually support the res...
A poll estimates that 43% of likely voters are in favor of additional restrictions on teenage drivers. The poll has a margin of error of plus or minus 4%. Write and solve an absolute value equation
to find the minimum and maximum percent of voters that actually support the res...
Tranquilizing drugs that inhibit sympathetic nervous system activity often effectively reduce people's subjective experience of intense fear and anxiety. Use one of the major theories of emotion to
account for the emotionreducing effects of such tranquilizers. Which theory...
PLEASE HELP ME. 1/5 of bees flew to a cherry tree, 1/3 flew to a clover tree, and 3 times the difference of these 2 numbers buzzed over to a stand of heather. 3 flew around the hive circling and
protecting the bee community. The question is, how many bees were there altogether?
You are the technical consultant for an action-adventure film in which a stunt calls for the hero to drop off a 18-m-tall building and land on the ground safely at a final vertical speed of 5 m/s. At
the edge of the building's roof, there is a 100-kg drum that is wound wit...
A flywheel with a diameter of 1 m is initially at rest. Its angular acceleration versus time is graphed in the figure. (a) What is the angular separation between the initial position of a fixed point
on the rim of the flywheel and the point's position 8 s after the wheel s...
A ball of mass m = 0.2 kg is attached to a (massless) string of length L = 3 m and is undergoing circular motion in the horizontal plane, as shown in the figure. What should the speed of the mass be
for θ to be 46°? What is the tension in the string?
Two instruments are playing musical note "A" (440 Hz). A beatnote with a frequency of 2.5 Hz is heard. Assuming that one instrument is playing the correct pitch, what is the frequency of the pitch
played by the second instrument?
Explain the important conditions when you crystallize a solid by crystallization.
Explain why would there be more white blood cells found in a person's body who is infected by salmonella bacteria rather than a healthy person?
The 2500 kg cable car shown in the figure below descends a 200 m high hill. In addition to its brakes, the cable car controls its speed by pulling an 1200 kg counterweight up the other side of the
hill. The rolling friction of both the cable car and the counterweight are negli...
English-Their Eyes Were Watching God
Why do you think Starks puts the street lamp on a showcase for a week and then throws a big celebration for its lighting ceremony?
ChemISTRY (WEBWORK)
A student measures the potential for cells containing A+, B+, and C+ ions and their metals and records his data in the table below: Overall Reaction ... Potential (V). Cell #1: A(s) | A+(aq) || B+(aq) | B(s) ...
Suppose a charge q is placed at point x = 0, y = 0. A second charge q is placed at point x = 6.2 m, y = 0. What charge Q must be placed at the point x = 3.1 m, y = 0 in order that the field at the
point x = 3.1 m, y = 4.2 m be zero?
A certain capacitor stores 360 J of energy when it holds 9.1 × 10^-2 C of charge. (a) What is the capacitance of this capacitor? µC (b) What is the potential difference across the plates? V
Two metal spheres are separated by a distance of 1.1 cm and a power supply maintains a constant potential difference of 710 V between them. The spheres are brought closer to each other until a spark flies between them. If the dielectric strength of dry air is 3.0 × 10^6 V/m, what...
math pre-ALG
how to write 72,700 in word form
Is the total charge 3.0 x 10^-6 C?
How many electrons make up a charge of -30.0 µC?
Thank you so much! Makes perfect sense =D
A curve in a road forms part of a horizontal circle. As a car goes around it at constant speed 14.0 m/s, the total force exerted on the driver has magnitude 115 N. What are the magnitude and
direction of the total vector force exerted on the driver if the speed is 25.0 m/s ins...
A bolt drops from the ceiling of a train car that is accelerating northward at a rate of 3.15 m/s2. What is the acceleration of the bolt relative to the train car? I don't understand how to do this
problem at all... and why isn't it zero? It seems if the train is going...
super easy (I think) but I must be missing some concept... A race car starts from rest on a circular track. The car increases its speed at a constant rate at as it goes 5.00 times around the track.
Find the angle that the total acceleration of the car makes with the radius con...
2y + 1 < -3 -1 -1 2y < -4 divide each side by two y < -2
2y + 1 < -3 -1 -1 2y < -4 divide each side by two y < -2
2y + 1 < -3 -1 -1 2y < -4 divide each side by two y < -2
8th Grade Algebra
is the common solution then (7,4) ?
8th Grade Algebra
How can you tell if these three equations have a common solution? 3x - 5y = 1 4x - y = 24 2x - 3y = 2 Please help ?
oh ok, I solved it, thank you very much! :]
The element copper, found in nature with an average atomic mass of 63.54u, consists of two isotopes, copper-63 of atomic mass 62.93u and copper-65 of atomic mass 64.93u. Calculate the abundance of
each isotope. I can't come up with an equation to solve it; can someone plea...
Provide a real-world example of a product (a good or service) that has either an external cost or external benefit associated with it and propose a government policy to adjust for the over- or
underproduction of this product.
Do a google search on the concept of a flat tax. Explain how this tax scheme works and does it meet the criteria of tax efficiency and tax fairness/equity?
Assume that the demand for gasoline is inelastic. The government imposes a sales tax on gasoline. The tax revenue is used to fund research into clean fuel alternatives to gasoline, which will improve
the air we all breathe. a. Who bears more of the excess burden of this tax: c...
add. 4 1/7 + 1 1/2=
Congratulations! You have decided to start a new business! You will begin a partnership with one other business associate that you trust and have known for several years. You plan to hire three
managers and another twenty employees. Discuss a plan of action that you and your p...
i agree
Economic profit that is less than zero
Math Help
Let $N > M$. Prove that for $k > 1$, $\sum_{n=M+1}^{N} n^{-k} = O(M^{-k+1})$ as $M \to \infty$.
Observe your sum is bounded above by the integral of $1/x^k$ from $M$ to $N$, which you can easily evaluate directly, obtaining two terms each bounded by $C/M^{k-1}$, as needed.
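Writing the comparison out (a standard sketch; here one can take $C = 1/(k-1)$): since $x^{-k}$ is decreasing, each term satisfies $n^{-k} \le \int_{n-1}^{n} x^{-k}\,dx$, so

```latex
\sum_{n=M+1}^{N} \frac{1}{n^{k}}
  \le \int_{M}^{N} \frac{dx}{x^{k}}
  = \frac{M^{1-k} - N^{1-k}}{k-1}
  \le \frac{1}{k-1}\, M^{-(k-1)}
  = O\!\left(M^{-k+1}\right) \quad (M \to \infty).
```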
What classes of functions are closed under all rescalings?
Let us denote by the symbol $\mathcal{G}$ a group of functions $f: \mathbb{R} \rightarrow \mathbb{R}$ (with the composition operation) that is additionally closed under all affine changes of variables of the form (homothety):
$$ h(x) = mx, \quad m>0.$$
In other words, I would like the following property to hold for every affine map $h$ of the above form: $$h\mathcal{G}h^{-1} \subseteq \mathcal{G}.$$
Intuitively such a group is a group of functions that is invariant under all rescalings.
A simple example of such a group is the group of fractional linear transformations (FLT) with real coefficients, namely the group $\mathcal{S}$ consisting of the functions $f(x) = \frac{ax+b}{cx+d}$ with $a,b,c,d \in \mathbb{R}$ and $ad-bc \neq 0$.
My questions are:
1. Do such groups have a name?
2. What is the classification of all such groups? (with the properties of $f'(x) >0$ and $f\in C^3$ if possible)
3. Is there a general way of constructing such groups or putting this question in a general context?
Thank you in advance to all those who respond,
E(up)lio M.
fa.functional-analysis gr.group-theory mp.mathematical-physics cv.complex-variables
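As a quick sanity check on the FLT example (my own addition, not part of the question): conjugating $f(x) = (ax+b)/(cx+d)$ by $h(x) = mx$ gives $m\,f(x/m) = (ax + mb)/((c/m)x + d)$, again an FLT. A small numeric test confirms the claimed coefficients:

```python
def flt(a, b, c, d):
    """The fractional linear transformation x -> (a x + b)/(c x + d)."""
    return lambda x: (a * x + b) / (c * x + d)

def conjugate(f, m):
    """(h . f . h^-1)(x) for the homothety h(x) = m x."""
    return lambda x: m * f(x / m)

a, b, c, d, m = 2.0, 1.0, 3.0, 5.0, 7.0
g = conjugate(flt(a, b, c, d), m)
predicted = flt(a, m * b, c / m, d)   # the claimed coefficients of the conjugate

for x in [-2.0, 0.5, 1.0, 4.0]:
    assert abs(g(x) - predicted(x)) < 1e-12
```

So the FLT group is indeed closed under conjugation by all rescalings, as the question asserts.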
3 Answers
If $G$ contains the linear group consisting of all functions of the form $h_m(x) = mx$, then obviously it satisfies your conditions.

Conversely, by the definition of the derivative,

(*) $\lim_{m \to \infty} (h_m^{-1} f h_m)(x) = f'(0)\, x$

So the closure $\bar G$ of your group (in some appropriate topology) contains all linear functions of the form $h_m$ where $m = f'(0)$ for $f \in G$. So if you add to your conditions that the group $G$ be closed, and that the set of derivative values $f'(0)$, $f \in G$, consists of all positive real numbers, then $G$ does indeed contain the linear group.
That's an interesting observation. – Euplio M. Apr 23 '12 at 16:51
@Lee: Did you assume that $f(0)=0$ in obtaining (*) above? – Euplio M. Apr 23 '12 at 21:15
Yeah, I think you are right. So this is more a class of examples than anything else. It would not say anything about the extreme opposite case where $\bar G$ has no elements whose
graphs pass through the origin. – Lee Mosher Apr 24 '12 at 11:55
Let $G$ be any group of functions. Let $H$ be the group of all compositions of functions of the form $f(mx)/m$ where $m>0$ and $f\in G$. Then $H$ is a group of the sort you're looking for, and all such groups arise in this way.
$H$ need not be closed under composition. Unless you mean for $H$ to be the group generated by all functions of that form? – Lee Mosher Apr 23 '12 at 2:22
Lee Mosher: I did in fact mean this and dropped a few words along the way. I'm fixing this now; thanks for catching it. – Steven Landsburg Apr 23 '12 at 2:52
For what it's worth, there seem to be $2^{2^{2^{\aleph_0}}}$ many such groups (or at least it is consistent with ZFC that there are this many). It is easy to see the somewhat lesser claim that there must be at least $2^{2^{\aleph_0}}$ many such groups, since we may simply close a single bijective function under composition and conjugation by affine functions. Since there are only continuum many affine functions, this leads to a group of size at most continuum, and so at most continuum many functions can lead to the same group this way. But there are $2^{\frak{c}}=2^{2^{\aleph_0}}$ many bijections. For the larger claim of $2^{2^{2^{\aleph_0}}}$, consider the forcing to add $2^{\frak{c}}$ many mutually generic bijections $f:\mathbb{R}\to\mathbb{R}$, without adding reals. The groups generated by any subset of these functions in the corresponding forcing extension, closing under composition and conjugation by affine functions, will be different by mutual genericity, and so there will be $2^{2^{\frak{c}}}=2^{2^{2^{\aleph_0}}}$ many different such groups in the forcing extension. I expect that one can get rid of this forcing argument by a more careful counting, and just prove outright that there are $2^{2^{2^{\aleph_0}}}$ many such groups. For example, all one needs is a family of $2^{\frak{c}}$ many bijective functions $\mathbb{R}\to\mathbb{R}$, such that omitting any one of them $f$ and closing the remaining functions under composition and conjugation by affine functions does not generate $f$; that is, a maximal independent family for the generating process in your question. In this case, every subset of that family will generate distinct groups with your property, giving $2^{2^{2^{\aleph_0}}}$ many such groups. The forcing argument shows that it is consistent with ZFC to have such a large independent family, but probably one can just prove it outright.
Translations of Mathematical Monographs
1991; 404 pp; hardcover
Volume: 84
ISBN-10: 0-8218-4536-5
ISBN-13: 978-0-8218-4536-3
List Price: US$135
Member Price: US$108
Order Code: MMONO/84
Plateau's problem is a scientific trend in modern mathematics that unites several different problems connected with the study of minimal surfaces. In its simplest version, Plateau's problem is
concerned with finding a surface of least area that spans a given fixed one-dimensional contour in three-dimensional space--perhaps the best-known example of such surfaces is provided by soap films.
From the mathematical point of view, such films are described as solutions of a second-order partial differential equation, so their behavior is quite complicated and has still not been thoroughly
studied. Soap films, or, more generally, interfaces between physical media in equilibrium, arise in many applied problems in chemistry, physics, and also in nature.
In applications, one finds not only two-dimensional but also multidimensional minimal surfaces that span fixed closed "contours" in some multidimensional Riemannian space. An exact mathematical
statement of the problem of finding a surface of least area or volume requires the formulation of definitions of such fundamental concepts as a surface, its boundary, minimality of a surface, and so
on. It turns out that there are several natural definitions of these concepts, which permit the study of minimal surfaces by different, and complementary, methods.
In the framework of this comparatively small book it would be almost impossible to cover all aspects of the modern problem of Plateau, to which a vast literature has been devoted. However, this book
makes a unique contribution to this literature, for the authors' guiding principle was to present the material with a maximum of clarity and a minimum of formalization.
Chapter 1 contains historical background on Plateau's problem, referring to the period preceding the 1930s, and a description of its connections with the natural sciences. This part is intended for a
very wide circle of readers and is accessible, for example, to first-year graduate students. The next part of the book, comprising Chapters 2-5, gives a fairly complete survey of various modern
trends in Plateau's problem. This section is accessible to second- and third-year students specializing in physics and mathematics. The remaining chapters present a detailed exposition of one of
these trends (the homotopic version of Plateau's problem in terms of stratified multivarifolds) and the Plateau problem in homogeneous symplectic spaces. This last part is intended for specialists
interested in the modern theory of minimal surfaces and can be used for special courses; a command of the concepts of functional analysis is assumed.
• Historical survey and introduction to the classical theory of minimal surfaces
• Information about some topological facts used in the modern theory of minimal surfaces
• The modern state of the theory of minimal surfaces
• The multidimensional Plateau problem in the spectral class of all manifolds with a fixed boundary
• Multidimensional minimal surfaces and harmonic maps
• Multidimensional variational problems and multivarifolds. The solution of Plateau's problem in the homotopy class of a map of a multivarifold
• The space of multivarifolds
• Parametrizations and parametrized multivarifolds
• Problems of minimizing generalized integrands in classes of parametrizations and parametrized multivarifolds. A criterion for global minimality
• Criteria for global minimality
• Globally minimal surfaces in regular orbits of the adjoint representation of the classical Lie groups
Focusing and synthetic rules
So, I was trying to explain to some people at a whiteboard something that I thought was more generally obvious than I guess it is. So, post! This post assumes you have seen lots of sequent calculi and have maybe heard of focusing before, but I'll review the focusing basics first. And here's the main idea:

focusing lets you treat propositions as rules

This is not an especially new idea if you are "A Twelf Person," but the details are still a bit peculiar.
Let's start with a little baby logical framework. Here are the types:
A ::= A → A | P⁺ | P⁻
P⁺ and P⁻ are the atomic propositions, and there can be as many of them as we want for there to be.
Focusing, real quick
There are three judgments that we need to be worried about. Γ ⊢ [ A ] is the right focus judgment, Γ[ A ] ⊢ Q is the left focus judgment, and Γ ⊢ A is the out-of-focus judgment.
Okay. So focusing (any sequent calculus presentation of logic, really) encourages you to read rules from the bottom to the top, and that's how the informal descriptions will work. The first set of rules deal with right-focus, where you have to prove A right now. If you are focused on a positive atomic proposition, it has to be available right now as one of the things in the context. Otherwise (if you are focused on a negative atomic proposition or A → B), just try to prove it regular-style.
P⁺ ∈ Γ
----------
Γ ⊢ [ P⁺ ]

Γ ⊢ P⁻
----------
Γ ⊢ [ P⁻ ]

Γ ⊢ A → B
-------------
Γ ⊢ [ A → B ]
The second set of rules deal with left-focus. One peculiar bit: we write left focus as Γ[ A ] ⊢ Q, and by Q we mean either a positive atomic proposition P⁺ or a negative atomic proposition P⁻. If we're in left focus on the positive atom P⁺, then we stop focusing and just add P⁺ to the set of antecedents, but if we're in left focus on a negative atomic proposition P⁻, then we have to be trying to prove P⁻ on the right right now in order for the proof to succeed. Then, finally, if we're left focused on A → B, then we have to prove A in right focus and B in left focus.
Γ, P⁺ ⊢ Q
-----------
Γ[ P⁺ ] ⊢ Q

------------
Γ[ P⁻ ] ⊢ P⁻

Γ ⊢ [ A ]    Γ[ B ] ⊢ Q
-----------------------
Γ[ A → B ] ⊢ Q
Finally, we need rules that deal with out-of-focus sequents. If we have an out-of-focus sequent and we're trying to prove P⁺, then we can go ahead and finish if P⁺ is already in the context. There is no rule for directly proving P⁻, but if we have a positive or negative atomic proposition that we're trying to prove, we can left-focus and work from there. And if we're trying to prove A → B, we can assume A and keep on trying to prove B.
P⁺ ∈ Γ
-------
Γ ⊢ P⁺

A ∈ Γ    A is not a positive atomic proposition    Γ[ A ] ⊢ Q
-------------------------------------------------------------
Γ ⊢ Q

Γ, A ⊢ B
---------
Γ ⊢ A → B
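As a sanity check that these rules hang together, here is a tiny backward-chaining prover for the three judgments (my own sketch, not code from the post; the tuple encoding of propositions and the crude loop check are my assumptions):

```python
# Propositions: ('+', a) and ('-', a) are positive and negative atoms,
# ('->', A, B) is A → B.  Contexts Γ are Python sets of propositions.

def prove(gamma, a, seen=frozenset()):          # Γ ⊢ A
    if a[0] == '->':                             # right rule: assume A, prove B
        return prove(gamma | {a[1]}, a[2], seen)
    key = (frozenset(gamma), a)
    if key in seen:                              # already attempting this neutral sequent
        return False
    seen = seen | {key}
    if a[0] == '+' and a in gamma:               # finish on a positive atom in Γ
        return True
    # left-focus on anything in Γ that is not a positive atom
    return any(left_focus(gamma, h, a, seen) for h in gamma if h[0] != '+')

def right_focus(gamma, a, seen):                 # Γ ⊢ [ A ]
    if a[0] == '+':
        return a in gamma                        # P⁺ must be in Γ right now
    return prove(gamma, a, seen)                 # P⁻ and A → B: drop focus

def left_focus(gamma, h, q, seen):               # Γ[ A ] ⊢ Q
    if h[0] == '+':
        return prove(gamma | {h}, q, seen)       # stop focusing, add P⁺ to Γ
    if h[0] == '-':
        return h == q                            # must match the goal exactly
    _, arg, res = h                              # h = arg → res
    return right_focus(gamma, arg, seen) and left_focus(gamma, res, q, seen)

p, q = ('-', 'p'), ('-', 'q')
assert prove(set(), ('->', p, p))                # ⊢ P⁻ → P⁻
assert prove({('->', p, q), p}, q)               # modus ponens, neutral style
assert not prove(set(), p)                       # nothing proves a bare atom
```

The loop check just refuses to revisit a neutral sequent already being attempted on the current branch; that is enough for the small examples here, though it is not offered as a complete search strategy.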
There are a lot of different similar presentations of focusing, most of which amount to the same thing, and most of which take some shortcuts. This one is no different, but the point is that this system is "good enough" that it lets us talk about the two big points.

The first big point about focusing is that it's complete - any sequent calculus or natural deduction proof system for intuitionistic logic will prove exactly the same things as the focused sequent calculus. Of course, the "any other sequent calculus" you picked probably won't have a notion of positive and negative atomic propositions. That's the second big point: atomic propositions can be assigned as either positive or negative, but a given atomic proposition has to always be assigned the same positive-or-negativeness (that positive-or-negativeness is called polarity, btw). And on a similar note, you can change an atomic proposition's polarity if you change it everywhere. This may radically change the structure of a proof, but the same things will definitely be provable. Both of these things, incidentally, were noticed by Andreoli.
Synthetic inference rules
An idea that was also noticed by Andreoli but that was really developed by Kaustuv Chaudhuri is the idea that, when talking about a focused system, we should really think about proofs as being made up of synthetic inference rules, which are an artifact of focusing. The particular case of unfocused sequents where the conclusion is an atomic proposition, Γ ⊢ Q, is a special case that we can call neutral sequents. The only way we can prove a neutral sequent is to pull something out of the context and either finish (if the thing in the context is the positive atomic proposition we want to prove) or go into left focus. For instance, say that it is the case that P⁻ → Q⁻ → R⁻ ∈ Γ. Then the following derivation consists only of choices that we have to make if we left-focus on that proposition.
... ------ ---------
Γ⊢P⁻ Γ⊢[Q⁻] Γ[R⁻]⊢R⁻
------ ----------------
Γ⊢[P⁻] Γ[Q⁻→R⁻]⊢R⁻
------------------
Γ[P⁻→Q⁻→R⁻]⊢R⁻                P⁻ → Q⁻ → R⁻ ∈ Γ
This is a proof that has two leaves which are neutral sequents and a conclusion which is a neutral sequent, and where all the choices (including the choice of what the conclusion was) were totally forced by the rules of focusing. Therefore, we can cut out all the middle steps (which are totally determined anyway) and say that we have this synthetic inference rule:
P⁻ → Q⁻ → R⁻ ∈ Γ    Γ ⊢ P⁻    Γ ⊢ Q⁻
-------------------------------------
Γ ⊢ R⁻
This synthetic inference rule is more compact and somewhat clearer than the rule with all the intermediate focusing steps. As a side note, proof search with the inverse method is often much faster, too, if we think about these synthetic inference rules instead of the regular rules: that's part of the topic of Kaustuv Chaudhuri and Sean McLaughlin's Ph.D. theses. Chaudhuri calls these things "derived rules" in his Ph.D. thesis, but I believe he is also the originator of the terms "synthetic connective" and "synthetic inference rule."
Let's do a few more examples. First, let's look at a synthetic inference rule for a proposition that has positive atomic propositions in its premises:
... ------ ---------
Γ⊢P⁻ Γ⊢[Q⁺] Γ[R⁻]⊢R⁻
------ ----------------
Γ⊢[P⁻] Γ[Q⁺→R⁻]⊢R⁻
------------------
Γ[P⁻→Q⁺→R⁻]⊢R⁻                P⁻ → Q⁺ → R⁻ ∈ Γ
By convention, when one of the premises is of the form Q⁺ ∈ Γ, we go ahead and write the premise Q⁺ into the context everywhere, so the synthetic inference rule for this proposition is:
P⁻ → Q⁺ → R⁻ ∈ Γ    Γ, Q⁺ ⊢ P⁻
-------------------------------
Γ, Q⁺ ⊢ R⁻
If the conclusion ("head") of the proposition is a positive atom instead of a negative one, then we end up with an arbitrary conclusion.
... ....
Γ⊢P⁻ Γ,Q⁺⊢S
------ -------
Γ⊢[P⁻] Γ[Q⁺]⊢S
--------------
Γ[P⁻→Q⁺]⊢S                P⁻ → Q⁺ ∈ Γ
The synthetic inference rule looks like this, where S is required to be an atomic proposition, but it can be either positive or negative:
P⁻ → Q⁺ ∈ Γ    Γ ⊢ P⁻    Γ, Q⁺ ⊢ S
-----------------------------------
Γ ⊢ S
If we have a higher-order premise (that is, an arrow nested to the left of an arrow - (P⁻ → Q⁺) → R⁻ is one such proposition), then we gain new assumptions in some of the branches of the proof. Note that the basic "shape" of this rule would not be affected if we gave Q⁺ the opposite polarity - synthetic inference rules are a little less sensitive to the polarity of atoms within higher-order premises.
   ...
 Γ,P⁻⊢Q⁺
---------  ---------
Γ⊢[P⁻→Q⁺]  Γ[R⁻]⊢R⁻
-------------------
Γ[(P⁻→Q⁺)→R⁻]⊢R⁻                (P⁻ → Q⁺) → R⁻ ∈ Γ
The synthetic inference rule, one more time, looks like this:
(P⁻ → Q⁺) → R⁻ ∈ Γ    Γ, P⁻ ⊢ Q⁺
---------------------------------
Γ ⊢ R⁻
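The step from a proposition to its synthetic rule is mechanical, and can be sketched in code. This is my own illustration (the tuple encoding of propositions is an assumption, not the post's notation); it reads off exactly the rules derived above:

```python
# Propositions: ('+', a) / ('-', a) atoms, ('->', A, B) for A → B.

def invert(prop):
    """Strip arrows off an out-of-focus goal, collecting hypotheses."""
    hyps = []
    while prop[0] == '->':
        hyps.append(prop[1])
        prop = prop[2]
    return hyps, prop

def synthetic_rule(prop):
    """Left-focus on `prop` and read off the synthetic rule.

    Returns (premises, extra_hyps, head):
      premises   - sequents (hyps, goal) that become premises of the rule
      extra_hyps - positive atoms written into the context everywhere
      head       - a negative atom fixing the conclusion, or None when the
                   head is positive (the conclusion is then any atomic Q)
    """
    premises, extra_hyps = [], []
    while prop[0] == '->':
        arg, prop = prop[1], prop[2]
        if arg[0] == '+':                  # right focus on P⁺: must be in Γ
            extra_hyps.append(arg)
        else:                              # P⁻ or nested arrow: blur to a premise
            premises.append(invert(arg))
    if prop[0] == '+':                     # positive head joins the context
        extra_hyps.append(prop)
        return premises, extra_hyps, None
    return premises, extra_hyps, prop      # negative head fixes the conclusion

P, Q, Qp, R = ('-', 'p'), ('-', 'q'), ('+', 'q'), ('-', 'r')
# P⁻ → Q⁻ → R⁻ : two neutral premises, conclusion Γ ⊢ R⁻
assert synthetic_rule(('->', P, ('->', Q, R))) == ([([], P), ([], Q)], [], R)
# P⁻ → Q⁺ → R⁻ : Q⁺ joins the context everywhere
assert synthetic_rule(('->', P, ('->', Qp, R))) == ([([], P)], [Qp], R)
# (P⁻ → Q⁺) → R⁻ : a higher-order premise Γ, P⁻ ⊢ Q⁺
assert synthetic_rule(('->', ('->', P, Qp), R)) == ([([P], Qp)], [], R)
```

The asserts reproduce the four example propositions worked through above, one shape each.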
Application to logical frameworks
One annoyance in all of these derived rules is that each of them had a premise like (P⁻ → Q⁺) → R⁻ ∈ Γ. However, in a logical framework, we usually define a number of propositions in some "signature" Σ, and consider these propositions to be always true. Therefore, given any finite signature, we can "compile" that signature into a finite set of synthetic inference rules, add those to our logic, and throw away the signature - we don't need it anymore, as the synthetic inference rules contain precisely the logical information that was contained in the signature. Hence the motto, which admittedly may need some work:
focusing lets you treat propositions as rules
This is a strategy that hasn't been explored too much in logics where atomic propositions have mixed polarity - Jason Reed and Frank Pfenning's constructive resource semantics papers are the only real line of work that I'm familiar with, though Vivek's comment reminds me that I learned about the idea by way of Jason from Vivek and Dale Miller's paper "A framework for proof systems," section 2.3 in particular. (They in turn got it from something Girard wrote in French, I believe. Really gotta learn French one of these days.) The big idea here is that this is expressing the strongest possible form of adequacy - the synthetic inference rules that your signature gives rise to have an exact correspondence to the original, "on-paper" inference rules.
If this is our basic notion of adequacy, then I claim that everyone who has ever formalized the sequent calculus in Twelf has actually wanted positive atomic propositions. Quick, what's the synthetic connective corresponding to this pseudo-Twelf declaration of ∨L in the sequent calculus?
∨L : (hyp A → conc C)
→ (hyp B → conc C)
→ (hyp (A ∨ B) → conc C)
If you thought this:
Γ, hyp (A ∨ B), hyp A ⊢ conc C
Γ, hyp (A ∨ B), hyp B ⊢ conc C
-------------------------------- ∨L
Γ, hyp (A ∨ B) ⊢ conc C
then what you wrote down corresponds to what we like to write in "on-paper" presentations of the intuitionistic sequent calculus, but it is not the correct answer. Twelf has only negative atomic propositions, so the correct answer is this:
Γ ⊢ hyp (A ∨ B)
Γ, hyp A ⊢ conc C
Γ, hyp B ⊢ conc C
-------------------------------- ∨L
Γ ⊢ conc C
This is still adequate in the sense that on-paper sequent calculus proofs are in one-to-one correspondence with the LF proofs: the reason that is true is that, when I am trying to prove Γ ⊢ hyp (A ∨ B), by a global invariant of the sequent calculus I can only succeed by left-focusing on some hyp (A ∨ B) ∈ Γ and then immediately succeeding. However, the proofs that focusing and synthetic connectives give rise to are not in one-to-one correspondence.
In order to get the rule that we desire, of course, we need to think of hyp A as a positive atomic proposition (and conc C as negative). If we do that, then the first proposed synthetic inference rule is dead-on correct.
Hey, I'm kind of new at logicblogging and don't really know who is following me. This was really background for a post I want to write in the future. Background-wise, how was this post?
[Update Nov 11, 2010]
Vivek's comments reminded me of the forgotten source for the "three levels of adequacy," Vivek and Dale's "A framework for proof systems," which is probably a more canonical source than Kaustuv's
thesis for using these ideas for representation. Also, the tech report mentioned in Vivek's comment replays the whole story in intuitionistic logic and is very close to the development in this blog post.
2 comments:
1. Hi,
I was wondering whether you've seen the paper:
"A framework for proof systems"
that Dale Miller and I wrote last year. It seems to contain the same idea that you propose, but in a focused linear logic proof system for classical logic, a la Andreoli's proof system. There was
also a follow-up tech-report showing that our results could be replayed in focused intuitionistic logic.
Anders Starcke Henriksen. Using LJF as a Framework for Proof Systems. Technical Report, University of Copenhagen, 2009.
2. YES THAT'S IT! Since neither Jason nor I know French, I'm quite certain that an early draft of that paper was Jason's reference point (and not Girard's "Le Point Aveugle: Cours de Logique: Tome
1, Vers la Perfection") for the "three levels of adequacy" (Section 2.3).
I was, of course, aware that your work with Dale was considering these same ideas because the idea is very clearly being used in "Algorithmic specifications in linear logic with subexponentials,"
but I'd lost the particular reference to the "three levels of adequacy."
I was, however, unaware of the tech report you mentioned; it's definitely after the same ideas I was discussing here. One follow-up to it: in that tech report, the conclusion mentions "This means
that it would be impossible to encode nicely in LJF, linear systems or systems with multiple conclusions that do not allow weakening. As LLF uses linear logic as a base these systems should be
possible to encode in LLF." - recent unpublished work by Frank Pfenning and Jason Reed discusses how this (and more!) is doable by adding a term language with a dash of algebraic properties. The
most recent draft, "Focus-Preserving Embeddings of Substructural Logics in Intuitionistic Logic", is available from Frank's web page. Critically, while they need no positive connectives,
positive-polarity atoms play an important role.
I think this sort of algebraic or hybrid formulation in an LJF-family logic is where I see the CMU group currently looking in our attempts to formalize substructural logics and formalize systems
within substructural logics; the language in my thesis proposal actually makes me a bit of a lone holdout in this area, though I think even I am wavering :).
|
{"url":"http://requestforlogic.blogspot.com/2010/09/focusing-and-synthetic-rules.html","timestamp":"2014-04-17T15:25:56Z","content_type":null,"content_length":"96528","record_id":"<urn:uuid:fdc32662-ad2a-4037-bd23-7a3474cb193f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Tutors
Stanford, CA 94305
Physics, Math -- Improve your scores greatly, quickly, and pleasantly!
...am an experienced tutor from the most prestigious tutoring companies in the Bay Area, and I taught physics for years in universities. My tutoring focuses on physics, high-level math such as calculus, and test prep. Most of my students are high school juniors, seniors,...
Offering 8 subjects including calculus
|
{"url":"http://www.wyzant.com/geo_Pleasanton_CA_calculus_tutors.aspx?d=20&pagesize=5&pagenum=3","timestamp":"2014-04-18T11:03:45Z","content_type":null,"content_length":"62111","record_id":"<urn:uuid:f3b9ae55-92f1-4ba4-b479-dae7576cafff>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
I don't even know how to start doing this problem involving angles...
|
{"url":"http://openstudy.com/updates/50b4c689e4b0a5a78e15b97f","timestamp":"2014-04-18T14:05:12Z","content_type":null,"content_length":"87663","record_id":"<urn:uuid:b952c5c2-9c4d-4df3-9e17-f159e733029b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Flare Stack Sizing Calculator
This is a sample of the Flare Stack Sizing Calculator. To access the working calculator, please sign up for free membership trial.
Flare stack sizing calculator to determine the required flare stack height based on radiation at the flare tip.
The stack height calculation is based on the allowable radiation at a certain horizontal distance from the stack. Here the limit considered is 6.3×10^6 kW/m^2 at a distance of 150 feet (45.7 m) from the flare stack.
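Calculators of this kind typically rest on the simple point-source flame model K = τ·F·Q / (4πD²). The sketch below solves that relation for stack height; the fraction-of-heat-radiated and transmissivity defaults are illustrative assumptions, not values taken from this calculator.

```python
import math

def distance_from_flame(heat_release_kw, allowable_radiation_kw_m2,
                        fraction_radiated=0.3, transmissivity=1.0):
    """Distance D (m) from the flame at which radiation falls to the allowable
    level K, from the point-source relation K = tau * F * Q / (4 * pi * D^2)."""
    return math.sqrt(transmissivity * fraction_radiated * heat_release_kw
                     / (4 * math.pi * allowable_radiation_kw_m2))

def required_stack_height(heat_release_kw, allowable_radiation_kw_m2,
                          horizontal_distance_m, **kw):
    """Stack height such that radiation at the given horizontal distance from
    the base does not exceed the allowable level (flame treated as a point at
    the stack tip)."""
    d = distance_from_flame(heat_release_kw, allowable_radiation_kw_m2, **kw)
    if d <= horizontal_distance_m:
        return 0.0  # allowable level already met at grade
    return math.sqrt(d ** 2 - horizontal_distance_m ** 2)
```

For example, a 100 MW flare against a 6.31 kW/m² limit at 10 m horizontal distance needs a stack a little under 17 m tall under these assumed parameters.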
|
{"url":"http://www.enggcyclopedia.com/calculators/equipment-sizing/flare-stack-sizing-calculator/","timestamp":"2014-04-20T03:09:50Z","content_type":null,"content_length":"51848","record_id":"<urn:uuid:691a539f-2b5b-414a-a9bc-3f2ac9389f76>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Edexcel Physics Help
1. A golf ball is projected with a horizontal velocity of 30 m/s and takes 4.0 seconds to reach the ground. (Assume g= 10 m/s² and the air resistance is negligible.) Calculate: the height from which
the golf ball was projected. The magnitude of the golf ball's vertical velocity component just before hitting the ground. The horizontal velocity component. Resultant velocity just before the object
strikes the ground. The horizontal component of the object's displacement.
2. Erica kicks a soccer ball 12 m/s at an angle of 40 degrees above the horizontal.
a. What is the ball's maximum height?
b. What is the ball's maximum range?
c. With what velocity does the ball strike the ground?
d. What are the ball's acceleration and velocity at the top of its rise?
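As a quick check, the first problem above works out neatly in a few lines of Python (using the stated g = 10 m/s²):

```python
import math

g = 10.0           # m/s^2, as stated in the problem
vx, t = 30.0, 4.0  # horizontal velocity (m/s) and flight time (s)

height = 0.5 * g * t ** 2   # fall height: h = g t^2 / 2
vy = g * t                  # vertical velocity component at impact
v = math.hypot(vx, vy)      # resultant impact speed
x = vx * t                  # horizontal displacement

print(height, vy, vx, v, x)  # 80.0 40.0 30.0 50.0 120.0
```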
A Horizontal Projectile Motion
1. Erica kicks a soccer ball horizontally at 12 m/s from the edge of the roof of a building which is 30.0 m high.
a. When does it strike the ground?
b. With what velocity does the ball strike the ground?
2. A car drives straight off the edge of a cliff that is 54 m high. The police at the scene of the accident note that the point of impact is 130 m from the base of the cliff. How fast was the car
traveling when it went over the cliff?
3. A ball thrown horizontally at 22.2 m/s from the roof of a building lands 36 m from the base of the building. How tall is the building?
4. A boy kicked a can horizontally from a 6.5 m high rock with a speed of 4.0 m/s. How far from the base of the rock does the can land?
5. A pilot flying a constant 215 km/h horizontally in a low-flying helicopter, wants to drop secret documents into his contact's open car which is traveling 155 km/h in the same direction on a level
highway 78.0 m below. At what angle (to the horizontal) should the car be in his sights when the packet is released?
6. A ski jumper travels down a slope and leaves the ski track moving in the horizontal direction with a speed of 25 m/s. The landing incline falls off with a slope of 33º.
a. How long is the ski jumper air borne?
b. Where does the ski jumper land on the incline?
7. Stones are thrown horizontally with the same velocity. One stone lands twice as far as the other stone. What is the ratio of the height of the taller building to the height of the shorter?
8. A fleck moving horizontally to the right at 2.5 m/s begins to accelerate downward at 0.75 m/s². Where is the fleck 4.0 s later?
B General Projectile Motion
1. In example 2, if Erica kicked the ball from the edge of the roof of a building which is 30.0 m high.
a. When does it strike the ground?
b. How far from the building does it land?
2 . A daredevil decides to jump a canyon of width 10 m. To do so, he drives a motorcycle up an incline sloped at an angle of 15 degrees. What minimum speed must he have in order to clear the canyon?
3. A ball is kicked from a point 38.9 m away from the goal. The crossbar is 3.05 m high. If the ball leaves the ground with a speed of 20.4 m/s at an angle of 52.2º to the horizontal
a. By how much does the ball clear or fall short of clearing the crossbar?
b. What is the vertical velocity of the ball at the time it reaches the crossbar?
4. A rocket is accelerating vertically upward at 30 m/s² near Earth's surface. A bolt separates from the rocket. What is the acceleration of the bolt?
5. Water is leaving a hose at 6.8 m/s. If the target is 2 m away horizontally, What angle should the water have initially?
6. A 5.0 kg brick lands 10.1 m from the base of a building. If it was given an initial velocity of 8.6 m/s [61º above the horizontal], how tall is the building?
7. A spear is thrown upward from a cliff 48 m above the ground. Given an initial speed of 24 m/s at an angle of 30º to the horizontal,
a. how long is the spear in flight?
b. what is the magnitude and direction of the spear's velocity just before it hits the ground?
8. A projectile is shot from the edge of a cliff 125 m above ground level with an initial speed of 65.0 m/s at an angle of 37º above the horizontal. Determine the the magnitude and the direction of
the velocity at the maximum height.
9. A projectile leaves a gun at the same instant that the target is dropped from rest. If the projectile is initially aimed straight at the target, will it hit the target?
10. A basketball is lobbed toward a hoop 3.05 m above the floor. If released 2 m above the floor 10 m from the basket and at a 45 degree angle, how fast must the basketball be thrown so that it goes
through the hoop?
11. Dick is tossing chocolates up to Jane's window from 8.0 m below her window and 9.0 m from the base of the wall. If the chocolates are traveling horizontally through the open window, how fast are
they going through her window?
12. A projectile has an initial velocity of 15.0 m/s at an angle of 30 degrees above the horizontal. What is the location of the projectile 2.0 seconds later?
13. If a ball is kicked with an initial velocity of 25 m/s at an angle of 60° above the ground, what is the "hang time"?
14. A water balloon hits a target 26 m away, at the same height as the release point. The horizontal component of the initial velocity was 5 m/s. What was the vertical component of the initial
velocity? What was the launch angle?
15. A soccer ball leaves a cliff 20.2 m above the valley floor, at an angle of 10 degrees above the horizontal. The ball hits the valley floor 3.0 seconds later. What is the initial velocity of the
ball? What maximum height above the cliff did the ball reach?
16. A flea stands 2.00 m from a dog's haunches .55m in height. Jumping at an angle of 32 degrees, what initial speed must the flea have to reach her new home?
17. A bullet hit a target 301.5m away. What maximum height above the muzzle did the bullet reach if it was shot at an angle of 25 degree to the ground?
18. A 3.00 kg parcel is dropped out of a window from a height of 176.4 m. Wind exerts an average 12.0 N force on the parcel away from the building. How long is the parcel in the air? Where does it
land? What is its impact velocity?
19. A projectile is shot from the ground at an angle of 60 degrees with respect to the horizontal, and it lands on the ground 5 seconds later. Find:
a. the horizontal component of initial velocity
b. the vertical component of initial velocity
c. initial speed
20. An object arrives 30 m away horizontally and 5 m above the point from which it was launched. It reaches this point 3 seconds after it was launched. Find:
a. the horizontal component of initial velocity
b. the vertical component of initial velocity
c. the vertical component of the impact velocity
d. the horizontal component of the impact velocity
21. Find the minimum initial speed of a champagne cork that travels a horizontal distance of 11 meters.
22. During practice, a soccer player kicks a ball, giving it a 32.5 m/s initial speed. It travels the maximum possible distance before landing down field.
(a) How much time does the ball spend in the air?
(b) How far did the ball travel?
23. A projectile was launched 64° above the horizontal, attaining a height of 10 m. What is the projectile's initial speed?
24. At what launch angle will the range of a projectile equal its maximum height?
25. A boy kicks a soccer ball directly at a wall 41.8 m away. The ball leaves the ground at 42.7 m/s with an angle of 33.0 degrees to the ground. What height will the ball strike the wall?
26. What is the relationship between the maximum height of the projectile, the projectile's range, and the launch angle?
27. A projectile is fired with an initial velocity of 120 m/s at an angle above the horizontal. If the projectile's initial horizontal speed is 55 m/s, then at what angle was it fired?
28. A boulder rolls 35 m down a hill, starting from rest and accelerating at 3.06 m/s². The boulder then rolls off a 45 m high vertical cliff, launching at 19.0° below the horizontal. (a) How far
from the cliff's base does the boulder land? (b) How much time does the boulder spend falling?
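Many of the general problems above reduce to the same kinematics. A small helper, assuming flat ground at height zero unless a launch height is given, no air resistance, and g = 9.8 m/s²:

```python
import math

def projectile(v0, angle_deg, y0=0.0, g=9.8):
    """Return (flight_time, horizontal_range, max_height) for a launch at
    speed v0 (m/s), angle_deg above the horizontal, from height y0 (m)."""
    th = math.radians(angle_deg)
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    # Time to reach the ground: y0 + vy*t - g*t^2/2 = 0, positive root.
    t = (vy + math.sqrt(vy ** 2 + 2 * g * y0)) / g
    return t, vx * t, y0 + vy ** 2 / (2 * g)
```

For problem 13, for instance, `projectile(25, 60)` gives the hang time 2·v0·sin(60°)/g directly.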
|
{"url":"http://edexcelphysics.blogspot.com/2010/11/projectiles.html","timestamp":"2014-04-20T15:50:34Z","content_type":null,"content_length":"84194","record_id":"<urn:uuid:dce2c9b5-65fa-481b-9afe-8e59e8c40ee4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stability Revisited
As defined earlier in §5.6, a filter is said to be stable if its impulse response decays to zero over time. In terms of poles and zeros, an irreducible filter transfer function is stable if and only if all its poles are inside the unit circle in the z plane (§6.8.6). This is because the transfer function is the z transform of the impulse response, and if there is an observable (non-canceled) pole outside the unit circle, then there is an exponentially increasing component of the impulse response. To see this, consider a causal impulse response of the form

h(n) = R^n e^(jθn),  n = 0, 1, 2, ...

This signal is a damped complex sinusoid when 0 < R < 1, with R^n as its amplitude envelope. If R > 1, the envelope increases exponentially as n increases.

The signal h(n) has the z transform

H(z) = Σ_{n=0}^∞ R^n e^(jθn) z^(-n) = 1 / (1 − R e^(jθ) z^(-1)),

where the last step holds for |z| > R.^9.1 Now consider what happens when we let R become greater than 1, so that h(n) has exponentially increasing amplitude. (Note that the z transform exists only for |z| > R.) In this case the z transform no longer exists on the unit circle, so that the frequency response becomes undefined!
The above one-pole analysis shows that a one-pole filter is stable if and only if its pole is inside the unit circle. In the case of an arbitrary transfer function, inspection of its partial fraction
expansion (§6.8) shows that the behavior near any pole approaches that of a one-pole filter consisting of only that pole. Therefore, all poles must be inside the unit circle for stability.
In summary, we can state the following: a filter is stable if and only if every pole of its irreducible transfer function has magnitude strictly less than 1.
Isolated poles on the unit circle may be called marginally stable. The impulse response component corresponding to a single pole on the unit circle never decays, but neither does it grow.^9.2 In
physical modeling applications, marginally stable poles occur often in lossless systems, such as ideal vibrating string models [86].
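A quick numerical illustration of the one-pole case (the function names here are mine, not from the text): with |R| < 1 the impulse response samples decay, and with |R| > 1 they blow up.

```python
import cmath

def one_pole_stable(R):
    """H(z) = 1/(1 - R z^-1) has its single pole at z = R; it is stable iff
    |R| < 1 (|R| = 1 is the marginally stable case, counted as not stable)."""
    return abs(R) < 1.0

def impulse_response(R, theta, n_terms):
    """First n_terms samples of h(n) = R^n e^{j*theta*n}."""
    pole = R * cmath.exp(1j * theta)
    return [pole ** n for n in range(n_terms)]
```

Comparing `impulse_response(0.9, 0.3, 50)` with `impulse_response(1.1, 0.3, 50)` shows the first envelope shrinking toward zero and the second growing without bound.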
|
{"url":"https://ccrma.stanford.edu/~jos/filters/Stability_Revisited.html","timestamp":"2014-04-18T23:27:40Z","content_type":null,"content_length":"17046","record_id":"<urn:uuid:da671bee-7980-40f2-8a88-b6c49dc3ac69>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Lower Bound for Learning Distributions Generated by Probabilistic Automata
Borja Balle, Jorge Castro and Ricard Gavaldà
In: 21st Intl. Conf. on Algorithmic Learning Theory (ALT'10), october 6-8, 2010, Canberra, Australia.
Known algorithms for learning PDFA can only be shown to run in time polynomial in the so-called distinguishability \mu of the target machine, besides the number of states and the usual accuracy and
confidence parameters. We show that the dependence on \mu is necessary for every algorithm whose structure resembles existing ones. As a technical tool, a new variant of Statistical Queries termed
L_inf-queries is defined. We show how these queries can be simulated from samples and observe that known PAC algorithms for learning PDFA can be rewritten to access its target using L_inf-queries
and standard Statistical Queries. Finally, we show a lower bound: every algorithm to learn PDFA using queries with a reasonable tolerance needs a number of queries larger than (1/\mu)^c for every c
EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 8050
Deposited By: Ricard Gavaldà
Deposited On: 17 March 2011
|
{"url":"http://eprints.pascal-network.org/archive/00008050/","timestamp":"2014-04-20T10:50:53Z","content_type":null,"content_length":"6631","record_id":"<urn:uuid:e4e33f18-c0f7-4de0-9aac-08337a3e2211>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
University of Southern Maine
BS in Computer Science
The Bachelor of Science in Computer Science prepares the student for either continued study at the graduate level or entry into the labor market. Our students have been successful at both, with some
earning doctoral degrees and some reaching high levels in the private sector, including the director of software development at a major corporation. The curriculum includes a required core of courses
that provides a broad base of fundamental knowledge, but allows for individuals to follow their own specific interests at the advanced level. All courses focus on general principles that will remain
valid into the future but use tools and vehicles reflecting contemporary practice.
Computer Science is perhaps the most pervasive technology of our time, reaching into every aspect of modern life, from work to recreation. It spans many disciplines, from mathematics and electrical
engineering to linguistics, cognitive psychology and graphic design. It is a challenge to provide a definition of the essence of such a sprawling discipline, but one that we like is that Computer
Science is the study of what can be automated.
Many people imagine that one must learn advanced mathematics to become a computer scientist or software developer. To be sure, some applications, such as computational modeling of physical processes,
require techniques from advanced mathematics. Other applications, however, do not require mathematics beyond the basics taught in a strong high school program. Far more important is the ability to
think logically and precisely and the ability to devise a plan to solve a problem. We have had students successfully convert to Computer Science from a variety of non-technical disciplines, including
history, classics, and English literature.
All students are reminded that, in addition to meeting departmental requirements for a major, they must also meet the University Core Curriculum requirements. Students are advised that COS 430
Software Engineering satisfies the Core Curriculum Capstone requirement.
The total number of credits for graduation is 120.
Courses used to fulfill major requirements in sections A through E below must be passed with a grade of C– or better. The accumulative grade point average of all courses applied to the major must be
at least 2.0. At most three credits of COS 497 can be used to meet a degree requirement.
The specific course requirements are as follows:
A. Computer Science:
COS 160 Structured Problem Solving: Java
COS 161 Algorithms in Programming
COS 170 Structured Programming Laboratory
COS 250 Computer Organization
COS 255 Computer Organization Laboratory
COS 285 Data Structures
COS 350 Systems Programming
COS 360 Programming Languages
COS 485 Design of Computing Algorithms
COS 398 Professional Ethics and Social Impact of Computing
B. Software Design:
COS 420 Object Oriented Design
or COS 430 Software Engineering
C. Computer Systems:
COS 450 Operating Systems
or COS 457 Database Systems
D. Completion of three additional COS courses numbered 300 and above, excluding COS 498.
Graduate courses in the Computer Science Department can be used to fulfill the requirements in section D.
E. Mathematics and Science requirements
1. Completion of:
MAT 145 Discrete Mathematics I
COS 280 Discrete Mathematics II
2. Enough additional courses from the following list to total, with the two required courses of the last item, at least 15 credit hours:
MAT 152 Calculus A
MAT 153 Calculus B
MAT 252 Calculus C
MAT 281 Introduction to Probability
MAT 282 Statistical Inference
MAT 292 Theory of Numbers
MAT 295 Linear Algebra
MAT 350 Differential Equations
MAT 352 Real Analysis
MAT 355 Complex Analysis
MAT 364 Numerical Analysis
MAT 366 Deterministic Models in Operations Research
MAT 370 Non-Euclidean Geometry
MAT 380 Probability and Statistics
MAT 383 System Modeling and Simulation
MAT 395 Abstract Algebra
MAT 460 Mathematical Modeling
MAT 461 Stochastic Models in Operations Research
MAT 490 Topology
MAT 492 Graph Theory and Combinatorics
PHI 205 Symbolic Logic
3. Completion of any one of the following three two-semester sequences:
CHY 113 with CHY 114 and CHY 115 with CHY 116
PHY 121 with PHY 114 and PHY 123 with PHY 116
BIO 105 with BIO 106 and BIO 107
4. Enough additional courses from E(2) or the sciences to make at least 30 credit hours combined of mathematics and science. A science course taken to fulfill this requirement must be one that
satisfies a degree requirement within its discipline and if it has an accompanying lab course the lab must be taken.
F. Communication skills requirement:
THE 170 Public Speaking
ITP 210 Technical Writing
Recommended Course Sequence
Suggested Schedule
The following schedule of mathematics and computer science courses is typical for the freshman and sophomore years.
             Fall               Spring
First year   COS 160, COS 170   COS 161, MAT 145
Second year  COS 280, COS 285   COS 250, COS 255
|
{"url":"https://usm.maine.edu/cos/bs-computer-science","timestamp":"2014-04-17T07:36:51Z","content_type":null,"content_length":"32782","record_id":"<urn:uuid:107cbed8-9086-4932-8e42-2b88428087da>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
divide [Topic: GROUPINGS]
Date: 1300-1400
Language: Latin
Origin: dividere, from videre 'to separate'
[intransitive and transitive] if something divides, or if you divide it, it separates into two or more parts
keep separate
also divide off [transitive] to keep two areas separate from each other:
also divide up [transitive] to separate something into parts and share them between people
spend time/energy
[transitive] if you divide your time, energy etc between different activities or places, you spend part of your time doing each activity or in each place
a) [transitive] to calculate how many times one number contains a smaller number [↪ multiply]
b) [intransitive] to be contained exactly in a number one or more times
divide into
[transitive] to make people disagree so that they form groups with different opinions:
7. to defeat or control people by making them argue with each other instead of opposing you
8. a feeling you have when two people you like have argued and you are not sure which person you should support:
—divided adjective:
|
{"url":"http://www.ldoceonline.com/Groupings-topic/divide_1","timestamp":"2014-04-17T04:31:46Z","content_type":null,"content_length":"39070","record_id":"<urn:uuid:30925de5-99b0-4d42-a0f8-aa6d0189c076>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Opa Locka Algebra 2 Tutor
Find an Opa Locka Algebra 2 Tutor
...Also, I have completed my Accelerated Christian Education (ACE) Supervisory Training and Accelerated Christian Education (ACE) Professional Development Training as a Lead Teacher at a private
school. I am looking forward to helping students from all walks of life. I am available for help with term papers, and one on one tutoring.
26 Subjects: including algebra 2, English, statistics, algebra 1
I am fully bilingual, fluent in both Spanish and English with a B.S. in Genetics and Microbiology from Rutgers University, and having completed an MD degree. Great mathematical skills,
specializing in elementary and high school math (basic math, algebra I & II and geometry), SAT math, ASVAB and GED...
46 Subjects: including algebra 2, Spanish, reading, writing
Hey guys!!!! My name is Carmen and I recently graduated from St. Joseph's University in Philadelphia. My major was chemical biology with a minor in Spanish, and I finished with a grade point average of 3.2.
9 Subjects: including algebra 2, Spanish, algebra 1, chemistry
...I look forward to helping you and am excited to hear from you. Thank you, JennI am very patient with students and enjoy teaching seeing as I aspire to become a mathematics professor. I have
always gotten A's in all my math courses and am a Dual-Enrolled student currently in Senior year of High School and finishing my AA in college.
11 Subjects: including algebra 2, geometry, algebra 1, trigonometry
...I'm able to work well under pressure, and in a fast paced environment. I can handle any situation placed in front of me. I have worked directly with children for 8 years.
13 Subjects: including algebra 2, geometry, algebra 1, GED
|
{"url":"http://www.purplemath.com/Opa_Locka_Algebra_2_tutors.php","timestamp":"2014-04-20T06:55:27Z","content_type":null,"content_length":"23957","record_id":"<urn:uuid:fdc33003-7ac3-4c65-8397-60308ee44150>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dragon Slayer - Brain Teaser
A dragon with one hundred heads is terrorizing a hamlet.
Lancelot, the dragon slayer, has been summoned.
Lancelot's quest : slay the dragon.
Lancelot's arsenal has only four major weapons.
Each weapon is only capable of decapitating a number of heads:
Battle Axe (5 heads),
Slasher (15 heads),
Dragon Scimitar (17 heads),
Abyssal Whip (20 heads).
However, following each attack the dragon grows back heads:
Battle Axe (twenty four heads),
Slasher (two heads),
Dragon Scimitar (fourteen heads),
Abyssal Whip (seventeen heads).
The dragon can only be slayed if Sir Lancelot uses an attack that cuts off exactly all its remaining heads.
If the dragon has ten heads remaining, it can not be slayed in one attack, but if it has twenty heads it can.
What is the minimum number of attacks required to slay the dragon?
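A breadth-first search over head counts can verify an answer. This sketch makes two assumptions the puzzle leaves unstated: a weapon is usable only when it cuts no more heads than remain, and the head count is capped to keep the search finite. Calling `min_attacks()` returns the minimum attack count, so run it only if you don't mind spoiling the puzzle.

```python
from collections import deque

# (heads cut, heads regrown) per weapon, from the puzzle statement.
WEAPONS = [(5, 24), (15, 2), (17, 14), (20, 17)]

def min_attacks(start=100, cap=400):
    """BFS over head counts; an attack that cuts exactly the remaining heads
    slays the dragon.  Returns the minimum number of attacks, or None."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        heads, n = queue.popleft()
        for cut, regrow in WEAPONS:
            if cut == heads:
                return n + 1              # exact cut: dragon slain
            if cut < heads:
                nxt = heads - cut + regrow
                if nxt not in seen and nxt <= cap:
                    seen.add(nxt)
                    queue.append((nxt, n + 1))
    return None                           # unreachable under these rules
```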
|
{"url":"http://www.pedagonet.com/brain/dragonslayer.html","timestamp":"2014-04-19T22:06:19Z","content_type":null,"content_length":"12533","record_id":"<urn:uuid:53352aed-931a-4514-993f-5a3349d6d547>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Conyers Algebra 2 Tutor
Find a Conyers Algebra 2 Tutor
...I've been involved in education for most of my adult life and I've still managed to keep my love for learning alive! I'm a homeschooling graduate who majored in Accounting in college and I
worked as a tax accountant before having my own children. I love English, reading, and literature and I sc...
30 Subjects: including algebra 2, Spanish, reading, biology
...Besides engineering I wanted to be a teacher so I've always tutored on the side to help students/peers specifically excelling in math courses. Seeing my students succeed is better than any
problem solving I can service during my regular job. I've been tutoring ever since I was a sophomore in high school so that gives me about 8 years of experience.
9 Subjects: including algebra 2, physics, calculus, precalculus
...As an undergraduate and graduate student in genetics, this subject is one that I know inside and out. I can tutor basic Mendelian genetics, Complex patterns of inheritance, Molecular biology/
genetics, and eukaryotic and prokaryotic genetics. I have also tutored genetics to undergraduate students.
15 Subjects: including algebra 2, chemistry, geometry, biology
...During my time in college, I have tutored various students in their College Algebra and a huge variety of courses over the course of four semesters and they experienced much improvement. During
my years in college, I tutored many math students in a variety of mathematics topics including College...
13 Subjects: including algebra 2, calculus, geometry, algebra 1
...I enjoy teaching Pre-algebra and I know how to present the topics in an nonthreatening manner. I have experience with tutoring students in AP statistics and college level statistics classes. I
am also knowledgeable in quantitative research methods such as t-tests, ANOVA, regression and SEM and know how to use SPSS, LISREL and R.
12 Subjects: including algebra 2, physics, statistics, algebra 1
|
{"url":"http://www.purplemath.com/Conyers_Algebra_2_tutors.php","timestamp":"2014-04-19T20:25:07Z","content_type":null,"content_length":"24015","record_id":"<urn:uuid:394aeec1-fb07-480e-a318-69b50c20864a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: IEEE 754 vs Fortran arithmetic
wsb@eng.Sun.COM (Walt Brainerd)
25 Oct 90 00:46:27 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers,comp.lang.fortran
From: wsb@eng.Sun.COM (Walt Brainerd)
Followup-To: comp.lang.fortran
Keywords: Fortran
Organization: Compilers Central
References: <9010230628.AA22160@admin.ogi.edu> <BURLEY.90Oct24025053@world.std.com>
Date: 25 Oct 90 00:46:27 GMT
In article <BURLEY.90Oct24025053@world.std.com>, burley@world.std.com (James C Burley) writes:
> I don't know any references, but I do know we ran into this problem
> implementing Fortran on a machine using an IEEE 754 math chip:
> REAL R(...)
> DATA R/0.5,1.5,2.5,3.5,.../
> DO I=1,...
> PRINT *,NINT(R)
> END DO
> END
> Fortran specifies that the following values must be output:
> 1, 2, 3, 4,...
We have had this discussion before some here, but to be a bit nit-picking,
"Fortran" (i.e., the standard) does not specify such things, as it
does not even specify what + means. It certainly does encourage
such things and a vendor must be prepared to answer to the customer,
but not worry about strict standard conformance in this area.
> However, the IEEE 754 defines nearest-integer so that using its function
> instead of Fortran's definition of NINT produces:
> 0, 2, 2, 4,...
> Also, Fortran specifically prohibits zero from being negative (or being
> significantly negative)
The appendix (not a legal part of the standard) says:
"A processor must not consider a negative zero to be different
from a positive zero."
I would take this as a SUGGESTION to make 0 and I-I compare true,
however the result of the subtraction is represented.
The standard (13.5.9) does say that a "negative signed zero"
must not be produced when doing numeric output into a formatted record.
So, from the point of view of the standard, printing 4-4 as 7 is OK,
but printing it as -0 (with say an I2 format) is not!
It's just a matter of who you run to if something doesn't work
the way you think it should.
Walt Brainerd Sun Microsystems, Inc.
wsb@eng.sun.com MS MTV 5-40
Mountain View, CA 94043
|
{"url":"http://compilers.iecc.com/comparch/article/90-10-105","timestamp":"2014-04-19T09:24:29Z","content_type":null,"content_length":"8203","record_id":"<urn:uuid:719f1860-a9ae-47c4-aa47-34b1735cbca9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question about the attitude in my solved basic-mission-6
Greeting folks,
For the basic mission 6, it requires you to use the encrypted password and the encrypt engine to find the decrypted password;
I solved the problem, but I'm not sure whether my attitude/approach is the right way,
so I want some opinion/review/advice...
What I did was:
I kept typing simple word patterns into the encrypt engine and tried to find a relationship between the input and output;
so it was a blind effort; I had no idea whether I would find anything just by doing so.
For this mission the encryption method is very simple, so I did figure out its algorithm, but
what about more complicated methods? Since you don't have access to (can't see) the encryption method,
how is it even possible to blindly figure out the behavior of an encryption process?
so, my question is:
if blind-figuring encryption behavior is the correct attitude for now, does it mean that for more complicated encryption, we need to use skills in Linear Algebra or Cryptography?
Re: Question about the attitude in my solved basic-mission-6
This is a great way to solve basic 6. In fact, it was how I solved it. The algorithm wasn't at all obvious to me back then. This is a technique called blackboxing. It's where someone continuously
enters data into a script to see how it responds, and then uses that information to figure shit out. It's a big technique used with SQL injection. For this mission, there would be no obvious way to see
what is happening with the algorithm, so I'd say you did a good job.
For those about to rock.
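Black-box probing is easy to demonstrate: feed a constant repeated-character pattern into the unknown function and see what structure falls out. The cipher below is a made-up stand-in for illustration, not the actual mission algorithm:

```python
def mystery_encrypt(plaintext):
    # Hypothetical black box: shifts each character by its position in the string.
    return "".join(chr(ord(c) + i) for i, c in enumerate(plaintext))

# Probe with a constant pattern so any position-dependence stands out:
# identical inputs producing drifting outputs reveals the index shift.
print(mystery_encrypt("aaaa"))  # "abcd": output drifts by one per position
```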
|
{"url":"http://www.hackthissite.org/forums/viewtopic.php?f=79&p=65561","timestamp":"2014-04-18T04:32:02Z","content_type":null,"content_length":"22881","record_id":"<urn:uuid:9bfa7329-984f-4a5b-893e-b6ff97be8941>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CGTalk - how does bit depth look like?
01-01-2010, 11:50 PM
Hi guys. I'm bad at maths, but I want to understand the bit depth of an image.
So 8 bits per channel look like 8 digits?
And 10 bits per channel look like 10 digits?
And 24 bits per pixel look like 24 digits correspondingly?
I simply don't know what the small digit over the number means; I'm simply zero at maths.
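For reference, the small raised digit is an exponent, not a count of decimal digits: n bits per channel means 2^n distinct levels per channel. A quick sketch:

```python
# Each extra bit doubles the number of representable values.
for bits in (8, 10, 24):
    print(f"{bits} bits -> {2 ** bits} distinct values")
# 8 bits per channel gives 256 levels (0-255);
# 24 bits per pixel gives 16,777,216 colors (3 channels x 8 bits).
```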
|
{"url":"http://forums.cgsociety.org/archive/index.php/t-838938.html","timestamp":"2014-04-19T09:48:24Z","content_type":null,"content_length":"20644","record_id":"<urn:uuid:4a29d9d8-a649-497d-97c3-b8ef120996df>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Information Theory b-log
Nevanlinna Prize
Nominations of people born on or after January 1, 1974
for outstanding contributions in Mathematical Aspects of Information Sciences including:
1. All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information
processing and modelling of intelligence.
2. Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.
Nomination Procedure: http://www.mathunion.org/general/prizes/nevanlinna/details/
|
{"url":"https://blogs.princeton.edu/blogit/2013/01/31/nevanlinna-prize/","timestamp":"2014-04-19T12:06:42Z","content_type":null,"content_length":"22542","record_id":"<urn:uuid:66addc01-23a0-4dde-8657-719292d7bee2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] Sparse: Efficient metod to convert to CSC format
Robert Cimrman cimrman3 at ntc.zcu.cz
Mon Oct 2 08:13:05 CDT 2006
William Hunter wrote:
> I've been bugging Robert C. and Ed S. (thanks guys) for a while about
> this now, but I think it might be good if I share my experiences with
> other users.
> When converting a sparse matrix to CSC format, one of the quickest
> ways to get there is to give <sparse.csc_matrix> three vectors
> containing;
> (1) [values]: the values,
> (2) [rowind]: the row indices and
> (3) [colptr]: the column pointer.
> If one times how long it takes to get your CSC matrix by doing
> sparse.csc_matrix((values, rowind, colptr)) compared to the other ways
> to get it, e.g., csc_matrix(dense_mtx), I've found that the first
> method is faster (enough to be significant for me).
> One can solve a sparse system with <linsolve.spsolve>, if your matrix
> is in CSC format. And here's my question: Let's say I have a way (and
> I might :-)) to construct those 3 vectors very quickly, how can I use
> them directly as arguments in <linsolve.spsolve>? For example:
> solution = linsolve.spsolve([values],[rowind],[colptr],[RHS])
If you have already (values, rowind, colptr) triplet, creating a CSC
matrix is virtually a no-operation (besides some sanity checks).
Is it really so inconvenient to write:
solution = linsolve.spsolve( csc_matrix( (values, rowind, colptr), shape
On the other hand, the syntax
solution = linsolve.spsolve( (values,rowind,colptr,'csc'), [RHS])
might be useful as a syntactic sugar, so I am not against adding it.
Note however, that you must indicate somehow if your triplet is CSR or
CSC or ... - it is not much shorter then writing the full CSC constructor.
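For anyone new to the triplet form, here is a minimal pure-Python sketch of what (values, rowind, colptr) encodes, using the same 0-based CSC convention as SciPy:

```python
def csc_to_dense(values, rowind, colptr, shape):
    """Expand a CSC triplet into a dense row-major matrix (list of lists)."""
    nrows, ncols = shape
    dense = [[0] * ncols for _ in range(nrows)]
    for j in range(ncols):
        # Entries of column j live in the half-open slice colptr[j]:colptr[j+1].
        for k in range(colptr[j], colptr[j + 1]):
            dense[rowind[k]][j] = values[k]
    return dense

# The 2x2 matrix [[1, 0], [2, 3]] in CSC form:
values = [1, 2, 3]
rowind = [0, 1, 1]
colptr = [0, 2, 3]
print(csc_to_dense(values, rowind, colptr, (2, 2)))  # [[1, 0], [2, 3]]
```

Because column j's entries occupy the slice colptr[j]:colptr[j+1], the arrays can be stored as-is, which is why building a csc_matrix from the triplet is nearly free apart from sanity checks.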
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2006-October/009443.html","timestamp":"2014-04-16T04:46:04Z","content_type":null,"content_length":"4547","record_id":"<urn:uuid:88486b62-dbf4-4a8d-8249-1ce4d8dbbdee>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Modeling and Simulation: Introduction for Scientists and Engineers
Author(s): Prof. Dr. Kai Velten
Published Online: 8 OCT 2009
Print ISBN: 9783527407583
Online ISBN: 9783527627608
DOI: 10.1002/9783527627608
This concise and clear introduction to the topic requires only basic knowledge of calculus and linear algebra - all other concepts and ideas are developed in the course of the book. Lucidly written
so as to appeal to undergraduates and practitioners alike, it enables readers to set up simple mathematical models on their own and to interpret their results and those of others critically. To
achieve this, many examples have been chosen from various fields, such as biology, ecology, economics, medicine, agricultural, chemical, electrical, mechanical and process engineering, which are
subsequently discussed in detail.
Based on the author`s modeling and simulation experience in science and engineering and as a consultant, the book answers such basic questions as: What is a mathematical model? What types of models
do exist? Which model is appropriate for a particular problem? What are simulation, parameter estimation, and validation?
The book relies exclusively upon open-source software which is available to everybody free of charge. The entire book software - including 3D CFD and structural mechanics simulation software - can be
used based on a free CAELinux-Live-DVD that is available on the Internet (works on most machines and operating systems).
|
{"url":"http://onlinelibrary.wiley.com/book/10.1002/9783527627608","timestamp":"2014-04-20T03:41:34Z","content_type":null,"content_length":"35771","record_id":"<urn:uuid:8bec9ba9-827c-42bd-b7fd-00ceb1c13c6b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Browse by 1. Referees
Number of items: 4.
Gerstenberger, Andreas (2013): Universal moduli spaces in Gromov-Witten theory. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics
Schwingenheuer, Martin (2010): Hamiltonian unknottedness of certain monotone Lagrangian tori in S2xS2. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics
Wehrheim, Jan (2008): Vortex Invariants and Toric Manifolds. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics
Fabert, Oliver (2008): Transversality Results and Computations in Symplectic Field Theory. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics
|
{"url":"http://edoc.ub.uni-muenchen.de/view/gutachter/Cieliebak=3AKai=3A=3A.html","timestamp":"2014-04-19T09:27:55Z","content_type":null,"content_length":"12043","record_id":"<urn:uuid:7ef2b64a-a82b-4d2c-a25c-defd9e8dba5b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Series
Tell me one of the easier approaches.
Impossible, I did not find any easy way except the logarithmic law.
That guy? Do they teach nothing today? Just dry mathematics and none of the flavor of the subject? Is this why humans are so bad at math? You are not pulling my leg are you?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Series
anonimnystefy wrote:
Oooh, Goldbach, of the Goldbach theorem. Didn't know his first name was Christian.
And look at the last post on the last page, just in case you miss it.
Looks like I was too late.
Logarithmic law? Let me search it up.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Series
It will do you no good with the problem but it is interesting.
Re: Series
There are so many things named the Logarithmic Law on the net, so I cannot know which one you are referring to. Could you tell me?
And what about Goldbach? What does he have to do with anything?
Re: Series
Look up Benford's law or rule.
Christian Goldbach was an amateur mathematician. Not very good. Yet he was able to come up with a problem that stumped Euler and every mathematician since. It is easy to invent problems that you or
no one else will be able to solve.
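Benford's law, mentioned above, says the leading digit d of many naturally occurring sequences appears with frequency log10(1 + 1/d), so 1 leads roughly 30% of the time. A quick empirical sketch using powers of 2 (a classic Benford sequence):

```python
import math
from collections import Counter

# Count the leading decimal digit of 2^1 through 2^1000.
counts = Counter(str(2 ** n)[0] for n in range(1, 1001))
for d in "123456789":
    predicted = math.log10(1 + 1 / int(d))
    print(d, counts[d] / 1000, round(predicted, 3))
# Leading digit 1 turns up about 30% of the time, matching log10(2) ~ 0.301.
```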
Re: Series
I will look it up.
You are talking about his conjecture,right? Who came up with those two problems you posted?
Re: Series
Another brainless fool!
Re: Series
I actually know about Benford's law, but I didn't remember it. Can you somehow calculate the number of digits of that number?
Who is that brainless fool?
Re: Series
I do not know the answer to that. Used up a lot of brain cells working on it though.
Who is that brainless fool?
You really want to know?
Re: Series
Is there a way to calculate the number of digits?
Re: Series
Not that I was able to discover. That is why it is a lifelong problem.
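For an ordinary whole number there is at least a standard digit-count formula, floor(log10 N) + 1; for the monster numbers in problems like this, even evaluating the logarithm may be out of reach, which is presumably the difficulty. A sketch of the ordinary case:

```python
import math

def num_digits(n):
    """Decimal digit count of a positive integer, computed exactly."""
    return len(str(n))

n = 2 ** 1000
print(num_digits(n))                          # 302
print(math.floor(1000 * math.log10(2)) + 1)   # 302, without ever building the number
```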
Re: Series
Ok. Who is the brainless fool?
And,any other problems?
Re: Series
Ok. Who is the brainless fool?
I can not answer that without a story. You do not like stories. You seem rather on the impatient side.
Re: Series
I love stories. You do not know me at all.
Last edited by anonimnystefy (2012-04-21 10:02:01)
Re: Series
Col Erich Von Hitler: Do you all know the reason why yous will never succeed bahhhhbbbbyemm?
bobbym: No why?
Col Erich Von Hitler: Becowse ya ahlwaays is working on a praahhhblem, you all can't solve.
Thaaats the difference betweens yous and mee. Eyes know whaaats to work on, problems Eyes can saahlve. That is whys Eyes is the head of this heeere installation and yours boss.
bobbym: I always thought it was because of all those guys with guns that you command.
Col Erich Von Hitler: Yous is dang funny baaaahbbyemm.
bobbym's gf; Oh, Erich you are so wonderful. I bet there is lots of things you can teach me.
bobbym: Colonel would you mind very much stepping outside and standing in front of my car?
Col Erich Von Hitler: Eyes just loves you all yankee sense of humor!
Re: Series
Ok... Is he the brainless fool?
Re: Series
Facepalm as misheeru would say!
I am that brainless fool that is why I can come up with these problems. The Colonel was and is right. Knowing what to work on and what not to is the secret of success!
Re: Series
You are not a brainless fool!
Re: Series
One of the problems had a solution so complicated that even after it was explained to me by Robert Israel and someone else I still could not understand it.
Re: Series
Which problem was it?
Re: Series
Shuffle a deck and lay it out on a table face up in a line. What is the chance that 3 adjacent cards are the same suit?
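A Monte Carlo run gives a feel for the answer even when the exact combinatorics is hard; the sketch below only estimates the chance that some run of 3 adjacent same-suit cards appears, it is not the exact value:

```python
import random

def has_triple(deck):
    """True if some 3 consecutive cards share a suit."""
    return any(deck[i] == deck[i + 1] == deck[i + 2] for i in range(len(deck) - 2))

deck = [suit for suit in range(4) for _ in range(13)]  # suits only; ranks don't matter
random.seed(0)
trials = 20000
hits = sum(has_triple(random.sample(deck, 52)) for _ in range(trials))
print(hits / trials)  # an estimate; consistently well above 1/2, despite intuition
```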
Re: Series
Wow, it seems so easy at first glance, but I know that these card problems can get pretty tough.
Re: Series
It was worse than tough. That is why I told the story of old Christian. A simply worded problem can be the hardest to solve.
Re: Series
Here is a problem with an even simpler statement:
"Double a cube."
Re: Series
That one is known to be unsolvable with straightedge and compass.
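Numerically the problem is trivial: the new edge is the old one times the cube root of 2, which is exactly why the classical restriction to straightedge and compass matters (2^(1/3) is not a constructible number). A one-line check:

```python
edge = 1.0
new_edge = edge * 2 ** (1 / 3)   # about 1.2599
print(new_edge ** 3)             # about 2.0: the cube's volume has doubled
```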
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=212611","timestamp":"2014-04-21T05:23:54Z","content_type":null,"content_length":"39311","record_id":"<urn:uuid:9f2507f8-f615-4eaa-a769-53598327089b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
which fraction is smaller, -2/7 or -1/3? ty
Re: which fraction is smaller, -2/7 or -1/3? ty
You should notice that in post#1 you have written
and in post #3 you have
these are not the same, that is why I asked.
This one is in a form we can solve for
Add 2 / 3 to both sides.
Do you follow this so far? Are you familiar with how to add two fractions? Or do you need some help with that?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
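As a quick check on the original question, Python's fractions module compares rationals exactly: -1/3 (about -0.333) lies to the left of -2/7 (about -0.286) on the number line, so -1/3 is the smaller fraction.

```python
from fractions import Fraction

a = Fraction(-2, 7)
b = Fraction(-1, 3)
print(min(a, b))           # -1/3 is smaller (more negative)
print(a + Fraction(2, 3))  # adding 2/3 to a fraction, as in the worked reply: 8/21
```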
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=203891","timestamp":"2014-04-18T06:02:35Z","content_type":null,"content_length":"14212","record_id":"<urn:uuid:20527a74-70b3-4b3d-899f-81c53f737b83>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Partition of a group into small subsets
A nonempty subset $S$ of a group $G$ is called small if there is an infinite sequence of elements $g_n$ in $G$ such that the translated sets $g_nS$ are pairwise disjoint.
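Restated symbolically (a direct transcription of the sentence above, with no extra assumptions):

```latex
S \subseteq G,\; S \neq \varnothing,\ \text{is \emph{small} if }
\exists\, (g_n)_{n \ge 1} \subseteq G \ \text{such that}\
g_m S \cap g_n S = \varnothing \ \text{whenever}\ m \neq n.
```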
Question: Is there a group which is a (disjoint) union of three small subsets, but it is not a union of two small subsets?
Remark: Such a group must be non-amenable (clear) and must not contain a copy of the non-abelian free group (in fact it is an exercise to see that the groups which are a union of two small sets are
exactly those containing the free group).
Bonus question: Is every non-amenable group a finite union of small subsets?
gr.group-theory geometric-group-theory
Did you look in the literature on "Tarski numbers"? This is not quite what you are asking about, but close. In particular, take a look at the proof that every non-amenable f.g. group is
paradoxical, see if you can use the proof to settle your bonus question. – Misha Jul 18 '13 at 14:00
Thanks Misha. Indeed one motivation is to quantify the Von Neumann problem, like the Tarski number do. In particular the first question is about low complexity counterexamples. Since the Tarski
number are not understood (to put it mildly) I was hoping to find an easier way to measure non-amenability. One can show quite easily that if a group is paradoxical with one of the "halves" puzzle
equivalent with the whole group by cutting in two pieces only, then the group is indeed a finite union of small sets... but no such example is known, except if the group contains the free subgroup.
– Dan Sălăjan Jul 18 '13 at 14:24
@EricTressler: Thanks for reply, but I don t understand what is the group in your example... – Dan Sălăjan Jul 18 '13 at 14:26
Dan: I see, but did you try to extract an answer to the bonus question from the proof of paradoxality for nonamenable groups rather than merely from its statement? (If you did not, it might be
worth trying.) Also, maybe you should directly ask Grigorchuk, he might know some interesting examples. – Misha Jul 18 '13 at 16:55
@Misha: thanks for the suggestion, it looks like worth trying indeed. Btw, the only proof I know is via Folner and Marriage but this cannot be Tarski's original proof, I guess... – Dan Sălăjan Jul
19 '13 at 14:38
1 Answer
I believe the answer to the bonus question is negative, that is, there exist non-amenable (finitely generated) groups which are not representable as a union of finitely many small
subsets. This is directly related to the problem of constructing non-amenable groups with arbitrarily large Tarski numbers discussed here. Here is a sketch of the argument.
Using a variation of the method that Mark Sapir has described in the above post, one can construct a finitely generated non-amenable group G with the following property:
For every n there is a finite index subgroup G(n) of G such that every n-generated subgroup of G(n) is finite.
Now suppose such G is a union of k small subsets S_1,..., S_k. Then each finite index subgroup H of G is also a union of k subsets, namely the intersections of S_1,...,S_k with H,
which are still small (as subsets of H). This easily implies that H must have a non-amenable subgroup generated by at most k^2 elements, which is a contradiction.
I accept this very interesting answer. The main question is unsolved for the moment, but this might change. – Dan Sălăjan Aug 17 '13 at 8:39
|
{"url":"http://mathoverflow.net/questions/137061/partition-of-a-group-into-small-subsets","timestamp":"2014-04-17T19:06:19Z","content_type":null,"content_length":"58540","record_id":"<urn:uuid:19427131-9a21-44f4-90b5-203cd6129465>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools Discussion: Research Area, Kissing is secret of Calculus and the wheel
Discussion: Research Area
Topic: Kissing is secret of Calculus and the wheel
Subject: Kissing is secret of Calculus and the wheel
Author: Sonny
Date: May 28 2004
Isaac Barrow taught his student, Isaac Newton, the secret of differentiation: the secant of a curve becomes tangent to a point of the curve. And Barrow taught Newton the secret of integration: as bar graphs under and over a curve shrink to segments up to the curve, their sum is the area under the curve. When a line just touches a point, mathematicians call this "a point of osculation", from Latin for "kissing". So kissing is the secret of the calculus. (Karl Menger taught me this in his book, "Calculus".) Also, a wheel can be considered topologically as "a stack of circles". Each circle touches just one point of the flat it rolls on: a kissing-segment. ONLINE, http://members.fortunecity.com/jonhays/kissing.htm shows this for differentiation by animation. I need help in animating this for integration, with full credit.
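Both limits described in the message are easy to watch numerically. The sketch below uses f(x) = x^2 purely as an illustration (it is not from the original page):

```python
f = lambda x: x ** 2

# Differentiation: secant slopes through x = 1 approach the tangent slope 2.
for h in (0.1, 0.01, 0.001):
    print((f(1 + h) - f(1)) / h)   # 2.1, 2.01, 2.001 -> 2

# Integration: left-endpoint bar sums on [0, 1] approach the area 1/3.
for n in (10, 100, 1000):
    print(sum(f(i / n) for i in range(n)) / n)  # -> 1/3 from below
```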
|
{"url":"http://mathforum.org/mathtools/discuss.html?context=dtype&do=r&msg=_____ra-27","timestamp":"2014-04-19T15:01:40Z","content_type":null,"content_length":"15953","record_id":"<urn:uuid:33cfb445-03ba-4783-ac3a-688ea2657b82>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Digest
Summaries of Articles about Math in the Popular Press
Edited by Allyn Jackson, AMS
Contributors: Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (Brown University), Annette Emerson (AMS)
Media Coverage of the 2006 International Congress of Mathematicians, Madrid
See the website http://www.icm2006.org for more information about the Congress.
"Four Are Given Highest Honor in Mathematics," by Kenneth Chang. New York Times, 22 August 2006.
"Russian recluse snubs academic world, rejecting math's equivalent of the Nobel prize," by Daniel Woolls. USA Today (Associated Press), 22 August 2006. (This wire story appeared in many newspapers and web sites.)
"Maverick genius turns down maths `Nobel'", by James Randerson. The Guardian, 22 August 2006.
"I Don't Get It," by Tucker Carlson. MSNBC, 22 August 2006.
"The Colbert Report," by Stephen Colbert. Comedy Central, 22 August 2006.
"Maths genius declines top prize." BBC News, 22 August 2006.
"Keep your million dollar prizes," by Robert Matthews. First Post (daily online magazine), 22 August 2006.
"Mittelloser Mathematiker verschmäht Millionenpreis (Destitute Mathematician Spurns Million-dollar Prize)." Frankfurter Allgemeine Zeitung, 22 August 2006.
"Exzentrisches Genie (Eccentric genius)", by Christoph Drösser. Die Zeit, 22 August 2006.
"El ruso Perelman rechaza la medalla Fields, la mayor distinción matemática (The Russian Perelman refuses the Fields Medal, the highest distinction in mathematics)", by Malen Ruiz de Elvira. El País, 22 August 2006.
"Le mathématicien russe Grigory Perelman a refusé la médaille Fields (The Russian mathematician Grigory Perelman has refused the Fields Medal)." Le Monde, 22 August 2006.
"Le Nobel des maths promis à un ermite russe (The Nobel of math promised to a Russian hermit)", by Yves Miserey. Le Figaro, 22 August 2006.
"Russian mathematician turns down Fields Medal," by Caroline McCarthy. News.com, 22 August 2006.
"La conjecture d'Henri Poincaré a tenu cent ans (The Conjecture of Henri Poincaré required one hundred years)", by Cyrille Vanlerberghe. Le Figaro, 22 August 2006.
"Le `Nobel des Mathématiques' conserve son mystère (The `Nobel of mathematics' retains its mystery)." Le Figaro (with Agence France Presse), 22 August 2006.
"`A little crazy' Russian is awarded with `Nobel prize' in maths." Regnum, 22 August 2006.
"3 University Mathematicians Accept the Fields Medal, While a 4th Winner Declines," by Jason M. Breslow. Chronicle of Higher Education, 23 August 2006.
"Genie verschmäht Fields-Medaille: Gregori Perelman lehnt höchste Auszeichnung der Mathematik ab (Genius Spurns Fields Medal: Gregori Perelman refuses the highest honor in mathematics)". Deutsche Presse-Agentur, 23 August 2006.
"Verleihung der Fields-Medaille in Madrid: Auszeichnung unter anderem für den publikumsscheuen Mathematiker Grigory Perelman (Awarding of the Fields Medal in Madrid: Honor among other things for the publicity-shy Grigory Perelman)," by George Szpiro. Neue Zürcher Zeitung, 23 August 2006.
"Los matemáticos subrayan en Madrid su creciente vinculación a la sociedad: La Unión Matemática Internacional mantiene la medalla Fields para Perelman, que la rechaza," by Malen Ruiz de Elvira. El País, 23 August 2006.
"Russian Declines World's Top Mathematician Award," by Shaveta Bansal. All Headline News, 23 August 2006.
"Das verschwundene Genie (The disappeared genius)," by Ulrich Schnabel and Johannes Voswinkel. Die Zeit, 24 August 2006.
"Maths: quatre lauréats pour la médaille Fields (Math: four laureates for the Fields Medal)", by Jean-Francois Augereau. Le Monde, 24 August 2006.
"Russian recluse shuns maths' highest honour." Nature, 24 August 2006, page 859.
"Perelman Declines Math's Top Prize; Three Others Honored in Madrid", by Dana Mackenzie. Science, 25 August 2006.
"Fields Medals," by Erica Klarreich. Science News, 26 August 2006, page 132.
"Eccentric mathematician shuns prestigious award." New Scientist, 26 August 2006, page 6.
"Burden of proof," by Marcus du Sautoy. New Scientist, 26 August 2006, pages 41-43.
"The Math Was Complex, the Intentions, Strikingly Simple," by George Johnson. New York Times, 27 August 2006.
"3 Professors Win Elite Math Medals," by Jason M. Breslow. The Chronicle of Higher Education, 8 September 2006, page A15.
"Meanwhile: Yes, but what about the other math geniuses?", by Malindi Corbel. International Herald Tribune, 13 September 2006.
About Grigory Perelman, Fields Medalist
"Major Math Problem Is Believed Solved By Reclusive Russian," by Sharon Begley. The Wall Street Journal, 21 July 2006, page A9.
"Genialer Einsiedler (Brilliant hermit)," by George Szpiro. Neue Zürcher Zeitung, 23 July 2006.
"Maths 'Nobel' rumoured for Russian recluse: Recent work confirms proof of century-old problem," by Jenny Hogan. Nature, 1 August 2006, page 859.
"Elusive proof, elusive prover: A new math mystery," by Dennis Overbye. International Herald Tribune (reprinted from New York Times), 15 August 2006.
"Meet the cleverest man in the world (who's going to say no to a $1m prize)," by James Randerson. The Guardian, 16 August 2006.
"Of Math Proofs and Millionaires." Editorial, New York Times, 16 August 2006.
"Ask Science: Poincare's Conjecture", by Dennis Overbye. New York Times, 18 August 2006.
"Who Cares About Poincaré?: Million-dollar math problem solved. So what?", by Jordan Ellenberg. Slate, 18 August 2006.
"World's top maths genius jobless and living with mother," by Nadejda Lobastova and Michael Hirst. Daily Telegraph, 20 August 2006.
"El 'Bobby Fischer' de las matemáticas: El ruso Perelman, que resolvió la conjetura de Poincaré, un problema del milenio, será el gran ausente de la reunión de Madrid," by Malen Ruiz de Elvira. El País, 20 August 2006.
"The world's cleverest man lives in Russia". Pravda, 21 August 2006.
"Prestigious Award, `Nobel' of Mathematics, Fails to Lure Reclusive Russian Problem Solver," by Kenneth Chang. New York Times, 23 August 2006.
"Esperando a Perelman (n+1)", by Javier Sampedro. El País, 23 August 2006.
"Grigory Perelman - Jewish genius of Russian math," by Boris Kaimakov. Russian News and Information Agency Novosti, 23 August 2006.
"Esperando a Perelman (n!)", by Javier Sampedro. El País, 24 August 2006.
"Math Breakthrough To Be Tested In Berkeley: Decade Old Problem Finally Solved," by Alan Wang. KGO-TV, 24 August 2006.
"An equation for controversy," by Siobhan Roberts. Globe and Mail, 26 August 2006.
"When being a genius just doesn't add up," by Katherine Kizilos. Sydney Morning Herald, 26 August 2006.
"No prizes for guessing." Comment, The Guardian, 26 August 2006.
"Manifold Destiny: A legendary problem and the battle over who solved it," by Sylvia Nasar and David Gruber. The New Yorker, 28 August 2006, pages 44-57.
"The Triumph of the Nerd," by Evgeny Morozov. International Herald Tribune, 31 August 2006.
About Andrei Okounkov, Fields Medalist
"Princeton professor wins math's highest honor, a Fields Medal," by Robert Stern. Newark Star-Ledger, 23 August 2006.
About Terence Tao, Fields Medalist
"He's got the numbers," by Brendan O'Keefe. The Australian, 23 August 2006.
"UCLA Math Professor Receives Fields Medal," by Larry Gordon. Los Angeles Times, 23 August 2006.
"A country devoid of ideas: The Government's neglect of higher education will haunt our future economy," by Michael Costello. The Australian: Opinion, 25 August 2006.
"Mozart of Maths," by Deborah Smith. Sydney Morning Herald, 26 August 2006.
About Wendelin Werner
"Les maths françaises encore à l'honneur (French mathematics honored once again)," by Cyrille Vanlerberghe. Le Figaro, 23 August 2006.
About Jon Kleinberg, Nevanlinna Prizewinner
"Auszeichnung für die Erforschung des WWW: Jon Kleinberg gewinnt Nevanlinna-Preis für Informatik (Honor for investigation of the WWW: Jon Kleinberg wins the Nevanlinna Prize for Computer Science)," by George Szpiro. Neue Zürcher Zeitung, 25 July 2006.
About Kiyosi Ito, Gauss Prizewinner
"Mathematician Ito wins new award." Japan Times, Wednesday,23 August 2006.
In these days of gigahertz and gigabytes, readability comes before small optimizations like these. Would you write x * x instead of using the proper notation if you were writing this on a piece of paper? I would say that readable code should strive to follow the same mathematical notation as is used everywhere else, whenever possible. So if I want to multiply a variable with another one, I would use the * operator, and if I wanted to raise x to some power, I would use pow(), since we obviously cannot use the ^ operator.
Just because these two operators can be mixed without changing the result does not mean it produces more readable or understandable code; on the contrary, it's a trick to avoid using some specific function, which sounds a bit like premature optimization to me :D
Of course, if the cmath library hadn't been included for other purposes in the OP, there would be no point at all in using pow() for such a trivial expression. I just don't see a reason to avoid using the correct function pow() for squaring, when it is readily available in the program already.
And don't forget heavenly expressions like pow(x, pow(pow(x, pow(x,0)),0))
Jokes aside, I think that there is indeed a line to be drawn somewhere. You draw the line at 1. I usually draw the line at 2, but I can extend it to 3, i.e., I would write (x * x) and I can accept (x
* x * x), but I would consider (x * x * x * x) to be time to reach for pow or an equivalent. It is easy for me to draw the line at 2 because (x * x) is both readable and efficient, whereas pow(x, 2)
is at best equally readable and probably less efficient. Furthermore, it may require unnecessary type conversion.
Originally Posted by Neo1
In these days of gigahertz and gigabytes, readability comes before small optimizations like these.
I posit that the function call does not increase readability.
Originally Posted by Neo1
Would you write x * x instead of using the proper notation if you were writing this on a piece of paper?
Probably not. I would use superscript notation. I would certainly not write pow(x, 2).
Originally Posted by Neo1
Would you write x * x instead of using the proper notation if you were writing this on a piece of paper? I would say that readable code should strive to follow the same mathematical notation as
is used everywhere else, whenever possible.
Unfortunately, it is impossible in C++, as you have noted yourself. I might write x ** 2 in Python, but even then x * x looks simpler to me.
That said, there are other considerations, e.g., if x were a more complicated expression instead, then pow(x, 2) (or x ** 2) may well be a simplification (and we avoid evaluating the expression twice).
Originally Posted by Neo1
Ofcourse, if the cmath library hadn't been includde for other purposes in the OP, there would be no point at all in using pow() for such a trivial expression
In other words, pow does not make the code more readable since the expression is so trivial that it is trivially understood either way, hence defeating the core of your argument.
Originally Posted by Neo1
Just because these two operators can be mixed without changing the result, does not mean it produces more readable or understandable code, on the contrary, it's a trick to avoid using some
specific function, sounds a bit like premature optimization to me
Choosing the more efficient of two readable options is not premature optimisation. It is common sense.
The fact that pow() is less efficient is only one of MANY reasons it is inappropriate. Squaring a number is, by definition, multiplying it with itself. Writing it as pow(x, 2.0) is a "premature
generalization." It also implies that you don't understand what squaring means.
Would you write x * x instead of using the proper notation if you were writing this on a piece of paper?
If I were writing it on paper I would use a superscript. I wouldn't write pow(x,2.0).
if i wanted to raise x to some power, i would use pow(), since we obviously cannot use the ^operator.
So if you COULD use '^' that would somehow be more intuitive or obvious? Explain. What does '^' have to do with exponentiation?
Just because these two operators can be mixed without changing the result, does not mean it produces more readable or understandable code, on the contrary, it's a trick to avoid using some
specific function, sounds a bit like premature optimization to me :D
Anybody who actually needs to square numbers as part of real code they are working on will have no problem understanding what is happening, nor will anybody else who reads or works with that
code. You're inventing imaginary problems.
Here's a mind-blower for you -- what's the best way to take a square root? sqrt(x) or pow(x, 0.5)?
Neo, you're beating a dead horse. It seems to me that you've reached the point where you're just disagreeing for the sake of disagreeing.
It comes down to these things:
A mathematical equation in C++ in general isn't going to look like the written form. Trying to make it look exactly so leads to nothing but disappointment and obfuscation. It's a failure to understand the meaning of the equation and translate that to an appropriate algorithm. If you want to exactly represent a written mathematical equation then you're using the wrong tool for the job.
Using what is absolutely well known to be identical except faster and usually shorter to write out is not premature optimisation. Using the slower, longer, and more complex-to-type one intentionally is just outright stupid.
It is quite common for the simpler, more specific operation to be more efficient and overall tidier than the general one. pow is more general because you can raise to any power you like such as
2.1. Straight multiplication between two variables is simpler and specific to the power undeniably being 2.0. sqrt(x) is another case. The sheer fact that sqrt can't raise a value to the power of
0.4 or any other value for example means that the assembly code or circuitry required to perform the operation is going to be able to take advantage of more assumptions and thus be simpler. It's
that lack of generality that gives it the edge.
It's just like how a swiss army knife does a mediocre job at a lot of things. If you want to do one thing and have it done well, then there is a better more specific tool out there for the job.
In this case, multiplication is that tool.
Although I disagree with Neo1, I can explain that one.
Most mathematicians will be familiar with the notion of using ^ for superscripting, and x^2 will be understood as "x squared". That convention is supported by most variants of TeX and also by maple, which are two of the most commonly used packages, particularly by mathematicians, for typesetting mathematics. TeX packages, for example, are freely available, and most mainstream mathematical journals will accept submissions in the form of TeX files because TeX can produce photo-ready output directly for publication.
However, arguing that one should use pow() because we can't use ^ for exponentiation, is the height of silliness. pow() is not a mathematical notation.
Neo1's argument that source code should express concepts in the same way a mathematician would is specious and academic, and doesn't work in the real world. The whole point of mathematical
derivations is to express a concept (or an algorithm) in a form that allows it to be understood and (to anyone except possibly a pure mathematician) to be applied. Any mathematician worth their
salt expects that someone who is attempting to use their work (formulae, etc) will interpret in a context that makes sense to the application. They expect that a software engineer will take steps
to ensure the code works correctly, is numerically stable, is maintainable, etc etc. They do not expect that a software engineer will slavishly follow mathematical notations at the expense of
remotely sane programming practice.
If you want an example of that, consider Gaussian elimination and the Gauss-Jordan elimination. Technically, they are both techniques for solving a system of linear equations with a square
coefficient matrix. They are mathematically equivalent (the only difference is that Gaussian elimination reduces the coefficient matrix to upper triangular form, and Gauss-Jordan elimination
performs operations in a different order and reduces the coefficient matrix to the identity matrix). To a mathematician, the two algorithms are equivalent. However, when used to solve a system of
linear equations on a computer, Gaussian elimination is generally both faster and has better properties related to numerical stability. However, on paper, the solution of a linear equation A*x =
b is often expressed by mathematicians as x = A(inverse)b (or $x=A^{-1}b$ in TeX notation) and Gauss-Jordan elimination can be used to directly produce A(inverse). By Neo1's argument, the best
algorithm to use for such an expression is Gauss-Jordan elimination.
Edit: Oh, and x = pow(A, -1)*b will not work either.
Thankfully, mathematicians and computer engineers are often both more pragmatic than that.
Besides the obvious (from my point of view) choice between x*x and pow(), I would also like to point out that the gravitational constant could also be written as 6.673 * 0.00000001, to allow those compilers without intrinsic functions to make some optimizations. It is not that hard to write a couple of important physics constants this way, and the advantage may be big, since it allows the compiler to evaluate the expression at compile time.
I concede; perhaps talking about readability in regards to such trivial expressions doesn't make sense. I will just say this however, Brewbuck: you strike me as a particularly angry individual. This thread consisted mostly of friendly banter until you came along, guns blazing, commenting repeatedly about my lack of mathematical prowess; you know full well that I have a firm grasp of what squaring a value means. I'm here to get smarter; it seems you are here to squash whoever is not in agreement with you.
It's been a while since you posted this, but I only just saw it now and you deserve a response from me.
My quip about your knowledge came across a lot worse sounding than I intended, and I will try to be more careful about how I phrase things in the future.
I tend to be rather crushing in my criticisms when I know for sure that something is wrong with an argument, implementation or otherwise. This is sort of inherited from all the people I've worked
with over a long period of time. Respect people, respect effort, respect curiosity, respect experimentation, but have a very low tolerance for things which simply aren't correct. It becomes
heated when one person's idea of "correct" does not match with someone else's. Anger is hard to judge across the Internet. Do I believe that I'm right and you aren't? Certainly, and I do not
apologize for that, but I was not angry and I regret that it seemed that way.
Also, I have a bad habit of using "you" as a stand-in for "a generic person" and I sometimes switch between addressing a specific person and talking generally, which can be confusing at best, and
although I don't remember for sure what I was thinking when I wrote my post, I'm pretty sure I didn't mean to call you an idiot, because I don't think you are one.
My apologies.
Fast Polymorphic Math Parser
The article is a proposal for a mathematical parser implementation which uses polymorphic methods for fast evaluation of given expressions. Although this paper talks about parsing input text, it
focuses on methods for fast calculation of a given mathematical formula. Very good performance can be achieved by generating opcode using basic assembler level instructions. The described library
interprets a math formula and emits, in run-time, machine code (using standard 8087 directives) which calculates a given expression. This approach is a combination of lexical analysis algorithms with
opcode manipulation techniques, widely used in compilers and virtual machines (platforms like .NET or Java).
Further tests have shown that the presented approach is very efficient and is almost as fast as math code generated during program compilation! It is also up to 30 times faster than other popular
math parsers.
It's assumed that the reader is familiar with C/C++ programming and the basics of the Assembler x86 language.
How Compilers Generate Code
Let's see how programs are executed inside a system, how compilers generate code, and how they operate on the computer's memory. The execution of a process generally requires four
kinds of memory areas:
• code area - This part of memory contains machine code - a series of instructions executed by the CPU.
• data segment - This segment contains static data and constants defined in the program.
• stack - The stack is responsible for storing program execution data and local variables.
• heap - Segments of memory that can be allocated dynamically (this part of memory is used by the malloc or new operator).
The program executes a series of instructions from the code area using data from the data segment. When a program calls a function (in the so-called cdecl manner), it pushes onto the stack all of the function's parameters and a return address (a pointer back into the code area), and then performs a jump to the code responsible for executing the given function. The stack is also the place where local variables are stored - when they are needed, the proper amount of memory is reserved on the top of the stack.
Let's analyze how compilers generate code which is responsible for mathematical calculations on a simple function that calculates a math expression, (x+1)*(x+2):
float testFun( float x )
{
    return ( x + 1.0f ) * ( x + 2.0f );
}
The compiler for the given function generates these assembler instructions:
; float testFun( float x )
; {
PUSH ebp
MOV ebp,esp
; return ( x + 1.0f ) * ( x + 2.0f );
FLD dword ptr [ebp+8]
FADD qword ptr [__real@3ff0000000000000 (1D0D20h)]
FLD dword ptr [ebp+8]
FADD qword ptr [__real@4000000000000000 (1D0D18h)]
FMULP st(1),st
; }
POP ebp
RET
When the function is called, the caller pushes onto the top of the stack the list of parameters and the return address. In our example, the function has only one parameter (of type float), so
at the moment of the function's execution, on the top of the stack, there are (in order):
1. 4-byte float variable
2. 4-byte return address
At the beginning of the function, the ebp register is saved on the stack (instruction PUSH ebp), and subsequently, the stack pointer (the esp register) is copied to ebp (MOV ebp,esp). After this
operation, ebp points to a place on the stack where an old value of ebp is stored, ebp+4 points to a return address, and ebp+8 points to the parameter of the function (variable x of type float).
The next part of the code performs strictly mathematical calculations (8087 instructions) using the mathematical coprocessor's internal stack. This stack contains eight registers symbolized by
ST(0), ST(1), ..., ST(7). The top of the stack is ST(0). The compiler interprets the math formula (x+1.0f)*(x+2.0f) and generates code which performs the calculations in sequence (with the
usage of the mentioned stack):
1. load variable x onto the top of the stack - ST(0) (instruction FLD)
2. add the constant 1.0 to ST(0) and store the result in ST(0)
3. load variable x onto the top of the stack - the old ST(0) becomes ST(1), and x is loaded into ST(0)
4. add the constant 2.0 to ST(0) and store the result in ST(0)
5. multiply ST(1) by ST(0), pop the stack, and leave the result in ST(0)
After this series of operations, ST(0) (the top of the floating point stack) contains the result of the given math formula. Subsequently, the program recovers the original value of the ebp register (POP ebp) and returns to the caller (RET). The output result is passed to the caller through the top of the math stack - register ST(0).
The basic idea behind a fast mathematical parser is to write code which generates other code for the calculation of the given math formula. The generated opcode should have the structure of the function presented above. When the program needs to use the math parser, it should first perform the analysis of the mathematical formula and generate the opcode. After this, every time calculations are needed, the program executes the generated code, producing the desired result. The main advantage of this approach is that the process of calculation is very fast and highly efficient - it imitates the math code that would have been generated during program compilation.
Syntactic Analysis
The basic issue in writing a mathematical parser is syntactic analysis - it gives the structure of a given expression, and tells how a formula should be interpreted. Simple math expressions (with
operators +, -, *, /, and parentheses) can be described by a set of so-called productions:
EXPR -> TERM + TERM
EXPR -> TERM - TERM
EXPR -> TERM
TERM -> FACT * FACT
TERM -> FACT / FACT
TERM -> FACT
FACT -> number
FACT -> x
FACT -> GROUP
GROUP -> ( EXPR )
These productions describe how given expressions are built - how a string of input characters can be broken down into smaller parts (terminals - like numbers, math operators, or symbols).
The production EXPR -> TERM + TERM means that the non-terminal EXPR can be divided into a sequence of a non-terminal TERM, the character '+', and another non-terminal TERM. The non-terminal FACT, for example,
can be broken down into a number, the symbol x, or another non-terminal - GROUP.
For a given mathematical expression, these productions define a parse tree that represents the syntactic structure of the input string. Let's have a look at the example formula,
(x+1)*(x+2). This expression can be derived from the non-terminal EXPR in this series of transformations:
EXPR -> TERM ->
-> FACT ->
-> FACT * FACT ->
-> GROUP * GROUP ->
-> ( EXPR ) * ( EXPR ) ->
-> ( TERM + TERM ) * ( TERM + TERM ) ->
-> ( FACT + FACT ) * ( FACT + FACT ) ->
-> ( x + 1 ) * ( x + 2 )
The productions defined earlier were used in each of these transformations. Knowing which production is applied at each step of the derivation is what makes it possible to generate machine instructions for
the calculation of a math formula.
Using Spirit as a Syntactic Analyzer
The algorithm for syntactic analysis is not the focus of this article - the paper describes only the basic knowledge which is needed to parse math formulas. Syntactic analysis can be performed by many
(freely available) libraries. In this article, the Spirit library from the Boost package was used.
class MathGrammar : public grammar<MathGrammar>
{
public:
    template <typename ScannerT>
    struct definition
    {
        definition( MathGrammar const& self )
        {
            expr = term >> *( ( '+' >> term )[ self._doAdd ] |
                              ( '-' >> term )[ self._doSub ] );
            term = fact >> *( ( '*' >> fact )[ self._doMul ] |
                              ( '/' >> fact )[ self._doDiv ] );
            fact = real_p[ self._doConst ] |
                   ch_p('x')[ self._doArg ] |
                   group;
            group = '(' >> expr >> ')';
        }

        rule<ScannerT> expr, term, fact, group;
        rule<ScannerT> const& start() const { return expr; }
    };

    AddFunctor _doAdd;
    SubFunctor _doSub;
    MulFunctor _doMul;
    DivFunctor _doDiv;
    ArgFunctor _doArg;
    ConstFunctor _doConst;
};
The class MathGrammar derives from the generic Spirit class grammar<MathGrammar>. It uses four non-terminals (expr, term, fact, and group). In the constructor of the internal class definition,
the productions of the grammar are defined. Productions are built by using overloaded operators:
• >> is an operator which merges nonterminals and terminals. For example, a>>b matches the expression ab.
• * is the Kleene star operator - the multiple concatenation of symbols inside parentheses. For example, *(a>>b) matches the expressions: ab, abab, ababab, etc.
• | is an OR operator. For example ,(a|b) matches expressions a or b.
• [] - these brackets contain functors which will be called when the given production matches.
• real_p is the terminal representing the constants - real numbers.
• ch_p('x') is a terminal representing the char x.
The usage of the presented class is very simple - during the process of expression parsing, when a given production is detected, a defined functor is called. The analysis is always performed
bottom-up - this means that at first, productions are called from the bottom of the parse tree.
Let's analyze the formula (x+1)*(x+2) with the code given below:
// instance of grammar class
MathGrammar calc;
// perform string parsing
if ( false == parse( "(x+1)*(x+2)", calc ).full )
{
    // parse error handling
}
During the parse procedure execution, the defined functors are called in order:
1. x argument (functor ArgFunctor _doArg)
2. 1 constant (functor ConstFunctor _doConst)
3. + operator (functor AddFunctor _doAdd)
4. x argument (functor ArgFunctor _doArg)
5. 2 constant (functor ConstFunctor _doConst)
6. + operator (functor AddFunctor _doAdd)
7. * operator (functor MulFunctor _doMul)
This order of operations can be directly used during the generation of the proper machine code for the calculation of the given formula. The obtained notation:
x 1 + x 2 + *
is a so-called Reverse Polish Notation (RPN) or postfix notation - operators follow their operands and no parentheses are used. The interpreters of RPN are often stack-based; that is, operands are
pushed onto the stack, and when the operation is performed, its operands are popped from the stack, and its result is pushed back on.
The Machine Code Emission
The idea of the math parser described in this article is based on dynamic machine code generation during program run-time. The program interprets the input text (given by a user) using the syntactic
analyzer - the input formula is transformed to a more suitable postfix notation, which can be calculated with the usage of the processor's math stack. After this analysis, the program generates the
machine code which executes the process of evaluation of the math expression.
How do we generate the machine code in Windows dynamically? Firstly, the program must allocate a new memory segment with read/write access using the function VirtualAlloc (in the code below, it
allocates 1024 bytes of memory).
#include <windows.h>
// allocate memory segment with read/write access
BYTE *codeMem = (BYTE*)::VirtualAlloc( NULL, 1024,
MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE );
if ( NULL == codeMem )
{
    // error handling
}
This memory will contain the body of the new function that will be generated. The code emitted into the allocated memory should have this structure:
1. PUSH ebp - save the base register ebp on the stack.
2. MOV ebp,esp - put the current stack pointer esp into the base register ebp.
3. The series of mathematical calculations on the function parameter (the parameter is placed on the stack at the address ebp+8).
4. POP ebp - restore the saved base register ebp value.
5. RET - return from the function instruction.
The table below describes the opcodes of instructions which should be emitted when the syntactic analyzer detects a mathematical operation:
Assembler code           Machine opcode                Operation
PUSH ebp                 0x55                          Save the base pointer on the stack.
MOV ebp, esp             0x8B, 0xEC                    Move the stack pointer to the base pointer.
FLD dword ptr [ebp+8]    0xD9, 0x45, 0x08              Load the float parameter (from ebp+8) to ST(0).
FLD [mem32]              0xD9, 0x05, xx, xx, xx, xx    Load the constant from a given address to ST(0) (the last four bytes are the address of the constant in memory).
FADDP st(1), st(0)       0xDE, 0xC1                    Add ST(0) to ST(1), store the result in ST(1), and pop ST(0).
FSUBP st(1), st(0)       0xDE, 0xE9                    Subtract ST(0) from ST(1), store the result in ST(1), and pop ST(0).
FMULP st(1), st(0)       0xDE, 0xC9                    Multiply ST(1) by ST(0), store the result in ST(1), and pop ST(0).
FDIVP st(1), st(0)       0xDE, 0xF9                    Divide ST(1) by ST(0), store the result in ST(1), and pop ST(0).
POP ebp                  0x5D                          Restore the base pointer from the stack.
RET                      0xC3                          Return from the function.
The described instructions (those beginning with Fxxx) operate on the processor's internal math stack, and always use at most the two top registers - ST(0) and ST(1). With them, the program can easily
calculate every simple math expression. Of course, this list of mathematical operations can be extended with more sophisticated functions (like logarithms, trigonometric functions, roots, powers, etc.),
but this article focuses only on the idea of machine code generation - for simplicity, it describes only simple math operators.
The instruction FLD is responsible for loading values onto the math stack from memory. This operation is used to load the parameter x or the constants from the given math formula. The
latter case demands a special table (inside memory) with the saved numeric values.
During the expression's parsing, the program creates a table with constants, which will be used during an evaluation process. When the syntactic analyzer detects a constant in a formula (for example,
1 in (x+1)) - it writes its value into the table, and emits the opcode FLD [mem32] with a pointer to a place in memory where the constant was saved. When the generated code is executed, this
instruction loads the value of the given constant (read from memory) on the top of the stack.
After the process of opcode emission, the program should change the access rights of the allocated memory (given by the pointer codeMem) from read/write to execution only:
// change access protection for memory segment to execute only
DWORD oldProtect;
BOOL res = ::VirtualProtect( codeMem, 1024, PAGE_EXECUTE, &oldProtect );
if ( FALSE == res )
{
    // error handling
}
After this operation, any attempt to write into this memory will cause an access violation. In Windows systems, this approach protects against malicious code injection. Execution of the emitted code can be
performed with simple code as shown below:
float ( *fun )( float );
fun = ( float ( __cdecl * )( float ) )codeMem;
fun( x );
Of course, when the generated formula evaluator is no longer needed, the allocated memory should be released:
if ( NULL != codeMem )
{
    // deallocate memory segment
    BOOL res = ::VirtualFree( codeMem, 0, MEM_RELEASE );
    if ( FALSE == res )
    {
        // error handling
    }
}
Using the Code
The project attached to this article contains an example implementation of the described fast polymorphic math parser. The library is based on these classes:
• MathFormula - Class containing code emission functionality. It encapsulates the handling of memory segments with the opcode to execute.
• MathGrammar - Class defining the mathematical expression grammar.
• AddFunctor, SubFunctor, MulFunctor, DivFunctor, ArgFunctor, ConstFunctor - The set of functors for handling math grammar productions.
The usage of the library is very simple:
#include "MathFormula.h"
// create MathFormula instance for given expression
MathFormula fun( "(x+1)*(x+2)" );
// perform evaluation of expression for given argument x
float result = fun( 2.0f );
The constructor of the MathFormula object performs a syntactic analysis of the input string by using the MathGrammar class. During the expression interpretation, proper functors are called when the
given production rules are satisfied. Each functor invokes the emission methods in the MathFormula object - generating the machine code. After the initial analysis, math formulas can be evaluated
with the usage of the overloaded parentheses operator.
The attached project can be compiled under Visual Studio 2008 with the usage of the Boost library. Information about the build process can be found in the ReadMe.txt file.
As we can see, the implementation of the math expression evaluator with the usage of dynamically generated machine code is quite simple. It is a combination of a syntactic analyzer (in this article,
we used the Spirit library), a few WinAPI functions for memory allocation, and some basic knowledge about programming in Assembler.
The described implementation of the math parser has a very important advantage - it is almost as fast as the code generated statically during the program compilation! The table below shows the
average math formula evaluation times for the expression (x+1)*(x+2)*(x+3)*(x+4)*(x+5)*(x+6)*(x+7)*(x+8)*(x+9)*(x+10)*(x+11)*(x+12), for different types of parsers:
Math parser   Execution time [ns]
C++           20.7
MathParser    22.2
muParser      700.69
uCalc         762.36
fParser       907.07
The presented execution times are the average from 100 000 000 executions, and they all were performed on the processor: AMD Turion X2 2.0 GHz.
C++ stands for code generated by the compiler during compilation. Our math parser (MathParser) execution time is only 7% worse than the statically compiled C++ code! In comparison, the table also contains execution times for mathematical parsers which are freely available on the internet - they are at least 30-35 times slower than MathParser!
Future Work
This article describes only an idea of a polymorphic math parser - of course, the presented code can be extended to more advanced solutions:
• more sophisticated mathematical functions could be used,
• the program could use more than one input function parameter,
• for better performance, optimizations on the generated opcode could be performed (for example: mathematical expression reduction),
• SIMD (Single Instruction Multiple Data) assembler instructions could be considered for much faster calculations on the whole vectors of values - the usage of the SSE processor's extensions.
Dynamically generated code techniques, borrowed from JIT (Just In Time) compilers, open new possibilities for very fast evaluation of given math expressions. Polymorphic math parsers could be applied
in systems which need to perform fast, massive function evaluations for every math formula given by the user.
• 30 November 2009: Article creation
• 03 December 2009: Minor modifications
Optimization App with the fmincon Solver
This example shows how to use the Optimization app with the fmincon solver to minimize a quadratic subject to linear and nonlinear constraints and bounds.
Consider the problem of finding [x[1], x[2]] that minimizes

f(x) = x[1]^2 + x[2]^2

subject to the constraints

x[1]^2 + x[2]^2 >= 1
9*x[1]^2 + x[2]^2 >= 9
x[2] <= x[1]^2
x[1] <= x[2]^2
x[1] + x[2] >= 1
0.5 <= x[1]

The starting guess for this problem is x[1] = 3 and x[2] = 1.
Step 1: Write a file objecfun.m for the objective function.
function f = objecfun(x)
f = x(1)^2 + x(2)^2;
Step 2: Write a file nonlconstr.m for the nonlinear constraints.
function [c,ceq] = nonlconstr(x)
c = [-x(1)^2 - x(2)^2 + 1;
-9*x(1)^2 - x(2)^2 + 9;
-x(1)^2 + x(2);
-x(2)^2 + x(1)];
ceq = [];
Step 3: Set up and run the problem with the Optimization app.
1. Enter optimtool in the Command Window to open the Optimization app.
2. Select fmincon from the selection of solvers and change the Algorithm field to Active set.
3. Enter @objecfun in the Objective function field to call the objecfun.m file.
4. Enter [3;1] in the Start point field.
5. Define the constraints.
● Set the bound 0.5 ≤ x[1] by entering [0.5,-Inf] in the Lower field. The -Inf entry means there is no lower bound on x[2].
● Set the linear inequality constraint by entering [-1 -1] in the A field and enter -1 in the b field.
● Set the nonlinear constraints by entering @nonlconstr in the Nonlinear constraint function field.
6. In the Options pane, expand the Display to command window option if necessary, and select Iterative to show algorithm information at the Command Window for each iteration.
7. Click the Start button as shown in the following figure.
8. When the algorithm terminates, under Run solver and view results the following information is displayed:
● The Current iteration value when the algorithm terminated, which for this example is 7.
● The final value of the objective function when the algorithm terminated:
Objective function value: 2.0000000268595803
● The algorithm termination message:
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the function tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
● The final point, which for this example is
9. In the Command Window, the algorithm information is displayed for each iteration:
Max Line search Directional First-order
Iter F-count f(x) constraint steplength derivative optimality Procedure
0 3 10 2 Infeasible start point
1 6 4.84298 -0.1322 1 -5.22 1.74
2 9 4.0251 -0.01168 1 -4.39 4.08 Hessian modified twice
3 12 2.42704 -0.03214 1 -3.85 1.09
4 15 2.03615 -0.004728 1 -3.04 0.995 Hessian modified twice
5 18 2.00033 -5.596e-005 1 -2.82 0.0664 Hessian modified twice
6 21 2 -5.327e-009 1 -2.81 0.000522 Hessian modified twice
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the function tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
Active inequalities (to within options.TolCon = 1e-006):
lower upper ineqlin ineqnonlin
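The same problem can also be reproduced outside the Optimization app. The sketch below uses SciPy's minimize as a stand-in solver (an assumption for illustration; it is not part of the toolbox described here), with the same objective, constraints, bounds, and start point:

```python
from scipy.optimize import minimize

def objecfun(x):
    return x[0]**2 + x[1]**2

# SciPy expects inequality constraints in the form g(x) >= 0
constraints = [
    {"type": "ineq", "fun": lambda x: x[0]**2 + x[1]**2 - 1},
    {"type": "ineq", "fun": lambda x: 9*x[0]**2 + x[1]**2 - 9},
    {"type": "ineq", "fun": lambda x: x[0]**2 - x[1]},
    {"type": "ineq", "fun": lambda x: x[1]**2 - x[0]},
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1},  # the linear constraint
]
res = minimize(objecfun, x0=[3, 1], method="SLSQP",
               bounds=[(0.5, None), (None, None)],  # 0.5 <= x1, x2 free
               constraints=constraints)
print(round(res.fun, 4))  # 2.0, matching the objective value reported above
```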
Possible Answer
Investopedia explains 'Subjective Probability': subjective probabilities differ from person to person. Because the probability is subjective, it contains a high degree of personal bias. An example of subjective probability could be asking New York Yankees fans, ...
Subjective probability is used in many business situations (i.e., estimating ...
An Explanation of Send-Off Intervals
By Kerry O'Brien
Head Coach of Walnut Creek Masters
Intervals and When to Go
A common dilemma among beginner to intermediate swimmers is the concept of send-off intervals. These refer to the total amount of time between the starts of repeats in a set: the time needed to perform the swim, plus whatever time is left over, which is used for rest. The less time used to complete the swim, the more is left for rest before the beginning of the next repeat cycle.
As an example, by merely resting :15 sec between your repeats, there is no accountability for the time of the swim. The repeats could be getting progressively slower, but the rest remains the same. There is a time and place for this kind of training, like just trying to build an aerobic base (ex: swim a 1,500 with :15 rest every 300). Here the focus is primarily swimming a lot of laps and building conditioning.
But, as training progresses and you begin to use different energy systems for pacing, speeds and other training and race strategies, the intervals become more exact and beneficial. Another nice
advantage to using send-off intervals is that you will virtually always leave on a 5 or a 0, therefore calculating speeds and pace is much easier.
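As a simple worked example (the times below are made up for illustration), on a 1:30 send-off interval a swimmer's rest depends entirely on how fast the repeat was swum:

```python
# hypothetical set: 10 x 100 on a 1:30 send-off interval
interval = 90        # seconds from the start of one repeat to the next
swim_time = 78       # seconds to complete the repeat (1:18)
rest = interval - swim_time
print(rest)          # 12 seconds of rest before the next repeat goes off
```

Swim the repeat two seconds slower and the rest shrinks to ten seconds; the interval holds you accountable.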
The big deal, however, is accountability - keeping you true to a plan within the set. Most sets have an underlying reason for doing them. Interval training will make the biggest difference and offer the largest payback in terms of improvement. Getting comfortable using intervals just takes practice. Start with easy-to-calculate intervals and progress from there. In no time, your swimming will become more meaningful to you, guaranteed!
See the workout below for examples.
Good Swimming!
Cellular Automata
Cellular Automata And Electric Power
The motivation for developing Capow as an EPRI project is that cellular automata are parallel systems and the electric power grid is a parallel system. By investigating complex cellular automata we
get a better qualitative idea about possible behaviors of a physical parallel system like the power grid. So as to try and make the analogy closer, we specifically looked at cellular automata which
embody wave motions and oscillations similar to those of electrical systems.
Regarding the parallelism of the electric power grid, note that generators, loads, wires, and switching stations interact with each other locally. The laws of physics cause each element of the system
to update itself locally and independently in real time. The behavior of a circuit breaker, for instance, depends on the preset parameters of the circuit breaker and on the current in the wires
coming into the breaker. The current in a wire depends on the wire’s physical parameters and on the generators, switches or loads connected to the wire.
The electric power grid is like a large parallel computer which has been assembled and programmed with an uncertain knowledge of what the full system will do. Although it was built and designed to
distribute power, the grid also embodies an unknown and unpredictable computation system. This is why there is sometimes no one really compelling explanation for a given power surge or outage. An
anomalous event can emerge as the unpredictable result of the system’s massively parallel "computation".
Unpredictability is an essential aspect of parallel systems. It is very commonly the case that it is impossible to predict the behavior of a parallel system by any means other than actually simulating the system in action. This is known, for instance, to be true of cellular automata. Most cellular automaton computations are "algorithmically incompressible", meaning that there is no faster way to predict the eventual output of a CA than actually running the CA.
Exploring cellular automata with CAPOW gives us a better intuition about parallel systems such as the power grid.
The Wave Equation mode of CAPOW can be used as a model for an electrical waveform travelling down a transmission line. Switching to a nonlinear wave equation such as Quadratic Wave or Cubic Wave
allows us to model nonlinear effects of physical transmission lines.
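As a rough sketch of what such a rule computes (an illustration only, not CAPOW's actual update code), here is a minimal one-dimensional discretized wave equation, where each cell updates from its own history and its two neighbors:

```python
def wave_step(u_prev, u, c2=0.5):
    """One synchronous CA update of the discrete wave equation:
    u_next[i] = 2*u[i] - u_prev[i] + c2*(u[i-1] - 2*u[i] + u[i+1]),
    with fixed zero-valued boundary cells."""
    u_next = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        u_next[i] = 2*u[i] - u_prev[i] + c2*(u[i-1] - 2*u[i] + u[i+1])
    return u_next

u = [0.0] * 9
u[4] = 1.0                      # a single raised cell in the middle
u_next = wave_step(u[:], u)     # u_prev == u means zero initial velocity
print(u_next)  # [0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0, 0.0]
```

The bump splits into left- and right-moving disturbances, each advancing one cell per update, the CA's "speed of light".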
The Oscillators and Diverse Oscillators rules allow us to examine behavior of electrical oscillators. In addition we have a family of Oscillator Chaotic.DLL rule which simulates chaotic oscillations.
If we think of these oscillators as generators or loads, we can imagine coupling them together in a network connected by the Wave Equation. The rules Wave Oscillators, Diverse Wave Oscillators
demonstrate this effect. An inspiration for the Diverse Wave Oscillators rule was to represent the dynamics of a power grid to which a large number of differently responding oscillatory loads are attached.
In order for these representations to be meaningful, we must think of the CAPOW display as representing a power grid roughly the size of a state. This is because there is an issue of dimensional scale in our representation of oscillating loads by these cellular automata. A CA of necessity represents the speed of light (or the transmission of electrical signals) at a scale in which "light" travels at one space cell per update. Focus on the spacetime representation of a one-dimensional CA rule. If c is the speed of light, then whenever one horizontal screen centimeter represents WorldXperCM units of physical distance, one vertical screen centimeter represents WorldXperCM/c time units; call this WorldTperCM. Conversely, WorldXperCM is c*WorldTperCM. Since c is roughly 3 x 10^10 cm/sec, if one horizontal screen centimeter represents 1 centimeter of physical distance, then a vertical screen centimeter represents 0.33 x 10^-10 seconds, which is much smaller than the oscillation cycle times considered in power engineering. In electrical power systems we are very often interested in oscillation frequencies on the order of sixty cycles per second, with an oscillation cycle of about 0.016 seconds. If we want to spread such a cycle over the height of a typical computer screen, we might want a vertical centimeter to represent something like 0.001 seconds of time, that is, a WorldTperCM of 0.001, which gives a WorldXperCM of 3 x 10^7 centimeters, or 300 kilometers, a reasonable length scale for a state-wide or nation-wide power grid.
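The scale arithmetic above is easy to check directly:

```python
c = 3.0e10                     # speed of light, cm per second
world_t_per_cm = 0.001         # seconds of simulated time per vertical cm
world_x_per_cm = c * world_t_per_cm   # physical cm per horizontal cm
print(world_x_per_cm / 1.0e5)  # approximately 300 km per horizontal cm
```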
The two-dimensional CAs we investigate are also of interest in modeling the electrical power grid. Here we should regard the CA rules as giving a "satellite-view" of a power grid. Imagine looking
down at a dense power grid from several hundred kilometers above the Earth’s surface. At this distance we might think of each cell as a generator or load node that is connected to an adjacent node;
if the grid is sufficiently dense, as in a city, we can abstract away from representing the connecting wires.
In the two-dimensional CAs we can represent, as before, linear and nonlinear waves as well as chaotic and non-chaotic oscillations which can be linked together by waves. The 2D Oscillator Wave.DLL
and the 2D Oscillator Wave Chaotic.DLL are examples of such rules.
An additional kind of CA phenomenon arises in two-dimensional system, this is the reaction-diffusion kind of rule. 2D Hodge.DLL is a good example of such a rule. This type of rule produces a pattern
of spirals of excitation and inhibition similar to that seen in the spread of rolling power black-outs in which the circuit-breakers repeatedly attempt to reconnect the circuit. The spiral patterns
of the 2D Hodge.DLL rule are similar to a satellite view of the lights going on and off in a city experiencing a rolling power black-out.
what is the slope of a line that passes through the point (-5,3) and is parallel to a line that passes through (2,13) and (-4,-11)?
• 10 months ago
Drawing a diag would help figure you out ;)
Find the slope using (2, 13) and (-4, -11)
is it 3,11?
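For reference, the slope works out as follows; parallel lines share the same slope, so the point (-5, 3) does not enter the calculation:

```python
# slope of the line through (2, 13) and (-4, -11)
x1, y1 = 2, 13
x2, y2 = -4, -11
m = (y2 - y1) / (x2 - x1)   # rise over run
print(m)  # 4.0, so the parallel line through (-5, 3) also has slope 4
```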
UCCS | Math - 136Lab3
Math 136 - Calculus II - Lab 3
M136 - Calculus II - Lab 3. Approximating Functions with Taylor Polynomials
We have seen how Taylor polynomials can be constructed if we wish to approximate a given function with a polynomial, and we have also seen that functions can be represented by a Taylor series. However, some important questions immediately come to mind. If we wish to represent a function as a Taylor series, how can we be sure that the representation is accurate? For example, can we accurately determine the value of ln 3 using a Taylor series expansion centered at x=1? Does it matter where we center our series?
For some functions, such as e^x, a Taylor series centered at x[0]=0 is a valid representation of e^x for any value of x. If x is far away from 0, we may have to evaluate many terms in the series before we can get an accurate enough estimate of the true value, but eventually we can get there. Unfortunately, however, we cannot get an accurate estimate for ln 3 using a series expanded about x=1 - no matter how many terms we include. In fact, the estimate actually becomes worse as we increase the number of terms used. The reason is that the series only converges on the interval (0,2]. Outside of this interval, the series does not converge to a finite value. In this lab you will be asked to graphically determine how centering the series expansion about different values of x affects the interval of convergence.
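This behavior is easy to verify numerically. The following sketch (written in Python rather than Maple, purely as an aside) sums the Taylor series of ln(x) about x=1 and shows that adding terms helps inside the interval of convergence but hurts outside it:

```python
import math

def taylor_ln_about_1(x, n_terms):
    # ln(x) = sum over n >= 1 of (-1)^(n+1) * (x-1)^n / n, convergent on (0, 2]
    return sum((-1)**(n + 1) * (x - 1)**n / n for n in range(1, n_terms + 1))

# inside (0, 2]: the approximation error shrinks as terms are added
print(abs(taylor_ln_about_1(1.5, 30) - math.log(1.5)))   # roughly 1e-11

# outside, at x = 3: the partial sums grow instead of converging to ln 3
print(abs(taylor_ln_about_1(3.0, 10)))
print(abs(taylor_ln_about_1(3.0, 20)))
```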
Maple has the capability to generate Taylor polynomials of arbitrary degree about any given value of x. We will use this capability to graphically estimate intervals of convergence. To illustrate the method, we will use the preceding example. We begin by defining our function f(x)=ln x. Note that to define a function in Maple, we write the function name, then :=, then the input variable, then the arrow ->, then the actual function, and of course we end with a semicolon. Also note that Maple displays the natural log function in an odd manner - it just prints the ln, and understands the x.
To get the 4th degree Taylor polynomial for f(x) centered at x=1, we enter the following command. Note that we tell it to center about x=1 and ask it for 5 terms (one more than the degree of the polynomial we want, since the count includes the order term):
Note that this expression is not a polynomial. It contains an error term (the last term in the expression). We must convert this into a polynomial before we can use it:
> convert(%,polynom);
This can all be done in one step using:
> convert(taylor(f(x),x=1,5),polynom);
We can now plot this polynomial along with the function and get an idea where the two graphs are close:
> plot({f(x),%},x=0..3);
Which is which? The line that curves down sharply represents the Taylor polynomial while the line that continues upwards is the actual function. Note that as x approaches 3, the polynomial moves away
from the graph of ln(x).
In order to graphically estimate the interval of convergence, we will need to compare the graphs of a sequence of Taylor polynomials with the graph of the original function. The following command
generates such a sequence and assigns it to the variable "tayseq".
> tayseq:={seq(convert(taylor(f(x),x=1,n),polynom),n=1..11)};
This set contains the first through tenth degree Taylor polynomials of ln(x) centered at x=1. Next we add ln(x) to this list by using the command
> tayseq:={f(x)}union tayseq:
We can plot this sequence along with lnx by entering
> plot(tayseq,x=0..3,y=-3..3,color=black);
From the graph, it appears that the polynomials stay close to ln(x) on the interval (0,2), but outside this interval they are poor approximations. If we want a better estimate of the interval of convergence, we can use higher degree Taylor polynomials:
> tayseq:={f(x)}union{seq(convert(taylor(f(x),x=1,n),polynom),n=15..25)}:
> plot(tayseq,x=0..3,y=-3..3,color=black);
1. Graphically determine the interval of convergence of the Taylor series of ln(x) about x = 2, x = 3, and x = 4. Turn in graphs supporting your claims.
2. Based on your answers to problem 1, conjecture as to the interval of convergence of the Taylor series of ln(x) centered at
x=k , where k > 0.
3. Graphically estimate the intervals of convergence for the Taylor series of centered about x = 0,1,3,4. Based on your observations, what would you predict the interval of convergence to be for the
Taylor expansion centered at x = 10? Explain.
4. Prove your conjecture in problem 2.
(Hint: describe the nth term of the Taylor series centered at x=k. Use the ratio test to calculate the interval of convergence.)
Max-flow Algorithm Improved
The maximum-flow problem ("max flow") is one of the most basic problems in computer science: First solved during preparations for the Berlin airlift, it's a component of many logistical problems and
a staple of introductory courses on algorithms. For decades it was a prominent research subject, with new algorithms that solved it more and more efficiently coming out once or twice a year. But as
the problem became better understood, the pace of innovation slowed. Now, however, MIT researchers, together with colleagues at Yale and the University of Southern California, have demonstrated the
first improvement of the max-flow algorithm in 10 years.
The max-flow problem is, roughly speaking, to calculate the maximum amount of "stuff" that can move from one end of a network to another, given the capacity limitations of the network's links. The
stuff could be data packets traveling over the Internet or boxes of goods traveling over the highways; the links' limitations could be the bandwidth of Internet connections or the average traffic
speeds on congested roads.
More technically, the problem is formulated in terms of graphs. A graph is a collection of vertices and edges, which are generally depicted as circles and the lines connecting them. The standard diagram of a communications network is a graph, as is, say, a family tree. In the max-flow problem, one of the vertices in the graph -- one of the circles -- is designated the source, where all the stuff comes from; another is designated the drain, where all the stuff is headed. Each of the edges -- the lines connecting the circles -- has an associated capacity, or how much stuff can pass over it.
Such graphs model real-world transportation and communication networks in a fairly straightforward way. But their applications are actually much broader, explains Jonathan Kelner, an assistant
professor of applied mathematics at MIT, who helped lead the new work. "A very, very large number of optimization problems, if you were to look at the fastest algorithm right now for solving them,
they use max flow," Kelner says. Outside of network analysis, a short list of applications that use max flow might include airline scheduling, circuit analysis, task distribution in supercomputers,
digital image processing, and DNA sequence alignment.
Traditionally, Kelner explains, algorithms for calculating max flow would consider one path through the graph at a time. If it had unused capacity, the algorithm would simply send more stuff over it
and see what happened. Improvements in the algorithms' efficiency came from cleverer and cleverer ways of selecting the order in which the paths were explored.
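A minimal sketch of that classical path-at-a-time strategy is the Edmonds-Karp variant, which always augments along a shortest path found by breadth-first search (the graph below is a made-up example):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly find a shortest augmenting path from s to t
    by BFS, then push as much flow as the path's residual capacities allow."""
    # residual capacities, adding reverse edges with capacity 0
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:              # no augmenting path left: done
            return flow
        path, v = [], t                  # walk parents back to recover the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push            # leave capacity to "undo" flow later
        flow += push

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # 5
```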
But Kelner, grad student Aleksander Madry, math undergrad Paul Christiano, and Professors Daniel Spielman and Shanghua Teng of, respectively, Yale and USC, have taken a fundamentally new approach to
the problem. They represent the graph as a matrix, which is math-speak for a big grid of numbers. Each node in the graph is assigned one row and one column of the matrix; the number where a row and a
column intersect represents the amount of stuff that may be transferred between two nodes.
In the branch of mathematics known as linear algebra, a row of a matrix can also be interpreted as a mathematical equation, and the tools of linear algebra enable the simultaneous solution of all the
equations embodied by all of a matrix's rows. By repeatedly modifying the numbers in the matrix and re-solving the equations, the researchers effectively evaluate the whole graph at once. This
approach turns out to be more efficient than trying out edges one by one.
If N is the number of nodes in a graph, and L is the number of links between them, then the execution of the fastest previous max-flow algorithm was proportional to (N + L)^(3/2). The execution of
the new algorithm is proportional to (N + L)^(4/3). The researchers haven't in fact written a program that implements their algorithm, and in practice, the performance of an algorithm can depend on
factors like how efficiently it's coded and how well it manages memory. But in theory, for a network like the Internet, which has about 100 billion nodes, the new algorithm could solve the max-flow
problem 100 times faster than its predecessor.
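That estimate can be sanity-checked from the exponents alone (ignoring constant factors, which the researchers caution matter in practice):

```python
n = 1e11                      # roughly the node count cited for the Internet
speedup = n**1.5 / n**(4/3)   # ratio of the two asymptotic running times
print(round(speedup))         # 68 -- the same order as the quoted 100x
```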
The immediate practicality of the algorithm, however, is not what impresses John Hopcroft, the IBM Professor of Engineering and Applied Mathematics at Cornell and a recipient of the Turing Award, the highest honor in computer science. "My guess is that this particular framework is going to be applicable to a wide range of other problems," Hopcroft says. "It's a fundamentally new technique. When there's a breakthrough of that nature, usually, then, a subdiscipline forms, and in four or five years, a number of results come out."
Momentum Analysis
In addition, the vector diagram will help you determine what the delta-V (change in velocity) and the principal direction of force (PDOF) are. This "change in velocity" experienced by the vehicle,
and its occupants, is what determines the severity of the collision and will act along the PDOF. The delta-V correlates well with serious injury and fatality collisions. Emergency room doctors are
very interested in the delta-V experienced by their patient. They know if the delta-V is high enough they can expect to see life threatening injuries. The delta-V can be calculated mathematically and
through a vector diagram.
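A sketch of that vector calculation (the velocity components below are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

# hypothetical pre- and post-impact vehicle velocities, in mph,
# given as (east, north) components
v_before = (30.0, 0.0)
v_after = (10.0, 15.0)

dvx = v_after[0] - v_before[0]
dvy = v_after[1] - v_before[1]
delta_v = math.hypot(dvx, dvy)              # magnitude of the velocity change
pdof = math.degrees(math.atan2(dvy, dvx))   # direction the change acted along

print(round(delta_v, 1))   # 25.0 mph
print(round(pdof, 1))      # 143.1 degrees, measured counterclockwise from east
```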
Proof that one rational solution to a quadratic implies another
January 20th 2009, 07:13 PM
Prove that if one solution for a quadratic equation of the form $x^{2}+bx+c=0$ is rational (where $b$ and $c$ are rational), then the other solution is also rational. (Use the fact that if the
solutions of the equation are $r$ and $s$, then $x^{2}+bx+c=(x-r)(x-s)$.
For my proof, I'm hypothesizing that $s$ is the solution that is known to be rational.
So the rationals are closed under multiplication, addition and subtraction (and division, with nonzero divisors.) If $x$ is rational, then that closure ensures a rational $r$, but $x$ isn't
necessarily rational. I know I'm missing something, but I've been doing math all day and I'm a little brain-burned. Can anyone nudge me past the whole non-rational $x$ hurdle?
Thanks! :)
January 20th 2009, 07:16 PM
Try comparing coefficients
So $r+s=-b$ and $rs=c$
And since as you stated the rationals are closed under addition/multiplication the conclusion follows.
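The coefficient-comparison argument can be checked concretely; the particular quadratic below is just an illustration:

```python
from fractions import Fraction

# x^2 + b*x + c with rational coefficients and one known rational root s
b = Fraction(-7, 2)
c = Fraction(3, 2)
s = Fraction(3)
assert s**2 + b*s + c == 0      # confirm s really is a root

# since r + s = -b, the other root is a difference of rationals:
r = -b - s
print(r)            # 1/2 -- rational, as the closure argument guarantees
print(s * r == c)   # True, consistent with r*s = c
```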
January 20th 2009, 10:41 PM
January 20th 2009, 10:45 PM
You only need that the sum of the rational root and the other root is rational, plus closure under addition and subtraction
January 20th 2009, 10:48 PM
Too quick, post edited, and for some reason duplicated
pcg_abtb, pcg_abtbc, pminres_abtb, pminres_abtbc -- solvers for mixed
linear problems
template <class Matrix, class Vector, class Solver, class Preconditioner, class Size, class Real>
int pcg_abtb (const Matrix& A, const Matrix& B, Vector& u, Vector& p,
const Vector& Mf, const Vector& Mg, const Preconditioner& S1,
const Solver& inner_solver_A, Size& max_iter, Real& tol,
std::ostream *p_cerr = 0, std::string label = "pcg_abtb");
template <class Matrix, class Vector, class Solver, class Preconditioner, class Size, class Real>
int pcg_abtbc (const Matrix& A, const Matrix& B, const Matrix& C, Vector& u, Vector& p,
const Vector& Mf, const Vector& Mg, const Preconditioner& S1,
const Solver& inner_solver_A, Size& max_iter, Real& tol,
std::ostream *p_cerr = 0, std::string label = "pcg_abtbc");
The synopsis is the same with the pminres algorithm.
See the user's manual for practical examples for the nearly
incompressible elasticity, the Stokes and the Navier-Stokes problems.
Preconditioned conjugate gradient algorithm on the pressure p applied
to the stabilized Stokes problem:
[ A B^T ] [ u ] [ Mf ]
[ ] [ ] = [ ]
[ B -C ] [ p ] [ Mg ]
where A is symmetric positive definite and C is symmetric positive and
semi-definite. Such mixed linear problems appear for instance with
the discretization of Stokes problems with the stabilized P1-P1 element, or
with nearly incompressible elasticity. Formally, u = inv(A)*(Mf - B^T*p)
and the reduced system writes for all non-singular matrix S1:
inv(S1)*(B*inv(A)*B^T)*p = inv(S1)*(B*inv(A)*Mf - Mg)
Uzawa or conjugate gradient algorithms are considered on the reduced
problem. Here, S1 is some preconditioner for the Schur complement
S=B*inv(A)*B^T. Both direct or iterative solvers for S1*q = t are
supported. Application of inv(A) is performed via a call to a solver
for systems such as A*v = b. This last system may be solved either by
direct or iterative algorithms, thus, a general matrix solver class is
submitted to the algorithm. For most applications, such as the Stokes
problem, the mass matrix for the p variable is a good S1 preconditioner
for the Schur complement. The stopping criterion is expressed using the
S1 matrix, i.e. in the L2 norm when this choice is considered. It is
scaled by the norm of the right-hand side of the reduced system,
also taken in the S1 norm.
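The algorithm described above can be prototyped outside rheolef. The following is a minimal NumPy sketch (my own illustrative code, not the library's implementation) of preconditioned CG on the pressure Schur complement S = B*inv(A)*B^T, for the special case C = 0, with the identity standing in for the S1 preconditioner and a dense direct solve standing in for inner_solver_A:

```python
import numpy as np

def pcg_schur(A, B, f, g, s1_solve, tol=1e-10, max_iter=100):
    """Solve [A B^T; B 0] [u; p] = [f; g] by CG on the reduced system
    S p = B inv(A) f - g, where S = B inv(A) B^T (cf. pcg_abtb)."""
    solve_A = lambda b: np.linalg.solve(A, b)   # stand-in for inner_solver_A
    p = np.zeros(B.shape[0])
    r = B @ solve_A(f) - g                      # reduced residual at p = 0
    z = s1_solve(r)                             # apply preconditioner S1^{-1}
    d, rz = z.copy(), r @ z
    for _ in range(max_iter):
        Sd = B @ solve_A(B.T @ d)               # apply S without forming it
        alpha = rz / (d @ Sd)
        p += alpha * d
        r -= alpha * Sd
        z = s1_solve(r)
        rz_new = r @ z
        if np.sqrt(abs(rz_new)) < tol:
            break
        d = z + (rz_new / rz) * d
        rz = rz_new
    u = solve_A(f - B.T @ p)                    # recover the primal unknown
    return u, p

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)                     # symmetric positive definite
B = rng.standard_normal((3, 6))                 # full-rank constraint matrix
f, g = rng.standard_normal(6), rng.standard_normal(3)
u, p = pcg_schur(A, B, f, g, s1_solve=lambda r: r)
print(np.linalg.norm(A @ u + B.T @ p - f), np.linalg.norm(B @ u - g))
```

With S1 taken as the pressure mass matrix, as the man page recommends for Stokes, `s1_solve` would apply its inverse instead of the identity.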
|
{"url":"http://manpages.ubuntu.com/manpages/precise/man5/mixed_solver.5rheolef.html","timestamp":"2014-04-17T07:09:21Z","content_type":null,"content_length":"6504","record_id":"<urn:uuid:eb97d697-4515-4cc5-b32e-f4ea518ead17>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prime ideals are a generalization of prime numbers to rings more general than the integers. They are useful in subjects like number theory and algebraic geometry. For simplicity we will restrict to the case of commutative rings.
Fix R a commutative ring.
Definition An ideal I of R is called a prime ideal if it is not R and whenever ab is in I, for some elements a,b of R, then either a is in I or b is in I.
Example If the ring is Z, the ring of integers, then for each prime number p the ideal pZ consisting of all multiples of p is a prime ideal. In fact these are all the prime ideals of Z.
To prove this just observe that in an integral domain then y is in xR if and only if x divides y. In fact this shows that in an integral domain yR is a prime ideal if and only if y is a prime element
of R.
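As a concrete, brute-force check of the definition (over a finite range, so it is an illustration rather than a proof), one can test the prime-ideal property for the ideals nZ:

```python
def looks_like_prime_ideal(n, bound=40):
    """Check the prime-ideal property of nZ on a finite range:
    whenever n divides a*b, n must divide a or b."""
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            if (a * b) % n == 0 and a % n != 0 and b % n != 0:
                return False   # found a*b in nZ with neither factor in nZ
    return True

print([n for n in range(2, 13) if looks_like_prime_ideal(n)])  # [2, 3, 5, 7, 11]
```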
Example The polynomial f = y^2 - x^2 - x^3 is irreducible in the polynomial ring k[x,y]; this can be checked with Eisenstein's irreducibility criterion. Thus, since in a unique factorization domain irreducible elements are prime, we deduce that the ideal (f) is a prime ideal.
We can get an equivalent formulation of prime ideals in terms of quotient rings. The proof is straight from the definitions.
Lemma An ideal I of R is prime if and only if the quotient ring R/I is an integral domain.
As a corollary of this we can get some important examples.
Proposition A maximal ideal of R is prime.
Proof: Let I be maximal. Then R/I is simple and commutative, hence a field. In particular, it is an integral domain, so I is a prime ideal by the lemma.
This allows us to give examples of prime ideals that are not cyclic. By Hilbert's Nullstellensatz we know that the maximal ideals of C[x,y] are exactly (x-a, y-b), for (a,b) in C^2. Thus the
proposition tells us that these ideals are all prime ideals, but it is easy to see that none of them are cyclic.
Let's talk a little bit more about prime ideals in the polynomial ring. So fix k an algebraically closed field. We know about the correspondence between closed sets for the Zariski topology and radical ideals in the polynomial ring. This raises the question: under this correspondence, what geometric property of a closed set makes its ideal prime? Well, actually we are getting a little ahead of ourselves. First:
Lemma A prime ideal is radical.
Proof: Let I be a prime ideal. Suppose that a^n is in I for some n>0. We must show that a is in I. We proceed by induction on n. In the case n=1 there is nothing to show. But a^n = a(a^(n-1)). So by the definition of a prime ideal either a is in I, in which case we are done, or the (n-1)st power of a is. By induction, we're through.
Back to the polynomial ring R = k[x[1],..., x[n]]. Suppose I is a prime ideal and X = Z(I) is the corresponding closed subset of k^n in the Zariski topology; what can we say about X?
Definition If X is any topological space we say that X is reducible if there exist two proper closed subsets of X called Y,Z such that X=Y U Z. Otherwise X is called irreducible.
Theorem A closed subset X of k^n is irreducible if and only if I(X) is a prime ideal.
Proof: Suppose that I=I(X) is prime but X=Y U Z. We have that Y=Z(I(Y)) and Z=Z(I(Z)) and further that X = Z(I(Y)I(Z)). Thus I=rad(I(Y)I(Z)). Since I is prime it follows from the definition that either I(Y) or I(Z) lies inside I. WLOG let's take the first case. Then applying Z(-) to both sides we get that Y contains X and hence is not proper after all.
On the other hand, suppose that X is irreducible and that ab lies in I(X). Let Y = Z(a) ∩ X and let Z = Z(b) ∩ X. Then we have X=Y U Z. Since X is irreducible, WLOG we have X=Y, which says that X is contained in Z(a). It follows that a is in I(X), so I(X) is prime.
|
{"url":"http://everything2.com/title/prime+ideal?author_id=627322","timestamp":"2014-04-17T19:13:34Z","content_type":null,"content_length":"25715","record_id":"<urn:uuid:8ab32b55-9eef-4c91-83c0-6e54843ebd35>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting Celsius to Fahrenheit
Date: 2/28/96 at 14:6:46
From: Marjory R. Kline
Subject: Centigrade-to-Fahrenheit conversions
Dear Dr. Math:
I am Americanizing a couple of British nonfiction books for
children. Normally, the c-to-f conversion is no problem for me,
but these are books on the solar system and I want to be very sure
I get my conversions right. So, here are my questions:
What is the Fahrenheit equivalent for 15 million degrees centigrade?
(That's the temperature at the center of the sun.)
What is the Fahrenheit equivalent of -150 degrees Celsius, and how
does this differ from centigrade?
To convert 500 degrees centigrade to Fahrenheit, what figures do
I use?
Would appreciate your help,
Marjory Kline
Date: 2/28/96 at 15:58:38
From: Doctor Byron
Subject: Re: Centigrade-to-Fahrenheit conversions
Hi Marjory,
Celsius and centigrade both refer to the same temperature scale,
which is calibrated by setting the boiling point of water at 100
and the freezing point of water at 0. The general conversions
between Celsius and Fahrenheit are:
Tf = 9/5 * Tc + 32
Tc = (Tf - 32) * 5/9
where Tf and Tc are the Fahrenheit and Celsius temperatures, respectively.
Applying these formulas to your questions, we have:
15,000,000 Celsius in Fahrenheit:
Tf = 9/5 * 15,000,000 + 32 = 27 million deg. F
-150 C in F:
Tf = 9/5 * -150 + 32 = -238 deg. F
500 C to F:
Tf = 9/5 * 500 + 32 = 932 deg. F
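The two formulas translate directly into code; a small sketch (Python) reproducing the three conversions:

```python
def c_to_f(tc):
    """Celsius (= centigrade) to Fahrenheit: Tf = 9/5 * Tc + 32."""
    return 9 / 5 * tc + 32

def f_to_c(tf):
    """Fahrenheit to Celsius: Tc = (Tf - 32) * 5/9."""
    return (tf - 32) * 5 / 9

for tc in (15_000_000, -150, 500):
    print(tc, "C ->", c_to_f(tc), "F")
```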
Good luck with the books!
-Doctor Byron, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/58317.html","timestamp":"2014-04-18T16:01:13Z","content_type":null,"content_length":"6560","record_id":"<urn:uuid:56e4c68c-cf51-4936-9555-35a7806b9740>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trying to figure out a card game
June 12th 2009, 04:23 PM #1
Sep 2006
Trying to figure out a card game
My friend and I play a Vietnamese card game called Thirteen (or Tien len).
I'd like to know, what are the chances of having at least two pairs after drawing thirteen cards?
The simplest way to do this is to ask the opposite questions:
1) What is the probability of drawing 13 cards with NO pair?
The first card can be anything. The second card can be any of the 48 cards that do not pair the first card: probability 48/51. The third card can be any of the 44 cards that do not match either
of first two cards: probability 44/50. Do you see the pattern? For each card we subtract 4 from the numerator and 1 from the denominator. Multiplying all the fractions together, the denominator
will be 51(50)(49)(48)...(39)= 51!/38! and the numerator is 48(44)(40)...(4)= $4^{12}(12)(11)(10)\cdots(1)=4^{12}\cdot 12!$. The probability of getting 13 cards with no pair is $\frac{4^{12}\,12!\,38!}{51!}$.
2) What is the probability of drawing 13 cards with exactly one pair?
First calculate the probability that the first two cards pair and the other 11 do not. The first card can be anything. The second must match that: probability 3/51. The third can be anything that
does not match those: probability 48/50. The fourth can be anything other than the first two or that: 44/49, and so on: (3/51)(48/50)(44/49)... (8/40).
The probability of drawing 13 cards with at most one pair is sum of those two and the probability of drawing 13 cards with at least two pair is 1 minus that probability.
My friend and I thought similar to you when we first looked at the problem. However, our numbers were a bit skewed when we calculated things out.
In response to you first point, you neglect to count the hands where players have three of a kind and no pair or four of a kind and no pair.
In response to the second point, your method suggest only one order of getting those cards. You could also not get a pair on the second draw, thus increasing your chances of getting a pair as now
the next card can match any of the two drawn before it.
And in addition to all of that, you could draw a pair and then a three of kind or a four of a kind, thus increasing your odds even more. I just don't know how to manipulate probabilities in order
to account for all of that.
Hello, ceasar_19134!
A variation of HallsofIvy's solution . . .
My friend and I play a Vietnamese card game called Thirteen (or Tien len).
What is the probability of having at least two pairs after drawing thirteen cards?
There are: . $_{52}C_{13}$ possible hands.
The opposite of "at least two pairs" is "no pairs" or "one pair."
No Pairs
We must draw one of each of the 13 values.
There are: . $_4C_1 = 4$ ways to draw each value.
Hence, there are: . $4^{13}$ hands with No Pairs.
One Pair
There are 13 choices for the value of the Pair.
There are: . $_4C_2 = 6$ ways to get the Pair.
The other 11 cards must not match the Pair or each other.
There are: . $_{12}C_{11} = 12$ choices for their values.
And: . $_4C_1 = 4$ ways to draw each value.
So there are: . $12\cdot4^{11}$ choices for the other 11 cards.
Hence, there are: . $13\!\cdot\!6\!\cdot\!12\!\cdot\!4^{11}$ ways to get One Pair.
Then there are: . $4^{13} + 936\!\cdot\!4^{11}\:=\:4^{11}(952)$ ways to get No Pairs or One Pair.
Hence: . $P(\text{No Pair or 1 Pair}) \;=\;\frac{4^{11}(952)} {_{52}C_{13}}$
Therefore: . $P(\text{at least 2 Pairs}) \;=\;1 - \frac{4^{11}(952)}{_{52}C_{13}}$
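For what it's worth, this counting can be evaluated exactly with Python's `math.comb`. Two caveats: the 11 unpaired cards contribute $4^{11}$ suit choices (4 per card), and this complement puts any hand whose only repeat is a triple or quadruple on the "at least two pairs" side — the ambiguity the original poster raised.

```python
from math import comb

total = comb(52, 13)
no_pair = 4 ** 13                                    # a suit for each of the 13 values
one_pair = 13 * comb(4, 2) * comb(12, 11) * 4 ** 11  # pair value & suits, 11 distinct singles
p_at_least_two = 1 - (no_pair + one_pair) / total
print(round(p_at_least_two, 4))  # 0.9937
```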
June 12th 2009, 05:05 PM #2
MHF Contributor
Apr 2005
June 12th 2009, 05:24 PM #3
Sep 2006
June 12th 2009, 06:15 PM #4
Super Member
May 2006
Lexington, MA (USA)
|
{"url":"http://mathhelpforum.com/statistics/92672-trying-figure-out-card-game.html","timestamp":"2014-04-21T17:02:58Z","content_type":null,"content_length":"44013","record_id":"<urn:uuid:98f6436f-9b25-4e6b-9a1a-50a5923ce0a7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Immaculata Science Tutor
...For the past fifty years, I have worked with and at times struggled with proofreading. A sterile passage may be perfectly correct in grammar, but proofreading will reveal ways to keep the
subject lively and still remain within the boundaries of correct usage, punctuation and grammar. We all nee...
62 Subjects: including physical science, mechanical engineering, sociology, biology
...I expect a lot out of myself and therefore expect a lot from my students, thus if you are not completely satisfied with my session I will not bill you. But I also want to hear from you and
provide feedback on my lessons. I also expect a 2-hour cancellation notice with rescheduling considerations.
12 Subjects: including zoology, botany, genetics, algebra 1
...I have several years of experience working with children in middle and high school. My BA and MA are both in history; I received the highest distinctions possible for both of these degrees, and
my research has been published in academic journals. I received perfect scores in the Reading Compreh...
20 Subjects: including anthropology, archaeology, English, reading
...The writing and reading tests on the SAT can be quite challenging, but we can work together to understand the tests, problems, and strategies to get your best score. I specialize in helping
high school student "cram" for the SATs. I'd be happy to work with you to develop a plan for your particular need.
20 Subjects: including physical science, ACT Science, English, geometry
...As matter of fact, I received a district award for implementing a modern teaching method in my classrooms. I am an excellent one-on-one person and can explain many difficult concepts in many
different ways. I specialize in first-year high school chemistry. I do not tutor students just to teach the content; instead, I educate them to master the skills as well.
1 Subject: chemistry
|
{"url":"http://www.purplemath.com/Immaculata_Science_tutors.php","timestamp":"2014-04-18T11:41:52Z","content_type":null,"content_length":"23748","record_id":"<urn:uuid:a10e1389-4268-4591-9841-3c128c2cf4a5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On a Generalized Theory of Relativity
Authors: Golden Gadzirayi Nyambuya
The General Theory of Relativity (GTR) is essentially a theory of gravitation. It is built on the Principle of Relativity. It is bona fide knowledge, known even to Einstein the founder, that the GTR violates the very principle upon which it is founded, i.e., it violates the Principle of Relativity, because a central equation that emerges from the GTR, the geodesic law, is well known to be in conflict with the Principle of Relativity: the geodesic law must, in complete violation of the Principle of Relativity, be formulated in special (or privileged) coordinate systems, i.e., Gaussian coordinate systems. The Principle of Relativity clearly and strictly forbids the existence/use of special (or privileged) coordinate systems, in the same way the Special Theory of Relativity forbids the existence of privileged and/or special reference systems. In the pursuit of a more Generalized Theory of Relativity, i.e., an all-encompassing unified field theory including the Electromagnetic, Weak and Strong forces, Einstein and many other researchers have failed to resolve this problem. In this reading, we propose a solution to this dilemma faced by Einstein and many other researchers, i.e., the dilemma of obtaining a more Generalized Theory of Relativity. Our solution brings together the Gravitational, Electromagnetic, Weak and Strong forces under a single roof via an extension of Riemann geometry to a new hybrid geometry that we have coined the Riemann-Hilbert Space (RHS). This geometry is a fusion of Riemann geometry and the Hilbert space. Unlike Riemann geometry, the RHS preserves both the length and the angle of a vector under parallel transport because the affine connection of this new geometry is a tensor. This tensorial affine connection leads us to a geodesic law that truly upholds the Principle of Relativity. The unified field equations derived herein are seen to reduce to the well-known Maxwell-Proca equation, the non-Abelian nuclear force field equations, the Lorentz equation of motion for charged particles, and the Dirac equation.
Comments: 40 pages
Download: PDF
Submission history
[v1] 7 Oct 2010
[v2] 20 Dec 2010
Unique-IP document downloads: 170 times
|
{"url":"http://vixra.org/abs/1010.0012","timestamp":"2014-04-18T14:07:16Z","content_type":null,"content_length":"8800","record_id":"<urn:uuid:978bdd35-bbba-49f2-8bfb-72af3b175078>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fayetteville, GA Prealgebra Tutor
Find a Fayetteville, GA Prealgebra Tutor
...I do enjoy tutoring or interacting one on one with students. I do strongly believe that each student is capable of succeeding, especially in Math. Working one on one with them does allow me to
find their strengths as well as their weaknesses, and build a strategy that will fit in to make him/her succeed.
13 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have six years experience as a Certified teacher of middle grades and high school science. I spent those years teaching high school science (grades 9-12) including Biology, Physical Science,
and Chemistry to at-risk youth in an alternative school environment. A great proportion of the student...
5 Subjects: including prealgebra, chemistry, biology, algebra 1
...I am currently a high school science teacher who helps tutor all science subjects in school. I have helped students prepare for both the ACT and the SAT with great success. I have a degree in
genetics, and have taught genetics as a teacher in 9th grade biology.
15 Subjects: including prealgebra, chemistry, geometry, biology
...Pre-algebra builds upon mathematical skills like fractions, decimals, percents, positive and negative integers and rational numbers; with additional advanced computation using ratios,
proportions, and solving algebraic equations and word problems. SAT math focuses on the ability to reason quanti...
9 Subjects: including prealgebra, geometry, algebra 1, GED
...While enjoying the classroom again, I also passed 6 actuarial exams covering Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this spectrum of
mathematics, from high school through post baccalaureate, which I feel most comfortable tutoring. I also became even more proficient with Microsoft Excel, Word, and PowerPoint.
21 Subjects: including prealgebra, calculus, statistics, geometry
|
{"url":"http://www.purplemath.com/fayetteville_ga_prealgebra_tutors.php","timestamp":"2014-04-19T23:19:49Z","content_type":null,"content_length":"24455","record_id":"<urn:uuid:1d9c1176-9674-4671-a655-8eb0d630350b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Much Should You Charge to Earn a Profit?
Over the years, I have seen many hardworking contractors who don’t price their jobs or their services so that they make a real profit. They think that they are, but when they get their financial
statements they’ve worked for wages or are losing money — even though they are working hard.
I have developed a series of rules that you can use to ensure that you will price your jobs and services so that you make a profit. Using these rules, you can also “what if” yourself to death. For
example, you can find out how much you need to sell if you raise your overhead by 5%, want to hire a parts runner, or add production personnel.
A Little Background
Before I begin the rules, I need to make sure that everyone is on the same page with respect to the terminology that I use.
First, let’s look at the income statement. The income statement is also called a profit and loss statement. I define the income statement as sales minus cost of goods sold (or direct cost) equals
gross profit. Gross profit minus overhead equals net operating profit before taxes. I define cost of goods sold or direct cost as any expense that you have because you sold something. All production
labor needs to go into cost of goods sold, as well as commissions, spiffs, etc.
Only the materials used on the jobs go into the cost of goods sold. If you get a preseason order, it is inventory until you use it on the job. Other normal costs of goods sold include expenses such
as warranty, freight, permits, subcontractors, etc. Some contractors put truck expenses in cost of goods sold; some don’t. All that matters is that you are consistent. (This means that if you are
going to include truck costs, include them all the time.)
Overhead items are those expenses that you incur to keep the doors open. These include rent, utilities, owners’ salaries, etc. You pay overhead whether you sell one dollar’s worth or not. These
expenses continue during good and bad months.
For the purposes of the Ruth’s Rules calculations, I stop at net operating profit, because there are always extraordinary things that can happen which affect the bottom line. You could have a great
year and give bonuses. You could sell used trucks. You could get interest income from investments.
These occurrences result in dollars that you pay out or receive, and obviously they affect your bottom line. However, they are not revenues and expenses that occur regularly, so they don’t count in
the day-to-day transactions that you do to make a profit and that you base your pricing on. Yes, they do count at the end of the year. However, they are extraordinary events that should not be taken
into consideration when you price jobs.
The other important Ruth’s Rules term is gross margin. Gross margin is gross profit divided by sales. The difference between gross profit and gross margin is that gross profit is always a dollar
amount. Gross margin is always a percentage. You must watch the gross margin for each of your departments. It should remain fairly constant (within 1% to 2% each month) whether you are slow or busy.
Your gross margin will tell you whether you are pricing your jobs right and how much unbilled overtime affects your profits, as well as whether you are accounting for your inventory properly.
The first rule has to do with selling price when you know the direct costs for the job. The second rule determines break-even sales, and the third rule determines sales at a specific net profit
Ruth’s Rule No. 1
The first rule tells you how to price a job or a service ticket when you know the direct costs for that job or service ticket.
Ruth’s Rule No. 1 is selling price equals direct costs divided by (1 minus gross margin). Remember that 1 is 100% and that gross margin is always a percentage. The reason that you divide by 1 minus
the gross margin has to do with the structure of the profit and loss statement. The first part of the profit and loss statement is sales minus direct costs equals gross profit. Sales equals the total
selling price for the job. The sales represent 100%, or the total revenues for the job.
Gross margin, by definition, is gross profit divided by sales. So, the gross margin is the percentage of sales that you have left after you take out your direct cost percentage. Using simple
mathematical formulas to arrive at the selling price when the direct cost is known, you have to divide by the direct cost percentage (or 1 minus the gross margin).
Let’s take a simple example. A service technician spends 2 hrs on a job. He uses $50 worth of materials. His hourly rate is $15/hr. You want to achieve a 55% gross margin on all service calls. What
price should you charge the customer?
The total cost for the job is $50 in parts and $30 in labor ($15/hr x 2 hrs). The total cost is $80. To get the selling price, divide $80 by 45%. You should charge the customer $177.78.
For those of you who include labor burden in direct cost, let’s refigure the example. Assume that the cost of payroll taxes, health insurance, worker’s compensation, etc. for this employee is 33% of
his hourly rate. This 33% comes to $9.90. So, the total cost for the job is $89.90. If you charge the customer $177.78, as in the example above, this time your gross margin is approximately 49.4%
rather than 55%.
For those of you who include labor burden and truck costs in the direct cost, let’s again refigure the example above. Assume that truck cost is $10/hr. Then, the total cost for the job is $89.90 plus
$20 for the 2 hrs he’s on the job, or $119.90. If you charge the customer $177.78, as in the example above, this time your gross margin is approximately 32.6%.
These examples show that depending on how you define direct cost, your gross margin can vary from 32% to 55% with the customer being charged the same price. Of course, the overhead percentage is a
lot higher with the first example than it is with the last example.
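Ruth's Rule No. 1 and the margin check are one-liners in code; here is a sketch (Python, using the numbers from the example above):

```python
def selling_price(direct_cost, gross_margin):
    """Ruth's Rule No. 1: selling price = direct cost / (1 - gross margin)."""
    return direct_cost / (1 - gross_margin)

def achieved_margin(direct_cost, price):
    """Gross margin actually realized at a given price."""
    return 1 - direct_cost / price

price = selling_price(50 + 2 * 15, 0.55)        # $50 parts + 2 hrs at $15/hr
print(round(price, 2))                          # 177.78
print(round(achieved_margin(89.90, price), 3))  # with labor burden: ~0.494
print(round(achieved_margin(119.90, price), 3)) # with burden + truck: ~0.326
```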
So, how do you determine your gross margin? It actually depends on knowing your overhead percentage and the net profit that you want to achieve. These topics are covered by rules No. 2 and 3.
Ruth’s Rule No. 2
Ruth’s Rule No. 2 is break-even sales equals overhead divided by gross margin.
Some of you are probably thinking, “Wait a minute. Above you told me to divide by 1 minus the gross margin. Now you are telling me to divide by the gross margin.” That’s true. In the first instance,
we knew direct cost. Now we know overhead cost. That’s the difference.
Let’s take an example. Suppose your salary is $100,000 and you want to know what the company has to sell to just break even on your salary. When you get your financial statement you see that the
company’s gross margin is 35%. Using Ruth’s Rule No. 2, divide $100,000 by 35%. You get $285,714.29. This means that the service technicians, installation crews etc. have to generate $285,714.29 just
to cover your salary.
Let’s check our answer using the income statement. Remember the formula for the income statement is sales minus direct cost is gross margin. Gross margin minus overhead is net profit before taxes.
In our example, the sales we have to generate are $285,714.29. Our direct cost is 65% of those sales. (Since we have a 35% gross margin, the direct cost has to be 65%.) So our direct cost is 65%
times $285,714.29, or $185,714.29. Subtract: $285,714.29 minus $185,714.29. The gross profit is $100,000. Since the overhead (i.e., your salary) is $100,000, the net profit is zero and you've just
broken even.
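Rule No. 2 in code, with the same income-statement check (a sketch using the figures from the salary example):

```python
def break_even_sales(overhead, gross_margin):
    """Ruth's Rule No. 2: break-even sales = overhead / gross margin."""
    return overhead / gross_margin

sales = break_even_sales(100_000, 0.35)
print(round(sales, 2))                        # 285714.29
gross_profit = sales - 0.65 * sales           # sales minus 65% direct cost
print(round(abs(gross_profit - 100_000), 2))  # 0.0 -> exactly break-even
```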
This rule is very helpful when you want to add an overhead position. For example, if you wanted to hire a warehouse person, this formula lets you know how much additional revenues the company would
have to bring in (or the savings that this person would have to create when hired) to cover the new position.
Let’s say that you will pay the warehouse person $10/hr or $20,800/yr (without overtime). If you assume that his benefits cost another 33%, his compensation package totals $27,664. Let’s round this
to $28,000 per year. If this person is also going to be responsible for parts deliveries, add a year’s truck cost to the $28,000. For this example, I’ll assume that the person will remain in the
warehouse. Let’s assume that the company gross margin is 35% as in the example above.
The additional revenues that the company would have to generate or dollars that he would have to save come to $28,000 divided by 35%, or $80,000 per year.
Can a warehouse person save $80,000 per year? Easily. If this person gets the crews’ materials ready, and they spend an additional hour per day on the job rather than searching for parts, the
warehouse person pays for himself. If three service technicians can do an extra call per day, then that is an additional $75,000 in sales (assuming the average service call ticket is $100 for 50
weeks) so the warehouse person pays for himself. I know of companies with only 5 people, where one of the people is a warehouse/parts runner who pays for himself just in saving time and letting the
revenue-producing people produce revenue rather than waste time that can’t be billed.
Ruth’s Rule No. 3
Ruth’s Rule No. 3 is sales equals overhead divided by (gross margin minus the profit percentage).
I assume that all of you want to do more than just break even. Ruth’s Rule No. 3 tells you how much you have to sell to achieve a preset net profit with a given amount of overhead and gross margin.
Ruth’s Rule No. 3 is used for planning your job, your month, your quarter, or your year. Here’s an example:
You are budgeting the service department financials for next year. You see that your gross margin for the service department is 50% and your overhead for the year is estimated to be $500,000. To
achieve a net profit of 10% in the service department, how much does the service department have to generate?
Using Ruth’s Rule No. 3, sales equals 500,000 divided by (0.50 minus 0.10), or $1,250,000.
What would happen if you set the goal to increase the service department gross margin to 55%?
Using Ruth’s Rule No. 3, sales equals 500,000 divided by (0.55 minus 0.10), or $1,111,111.
A 5% increase in gross margin means that the service department has to generate about $140,000 less to achieve the same profit level. This is the approximate revenues that one technician should
generate. So, a 5% increase in gross margin means you need one less technician, if the other assumptions are true.
Let’s look at an example similar to the ones above. Suppose your salary is $100,000 and you want to know what the company has to sell to earn a 10% profit (rather than just break even) on your
salary. When you get your financial statement you see that the company gross margin is 35%. Using Ruth’s Rule No. 3, divide $100,000 by (35% minus 10%). You get $400,000 (rather than the $285,714.29
calculated above). This means that the service technicians, installation crews, etc., have to generate about $115,000 more than in the above example just to cover your salary.
Let’s check our answer using the income statement. Remember the formula for the income statement is sales minus direct cost is gross margin. Gross margin minus overhead is net profit before taxes.
In our example, the sales we have to generate are $400,000. Our direct cost is 65% of those sales. (Since we have a 35% gross margin, the direct cost has to be 65%). So our direct cost is 65% times
$400,000, or $260,000.
Subtract: $400,000 minus $260,000 is $140,000. The gross profit is $140,000. Since the overhead (i.e., your salary) is $100,000, the net profit is $40,000, which is 10% of $400,000.
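And Rule No. 3, verifying the worked examples (sketch):

```python
def sales_for_profit(overhead, gross_margin, profit_pct):
    """Ruth's Rule No. 3: sales = overhead / (gross margin - profit %)."""
    return overhead / (gross_margin - profit_pct)

print(round(sales_for_profit(500_000, 0.50, 0.10)))  # 1250000: service dept budget
print(round(sales_for_profit(500_000, 0.55, 0.10)))  # 1111111: after 5% margin gain
print(round(sales_for_profit(100_000, 0.35, 0.10)))  # 400000: 10% profit on salary
```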
These are the three rules that I use to calculate selling prices when I know either direct cost or overhead cost and gross margin. The way that I usually work the process for Ruth’s Rules No. 2 and 3
is to look at the percentage of profit that I want to make, and knowing the overhead, I determine what the gross margin has to be to achieve a certain level of sales. Then I look at the result and
see whether it is realistic — or achievable. That’s how I plan budgets for the contractors I work with and that’s how we look at pricing issues.
Spend some time to get familiar with these rules. I think that they will help you price your jobs accurately as well as plan your budgets. King, of American Contractor’s Exchange, may be reached at
800-511-6844; 770-729-8028 (fax); or www.acecontractor.com (website).
Publication date: 10/09/2000
March 20, 2010
if I give a job to another contractor how much percentage should i get for giving the job?
|
{"url":"http://www.achrnews.com/articles/how-much-should-you-charge-to-earn-a-profit","timestamp":"2014-04-19T02:14:48Z","content_type":null,"content_length":"70575","record_id":"<urn:uuid:1b3ad185-91a0-4029-af06-9cedfa23cd30>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EXTENSIONS OF SOME FIXED POINT THEOREMS OF RHOADES, ĆIRIĆ, MAITI AND PAL
A.C. Babu and B.B. Panda
Department of Mathematics, University College of Engineering, Burla 768 018, India
Abstract: In a recent paper Rhoades [6] has shown, for a selfmap $T$ of a Banach space satisfying the contractive definitions of Ćirić [1] or of Pal and Maiti [5], that if the sequence of Mann iterates converges then it converges to a fixed point of $T$. In this note we propose to draw the same conclusion in some of these cases even for subsequential limit points, i.e., every subsequential limit point of the sequence of Mann iterates will be a fixed point of $T$. Further we shall derive the conclusions of Rhoades in the case of mappings satisfying even weaker conditions. Our final result is concerned with the extension of a result of Maiti and Babu [4] to mappings satisfying conditions similar to those in Rhoades [6, Theorem 3]. This is close in spirit to the main result of Diaz and Metcalf [2].
Classification (MSC2000): 47H10; 54H25
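For readers unfamiliar with the Mann iteration scheme the abstract refers to: $x_{n+1} = (1-\alpha_n)x_n + \alpha_n T x_n$. A toy illustration (the map and step size here are my own choices, not from the paper) applies the scheme to $T(x) = \cos x$ on the real line, where the iterates converge to the unique fixed point of $T$:

```python
import math

def mann_iterate(T, x0, alpha, steps):
    """Mann iteration with constant step: x_{n+1} = (1 - alpha) * x_n + alpha * T(x_n)."""
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# Toy example: T(x) = cos(x); the limit satisfies x = cos(x) (the Dottie number).
x = mann_iterate(math.cos, x0=0.0, alpha=0.5, steps=100)
print(x)  # ≈ 0.7390851, the fixed point of cos
```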
Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 8 Mar 2002.
© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001--2002 ELibM for the EMIS Electronic Edition
Complex Equation of a Plane Problem
June 9th 2008, 07:58 PM #1
Find the equation of the plane that passes through the line of intersection of the planes $x-y+2z+5=0$ and $2x+3y-z-1=0$, and is perpendicular to the plane $x+2y-2z=0$.
I'm stuck on this one big time. Please grant me your help.
First we need to find the equation of the line of the intersection of the two planes.
multiplying the 1st by -2 and adding it to the 2nd gives
$5y-5z=11$ Now we can parameterize the line. Let $z=t$ then $y=t+\frac{11}{5}$ and finally x is
$x-y+2z+5=0 \iff x=y-2z-5=t+\frac{11}{5}-2t-5=-t-\frac{14}{5}$
so we get $<-t-\frac{14}{5},t+\frac{11}{5},t>=<-\frac{14}{5},\frac{11}{5},0>+t<-1,1,1>$
We can now find the normal vector from the plane to be $<1,2,-2>$
Now if we cross these two vectors we will get a vector that is perpendicular to both of them so it is parallel to the plane.
$\begin{vmatrix} i & j & k \\ -1 & 1 & 1 \\ 1 & 2 & -2 \\ \end{vmatrix}=<-4,-1,-3>$
From our work before we know the point $(-\frac{14}{5},\frac{11}{5},0)$ is in the plane and let (x,y,z) be any other point in the plane then the vector
$\vec v= <x+\frac{14}{5},y-\frac{11}{5},z>$ is in the plane.
Now If we dot v with the vector from the cross product and set it equal to zero we will get the equation of the plane perpendicular to the original plane.
$<-4,-1,-3> \cdot <x+\frac{14}{5},y-\frac{11}{5},z>=0$
$-4x-\frac{56}{5}-y+\frac{11}{5}-3z=0 \iff -4x-y-3z-\frac{45}{5}=0 \iff 4x+y+3z+9=0$
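The solution above is easy to sanity-check numerically. The sketch below (pure Python, helper names my own) verifies that the cross product gives $<-4,-1,-3>$, that this normal is orthogonal to both the line direction and the given plane's normal, and that the point found on the line satisfies $4x+y+3z+9=0$:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

line_dir = (-1, 1, 1)          # direction of the line of intersection
given_normal = (1, 2, -2)      # normal of the plane x + 2y - 2z = 0
n = cross(line_dir, given_normal)
print(n)                       # (-4, -1, -3)

x, y, z = (-14/5, 11/5, 0)     # the point found on the line of intersection
# The point lies on both original planes (both ≈ 0 up to floating point):
print(x - y + 2*z + 5, 2*x + 3*y - z - 1)
# ...and satisfies the answer plane 4x + y + 3z + 9 = 0:
print(4*x + y + 3*z + 9)
```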
June 9th 2008, 11:34 PM #2
This model demonstrates convection currents and uses water, food coloring, a cup of very hot water and a votive candle as heat sources. Movie clips of demonstration setup and convection in action are provided. This activity is supported by a textbook chapter, What Heats the Earth's Interior?, part of the unit, Energy Flow, in Global Systems Science (GSS), an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact.
In this activity, students use mathematics to understand tides and gravitation and how gravity works across astronomical distances, using an apparatus made from a slinky, meter stick, and a hook. A description of the mathematical relationships seen in the demonstration is included. The resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
This resource describes the physics behind the formation of clouds, and provides a demonstration of those principles using a beaker, ice, a match, hot water, and a laser pointer. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
Sea floor spreading is demonstrated using a model consisting of two classroom desks and an 8-foot strip of paper. Changes in polarity are indicated using a felt marker. The investigation supports material presented in chapter 3, "What Heats the Earth's Interior?" in the textbook Energy Flow, part of the Global System Science (GSS), an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact.
This demonstration allows students to visualize inversion in a fluid, explain it in terms of density, and apply the concept to weather systems and convection. Materials required include four Erlenmeyer flasks, two thin glass plates, a heat source, and food coloring. The investigation supports material presented in chapter 7, What Causes Thunderstorms and Tornadoes?, in the textbook Energy Flow, part of Global System Science, an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact.
In this demonstration, evidence of the Earth's rotation is observed. A tripod, swiveling desk chair, fishing line and pendulum bob (e.g., fishing weight or plumb bob) are required for the demonstration. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
In this demonstration, students detect the interference of waves and measure wave phenomena using an experimental apparatus consisting of a laser pointer, a second surface mirror scrap (like a bathroom mirror), binder clips, razor blade, ruler, and a white wall or projection screen. Appendices with a discussion of physical principles and extension activities are included. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
This demonstration shows that similar-appearing lights can be distinctly different, suggesting that the light emitted is generated in different ways. It requires some advance preparation/setup by the teacher and three recommended sources of orange light that can be purchased at a hardware or department store. Includes extensions and additional background information on light generation in a section on underlying principles. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
In this activity, students compute the strengths of the gravitational forces exerted on the Moon by the Sun and by the Earth, and demonstrate the actual shape of the Moon's orbit around the Sun. The lesson begins with students' assumptions about the motions of the Moon about the Earth and the Earth about the Sun, and then tests their understanding using an experimental apparatus made from a cardboard or plywood disk and rope. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
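The computation at the heart of that activity is short. With standard textbook constants (the values below are my additions, not taken from the resource), the Sun's gravitational pull on the Moon comes out roughly twice the Earth's, which is why the Moon's path around the Sun is everywhere convex:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
m_moon = 7.35e22     # kg (cancels in the ratio)
r_sun_moon = 1.496e11    # m (≈ 1 AU)
r_earth_moon = 3.844e8   # m

def grav_force(M, m, r):
    """Newtonian gravitational attraction between masses M and m at distance r."""
    return G * M * m / r**2

ratio = grav_force(M_sun, m_moon, r_sun_moon) / grav_force(M_earth, m_moon, r_earth_moon)
print(ratio)  # ≈ 2.2: the Sun pulls on the Moon about twice as hard as the Earth does
```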
In this demonstration, students experience the Doppler effect for sound. Students can compute the frequency change for motion along the line of sight (LOS) and determine the vector LOS component for motions not exactly on it. A buzzer, battery, bicycle wheel, string and a rubber ball and a timer are needed for the demonstration. The resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
Andy's Math/CS page
In Plane Sight
Flatland is in tumult after a spate of triangle-on-triangle violence, and the prevailing attitude is one of utter paranoia. The triangles, which can see out of each of their three vertices, are so
mistrustful that they will not tolerate the presence of another triangle unless they can see each of the other's vertices, and, moreover, see it with each of their own eyes. Triangles are assumed to be
disjoint and opaque.
It is your job to help as many triangles as possible congregate, in a fashion acceptable to all participants, to promote dialogue and peace. Find yourself a pen, paper, and a bag of Doritos, and ponder:

Can three equilateral triangles of equal size be placed in an 'acceptable' arrangement? (If yes, what is the upper limit, if any?)
-I do know the answer, but it took me awhile. I've never really understood triangles...
-I haven't read Abbott's 'Flatland' since maybe age 12, so I have no idea how faithful this scenario is, or whether related visibility issues were explored in the book. As far as I know this puzzle has not been posed elsewhere, but I do recall there being at least one interesting visibility puzzle in Engel's book 'Problem Solving Strategies'
-Notice how I refrained from saying 'acute paranoia'... it wasn't easy, but I think I'm a better person for having done so.
Labels: geometry, puzzles
I wanted to share a cool idea from the young field of 'network coding', which explores techniques for efficiently disseminating information over networks like the web. I believe it originated in this
seminal paper, which I learned about indirectly by reading Michael Mitzenmacher's very interesting research blog My Biased Coin (which reflects his professional activity in network coding and is a
good source; ref. #7 on this link is a good first article to read).
Consider the following directed network:
Here, each of the three top nodes have a stream of data they need to communicate to the node of corresponding color on the bottom. Each edge shown can transmit a bit per second in the direction
indicated, and the purple nodes in the middle are willing to assist in communication.
The perverse structure of this network seems to pose a problem: each node on top has two pink edges which are headed towards the wrong parties on the bottom. Since bottom nodes cannot themselves
communicate, the pink edges appear useless. It would seem that all information flow must happen thru the black edges, whose bottleneck would allow only one party to get a bit across each second.
WRONG! (puzzle: stop reading and figure out a better strategy.)
Here's the idea: Say the top nodes want to send bits x, y, z respectively on a round. They send their bit on their every available edge. The top purple node receives all of them and computes their
XOR, that is, x + y + z (mod 2). It sends it to bottom-purple node, which then broadcasts it to the three receivers.
Then, for instance, bottom-red receives y, z, and x + y + z, so it can add up all the bits it receives to get y + z + (x + y + z) = x, just what it wanted. Similarly for the other nodes. Repeating
this process, the network transmits at three times the rate we were led to expect by our faulty initial reasoning (a line of thought which restricted us to the best so-called 'multicommodity flow' on
the network).
If these were roads, and we were sending formed substances to their destinations, our first approach would be valid. But information is not a substance--it obeys zany laws of its own, and can at
times be merged, fractionalized, and delocalized in unexpected and useful ways (often involving that mischievous XOR). Better understanding the information capacity of networks more complex than the
one above is a fascinating and important research frontier--so keep following Mitzenmacher's blog!
Labels: general math
Today, enjoy a home-baked puzzle in real analysis--a somewhat old-fashioned subject that I can't help but love (much like automata theory).
We know that pi = 3.1415... is transcendental, i.e., it is not a zero of any nontrivial univariate polynomial with rational coefficients. Is it possible we could generalize this result?
Part I. Is there a continuous function f: R --> R, such that
a) f takes rational numbers to rational numbers,
b) f(x) is zero if and only if x = pi?
Part II. If you think no such f exists, then for which values (in place of pi) does such an f exist?
On the other hand, if f as in Part I does exist (I reveal nothing!), can it be made infinitely differentiable?
For those new to real analysis, 'weird' objects like the desired f above generally have to be created gradually, through an iterative process in which we take care of different requirements at
different stages. We try to argue that 'in the limit', the object we converge to has the desired properties. (Does this sound like computability theory? The two should be studied together as they
have strong kinships, notably the link between diagonalization/finite extension methods and the Baire Category Theorem.)
Passing to the limit can be subtler than it first appears; for example, a pointwise limit of continuous functions need not be continuous. Here is a pretty good fast online introduction to some
central ideas in studying limits of functions.
Labels: general math, puzzles
In an earlier post, we discussed Kleitman's Theorem, which tells us that monotone events are nonnegatively correlated. So, for example, if we generate a random graph G, including each edge with
probability 1/2, and we condition on the event that G is nonplanar, that can only make it more likely that G is Hamiltonian, not less.
There's a further wrinkle we could add to make things more interesting. By a witness for the event f(x) = 1 on a particular bitstring x, we mean a subset of the bits of x that force f(x) = 1. (This
event could have multiple, overlapping or disjoint, witnesses.)
Let f, g be monotone Boolean functions, and consider the event we'll call f^^g, the event (over uniformly chosen x) that f(x), g(x) are both 1 and that, moreover, we can find disjoint witnesses for these two conditions.
Problem: Is P(f^^g) greater or less than P(f = 1)*P(g = 1)?
On the one hand, f and g `help each other' to be 1, by Kleitman's Theorem. On the other hand, the disjoint-witness requirement seems to undercut their ability to positively influence each other.
Which tendency wins out?
I'll tell you: P(f^^g) <= P(f = 1)*P(g = 1). The latter tendency prevails, although ironically enough, Kleitman's Theorem can actually be used to show this! It's a not-too-hard exercise in
conditioning, which I recommend to the interested reader.
Here's the kicker, though: the inequality above is actually true for arbitrary f, g, not just monotone ones! This result, however, took a decade of work to prove; it was conjectured by van den Berg
and Kesten, and proved by Reimer in 1994; see this article. Reimer's Inequality appears fruitful in probabilistic analysis, including in theoretical CS, see this article (which I have yet to read).
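For small n you can verify the inequality P(f^^g) <= P(f = 1)·P(g = 1) exhaustively. In the sketch below (my own choice of monotone functions: f = majority of bits 0-2, g = bits 1 and 3 both set), a subset S of the 1-bits of x witnesses f(x) = 1 exactly when f is already 1 on the indicator of S alone:

```python
N = 4  # work on {0,1}^4, bitmask representation

def f(mask):  # majority of bits 0, 1, 2 (monotone)
    return bin(mask & 0b0111).count("1") >= 2

def g(mask):  # bits 1 and 3 both set (monotone)
    return mask & 0b1010 == 0b1010

def submasks(mask):
    """All subsets of the set bits of mask (brute force is fine for N = 4)."""
    return [s for s in range(1 << N) if s & mask == s]

def has_disjoint_witnesses(mask):
    # For monotone f, g: look for disjoint S, T inside the 1-bits of mask
    # with f forced by S alone and g forced by T alone.
    for S in submasks(mask):
        if f(S):
            for T in submasks(mask & ~S):
                if g(T):
                    return True
    return False

xs = range(1 << N)
p_f = sum(f(x) for x in xs) / 2**N
p_g = sum(g(x) for x in xs) / 2**N
p_box = sum(has_disjoint_witnesses(x) for x in xs) / 2**N
print(p_f, p_g, p_box)  # 0.5 0.25 0.0625 — and indeed 0.0625 <= 0.5 * 0.25
```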
Labels: probability
Time for a personal update: this week I began a stay as a visiting student at MIT's CS and AI Lab (CSAIL)--a very exciting place to be for a young would-be theorist. For this privilege I have friend,
fellow blogger, and new MIT prof Scott Aaronson to thank--thanks, Scott!
While here, I'd absolutely love to meet, get to know, trade ideas or collaborate with any Boston/Cambridge readers. Drop a line or come by my office, 32-G630. I'd also welcome advice on where to eat,
how to furnish an apartment on the cheap, how to stop getting lost constantly around town and campus, and all that good stuff.
Speaking of getting lost, a small PSA: did you know that in a long random walk on a finite undirected graph, you can expect to appear at any given node with frequency proportional to its degree? A
node's location and degree of centrality in the graph are ultimately irrelevant. Equivalently, every edge gets traversed with the same limiting frequency, and with equal frequency in either direction.
Despite its simplicity, this was an eye-popper for me as an undergrad, and my favorite result in our probability seminar, along with the theorem that the harmonic series 1 + 1/2 + 1/3 + ... converges
with probability 1 when each term is instead given a random sign.
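The degree-proportional fact is easy to see empirically: walk a small graph for many steps and compare visit frequencies to deg(v)/2|E|. (The graph below is my own example, not from the seminar.)

```python
import random

# A small undirected graph as an adjacency list.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
total_degree = sum(len(nbrs) for nbrs in graph.values())   # = 2|E| = 8

random.seed(0)
steps = 200_000
visits = {v: 0 for v in graph}
v = 0
for _ in range(steps):
    v = random.choice(graph[v])   # step to a uniformly random neighbor
    visits[v] += 1

for node in graph:
    empirical = visits[node] / steps
    predicted = len(graph[node]) / total_degree
    print(node, round(empirical, 3), predicted)
# Frequencies settle near 3/8, 1/8, 1/8, 2/8, 1/8 — proportional to degree,
# regardless of where the walk started.
```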
WebCab Probability and Stat (J2EE Ed.) software application by WebCab Components about probability, distribution, module, discrete and more.
Version: 3.6 Release: 09/12/2006 OS: Win98,Windows2000,WinXP,Windows2003,Unix,Linux Type: Demo Size: 20129 kB Price:
Keywords: EJB J2EE WebLogic WebSphere JSP Java Basic, Statistics, Discrete, Probability, Standard, Probability, Distributions, Hypothesis,
Statistics Module
The Statistics module incorporates evaluation procedures of standard quantitative measures of centrality (mean) and dispersion of (discrete) numerical sets. This module incorporates weighted
averages, geometric mean, Inter-Quartile range, mean and standard deviation, sample variance and the coefficient of variation.
Discrete Probability Module
The Discrete Probability module encapsulates the foundations of discrete probability and discrete probability distributions. This component includes the addition law, conditional probability,
cumulative distribution function, mean and variance of a distribution, expected values, covariance and simplification of expressions involving random variables.
Correlation and Regression Module
Allows the user to investigate relationships between two variables. These findings can be used to predict one variable from the given values of other variables. We cover linear (Spearman's, t-test, z-transform) and rank (Spearman's, Kendall's) correlation, linear regression and conditional means.
Standard Probability Distributions Module
This module assists in the development of applications that incorporate the Binomial, Poisson, Normal, Lognormal, Pareto, Uniform, Hypergeometric and Exponential probability distributions.
The probability density function, cumulative distribution function and inverse, mean, variance, Skewness and Kurtosis are implemented where appropriate and/or their approximations for each
distribution. We also offer methods which randomly generate numbers from a given distribution.
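As an illustration of the quantities listed for this module — this sketch uses the Python standard library, not the WebCab API, whose method names are not shown in the listing — here are the Binomial distribution's pmf, cdf, mean, and variance:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(X <= k), summing the pmf."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

n, p = 10, 0.3
mean = n * p                 # ≈ 3.0
variance = n * p * (1 - p)   # ≈ 2.1
print(binom_pmf(3, n, p))    # ≈ 0.2668
print(binom_cdf(n, n, p))    # ≈ 1.0 (the pmf sums to one)
print(mean, variance)
```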
Curve Fitting Module
The Curve Fitting module offers procedures by which linear and non-linear functions can be fitted in accordance with the least squares approach to a given data set which may or may not
exhibit measurement errors. We also include functionality which performs ANOVA-type analysis including goodness-of-fit measures such as the R-Squared measure and T-Test statistic.
Confidence Intervals and Hypothesis Testing Module
Within this component we present two aspects of inferential statistics known as confidence intervals and hypothesis testing.
Plotting the Johnson-Cook strength model
Submitted by HS_impact on Thu, 2011-02-03 16:59.
I'm trying to plot the stress-strain curve described by the Johnson-Cook strength (and eventually damage) models. The strength model is defined as:
σ = [A + B·ε^n][1 + C·ln(ε_dot*)][1 − (T*)^m]
where A, B, C, n, and m are material constants, ε_dot* is the non-dimensionalized strain rate, and T* is the homologous temperature where T*=(T-T0)/(Tmelt-T0)
To calculate the thermal softening (term in the last bracket of the J-C model), I need to determine the increase in temperature related to an increase in stress (and strain). I'm using the following
ΔT = ∫ χ (σ/(ρ·cp)) dε

where χ is the Taylor-Quinney coefficient (I've set it to 0.9), ρ is the density, and cp is the specific heat.
So my problem is that to calculate the thermal softening, I need to work out the increase in temperature - but that is dependent on stress! Can anybody help me with plotting this model? The only way that I can think to do it is to rearrange the ΔT equation in terms of σ, and then set up some kind of minimization function where ΔT or T* is the variable. I've tried doing this in MATLAB using the fminsearch command, but it's not working.
Any help would be really appreciated!!
The answer depends on what you're trying to achieve and how you got the parameters in the first place.
The J-C parameters are usually determined by curve fitting a number of experimental true-stress vs true-strain plots. Some of these experiments are under nominally isothermal conditions while others
are under adiabatic conditions. Temperature measurements under these conditions are difficult, if not close to impossible, to get. In the absence of knowledge of temperature as a function of time,
thermal softening effects cannot be subtracted out from the stress-strain curves.
As a result, the J-C parameters already have temperature effects built into them (including thermal softening) for a given plastic strain, strain rate, and temperature.
However, when you try to simulate something complex, say a Taylor impact test, the plastic strain and strain rate at a material point can be quite different from an adjacent point. Also, the strain
rate will rarely be constant and in many situations the temperature will increase as the energy is dissipated via plastic deformation.
The usual way to deal with variable strain rates is to solve the momentum equation and estimate the strain rate from local velocity gradients.
The usual way to deal with variable temperatures is to start with a reference temperature and do all the calculations at that temperature. The energy equation is solved next (the one for delta T) and an increment of temperature is calculated keeping stress constant and using the increment of plastic strain. This increment in T is used to update the temperature before going to the next time step. This process is equivalent to splitting the coupled momentum and energy equations into two parts. A significant body of research can be found that discusses numerical issues related to this type of "operator splitting".

-- Biswajit
Submitted on Sun, 2011-02-06 23:51.
What I'd like to do is reproduce the 4340 steel curves from the Johnson-Cook 1985 publication in Engineering Fracture Mechanics. The following parameters are defined for the material:
A=792 MPa, B=510 MPa, n=0.26, C=0.014, m=1.03, and the reference strain rate ε_dot* is 1.0 1/s. I'm using a room temperature of 293 K and a melting temp of 1793 K. The article assumes adiabatic compression, so I've set the Taylor-Quinney coefficient, χ, equal to 1.0.
The curves are drawn for three strain rates, 1.0, 10.0, and 100.0. I've tried two different approaches, but neither agree with the curves in the article.
In the first approach, I've tried to apply what I understood from the last paragraph in your post. That is...calculate the stress at a constant temperature (i.e. using just the first two terms in the
J-C equation), then determine the associated increase in temperature and use that to update the stress at the next increment of plastic strain. This differs to your solution as my calculation is not
time-dependent like it would be in a code, so there is some variance there. The curve shape looks similar to that in the publication, but it fails to reach the maximum stress values.
In the second approach I rearranged the equation for deltaT in terms of stress, and wrote a minimization script to find the increase in temperature associated with an increase in plastic strain that
would give the same stress value (in both the J-C model and the deltaT equation). Again, the curve shape is reproduced, but it fails to reach the maximum stress values that are plotted in the J-C
journal article.
I'll try to attach my Excel working sheet.

Thanks again for your help!
Your approach seems to be OK. Congratulations on one of the rare attempts to reproduce the original JC curves.
I recall having to use very small steps in the time integration for the temperature but got reasonably good results for a constant strain rate test. A deltaT less than 10^-6 should give you very
accurate results. The discrepancy is puzzling.
-- Biswajit
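For concreteness, here is a minimal sketch of the incremental adiabatic scheme discussed in this exchange, using the 4340 parameters quoted above. The density and specific heat are my assumed values for 4340 steel (roughly ρ = 7830 kg/m³, c_p = 477 J/(kg·K)); they are not given in the thread, and the exact curve depends on them:

```python
import math

# Johnson-Cook parameters for 4340 steel, as quoted above (stresses in Pa).
A, B, n, C, m = 792e6, 510e6, 0.26, 0.014, 1.03
T0, Tmelt = 293.0, 1793.0
eps_dot_star = 1.0          # nondimensional strain rate (ln term vanishes at 1.0)
chi = 1.0                   # Taylor-Quinney coefficient (fully adiabatic)
rho, cp = 7830.0, 477.0     # ASSUMED density [kg/m^3] and specific heat [J/(kg K)]

def jc_stress(eps, T):
    Tstar = (T - T0) / (Tmelt - T0)
    rate_term = 1.0 + C * math.log(eps_dot_star)
    return (A + B * eps**n) * rate_term * (1.0 - Tstar**m)

# Operator splitting: at each strain increment, evaluate stress at the current T,
# then update T with dT = chi * sigma / (rho * cp) * d_eps.
d_eps = 1e-5
T, eps = T0, 0.0
while eps < 1.0:
    eps += d_eps
    sigma = jc_stress(eps, T)
    T += chi * sigma / (rho * cp) * d_eps

print(T, sigma / 1e6)   # final temperature [K] and flow stress [MPa] at eps = 1
```

With these assumed thermal properties the temperature rise at a strain of 1 is a few hundred kelvin, and the softening term visibly flattens the curve.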
I would like to study the drop behaviour of polypropylene at -30°C from a height of 1.2 m in Altair Radioss. I have two questions in mind:

Q1: Is the Johnson-Cook model the right choice for my kind of application? I have seen many people using this model in their publications, but my concern is whether it is a good model for polypropylene, since it is a thermoplastic. I have checked the material models from Abaqus, and in their material database they say it is good for metals only. I can see that this model takes into consideration the effects of strain rate, temperature and hardening.

Q2: What kind of tests do I have to perform to get material data for my model (tensile, compression, etc.), and at what temperature do these tests have to be conducted? Is it possible that I perform my test at room temperature and at a particular strain rate, then enter this data, and the model will take into consideration the variation of temperature and strain rates based upon my boundary conditions? I really do not understand how it works.

I would be thankful if anyone could write me about my matter.
Thanks very much
Best Regards
I have stress vs. strain data for high-speed tensile tests
conducted at 1 m/s, 5 m/s, 10 m/s and 15 m/s
(corresponding strain rates are 31.25, 156.25, 312.5 and 468.75 /s).
I don't know how to determine the parameter C from the test results.
Can you suggest how to find the parameter C in the JC model?
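One standard approach (not the only one): at a fixed plastic strain, the JC form gives σ/σ_ref = 1 + C·ln(ε̇/ε̇_ref), so C is the slope of (σ/σ_ref − 1) plotted against the log of the strain-rate ratio. A sketch with made-up stress values — replace them with your measured flow stresses at one fixed strain:

```python
import math

# Strain rates from the post; the stress values below are ILLUSTRATIVE ONLY,
# generated here with C = 0.014 so the fit can be checked against a known answer.
rates = [31.25, 156.25, 312.5, 468.75]      # 1/s
rate_ref = 31.25                            # pick the slowest test as reference
sigma_ref = 900.0                           # MPa, flow stress at the reference rate (made up)
C_true = 0.014
stresses = [sigma_ref * (1 + C_true * math.log(r / rate_ref)) for r in rates]

# Least-squares slope of y = sigma/sigma_ref - 1 versus x = ln(rate/rate_ref),
# forced through the origin (y = 0 when x = 0 by construction).
xs = [math.log(r / rate_ref) for r in rates]
ys = [s / sigma_ref - 1 for s in stresses]
C_fit = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(C_fit)  # recovers 0.014 from the synthetic data
```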
Missing Multipliers
Copyright © University of Cambridge. All rights reserved.
'Missing Multipliers' printed from http://nrich.maths.org/
The multiplication square below has had all its headings and answers hidden. All of the headings are numbers between 2 and 12, and no heading appears twice horizontally or vertically.
By revealing some of the answers, can you work out what the headings must be?
What is the smallest number of answers you need to reveal in order to work out the missing headers?
Can you describe a strategy that allows you to complete the Level 3 challenge most of the time?
Can you suggest any reasons why your strategy might not always work?
Once you have a strategy for completing Level 3, here are some more Missing Multiplier challenges you might like to try:
You might also like to try some challenges which include negative numbers.
A 4 by 4 grid, using multipliers from -4 to +4
A 6 by 6 grid, using multipliers from -6 to +6
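A brute-force strategy for grids like these is easy to automate: try every assignment of distinct headers to the rows and columns and keep the ones consistent with the revealed answers. The 3×3 example below (the grid and revealed cells are my own, not from the interactive) shows the idea:

```python
from itertools import permutations

HEADER_RANGE = range(2, 13)   # headings are numbers between 2 and 12

def solve(size, revealed):
    """revealed: dict {(row, col): product}. Returns all consistent header pairs."""
    solutions = []
    for rows in permutations(HEADER_RANGE, size):
        for cols in permutations(HEADER_RANGE, size):
            if all(rows[r] * cols[c] == v for (r, c), v in revealed.items()):
                solutions.append((rows, cols))
    return solutions

# Example: a 3x3 grid where five answers have been revealed.
revealed = {(0, 0): 12, (1, 1): 42, (2, 2): 99, (0, 2): 27, (1, 0): 28}
solutions = solve(3, revealed)
print(solutions)  # the only consistent assignment: rows (3, 7, 11), cols (4, 6, 9)
```

For these five clues the headers are forced: 27 can only be 3 × 9 within the range, and everything else follows by division.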
An eigenvector of a linear transformation $A$ is a nonzero vector $x$ such that $Ax=\lambda x$ for some scalar $\lambda$, i.e. such that the image of $x$ under the transformation $A$ is a scalar multiple of $x$. One can similarly define left eigenvectors in the case that $A$ acts on the right.
Given an eigenvalue $\lambda_{i}$ of $A$, one can solve the system $(A-\lambda_{i}I)x=0$ to find a form which characterizes the eigenvector $x_{i}$ (any multiple of $x_{i}$ is also an eigenvector). Of course, this is not necessarily the best way to do it; for this, see singular value decomposition.

See also: singular value decomposition, eigenvalue, eigenvalue problem, similar matrix, diagonalization (linear algebra).
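A common numerical alternative to solving $(A-\lambda I)x=0$ directly is power iteration, which converges to an eigenvector of the dominant eigenvalue. A minimal sketch for a 2×2 matrix (the example matrix is mine):

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, x0, steps=100):
    """Repeatedly apply A and renormalize; converges to the dominant eigenvector."""
    x = x0
    for _ in range(steps):
        y = mat_vec(A, x)
        norm = max(abs(c) for c in y)   # normalize to avoid overflow
        x = [c / norm for c in y]
    return x, norm   # norm approximates the dominant eigenvalue's magnitude

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1; dominant eigenvector is (1, 1)
x, lam = power_iteration(A, [1.0, 0.0])
print(x, lam)   # ≈ [1.0, 1.0] and ≈ 3.0
```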
John Wesley Cain, 34, started graduate school with a mathematician's aversion to biology. He took a course in his first semester at Duke University with David Schaeffer, an applied mathematician who
was just beginning to study models of cardiac rhythms. In the class, Cain had to choose from a list of projects and ended up working on mathematical models of cardiac action potential. "I think that
was secretly his favorite project," Cain says.
Cain himself took quickly to the work. "I thought the mathematics was cool. I thought the applications were cool." Eventually, Schaeffer became Cain's Ph.D. adviser. Now, Cain is an assistant
professor at in the Department of Mathematics and Applied Mathematics at Virginia Commonwealth University in Richmond. There, he works in applied mathematics with an emphasis on cardiac
Much of the work he does is in interdisciplinary teams. In fact, he is a co-principal investigator on a training grant in computational cardiology that focuses on teamwork. "The idea is to try to get
clinicians, basic science researchers, mathematicians, computer scientists -- you name it -- to actually talk to each other," Cain says. The culmination of that grant will be the World Congress on
Mathematical Modeling and Computational Simulation of Cardiovascular and Cardiopulmonary Dynamics at the College of William and Mary from 31 May to 3 June.
This summer, Cain will move to a new position as an associate professor at the University of Richmond, which, he says, is more geared toward undergraduate education. "I have a lot of projects that I
have been really itching to get some of their undergraduates involved [with]," he says. He will continue his collaborations with VCU, in part for its medical center and team of cardiologists.
Cain spoke with Science Careers earlier this spring. Below is a partial transcript of the conversation, edited for clarity and brevity.
Q: Tell me what mathematical cardiology is. What does that mean in terms of research?
J.W.C.: You can take a natural phenomenon, such as a heart attack or anything that you would like to understand the mechanisms for ... [and] you can ... run mathematical and computer-based
simulations ... to gain intuition, as opposed to actually having to maybe excise a heart from a rabbit or a sheep or something like that to run an experiment. I ... try to gain intuition so that I
can then report to the biomedical engineers and then tell them, "These might be the sorts of experiments that you might want to run."
Q: Can you give me an example of a particular cardiac event that you have worked on and how that would translate?
J.W.C.: Sure. Sometimes, for example, regions of damaged tissue can anchor abnormal wave patterns in the heart, and those sorts of things can be simulated. ... So the idea would be: ... take an
example of a substrate for anchoring some particular type of arrhythmia. Try to describe that set-up using a system of equations that could be solved using a computer. And then try to see if we can
actually reproduce the phenomenon at least qualitatively and hopefully quantitatively with some computer assistance. Because then we can experiment with different types of damaged regions of tissues
to see which ones might anchor abnormal rhythms a little bit better than others and then try to simulate or design an experiment for telling a clinician how you might want to use techniques that they
use to terminate the arrhythmia.
[Clinicians] have their own range of techniques, of course. They usually use things like radio frequency ablation ... to try to fix the heart when they detect this type of abnormal rhythm. Our idea
would be to just try to simulate such rhythms on a computer and then try to figure out, well, how would we correct those sorts of rhythms?
Q: Do you work directly with clinicians or bioengineers in understanding the processes and working on the models?
J.W.C.: We do. The groups tend to be very interdisciplinary. ... Cardiologists are the ones who really keep us honest and they tell us exactly what ... they see when they look at a damaged heart and
... then would need to go in and ablate or do something of that sort. The biomedical engineers are very useful folks to talk to because they are very nice in bridging language gaps. ... I am trained
as a mathematician and sometimes it's nice to have somebody who can speak ... from an engineering standpoint and talk about quantitative things, and also understand some of the more technical
physiology that's going on.
Physics folks tend to be good at running experiments, just like the biomedical engineers. And then the mathematicians like myself would be more geared towards analyzing mathematical models and
computational models that are designed to try to explain different types of rhythm [and] explain mechanisms for generating arrhythmias. So there is a big spectrum of scientists who are involved in
these sorts of projects, and it actually makes for very fun and lively discussion groups.
Q: Let's talk a little bit more about your experience. You were originally trained as a mathematician?
J.W.C.: All of my training was in mathematics. Biology, of all the basic sciences, was the one that I had the least interest in when I first started graduate school.
Q: So how did you get exposed to it and how did you end up taking this direction?
J.W.C.: Right when I was getting ready to start graduate school, I thought briefly about maybe trying to take an excursion out of academia and getting a job in the tech sector. But that was around
the year 2000 when the tech sector was in the process of going belly up, so I decided, well, I will try the graduate school route. And in my first semester I took a math course, an applied
mathematics course in ordinary differential equations. And during that course, we all had to choose a project to work on, and one of the projects that we were able to choose from was mathematical
models of cardiac action potentials in a single cell.
As it happened, the professor had just gotten involved in that, and I think that was secretly his favorite project. By happy coincidence, he asked if I would be willing to work on that as an
independent study with him the following semester. And I said "Sure, okay." I thought the mathematics was cool. I thought the applications were cool. We continued it as an independent study the
following semester and then we decided to continue into the following summer. And by the end of the summer, I was ... ready to pop the question of, "will you be my academic advisor? Will you advise
my dissertation?" It seemed like a great match.
The mathematics is very rich and the physiology -- there is just absolutely no end to the types of questions you can pose about dynamics in cardiac tissue. So it's kind of a happy story where I
believe several nice coincidences happened to just put me on the right track.
Q: Have you picked up biology and cardiology along the way or did you go back and take some courses to catch up on that?
J.W.C.: I did have to learn a fair amount of electrophysiology to make sure that I wasn't doing anything too outlandish, because the idea isn't just to come up with some sort of mathematical model
that is purely phenomenological to the point that all it's designed to do is to try to reproduce different graphs. If you design your model, you can make a model do anything you like. That doesn't
necessarily mean that it's grounded in reality.
So we would typically have roundtable discussions where cardiologists, biomedical engineers, physicists and mathematicians would sit around the table and discuss the physiology. This is how I learned
it: from roundtable discussions with a research group where the cardiologists and biomedical engineers were the ones who would really keep us honest and made sure that what we were doing was grounded
in physiologically reasonable assumptions.
I tried taking a course one time but after a couple of lectures decided to abort that because the roundtable discussions were a lot more fruitful. I found that it was a lot easier to learn some of
the physiology background just by directly asking a cardiologist or a biomedical engineer.
Q: How do you go about making a mathematical model that is biologically relevant? Is it understanding the physiology? Is it trying to get the clinicians to understand the math?
J.W.C.: I would say all of the above. It's not necessarily essential that the clinicians understand the math. It's trying to figure out how to ask the right questions of the clinicians to make sure
that your model stays honest.
Usually you want to use the mathematics to try to gain insight as to what you should be looking for, and the more complex the model, ... the less amenable to mathematical analysis it's going to be.
So you really have to try to convey to the clinicians what a mathematician's limitations would be so that they can craft their questions in such a way that it helps design an experiment. And that's a
really delicate tight-wire act to try to walk.
Q: What would you tell undergraduate mathematics students who aren't quite sure where they are going to take their interest in math if they wanted to pursue a biomedical field?
J.W.C.: Biology used to be the science that was not traditionally the one that mathematicians would gravitate toward. There was a nice article that appeared several years back in the Notices of the
AMS, "Why Is Mathematical Biology So Hard?" And in that, [Mike Reed] points out that in biology, there is no Newton's Second Law. There is no F = ma that you can resort to. Physics has traditionally
been one of the fertile grounds for using mathematical modeling. ... But I would say that as computing has improved over the last several decades, biological modeling has become a very hot area. And
a lot of the mathematical techniques are independent of the underlying applications.
So I try to tell students, even if biology is not necessarily your favorite science, it really can grow on you and the techniques you are going to use will be very similar to the techniques you would
use to attack certain problems in physics or chemistry or economics. A lot of those techniques are independent of the underlying application and ... a lot of the most exciting questions are
biologically motivated.
Q: Is there anything else that you'd like to tell me in terms of getting into the field or collaborative work?
J.W.C.: Other than just to drive home this point: that I really encourage any student who is interested in getting into this sort of career, you have to be willing to meet with people in fields ...
very different from yours, and you have to listen. It took work at first. The electrophysiology literature has a lot of long words that I didn't understand and a lot of difficult things that pushed
the limits of what I had remembered from basic chemistry. There was some start up involved but, boy, the rewards sure did make up for that. It's fun to talk to people from a variety of scientific
fields. I encourage anybody who is interested in getting into this, don't hesitate to immerse yourself in any sort of quantitative training that you can and immerse yourself in any other sorts of
science. ... It is a very, very fun field to be in.
Additional Reading
Mathematical cardiology and related fields typically fall under applied mathematics at universities. For example, Duke University, where Cain did his Ph.D., has a program in mathematical biology ;
graduate students in that program are supported in part by a grant from the National Science Foundation. There are math biology departments at several universities throughout the country.
In Cain's article, "Taking Math to Heart: Mathematical Challenges in Cardiac Electrophysiology," published in April in Notices of the AMS, he recommends Mathematical Physiology (J. P. Keener and J.
Sneyd, Springer-Verlag, New York, 1998) as a good introduction to mathematical cardiology.
In addition to the articles linked to in the above article, the American Mathematical Society has also published "Getting Started in Mathematical Biology" in the Notices of the AMS.
The Society for Mathematical Biology maintains links to some resources on its Web page.
See also the Science collection, "Mathematics in Biology," published in February 2004.
|
{"url":"http://sciencecareers.sciencemag.org/print/career_magazine/previous_issues/articles/2011_04_29/caredit.a1100039","timestamp":"2014-04-17T12:54:13Z","content_type":null,"content_length":"20379","record_id":"<urn:uuid:fb7f58b9-2672-408e-b834-d1ed0fcbe49d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Orthogonality of invariant subspaces for restricted representations
Let $G$ be a finite group and $H_1$ and $H_2$ are two proper subgroups of $G$. Also, let $\rho:G \rightarrow \mathbb{C}^m \times \mathbb{C}^m$ be an irreducible non-trivial representation of $G$. Let
$V_1$ and $V_2$ be subspaces of $\mathbb{C}^m$ such that $V_1 \cap V_2=0$. Further, $V_1$ is the maximal subspace such that for every element $h_1 \in H_1$ and $v \in V_1$, $\rho(h_1) v = v$.
Similarly, $V_2$ is the maximal subspace such that for every element $h_2 \in H_2$ and $v \in V_2$, $\rho(h_2) v = v$.
Is it true that $V_1$ and $V_2$ are orthogonal to each other (w.r.t the standard inner product on $\mathbb{C}^m$) ? If not, can you provide a counterexample?
orthogonal with respect to what? – Jonas Hartwig Nov 21 '10 at 20:05
@Jonas : orthogonal w.r.t. the standard inner product function. – Anindya De Nov 21 '10 at 20:52
What the hell is $\rho:G\to\mathbb C^m\times \mathbb C^m$ supposed to mean? Maybe $\rho:G\to \mathrm{End}\mathbb C^m$ ? – darij grinberg Nov 21 '10 at 20:57
Anyway, by the definition of $V_1$, an irrep of $G$ occurs in $V_1$ iff it occurs in $\mathbb C^m$ and trivializes $H_1$ (i. e., the group $H_1$ acts trivially in this irrep). Similarly, an irrep of $G$ occurs in $V_2$ iff it occurs in $\mathbb C^m$ and trivializes $H_2$ (i. e., the group $H_2$ acts trivially in this irrep). Thus, no irrep can occur in both $V_1$ and $V_2$ (because then it would trivialize both $H_1$ and $H_2$, and thus occur in $V_1\cap V_2$, contradicting $V_1\cap V_2=0$). Therefore, $V_1$ and $V_2$ are orthogonal with respect to any $G$-invariant (!) scalar product. – darij grinberg Nov 21 '10 at 21:03
Of course, they need not be orthogonal with respect to some random (e. g., standard) scalar product (in fact, there is no reason why the standard scalar product here should be better than any randomly chosen one - think of representations of $G$ as some vector spaces, not necessarily $\mathbb C^m$). – darij grinberg Nov 21 '10 at 21:04
closed as off-topic by Ricardo Andrade, Andrey Rekalo, Stefan Kohl, Chris Godsil, David White Nov 28 '13 at 15:57
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Ricardo Andrade, Andrey Rekalo, Stefan Kohl, Chris Godsil
If this question can be reworded to fit the rules in the help center, please edit the question.
1 Answer
No. Let $G$ be the dihedral group of order $6$ acting on its reflection representation $V_{\mathbb{R}}=\mathbb{R}^2$ (as symmetries of an equilateral triangle) and let $V$ be the complexification of this representation. Let $H_1$ and $H_2$ be the subgroups of order $2$ generated by distinct reflections. Then the fix spaces of $H_1$ and $H_2$ are not orthogonal wrt the $G$-inv. inner product, but they intersect trivially.
Yes, this disproves it for bilinear inner products. – darij grinberg Nov 22 '10 at 12:18
...and for sesquilinear (linear in first variable, conjugate linear in the 2nd) ones. – Sheikraisinrollbank Nov 22 '10 at 13:32
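The counterexample in this answer is easy to sanity-check numerically. A minimal sketch in plain Python (the helper names `reflection`, `apply_mat`, and `dot` are just for illustration):

```python
import math

def reflection(phi):
    """2x2 matrix of the reflection across the line at angle phi."""
    return [[math.cos(2 * phi),  math.sin(2 * phi)],
            [math.sin(2 * phi), -math.cos(2 * phi)]]

def apply_mat(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two distinct reflections generating the order-2 subgroups H1, H2 of D_3
r1, r2 = reflection(0.0), reflection(math.pi / 3)

# Each reflection fixes exactly its axis; these vectors span the fix spaces
v1 = [1.0, 0.0]
v2 = [math.cos(math.pi / 3), math.sin(math.pi / 3)]

assert all(abs(x - y) < 1e-12 for x, y in zip(apply_mat(r1, v1), v1))
assert all(abs(x - y) < 1e-12 for x, y in zip(apply_mat(r2, v2), v2))

print(dot(v1, v2))   # cos(pi/3) = 0.5: nonzero, so the fix spaces are not orthogonal
```

Since the reflection matrices are orthogonal, the standard inner product here is even $G$-invariant; the fix spaces still fail to be orthogonal because they are not $G$-subrepresentations, so the irrep argument in the comments does not apply to them.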
|
{"url":"http://mathoverflow.net/questions/46852/orthogonality-of-invariant-subspaces-for-restricted-representations","timestamp":"2014-04-21T02:29:53Z","content_type":null,"content_length":"53218","record_id":"<urn:uuid:41b9de7f-6065-4d0a-b3c5-e13861cfc770>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dual Space Transpose Of A Linear Map
A selection of articles related to dual space transpose of a linear map.
Original articles from our library related to the Dual Space Transpose Of A Linear Map. See Table of Contents for further available material (downloadable resources) on Dual Space Transpose Of A
Linear Map.
Dual Space Transpose Of A Linear Map is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Dual Space Transpose Of A Linear Map books, and related discussion.
Suggested Pdf Resources
Linear maps, matrices, change of bases, direct sums, linear forms, dual spaces, hyperplanes, and transposes of linear maps are reviewed.
Definition 8.1 A projection on a linear space X is a linear map P : X → X such that P² = P.
is a linear transformation from V into W. Furthermore, if s ∈ F, the function (sT) .
On the theoretical side, we deal with vector spaces, linear maps, and bilinear forms. Vector spaces each variable, dual spaces (which consist of linear mappings from the original space to
the .. c2 ...
Suggested Web Resources
Main article: Dual space#Transpose of a linear map.
Linear functional. Matrix representation. Dual space, conjugate space, adjoint space.
If V and W do not have bilinear forms, then the transpose of a linear map f : V → W is only defined as a linear map tf : W* → V* between the dual spaces of W and V.
Linear maps, matrices, change of bases, direct sums, linear forms, dual spaces, hyperplanes, and transposes of linear maps are reviewed.
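As these resources all state in one form or another, when no bilinear forms are available the transpose acts on covectors by precomposition. In symbols (a summary, not a quotation from any of the sources above):

```latex
% Transpose (dual) of a linear map f : V -> W, defined without any bilinear form:
% it runs between the dual spaces, in the opposite direction.
{}^{t}f : W^{*} \to V^{*}, \qquad
{}^{t}f(\varphi) = \varphi \circ f,
\quad\text{i.e.}\quad
\bigl({}^{t}f(\varphi)\bigr)(v) = \varphi\bigl(f(v)\bigr)
\quad \text{for all } \varphi \in W^{*},\; v \in V.
```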
|
{"url":"http://www.realmagick.com/dual-space-transpose-of-a-linear-map/","timestamp":"2014-04-18T13:43:20Z","content_type":null,"content_length":"28039","record_id":"<urn:uuid:8b278c3a-9532-44dc-be2e-60fa55045f28>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Geeks Everywhere are Enjoying a Slice Of Pi Today
Reid Champagne, The (Wilmington, Del.) News Journal
If mathematics is the language in which the universe and the Earth speak to us, thenthe diminutive constant pi would be an important component for that language.
"That's a very good way of looking at math and pi," said Tom Fernsler, of the Math Science Educational Resource Center at the University of Delaware. "Everything today that's made that is round or
requires a calculation involving a circle, sphere or curved surface involves the use of pi."
It's even got its own "Day." March 14 is designated as Pi Day, as the calendar day representation of 3/14 matches the first three digits of pi (3.14).
STORY: Five ways to celebrate 3.14159 on Pi Day
That's just the tip of the geeky iceberg, but we'll circle back to that in a moment. It was Greek geek Archimedes who first rigorously approximated the ratio of a circle's circumference to its diameter as 22/7.
"Every circle regardless of size contains precisely that ratio," said Fernsler. "If there is any other ratio calculated other than 22/7 for the spherical entity you are measuring, then it's not a
In other words, an oval or an ellipse does not have a ratio of 22/7, and may be considered circular, but is most definitely not a circle. And 22/7 is only a crude representation of the precise measure of pi.
"Pi is an irrational number," Fernsler explained. "That means its decimal representation does not end, nor is it repeated."
For example, 1/2 is .5 decimally stated, and 1/8 is .125; both end. The fraction 1/3 is .3333333, etc.; its decimal equivalent doesn't end, but it does repeat. All of these are "rational" numbers, because each can be written as a ratio of whole numbers, and a rational number's decimal expansion always either ends or eventually repeats. Pi's decimal expansion does neither, which is what makes it "irrational." ("Rational" is a mathematical term and does not in any way suggest a rational number to be sane, or an irrational number to be insane.)
Pi is irrational, and that's where the fun begins, as long as fun for you includes memorizing and reciting pi out to thousands of decimal places. That turns out to be one of the many events that have
been taking place in Princeton, N.J., each year on March 14.
"The idea came from doing something to commemorate Albert Einstein's birthdate of March 14," said Mimi Omiecinski, founder of Princeton's Pi Day, now in its fifth year. She was surprised how that
inaugural event caught on. "We would have been happy with a handful of attendees," said Omiecinski. "But 1,500 people filled the library building on a day a Nor'easter was pounding the area."
This year's events over a six-day period, which are expected to draw a crowd of 6,000 plus, will include the pi recitation contest, along with an Einstein look-a-like contest, pie- (not pi) eating
and judging contests and a walking tour of Einstein's neighborhood.
"It's a celebration during which a geek can feel like a rock star," said Omiecinski.
Professor Barry Renner, chair of the Department of Mathematics at Wilmington University, puts Pi Day into a larger, more significant context.
"Anything which popularizes math and helps to show it as something fun is a worthwhile activity."
The skinny on pi
• World famous.Pi is the most recognized mathematical constant in the world.
• Around the Earth.If the circumference of the Earth were calculated using pi rounded to only the ninth decimal place, an error of no more than one quarter of an inch in 25,000 miles would result.
• Pi baby.Albert Einstein was born on Pi Day (3/14/1879) in Ulm, Germany.
• Are you a piphilologist?Piphilology is the study and creation of mnemonic techniques for memorizing the never-ending string of decimal digits of pi. The technique of memorizing lines of poetry
(known as a "piem") or prose is one of the best known. When the letters in each of the words in the phrase "How I want a drink, alcoholic of course, after the heavy lectures involving quantum
mechanics" are counted, they correspond to the numerical 3.14159265358979 (carrying pi to 14 decimal places) and you become a hit at geek parties.
• Is pi carried to one trillion digits overkill?While modern computers are capable of calculating pi to one trillion decimal places, it doesn't do much practically for science. According to
mathematicians Jörg Arndt and Christoph Haenel, 39 digits are sufficient to perform most calculations, because that is the accuracy necessary to calculate the volume of the known universe with a
precision of one atom.
• Pi cheers.School spirit at the Massachusetts Institute of Technology has been memorialized in the cheer: "Cosine, secant, tangent, sine 3.14159."
• You can't get there from here.UD's Tom Fernsler says pi was essential to the calculations that landed a man on the moon. "Reaching the moon required the rendezvous of two separate spacecrafts,"
Fernsler said. "That meant the intersection of two spherical orbits required calculations involving pi." In other words, without Archimedes, there would be no Neil Armstrong.
• Pyramid scheme.Egyptologists have been fascinated for centuries by evidence that suggests the Great Pyramid at Giza seems to approximate pi. The vertical height of the pyramid has the same
relationship to the perimeter of its base as the radius of a circle has to its circumference.
• Pi and the arts."The Little Constant That Could" has wound its way into artistic expression. Carl Sagan used the digits of pi to suggest a secret message from God in his novel "Contact." The 1998
movie "Pi" concerns a mathematician looking for a number to explain the meaning of existence. The Oscar-winning film "Life of Pi" actually has nothing to do with the constant's calculation or
possible hidden meanings. (Though the movie does feature a cool CGI tiger that probably used pi in its design computer calculations.)
• The granddaddy of all Pi Days?San Francisco's Exploratorium is hosting its 25th annual Pi Day. Its website (www.exploratorium.edu/pi/index.html) states the annual celebration has grown into an
international and online event. Included are pie-making and -throwing exhibitions.
• Trekkie pi.In the Star Trek episode "Wolf in the Fold," Spock foils the evil computer by commanding it to "compute to the last digit the value of pi."
|
{"url":"http://www.wltx.com/story/local/2013/03/14/1666344/","timestamp":"2014-04-23T10:55:31Z","content_type":null,"content_length":"55869","record_id":"<urn:uuid:bb11ad8c-66b4-4776-9a18-970a720e7b81>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of explementary angle
In geometry and trigonometry, an angle (in full, plane angle) is the figure formed by two rays sharing a common endpoint, called the vertex of the angle . The magnitude of the angle is the "amount of
rotation" that separates the two rays, and can be measured by considering the length of circular arc swept out when one ray is rotated about the vertex to coincide with the other (see "Measuring
angles", below). Where there is no possibility of confusion, the term "angle" is used interchangeably for both the geometric configuration itself and for its angular magnitude (which is simply a
numerical quantity).
The word angle comes from the Latin word angulus, meaning "a corner". The word angulus is a diminutive, of which the primitive form, angus, does not occur in Latin. Cognate words are the Latin angere
, meaning "to compress into a bend" or "to strangle", the Greek ἀγκύλος (ankylοs), meaning "crooked, curved," and the English word "ankle." All three are connected with the Proto-Indo-European root
*ank-, meaning "to bend" or "bow" .
Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. According to Proclus, an angle must be either a quality or a quantity, or a relationship. The first concept was used by Eudemus, who regarded an angle as a deviation from a straight line; the second by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third concept, although his definitions of right, acute, and obtuse angles are certainly quantitative.
Measuring angles
In order to measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g. with a pair of compasses. The length of the arc s is then divided by the radius of the circle r, and
possibly multiplied by a scaling constant k (which depends on the units of measurement that are chosen):
$\theta = \frac{s}{r}\, k.$
The value of θ thus defined is independent of the size of the circle: if the length of the radius is changed then the arc length changes in the same proportion, so the ratio s/r is unaltered.
In many geometrical situations, angles that differ by an exact multiple of a full circle are effectively equivalent (it makes no difference how many times a line is rotated through a full circle
because it always ends up in the same place). However, this is not always the case. For example, when tracing a curve such as a spiral using polar coordinates, an extra full turn gives rise to a
quite different point on the curve.
Angles are considered dimensionless, since they are defined as the ratio of lengths. There are, however, several units used to measure angles, depending on the choice of the constant k in the formula
above. Of these units, treated in more detail below, the degree and the radian are by far the most common.
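The defining formula $\theta = k\,s/r$ can be tried out in a few lines; the choice of $k$ selects the unit (the function name here is just for illustration):

```python
import math

def angle(arc_length, radius, k=1.0):
    """theta = k * s / r.

    k = 1 gives radians; k = 180/pi gives degrees; k = 200/pi gives gons.
    """
    return k * arc_length / radius

r = 2.0
s = math.pi * r                               # an arc covering half the circle
half_turn_rad = angle(s, r)                   # pi radians
half_turn_deg = angle(s, r, k=180 / math.pi)  # ~180 degrees
print(half_turn_rad, half_turn_deg)
```

Note that the result depends only on the ratio s/r: doubling the radius doubles the arc length of the same angle, leaving the value unchanged.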
With the notable exception of the radian, most units of angular measurement are defined such that one full circle (i.e. one revolution) is equal to n units, for some whole number n. For example, in the case of degrees, n = 360. A full circle of n units is obtained by setting k = n/(2π) in the formula above. (Proof. The formula above can be rewritten as k = θr/s. One full circle, for which θ = n units, corresponds to an arc equal in length to the circle's circumference, which is 2πr, so s = 2πr. Substituting n for θ and 2πr for s in the formula results in k = nr/(2πr) = n/(2π).)
• The degree, denoted by a small superscript circle (°) is 1/360 of a full circle, so one full circle is 360°. One advantage of this old sexagesimal subunit is that many angles common in simple
geometry are measured as a whole number of degrees. Fractions of a degree may be written in normal decimal notation (e.g. 3.5° for three and a half degrees), but the following sexagesimal
subunits of the "degree-minute-second" system are also in use, especially for geographical coordinates and in astronomy and ballistics:
□ The minute of arc (or MOA, arcminute, or just minute) is 1/60 of a degree. It is denoted by a single prime ( ′ ). For example, 3° 30′ is equal to 3 + 30/60 degrees, or 3.5 degrees. A mixed
format with decimal fractions is also sometimes used, e.g. 3° 5.72′ = 3 + 5.72/60 degrees. A nautical mile was historically defined as a minute of arc along a great circle of the Earth.
□ The second of arc (or arcsecond, or just second) is 1/60 of a minute of arc and 1/3600 of a degree. It is denoted by a double prime ( ″ ). For example, 3° 7′ 30″ is equal to 3 + 7/60 + 30/
3600 degrees, or 3.125 degrees.
• The radian is the angle subtended by an arc of a circle that has the same length as the circle's radius (k = 1 in the formula given earlier). One full circle is 2π radians, and one radian is 180/
π degrees, or about 57.2958 degrees. The radian is abbreviated rad, though this symbol is often omitted in mathematical texts, where radians are assumed unless specified otherwise. The radian is
used in virtually all mathematical work beyond simple practical geometry, due, for example, to the pleasing and "natural" properties that the trigonometric functions display when their arguments
are in radians. The radian is the (derived) unit of angular measurement in the SI system.
• The mil is approximately equal to a milliradian. There are several definitions.
• The full circle (or revolution, rotation, full turn or cycle) is one complete revolution. The revolution and rotation are abbreviated rev and rot, respectively, but just r in rpm (revolutions per
minute). 1 full circle = 360° = 2π rad = 400 gon = 4 right angles.
• The right angle is 1/4 of a full circle. It is the unit used in Euclid's Elements. 1 right angle = 90° = π/2 rad = 100 gon.
• The angle of the equilateral triangle is 1/6 of a full circle. It was the unit used by the Babylonians, and is especially easy to construct with ruler and compasses. The degree, minute of arc and
second of arc are sexagesimal subunits of the Babylonian unit. 1 Babylonian unit = 60° = π/3 rad ≈ 1.047197551 rad.
• The grad, also called grade, gradian, or gon is 1/400 of a full circle, so one full circle is 400 grads and a right angle is 100 grads. It is a decimal subunit of the right angle. A kilometer was
historically defined as a centi-gon of arc along a great circle of the Earth, so the kilometer is the decimal analog to the sexagesimal nautical mile. The gon is used mostly in triangulation.
• The point, used in navigation, is 1/32 of a full circle. It is a binary subunit of the full circle. Naming all 32 points on a compass rose is called "boxing the compass". 1 point = 1/8 of a right
angle = 11.25° = 12.5 gon.
• The astronomical hour angle is 1/24 of a full circle. The sexagesimal subunits were called minute of time and second of time (even though they are units of angle). 1 hour = 15° = π/12 rad = 1/6
right angle ≈ 16.667 gon.
• The binary degree, also known as the binary radian (or brad), is 1/256 of a full circle. The binary degree is used in computing so that an angle can be efficiently represented in a single byte.
• The grade of a slope, or gradient, is not truly an angle measure (unless it is explicitly given in degrees, as is occasionally the case). Instead it is equal to the tangent of the angle, or
sometimes the sine. Gradients are often expressed as a percentage. For the usual small values encountered (less than 5%), the grade of a slope is approximately the measure of an angle in radians.
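Because every unit in the list above is a fixed fraction of a full circle, conversion between any two of them is a single rescaling through the full turn. A minimal sketch (the dictionary covers only a few of the units; extend as needed):

```python
import math

# How many of each unit make up one full circle
UNITS_PER_TURN = {
    "degree": 360.0,
    "radian": 2 * math.pi,
    "gon": 400.0,
    "right angle": 4.0,
    "hour angle": 24.0,
    "point": 32.0,
}

def convert(value, src, dst):
    """Convert an angle between units via its fraction of a full turn."""
    return value / UNITS_PER_TURN[src] * UNITS_PER_TURN[dst]

print(convert(90, "degree", "gon"))          # 100.0
print(convert(math.pi, "radian", "degree"))  # 180.0
print(convert(1, "right angle", "point"))    # 8.0
```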
Positive and negative angles
A convention universally adopted in mathematical writing is that angles given a sign are positive angles if measured anticlockwise, and negative angles if measured clockwise, from a given line. If no
line is specified, it can be assumed to be the x-axis in the Cartesian plane. In many geometrical situations a negative angle of −θ is effectively equivalent to a positive angle of "one full rotation
less θ". For example, a clockwise rotation of 45° (that is, an angle of −45°) is often effectively equivalent to an anticlockwise rotation of 360° − 45° (that is, an angle of 315°).
In three dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined relative to some reference, which is
typically a vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie.
In navigation, bearings are measured from north, increasing clockwise, so a bearing of 45 degrees is north-east. Negative bearings are not used in navigation, so north-west is 315 degrees.
Approximations
• 1° is approximately the width of a little finger at arm's length.
• 10° is approximately the width of a closed fist at arm's length.
• 20° is approximately the width of a handspan at arm's length.
Identifying angles
In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, ...) to serve as variables standing for the size of some angle. (To avoid confusion with its other meaning, the symbol π
is typically not used for this purpose.) Lower case roman letters (a, b, c, ...) are also used. See the figures in this article for examples.
In geometric figures, angles may also be identified by the labels attached to the three points that define them. For example, the angle at vertex A enclosed by the rays AB and AC (i.e. the lines from
point A to point B and point A to point C) is denoted ∠BAC or BÂC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex ("angle A").
Potentially, an angle denoted, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C, the anticlockwise angle from B to C, the clockwise angle from C to B, or the anticlockwise
angle from C to B, where the direction in which the angle is measured determines its sign (see Positive and negative angles). However, in many geometrical situations it is obvious from context that
the positive angle less than or equal to 180° is meant, and no ambiguity arises. Otherwise, a convention may be adopted so that ∠BAC always refers to the anticlockwise (positive) angle from B
to C, and ∠CAB to the anticlockwise (positive) angle from C to B.
Types of angles
• An angle of 90° (π/2 radians, or one-quarter of the full circle) is called a right angle.
• Two lines that form a right angle are said to be perpendicular or orthogonal.
• Angles smaller than a right angle (less than 90°) are called acute angles ("acute" meaning "sharp").
• Angles larger than a right angle and smaller than two right angles (between 90° and 180°) are called obtuse angles ("obtuse" meaning "blunt").
• Angles equal to two right angles (180°) are called straight angles.
• Angles larger than two right angles but less than a full circle (between 180° and 360°) are called reflex angles.
• Angles that have the same measure are said to be congruent.
• Two angles opposite each other, formed by two intersecting straight lines that form an "X" like shape, are called vertical angles or opposite angles. These angles are congruent.
• Angles that share a common vertex and edge but do not share any interior points are called adjacent angles.
• Two angles that sum to one right angle (90°) are called complementary angles.
• The difference between an angle and a right angle is termed the complement of the angle.
• Two angles that sum to a straight angle (180°) are called supplementary angles.
• The difference between an angle and a straight angle is termed the supplement of the angle.
• Two angles that sum to one full circle (360°) are called explementary angles or conjugate angles.
• An angle that is part of a simple polygon is called an interior angle if it lies on the inside of that simple polygon. Note that in a concave simple polygon, at least one interior angle exceeds 180°.
• In Euclidean geometry, the measures of the interior angles of a triangle add up to π radians, or 180°; the measures of the interior angles of a simple quadrilateral add up to 2π radians, or 360°. In general, the measures of the interior angles of a simple polygon with n sides add up to [(n − 2) × π] radians, or [(n − 2) × 180]°.
• The angle supplementary to the interior angle is called the exterior angle. It measures the amount of "turn" one has to make at this vertex to trace out the polygon. If the corresponding interior
angle exceeds 180°, the exterior angle should be considered negative. Even in a non-simple polygon it may be possible to define the exterior angle, but one will have to pick an orientation of the
plane (or surface) to decide the sign of the exterior angle measure.
• In Euclidean geometry, the sum of the exterior angles of a simple polygon will be 360°, one full turn.
• Some authors use the name exterior angle of a simple polygon to mean simply the explementary angle (not the supplementary angle!) of the interior angle. This conflicts with the above usage.
• The angle between two planes (such as two adjacent faces of a polyhedron) is called a dihedral angle. It may be defined as the acute angle between two lines normal to the planes.
• The angle between a plane and an intersecting straight line is equal to ninety degrees minus the angle between the intersecting line and the line that goes through the point of intersection and
is normal to the plane.
• If a straight transversal line intersects two parallel lines, corresponding (alternate) angles at the two points of intersection are congruent; adjacent angles are supplementary (that is, their
measures add to π radians, or 180°).
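The interior-angle formula in the list above is easy to check numerically; a minimal sketch:

```python
def interior_angle_sum_deg(n):
    """Sum of the interior angles of a simple polygon with n sides,
    from the (n - 2) x 180° formula above."""
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return (n - 2) * 180

for n, name in [(3, "triangle"), (4, "quadrilateral"), (6, "hexagon")]:
    print(name, interior_angle_sum_deg(n))  # → triangle 180, quadrilateral 360, hexagon 720
```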
A formal definition
Using trigonometric functions
A Euclidean angle is completely determined by the corresponding right triangle. In particular, if $\theta$ is a Euclidean angle, it is true that
$\cos \theta = \frac{x}{\sqrt{x^2 + y^2}}$
$\sin \theta = \frac{y}{\sqrt{x^2 + y^2}}$
for two numbers x and y. So an angle in the Euclidean plane can be legitimately given by two numbers x and y.
To the ratio y/x there correspond two angles in the geometric range 0 < θ < 2π, since
$\frac{\sin \theta}{\cos \theta} = \frac{y/\sqrt{x^2 + y^2}}{x/\sqrt{x^2 + y^2}} = \frac{y}{x} = \frac{-y}{-x} = \frac{\sin(\theta + \pi)}{\cos(\theta + \pi)}.$
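In computational practice this two-fold ambiguity is resolved by keeping the signs of x and y separately rather than forming the ratio y/x, which is exactly what the standard two-argument arctangent (atan2) does:

```python
import math

# (1, 1) and (-1, -1) have the same ratio y/x = 1, but atan2 uses the
# individual signs of y and x to distinguish the two angles, which
# differ by pi as in the identity above.
a = math.atan2(1, 1)     # ≈ pi/4
b = math.atan2(-1, -1)   # ≈ -3*pi/4, i.e. pi/4 - pi wrapped into (-pi, pi]
print(round(math.degrees(a)))  # → 45
print(round(math.degrees(b)))  # → -135
```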
Using rotations
Suppose we have two unit vectors $\vec{u}$ and $\vec{v}$ in the Euclidean plane $\mathbb{R}^2$. Then there exists exactly one positive isometry (a rotation) from $\mathbb{R}^2$ to $\mathbb{R}^2$ that maps $\vec{u}$ onto $\vec{v}$. Let $r$ be such a rotation. The relation it defines is an equivalence relation, and we call the angle of the rotation $r$ its equivalence class in $\mathbb{T}/\mathcal{R}$, where $\mathbb{T}$ denotes the unit circle of $\mathbb{R}^2$. The angle between two vectors will simply be the angle of the rotation that maps one onto the other. We have no numerical way of determining an angle yet. To do this, we choose a reference vector on the unit circle; then for any point M on $\mathbb{T}$ at a given arc distance from that reference (measured along the circle), the rotation that transforms the reference vector into M depends on that distance alone. This correspondence is a bijection, which means we can identify any angle with a number between 0 and $2\pi$.
Angles between curves
The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names
(now rarely, if ever, used) have been given to particular cases:—amphicyrtic (Gr. ἀμφί, on both sides, κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave.
The dot product and generalisation
In the Euclidean plane, the angle θ between two vectors u and v is related to their dot product and their lengths by the formula
$\mathbf{u} \cdot \mathbf{v} = \cos(\theta)\, |\mathbf{u}|\, |\mathbf{v}|.$
This allows one to define angles in any real inner product space, replacing the Euclidean dot product · by the Hilbert space inner product $\langle\cdot,\cdot\rangle$.
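A direct numerical reading of this formula, with no external libraries (the clamp guards against floating-point values of cos θ fractionally outside [-1, 1]):

```python
import math

def angle_between(u, v):
    """Angle in radians between vectors u and v via the dot-product formula."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (norm_u * norm_v)))  # clamp into acos's domain
    return math.acos(c)

print(round(math.degrees(angle_between((1, 0), (0, 1)))))  # → 90
print(round(math.degrees(angle_between((1, 0), (1, 1)))))  # → 45
```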
Angles in Riemannian geometry
In Riemannian geometry, the metric tensor is used to define the angle between two tangent vectors. Where U and V are tangent vectors and $g_{ij}$ are the components of the metric tensor g,
$\cos \theta = \frac{g_{ij} U^i V^j}{\sqrt{\left| g_{ij} U^i U^j \right| \left| g_{ij} V^i V^j \right|}}.$
Angles in geography and astronomy
In geography we specify the location of any point on the Earth using a Geographic coordinate system. This system specifies the latitude and longitude of any location, in terms of angles subtended at
the centre of the Earth, using the equator and (usually) the Greenwich meridian as references.
In astronomy, we similarly specify a given point on the celestial sphere using any of several Astronomical coordinate systems, where the references vary according to the particular system.
Astronomers can also measure the angular separation of two stars by imagining two lines through the centre of the Earth, each intersecting one of the stars. The angle between those lines can be
measured, and is the angular separation between the two stars.
Astronomers also measure the apparent size of objects. For example, the full moon has an angular measurement of approximately 0.5°, when viewed from Earth. One could say, "The Moon subtends an angle
of half a degree." The small-angle formula can be used to convert such an angular measurement into a distance/size ratio.
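The small-angle formula referred to here is D ≈ d·θ with θ in radians, where D is the physical size and d the distance. A quick sketch using rounded figures for the Moon (the numbers are illustrative, not precise):

```python
import math

def size_from_angle(angular_size_deg, distance):
    """Approximate physical size of an object that subtends a small
    angle (given in degrees) at the given distance: D ≈ d * θ_radians."""
    return distance * math.radians(angular_size_deg)

# ~0.5° at ~384,000 km gives roughly 3,350 km; the Moon's true diameter
# is about 3,474 km, so the rounded inputs land reasonably close.
print(round(size_from_angle(0.5, 384_000)))  # → 3351
```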
See also
External links
[PIC]: Extra PWM cycle? Should PR2=0xFE?
I've been thinking about this one for a while. Basically, If I
interpret the datasheet correctly (16F87X), one should set PR2 to 0xFE
*not* 0xFF to get a correct 0-255 PWM value (or 1023 or whatever, I'll
be dealing with 8-bit PWM here to make things simpler, but this
applies the same way if you take into account the extra 2 bits). I'm
not sure if this has ever been mentioned before, but I'm posting it to
see if anyone knows anything about this.
The reason is, PR2=0xFF would create 256 PWM cycles (always PR2+1
since it resets to 0 on the *next* increment cycle, which is why
TIMER2 increments the whole 0x00-0xFF when PR2=0xFF and doesn't skip
0xFF). But for a 255-value PWM you need *255* cycles, since 0% would
be all off and 100% would be all on. If PR2=0xFF that creates *256*
PWM cycles, of which one would always be a 0. In other words, with
0xFF loaded into CCPRxL, that would output 255/256 overall PWM power,
not 255/255 (nor 256/256). To be sure I wasn't getting this all wrong
(my brain started to melt a bit while thinking about this), I wrote a
short Python program that mimics the behavior of the PIC as stated in
the datasheet, and it behaves as I thought it would.
To put it another way, if PR2=0, then TIMER2 would continually reset
to 0, but you would still have a single PWM cycle which you could
control with CCPRxL=0x00 or 0x01. I.e. the number of PWM cycles is
PR2+1, which I think means PR2 should be 0xFE for full 255-step (or
1023-step, it applies the same way) PWM.
Here's a sample output from my program with PR2=0xFF and CCPRxL=0xFF:
TMR2: FE FF 00 01 02 // FD FE FF 00 01 02 // FD FE FF 00 01 02
PIN: 1 0 1 1 1 // 1 1 0 1 1 1 // 1 1 0 1 1 1
I.e. 255 periods PIN=1 and 1 period PIN=0.
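The Python model mentioned above isn't included in the post; the sketch below is my own reconstruction of the Timer2/PR2/duty comparison as described (a simplification of the real CCP module, not actual PIC register behaviour), and it reproduces the same counts: with PR2 = 0xFF the pin is low for exactly 1 of every 256 ticks even at maximum duty, while PR2 = 0xFE allows a true 100% output.

```python
def simulate_pwm(pr2, duty, ticks):
    """Crude model of the PWM as described above: TMR2 counts 0..PR2
    then wraps to 0 (a period of PR2 + 1 ticks); the pin goes high at
    the wrap and is driven low when TMR2 reaches the duty value."""
    tmr2, pin, highs = 0, 1, 0
    for _ in range(ticks):
        highs += pin          # sample the pin for this tick
        tmr2 += 1
        if tmr2 > pr2:        # wrap: a new PWM period starts high
            tmr2 = 0
            pin = 1
        if tmr2 == duty:      # duty match: drive the pin low
            pin = 0
    return highs

print(simulate_pwm(0xFF, 0xFF, 256))  # → 255 (255/256: never fully on)
print(simulate_pwm(0xFE, 0xFF, 255))  # → 255 (PR2=0xFE: all 255 ticks high)
print(simulate_pwm(0xFF, 0x80, 256))  # → 128 (half duty)
```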
Any thoughts on this? Does anyone know if this really works this way
on the silicon?
Hector Martin ([hidden email])
[FOM] From theorems of infinity to axioms of infinity
josef at us.es
Wed Mar 13 12:52:54 EDT 2013
Sure, the question may be raised in that context, although the
"must" in your last sentence does not follow logically -- it has strong
philosophical presuppositions.
My point was a different one: as you
acknowledge yourself, the natural numbers can be analyzed in set theory
without infinity, but the real numbers do require a set theory with infinity.
Moreover, I was pointing out that the notion of "all" real
numbers in an interval (e.g. all decimal expansions corresponding to the
unit interval) motivates the power set axiom.
All the best, Jose F
On 13/03/2013 17:04, Martin Dowd wrote:
> The existence of
infinite sets is in fact a question which arises with
> regard to the
natural numbers. ZFC - infinity has the hereditarily finite
> sets as a
model. It does not have a finite model, analogously to PA, or even
> Q
(see ). The universe of discourse of arithmetic is infinite and contains
> subsets, whose properties are readily studied using PA. An
axiom must be added to
> set theory, so that infinite sets are elemens
of the universe of discourse.
> - Martin Dowd
The Shape of Quadrupeds as a Function of Scale
applet-magic.com (Thayer Watkins, Silicon Valley & Tornado Alley, USA)
This is an analysis of the proportions between the various dimensions of a quadruped body as a function of size. Small animals such as cats have a thin, lithe shape with relatively thin legs whereas
elephants have a boxy shape with relatively thick legs. The legs whose strengths are proportional to their cross section have to hold up a weight which is proportion to the body volume. If animals
had the same shape then there would be a conflict between the weight increasing with the cube of the scale whereas the strength of the legs increasing only with the square of the scale. But animal
shape is not constant and changes with scale.
Length of the Torso of the Body
Consider a quadruped's torso as being made up of a horizontal square prism held up by four vertical square prisms (the legs).
The most convenient scale parameter to use for the analysis is the height and width of the body prism, which will be denoted as λ. (The analysis would be essentially the same if the cross section of the body were taken to be circular rather than square. In the analysis the exponents are the crucial elements; the coefficients are not.)
The cross section area of the square prism is then λ^2. The amount of weight per unit length the spinal column must support is then D=ρgλ^2, where ρg is the weight density of the body material.
The maximum stress occurs at the midpoint of the spinal column between the support elements above the front and rear legs. Let B be the length of the spinal column between these supporting
structures. Let x be the distance from the nearest support. The moment at the midpoint is
M = ∫[0]^B/2 Dx dx = D(1/2)(B/2)^2
= DB^2/8
This moment has to be counteracted by the stress in the spinal column. Assume the spinal column has a square cross section of X units on each side. The horizontal spinal column of a quadruped is
subjected to compressive stress at the top and tension at the bottom. The strain and hence the stress is proportional to the distance z from the midlevel of the cross section. Thus the stress is
equal to kz and the moment due to the stress at z is kz^2. The area of the infinitesimal element at z is Xdz, where X is the width of the column. The moment generated by this stress is
∫[-X/2]^X/2(kz)zXdz = 2Xk(X/2)^3/3
= kX^4/12
The constant k has to such that the moment generated by the stress in the spinal column counterbalances the moment generated from the load of body weight on the column; i.e.,
kX^4/12 = DB^2/8
and hence
k = 3DB^2/(2X^4)
The maximum stress is at z=X/2 so
T[max] = k(X/2) = (3DB^2/(2X^4))(X/2) = 3DB^2/(4X^3).
The length B must be such that the maximum stress in the spine does not exceed some maximum allowable stress T. This means that
B^2 = (4/3)TX^3/D
But D is proportional to λ^2 and X is proportional to λ so B^2 is proportional to λ and hence
B is proportional to λ^1/2 .
This means that if body thickness is doubled the body length increases not by 100% but by 41% instead. Thus the animal shape gets boxier as the scale increases. Another way of stating this is in terms
of the ratio B/λ:
B/λ is inversely proportional to λ^1/2
B/λ = α/λ^1/2.
Leg Thickness
The volume and hence the weight, which is proportional to Bλ^2, is proportional to λ^5/2. Therefore the cross section of the legs has to be proportional to λ^5/2 and hence the thickness of the legs W
has to be proportional to λ^5/4; i.e.,
W = βλ^5/4
and hence
W/λ = βλ^1/4
Thus larger scale quadrupeds' legs are relatively thicker.
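With the coefficients α and β normalized to 1 at a reference scale, the two scaling rules derived so far can be tabulated. The sketch below is purely illustrative; the ratios, not the absolute values, carry the meaning.

```python
def proportions(lam):
    """Relative proportions at body-thickness scale lam, with the
    coefficients set to 1 so that every ratio equals 1 at lam = 1."""
    return {
        "body length / thickness": lam ** -0.5,        # B/λ ∝ λ^(-1/2)
        "leg thickness / body thickness": lam ** 0.25, # W/λ ∝ λ^(1/4)
    }

# dog-sized (λ = 0.5), cow-sized reference (λ = 1), small-elephant-sized (λ = 2):
# the smaller animal is relatively longer and thinner-legged, the larger
# one boxier with relatively thicker legs, as described in the text.
for lam in (0.5, 1.0, 2.0):
    p = proportions(lam)
    print(lam,
          round(p["body length / thickness"], 3),
          round(p["leg thickness / body thickness"], 3))
```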
Although the thickness of the body is the most convenient parameter for analysis people more commonly think of a body length, such as B, as the measure of animal size. In terms of B the thickness of
a quadruped's leg is proportional to B^5/2. This relationship captures the perception of the thickness of a quadruped's legs as a function of quadruped size.
But total body length is the sum of body length B plus neck length N plus head size H. The analysis for neck length N is a bit more complicated than that for body length B because the stress created
at the shoulders depends not only on the weight distributed along the neck but also on the weight of the head.
Neck Length
The moment created by weight at a distance x from the shoulders is Exdx, where E is the linear density of weight along the neck and is porportional to λ^2. The total moment due to this neck weight is
the integral from 0 to N; i.e., EN^2/2. The moment created by the weight of the head is the head weight, which is proportional to H^3, times the lever arm for the head, which is N+H/2. The stress
parameter k in the spine at the shoulders has to satisfy the equation
kX^4/12 = EN^2/2 + (N+H/2)ρgH^3
where ρg is the weight density of body material. Again the maximum stress is kX/2 so the neck length N has to be such that the maximum stress is equal to the allowable stress T; i.e.,
[3EN^2 + 6(N+H/2)ρgH^3]/X^3 = T.
Since E is proportional to λ^2 and X and H are proportional to λ the equation to be satisfied by N is of the form
c[0]N^2/λ + c[1]N + c[2]λ - T = 0.
where the c[i ] are coefficients independent of λ. This equation has solutions of the form
N = [-c[1] ± (c[1]^2 - (4c[0]/λ)(c[2]λ-T))^1/2](λ/(2c[0]))
which reduces to
N = λ[-(c[1]/2c[0]) ± (c[1]^2-4c[0]c[2] + 4c[0]T/λ)^1/2/(2c[0])]
This means, roughly, that
N = d[0]λ ± d[1]λ^1/2
and hence
N/λ = d[0] ± d[1]/λ^1/2
with d[0] being more important if the neck is thin and d[1] more important if the neck is thick and the head relatively small.
There are two solutions and it appears that the one involving a negative sign is the empirically relevant one. Therefore
N = d[0]λ−d[1]λ^1/2
This means the neck becomes relatively shorter as scale increases.
Leg Length
The other element of animal shape is the length of the legs relative to the body size. Some of the factors which influence leg length are the need for providing clearance of the underside of the body
above vegetation and the need for compatability of leg length and neck length for ground grazing. But longer legs also create stability problems. For ground grazing the leg length L (from the ground
to the bottom of the body) has to be such that
L + λ = N + H.
This would mean that leg length would be of the form
L = e[0]λ − e[1]λ^1/2
and hence
L/λ = e[0] − e[1]/λ^1/2 .
Thus leg length is relatively shorter with increases in scale.
Leg Thickness Again
The previous analysis derived the relationship between leg thickness and scale under the assumption that the legs had only to support the torso body weight. When the weight of the neck and head are
also taken into account the relationship will be more complex in form but still will indicate that the legs must be relatively thicker for a larger scale quadruped.
Alternative Shapes
There are obviously alternative viable shapes. Some animal shapes involve a shorter neck and the forgoing of ground grazing for bush and tree grazing as in the case of elephants. The elephan trunk
provides a means of compensating for restrictions of a relatively shorter neck. The giraffe represents another strategy concerning grazing.
The display below depicts the effect of scale on animal shape. The first silhouette represents an animal of the size of a cow. The second reddish silhouette represents an animal whose scale parameter
of body thickness is one half that of the first, roughly the size of a large dog. The grayish third silhouette is for an animal whose body thickness is twice that of the first, roughly the size of a
small elephant.
The second silhouette shows the relatively elongated shape and the relatively thinner legs. The third shows the relative boxiness of the body and the relatively thicker legs.
WWCC Online Catalog
Master Course Outline
PHYS& 223
Engr Physics III w/Lab
Credits: 5
Clock Hours per Quarter: 60
Lecture Hours: 40  Lab Hours: 20
This is part three of a calculus-based physics sequence intended for physical science and engineering majors. Topics include electricity and magnetism, with selected topics from optics and modern
physics as time allows. Lab work required. Prerequisite: PHYS& 222. Co-requisite: MATH& 153. Formerly PHYS 203, Physics for Science and Engineering III.
Intended Learning Outcomes
Demonstrate knowledge of the fundamental concepts of electricity and electromagnetism, including electrostatic potential energy, electrostatic potential, potential difference, magnetic field, induction, and Maxwell's Laws.
Apply circuit theory, including Ohm's Law and Kirchhoff's Laws, to analysis of circuits with potential sources, capacitance, and resistance, including parallel and series capacitance and resistance.
Describe the effects of static charge on nearby materials in terms of Coulomb's Law.
Use Faraday's and Lenz's laws to find the electromagnetic forces.
Analyze a written problem or observed phenomena, simplify it, identify the key known and unknown features, make predictions, and evaluate those predictions based on the principles of physics.
Solve numerical problems related to a broad range of topics from electricity and magnetism using the mathematics of algebra, trigonometry, and calculus.
Investigate topics from electricity and magnetism by designing, performing, and reporting on laboratory experiments.
Course Topics
Electric charge and electric field
Gauss?s Law
Electric potential
Current and resistance in DC circuits
Magnetic field
Electromagnetic induction
Syllabi Listing See ALL Quarters
Course Year Quarter Item Instructor
PHYS& 223 Spring 2013 0864 Frank Skorina View Syllabus
Two Year Projected Schedule
│ Year One* │ Year Two** │
│ │ │ X │ X │ │ │ │ X │ X │ │
*If fall quarter starts on an odd year (2003, 2005, etc.), it's Year One.
**If fall quarter starts on an even year (2002, 2004, etc.), it's Year Two.
Average-Case Complexity Forum
Average case analysis always seemed more relevant than the worst case. Indeed, although NP-complete problems are generally thought of as being computationally intractable, some are easy on average;
and some are complete in the average case, indicating that they remain difficult on randomly generated instances. Motivated and guided by the desires to distinguish (standard, worst-case) NP-complete
problems that are "easy on average" from those that are "difficult on average," the study of average-case NP-completeness opens a new front in complexity theory. This forum provides an overview of
the recent research on average complexity, and shows the subtleties in formulating a coherent framework for studying average-case NP-completeness. It also provides an up-to-date list of works
published in the area. Advisory Board: Yuri Gurevich, Steven Homer, Leonid Levin.
Editor: Jie Wang.
Please send comments to wang@cs.uml.edu.
This material is based upon work supported by the National Science Foundation under Grant No. 9424164 and under Grant No. 9820611. Any opinions, findings and conclusions or recomendations expressed
in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).
[Numpy-discussion] 32/64-bit machines, integer arrays and python ints
Travis Oliphant oliphant.travis at ieee.org
Thu Sep 28 13:03:56 CDT 2006
Bill Spotz wrote:
> I am wrapping code using swig and extending it to use numpy.
> One class method I wrap (let's call it myElements()) returns an array
> of ints, and I convert it to a numpy array with
> PyArray_SimpleNew(1,n,'i');
You should probably use NPY_INT instead of 'i' for the type-code.
> I obtain the data pointer, fill in the values and return it as the
> method return argument.
> In python, it is common to want to loop over this array and treat its
> elements as integers:
> for row in map.myElements():
> matrix.setElements(row, [row-1,row,row+1], [-1.0,2.0,-1.0])
> On a 32-bit machine, this has worked fine, but on a 64-bit machine, I
> get a type error:
> TypeError: in method 'setElements', argument 2 of type 'int'
> because row is a <type 'int32scalar'>.
> It would be nice if I could get the integer conversion to work
> automatically under the covers, but I'm not exactly sure how to make
> that work.
Yeah, it can be confusing at first. You just have to make sure you are
matching the right C data types. I'm not quite sure what the problem
here is given your description, because I don't know what setElements expects.
My best guess is that it is related to the fact that a Python int uses
the 'long' c-type. Thus, you should very likely be using
PyArray_SimpleNew(1, n, NPY_LONG) instead of int so that your integer
array always matches what Python is using as integers.
The other option is to improve your converter in setElements so that it
can understand any of the array scalar integers and not just the default
Python integer.
The reason this all worked on 32-bit systems is probably the array
scalar corresponding to NPY_INT is a sub-class of the Python integer.
It can't be on a 64-bit platform because of binary incompatibility of
the layout.
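The size mismatch behind all this is easy to inspect from the standard library alone (no NumPy needed). The numbers below are typical of 64-bit Unix ABIs; on 32-bit machines and on 64-bit Windows, `long` is 4 bytes and the mismatch disappears.

```python
import ctypes
from array import array

# CPython's plain int has historically been backed by the C 'long',
# while the 'i' typecode (NPY_INT in numpy) is the C 'int'.
print(ctypes.sizeof(ctypes.c_int))   # 4 on mainstream platforms
print(ctypes.sizeof(ctypes.c_long))  # 8 on 64-bit Unix, 4 on 32-bit systems

# The stdlib array module exposes the same distinction:
a_int = array('i', [1, 2, 3])   # C int elements
a_long = array('l', [1, 2, 3])  # C long elements, matching Python's int
print(a_int.itemsize, a_long.itemsize)
```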
Hope that helps.
From List Comprehensions to Generator Expressions
List comprehensions were added in Python 2.0. This feature originated as a set of patches by Greg Ewing with contributions by Skip Montanaro and Thomas Wouters. (IIRC Tim Peters also strongly
endorsed the idea.) Essentially, they are a Pythonic interpretation of a well-known notation for sets used by mathematicians. For example, it is commonly understood that this:
{x | x > 10}
refers to the set of all x such that x > 10. In math, this form implies a universal set that is understood by the reader (for example, the set of all reals, or the set of all integers, depending on
the context). In Python, there is no concept of a universal set, and in Python 2.0, there were no sets. (Sets are an interesting story, of which more in a future blog post.)
This and other considerations led to the following notation in Python:
[f(x) for x in S if P(x)]
This produces a list containing the values of the sequence S selected by the predicate P and mapped by the function f. The if-clause is optional, and multiple for-clauses may be present, each with
their own optional if-clause, to represent nested loops (the latter feature is rarely used though, since it typically maps a multi-dimensional entity to a one-dimensional list).
List comprehensions provide an alternative to using the built-in map() and filter() functions. map(f, S) is equivalent to [f(x) for x in S] while filter(P, S) is equivalent to [x for x in S if P(x)].
One would think that list comprehensions have little to recommend themselves over the seemingly more compact map() and filter() notations. However, the picture changes if one looks at a more
realistic example. Suppose we want to add 1 to the elements of a list, producing a new list. The list comprehension solution is [x+1 for x in S]. The solution using map() is map(lambda x: x+1, S).
The part “lambda x: x+1” is Python’s notation for an anonymous function defined in-line.
It has been argued that the real problem here is that Python’s lambda notation is too verbose, and that a more concise notation for anonymous functions would make map() more attractive. Personally, I
disagree—I find the list comprehension notation much easier to read than the functional notation, especially as the complexity of the expression to be mapped increases. In addition, the list
comprehension executes much faster than the solution using map and lambda. This is because calling a lambda function creates a new stack frame while the expression in the list comprehension is
evaluated without creating a new stack frame.
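The two spellings of the example, side by side; they produce identical lists, so the choice is one of readability and speed:

```python
S = list(range(10))

via_comprehension = [x + 1 for x in S]
via_map = list(map(lambda x: x + 1, S))  # list() is needed in Python 3, where map() is lazy

print(via_comprehension == via_map)  # → True
print(via_comprehension[:3])         # → [1, 2, 3]
```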
Given the success of list comprehensions, and enabled by the invention of generators (of which more in a future episode), Python 2.4 added a similar notation that represents a sequence of results
without turning it into a concrete list. The new feature is called a “generator expression”. For example:
sum(x**2 for x in range(1, 11))
This calls the built-in function sum() with as its argument a generator expression that yields the squares of the numbers from 1 through 10 inclusive. The sum() function adds up the values in its
argument resulting in an answer of 385. The advantage over sum([x**2 for x in range(1, 11)]) should be obvious. The latter creates a list containing all the squares, which is then iterated over once
before it is thrown away. For large collections these savings in memory usage are an important consideration.
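The memory difference is visible with sys.getsizeof: a generator expression's footprint stays constant while the list's grows with the number of elements (exact byte counts vary across Python versions, so only the comparison matters here):

```python
import sys

n = 100_000
gen = (x ** 2 for x in range(1, n + 1))
lst = [x ** 2 for x in range(1, n + 1)]

print(sys.getsizeof(gen) < sys.getsizeof(lst))  # → True: the generator stays small
print(sum(x ** 2 for x in range(1, 11)))        # → 385, as in the text
```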
I should add that the differences between list comprehensions and generator expressions are fairly subtle. For example, in Python 2, this is a valid list comprehension:
[x**2 for x in 1, 2, 3]
However this is not a valid generator expression:
(x**2 for x in 1, 2, 3)
We can fix it by adding parentheses around the "1, 2, 3" part:
(x**2 for x in (1, 2, 3))
In Python 3, you also have to use these parentheses for the list comprehension:
[x**2 for x in (1, 2, 3)]
However, in a "regular" or "explicit" for-loop, you can still omit them:
for x in 1, 2, 3: print(x**2)
Why the differences, and why the changes to a more restrictive list comprehension in Python 3? The factors affecting the design were backwards compatibility, avoiding ambiguity, the desire for
equivalence, and evolution of the language. Originally, Python (before it even had a version :-) only had the explicit for-loop. There is no ambiguity here for the part that comes after 'in': it is
always followed by a colon. Therefore, I figured that if you wanted to loop over a bunch of known values, you shouldn't be bothered with having to put parentheses around them. This also reminded me
of Algol-60, where you can write:
for i := 1, 2, 3 do Statement
except that in Algol-60 you can also replace each expression with step-until clause, like this:
for i := 1 step 1 until 10, 12 step 2 until 50, 55 step 5 until 100 do Statement
(In retrospect it would have been cool if Python for-loops had the ability to iterate over multiple sequences as well. Alas...)
When we added list comprehensions in Python 2.0, the same reasoning applied: the sequence expression could only be followed by a close bracket ']' or by a 'for' or 'if' keyword. And it was good.
But when we added generator expressions in Python 2.4, we ran into a problem with ambiguity: the parentheses around a generator expression are not technically part of the generator expression syntax.
For example, in this example:
sum(x**2 for x in range(10))
the outer parentheses are part of the call to sum(), and a "bare" generator expression occurs as the first argument. So in theory there would be two interpretations for something like this:
sum(x**2 for x in a, b)
This could either be intended as:
sum(x**2 for x in (a, b))
or as:
sum((x**2 for x in a), b)
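The resolution was to refuse to guess: the ambiguous spelling is simply a SyntaxError, and each intended meaning must be written out with explicit parentheses. A quick check of the syntax rules (the forms are only compiled here, not executed):

```python
# The ambiguous call is rejected outright.
try:
    compile("sum(x**2 for x in a, b)", "<test>", "eval")
    raise AssertionError("expected SyntaxError")
except SyntaxError:
    pass

# Both disambiguated spellings are valid syntax.
compile("sum(x**2 for x in (a, b))", "<test>", "eval")
compile("sum((x**2 for x in a), b)", "<test>", "eval")
print("ambiguity checks passed")
```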
After a lot of hemming and hawing (IIRC) we decided not to guess in this case, and the generator expression was required to have a single expression (evaluating to an iterable, of course) after
its 'in' keyword. But at the time we didn't want to break existing code using the (already hugely popular) list comprehensions.
Then when we were designing Python 3, we decided that we wanted the list comprehension:
[f(x) for x in S if P(x)]
to be fully equivalent to the following expansion using the built-in list() function applied to a generator expression:
list(f(x) for x in S if P(x))
Thus we decided to use the slightly more restrictive syntax of generator expressions for list comprehensions as well.
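That equivalence is easy to verify in Python 3, using arbitrary stand-ins for f, S, and P (the names below are illustrative, not from the original post):

```python
def f(x):
    return x * x

def P(x):
    return x % 2 == 0

S = range(10)

# In Python 3, the list comprehension and list() applied to the
# corresponding generator expression produce identical results.
assert [f(x) for x in S if P(x)] == list(f(x) for x in S if P(x))
print([f(x) for x in S if P(x)])
```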
We also made another change in Python 3, to improve equivalence between list comprehensions and generator expressions. In Python 2, the list comprehension "leaks" the loop control variable into the
surrounding scope:
x = 'before'
a = [x for x in 1, 2, 3]
print x # this prints '3', not 'before'
This was an artifact of the original implementation of list comprehensions; it was one of Python's "dirty little secrets" for years. It started out as an intentional compromise to make list
comprehensions blindingly fast, and while it was not a common pitfall for beginners, it definitely stung people occasionally. For generator expressions we could not do this. Generator expressions are
implemented using generators, whose execution requires a separate execution frame. Thus, generator expressions (especially if they iterate over a short sequence) were less efficient than list comprehensions.
However, in Python 3, we decided to fix the "dirty little secret" of list comprehensions by using the same implementation strategy as for generator expressions. Thus, in Python 3, the above example
(after modification to use print(x) :-) will print 'before', proving that the 'x' in the list comprehension temporarily shadows but does not override the 'x' in the surrounding scope.
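The Python 3 behavior can be demonstrated directly; the comprehension's loop variable lives in its own scope:

```python
x = 'before'
a = [x for x in (1, 2, 3)]

# In Python 3 the comprehension did not leak its loop variable.
print(x)
assert x == 'before'
assert a == [1, 2, 3]
```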
And before you start worrying about list comprehensions becoming slow in Python 3: thanks to the enormous implementation effort that went into Python 3 to speed things up in general, both list comprehensions and generator expressions in Python 3 are actually faster than they were in Python 2! (And there is no longer a speed difference between the two.)
Of course, I forgot to mention that Python 3 also supports set comprehensions and dictionary comprehensions. These are straightforward extensions of the list comprehension idea.
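For completeness, the set and dict comprehension forms look like this:

```python
squares_set = {x**2 for x in (-2, -1, 0, 1, 2)}   # duplicates collapse
squares_dict = {x: x**2 for x in range(4)}        # key: value pairs

assert squares_set == {0, 1, 4}
assert squares_dict == {0: 0, 1: 1, 2: 4, 3: 9}

# A generator expression fed to set() produces the same result:
assert set(x**2 for x in (-2, -1, 0, 1, 2)) == {0, 1, 4}
```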
10 comments:
1. Very interesting and refreshing. Thanks!
2. very much informative! thanks
3. The parallel with set theory is even closer than you suggest, Guido. If you could write in set theory simply:
{ x | x > 10 }
Then that would expose you to Russell's Paradox, and you could also write:
{ x | x ∉ x }
And that would be very naughty indeed. :-)
So in real set notation, one must be so clean as to write, e.g.:
{ x ∈ ℚ | x > 10 }
Which is, after all, the same as Python list/generator comprehensions (other than a slight spelling difference).
4. Correct me if I'm wrong, but wouldn't you be able to support map/reduce just as efficiently as list comprehensions/generator expressions if you simply re-used the stack frame (a la tail recursion)?
5. @David Mertz: Very clever, but I *did* say that there was a universal set implied by the context. The math books on my shelves show lots of examples where the universal set is omitted from the
notation. Also, {x ∈ ℚ | x > 10} differs from list comprehensions because the latter have to repeat "x" twice: [x for x in Q if x > 10].
@James Brown: list comprehensions *are* map/filter. As for reduce and tail recursion, that discussion is closed.
6. Are you sure about your performance figures at the end there? Comprehensions are still much faster than generator expressions in Py3k for me (which makes sense, since the comprehensions still do
everything inline in one function, while the generator expression has to keep popping in and out of the generator frame from C code). To avoid the name lookup confounding the relative timings, I
used the following timeit snippets:
./python -m timeit -s "seq = [1]*1000" "[x for x in seq]; list"
./python -m timeit -s "seq = [1]*1000" "{x for x in seq}; list"
python -m timeit -s "seq = [1]*1000" "list(x for x in seq)"
Those timings were in the vicinity of 45-50 us, 60-70 us, 75-85 us.
The raw speed of the operations is also pretty similar between 2.7 and 3.2 for me (neither being noticeably slower or faster than the other just eyeballing the timeit results). Although both seem
a little faster than 2.6, so maybe the speedups were backported along with dict and set comprehensions (which seems likely, since it should be the same code that handles it all in both branches).
One thing that *is* much faster in Py3k is a module level list comprehension, since those now automatically benefit from function local variable access optimisations for their loop variables.
7. Oops, there should be a "./" at the start of last timeit snippet as well (and there was in the shell where I was running the test).
8. @Nick: Thanks for the detailed timings. I had timed something similar for a much smaller sequence, so my numbers probably include more per-loop setup overhead.
9. @James Brown:
It can be difficult to determine what qualifies as an implementation detail and what qualifies as a language feature, especially in the absence of a standard. While I think we have a great
understanding of where that line lies with Python today, there is one subject where the distinction is fairly clear, that is, when talking about speed. If we are talking about the time order of
an operation, then it is possibly part of the language (list indexing is specified as an O(1) operation in python). Otherwise, the time taken for any expression to execute is obviously not
considered part of the language; as long as evaluation of some expression terminates, it can take as long as it likes.
10. It seems like it shouldn't be bad for generator expressions to be a small constant factor slower than list comprehensions. Aren't generator expressions primarily a memory optimization, and only
secondarily possibly a speed optimization? I guess it's bad if you write "sum(x*x for x in L)" and it's slower than "sum([x*x for x in L])"...