| content (string, lengths 86–994k) | meta (string, lengths 288–619) |
|---|---|
A time domain approach for avoiding crosstalk in optical blocking multistage interconnection networks
- IEEE Communications Magazine , 1999
"... Optical interconnections for communication networks and multiprocessor systems have been studied extensively. A basic element of optical switching networks is a directional coupler with two
inputs and two outputs (hereafter referred to simply as switching elements or SEs). Depending on the control v ..."
Cited by 16 (7 self)
Optical interconnections for communication networks and multiprocessor systems have been studied extensively. A basic element of optical switching networks is a directional coupler with two inputs
and two outputs (hereafter referred to simply as switching elements or SEs). Depending on the control voltage applied to it, an input optical signal is coupled to either of the two outputs, setting
the SE to either the straight or the cross state. A class of topologies that can be used to construct optical networks is multistage interconnection networks (MINs), which interconnect their inputs
and outputs via several stages of SEs. Although optical MINs hold great promise and have demonstrated advantages over their electronic counterparts, they also introduce new challenges, such as how to
deal with the unique problem of avoiding crosstalk in the SEs. In this paper, we survey the research carried out, including major challenges encountered and approaches taken, during the past few
years on opt...
- JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, pp. 60, 2000
"... In this paper, we study optical multistage interconnection networks (MINs). Advances in electro-optic technologies have made optical communication a promising networking choice to meet the
increasing demands for high channel bandwidth and low communication latency of high-performance computing/commu ..."
Cited by 12 (6 self)
In this paper, we study optical multistage interconnection networks (MINs). Advances in electro-optic technologies have made optical communication a promising networking choice to meet the increasing
demands for high channel bandwidth and low communication latency of high-performance computing/communication applications. Although optical MINs hold great promise and have demonstrated advantages
over their electronic counterpart, they also hold their own challenges. Due to the unique properties of optics, crosstalk in optical switches should be avoided to make them work properly. Most of the
research work described in the literature is for electronic MINs, and hence, crosstalk is not considered. In this paper, we introduce a new concept, the semi-permutation, to analyze the permutation capability of optical MINs under the constraint of avoiding crosstalk, and apply it to two examples of optical MINs, the banyan network and the Benes network. For the blocking banyan network, we show that not all semi-permutations are realizable in one pass, and give the number of realizable semi-permutations. For the rearrangeable Benes network, we show that any semi-permutation is realizable in one pass and any permutation is realizable in two passes under the constraint of avoiding crosstalk. A routing algorithm for realizing a semi-permutation in a Benes network is also presented. With the speed and bandwidth provided by current optical technology, an optical MIN clearly demonstrates superior overall performance over its electronic MIN counterpart.
- IEEE/ACM Transactions on Networking , 2001
"... Because signals carried by two waveguides entering a common switch element would generate crosstalk, a regular multistage interconnection network (MIN) cannot be directly used as an optical
switch between inputs and outputs in an optical network. A simple solution is to use a 2 2 cube-type MIN to p ..."
Cited by 9 (2 self)
Because signals carried by two waveguides entering a common switch element would generate crosstalk, a regular multistage interconnection network (MIN) cannot be directly used as an optical switch between inputs and outputs in an optical network. A simple solution is to use a 2 2 cube-type MIN to provide the connections, which incurs a much larger hardware cost. A recent study proposed another solution, called the time-domain approach, which divides the optical inputs into several groups such that crosstalk-free connections can be provided by a regular MIN in several time slots, one for each group. Researchers studied this approach on Omega networks and defined a class to be the set of permutations realizable in two time slots on an Omega network. They proved that the size of this class is larger than the size of class Ω, where Ω consists of all permutations admissible to a regular (nonoptical) Omega network. This paper first presents an optimal ( log ) time algorithm for identifying whether a given permutation belongs to the class or not. Using this algorithm, this paper then proves an interesting result: the class is identical to the class Ω₁, which represents the set of permutations admissible to a nonoptical one-extra-stage Omega network. Index Terms---Conflict graph, crosstalk-free connection, dilated MIN, Omega network, optical switch, time-domain approach.
- JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING , 1997
"... A two-level process for diagnosing crosstalk in photonic Dilated Benes Networks (DBNs) is presented. At level one is the Test-All-Switches (TAS) procedure, which obtains the crosstalk ratios of
each and every switch in a N x N DBN in 4N tests using O(N \Delta log² N) calculations. One of its applica ..."
Cited by 3 (0 self)
A two-level process for diagnosing crosstalk in photonic Dilated Benes Networks (DBNs) is presented. At level one is the Test-All-Switches (TAS) procedure, which obtains the crosstalk ratios of each and every switch in an N × N DBN in 4N tests using O(N · log² N) calculations. One of its applications is to identify single or multiple crosstalk-faulty switches in the DBN which generate excessive crosstalk. To reduce the number of tests and the amount of computation when diagnosing only a few switches suspected of being crosstalk-faulty along an arbitrary path, the Test-One-Path (TOP)
procedure at level two is proposed. A recursive algorithm applicable to both procedures is used to configure the DBN for each test such that the necessary power measurements of the signals can be
taken accurately. An important feature of the proposed diagnostic process is its suitability for automated test generation.
, 2007
"... Analytical modeling techniques can be used to study the performance of optical multistage interconnection network (OMIN) effectively. MINs have assumed importance in recent times, because of
their cost-effectiveness. An N × N MIN consists of a mapping from N processors to N memories, with log 2 N ..."
Cited by 2 (0 self)
Analytical modeling techniques can be used to study the performance of an optical multistage interconnection network (OMIN) effectively. MINs have assumed importance in recent times because of their cost-effectiveness. An N × N MIN provides a mapping from N processors to N memories, with log₂ N stages of 2 × 2 switches and N/2 switches per stage. The interest here is in the performance of an unbuffered optical multistage interconnection network based on the banyan network. The uniform reference model approach is assumed for the purpose of analysis. In this paper the analytical modeling approach is applied to an N × N OMIN with limited crosstalk (conflicts between messages) up to (log₂ N − 1). Messages with switch conflicts satisfying the constraint of (log₂ N − 1) are allowed to pass in the same group, but in case of a link conflict, the message is routed in a different group. The analysis is performed by calculating the bandwidth and throughput of the network operating under a load l, with random traffic and a greedy routing strategy. A number of equations are derived using the theory of probability and the performance curves are plotted. The results obtained show that the
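For readers new to this kind of analysis, the classic recurrence for an unbuffered MIN built from 2 × 2 switches under the uniform reference model can be sketched in a few lines. This is the standard crosstalk-free version, not the limited-crosstalk grouping analysed in the paper, and the numbers are purely illustrative.

```python
# Sketch: classic bandwidth recurrence for an unbuffered N x N MIN made of
# log2(N) stages of 2x2 switches under the uniform reference model.
# p is the probability that a line entering a stage carries a request; the two
# inputs of a switch contend for each output, so an output is busy with
# probability 1 - (1 - p/2)^2.  (Crosstalk grouping is not modelled here.)
import math

def min_bandwidth(N, load):
    p = load                          # request probability offered at each network input
    for _ in range(int(math.log2(N))):
        p = 1.0 - (1.0 - p / 2.0) ** 2
    bandwidth = N * p                 # expected requests delivered per cycle
    throughput = p / load             # fraction of offered requests that get through
    return bandwidth, throughput

print(min_bandwidth(16, 1.0))         # e.g. a 16 x 16 banyan under full load
```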
"... Abstract. In this paper, a new class of optical multistage interconnection network (MIN) architecture is presented, which is constructed utilizing a modularization approach rather than the
traditional recursive or fixed exchange pattern methods. The modified architecture consists of an input module, ..."
Cited by 1 (0 self)
Abstract. In this paper, a new class of optical multistage interconnection network (MIN) architecture is presented, which is constructed utilizing a modularization approach rather than the traditional recursive or fixed exchange pattern methods. The modified architecture consists of an input module, an output module, two point-to-point (PTP) modules, and one modified multicast/broadcast (M/B) module. We also implement the multicast/broadcast module with the WDM technique, which reduces the hardware cost required for multicast and the re-computation cost for a new connection. We show that it has the best application flexibility and provides a multicast function without imposing significant negative impacts on the whole network. A new multicast connection pattern is also proposed in this paper, which makes it practical and economical to apply amplification in space-division networks. Compared with existing multicast architectures, this new architecture with Dilated Benes PTP modules has better performance in terms of system SNR, the number of switch elements, and system attenuation in point-to-point connections. Moreover, the multicast/broadcast module adopts the wavelength division multiplexing (WDM) technique to increase its multicast/broadcast assignment capacity. As a result, given m available distinguished wavelengths, one M/B module can support at most m M/B requests at the same time. The newly proposed M/B module with WDM also makes it more practical and economical to apply amplification in space-division networks.
- Proc. of Infocom 96 , 1996
"... A photonic switching network may be dilated in either space or time to establish crosstalk-free connections. Space-time tradeoffs are evaluated using an analytical model based on Markov process.
The probability that a new connection can be established without crosstalk is calculated by taking into c ..."
A photonic switching network may be dilated in either space or time to establish crosstalk-free connections. Space-time tradeoffs are evaluated using an analytical model based on a Markov process. The probability that a new connection can be established without crosstalk is calculated by taking into consideration the traffic correlations between stages. The model is applicable to both Banyan and dilated Banyan networks under either switch or stage control. Our results imply that space-time tradeoffs are improved by using Banyans instead of dilated Banyans. If hardware cost is not a concern, a multi-plane Banyan network, which is more effective than a dilated Banyan, may be used.
1 Introduction
Opto-electronic conversions required by electronic switching can become an impediment in high bit-rate (e.g., above 50 Gb/s) optical communication systems [3]. Photonic switching networks are useful since they can provide virtually unlimited communication bandwidth, as well as bit-rate and coding f...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1718143","timestamp":"2014-04-17T14:39:27Z","content_type":null,"content_length":"35227","record_id":"<urn:uuid:cbd5cb41-3c1e-4e11-8980-c68602bbf818>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Primality criteria for a specific class of Wagstaff numbers?
I asked this question on math.stackexchange but didn't get any answer.
Definition:
Let $W_p$ be a Wagstaff number of the form
$W_p=\frac{2^p+1}{3}$, with $p\equiv 1 \pmod 4$.
Next, define the sequence $S_i$ by
$S_i = 8S^4_{i-1}-8S^2_{i-1}+1$, with $S_0=\frac{3}{2}$.
How can one prove the following statement?
Conjecture:
$W_p$ is a prime iff $S_{\frac{p-1}{2}} \equiv \frac{3}{2} \pmod {W_p}$
I checked the statement for the following Wagstaff primes:
$W_5 , W_{13} , W_{17} , W_{61} , W_{101} , W_{313} , W_{701} , W_{1709} , W_{2617} , W_{10501} , W_{42737} ,W_{95369} , W_{138937} ,W_{267017}$
Also, for $p < 15000$ there is no composite $W_p$ that satisfies the relation from the conjecture.
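A minimal sketch of that verification, assuming Python 3 with sympy installed (sympy's `isprime` is used only as the reference test, and $\frac{3}{2}$ is read modulo $W_p$ as $3\cdot 2^{-1}$):

```python
# Sketch of the check described above: run the S-recurrence modulo W_p and
# compare the outcome with an ordinary primality test.
from sympy import isprime

def conjecture_holds_for(p):
    W = (2**p + 1) // 3
    inv2 = (W + 1) // 2                       # 2^{-1} mod W, since W is odd
    s = 3 * inv2 % W                          # S_0 = 3/2 (mod W_p)
    for _ in range((p - 1) // 2):
        s = (8 * pow(s, 4, W) - 8 * pow(s, 2, W) + 1) % W
    return (s == 3 * inv2 % W) == isprime(W)  # residue test agrees with primality?

# spot-check the equivalence for small exponents p ≡ 1 (mod 4)
assert all(conjecture_holds_for(p) for p in range(5, 302, 4))
```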
I am interested in hints (not a full solution).
nt.number-theory prime-numbers
1 Answer
Let $Y_0 = 3$ and $Y_{i+1} = Y_i^2-2$. Then your $S_i = \frac{1}{2} Y_{2i}$. So your condition would be that $Y_{p-1}\equiv Y_0 \pmod{W_p}$. This is nearly the same as (and I'd say equivalent to) the second conjecture posed here on mersenneforum.org. See the link for some discussion, partial results and variations of the test.
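A quick numerical illustration of that identity, again as a Python sketch with $\frac{1}{2}$ taken as $2^{-1} \bmod W_p$: advancing $Y$ twice for every step of $S$ keeps the two recurrences locked together.

```python
# Sketch: check S_i = Y_{2i} / 2 modulo W_p by advancing Y twice per S step.
def check_S_equals_half_Y(p, steps):
    W = (2**p + 1) // 3
    inv2 = (W + 1) // 2                       # 2^{-1} mod W
    s, y = 3 * inv2 % W, 3                    # S_0 = 3/2, Y_0 = 3
    for i in range(1, steps + 1):
        s = (8 * pow(s, 4, W) - 8 * pow(s, 2, W) + 1) % W
        y = (y * y - 2) % W                   # Y_{2i-1}
        y = (y * y - 2) % W                   # Y_{2i}
        assert s == y * inv2 % W, (p, i)
    return True

print(check_S_equals_half_Y(61, 30))          # (p - 1) / 2 = 30 steps for p = 61
```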
|
{"url":"http://mathoverflow.net/questions/91350/primality-criteria-for-specific-class-of-wagstaff-numbers?answertab=oldest","timestamp":"2014-04-20T09:03:56Z","content_type":null,"content_length":"50437","record_id":"<urn:uuid:164fd0d4-8732-4b1b-8628-369b6d607fbb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
all 13 comments
[–]dmazzoni (6 points)
You will have a few C.S. courses that require writing proofs of theorems, similar to proofs you might have done in high school geometry. You might do a bit of algebra or trigonometry in any of your
classes. In addition, the degree might require taking some other math courses.
If you already know you like programming, and if you think using math to solve problems is interesting, and difficult but not scary, then you should be fine. If you're scared of math I'd be more concerned.
Good luck!
[–] (3 points)
I am currently in my third year of a CS undergraduate degree. I have had to take Cal I- Cal III (three classes at my university, some others I see it done in two), Discrete Math, Linear Algebra,
Stats I and Stats II. In the grand scheme of things it isn't too much math. Just something you have to get done! Hope this helps!
[–]RobToastie (2 points)
At an undergrad level it isn't incredibly important for CS, but it certainly helps for some classes (upper level ones), and there are a few math classes generally required for CS. Basically, as long
as you got up through Algebra 2 (or whatever is before precal for you), you will be fine. You will probably be expected to take discrete mathematics and linear algebra for CS, and potentially stats
and/or calc. Math reasoning is also helpful, as it deals with proofs and you will probably see some of those as well.
As far as a first year goes, that will vary a lot by school, as that is generally when all the core classes (read: BS freshman classes) are taken, but will probably include a couple of introductory
programming classes, which will most likely either be in Java or Python.
If you have any other questions regarding a CS major, feel free to ask!
[–]foolinator (2 points)
Take a lot of math. Others may say otherwise, but having a real strong math background will help your career a lot. Those who disagree typically don't share that background so they don't have a frame
of reference.
Now, you don't use such math much at all, but catching onto the concepts of what makes a computer slow and why will be easier to understand and you'll be less intimidated to look into the dirty
details underneath.
Don't worry about what language you learn. You learn one well, you will catch onto others just fine (if you don't then you're in the wrong profession).
Focus on the work that won't change in 40 years from now that your job won't give you the time to focus on. Stuff like this:
• Sorting
• Searching
• data structures
• Graph algorithms
• Computer architecture
• Object oriented coding
• Big-oh notation
For any given problem, try to get comfortable analyzing these five things:
• Does the program exit with all inputs (completeness)
• Does the program work correct (correctness)
• Prove that it works (sometimes requires math)
• How fast is it? What's the worst and best case scenario (big-oh analysis of runtime)
• How much memory is used (big-oh analysis of memory)
Now, you won't understand any of what I said above until 1-2 years after study. That's just fine. But that list above, when you interview with the big guns like IBM and Google, that's the shit
they'll ask a lot about.
Also, if you can, try to get accepted into MIT, Berkeley, Carnegie Mellon, University of Illinois at Urbana-Champaign, Stanford, or Cornell. It sucks but in today's world you'll get interviews and jobs REALLY fast for your entire career if you choose one of those 6 schools.
[–]ehsteve23 (1 point)
First year of Uni I had to take a "Maths for CS" module, which was about the same as A-Level maths; integrals, differential equations, 2D vectors. Nothing hugely advanced. I didn't do very well in my A-Level maths but I still managed a decent grade in Maths for CS, and I haven't actually needed to use any of that maths since.
[–]trimalchio-worktime (1 point)
If you're able to do the CS part you'll be able to do the math. That's my view of it having recently completed my degree.
My program basically had us take Calculus I and II, Linear Algebra, Statistics (for engineers and math majors), and Discrete Math as the only really pure math classes. Our Algorithms class had some
math in it, and all our theory classes had varying levels of math in them. The programming classes were usually not much math at all if any.
The thing is though, my first 3 CS classes were corequisites of 3 different Math classes (Calc I, II and Discrete Math) and those classes were all harder for me than my CS classes then. The thing is
though, my CS classes got WAY harder after that, the hard upper level classes were much harder than doing some fairly straightforward if difficult math. So... don't worry, it's ALL difficult.
[–]i_invented_the_ipod (1 point)
It depends an awful lot on where you end up going to school. If the CS department is part of the Maths department, and/or if the degree is very theory-focused, there may be a surprising number of
not-obviously-relevant maths classes you're required to complete.
On the other hand, at other schools, you just might need a couple of Calculus classes, Linear Algebra, and a formal Logic class to graduate. Check course requirements online to see what you're
getting into.
[–]TheChance (1 point)
It will of course depend on your school and so forth. I'm a second-year student on the west coast of the United States, and the "recommended" computer science track has been like so:
First year: get through discrete mathematics and, ideally, begin calculus track. Take introduction to computer science, which will introduce career paths, the basics of machine architecture and
theory, the fundamentals behind code and programming (but no actual programming) and so on. Otherwise, finish general education prerequisites for your major, such as writing and social studies.
Second year: finish calculus track. Begin "real" computer science classes and your "other" science track (it's going to be calculus-based physics, so prepare yourself) in the fall. Squeeze other
gen-ed prerequisites into the cracks and that's a full courseload.
Have fun and remember that there are lots of subreddits designed specifically to help students like us!
[–]BrosEquis (1 point)
No, you don't need a good background in Math to study CS. A typical CS degree will require you to take sufficient levels of math.
That being said, take as much Math as you can. Double Major in it if possible. You're going to want to die during those 4 years, but your 22 year old self that had job offers paying 90-120k doing Big
Data Algorithm/Analysis work is praising your name.
|
{"url":"http://www.reddit.com/r/AskComputerScience/comments/15cs9o/studying_cs_at_undergraduate_level/","timestamp":"2014-04-16T22:48:15Z","content_type":null,"content_length":"82425","record_id":"<urn:uuid:3413ba2b-d512-4834-889a-445649b630b0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical proof reveals magic of Ramanujan's genius
PROOFS are the currency of mathematics, but Srinivasa Ramanujan, one of the all-time great mathematicians, often managed to skip them. Now a proof has been found for a connection that he seemed to
mysteriously intuit between two types of mathematical function.
The proof deepens the intrigue surrounding the workings of Ramanujan's enigmatic mind. It may also help physicists learn more about black holes - even though these objects were virtually unknown
during the Indian mathematician's lifetime.
Born in 1887 in Erode, Tamil Nadu, Ramanujan was self-taught and worked in almost complete isolation from the mathematical community of his time. Described as a raw genius, he independently
rediscovered many existing results, as well as making his own unique contributions, believing his inspiration came from the Hindu goddess Namagiri. But he is also known for his unusual style, often
leaping from insight to insight without formally proving the logical steps in between. "His ideas as to what constituted a mathematical proof were of the most shadowy description," said G. H. Hardy, Ramanujan's mentor and one of his few collaborators.
Despite these eccentricities, Ramanujan's work has often proved prescient. This year is the 125th anniversary of his birth, prompting Ken Ono of Emory University in Atlanta, Georgia, who has
previously unearthed hidden depths in Ramanujan's work, to look once more at his notebooks and letters. "I wanted to go back and prove something special," says Ono. He settled on a discussion in the
last known letter penned by Ramanujan, to Hardy, concerning a type of function now known as a modular form.
Functions are equations that can be drawn as graphs on an axis, like a sine wave, and produce an output when computed for any chosen input or value. In the letter, Ramanujan wrote down a handful of
what were then totally novel functions. They looked unlike any known modular forms, but he stated that their outputs would be very similar to those of modular forms when computed at roots of 1, such as -1, a square root of 1. Characteristically, Ramanujan offered neither proof nor explanation for this conclusion.
It was only 10 years ago that mathematicians formally defined this other set of functions, now called mock modular forms. But still no one fathomed what Ramanujan meant by saying the two types of
function produced similar outputs for roots of 1.
Now Ono and colleagues have exactly computed one of Ramanujan's mock modular forms for values very close to -1. They discovered that the outputs rapidly balloon to vast, 100-digit negative numbers,
while the corresponding modular form balloons in the positive direction.
Ono's team found that if you add the corresponding outputs together, the total approaches 4, a relatively small number. In other words, the difference in the value of the two functions, ignoring
their signs, is tiny when computed for -1, just as Ramanujan said.
The result confirms Ramanujan's incredible intuition, says Ono. While Ramanujan was able to calculate the value of modular forms, there is no way he could have done the same for mock modular forms,
as Ono now has. "I calculated these using a theorem I proved in 2006," says Ono, who presented his insight at the Ramanujan 125 conference in Gainesville, Florida, this week. "It is inconceivable he
had this intuition, but he must have."
Figuring out the value of a modular form as it balloons is comparable to spending a coin in a particular shop and then predicting which town that coin will end up in after a year.
Guessing the difference between regular and mock modular forms is even more incredible, says Ono, like spending two coins in the same shop and then predicting they will be very close a year later.
Though Ono and colleagues have now constructed a formula to calculate the exact difference between the two types of modular form for roots of 1, Ramanujan could not possibly have known the formula,
which arises from a bedrock of modern mathematics built after his death.
"He had some sort of magic tricks that we don't understand," says Freeman Dyson of the Institute for Advanced Study in Princeton, New Jersey.
While modular forms are mostly related to abstract problems, Ono's formula could have applications in calculating the entropy of black holes (see "The black hole connection").
So will Ono's work turn out to be the last of Ramanujan's contributions? "I'm so tempted to say that," says Ono. "But I won't be surprised if I'm dead wrong."
The black hole connection
A new formula, inspired by the mysterious work of Srinivasa Ramanujan, could improve our understanding of black holes.
Devised by Ken Ono of Emory University in Atlanta, Georgia, the formula concerns a type of function called a mock modular form (see main story). These functions are now used to compute the entropy of
black holes. This property is linked to the startling prediction by Stephen Hawking that black holes emit radiation.
"If Ono has a really new way of characterising a mock modular form then surely it will have implications for our work," says Atish Dabholkar, who studies black holes at the French National Centre for
Scientific Research in Paris. "Mock modular forms will appear more and more in physics as our understanding improves."
|
{"url":"http://www.newscientist.com/article/mg21628904.200","timestamp":"2014-04-19T09:44:16Z","content_type":null,"content_length":"51504","record_id":"<urn:uuid:eccc884c-b772-4ab7-bb2b-2d3888e0c466>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
|
p-Laplacian Equation with Sign-Changing Weight Functions
ISRN Mathematical Analysis
Volume 2014 (2014), Article ID 461965, 7 pages
Research Article
Existence of Nontrivial Solutions of p-Laplacian Equation with Sign-Changing Weight Functions
Département de Mathématiques, Faculté des Sciences de Tunis, Campus Universitaire, 2092 Tunis, Tunisia
Received 30 September 2013; Accepted 9 December 2013; Published 12 February 2014
Academic Editors: E. Colorado, L. Gasinski, and D. D. Hai
Copyright © 2014 Ghanmi Abdeljabbar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper shows the existence and multiplicity of nontrivial solutions of the p-Laplacian problem for with zero Dirichlet boundary conditions, where is a bounded open set in , if , if ), , is a
smooth function which may change sign in , and . The method is based on Nehari results on three submanifolds of the space .
1. Introduction
In this paper, we are concerned with the multiplicity of nontrivial nonnegative solutions of the following elliptic equation: where is a bounded domain of , if , if , , is positively homogeneous of
degree ; that is, holds for all and the sign-changing weight function satisfies the following condition:
(A) with ,, and.
In recent years, several authors have used the Nehari manifold and fibering maps (i.e., maps of the form , where is the Euler function associated with the equation) to solve semilinear and
quasilinear problems. For instance, we cite papers [1–9] and references therein. More precisely, Brown and Zhang [10] studied the following subcritical semilinear elliptic equation with sign-changing
weight function: where . Also, the authors in [10] by the same arguments considered the following semilinear elliptic problem: where . Exploiting the relationship between the Nehari manifold and
fibering maps, they gave an interesting explanation of the well-known bifurcation result. In fact, the nature of the Nehari manifold changes as the parameter crosses the bifurcation value.
Inspired by the work of Brown and Zhang [10], Nyamouradi [11] treated the following problem: where is positively homogeneous of degree .
In this work, motivated by the above works, we give a very simple variational method to prove the existence of at least two nontrivial solutions of problem (1). In fact, we use the decomposition of
the Nehari manifold as vary to prove our main result.
Before stating our main result, we need the following assumptions:(H[1]) is a function such that (H[2]), , and for all .We remark that using assumption (H[1]), for all , , we have the so-called Euler
identity: Our main result is the following.
Theorem 1. Under the assumptions (A), (H[1]), and (H[2]), there exists such that for all , problem (1) has at least two nontrivial nonnegative solutions.
This paper is organized as follows. In Section 2, we give some notations and preliminaries and we present some technical lemmas which are crucial in the proof of Theorem 1. Theorem 1 is proved in
Section 3.
2. Some Notations and Preliminaries
Throughout this paper, we denote by the best Sobolev constant for the operators , given by where . In particular, we have with the standard norm Problem (1) is posed in the framework of the Sobolev
space . Moreover, a function in is said to be a weak solution of problem (1) if Thus, by (6) the corresponding energy functional of problem (1) is defined in by In order to verify , we need the
following lemmas.
Lemma 2. Assume that is positively homogeneous of degree ; then is positively homogeneous of degree .
Proof. The proof is the same as that in Chu and Tang [4].
In addition, by Lemma 2, we get the existence of positive constant such that
Lemma 3 (see [12], Theorem A.2). Let and such that Then for every , one has ; moreover the operator defined by is continuous.
Lemma 4 (See Proposition 1 in [13]). Suppose that verifies condition (12). Then, the functional belongs to , and where denotes the usual duality between and (the dual space of the sobolev space ).
As the energy functional is not bounded below in , it is useful to consider the functional on the Nehari manifold: Thus, if and only if Note that contains every nonzero solution of problem (1).
Moreover, one has the following result.
Lemma 5. The energy functional is coercive and bounded below on .
Proof. If , then by (16) and condition (A) we obtain So, it follows from (8) that Thus, is coercive and bounded below on .
Define Then, by (16) it is easy to see that for , Now, we split into three parts
Lemma 6. Assume that is a local minimizer for on and that . Then, in (the dual space of the Sobolev space E).
Proof. Our proof is the same as that in Brown-Zhang [10, Theorem 2.3].
Lemma 7. One has the following:(i)if , then ;(ii)if , then and ;(iii)if , then .
Proof. The proof is immediate from (21), (22), and (23).
From now on, we denote by the constant defined by then we have the following.
Lemma 8. If , then .
Proof. Suppose otherwise, that such that . Then for , we have From the Hölder inequality, (6) and (8), it follows that Hence, it follows from (27) that then, On the other hand, from condition (A), (8
) and (26) we have So, Combining (30) and (32), we obtain , which is a contradiction.
By Lemma 8, for , we write and define Then, we have the following.
Lemma 9. If , then for some depending on , and .
Proof. Let . Then, from (23) we have So Thus, from the definition of and , we can deduce that .
Now, let . Then, using (6) and (8) we obtain this implies that In addition, by (18) and (38) Thus, since , we conclude that for some . This completes the proof.
For with , set Then, the following lemma holds.
Lemma 10. For each with , one has the following:(i)if , then there exists unique such that and (ii)if , then there are unique such that and
Proof. We fix with and we let Then, it is easy to check that achieves its maximum at . Moreover,
(i) We suppose that . Since as , for and for . There is a unique such that .
Now, it follows from (14) and (27) that Hence, . On the other hand, it is easy to see that for all Thus, .
(ii) We suppose that . Then, by (A), (8) and the fact that we obtain Then, there are unique and such that , , and . We have , and Thus, This completes the proof.
For each with , set Then we have the following.
Lemma 11. For each with , one has the following:(i)if , then there exists a unique such that and (ii)if , then there are unique such that and
Proof. For with , we can take and similar to the argument in Lemma 9, we obtain the results of Lemma 10.
Proposition 12. (i) There exist minimizing sequences in such that
(ii) There exist minimizing sequences in such that
Proof. The proof is almost the same as that in Wu [14, Proposition 9] and is omitted here.
3. Proof of Our Result
Throughout this section, the norm is denoted by for and the parameter satisfies .
Theorem 13. If , then, problem (1) has a positive solution in such that
Proof. By Proposition 12(i), there exists a minimizing sequence for on such that Then by Lemma 5, there exists a subsequence and in such that This implies that as .
Next, we will show that By Lemma 3, we have where . On the other hand, it follows from the Hölder inequality that Hence, as .
By (57) and (58) it is easy to prove that is a weak solution of (1).
Since then by (57) and Lemma 9, we have as . Letting , we obtain Now, we aim to prove that strongly in and .
Using the fact that and by Fatou's lemma, we get This implies that Let ; then by Brézis-Lieb Lemma [3] we obtain Therefore, strongly in .
Moreover, we have . In fact, if then, there exist such that and . In particular we have . Since there exists such that . By Lemma 10, we have which is a contradiction.
Finally, by (63) we may assume that is a nontrivial nonnegative solution of problem (1).
Theorem 14. If , then, problem (1) has a positive solution in such that
Proof. By Proposition 12(ii), there exists a minimizing sequence for on such that Moreover, by (23) we obtain So, by (38) and (72) there exists a positive constant such that This implies that By (70)
and (71), we obtain clearly that is a weak solution of (1).
Now, we aim to prove that strongly in . Supposing otherwise, then By Lemma 9, there is a unique such that . Since , for all , we have which is a contradiction. Hence strongly in .
This imply that By Lemma 5 and (74) we may assume that is a nontrivial solution of problem (1).
Now, we begin to show the proof of Theorem 1: by Theorem 13, we obtain that for all , problem (1) has a nontrivial solution . On the other hand, from Theorem 14, we get the second solution . Since ,
then and are distinct.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
|
{"url":"http://www.hindawi.com/journals/isrn.mathematical.analysis/2014/461965/","timestamp":"2014-04-21T03:09:03Z","content_type":null,"content_length":"646964","record_id":"<urn:uuid:43d21cb3-23c6-4bb9-aa3f-81ba0105ffcf>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How the Brain Keeps the Eyes Still
Results 1 - 10 of 57
- J. Neurosci , 1999
"... this paper I present a network model of spiking neurons in which synapses are endowed with realistic gating kinetics, based on experimentally measured dynamical properties of cortical synapses.
I will focus on how delay-period activity could be generated by neuronally plausible mechanisms; the issue ..."
Cited by 103 (15 self)
this paper I present a network model of spiking neurons in which synapses are endowed with realistic gating kinetics, based on experimentally measured dynamical properties of cortical synapses. I
will focus on how delay-period activity could be generated by neuronally plausible mechanisms; the issue of memory field formation will be addressed in a separate study. A main problem to be
investigated is that of "rate control" for a persistent state: if a robust persistent activity necessitates strong recurrent excitatory connections, how can the network be prevented from runaway
excitation in spite of the powerful positive feedback, so that neuronal firing rates are low and comparable to those of PFC cells (10–50 Hz)? Moreover, a persistent state may be destabilized
because of network dynamics. For example, fast recurrent excitation followed by a slower negative feedback may lead to network instability and a collapse of the persistent state. It is shown that
persistent states at low firing rates are usually stable only in the presence of sufficiently slow excitatory synapses of the NMDA type. Functional implications of these results for the role of
Received April 14, 1999; revised Aug. 12, 1999; accepted Aug. 12, 1999
- Neural Computation , 2004
"... A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such mod-els remains largely unclear.
In this paper, we show that a network architecture com-monly used to model the cerebral cortex can implem ..."
Cited by 59 (4 self)
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework introduced in the paper posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
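For orientation, the textbook filtering (forward) recursion that such a network is said to implement can be sketched as follows; this is a generic illustration with made-up toy matrices, not the paper's neural architecture.

```python
# Standard HMM filtering (forward) recursion: predict with the transition
# matrix, weight by the observation likelihood, renormalise.  Toy numbers only.
import numpy as np

def hmm_filter(T, E, observations, prior):
    """T[i, j] = P(state j at t+1 | state i at t); E[j, o] = P(obs o | state j)."""
    belief = prior.copy()
    for o in observations:
        belief = E[:, o] * (T.T @ belief)   # predict, then apply the likelihood
        belief /= belief.sum()              # normalise to a posterior distribution
    return belief

T = np.array([[0.9, 0.1],                   # two hidden states (e.g. motion left/right)
              [0.1, 0.9]])
E = np.array([[0.8, 0.2],                   # two possible observations per time step
              [0.3, 0.7]])
print(hmm_filter(T, E, [0, 0, 1, 0], np.array([0.5, 0.5])))
```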
- Advances in Neural Information Processing Systems 10 , 1998
"... A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex
energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the repre ..."
Cited by 33 (2 self)
A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy
functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent
the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds.
1 INTRODUCTION
The rectified Gaussian distribution is a modification of the
standard Gaussian in which the variables are constrained to be nonnegative. This simple modification brings increased representational power, as illustrated by two multimodal examples of the
rectified Gaussian, the competitive and the cooperative distributions. The modes of the competitive distribution are well-separated by regions of low probability. The modes of the cooperative
distribution are closely sp...
- Advances in Neural Information Processing Systems , 1998
"... One approach toinvariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as
attractive xed points of the dynamics. I argue for a modi cation of this picture: if an object has a cont ..."
Cited by 29 (5 self)
One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation. A classic approach to invariant object recognition is to use a recurrent neural network as an associative memory [1]. In spite of the intuitive appeal and biological plausibility of this approach, it has largely been abandoned in practical applications.
, 2003
"... A parametric working memory network stores the information of an analog stimulus in the form of persistent neural activity that is monotonically tuned to the stimulus. The family of persistent
firing patterns with a continuous range of firing rates must all be realizable under exactly the same exter ..."
Cited by 28 (4 self)
A parametric working memory network stores the information of an analog stimulus in the form of persistent neural activity that is monotonically tuned to the stimulus. The family of persistent firing
patterns with a continuous range of firing rates must all be realizable under exactly the same external conditions (during the delay when the transient stimulus is withdrawn). How this can be
accomplished by neural mechanisms remains an unresolved question. Here we present a recurrent cortical network model of irregularly spiking neurons that was designed to simulate a somatosensory
working memory experiment with behaving monkeys. Our model reproduces the observed positively and negatively monotonic persistent activity, and heterogeneous tuning curves of memory activity. We show
that fine-tuning mathematically corresponds to a precise alignment of cusps in the bifurcation diagram of the network. Moreover, we show that the fine-tuned network can integrate stimulus inputs over
several seconds. Assuming that such time integration occurs in neural populations downstream from a tonically persistent neural population, our model is able to account for the slow ramping-up and
ramping-down behaviors of neurons observed in prefrontal cortex.
, 2000
"... According to a popular hypothesis, short-term memories are stored as persistent neural activity maintained by synaptic feedback loops. This hypothesis has been formulated mathematically in a
number of recurrent network models. Here we study an abstraction of these models, a single neuron with a sy ..."
Cited by 16 (2 self)
According to a popular hypothesis, short-term memories are stored as persistent neural activity maintained by synaptic feedback loops. This hypothesis has been formulated mathematically in a number
of recurrent network models. Here we study an abstraction of these models, a single neuron with a synapse onto itself, or autapse. This abstraction cannot simulate the way in which persistent
activity patterns are distributed over neural populations in the brain. However, with proper tuning of parameters, it does reproduce the continuously graded, or analog, nature of many examples of
persistent activity. The conditions for tuning are derived for the dynamics of a conductance-based model neuron with a slow excitatory autapse. The derivation uses the method of averaging to
approximate the spiking model with a nonspiking, reduced model. Short-term analog memory storage is possible if the reduced model is approximately linear, and its feedforward bias and autapse
strength are precisely...
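The tuning condition alluded to here can be caricatured with a plain rate model (only a sketch, not the conductance-based neuron of the paper): when self-excitation exactly cancels the leak, any firing rate persists, and any mistuning makes the stored value drift.

```python
# Caricature of the tuning idea: a leaky unit driven by its own synapse,
# dr/dt = -r + w*r.  With w exactly 1 the leak is cancelled and any rate is a
# fixed point (analog memory); detuning w makes the memory decay or grow.
def held_rate(w, r0=5.0, dt=0.001, t_end=2.0):
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * (-r + w * r)        # no external input during the delay period
    return r

for w in (0.99, 1.00, 1.01):
    print(f"w = {w:.2f}: rate after 2 s = {held_rate(w):.3f}")
```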
- In Principles of Neural Ensemble and Distributed Coding in the Nervous System , 2001
"... Introduction Studies of population coding, which explore how the activity of ensembles of neurons represent the external world, normally focus on the accuracy and reliability with which sensory
information is represented. However, the encoding strategies used by neural circuits have undoubtedly bee ..."
Cited by 15 (1 self)
Introduction Studies of population coding, which explore how the activity of ensembles of neurons represent the external world, normally focus on the accuracy and reliability with which sensory
information is represented. However, the encoding strategies used by neural circuits have undoubtedly been shaped by the way the encoded information is used. The point of encoding sensory information
is, after all, to generate and guide behavior. The ease and efficiency with which sensory information can be processed to generate motor responses must be an important factor in determining the
nature of a neuronal population code. In other words, to understand how populations of neurons encode we cannot overlook how they compute. Gain modulation, which is seen in many cortical areas, is a
change in the response amplitude of a neuron that is not accompanied by a modification of response selectivity. Just as population coding is a ubiquitous form of information representation, gain
- I. Existence. SIAM Journal on Applied Dynamical Systems , 2003
"... Abstract. We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected
by lateral inhibition with a nonsaturating nonlinear gain function. When two standing single-pulse soluti ..."
Cited by 15 (1 self)
Abstract. We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by
lateral inhibition with a nonsaturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable, and the large pulse is stable. The large single pulse
is bistable with the “all-off ” state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have similar stability
properties to large pulses but double pulses are unstable.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=323323","timestamp":"2014-04-23T08:51:42Z","content_type":null,"content_length":"38892","record_id":"<urn:uuid:34388d92-f186-4f7e-97ff-2f09a17b0108>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This text is designed for graduate-level courses in real analysis.
Real Analysis, Fourth Edition, covers the basic material that every graduate student should know in the classical theory of functions of a real variable, measure and integration theory, and some of
the more important and elementary topics in general topology and normed linear space theory. This text assumes a general background in undergraduate mathematics and familiarity with the material
covered in an undergraduate course on the fundamental concepts of analysis. Patrick Fitzpatrick of the University of Maryland—College Park spearheaded this revision of Halsey Royden’s classic text.
Table of Contents
1. The Real Numbers: Sets, Sequences and Functions
1.1 The Field, Positivity and Completeness Axioms
1.2 The Natural and Rational Numbers
1.3 Countable and Uncountable Sets
1.4 Open Sets, Closed Sets and Borel Sets of Real Numbers
1.5 Sequences of Real Numbers
1.6 Continuous Real-Valued Functions of a Real Variable
2. Lebesgue Measure
2.1 Introduction
2.2 Lebesgue Outer Measure
2.3 The σ-algebra of Lebesgue Measurable Sets
2.4 Outer and Inner Approximation of Lebesgue Measurable Sets
2.5 Countable Additivity and Continuity of Lebesgue Measure
2.6 Nonmeasurable Sets
2.7 The Cantor Set and the Cantor-Lebesgue Function
3. Lebesgue Measurable Functions
3.1 Sums, Products and Compositions
3.2 Sequential Pointwise Limits and Simple Approximation
3.3 Littlewood's Three Principles, Egoroff's Theorem and Lusin's Theorem
4. Lebesgue Integration
4.1 The Riemann Integral
4.2 The Lebesgue Integral of a Bounded Measurable Function over a Set of Finite Measure
4.3 The Lebesgue Integral of a Measurable Nonnegative Function
4.4 The General Lebesgue Integral
4.5 Countable Additivity and Continuity of Integraion
4.6 Uniform Integrability: The Vitali Convergence Theorem
5. Lebesgue Integration: Further Topics
5.1 Uniform Integrability and Tightness: A General Vitali Convergence Theorem
5.2 Convergence in measure
5.3 Characterizations of Riemann and Lebesgue Integrability
6. Differentiation and Integration
6.1 Continuity of Monotone Functions
6.2 Differentiability of Monotone Functions: Lebesgue's Theorem
6.3 Functions of Bounded Variation: Jordan's Theorem
6.4 Absolutely Continuous Functions
6.5 Integrating Derivatives: Differentiating Indefinite Integrals
6.6 Convex Functions
7. The L^p Spaces: Completeness and Approximation
7.1 Normed Linear Spaces
7.2 The Inequalities of Young, Hölder and Minkowski
7.3 L^p is Complete: The Riesz-Fischer Theorem
7.4 Approximation and Separability
8. The L^p Spaces: Duality and Weak Convergence
8.1 The Dual Space of L^p
8.2 Weak Sequential Convergence in L^p
8.3 Weak Sequential Compactness
8.4 The Minimization of Convex Functionals
9. Metric Spaces: General Properties
9.1 Examples of Metric Spaces
9.2 Open Sets, Closed Sets and Convergent Sequences
9.3 Continuous Mappings Between Metric Spaces
9.4 Complete Metric Spaces
9.5 Compact Metric Spaces
9.6 Separable Metric Spaces
10. Metric Spaces: Three Fundamental Theorems
10.1 The Arzelà-Ascoli Theorem
10.2 The Baire Category Theorem
10.3 The Banach Contraction Principle
11. Topological Spaces: General Properties
11.1 Open Sets, Closed Sets, Bases and Subbases
11.2 The Separation Properties
11.3 Countability and Separability
11.4 Continuous Mappings Between Topological Spaces
11.5 Compact Topological Spaces
11.6 Connected Topological Spaces
12. Topological Spaces: Three Fundamental Theorems
12.1 Urysohn's Lemma and the Tietze Extension Theorem
12.2 The Tychonoff Product Theorem
12.3 The Stone-Weierstrass Theorem
13. Continuous Linear Operators Between Banach Spaces
13.1 Normed Linear Spaces
13.2 Linear Operators
13.3 Compactness Lost: Infinite Dimensional Normed Linear Spaces
13.4 The Open Mapping and Closed Graph Theorems
13.5 The Uniform Boundedness Principle
14. Duality for Normed Linear Spaces
14.1 Linear Functionals, Bounded Linear Functionals and Weak Topologies
14.2 The Hahn-Banach Theorem
14.3 Reflexive Banach Spaces and Weak Sequential Convergence
14.4 Locally Convex Topological Vector Spaces
14.5 The Separation of Convex Sets and Mazur's Theorem
14.6 The Krein-Milman Theorem
15. Compactness Regained: The Weak Topology
15.1 Alaoglu's Extension of Helley's Theorem
15.2 Reflexivity and Weak Compactness: Kakutani's Theorem
15.3 Compactness and Weak Sequential Compactness: The Eberlein-Šmulian Theorem
15.4 Metrizability of Weak Topologies
16. Continuous Linear Operators on Hilbert Spaces
16.1 The Inner Product and Orthogonality
16.2 The Dual Space and Weak Sequential Convergence
16.3 Bessel's Inequality and Orthonormal Bases
16.4 Adjoints and Symmetry for Linear Operators
16.5 Compact Operators
16.6 The Hilbert Schmidt Theorem
16.7 The Riesz-Schauder Theorem: Characterization of Fredholm Operators
17. General Measure Spaces: Their Properties and Construction
17.1 Measures and Measurable Sets
17.2 Signed Measures: The Hahn and Jordan Decompositions
17.3 The Carathéodory Measure Induced by an Outer Measure
17.4 The Construction of Outer Measures
17.5 The Carathéodory-Hahn Theorem: The Extension of a Premeasure to a Measure
18. Integration Over General Measure Spaces
18.1 Measurable Functions
18.2 Integration of Nonnegative Measurable Functions
18.3 Integration of General Measurable Functions
18.4 The Radon-Nikodym Theorem
18.5 The Saks Metric Space: The Vitali-Hahn-Saks Theorem
19. General L^p Spaces: Completeness, Duality and Weak Convergence
19.1 The Completeness of L^p (X, μ), 1 ≤ p ≤ ∞
19.2 The Riesz Representation Theorem for the Dual of L^p (X, μ), 1 ≤ p ≤ ∞
19.3 The Kantorovitch Representation Theorem for the Dual of L^∞ (X, μ)
19.4 Weak Sequential Convergence in L^p (X, μ), 1 < p < ∞
19.5 Weak Sequential Compactness in L^1 (X, μ): The Dunford-Pettis Theorem
20. The Construction of Particular Measures
20.1 Product Measures: The Theorems of Fubini and Tonelli
20.2 Lebesgue Measure on Euclidean Space R^n
20.3 Cumulative Distribution Functions and Borel Measures on R
20.4 Carathéodory Outer Measures and Hausdorff Measures on a Metric Space
21. Measure and Topology
21.1 Locally Compact Topological Spaces
21.2 Separating Sets and Extending Functions
21.3 The Construction of Radon Measures
21.4 The Representation of Positive Linear Functionals on C_c(X): The Riesz-Markov Theorem
21.5 The Riesz Representation Theorem for the Dual of C(X)
21.6 Regularity Properties of Baire Measures
22. Invariant Measures
22.1 Topological Groups: The General Linear Group
22.2 Fixed Points of Representations: Kakutani's Theorem
22.3 Invariant Borel Measures on Compact Groups: von Neumann's Theorem
22.4 Measure Preserving Transformations and Ergodicity: the Bogoliubov-Krilov Theorem
Purchase Info
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Real Analysis, CourseSmart eTextbook, 4th Edition
Format: Safari Book
$67.99 | ISBN-13: 978-0-321-65682-7
|
{"url":"http://www.mypearsonstore.com/bookstore/real-analysis-coursesmart-etextbook-0321656822","timestamp":"2014-04-21T04:49:39Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:3aa34320-9fdd-4192-bbfa-655cab96aea7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra: Polynomial Zeros & Multiplicities Video | MindBites
College Algebra: Polynomial Zeros & Multiplicities
About this Lesson
• Type: Video Tutorial
• Length: 8:09
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 87 MB
• Posted: 11/18/2008
This lesson is part of the following series:
College Algebra: Full Course (258 lessons, $198.00)
College Algebra Review (30 lessons, $59.40)
College Algebra: Polynomial & Rational Functions (23 lessons, $35.64)
College Algebra: Zeros of Polynomials (5 lessons, $7.92)
The zeros of a polynomial are just the places where a polynomial crosses the x-axis, or those values for x, which if you plug into the polynomial, give zero as a result. In this lesson, we will
define and discuss zeros and their multiplicity for a variety of different functions from the perspective of how to identify and count zeros using a mathematical formula or using a graph of the
function. We'll cover zeros and multiplicity for parabolas and various formulas like (x^2-4x+4), (7x^3+x), and (x+1)^2*(x-1)^3*(x^2-10).
This lesson is perfect for review for a CLEP test, mid-term, final, summer school, or personal growth!
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, College Algebra. This course and others are available from Thinkwell, Inc. The full course can be
found at http://www.thinkwell.com/student/product/collegealgebra. The full course covers equations and inequalities, relations and functions, polynomial and rational functions, exponential and
logarithmic functions, systems of equations, conic sections and a variety of other AP algebra, advanced algebra and Algebra II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
Prof Burger ROCKS!!!!
~ cbrown
I use these videos to help me teach my Alg I students. It is great to provide them with two teachers' point of views. Thanks Prof Burger!
So now we have a sense of how to find zeros of polynomials and what they even mean. So zeros of polynomials are just the places where the polynomial crosses the x-axis, or those values for x, which
if you plug into the polynomial give zero. So you can just take the polynomial, set it equal to zero, solve for x, and you’ve got it.
Okay, but let’s take a look at the possible answers that you can get when you actually do that solving. Let’s just think first about the quadratic and parabolas, since we’re sort of familiar with
those. So if you take a look at a parabola like this, graphically, what you’d see is the roots of the parabola or the zeros of the quadratic would be those places where, in fact, the curve crosses
x-axis, so you’d see them here, there are two. So it turns out there will always be two zeros to a quadratic. Now, those two zeros may, in fact, be imaginary numbers, in which case the picture would
look more like this. It won’t actually cross the real x-axis, but there would be these two imaginary solutions out there somewhere. The other possibility, the more popular one, for at least me and
probably for you, is that there are two real solutions, and there you go. You can see those two points.
Now, there’s a third possibility that’s sort of fun to think about, and that’s the one where the curve just touches, just grazes the x-axis. We saw this earlier when we were playing match game.
Here’s another example of it. Now, in this case, believe it or not, we say--mathematicians don’t know how to count--we say that there are still two zeros, but they both happen to be the same. We say,
“That’s a zero and that’s a zero.” So we just count that same zero twice. The reason why we count it twice is because you see the thing that sort of comes down, just nicks it, and goes up. Notice
that if it were just a little bit lower we would have two, and so if you go up a little higher, we still have two, and so when you keep doing that, basically, mathematicians say, “There’s still two,
but they happen to be equal.”
In this case we say that this is a zero with multiplicity 2. It just means that it’s a zero that actually happens twice. So this is also a zero right there that also happens twice. You can imagine
with a cubic seeing the same kind of phenomena. It would look like this. You’d go up, you’d come down, and just caress the x-axis, and then go back up. This actually has three real roots, three real
zeros. One of them is here, and then two of them are right there. This is a zero with multiplicity 2, because it comes down, just hits it, and comes up.
So, in fact, we can talk about the multiplicities of zeros, and that’s what I wanted to talk to you about just now. Let’s take a look at some examples. For example, let’s look at x² - 4x +4. And I
want to know what are all the zeros of this object and what are the multiplicities. So I set this equal to zero and solve, and I hope, hope, hope that I can factor this, because if I can’t factor it
this is going to be bad news. This plus sign tells me that we’re going to have both the same sign and they will both be minus, so I have a minus, a minus. Something whose product is 4 and sum is -4,
so that would be 2, 2. And look what I see. Well, I’m going to write this out in this sort of compact way. I see that I have (x - 2)². So what are the two roots? What are the two zeros of this?
Either x = 2 or x = 2. So this actually has a zero of x = 2, but with multiplicity 2, because it happens twice, and that little 2 tells you that--it happens twice.
So here’s an example of a parabola that’s going to come down and just touch the world at that one point. The mathematicians would say it touches it sort of twice--once here, and then once at itself.
How about a cubic? Let's look at 7x³ + x. Let's find all the roots and the multiplicities. So I would set this equal to zero and try to solve. Well, gee, factoring this looks a little bit tricky.
It’s a cubic. How do you do that? I don't know. However, look, there’s a common factor of x. Let’s begin by just pulling that out. If I pull out the x, what I see is a 7x² + 1 = 0. And you may be
saying, “Now, wait a minute. Where is the +1 coming from? Shouldn’t it just be zero if I take out the x?” No, because when I distribute this back I’ve got to have that x there. And so there’s an
invisible 1 multiplying the x, and so when I pull out the x, don’t forget your special invisible friend, 1.
Now, what can I do with this? Well, I can factor this some more or I can just set everything equal to zero. This means x = 0, so x = 0. There’s one solution, and the other solution is this, which
would mean that 7x² + 1 = 0. And we can try to factor that, or what I’m going to do is just bring the 1 over to the other side as a -1, and I would see 7x² = -1, divide both sides by 7, and I would
see x² = -1/7 and then what happens? Well, I could just take ± square roots, but notice what’s going to happen when I do that. I’m going to see that x = ± the square root of -1/7, which is imaginary.
And so what I see here is ± i/√7, so, in fact, I see that there are three different roots. The root x = 0, the root x = i/√7, and the root x = -i/√7. So these are three roots and they all appear just once, so
I’d say each of these roots have multiplicity 1. They just appear once. There’s no coming down and just touching the x-axis. These cross right through, except these are imaginary, so in fact, we’re
only going to cross the x-axis once, and these are going to be somewhere out in no man’s land.
One last example. This one’s real easy, so don’t worry about it. Suppose I just tell you that I have a function f(x) and I give it to you in factored form and I want you to find the zeros of this
polynomial, but also the multiplicities. So there it is: (x + 1)²(x - 1)³(x² - 10). To find the zeros I set that whole thing equal to zero. So I just set it equal to zero, and what happens? If I set
it equal to zero, well either this term equals zero, or that term equals zero, or that term equals zero, because I have a product of numbers giving zero. Well, if this term equals zero, the only way
that can happen is if x = -1. So x = -1 is a zero, but it has multiplicity 2. It actually occurs twice. If I were to write this out I could say (x + 1)(x + 1). So I see that the -1 appears twice. So
I say this appears with multiplicity 2.
What about this? Well, here I would see that three times: x - 1, x - 1, x - 1. So the root or the zero, x = 1, that’s what makes it a zero, would actually have multiplicity 3. And what’s the solution
here? The solution here is going to be what? Well, x = ± the square root of 10. So these both occur with multiplicity 1, because they only appear once. All right. Try some of these on your own and
see if you can start to find zeros and multiplicities.
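To put that last example together in one place: for f(x) = (x + 1)²(x - 1)³(x² - 10), the zeros are x = -1 with multiplicity 2, x = 1 with multiplicity 3, and x = √10 and x = -√10, each with multiplicity 1. Counting every zero as many times as its multiplicity gives 2 + 3 + 1 + 1 = 7, which matches the degree of the polynomial.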
Fuzzy Urn
A friend of mine had an idea that he said required accurate control of water temperature. This got me thinking, and I figured it was a good opportunity to have a play. To control the temperature I decided to try to put some of the theory I was taught at uni into practice: even though it is probably unnecessarily complicated, I had it in my head to use a variable AC chopper to vary the power and Fuzzy Logic to manage the control.
AC Chopper : Theory
To vary the power I decided to use a variable AC chopper, basically a light dimmer I could control with a microcontroller. It works by chopping up the AC signal as you can see in the image below, where alpha (α) is the triggering angle. As alpha increases, the signal is off for longer and therefore the voltage is lower; it's simple really. The equation below calculates the chopped RMS voltage, where Vp is 240 V in Australia and alpha ranges from 0 to π radians.
Chopped AC Signal
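The author's equation is shown only as an image, so as a rough stand-in, here is a minimal sketch in C of the standard phase-control RMS relation (assuming a sinusoid that conducts from the firing angle alpha to pi in each half-cycle, and noting that 240 V mains is an RMS figure, so the peak Vp is about 340 V):

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* RMS of a phase-angle controlled sine with peak Vp, conducting from the
 * firing angle alpha (radians) to pi in each half-cycle:
 *   Vrms(alpha) = Vp * sqrt((pi - alpha + sin(2*alpha)/2) / (2*pi))        */
static double chopped_rms(double vp, double alpha)
{
    return vp * sqrt((M_PI - alpha + sin(2.0 * alpha) / 2.0) / (2.0 * M_PI));
}

int main(void)
{
    double vp = 240.0 * sqrt(2.0);               /* peak of 240 Vrms mains */
    for (double a = 0.0; a <= M_PI + 1e-9; a += M_PI / 4.0)
        printf("alpha = %4.2f rad -> Vrms = %6.1f V\n", a, chopped_rms(vp, a));
    return 0;
}

At alpha = 0 this gives the full mains RMS and at alpha = pi it gives zero, which matches the dimming behaviour described above.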
AC Chopper : Triac
To trigger the load I have decided to use a TRIAC. There are probably other ways of doing this, but power electronics was not one of my best subjects, so I just picked something I recognised the name of. We were taught about these clever-sounding semiconductors at uni; what they seemed to forget was how you actually use one. My first attempt left me quite frustrated when I tried to trigger the TRIAC directly from my microcontroller, and somehow all I managed was to trip the safety switch. After poking around on the Internet, it turns out you need to drive the TRIAC using some sort of driver such as an optocoupler. After digging through data sheets I ended up with the circuit below. I trigger the optocoupler using an NPN transistor; even though my microcontroller could probably have supplied sufficient current, it's nice to have some isolation. The chip I picked also has a maximum input voltage of 3.3 V, so I used the zener diode to drop my 5 V signal down to 3.3 V. All of the resistors are for limiting the current into my three switching devices; they might change depending on what you use, so just check the data sheets, and try not to do what I do, which is just wing it and see if anything burns out. A and N are the active and neutral mains inputs and RL is the load, so in this case it's the urn. Try and keep the high voltage side isolated, preferably somewhere where it cannot be touched: it is dangerous!
Triac Trigger Circuit
AC Chopper : Zero Crossing Detector
So now I can turn a load on and off using a microcontroller. It was about here that I realised I need to be able to time alpha, and for that I need to be able to detect the zero crossing point of the AC signal (more backwards thinking on my part).
If you go to Google and search for zero crossing detector you will find plenty of different examples, some a lot simpler than others. What I found out in an expensive mistake is that the simple ones are generally more dangerous, as they lack isolation between the high and low voltage sections; of course I didn't think of this until after destroying my laptop. To avoid anything like this happening to you, only use a circuit if you are 100% confident in its design.
The circuit that I used is shown below. This design is not entirely mine; I adapted a comparator circuit I found on the Internet, as my knowledge of analogue circuits is limited (when I find the link again I'll remember to acknowledge the designer). I used a transformer to drop the 240 V AC down to a safer 12 V AC; the comparator compares the AC signal with a reference voltage and toggles the output when they match, giving a nice TTL level output shown below.
Comparator Output
The circuit also includes a rectifier and regulator for a convenient 5 V power supply for the project. The values of the resistors R1-R4 set the reference level for the comparator, so use values as close as possible to what is in the diagram. The LM319 comparator needs a positive and negative power supply; to get the negative 5 V I used a 555 timer as a voltage inverter, so just google "555 voltage inverter" and you will find heaps of examples. Now the zero crossing point can be seen at each rising and falling edge on the output of the comparator.
Zero Crossing Schematic
AC Chopper : Finished Product
Using Eagle CAD software I laid out two PCBs: one board for all the high voltage circuitry and one for everything else. After blowing up one computer I am now being very cautious. The photos below show the finished products, which I made using the toner transfer method (there are plenty of tutorials on the Internet for PCB manufacture, so I won't go into it). I used a nice big heat sink on the TRIAC, as it produces a fair amount of heat when running the 1500 W urn I am using; the voltage regulator on the zero crossing board also gets a bit too hot, so I used a small piece of aluminium angle as a heat sink.
High Voltage Board
Zero Crossing Board
To test and demonstrate the working circuits I replaced the urn with a standard light globe and used a trimpot and analogue-to-digital converter to vary the triggering angle alpha between 0 and π radians. You can see in the video that I have successfully made an overly complicated light dimmer controlled by my mini dragon board.
Light Dimming Test
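Putting the zero crossing detector and the TRIAC driver together, the firing delay after each zero crossing is just alpha scaled onto the half-cycle period (10 ms for 50 Hz mains). The sketch below only illustrates that timing logic; the delay and gate functions are print-out stand-ins, not the actual mini dragon firmware:

#include <stdio.h>
#include <stdint.h>

#define HALF_CYCLE_US 10000u        /* 50 Hz mains: one half-cycle is 10 ms */

/* Stand-ins for the real GPIO/timer calls on the micro; here they just
 * print what would happen so the timing logic can be checked on a PC.     */
static void delay_us(uint32_t us) { printf("  wait %lu us\n", (unsigned long)us); }
static void triac_gate(int on)    { printf("  gate %s\n", on ? "ON" : "OFF"); }

/* On each zero crossing: wait alpha/pi of the half-cycle, then fire a short
 * gate pulse; the TRIAC then latches on until the next zero crossing.      */
static void zero_cross_event(double alpha_rad)
{
    uint32_t delay = (uint32_t)(alpha_rad / 3.14159265 * HALF_CYCLE_US);
    delay_us(delay);
    triac_gate(1);
    delay_us(50);
    triac_gate(0);
}

int main(void)
{
    double angles[4] = { 0.0, 0.785, 1.571, 2.356 };   /* 0, pi/4, pi/2, 3pi/4 */
    for (int i = 0; i < 4; i++) {
        printf("alpha = %.3f rad:\n", angles[i]);
        zero_cross_event(angles[i]);
    }
    return 0;
}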
Fuzzy Logic : Introduction
To control the temperature I decided to use Fuzzy Logic, mainly because it was taught at uni, but yet again the lecturers seemed to miss out the part where they actually teach us how to use it. It also has other advantages, such as allowing control of the system without needing a mathematical model of it; this will be convenient if I ever want to change the size or power of the urn I am using.
Fuzzy logic is almost exactly like normal logic except it's... fuzzy... Consider a system where we measure temperature, and first let's look at normal logic. We will say everything above 40 degrees is hot and everything below 40 is not hot. What if the temperature is 39.9 degrees? Normal logic would say it is not hot, whereas I would say that it may as well be hot, which is basically what Fuzzy Logic does. In Fuzzy Logic the terms hot and not hot would each be given a membership function, as you can see in the picture below. Now if the temperature is 39.9 degrees our Fuzzy system would say it is 99.5% hot and 0.5% not hot. Combining a membership function with a linguistic rule table, a Fuzzy system can produce some sort of output depending on the degree of membership of the input variables.
Fuzzy Logic Example
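As a small illustration of that hot/not-hot example, the sketch below uses an assumed linear ramp from 20 to 40 degrees (chosen because it reproduces the 99.5%/0.5% split quoted above; the real membership plot is only shown as an image):

#include <stdio.h>

/* Linear ramp from "not hot" (<= 20 C) to fully "hot" (>= 40 C): an assumed
 * shape, since the original membership plot is an image.                    */
static double degree_hot(double temp_c)
{
    if (temp_c <= 20.0) return 0.0;
    if (temp_c >= 40.0) return 1.0;
    return (temp_c - 20.0) / 20.0;
}

int main(void)
{
    double t = 39.9, hot = degree_hot(t);
    printf("%.1f C -> %.1f%% hot, %.1f%% not hot\n", t, 100 * hot, 100 * (1 - hot));
    return 0;   /* 99.5% hot, 0.5% not hot, as in the text */
}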
When using Fuzzy Logic for control, the two commonly used inputs are the error e (how close is the measured temperature to the desired temperature) and the change in error Δe (how fast is the measured temperature approaching the desired temperature). For the urn I am just using one input, e, as it's a fairly slow system. If you are using this as a lesson in Fuzzy for your own project, it should not be very hard to expand upon what I have done to add the second input; when I do a project that requires it I will link it here.
Fuzzy logic : Control
For the urn I have created the input membership function below (not drawn to scale). As there is no cooling device on the urn, we can pretty much ignore the positive error inputs, as the output will always be OFF. As I was drawing this I started to think that I could have achieved just as accurate control by simply switching the urn on and off like a normal thermostat, but I guess that wasn't exactly the point of the project.
Once you have drawn the input function you have to put it in a usable form, e.g. equations; the equations for the ZERO and -SMALL error cases are shown below, and it's as simple as finding the equations of the lines. These equations fuzzify the input: they calculate the degree of membership of the input for each case (ZERO, -SMALL, etc.) and are used to calculate the output. For your functions to make sense there should be no more than two cases with a degree of membership greater than 0 at any one time, and their sum should always equal 1.
Fuzzy Input Function
Fuzzy Input Degree of Membership Equations
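As a sketch of what those fuzzification equations do, the code below evaluates triangular membership functions for the error input. The breakpoints (peaks at 0, -1 and -2 degrees with a width of 1) are assumptions, picked because they reproduce the 0.8/0.2 split used in the worked example further down; the author's actual figure may use different numbers:

#include <stdio.h>

/* Degree of membership of a triangular set peaking at 'centre' with
 * half-width 'width'; adjacent sets overlap so the degrees sum to 1.   */
static double tri(double e, double centre, double width)
{
    double d = e > centre ? e - centre : centre - e;
    return d >= width ? 0.0 : 1.0 - d / width;
}

int main(void)
{
    double e = -1.8;                     /* error input from the worked example below */
    double d_zero  = tri(e,  0.0, 1.0);  /* assumed peaks at 0, -1, -2 degrees        */
    double d_small = tri(e, -1.0, 1.0);
    double d_med   = tri(e, -2.0, 1.0);
    printf("e=%.1f: ZERO=%.2f -SMALL=%.2f -MED=%.2f (sum=%.2f)\n",
           e, d_zero, d_small, d_med, d_zero + d_small + d_med);
    return 0;   /* prints -SMALL=0.20 and -MED=0.80, summing to 1 */
}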
There are two more parts to a Fuzzy controller: the output function and the rule table. The output function for my urn is shown below; as you would expect, it is the opposite of the input function, and it performs defuzzification to produce an output. The urn I am using has a maximum power of 1500 W. I weighted the medium output more heavily by giving it a larger area, not for any particular reason; I just wanted to see if it worked (you will see why this works later on). Below the output function is the rule table. One of the great things about Fuzzy Logic is the linguistic rule table: it links the input and the output in the most logical way possible, with words.
Fuzzy Output Function
Fuzzy Rule Table
If error is ZERO then power is OFF
If error is -SMALL then power is LOW
If error is -MED then power is MEDIUM
If error is -LARGE then power is HIGH
If error is -V LARGE then power is MAX
If error is POSITIVE then power is OFF
One thing I should mention here: if you have two inputs you will have a two-dimensional rule table (it will probably actually look like a table) and you will probably be using AND and OR. In Fuzzy Logic, if x and y are the degrees of membership for two different inputs:
x AND y = min(x,y)
x OR y = max(x,y)
Fuzzy Logic : Worked Example
Well, now we have an input function and a rule table that will fire one or two rules depending on the results from the input membership functions; to actually calculate an output the results must be defuzzified. There are several ways of doing this; I have used the centre of mass or centroid method (this is why the medium output's larger area causes it to have heavier weighting on the result). The easiest way to describe this step is with an example, so say we have an input error of -1.8 degrees and go from there.
e = -1.8
-MED: D(e) = 0.8
-SMALL: D(e) = 0.2
Therefore the two rules that fire are MEDIUM at 0.8 (80%) and LOW at 0.2 (20%); drawing this on the output membership function and filling in the area gives the polygon shown below.
I found the defuzzified output by calculating the centroid of the red polygon using the general equations given below (this is probably the worst part of Fuzzy Logic). In the summations, i is the index of the point on the polygon and xi and yi are the Cartesian coordinates of the i-th point. To find the position of the points in the x direction all you need to do is find the equations for the output membership functions and rearrange them for x. This sort of thing makes my head hurt unless I draw pictures; if you are having trouble, remember they are just equations of lines (y = mx + c), so draw plenty of pictures and you will figure it out, or wait until I upload my code and just see what I did.
Output Membership Function Example
Output Polygon
Calculate The Centroid
Continuing the example and using these formulas, the centroid, and therefore the output power, is 706 W; finally we have the defuzzified result. Now all that is required is a temperature sensor and all of the code to implement this Fuzzy system.
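The centroid equations are also only shown as images, but they are the standard polygon (shoelace) centroid formulas. The sketch below shows the calculation; the vertex list is a placeholder trapezoid rather than the exact clipped shape from the figure, so it lands near, but not exactly on, the 706 W result:

#include <stdio.h>

/* Centroid (x only) of a simple polygon via the shoelace formulas:
 *   A  = 1/2 * sum(x_i*y_{i+1} - x_{i+1}*y_i)
 *   Cx = 1/(6A) * sum((x_i + x_{i+1}) * (x_i*y_{i+1} - x_{i+1}*y_i))       */
static double centroid_x(const double *x, const double *y, int n)
{
    double a = 0.0, cx = 0.0;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        double cross = x[i] * y[j] - x[j] * y[i];
        a  += cross;
        cx += (x[i] + x[j]) * cross;
    }
    return cx / (3.0 * a);   /* = cx / (6 * (a/2)) */
}

int main(void)
{
    /* Placeholder trapezoid standing in for the clipped output region on a
     * 0..1500 W power axis (counter-clockwise vertex order).               */
    double x[] = { 200, 1200, 1100, 400 };
    double y[] = {   0,    0,  0.8, 0.8 };
    printf("defuzzified output ~ %.0f W\n", centroid_x(x, y, 4));
    return 0;
}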
Temperature Sensor
To test my code I have been using a TC1047 temperature-to-voltage converter. I picked these chips because they come precalibrated, they are rated to 125 degrees, and they were available from element14 for cheap. These things come in the SOT23 package, which, as you can see in the picture below, is tiny. You will need to solder a filter capacitor and three wires to that tiny thing, so I hope you have a soldering iron with a very fine tip; if not, there are plenty of other sensors out there in larger packages.
TC1047 Sensor
I bought about 5 of these because I knew I would either lose or destroy most of them while soldering them (and I did). Once I had soldered on a 0.1 uF 0805-style ceramic capacitor and the three wires, I dipped the entire thing in JB Weld to waterproof it and left it overnight to set. I think when I finish the testing stage I will replace this with a more permanent stainless steel sensor similar to the ones used for measuring coolant temperatures in cars.
Finished Sensor
To use this sensor all you need to do is read the voltage from the output pin (check the data sheet) through an analogue to digital converter and apply the equation below to put the voltage into
degrees C; because it's precalibrated, no further calibration is required.
Voltage To Temperature
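A short sketch of that conversion (assuming the usual TC1047 scaling of 10 mV per degree C with a 500 mV output at 0 degrees C, and a 10-bit ADC on a 5 V reference; check these against the datasheet and your supply):

#include <stdio.h>

/* Convert a raw 10-bit ADC reading of the TC1047 output into degrees C,
 * assuming Vout = 0.500 V + 0.010 V per degree C (check your datasheet). */
static double tc1047_to_celsius(unsigned adc_raw, double vref)
{
    double volts = (double)adc_raw * vref / 1023.0;
    return (volts - 0.500) / 0.010;
}

int main(void)
{
    unsigned raw = 198;   /* roughly 0.97 V on a 5 V reference             */
    printf("raw=%u -> %.1f C\n", raw, tc1047_to_celsius(raw, 5.0));
    return 0;             /* about 47 C, the set point used in the test    */
}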
I tested my urn by running it at several different set points ranging from 30 to 90 degrees. The photos below show my (nice tidy) test setup; you can see I have removed the old thermostat from the urn and I am using my Arduino to read and display the serial output on my computer screen. The screenshot is a bit hard to read, but I have highlighted in my main.c code the set point temperature of 47 degrees, and the serial output of the measured temperature, which is sent via my Arduino every second. The final photo is a temperature reading of the water using my brewer's thermometer; as you can see it's smack on 47 degrees C, so my overly complicated urn is a success.
Action Shot
Serial Screenshot
Final Temperature
Finishing Touches : LCD
I wanted to add an LCD display to the controller to keep an eye on temperature and run time; by using several menus and some buttons I can control the urn independently. I bought a simple 16x2 LCD display from Jaycar, model number SD1602G2. The pinout below I got from the datasheet, and I drew a wiring diagram of the setup I am using (don't forget the 330R resistor! Missing this will stop everything from working). I have tied the R/W pin to ground as I only want to write to the LCD, and the 10k pot is used for adjusting the contrast.
LCD Pinout
LCD Schematic
Finishing Touches : Case and Buttons
I wanted my controller to be a completely stand-alone device, so I have rigged up a case for all the electronics made from 100 mm PVC pipe. I installed a cooling fan on the back and a plug for the temperature sensor on the side, then painted the whole thing black. The finished product, as you can see in the photos below, looks more like a bomb than an urn controller. I have included a demo video of me going through the urn's menus using the buttons.
Finished Controller
Menu Test
Do good math jokes exist?
Have a good joke? Share.
I know this is subjective, but the principle "should be of interest to mathematicians" trumps. (I hope.)
Abstruse Goose is great for maths and physics jokes.
I first heard this on an episode of the Big Bang Theory, I don't know the origin.
The physicist asks the mathematician: "Why did the chicken cross the road?"
The mathematician ponders a while and then replies: "I have a solution, but it only works for a spherical chicken in a vacuum."
Q: Why was 3 afraid of 5?
A: Because "5 8 13."
(Works better when you actually say it out loud...)
Q: What did the threefold blown up at two points say while waiting in a long line for a restroom?
A: I have to pee too.
A pure and an applied mathematician were sitting in a bar when they spotted a hot chick 2 meters away. However, this was a weird place where they could take one 1-meter step and each consecutive step would have to be half the length of the previous one.
The pure mathematician was sad because he knew he could never get to the girl. The applied one was happy because he knew that for all practical purposes he could get close enough.
Posterior Analysis: when a statistician looks at the rear end of a member of the appropriate sex.
For actual humour, rather than simply bad puns, I recommend the books:
• A Random Walk in Science
• More Random Walks in Science
As well as the odd bad pun, they also contain many anecdotes demonstrating that scientists (and mathematicians) are also human. A few that have stuck in my memory: just about every "mathematics of big game hunting" method, the various "proof by ...", a (genuine!) article co-authored by a cat, and a disturbing article on refereemanship.
At a math party, everyone was having a good time. y was the DJ, and everybody was Riemannly drunk. Then, when x saw e^x crying in a corner, he asked: "Hey e^x, why don't you integrate?" "Because I always stay the same!!!"
Quite a few mathematics / academic jokes here.
An infinite number of mathematicians walk into a bar. The first one orders a beer, the second one orders half a beer, the third one a quarter of a beer, and so on. After a while of this, the bartender says, "Come on guys! So many people and not even a couple of beers??"
Test to tell the difference between a Physicist and a Mathematician
Consider the following scenario: a room with a sink at the far end with a working cold water faucet, plus a table with the following items on top – small bucket, ring stand, Bunsen burner, and a pack of matches. The problem is to boil water.
If the individual picks up the bucket from the table, walks to the sink and fills the bucket from the faucet, brings it back to the table, sets it on the ring stand, puts the Bunsen burner under the stand, and then lights the burner and waits for the water to boil … this establishes the baseline but does not separate which is the Physicist and which is the Mathematician.
Test scenario 2: the bucket is now sitting on the floor under the table and the problem is again to boil water.
If the individual picks up the bucket from under the table, walks directly to the sink and fills the bucket from the faucet, brings it back to the table, sets it on the ring stand, puts the Bunsen burner under the stand, and then lights the burner and waits for the water to boil … this proves that this individual is the Physicist.
However, if the individual picks up the bucket from under the table and places it back on top of the table, thus reducing the current problem to a form that they have previously solved … this proves that this individual is the Mathematician.
Fesenko's math joke collection, selected from the Cherkaev collection.
Ugh, why aren't these posted yet:
Q: What's purple and commutes? A: An Abelian grape.
Q: What's sour, yellow, and equivalent to the axiom of choice? A: Zorn's lemon.
Check out the book 777 Mathematical Conversation Starters by John de Pillis. The subject of the book is mathematics topics to talk about, but it is also full of interesting quotes, jokes, and cartoons.
If we can formalize the property of "being a good math joke" well enough to construct a Turing machine that checks it, then I think we can conclude they don't exist.
The reason is that in that case we can construct a Turing machine (say of length N) that checks each possible string, and stops only if a good math joke was found. The busy beaver function on N establishes an upper bound for the number of strings the machine needs to check before we can conclude that it would never halt (and therefore no good math jokes exist).
Based on empirical evidence, it may be possible that all those cases have already been checked (with a negative answer), which implies my thesis.
(I'm being ironic; I like many of the jokes posted here :P)
Q: What's purple and commutes? A: An abelian grape!
Dear All,
I just stumbled onto this site.
Among other things that I do (chemistry, music), I am a humor theorist who specializes in using mathematical methods to study humor (mostly, I study either the logic of humor or do neuromathematical modeling of how we think the brain responds to humor in places like the pre-frontal cortex and the brainstem).
In any case, I am a reviewer for Humor, which is THE peer-reviewed journal for humor studies, and I have written a review of exactly what you are looking for: a book of mathematical humor written by a mathematician. The book is called Comic Sections and was written by the Irish mathematician, Desmond McHale. Unfortunately, Humor is a subscription journal, so the review is unavailable, as is, apparently, the book. It is out of print. If you wish to contact him, his e-mail may be found through the math department at the University of Cork, Ireland.
Donald Casadonte
A mathematician in a job interview was asked, "We need to see what kind of attitude you have toward problem solving. So tell us, is the glass half empty or half full."
His reply, "It's 1-x."
-William Mauritzen
After a 1-dimensional collapse, what did the 1-simplex show that new chick from logistics?
As it would be impossible to prove that good math jokes don't exist I would have to say that the probability is better than zero.
The answer to the question posed in the title "Do Good Math Jokes Exist" is yes and is easily found on google.
12 ? The least integer that symbolizes all integers just by itself. Successors: 123, 1234...
How many cups is 1.9 liters?
You asked:
How many cups is 1.9 liters?
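For reference, a quick worked conversion (assuming US customary cups of 236.588 mL): 1.9 liters is 1900 mL, and 1900 / 236.588 ≈ 8.03, so 1.9 liters is roughly 8 US cups. With 250 mL metric cups it is 1900 / 250 = 7.6 cups.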
Patent application title: METHOD FOR SINGLE STREAM BEAMFORMING WITH MIXED POWER CONSTRAINTS
System and method for calculating a transmitter beamforming vector related to a channel vector h under per-antenna power constraints combined with a total power constraint, under per-antenna power constraints combined with an overall line of sight (LOS) effective isotropic radiated power (EIRP) constraint, and under all three constraints. Calculating a transmitter beamforming vector may be done in the transmitter, in the receiver and fed back to the transmitter, or in both. The method may be adapted to perform with a multi-antenna receiver and with multi-carrier systems.
A method for producing a transmitter beamforming vector under per-antenna power constraints and an effective isotropic radiated power (EIRP) constraint for N antennas, the method comprising:
assigning full antenna power to each antenna in a group of M antennas, wherein M<N and M is a maximal integer number of antennas for which the EIRP constraint is fulfilled when the group of said M
antennas is transmitting in its maximum allowed power, and wherein the M antennas in the group have higher channel magnitudes than the antennas not included in the group, the channel magnitudes being
the absolute values of components of a channel vector h̃; and assigning zero power to N-(M+1) antennas having the lowest channel magnitudes, said N-(M+1) antennas not being included in
the group of M antennas.
The method of claim 1, further comprising: setting phases of the transmitter beamforming vector to equal minus phases of the channel vector.
The method of claim 1, further comprising ordering said N antennas in decreasing order of their channel magnitudes, wherein said M antennas are the first M ordered antennas.
The method of claim 3, wherein the EIRP constraint is given by: Σ_{k=1}^{N} |b̃_k| ≤ E, where b̃_k is the k-th component of the beamforming vector b̃, the method further comprising: assigning residual power to ordered antenna M+1.
The method of claim 3, wherein the EIRP is given by: E=P*G*Y, where P is total used power summed on the antennas, G is assumed gains of the antennas, and Y is a beamforming gain, the method further
comprising: assigning zero power to ordered antenna M+1.
The method of claim 5 wherein Y=N.
The method of claim 1, wherein the transmitter beamforming vector b(h;P,p) amplitudes are: b_τ(k) := √(p_τ(k)) for k ∈ {1, . . . , M}; b_τ(k) := E − Σ_{i=1}^{M} √(p_τ(i)) for k = M+1 ≤ N; and b_τ(k) := 0 otherwise,
where p_τ are the components of the per-antenna power constraints vector, E is the EIRP constraint, and τ is a permutation on {1, . . . , N} such that: h_τ(1) ≥ h_τ(2) ≥ . . . ≥ h_τ(N), where h_τ are the absolute values of components of a channel vector h̃.
The method of claim 1, adapted to be performed with a multi-antenna receiver, the method further comprising: obtaining an estimated channel matrix; assuming a receiver beamforming vector; obtaining
an equivalent channel vector by composing said receiver beamforming vector with the estimated channel matrix; and using said equivalent channel vector as said channel vector h̃.
The method of claim 8, adapted to be performed with a multi-antenna receiver, the method further comprising: calculating a new estimate of receiver beamforming vector based on said transmitter
beamforming vector; obtaining a new effective channel vector, based on said new estimate of receiver beamforming vector; calculating a new transmitter beamforming vector based on said new effective
channel vector; and repeating the steps of calculating the new estimate, obtaining the new effective channel vector, and calculating the new transmitter beamforming vector until a stopping criterion
is met.
The method of claim 1, adapted to be performed with a multi-antenna receiver, the method further comprising: obtaining an effective single-RX-antenna channel by an explicit beamforming feedback from
the receiver; and using said effective single-RX-antenna channel as said channel vector h̃.
The method of claim 10, adapted to be performed with a multi-antenna receiver, the method further comprising: calculating an estimate of receiver beamforming vector based on said transmitter
beamforming vector; obtaining a new effective channel vector, based on said estimate of receiver beamforming vector; calculating a new transmitter beamforming vector based on said new effective
channel vector; and repeating the steps of calculating the estimate, obtaining the new effective channel vector, and calculating the new transmitter beamforming vector until a stopping criterion is
The method of claim 1, adapted to be performed with multi-carrier systems, the method further comprising: dividing said per-antenna power constraints and said EIRP between subcarriers; and
calculating said transmitter beamforming vector for each of said subcarriers.
The method of claim 12, wherein the EIRP constraint is given by: Σ_{k=1}^{N} |b̃_k| ≤ E, where b̃_k is the k-th component of the beamforming vector b̃, the method further comprising: ordering said N antennas in decreasing order of their channel magnitudes, wherein said M antennas are the first M ordered antennas; and assigning residual power to ordered antenna M+1.
The method of claim 12, wherein the EIRP is given by: E=P*G*Y, where P is total used power summed on the antennas, G is assumed gains of the antennas, and Y is a beamforming gain, the method further
comprising: ordering said N antennas in decreasing order of their channel magnitudes, wherein said M antennas are the first M ordered antennas; and assigning zero power to ordered antenna M+1.
The method of claim 14 wherein Y=N.
The method of claim 1, adapted to be performed with multi-carrier systems, the method further comprising: averaging per-subcarrier channel magnitudes for each antenna to obtain a vector of averages;
using said vector of averages as said channel vector; and setting magnitudes of per-subcarrier beamforming vectors to equal the absolute value of said transmitter beamforming vector divided by the
square-root of the total number of subcarriers.
The method of claim 16, further comprising: setting phases of said per-subcarrier beamforming vectors to minus corresponding channel phase.
The method of claim 1, further comprising: obtaining initial beamforming vectors; calculating the power p̃_i of antenna i resulting from the initial beamforming vectors, for all i; calculating said transmitter beamforming vector b with the channel vector being equal to {√(p̃_i)}; and scaling the initial beamforming vectors by b_i / √(p̃_i) for all antenna indices i, where b_i are absolute values of components of the transmitter beamforming vector.
The method of claim 18, further comprising: setting phases of the transmitter beamforming vector to equal the phases of said initial beamforming vector b.
The method of claim 18, adapted to perform with a multi-antenna receiver, the method comprising: sending a sounding frame to said multi-antenna receiver, said sounding frame using said scaled initial
beamforming vector; receiving a feedback matrix from said multi-antenna receiver, said feedback matrix comprising a new effective channel vector; calculating an updated transmitter beamforming vector
based on said new effective channel vector; sending an updated sounding frame to said multi-antenna receiver, said updated sounding frame comprising said updated transmitter beamforming vector; and
repeating steps of receiving a feedback matrix, calculating the updated transmitter beamforming vector and sending said updated sounding frame until a stopping criterion is met.
The method of claim 1, adapted to be performed with multi-carrier systems, the method further comprising: obtaining initial beamforming vectors for each subcarrier; summing power levels over all said subcarriers for all i, to obtain the power p̃_i of said antenna i resulting from said initial beamforming vectors; and scaling by b_i / √(p̃_i) the i-th component of said initial beamforming vectors of said subcarrier s, for all said subcarriers s and for all said antennas i, where b_i are components of the transmitter beamforming vector.
The method of claim 21, further comprising: obtaining an SNR estimation, SNR_old; and calculating an estimation of SNR_new according to: SNR_new / SNR_old = (Σ_i b_i √(p̃_i)) / (Σ_i p̃_i).
The method of claim 1, wherein: calculation of said transmitter beamforming vector is performed by a transmitter.
The method of claim 1, wherein: calculation of said transmitter beamforming vector is performed by a receiver and fed back to a transmitter.
The method of claim 1, wherein the per-antenna power constraints depend on modulation and coding scheme (MCS).
A method for calculating a transmitter beamforming vector under per-antenna power constraints and an effective isotropic radiated power (EIRP) constraint with a multi-antenna receiver, the method
comprising: calculating a first transmitter beamforming vector by: obtaining an effective single-RX-antenna channel by an explicit beamforming feedback from the receiver; using said effective
single-RX-antenna channel as a channel vector h̃; assigning full antenna power to each antenna in a group of M antennas, wherein M<N and M is a maximal integer number of antennas for
which the EIRP constraint is fulfilled when the group of said M antennas is transmitting in its maximum allowed power, and wherein the M antennas in the group have higher channel magnitudes than the
antennas not included in the group, the channel magnitudes being the absolute values of components of said channel vector h̃; and assigning zero power to N-(M+1) antennas having the
lowest channel magnitudes, said N-(M+1) antennas not being included in the group of M antennas; sending a sounding frame to said multi-antenna receiver, said sounding frame using said first
transmitter beamforming vector; receiving a feedback matrix from said multi-antenna receiver, said feedback matrix comprising a new effective channel vector; calculating an updated transmitter
beamforming vector based on said new effective channel vector by: assigning full antenna power to each antenna in a group of M2 antennas, wherein M2<N and M2 is a maximal integer number of antennas
for which the EIRP constraint is fulfilled when the group of said M2 antennas is transmitting in its maximum allowed power, and wherein the M2 antennas in the group have higher channel magnitudes
than the antennas not included in the group, the channel magnitudes being the absolute values of components of said new effective channel vector; and assigning zero power to N-(M2+1) antennas having
the lowest channel magnitudes, said N-(M2+1) antennas not being included in the group of M2 antennas; sending an updated sounding frame to said multi-antenna receiver, said updated sounding frame
comprising said updated transmitter beamforming vector; and repeating steps of receiving a feedback matrix, calculating the updated transmitter beamforming vector and sending said updated sounding
frame until a stopping criterion is met.
CROSS REFERENCE TO RELATED APPLICATIONS [0001]
This application is a continuation of U.S. patent application Ser. No. 13/036,641, filed on Feb. 28, 2011, entitled METHOD FOR SINGLE STREAM BEAMFORMING WITH MIXED POWER CONSTRAINTS, now U.S. Pat.
No. 8,301,089, which in turn claims the benefit of U.S. Provisional Application Ser. No. 61/308,958, filed on Feb. 28, 2010 and entitled SINGLE STREAM BEAMFORMING WITH MIXED POWER CONSTRAINTS, the
entire contents of which are incorporated herein by reference.
FIELD OF THE INVENTION [0002]
The present invention relates to the field of wireless communication. In particular, embodiments of the present invention relate to a method for single stream beamforming with mixed power constrains.
BACKGROUND OF THE INVENTION [0003]
In a communication system with a multi-antenna transmitter and, for example, a single-antenna receiver, the transmitter may have an estimate of the channel between each of its antennas and the
receiver antenna. Alternatively, in case of multi antenna receiver the transmitter can assume a receiver beamforming vector that emulates a single effective antenna. To optimize the signal-to-noise
ratio (SNR) at the receiver, the transmitter should appropriately design the TX beamforming vector, that is, the vector of complex coefficients that multiply the single data symbol before sending it
for transmission in each antenna.
The designed beamforming vector is subject to physical and regulatory constraints. The physical constraints may include a limitation on the TX power of each power amplifier (PA), a limitation on the
overall TX power due to packaging and thermal constraints, etc. The regulatory constraints include any limitation imposed by a regulator, such as FCC or ETSI. Typically, regulatory constraints
include limitations on the overall line of sight (LOS) effective isotropic radiated power (EIRP) and the overall TX power.
Thus, practical beamforming design problem is typically subject to three types of constraints: per radio-frequency (RF) chain constraints, also referred to as Per-antenna power constraints, an
overall power constraint, and an EIRP constraint. Therefore, it is desirable to have an efficient method to design a beamforming vector that satisfies all constraints while substantially maximizing the SNR at the receiver.
Additionally, a practical beamforming design problem may be subject to a subset of two constraints. For example, the ETSI standard enforces only an EIRP constraint. However, a per-antenna power constraint may
arise from physical limitation of the practical communication system. Thus, a practical communication system that conforms to ETSI standard may require a beamforming design under per-antenna power
constraints and EIRP constraint. Moreover, in many practical situations, satisfying one constraint does not guarantee conforming to the other constraint. For example, in the 5470-5725 MHz band, the
EIRP limitation of ETSI is 1 W. For a system with 4 isotropic antennas and with 100 mW power amplifiers, satisfying only the per-antenna constraints does not guarantee satisfying the EIRP constraint,
because if all 4 antennas transmit in full power, the EIRP might be as high as 4² × 0.1 W = 1.6 W. In addition, satisfying only the EIRP constraint does not guarantee satisfying the per-antenna constraints, as transmitting from a single antenna in 1 W satisfies the EIRP constraint but
not the per-antenna constraints. Thus, a beamforming design under the per-antenna power constraints and an EIRP constraint is needed.
In the following scenario, the beamforming design problem is subject to per-antenna and overall power constraints. Under FCC rules both the EIRP and the overall power are limited, and the limit
on the EIRP is 6 dB higher than the limit on the total power. For a system with 3 isotropic antennas, satisfying the total power constraint assures that the EIRP constraint is satisfied. Hence, it
follows that the only effective regulatory constraint is the total power constraint. Writing P for the total power constraint, if the maximum allowed per-antenna power levels are in the interval (P/
3, P), both the per-antenna and the overall power constraints should be considered. Thus, a beamforming design under per-antenna and overall power constraints is needed.
The following example presents a case where all three types of constraints should be accounted for. Consider transmission in the 5.25 GHz-5.35 GHz sub-band under FCC regulations. In this sub-band,
FCC requires that total TX power is below 24 dBm, and that EIRP is below 30 dBm. Suppose that the maximum output power of each power amplifier of the transmitter is 22 dBm, and that the transmitter
has 4 antennas, each with a gain of 1.5 dBi. Here, satisfying two constraints does not assure that the third constraint is also satisfied. Hence a beamforming design under all three constraint types
is needed.
Unfortunately, existing methods are either far from optimum or too complicated for real time software implementation. On the one side of the scale, there are methods that start with well-known
solutions to a single type of constraint, and then scale down the entire beamforming vector to meet the other constraints. On the other side of the scale, there are convex optimization methods that
are very complicated both computationally and conceptually.
In detail, when the only limitation is on the total power, it is well known in the art that the optimum transmit beamforming vector is the maximum ratio combining (MRC) vector related to the channel
vector and the maximum allowed power. Also, if only the per-antenna powers are constrained, then an optimum beamforming vector is obtained by using all available power of each antenna, while choosing
phases that assure coherent addition at the receiver. Such a beamforming vector will be referred to as a full per-antenna power (FPAP) vector throughout this application.
In a communication system with multi-antenna receivers, the equivalent of an MRC vector is typically computed using the singular value decomposition (SVD) of the channel matrix. The first column of the transmit matrix V obtained from the SVD, corresponding to the largest singular value of the channel, is the transmit MRC vector for the best effective RX antenna. According to IEEE 802.11n/ac explicit matrix feedback, the receiver usually returns the V matrix to the transmitter.
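By way of illustration only, a minimal sketch follows (assuming a numpy environment; the function names and the random example channel are illustrative assumptions, not part of any cited standard) of how a transmit MRC vector may be obtained from a channel vector, or from the dominant right-singular vector of a channel matrix:

```python
import numpy as np

def mrc_vector(h_tilde, total_power):
    """Transmit MRC vector for a single-RX-antenna channel h_tilde under a total power budget."""
    # Amplitudes proportional to |h|, phases equal to minus the channel phases.
    b = np.conj(h_tilde) / np.linalg.norm(h_tilde)
    return np.sqrt(total_power) * b

def mrc_from_channel_matrix(H, total_power):
    """Equivalent for a multi-antenna receiver: use the right-singular vector of H
    (a column of V) associated with the largest singular value."""
    # H has shape (num_rx_antennas, num_tx_antennas)
    _, _, Vh = np.linalg.svd(H)
    v1 = np.conj(Vh[0])            # dominant right-singular vector (first column of V)
    return np.sqrt(total_power) * v1

# Example usage (random 2x4 channel, 100 mW total power):
H = (np.random.randn(2, 4) + 1j * np.random.randn(2, 4)) / np.sqrt(2)
b = mrc_from_channel_matrix(H, total_power=0.1)
assert np.isclose(np.sum(np.abs(b) ** 2), 0.1)
```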
A simple, yet suboptimum, method for finding the beamforming vector in case of multiple constraints is to start by considering only the total power or the per-antenna power constraints, and then to
scale the resulting MRC or FPAP vector in order to satisfy the other constraints. While simple, this method, referred to as the scaling method throughout this application, is typically quite far from optimal.
Alternatively, it is possible to aim at the true optimum of the beamforming design problem, at the cost of considerably higher complexity. Since this problem is a convex optimization problem, existing convex optimization algorithms can be used to solve it. While such algorithms have time complexity that is polynomial in the number of variables, they are still considerably more complicated than the above MRC/FPAP+scaling solutions.
SUMMARY OF THE INVENTION [0014]
According to embodiments of the present invention, there is provided a method for calculating a transmitter beamforming vector related to a channel vector h under per-antenna power constraints
combined with total power constraint. The method may include assigning full power to antennas pertaining to a subset S, and calculating maximum ratio combining (MRC) beamforming vector related to a
sub-channel vector excluding said subset S, with a maximum allowed power constraint equal to the total power constraint minus the power of said subset, wherein the transmitter beamforming vector b(h;P,p) amplitude is composed from the square root of the per-antenna power constraints for said subset S and said MRC beamforming vector on the remaining components.
Furthermore, according to embodiments of the present invention, the method may include setting phases of the transmitter beamforming vector to equal minus phases of a corresponding channel vector.
Furthermore, according to embodiments of the present invention, the method may include finding said subset S, wherein finding said subset S may include: ordering the ratios $h_k^2/p_k$ in descending order such that:

$$\frac{h_{\tau(1)}^2}{p_{\tau(1)}} \ge \frac{h_{\tau(2)}^2}{p_{\tau(2)}} \ge \cdots \ge \frac{h_{\tau(N)}^2}{p_{\tau(N)}},$$

where $h_k$ are absolute values of components of the channel vector, $p_k$ are components of the per-antenna power constraints vector, N is the number of channels and τ is a permutation on {1, . . . , N}, and finding a minimum index k such that:

$$\frac{h_{\tau(k+1)}^2}{p_{\tau(k+1)}} \le \mathrm{threshold}(k), \qquad \text{where} \quad \mathrm{threshold}(k) := \frac{\sum_{j=k+1}^{N} h_{\tau(j)}^2}{P - \sum_{j=1}^{k} p_{\tau(j)}},$$

where P is the total power constraint, wherein said subset S includes S = {τ(1), τ(2), . . . , τ(k)}.
Furthermore, according to embodiments of the present invention, the transmitter beamforming vector b(h;P,p) amplitudes may be:

$$\{b(h;P,p)\}_{\tau(i)} = \begin{cases} \sqrt{p_{\tau(i)}} & i \in \{1,\ldots,k\} \\[4pt] h_{\tau(i)} \sqrt{\dfrac{P - \sum_{j=1}^{k} p_{\tau(j)}}{\sum_{j=k+1}^{N} h_{\tau(j)}^2}} & i \in \{k+1,\ldots,N\} \end{cases}$$
Furthermore, according to embodiments of the present invention, the method may include:
a. finding said subset S, wherein finding said subset of full power antennas may include calculating a full MRC beamforming vector related to the channel vector with the total power constraint;
b. replacing components of the full MRC beamforming vector in which the per-antenna constraints are violated with square root of the corresponding per-antenna constraints;
c. calculating a partial MRC vector related to the remaining components with the maximum allowed power constraint being a difference between the total power constraint and the overall power on the replaced components;
d. replacing components of the partial MRC beamforming vector in which the per-antenna constraints are violated with square root of the corresponding per-antenna constraints;
e. repeating steps c and d until the per-antenna power constraints are met.
Furthermore, according to embodiments of the present invention, the method may include finding a flooding level for which a beamforming vector of flooded channel vector satisfies an EIRP constraint,
wherein the flooded channel vector is derived by subtracting said flooding level from components of the channel vector that are larger than said flooding level while reducing to zero components of
the channel vector that are not larger than said flooding level.
Furthermore, according to embodiments of the present invention, the EIRP constraint is given by: $\left(\sum_{k} |\tilde{b}_k|\right)^2 \le E$, where $\tilde{b}$ is the beamforming vector.
Furthermore, according to embodiments of the present invention, the beamforming vector $b_{\text{per-ant.+tot.power}}((h-\lambda)^{+};P,p)$ of the flooded channel vector $(h-\lambda)^{+}$ may satisfy said EIRP constraint with near equality, according to: $0 \le \sqrt{E} - \sum_{k}\left[b_{\text{per-ant.+tot.power}}\left((h-\lambda)^{+};P,p\right)\right]_{k} < \varepsilon$, where ε is an allowed error.
Furthermore, according to embodiments of the present invention, the method may be adapted to be performed with a multi-antenna receiver, and may further include assuming a receiver beamforming
vector, or obtaining an effective single-RX-antenna channel by an explicit beamforming feedback from a receiver.
Furthermore, according to embodiments of the present invention, the method may be adapted to be performed with a multi-antenna receiver, and may further include:
a. calculating an estimate of receiver beamforming vector based on said transmitter beamforming vector;
b. getting a new effective channel vector, based on said estimate of receiver beamforming vector;
c. calculating a second step transmitter beamforming vector based on said new effective channel vector; and
d. repeating steps a to c until a stopping criterion is met.
Furthermore, according to embodiments of the present invention, the method may be adapted to be performed with a multi-antenna receiver, and may further include:
a. sending a sounding frame to said multi-antenna receiver, said sounding frame comprising said transmitter beamforming vector;
b. receiving a feedback matrix from said multi-antenna receiver, said feedback matrix comprising a new effective channel vector;
c. calculating an updated transmitter beamforming vector based on said new effective channel vector;
d. sending said sounding frame to said multi-antenna receiver, said sounding frame comprising said updated transmitter beamforming vector; and
e. repeating steps b-d until a stopping criterion is met.
Furthermore, according to embodiments of the present invention, the method may be adapted to be performed with a multi-carrier system, and may further include: dividing said per-antenna power
constraints and said total power constraint between subcarriers; and calculating said transmitter beamforming vector for each of said subcarriers.
Furthermore, according to embodiments of the present invention, the method may be adapted to be performed with a multi-carrier system, and may further include: averaging per-subcarrier channel
magnitudes for each antenna to get a vector of averages, using said vector of averages as said channel vector; and setting magnitudes of per-subcarrier beamforming vectors to equal the absolute value
of said transmitter beamforming vector divided by the square-root of the total number of subcarriers.
Furthermore, according to embodiments of the present invention, the method may include setting phases of said per-subcarrier beamforming vectors to minus corresponding channel phase.
Furthermore, according to embodiments of the present invention, the method may include obtaining initial beamforming vectors, calculating the power $\tilde{p}_i$ of antenna i resulting from the initial beamforming vectors, for all i, calculating said transmitter beamforming vectors with an effective input channel magnitude vector being equal to $\{\sqrt{\tilde{p}_i}\}$, and scaling the initial beamforming vectors by $b_i/\sqrt{\tilde{p}_i}$ for all antenna indices i.
Furthermore, according to embodiments of the present invention, the method may include obtaining initial beamforming vectors for each said subcarrier, summing power levels over all said subcarriers for all i, to get the power $\tilde{p}_i$ of said antenna i resulting from said initial beamforming vectors, and multiplying the i-th component of said initial beamforming vectors of said subcarrier s by $b_i/\sqrt{\tilde{p}_i}$, for all said subcarriers s and for all said antennas i.
Furthermore, according to embodiments of the present invention, the method may include obtaining a corresponding SNR estimation, SNR_old, and calculating an estimation of SNR_new according to:

$$\frac{\mathrm{SNR}_{\mathrm{new}}}{\mathrm{SNR}_{\mathrm{old}}} = \frac{\sum_i b_i \sqrt{\tilde{p}_i}}{\sum_i \tilde{p}_i}.$$
Furthermore, according to embodiments of the present invention, calculation of said transmitter beamforming vector may be performed by a transmitter.
Furthermore, according to embodiments of the present invention, calculation of said transmitter beamforming vector is performed by a receiver and fed back to a transmitter.
Furthermore, according to embodiments of the present invention, the method may include obtaining an initial beamforming vector b at a transmitter from a receiver, and setting said channel vector h to
be a complex conjugate of said initial beamforming vector b, and setting phases of the transmitter beamforming vector to equal the phases of said initial beamforming vector b.
BRIEF DESCRIPTION OF THE DRAWINGS [0048]
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and
method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
FIG. 1 is a schematic illustration of a wireless communication system in accordance with demonstrative embodiments of the present invention;
FIG. 2 is a flowchart illustration of a method for calculating beamforming vector under per-antenna constraints combined with total power constraint according to embodiments of the present invention;
FIG. 3 is a schematic illustration of an exemplary implementation of beamforming module according to embodiments of the present invention;
FIG. 4 is a flowchart illustration of a method for calculating a beamforming vector under per-antenna constraints combined with a total power constraint according to embodiments of the present invention;
FIG. 5 is a flowchart illustration of a simplified method for calculating a beamforming vector under per-antenna constraints combined with an EIRP constraint according to embodiments of the present invention;
FIG. 6 is a flowchart illustration of the water flooding method for calculating beamforming vector under a combination of per-antenna, total power and EIRP constraints according to embodiments of the
present invention;
FIG. 7 schematically illustrates channel vector h before and after water flooding according to embodiments of the present invention;
FIG. 8 schematically illustrates another exemplary implementation of beamforming module according to embodiments of the present invention;
FIG. 9 is a flowchart illustration of the per-bin method for calculating beamforming vectors in multi-carrier systems according to embodiments of the present invention;
FIG. 10 is a flowchart illustration of a single vector method for calculating beamforming vector in multi-carrier systems according to embodiments of the present invention; and
FIG. 11 is a flowchart illustration of a per antenna correction method according to embodiments of the present invention.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may
be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION OF THE PRESENT INVENTION [0061]
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the
art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to
obscure the present invention.
Although embodiments of the present invention are not limited in this regard, discussions utilizing terms such as, for example, "processing," "computing," "calculating," "determining,"
"establishing", "analyzing", "checking", or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that
manipulate and/or transform data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities
within the computer's registers and/or memories or other information storage medium that may store instructions to perform operations and/or processes.
Although embodiments of the present invention are not limited in this regard, the terms "plurality" and "a plurality" as used herein may include, for example, "multiple" or "two or more". The terms
"plurality" or "a plurality" may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. Unless explicitly stated, the method
embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed at the same
point in time.
It should be understood that the present invention may be used in a variety of applications. Although the present invention is not limited in this respect, the circuits and techniques disclosed
herein may be used in many apparatuses such as personal computers, stations of a radio system, wireless communication systems, digital communication systems, satellite communication systems, and the like.
Stations intended to be included within the scope of the present invention include, by way of example only, wireless local area network (WLAN) stations, wireless personal area network (WPAN)
stations, two-way radio stations, digital system stations, analog system stations, cellular radiotelephone stations, and the like.
Types of WLAN communication systems intended to be within the scope of the present invention include, although are not limited to, "IEEE-Std 802.11, 1999 Edition (ISO/IEC 8802-11: 1999)" standard,
and more particularly in "IEEE-Std 802.11b-1999 Supplement to 802.11-1999,Wireless LAN MAC and PHY specifications: Higher speed Physical Layer (PHY) extension in the 2.4 GHz band", "IEEE-Std
802.11a-1999, Higher speed Physical Layer (PHY) extension in the 5 GHz band" standard, "IEEE Std 802.11n-2009," IEEE 802.11ac standard (e.g., as described in "IEEE 802.11-09/0992r21") and the like.
Types of WLAN stations intended to be within the scope of the present invention include, although are not limited to, stations for receiving and transmitting spread spectrum signals such as, for
example, Frequency Hopping Spread Spectrum (FHSS), Direct Sequence Spread Spectrum (DSSS), Orthogonal Frequency-Division Multiplexing (OFDM) and the like.
Devices, systems and methods incorporating aspects of embodiments of the invention are also suitable for computer communication network applications, for example, intranet and Internet applications.
Embodiments of the invention may be implemented in conjunction with hardware and/or software adapted to interact with a computer communication network, for example, a local area network (LAN), a wide
area network (WAN), or a global communication network, for example, the Internet.
Reference is made to FIG. 1, which schematically illustrates a wireless communication system 100 in accordance with demonstrative embodiments of the present invention. It will be appreciated by those
skilled in the art that the simplified components schematically illustrated in FIG. 1 are intended for demonstration purposes only, and that other components may be required for operation of the
wireless devices. Those of skill in the art will further note that the connection between components in a wireless device need not necessarily be exactly as depicted in the schematic diagram.
Although the scope of the present invention is not limited to this example, wireless communication system 100 may include a transmitter 110 transmitting data to receiver 140 through wireless
communication channel 130. Transmitter 110 may include, for example an access point able to transmit and/or receive wireless communication signals, and receiver 140 may include, for example, a
wireless communication station or a wireless communication device able to transmit and/or receive wireless communication signals. It should be noted that while transmitter 110 and receiver 140 are
presented with relation to the main data transmission direction in a given session, both stations may have transmission and reception capabilities.
According to embodiments of the present invention, transmitter 110 may include a transmitter beamforming module 112 connected to a plurality of RF front end modules 114, which may be connected to a
plurality of antennas 118. Each RF front end module 114 may include a power amplifier (PA) 116, and may be connected to one antenna 118. Transmitter beamforming module 112 may calculate a beamforming
vector to control signal to antennas 118 and may scale the gain of all PAs 116. Additionally, transmitter 110 may include Media Access Controller (MAC) module 150 and a Physical Layer module (PHY)
160. Receiver 140 may include a receiver beamforming module 142 and a plurality of antennas 148. Receiver beamforming module 142 may calculate a beamforming vector and send the beamforming vector to
transmitter 110, for example by sending explicit beamforming feedback from the receiver 140, such as in the case of IEEE 802.11n/ac explicit matrix feedback.
Although the invention is not limited in this respect, antennas 118, 148 may include, for example, a set of N antennas. Antennas 118, 148 may include, for example, an internal and/or external RF
antenna, e.g., a dipole antenna, a monopole antenna, an omni-directional antenna, an end fed antenna, a circularly polarized antenna, a micro-strip antenna, a diversity antenna, or any other type of
antenna suitable for transmitting and/or receiving wireless communication signals, modules, frames, transmission streams, packets, messages and/or data.
According to embodiments of the present invention transmitter beamforming module 112 and receiver beamforming module 142 may be adapted to perform beamforming design under per-antenna powers, overall
power, and overall EIRP constraints, as will be discussed in detail below. Transmitter beamforming module 112 and receiver beamforming module 142 may be implemented using any suitable combination of
memory, hardwired logic, and/or general-purpose or special-purpose processors, as is known in the art. In accordance with different demonstrative embodiments of the invention, transmitter beamforming
module 112 may be implemented as a separate entity or as subsystem of either MAC module 150 and/or PHY module 160.
According to embodiments of the present invention, the calculation of beamforming vector or vectors as will be discussed in detail infra may be performed in the transmitter side, for example, in
transmitter beamforming module 112. Alternatively, the calculation of beamforming vector or vectors may be performed in the receiver side, for example, in receiver beamforming module 142. In such
case the beamforming vector may be sent to transmitter 110, for example by sending explicit beamforming feedback from the receiver 140, such as in the case of IEEE 802.11n/ac explicit matrix
feedback. Transmitter 110 may use the feedback as is, or alternatively, perform beamforming calculations on top of the vector received from receiver 140, such that both sides may calculate
beamforming vectors according to embodiments of the invention, and may use for the calculations the same method or two different methods.
The problem of beamforming design may be presented as follows: Suppose that transmitter 110 has N antennas. The receiver may be a single antenna receiver or may be a multi antenna receiver that may be considered as having a single equivalent antenna, as described in detail infra. Given the complex-baseband channel vector, also referred to as the channel vector throughout the application, $\tilde{h} = (\tilde{h}_1, \ldots, \tilde{h}_N) \in \mathbb{C}^N$, the per-antenna power constraint vector $p = (p_1, \ldots, p_N) \in \mathbb{R}_+^N$, where $p_k$ are the components of the power constraint vector, the total power constraint $P \in \mathbb{R}_+$, and the EIRP constraint E, a beamforming vector $\tilde{b} = (\tilde{b}_1, \ldots, \tilde{b}_N) \in \mathbb{C}^N$ has to be found that maximizes $|\tilde{b}^{T}\tilde{h}|$ while satisfying various combinations of the total power constraint $\sum_k |\tilde{b}_k|^2 \le P$, the per-antenna power constraint $|\tilde{b}_k|^2 \le p_k$ for all $k \in \{1, \ldots, N\}$, and the EIRP constraint. A mathematical description of the EIRP constraint is deferred to Equation (13) ahead.
Alternatively, transmitter 110 may obtain a beamforming vector from receiver 140, also referred to as initial beamforming vector, and may not have direct knowledge of channel h, for example, as in
the case of explicit BF. If transmitter 110 obtains an initial beamforming vector from receiver 140, the initial beamforming vector may be scaled, and the channel h may be set to be a complex
conjugate of b.
It should be noted that in practice different RF chains may have different power limitations. For example, when each RF chain is calibrated separately to achieve a desired error vector magnitude
(EVM) or when there is a separate closed-loop power control for each RF chain. Therefore, a per-antenna power constraint vector is given, rather than just a single scalar suitable to all antennas.
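Purely as an illustration of the constraint set just described, the following sketch (assumptions: a numpy environment, the worst-case EIRP model of Equation (13) ahead, and hypothetical function and variable names) checks which constraint types a candidate beamforming vector satisfies:

```python
import numpy as np

def check_constraints(b_tilde, p, P, E, tol=1e-9):
    """Return which of the three constraint types a candidate beamforming vector satisfies.

    b_tilde : complex beamforming vector (one entry per TX antenna)
    p       : per-antenna power constraint vector
    P       : total power constraint
    E       : EIRP constraint, worst-case model (sum_k |b_k|)^2 <= E
    """
    amps = np.abs(b_tilde)
    return {
        "per_antenna": bool(np.all(amps ** 2 <= np.asarray(p) + tol)),
        "total_power": bool(np.sum(amps ** 2) <= P + tol),
        "eirp":        bool(np.sum(amps) ** 2 <= E + tol),
    }

# Example: 4 antennas, 100 mW per antenna, 250 mW total, EIRP limit 1 W.
b = np.sqrt(0.0625) * np.exp(1j * np.random.uniform(0, 2 * np.pi, 4))
print(check_constraints(b, p=[0.1] * 4, P=0.25, E=1.0))
```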
Reference is now made to FIG. 2 which is a flowchart illustration of a method for calculating beamforming vector under per-antenna constraints combined with total power constraint according to
embodiments of the present invention. According to embodiments of the present invention, the phases of the beamforming vector may be set to minus the corresponding channel vector phases, as indicated
in block 210:
$$\arg(\tilde{b}_k) = -\arg(\tilde{h}_k), \qquad (1)$$

for all $k \in \{1, \ldots, N\}$. Alternatively, if transmitter 110 obtains an initial beamforming vector from receiver 140, then the original phases of the initial beamforming vector may be left unchanged. Letting $b_k := |\tilde{b}_k|$ be the absolute value of the k-th component of the beamforming vector $\tilde{b}$ and $h_k := |\tilde{h}_k|$ be the absolute value of the k-th component of the channel vector $\tilde{h}$ for all k, the problem of beamforming under total and per-antenna power constraints reduces to the maximization of $\sum_k b_k h_k$ subject to $\sum_k b_k^2 \le P$ and $b_k^2 \le p_k$ for all k. After finding the real values $b_k$ of the beamforming vector, each entry may be multiplied by $e^{-i\,\arg(\tilde{h}_k)}$, i.e., by minus the corresponding channel phase, to form the final beamforming vector. It should be readily understood by those skilled in the art that the above formulation does not require that b is positive, as this will be an immediate outcome of the optimization.
According to embodiments of the present invention the optimum b has the following shape: There is a subset $S \subseteq \{1, \ldots, N\}$ of full power antennas, that is, $k \in S \Rightarrow b_k = \sqrt{p_k}$, and the remaining antennas of the complementary subset use MRC on the corresponding complementary channel vector $\{h_k\}_{k \notin S}$, with the maximum allowed power being the complementary total power constraint, which may equal the total power constraint minus the sum power on the subset S of full power antennas, $P - \sum_{k \in S} p_k$, as in equation (2) infra. At block 220 a subset of full power antennas may be found and at block 230 MRC vector may be calculated for the remaining antennas. At block 240 the resultant beamforming
vector may be output.
According to embodiments of the present invention, subset S may be found using the ratio threshold method. According to the ratio threshold method, the MRC beamforming vector $b_{MRC}(h;P)$ corresponding to the channel vector h and the total power constraint P may be calculated. In detail, the MRC vector $b_{MRC}(h;P)$ may be defined by:

$$[b_{MRC}(h;P)]_i = h_i \sqrt{\frac{P}{\sum_{j=1}^{N} h_j^2}}, \qquad i \in \{1, \ldots, N\}. \qquad (2)$$

If the MRC vector fulfils the per-antenna constraints for each antenna:

$$[b_{MRC}(h;P)]_i^2 \le p_i \quad \text{for all } i \in \{1, \ldots, N\}, \qquad (3)$$

then the MRC vector $b_{MRC}(h;P)$ may be the beamforming vector. Alternatively, the MRC vector may not be calculated at this point. Further, according to the ratio threshold method, the ratios $h_i^2/p_i$ may be ordered in descending order. Let τ be a permutation on {1, . . . , N} such that:

$$\frac{h_{\tau(1)}^2}{p_{\tau(1)}} \ge \frac{h_{\tau(2)}^2}{p_{\tau(2)}} \ge \cdots \ge \frac{h_{\tau(N)}^2}{p_{\tau(N)}}. \qquad (4)$$

A substantially minimum index k may be found such that

$$\frac{h_{\tau(k+1)}^2}{p_{\tau(k+1)}} \le \mathrm{threshold}(k), \qquad (5)$$

where

$$\mathrm{threshold}(k) := \frac{\sum_{j=k+1}^{N} h_{\tau(j)}^2}{P - \sum_{j=1}^{k} p_{\tau(j)}}. \qquad (6)$$

The index k may start running from 1 in case the MRC vector $b_{MRC}(h;P)$ was calculated, or from 0 in case the MRC vector $b_{MRC}(h;P)$ was not calculated. It should be noted that there may be no need to find the permutation τ; the permutation τ is only presented here for clarity of the mathematical presentation. In practical applications the vector of the ratios $h_i^2/p_i$ may simply be sorted.

Subset S may include:

$$S = \{\tau(1), \tau(2), \ldots, \tau(k)\}, \qquad (7)$$

and the beamforming vector may be composed from full per-antenna power components on S, and with MRC with the remaining power on the complementary subset {1, . . . , N}\S. In detail, the beamforming vector b(h;P,p) may be composed from the square root of the per-antenna constraints for subset S and MRC on the antennas of the complementary subset:

$$\{b(h;P,p)\}_{\tau(i)} = \begin{cases} \sqrt{p_{\tau(i)}} & i \in \{1,\ldots,k\} \\[4pt] h_{\tau(i)} \sqrt{\dfrac{P - \sum_{j=1}^{k} p_{\tau(j)}}{\sum_{j=k+1}^{N} h_{\tau(j)}^2}} & i \in \{k+1,\ldots,N\}. \end{cases} \qquad (8)$$

It should be noted that a threshold level k as presented in equations (5) and (6) may ensure that the MRC vector for the antennas with indices in {1, . . . , N}\S:

$$b_{MRC}\Big((h_{\tau(k+1)}, h_{\tau(k+2)}, \ldots, h_{\tau(N)});\; P - \sum_{j=1}^{k} p_{\tau(j)}\Big) \qquad (9)$$

may not violate the per-antenna power constraints.
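The following is a minimal sketch of the ratio threshold method as described above, assuming a numpy environment and real-valued magnitude inputs; the function name and the example values are illustrative only:

```python
import numpy as np

def ratio_threshold_beamforming(h, p, P):
    """Sketch of the ratio threshold method: amplitudes of b(h;P,p) under per-antenna
    power constraints p and a total power constraint P, cf. (4)-(8).

    h : vector of channel magnitudes |h_k|
    p : per-antenna power constraints (assumed positive)
    P : total power constraint
    """
    h = np.asarray(h, dtype=float)
    p = np.asarray(p, dtype=float)
    N = len(h)
    order = np.argsort(-h ** 2 / p)              # tau: indices sorted by descending h_k^2/p_k, cf. (4)
    b = np.zeros(N)
    for k in range(N + 1):                        # k = 0 corresponds to an empty S (plain MRC)
        S, rest = order[:k], order[k:]
        residual = P - p[S].sum()                 # power left for the MRC part, P - sum_{j<=k} p_tau(j)
        rest_energy = (h[rest] ** 2).sum()
        if residual <= 0:                         # protective fallback: no power left for the complement
            b[S] = np.sqrt(p[S])
            return b
        # Stop at the minimum k for which the next ratio drops below threshold(k), cf. (5)-(6)
        if k < N and h[order[k]] ** 2 / p[order[k]] > rest_energy / residual:
            continue
        b[S] = np.sqrt(p[S])                      # full per-antenna power on S, cf. (8)
        if rest_energy > 0:
            b[rest] = h[rest] * np.sqrt(residual / rest_energy)   # MRC on the complement, cf. (8)
        return b

# Example
h = np.array([1.0, 0.8, 0.3, 0.1])
p = np.array([0.05, 0.05, 0.05, 0.05])
b = ratio_threshold_beamforming(h, p, P=0.15)
print(b, np.sum(b ** 2))   # per-antenna and total power constraints should both hold
```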
Reference is made to FIG. 3, which is a schematic illustration of an exemplary implementation of beamforming module 300 according to embodiments of the present invention. According to embodiments of
the present invention beamforming module 300 may include beamforming block 310 and MRC block 320. Beamforming module 300 may be implemented in transmitter 110 as transmitter beamforming module 112 or
in receiver 140 as receiver beamforming module 142. Beamforming block 310 may obtain channel vector {tilde over (h)}, total power constraint P, and power constraint vector p as inputs, and may
calculate the vector h of absolute values and output beamforming vector b(h;P,p) multiplied by appropriate phases, for example, by minus the phases of {tilde over (h)}. Alternatively, beamforming
block 310 may obtain an initial beamforming vector and assume channel vector {tilde over (h)} is the conjugate of the initial beamforming vector, so that the vector of absolute values h is equal to
the vector of absolute values of the initial beamforming vector.
According to embodiments of the present invention, beamforming module 112 may be adapted to perform the ratio threshold method. Beamforming block 310 may order the ratios $h_i^2/p_i$ according to (4), find subset S by calculating index k according to (5) and (6), and send the complementary channel vector $(h_{\tau(k+1)}, h_{\tau(k+2)}, \ldots, h_{\tau(N)})$ and the complementary total power constraint for the partial channel vector, $P - \sum_{j=1}^{k} p_{\tau(j)}$, to MRC block 320, such that MRC block 320 may calculate the MRC vector for the antennas with indices in $\{\tau(k+1), \tau(k+2), \ldots, \tau(N)\}$ as presented in (9). MRC block 320 may return the MRC vector for the antennas with indices in $\{\tau(k+1), \tau(k+2), \ldots, \tau(N)\}$ to beamforming block 310. Beamforming block 310 may compose the beamforming vector b(h;P,p) from the per-antenna constraints for subset S and MRC on the remaining subchannels of the complementary subset, according to (8).
Reference is now made to FIG. 4 which is a flowchart illustration of a method for calculating beamforming vector under per-antenna constraints combined with total power constraint using the iterative
MRC method according to embodiments of the present invention. In block 410 the phases of the beamforming vector may be set to minus the corresponding channel phase, as indicated in (1).
Alternatively, if transmitter 110 obtains an initial beamforming vector from receiver 140, then the original phases of the initial beamforming vector may be left unchanged. Subset S may be found in
an iterative process, as described hereinafter.
In block 420 a full MRC beamforming vector $b^{(0)} := b_{MRC}(h;P)$ may be calculated, taking into account the total power constraint but not the per-antenna power constraints. The resultant full MRC beamforming vector $b^{(0)}$ may violate the per-antenna constraints for several antennas. In block 430 it may be verified whether the per-antenna constraints are met. If the per-antenna constraints are met, then $b^{(0)} = b_{MRC}(h;P)$ may be the output of the iterative MRC method, as indicated in block 460. If the per-antenna constraints are not met, then the components of the MRC beamforming vector $b^{(0)}$ in which a per-antenna constraint is violated may be replaced, or clipped, to equal the square root of the per-antenna constraints at these antennas, as indicated in block 440. The resultant beamforming vector $b^{(1)}$ may meet the per-antenna constraints with equality in the above components; however, the overall power of $b^{(1)}$ may be smaller than the total power constraint P. In order to use all available power, the remaining, not clipped, part of $b^{(1)}$ may be replaced by a partial MRC vector on the corresponding antennas, with the power constraint being the difference between P and the overall power on the clipped components, as indicated in block 450. Again, the MRC operation may result in violation of some per-antenna constraints. Thus blocks 430, 440 and 450 may be repeated until the per-antenna power constraints are met.
After a finite number r of iterations, the resultant vector $b^{(r)}$ may either meet the overall power constraint with equality, or meet all the per-antenna constraints with equality. This vector $b^{(r)}$ may be the output of the iterative MRC method, as indicated in block 460.
A beamforming vector $b_{(S)}(h;P,p)$ for a subset $S \subseteq \{1, \ldots, N\}$ with $P - \sum_{i \in S} p_i \ge 0$ may be defined by setting:

$$[b_{(S)}(h;P,p)]_k := \begin{cases} \sqrt{p_k} & k \in S \\[4pt] h_k \sqrt{\dfrac{P - \sum_{i \in S} p_i}{\sum_{i \notin S} h_i^2}} & k \notin S, \end{cases} \qquad (10)$$

where k runs over all antenna indices {1, . . . , N}. It should be noted that if S is an empty subset, then $b_{(S)}(h;P,p) = b_{MRC}(h;P)$, and otherwise $b_{(S)}(h;P,p)$ may be a beamforming vector with full per-antenna power on S, and with MRC with the remaining power on the remaining components of the complementary subset.
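A minimal sketch of the iterative MRC method of FIG. 4 follows, under the same assumptions as the previous sketch (numpy, magnitude-only inputs, illustrative function names):

```python
import numpy as np

def iterative_mrc_beamforming(h, p, P):
    """Sketch of the iterative MRC method of FIG. 4: start from a full MRC vector, clip the
    antennas that violate their per-antenna constraints to sqrt(p_k), and redistribute the
    remaining power via MRC on the rest until all per-antenna constraints are met."""
    h = np.asarray(h, dtype=float)
    p = np.asarray(p, dtype=float)
    N = len(h)
    S = np.zeros(N, dtype=bool)                   # antennas already clipped to full per-antenna power
    b = np.zeros(N)
    while True:
        rest = ~S
        residual = P - p[S].sum()                 # power left for the non-clipped antennas
        rest_energy = (h[rest] ** 2).sum()
        b[S] = np.sqrt(p[S])
        if residual <= 0 or rest_energy == 0:     # nothing left to distribute: done
            return b
        b[rest] = h[rest] * np.sqrt(residual / rest_energy)   # partial MRC on the rest, cf. (10)
        violated = rest & (b ** 2 > p + 1e-12)
        if not violated.any():                    # all per-antenna constraints met: done
            return b
        S |= violated                             # clip the violating antennas and iterate

# Example: agrees with the ratio threshold sketch on the same inputs
h = np.array([1.0, 0.8, 0.3, 0.1])
p = np.array([0.05, 0.05, 0.05, 0.05])
print(iterative_mrc_beamforming(h, p, P=0.15))
```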
Reference is made again to FIG. 3. According to embodiments of the present invention, beamforming module 112 may be adapted to perform the iterative MRC method. Beamforming block 310 may initiate S
to be an empty subset and may send the complementary channel vector $\{h_k\}_{k \notin S}$ and the total power constraint for the complementary channel vector, $P - \sum_{k \in S} p_k$, to MRC block 320, such that MRC block 320 may calculate the complementary MRC beamforming vector $b_{MRC}(\{h_k\}_{k \notin S};\; P - \sum_{k \in S} p_k)$ and return it to beamforming block 310, which combines the subset S having full per-antenna power with the complementary MRC beamforming vector to get $b_{(S)}(h;P,p)$. If $b_{(S)}(h;P,p)$ satisfies all per-antenna constraints, beamforming block 310 may output $b_{(S)}(h;P,p)$. Otherwise, beamforming block 310 may join to subset S all indices in which $b_{(S)}(h;P,p)$ violates the per-antenna constraints, assign to them full per-antenna power, and again send the complementary channel vector $\{h_k\}_{k \notin S}$ and the total power constraint for the complementary channel vector, $P - \sum_{k \in S} p_k$, to MRC block 320, such that MRC block 320 may calculate the complementary MRC vector, and so forth until the per-antenna power constraints are met for all antennas. Beamforming block 310 may compose the beamforming vector $b_{(S)}(h;P,p)$ as indicated in (10). $b_{(S)}(h;P,p)$ may be the output of beamforming block 310.
It can be shown that both ratio threshold and iterative MRC methods may reach substantially optimal results. Each of these methods may have its advantages. For example, the iterative MRC method may
be simpler conceptually, and may not require sorting of ratios. The ratio threshold method, on the other hand, may not require actually calculating several MRC vectors. Hence, in some systems ratio
threshold method may be preferred, while in others the iterative MRC method may be preferred.
Reference is now made to FIG. 5 which is a flowchart illustration of a method for calculating beamforming vector under per-antenna constraints combined with EIRP constraint according to embodiments
of the present invention. According to embodiments of the present invention, a worst-case scenario that all values of the beamforming vector add coherently may be assumed, as indicated in block 510,
and hence the EIRP constraint may be defined as:

$$\left(\sum_{k=1}^{N} |\tilde{b}_k|\right)^2 \le E. \qquad (13)$$
It should be noted that in practice the regulator may enforce EIRP in several ways. Equation (13) may apply for enforcing EIRP using radiated measurements in which the regulator measures the EIRP
over the air, and provides for an upper limit on possible EIRP for a given beamforming vector.
According to embodiments of the invention, the beamforming design problem with this simplified EIRP constraint combined with per-antenna constraints allows taking the beamforming phases as minus the corresponding channel phases, or keeping the original phases of the initial beamforming vector unchanged if transmitter 110 obtains an initial beamforming vector from receiver 140, as indicated in block 520, similar to the case of per-antenna constraints combined with a total power constraint described above. Likewise, only the absolute values vector b has to be optimized.
According to embodiments of the present invention the substantially optimal beamforming vector b may be obtained using the method as described herein below.
In block 530 the antennas are ordered in decreasing order of their channel magnitudes. Let τ be a permutation on {1, . . . , N} such that:

$$h_{\tau(1)} \ge h_{\tau(2)} \ge \cdots \ge h_{\tau(N)}. \qquad (14)$$

As before, there may be no need to actually find this permutation; it is only presented here for notational convenience. In block 540 a maximum integer M may be found such that the max full power inequality is fulfilled:

$$\sum_{k=1}^{M} \sqrt{p_{\tau(k)}} \le \sqrt{E}. \qquad (15)$$

In block 550 the beamforming vector b is composed according to:

$$b_{\tau(k)} := \begin{cases} \sqrt{p_{\tau(k)}} & k \in \{1, \ldots, M\} \\[4pt] \sqrt{E} - \sum_{i=1}^{M} \sqrt{p_{\tau(i)}} & k = M+1 \le N \\[4pt] 0 & \text{otherwise.} \end{cases} \qquad (16)$$

Hence, according to (16), the beamforming vector has full per-antenna power on the sorted antennas 1 to M, the residual amplitude allocated to the sorted antenna M+1, and zero power on the remaining antennas.
Throughout the application this method will be referred to as the max full power+1 (MFP+1) method. According to embodiments of the present invention the beamforming vector obtained by MFP+1 method
may be substantially optimal for the worst case EIRP constraint presented in (13).
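A minimal sketch of the MFP+1 method follows, assuming a numpy environment and magnitude-only inputs; the function name and the example values are illustrative only:

```python
import numpy as np

def mfp_plus_one_beamforming(h, p, E):
    """Sketch of the MFP+1 method: amplitudes under per-antenna constraints p and the
    worst-case EIRP constraint (sum_k b_k)^2 <= E, cf. (13)-(16)."""
    h = np.asarray(h, dtype=float)
    p = np.asarray(p, dtype=float)
    N = len(h)
    order = np.argsort(-h)                      # antennas sorted by decreasing channel magnitude, cf. (14)
    amps_sorted = np.sqrt(p[order])
    csum = np.cumsum(amps_sorted)
    # Maximum M such that the first M full-power amplitudes fit within sqrt(E), cf. (15)
    M = int(np.searchsorted(csum, np.sqrt(E), side="right"))
    b = np.zeros(N)
    b[order[:M]] = amps_sorted[:M]              # full per-antenna power on the M strongest antennas
    if M < N:
        residual_amp = np.sqrt(E) - (csum[M - 1] if M > 0 else 0.0)   # leftover amplitude budget
        b[order[M]] = residual_amp              # sorted antenna M+1 gets the residual, cf. (16)
    return b

# Example
h = np.array([1.0, 0.8, 0.3, 0.1])
p = np.array([0.04, 0.09, 0.16, 0.01])
b = mfp_plus_one_beamforming(h, p, E=0.5)
print(b, np.sum(b) ** 2)                        # (sum of amplitudes)^2 should equal E here
```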
It should be noted that while the MFP+1 method may look similar to antenna selection, there may be a significant difference: according to MFP+1, a subset of the antennas may be chosen even if all antennas are connected to RF chains. The MFP+1 method may be different from usual antenna selection methods, where there are fewer RF chains than antennas and therefore some antenna subset must be chosen. Consider, for example, the case of several frequency bands, each having its own EIRP constraint. With the MFP+1 method, in a high-EIRP band all antennas and all RF chains may be utilized, while in a low-EIRP band only some of the antennas and the corresponding RF chains may be utilized.
It can be shown by examples that the SNR obtained by the MFP+1 method may be considerably higher than that of methods known in the art, e.g., FPAP+scaling down to meet the EIRP constraint. Hence the MFP+1 method of embodiments of the current invention provides a good tradeoff between performance and complexity, exhibiting nearly optimal performance in comparison to convex optimization solutions, at the cost of a minor increase in complexity compared to existing scaling methods.
According to a second type of EIRP constraint that may be used by the regulator, the measurement may be done in conducted mode. Here the regulator assumes that the EIRP is E = P·G·Y, where P is the total used power summed over all antenna ports, G is the assumed antenna gain, and Y is the beamforming gain, which may be defined by the regulator to be Y = N, with N the number of used antennas to which non-zero power is allocated. Since E may be set by the regulator and G may be a system parameter, it follows that the larger N is, the less total power may be used.
One option to meet this constraint is to use MRC method with appropriate scaling, e.g. calculate MRC with a total power constraint of P=E/G/N, and then scale the beamforming vector such that the
largest antenna power may be below the per antenna constraint. According to embodiments of the invention, another option to meet this constraint may be the MFP method. According to MFP method the
first M antennas may be allocated in the same way as in the MFP+1, but zero power may be assigned to all other antennas. Since with the MFP not all available power may be used, MFP method may not be
optimal. For some channel realizations MFP method may be better than the MRC+scaling, while for other realizations MFP method may be worse. According to embodiments of the invention, transmitter 110
may check both options, MFP or MRC+scaling, and may choose the one resulting in the largest SNR.
Reference is now made to FIG. 6 which is a flowchart illustration of the water flooding method for calculating beamforming vector under a combination of per-antenna, total power and EIRP constraints
according to embodiments of the present invention. According to embodiments of the present invention, a beamforming vector may be found, considering two of the three constraints, typically the
per-antenna power constraints combined with either the total power constraint or the EIRP constraint. Then the channel vector may be modified by "water flooding" until the third constraint is also
satisfied. Here, water flooding means replacing channel magnitude vector h by a new flooded channel vector (h-λ).sup.+, where λ is a non-negative real scalar. λ may be referred to as the flooding
level throughout the application. Flooded channel vector (h-λ).sup.+ may be derived by subtracting λ from components of channel vector h that are larger than λ while reducing to zero components of
channel vector h that are not larger than λ.
For example, the beamforming design method under only the per-antenna and total power constraints may be one of the above ratio threshold or iterative MRC methods. Similarly, the optional beamforming
design method under only the per-antenna and EIRP power constraints may be the MFP+1 method.
Similarly to the ratio threshold, iterative MRC and MFP+1 methods, the water flooding method considers only the absolute value channel vector h and the absolute value beamforming vector b, and the final beamforming vector may be obtained by multiplying the k-th entry of the output b by $e^{-i\,\arg(\tilde{h}_k)}$ for all k, where $\tilde{h}$ is the complex channel vector.
In detail, the water flooding method works as follows. At block 610 a beamforming vector $b_{\text{per-ant.+tot.power}}(h;P,p)$ related to the per-antenna and total power constraints may be found, e.g., by using the ratio threshold or iterative MRC methods. As indicated in block 620, if the beamforming vector $b_{\text{per-ant.+tot.power}}(h;P,p)$ satisfies the EIRP constraint or the simplified EIRP constraint as presented in (13), then $b_{\text{per-ant.+tot.power}}(h;P,p)$ may be the output of the method. Otherwise, a beamforming vector $b_{\text{per-ant.+EIRP}}(h;E,p)$ related to the per-antenna power and overall EIRP constraints may be found, e.g., by using the MFP+1 method, as indicated in block 630. As indicated in block 640, if the beamforming vector $b_{\text{per-ant.+EIRP}}(h;E,p)$ satisfies the total power constraint, then $b_{\text{per-ant.+EIRP}}(h;E,p)$ may be the output of the method, as indicated in block 660. It should be noted that blocks 630 and 640 are optional. At block 650 a flooding level λ>0 may be found, for which the beamforming vector $b_{\text{per-ant.+tot.power}}((h-\lambda)^{+};P,p)$ satisfies the simplified EIRP constraint (13) with near equality:

$$0 \le \sqrt{E} - \sum_{k}\left[b_{\text{per-ant.+tot.power}}\left((h-\lambda)^{+};P,p\right)\right]_{k} < \varepsilon, \qquad (17)$$

where ε is the allowed error. Thus, $b_{\text{per-ant.+tot.power}}((h-\lambda)^{+};P,p)$ may be the output of the water flooding method, as indicated in block 660.

Note that since $\sum_{k}\left[b_{\text{per-ant.+tot.power}}\left((h-\lambda)^{+};P,p\right)\right]_{k}$ may be a decreasing function of the flooding level λ, finding the flooding level may be performed efficiently by a binary search.
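A minimal sketch of the water flooding search follows; it assumes a numpy environment and reuses one of the per-antenna + total power sketches above as the inner design routine, passed in as design_fn (an assumed callable rather than a fixed API):

```python
import numpy as np

def water_flooding_beamforming(h, p, P, E, design_fn, eps=1e-6, iters=60):
    """Sketch of the water flooding method of FIG. 6: find a flooding level lam such that
    design_fn((h - lam)^+, p, P) also meets the worst-case EIRP constraint
    (sum_k b_k)^2 <= E of (13).  design_fn is any per-antenna + total power design
    routine operating on channel magnitudes."""
    h = np.asarray(h, dtype=float)
    b = design_fn(h, p, P)
    if np.sum(b) <= np.sqrt(E):
        return b                               # the per-antenna + total power solution already complies
    lo, hi = 0.0, float(h.max())               # sum_k b_k decreases as the flooding level grows
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        b = design_fn(np.maximum(h - lam, 0.0), p, P)
        gap = np.sqrt(E) - np.sum(b)
        if gap < 0.0:
            lo = lam                           # still violating the EIRP constraint: flood more
        else:
            hi = lam                           # EIRP met: try to flood less
            if gap < eps:                      # near equality, cf. (17): done
                return b
    return design_fn(np.maximum(h - hi, 0.0), p, P)   # hi side is always EIRP-feasible

# Example (assumes the ratio threshold sketch above is in scope):
# h = np.array([1.0, 0.8, 0.3, 0.1]); p = np.array([0.05] * 4)
# b = water_flooding_beamforming(h, p, P=0.15, E=0.3, design_fn=ratio_threshold_beamforming)
```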
Reference is made to FIG. 7, which schematically illustrates channel vector h before 700 and after 750 water flooding according to embodiments of the present invention. According to embodiments of
the present invention, components of channel vector h that are larger than λ, such as component 710 may be reduced by λ as a result of water flooding, the resultant component is illustrated in FIG. 7
as component 760, while components of channel vector h that are smaller than λ, such as component 730, may be reduced to substantially zero.
Reference is made to FIG. 8, which schematically illustrates another exemplary implementation of beamforming module 800 according to embodiments of the present invention. According to embodiments of
the present invention beamforming module 800 may include all constraint beamforming block 810 and auxiliary beamforming block 820. Beamforming module 800 may be implemented in transmitter 110 as
transmitter beamforming module 112 or in receiver 140 as receiver beamforming module 142. Auxiliary beamforming block 820 may be implemented, for example, using beamforming block 310 and MRC block
320 as presented in FIG. 3. All constraint beamforming block 810 may obtain the channel vector h, the total power constraint P, the EIRP constraint E, and the per-antenna power constraint vector p, as inputs, and may output the beamforming vector $b_{\text{per-ant.+tot.power}}((h-\lambda)^{+};P,p)$.
According to embodiments of the present invention, beamforming module 112 may be adapted to perform the water flooding method. Auxiliary beamforming block 820 may obtain a flooded channel vector $(h-\lambda')^{+}$ for some initial flooding level λ', for example λ'=0, the total power constraint P, and the per-antenna power constraint vector p, from all constraint beamforming block 810, and may return the beamforming vector $b_{\text{per-ant.+tot.power}}((h-\lambda')^{+};P,p)$. All constraint beamforming block 810 may verify whether the beamforming vector $b_{\text{per-ant.+tot.power}}((h-\lambda')^{+};P,p)$ satisfies the EIRP constraint as presented in (13). If it does, the beamforming output may equal $b_{\text{per-ant.+tot.power}}((h-\lambda')^{+};P,p)$. Otherwise, all constraint beamforming block 810 may continue to search for a flooding level λ>0 for which the output beamforming vector $b_{\text{per-ant.+tot.power}}((h-\lambda)^{+};P,p)$ satisfies the simplified EIRP constraint (13) with near equality, as in (17). For each examined flooding level λ, all constraint beamforming block 810 may obtain $b_{\text{per-ant.+tot.power}}((h-\lambda)^{+};P,p)$ from auxiliary beamforming block 820 by letting the channel input of block 820 be equal to $(h-\lambda)^{+}$.
Embodiments of the present invention may be adapted to perform with a single antenna or a multi-antenna receiver. In the case of a multi-antenna receiver, transmitter 110 may assume an RX beamforming
vector in receiver 140. Composition of this RX beamforming vector and the channel matrix may result in an equivalent single-RX antenna channel vector. Hence, transmitter 110 may use any of the
beamforming methods of embodiments of the present invention with the equivalent channel vector.
For example, transmitter 110 may consider only one of antennas 148 of receiver 140, e.g., the antenna corresponding to the largest-norm row in the channel matrix. It should be noted that considering
only one of antennas 148 of receiver 140 may be equivalent to assuming that receiver 140 is using a standard unit vector for beamforming. The equivalent single-RX antenna channel vector may be the
selected row of the channel matrix. Alternatively, transmitter 110 may assume that receiver 140 is using max-eigenmode beamforming. Multiplying the channel matrix from the left by the RX
max-eigenmode beamforming vector results in a single row vector, and this row vector may be the equivalent single-RX antenna channel vector.
Alternatively, the effective single-RX-antenna channel may be obtained by an explicit beamforming feedback from the receiver 140, such as in the case of IEEE 802.11n/ac explicit matrix feedback. For
example, if some columns of the "V"-part of the singular value decomposition (SVD) of the channel matrix are returned by receiver 140, then the column corresponding to the maximum eigenmode may be
used as the effective channel. Generally, if receiver 140 returns any explicit TX-beamforming vector, then the conjugate vector may serve as an approximated single-antenna channel for the current transmission.
According to embodiments of the present invention, in explicit beamforming, the channel may be sounded such that the full dimensions of the beamforming matrix can be extracted by receiver 140 and fed
back to transmitter 110. For example, if both transmitter 110 and receiver 140 are n times n devices having n transmit and receive antennas, transmitter 110 may send a sounding frame to receiver 140
from which a beamforming matrix that may have dimensions of n times n, typically the V matrix of the SVD of the channel, may be extracted and fed back to transmitter 110. Alternatively, in case transmitter 110 decides to transmit a single spatial stream, transmitter 110 may use only the first row of the beamforming matrix. This first row may be the single stream beamforming vector used as the initial beamforming vector, under the assumption that this first row is an MRC of the effective single stream channel vector.
Since embodiments of the present invention may manipulate the first row of the beamforming matrix, one of ordinary skill in the art would expect that if receiver 140 implements beamforming
calculations according to embodiments of the current invention, receiver 140 needs to manipulate just the first row of the beamforming matrix before feeding back the full beamforming matrix. However,
this may not be the case.
Since embodiments of the invention described herein may be used mainly for single stream operation with a single effective receiver antenna 148, manipulating just the first row of the beamforming matrix may distort the matrix in case it is used for transmitting multiple streams, in which case transmitter 110 may use a number of rows that corresponds to the number of spatial streams. Therefore, receiver 140 may need to know in advance whether transmitter 110 uses just a single spatial stream, in which case receiver 140 may feed back the first row of the beamforming matrix, manipulated according to embodiments of the present invention, for example by the ratio threshold method, the iterative MRC method, MFP+1, etc. Alternatively, receiver 140 may decide for transmitter 110 that transmitter 110 will use a single spatial stream, in which case receiver 140 again may send back the manipulated first row of the beamforming matrix, thus forcing transmitter 110 to use a single spatial stream.
According to embodiments of the invention, receiver 140 may send a single row of the beamforming matrix, manipulated according to embodiments of the present invention, for example ratio threshold
method, iterative MRC method, MFP+1 etc. in case a single spatial stream is to be used, and another matrix, e.g. the SVD matrix, having a larger dimension, in case more than one spatial stream is
needed. Receiver 140 may either decide on the number of spatial streams or get an indication of this from transmitter 110.
According to embodiments of the invention, there may be iterations between transmitter beamforming and estimate of receiver beamforming. For example, if transmitter 110 has an estimation of the
channel matrix, then transmitter 110 may calculate the transmitter beamforming vector according to embodiments of the present invention using the first column of V from the SVD as the input effective
channel vector. The result may be a first-step transmitter beamforming vector. Based on this first-step transmitter beamforming vector, transmitter 110 may calculate a new estimate of receiver
beamforming vector e.g., by performing MRC on the effective channel. Finding a new estimate of receiver beamforming vector may give a new effective channel vector in transmitter 110 side, and hence a
second-step transmitter beamforming vector. Transmitter 110 may proceed this way until some stopping criterion is met, e.g., a pre-defined maximum number of iterations is reached, and output the
final transmitter beamforming vector.
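As a rough sketch of such transmitter-side iterations (assuming a numpy environment, a known channel matrix H, receiver-side MRC combining, a fixed iteration count as the stopping criterion, and a constrained design routine such as the sketches above passed in as design_fn; none of these names are mandated by the cited standards):

```python
import numpy as np

def alternating_tx_rx_beamforming(H, p, P, design_fn, iters=10):
    """Sketch of transmitter-side iterations: alternate between a constrained TX design on the
    current effective single-RX-antenna channel and an estimated RX MRC combiner.
    H is the (RX antennas x TX antennas) channel matrix."""
    _, s, Vh = np.linalg.svd(H)
    h_eff = s[0] * Vh[0]                        # effective channel if RX uses max-eigenmode combining
    for _ in range(iters):
        amps = design_fn(np.abs(h_eff), p, P)   # constrained TX amplitudes for the effective channel
        b = amps * np.exp(-1j * np.angle(h_eff))
        w = H @ b
        w = w / np.linalg.norm(w)               # estimated RX MRC combiner for this TX vector
        h_eff = np.conj(w) @ H                  # new effective single-RX-antenna channel
    return b

# Example (assumes the ratio threshold sketch above is in scope):
# H = (np.random.randn(2, 4) + 1j * np.random.randn(2, 4)) / np.sqrt(2)
# b = alternating_tx_rx_beamforming(H, [0.05] * 4, 0.15, ratio_threshold_beamforming)
```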
According to embodiments of the invention, there may be iterations between transmitter 110 and receiver 140. Iterations between transmitter 110 and receiver 140 may be mainly relevant in case of
explicit beamforming. This may be done by applying the beamforming vector according to embodiments of the invention to the sounding frame that may be sent to receiver 140 and used to enable receiver 140 to generate a feedback matrix. By doing that, receiver 140 may experience a new effective channel for which receiver 140 may design a new beamforming response, for example the optimal MRC vector. Transmitter 110 may receive said feedback matrix comprising a new effective channel vector and calculate an updated transmitter beamforming vector. This way the iterations may be done seamlessly without any extra overhead or complexity. However, every sounding frame may have to use the last calculated beamforming vector as a beamforming vector, contrary to current art according to which sounding frames are usually sent without beamforming. Optionally, only the amplitudes of the beamforming vectors calculated according to embodiments of the current invention should
scale the sounding frame, such that the original phases of the sounding frame are not changed.
Calculating beamforming vectors according to embodiments of the invention may be useful also in multi-carrier systems, such as frequency-division multiplexing (FDM) or OFDM as well as other
multi-carrier systems.
Reference is now made to FIG. 9 which is a flowchart illustration of the per-bin method for calculating beamforming vectors in multi-carrier systems according to embodiments of the present invention.
According to embodiments of the present invention each subcarrier, also referred to as bin, may be regarded as a separate channel and methods of embodiments of the current invention may be applied
separately in each bin. As indicated in block 910 the constraints may be divided between the subcarriers. For example, the per-antenna power constraints, the total power constraint and the EIRP
constraint may be divided between the subcarriers. For example, if the total power constraint is 1 W and there are 100 subcarriers, then the total power in each subcarrier may not exceed 1/100 W. A
beamforming vector may be calculated for each subcarrier, as indicated in block 920. While this method is not optimal, it is considerably simpler than the existing near-optimal methods, and gives a
considerable SNR improvement over the simple MRC/FPAP+scaling methods.
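A minimal sketch of the per-bin method follows; for brevity it divides only the per-antenna and total power budgets between the bins and reuses one of the single-carrier sketches above as design_fn (an assumed callable):

```python
import numpy as np

def per_bin_beamforming(H_tilde, p, P, design_fn):
    """Sketch of the per-bin method of FIG. 9: split the power budgets evenly over the F
    subcarriers and run a single-carrier design independently in every bin.
    H_tilde is the (antennas x subcarriers) complex channel matrix."""
    N, F = H_tilde.shape
    p_bin = np.asarray(p, dtype=float) / F          # per-antenna budget per bin
    P_bin = P / F                                   # total power budget per bin
    B = np.zeros((N, F), dtype=complex)
    for s in range(F):
        amps = design_fn(np.abs(H_tilde[:, s]), p_bin, P_bin)   # magnitudes for this bin
        B[:, s] = amps * np.exp(-1j * np.angle(H_tilde[:, s]))  # phases: minus the channel phases
    return B

# Example (assumes the ratio threshold sketch above is in scope):
# H_tilde = (np.random.randn(4, 16) + 1j * np.random.randn(4, 16)) / np.sqrt(2)
# B = per_bin_beamforming(H_tilde, [0.1] * 4, 0.25, ratio_threshold_beamforming)
```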
Reference is now made to FIG. 10 which is a flowchart illustration of a single vector method for calculating beamforming vector in multi-carrier systems according to embodiments of the present
invention. According to embodiments of the present invention a further simplification may be obtained by designing a single vector of beamforming magnitudes for all subcarriers, while possibly using minus the channel phase for each subcarrier and each antenna. As indicated in block 1010 the per-subcarrier channel magnitudes for each antenna are averaged. This vector of averages may be used as the input channel vector to any of the beamforming methods according to embodiments of the present invention, and a beamforming vector may be calculated, as indicated in block 1020. The output beamforming vector may determine the absolute values of the beamforming vectors of all subcarriers, for example, by letting the absolute values of the entries of the beamforming vector at each subcarrier s be equal to the output beamforming vector divided by $\sqrt{F}$, where F is the number of sub-carriers. Alternatively, the power constraint inputs may be scaled prior to calculating a
beamforming vector for the vector of averages according to embodiments of the present invention. For example, per-antenna constraints may be scaled to be p/F, total power constraint may be scaled to
be P/F and EIRP power constraint may be scaled to be E/F. If the power constraints are scaled prior to calculating the beamforming vector then the output beamforming vector may be used as the
absolute-value part of the beamforming vector of each subcarrier. As indicated in block 1030 the phases of the beamforming vector may be set to minus the corresponding channel phase for each
subcarrier and each antenna. Alternatively, if transmitter 110 obtains initial beamforming vectors from receiver 140, then the original phases of the initial beamforming vectors may be left unchanged
for each subcarrier and each antenna.
For example, if there are F subcarriers and $\{\tilde{h}_{k,s}\}$ is the antenna-by-subcarrier complex channel matrix, then the beamforming design methods according to embodiments of the present invention may get the per-antenna vector

$$\left\{ \sqrt{\frac{1}{F}\sum_{s=1}^{F} |\tilde{h}_{k,s}|^{2}} \right\}_{k \in \{1,\ldots,N\}} \qquad (18)$$
It should be readily understood by those skilled in the art that this is
only one of many possible ways to average the channel magnitudes over subcarriers and that many other ways may be suitable.
According to embodiments of the present invention in channels where there is small variance in the amplitude of the different frequency bins, the SNR achieved by the single vector method may be
similar to that of the per-bin method, with a considerable complexity reduction.
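A minimal sketch of the single vector method follows, under the same assumptions as the previous sketches; the averaging follows Equation (18) as reconstructed above and the 1/√F per-bin split described in FIG. 10:

```python
import numpy as np

def single_vector_beamforming(H_tilde, p, P, design_fn):
    """Sketch of the single vector method of FIG. 10: design one magnitude vector from
    subcarrier-averaged channel magnitudes (cf. (18)) and reuse it, scaled by 1/sqrt(F),
    in every bin, with per-bin phases equal to minus the channel phases."""
    _, F = H_tilde.shape
    h_avg = np.sqrt(np.mean(np.abs(H_tilde) ** 2, axis=1))   # per-antenna RMS magnitude, cf. (18)
    amps = design_fn(h_avg, p, P)                            # one constrained design for all bins
    return (amps[:, None] / np.sqrt(F)) * np.exp(-1j * np.angle(H_tilde))

# Example (assumes the ratio threshold sketch above is in scope):
# B = single_vector_beamforming(H_tilde, [0.1] * 4, 0.25, ratio_threshold_beamforming)
```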
Reference is now made to FIG. 11, which is a flowchart illustration of a per-antenna correction method according to embodiments of the present invention. According to embodiments of the present invention, as indicated in block 1110, the transmitter may have some initial transmitter beamforming vectors, e.g., those returned from the receiver through explicit feedback, typically accounting only for the overall power constraint. In such an application, the absolute-value outputs {b_i} of beamforming methods according to embodiments of the invention may be divided by the square roots of the known initial per-antenna powers, as indicated in block 1140, and the result may be used to re-scale the gain applied to the signal of the antennas by changing the analog or digital gains of each transmit antenna, as indicated in block 1150. In such an application, initial beamforming vectors per subcarrier may be obtained, as indicated in block 1110, and total per-antenna powers {p̃_i} may be calculated, as indicated in block 1120. In block 1130 a beamforming vector {b_i} may be calculated according to embodiments of the present invention, for example using the single vector {√p̃_i} as the input channel, such that h_i = √p̃_i for all i. In block 1140 the components of the beamforming vector {b_i} may be divided by the square roots of the per-antenna power levels {p̃_i}. In block 1150 the gain of the i-th antenna may be multiplied by b_i/√p̃_i, for all i. It should be readily understood by those skilled in the art that the intermediate stage of dividing the beamforming vector {b_i} by the square roots of the per-antenna power levels {p̃_i} is optional, as it may not be necessary to actually go through this intermediate calculation in order to eventually multiply the gain of the i-th antenna by b_i/√p̃_i. For example, the gain may first be multiplied by b_i and then divided by √p̃_i.
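A minimal sketch of this per-antenna correction flow of FIG. 11, assuming NumPy and again using design_beamformer as a hypothetical stand-in for the actual constrained design step:

```python
import numpy as np

def design_beamformer(h_mag, p_total=1.0):
    """Hypothetical stand-in for a constrained beamforming design routine."""
    b = h_mag / np.linalg.norm(h_mag)
    return np.sqrt(p_total) * b

N, F = 4, 64
# Hypothetical initial per-subcarrier beamforming vectors (block 1110)
Q = (np.random.randn(N, F) + 1j * np.random.randn(N, F)) / np.sqrt(2 * F)

p_tilde = np.sum(np.abs(Q) ** 2, axis=1)       # block 1120: total per-antenna powers
b = design_beamformer(np.sqrt(p_tilde))        # block 1130: use {sqrt(p~_i)} as the input channel
gain_correction = b / np.sqrt(p_tilde)         # blocks 1140/1150: factor applied to each antenna's gain
print(gain_correction)
```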
According to embodiments of the present invention, this scaling may be applied by making an appropriate gain change to analog power amplifiers 116 of transmitter 110. This may be implemented, for example, by setting the appropriate registers of a variable gain amplifier (VGA). In this way the correction may be done substantially in the analog domain and does not necessitate changing the values of the digital beamforming matrices. Alternatively, the digital signal in either the time or frequency domain may be scaled.
According to embodiments of the present invention, the per-antenna correction method, which may scale the power of each antenna, may be applied for multi-stream transmission as well. However, contrary to the single-stream case, the per-antenna correction method may not be optimal for multi-stream transmission, and in general improved performance may not be guaranteed in comparison to SVD+scaling, where SVD+scaling refers to using columns of V from an SVD as beamforming vectors and multiplying these beamforming vectors by a single scaling factor to meet all constraints. According to the per-antenna correction method, the vector of per-antenna powers may be computed for the case of several spatial streams by, for example, summing the streams in each antenna. The resultant vector of per-antenna powers may be used as an input to beamforming methods according to embodiments of the present invention, and the output beamforming vector b may be used to scale each antenna. While distorting the original matrices, this operation may be beneficial in certain channel scenarios in comparison with the SVD+scaling method, where scaling is done evenly on all antennas.
According to embodiments of the present invention, blocks 1130 and 1140 may be optional in the case of per-antenna constraints only. As already mentioned, in a typical OFDM application the transmitter may already have some initial transmitter beamforming vectors, e.g., those returned from the receiver through explicit feedback, typically accounting only for the overall power constraint. In such an application, the vector {p̃_i} of overall per-antenna powers may be calculated for the default transmitter beamforming vectors. Additionally, the maximum per-antenna powers {p_i} may be known. In this case the initial power of the i-th PA may be scaled by the maximum possible output power of the i-th PA, p_i, divided by the corresponding component p̃_i of the overall per-antenna power (for all i). It should be noted that the maximum per-antenna power vector {p_i} may be modulation and coding scheme (MCS) dependent.
In practical applications the transmitter may work in the following way: the digital hardware portions of the transmitter may be ignorant of any change and may continue using the default beamforming vectors, while for each transmission chain i the analog power gain of that chain may be re-scaled, that is, multiplied by a factor of p_i/p̃_i, so that the output power reaches the maximum value p_i. Alternatively, the re-scaling by the factor p_i/p̃_i can be applied digitally to the time-domain signal of each chain, which avoids the need to re-scale the beamforming vector in each OFDM bin.
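As a small illustration of this digital alternative, here is a sketch under the assumption that the power re-scaling p_i/p̃_i is realized as an amplitude scale of √(p_i/p̃_i) on each chain's time-domain samples; the power values are made-up examples:

```python
import numpy as np

# Hypothetical example values: maximum per-PA powers p_i and the powers p~_i
# obtained with the default beamforming vectors.
p_max   = np.array([1.0, 1.0, 0.8, 0.8])
p_tilde = np.array([0.6, 0.9, 0.5, 0.7])

power_factor = p_max / p_tilde                 # per-chain power re-scaling factor
x = np.random.randn(4, 1024)                   # time-domain signal of each transmit chain

# Applying the power re-scaling digitally: as an amplitude scale this is the
# square root of the power factor (assumption), so each chain reaches p_i.
x_scaled = np.sqrt(power_factor)[:, None] * x
print(np.mean(x_scaled**2, axis=1) / np.mean(x**2, axis=1))   # approximately power_factor
```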
According to embodiments of the invention, the hardware (HW)/software (SW) utilization may be optimized by configuring the hardware portion of beamforming module 112 to calculate the following information: the overall power of each antenna for each possible number of TX streams, when using the initial beamforming vectors. For example, if the explicit sounding frame includes initial beamforming information for two streams and the transmitter has 4 antennas, it may be desirable to get from the hardware two vectors of length 4: a vector of powers for the 4 antennas assuming a single stream is used, and a vector of powers for the 4 antennas assuming two streams are used. Optionally, for each number of streams, it would be desirable to get the vector of inverse per-antenna powers {1/p̃_i}. This HW/SW partitioning is useful, as it enables HW acceleration to generate the per-antenna power vector, which includes all information required for the software to obtain the re-scaling factors. Alternatively, instead of automatically returning all possible vectors of powers for any possible choice of the number of streams, the hardware may get the required number of streams as an input and return the vector of powers only for the specified number of streams.
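A minimal sketch of this partitioning (hypothetical array shapes; NumPy assumed) that returns one per-antenna power vector, and its element-wise inverse, for each possible number of TX streams:

```python
import numpy as np

def per_antenna_powers(Q, n_streams):
    """Per-antenna total power when only the first n_streams columns are used."""
    return np.sum(np.abs(Q[:, :n_streams, :])**2, axis=(1, 2))

N_ant, max_streams, F = 4, 2, 64
# Hypothetical initial beamforming tensor: antennas x streams x subcarriers
Q = (np.random.randn(N_ant, max_streams, F) + 1j*np.random.randn(N_ant, max_streams, F)) / np.sqrt(2*F)

for n_ss in range(1, max_streams + 1):
    p = per_antenna_powers(Q, n_ss)
    print(n_ss, "stream(s):", p, "inverse:", 1.0 / p)
```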
In a typical application, together with the initial beamforming, transmitter 110 may also obtain the corresponding SNR estimation, SNR_old. For example, in the 802.11n explicit feedback exchange, a per-stream SNR metric is returned. After scaling the per-antenna power, it is possible to get an estimation SNR_new for the post-change SNR by multiplying SNR_old by a constant c depending merely on the new beamforming vector {b_i} and the per-antenna power levels {p̃_i} of the initial per-antenna beamforming vector or antennas-by-subcarriers matrix. For example, after scaling the beamforming vectors by the factor b_i/√p̃_i, an estimation SNR_new for the post-change SNR may be calculated according to the following formula:
$$\frac{\mathrm{SNR}_{\mathrm{new}}}{\mathrm{SNR}_{\mathrm{old}}} = \frac{\sum_{i} b_{i}\,\tilde p_{i}}{\sum_{i} \tilde p_{i}} \qquad (19)$$
According to embodiments of the present invention, the SNR re-scaling formula may apply to any power allocation change operation, not only those described in this application. This means that if the power allocation of the antennas is changed after an explicit feedback operation or an implicit beamforming calculation that includes a per-stream SNR report, this SNR report can be re-scaled using the above formula.
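A tiny numerical sketch of that re-scaling, using the ratio as reconstructed in equation (19) and made-up example values for {b_i} and {p̃_i}:

```python
import numpy as np

b       = np.array([0.9, 1.1, 0.8, 1.2])   # new beamforming magnitudes (example values)
p_tilde = np.array([0.6, 0.9, 0.5, 0.7])   # per-antenna powers of the initial vectors
snr_old_db = 22.0                          # per-stream SNR reported in the feedback (example)

c = np.sum(b * p_tilde) / np.sum(p_tilde)  # re-scaling constant following the ratio in (19)
snr_new_db = snr_old_db + 10 * np.log10(c)
print(snr_new_db)
```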
Embodiments of the present invention may provide new methods for the beamforming design problem under all of the following constraints: per-antenna powers, overall power, and overall EIRP. These methods may enjoy the following desirable features: the resultant SNR may typically be considerably higher than that of the MRC/FPAP+scaling method; the complexity may typically be significantly smaller than that of standard convex optimization algorithms; and the beamforming design block may be seen as modular, meaning that the block for all three constraints may be built from blocks assuming only two of the constraints, which, in turn, are based on the single-constraint MRC/FPAP blocks.
Some embodiments of the present invention may be implemented in software for execution by a processor-based system, for example, beamforming module 112. For example, embodiments of the present
invention may be implemented in code and may be stored on a storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may
include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), rewritable compact disk (CD-RW), and magneto-optical disks,
semiconductor devices such as read-only memories (ROMs), random access memories (RAMs), such as a dynamic RAM (DRAM), erasable programmable read-only memories (EPROMs), flash memories, electrically
erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, including programmable storage devices. Other
implementations of embodiments of the present invention may comprise dedicated, custom, custom made or off the shelf hardware, firmware or a combination thereof.
Embodiments of the present invention may be realized by a system that may include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable
multi-purpose or specific processors or controllers, a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units. Such system may
additionally include other suitable hardware components and/or software components.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art.
It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Cross correlation
June 22nd 2009, 03:46 AM #1
Hi,
I have
x[t_] := (1 - t)*P[t - 0.5] and
P[t] = {1, Abs[t] <= 0.5},
       {0, Abs[t] >= 0.5}
where P represents the rectangle function.
In the attached photo you can see the system.
I need to find the cross-correlation integral (when a = 0) of
Integral: x(t + tau)*y(t) dt, between (-inf, inf)
and because a = 0, y(t) = x(t - T0) = x(t - 4), so
Integral: x(t + tau)*x(t - 4) dt, between (-inf, inf)
But I'm having trouble finding the limits of tau, meaning the limits of the integral.
Hints maybe? Thanks in advance!!
BTW, I'm sorry, but I tried to copy and paste from Mathematica so it would be easier to understand; the forum's editor wouldn't let me present it as it should have been. I've attached another photo; hopefully it will be clearer (again, I'm a newbie, sorry).
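One numerical way to see where the cross-correlation can be non-zero is to evaluate the integral on a grid of tau values; the NumPy sketch below (an added illustration, using the rectangle definition above) does exactly that:

```python
import numpy as np

def x(t):
    # x(t) = (1 - t) * P(t - 0.5), where P is the unit rectangle (1 for |t| <= 0.5)
    return np.where(np.abs(t - 0.5) <= 0.5, 1.0 - t, 0.0)

t = np.linspace(-10, 10, 20001)          # fine time grid
taus = np.linspace(-6, 0, 121)
R = [np.trapz(x(t + tau) * x(t - 4.0), t) for tau in taus]

support = [tau for tau, r in zip(taus, R) if abs(r) > 1e-6]
print(min(support), max(support))        # roughly -5 ... -3
```

The result is non-zero only for tau roughly between -5 and -3, i.e. where the supports of x(t + tau) and x(t - 4) overlap.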
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Please help!!!!
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50b62913e4b0f6bfba553790","timestamp":"2014-04-19T02:00:38Z","content_type":null,"content_length":"35604","record_id":"<urn:uuid:58c8fe2c-a271-4856-be64-8b854de3ff13>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
scatter diagram and Probability
August 9th 2007, 08:23 AM
Please solve the following.
Q#1. The owner of a retailing organization is interested in the relationship between the price at which a commodity is offered for sale and the quantity sold. The following sample data have been collected:
Price: 25, 45, 30, 50, 35, 40, 65, 75, 70, 60
Quantity: 118, 105, 112, 100, 111, 108, 95, 88, 91, 96 (respectively)
a) Plot a scatter diagram for the above data.
b) Using the method of least squares, determine the equation of the estimated regression line. Plot this line on the scatter diagram.
c) Also discuss the regression line.
d) Calculate the standard deviation of regression Sy.x.
Q#2. Two dice are cast: A1 is the event that a 6 appears on at least one die, A2 is the event that a 5 appears on exactly one die, and A3 is the event that the same number appears on both dice.
□ Are A1 and A2 independent?
□ Are A2 and A3 independent?
August 9th 2007, 08:51 AM
I think you'll have to help out a bit. Do you have a book? What sort of background do you have that would bring you to having been assigned these problems?
August 10th 2007, 11:50 AM
Yes, I have a book. I am a BBA student. Please solve Q#2; I will solve Q#1.
Thanks a lot.
August 10th 2007, 12:14 PM
To show that two events are independent you need to show that:
$P(A \wedge B) = P(A)\,P(B)$
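For the dice events in Q#2 above, this condition can also be checked by brute-force enumeration of the 36 equally likely outcomes; a short Python sketch (an added illustration, not part of the original thread):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))            # 36 equally likely rolls
A1 = {o for o in outcomes if 6 in o}                       # at least one 6
A2 = {o for o in outcomes if (o[0] == 5) != (o[1] == 5)}   # exactly one 5
A3 = {o for o in outcomes if o[0] == o[1]}                 # same number on both dice

def P(event):
    return len(event) / len(outcomes)

print(P(A1 & A2), P(A1) * P(A2))   # unequal -> A1 and A2 are not independent
print(P(A2 & A3), P(A2) * P(A3))   # 0 vs positive -> A2 and A3 are not independent
```

Both comparisons come out unequal (the second intersection is even empty), so neither pair of events is independent.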
Two New Variables
Date: 10/25/2002 at 13:33:10
From: Robert Summers
Subject: Two equations with two unknowns
I am trying to calculate using the Owens-Wendt equation. The equation
will give two equations with two unknowns. I am unable to solve
them, because of the x^1/2 and y^1/2 powers.
Two equations:
69.7 = (21.8x)^1/2 + (51.0y)^1/2
43.9 = (49.5x)^1/2 + (1.3y)^1/2
I was thinking of trying to solve by "squaring" each side, but that
will still leave x or y with 1/2 exponents.
Date: 10/25/2002 at 13:44:07
From: Doctor Achilles
Subject: Re: Two equations with two unknowns
Hi Robert,
Thanks for writing to Dr. Math.
I can think of two ways to approach this problem. The first is to
square both sides twice. So for example, on the first equation, you
will end up with:
69.7^2 = 21.8x + 51.0y + 2[(21.8*51.0)xy]^1/2
Then you can subtract 21.8x and 51.0y from both sides:
69.7^2 - 21.8x - 51.0y = 2[(21.8*51.0)xy]^1/2
And then square both sides again. This will fix your problem of
dealing with the square roots of x and y, but then you have the almost
as difficult problem of dealing with x^2 and y^2 in the same equation
as xy.
What I'd recommend trying instead is making up two new variables:
u and z. Define them this way:
u = x^1/2
z = y^1/2
Your first equation can be rewritten as:
69.7 = (21.8^1/2)u + (51.0^1/2)z
And you can re-write your second equation similarly. Then solve for
u and z just as you would for any pair of equations. Then, square u to
find x and square z to find y. Finally, go back and check your
answers, to make sure you didn't lose a negative or something weird
like that.
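A quick numerical check of the suggested substitution, sketched in Python with NumPy (an added illustration; the exchange itself of course expects the work to be done by hand):

```python
import numpy as np

# Linear system in u = x**0.5 and z = y**0.5:
A = np.array([[21.8**0.5, 51.0**0.5],
              [49.5**0.5,  1.3**0.5]])
rhs = np.array([69.7, 43.9])

u, z = np.linalg.solve(A, rhs)
x, y = u**2, z**2
print(x, y)
# Check against the original equations:
print((21.8*x)**0.5 + (51.0*y)**0.5, (49.5*x)**0.5 + (1.3*y)**0.5)
```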
I hope this helps. If you have other questions about this or you're
still stuck, please write back.
- Doctor Achilles, The Math Forum
Date: 10/28/2002 at 13:46:18
From: Robert Summers
Subject: Thank you (Two equations with two unknowns)
As soon as you mentioned substituting u = x^1/2, it all came
back to me. Thank you very much!
The printed form of an integer in HiLog consists of a sequence of digits optionally preceded by a minus sign ('-'). These are normally interpreted as base 10 integers. A different base (between 2 and 36) may be specified by prefixing the digits with the base followed by an apostrophe ('). If a base greater than 10 is used, the letters A-Z or a-z are used to stand for digits greater than 9.
Using these rules, examples of valid integer representations in XSB are:
1 -3456 95359 9'888 16'1FA4 -12'A0 20'
representing respectively the following integers in decimal base:
1 -3456 95359 728 8100 -120 0
Note that the following:
+525 12'2CF4 37'12 20'-23
are not valid integers of XSB.
A base of 0 is used to denote character codes; for example:
0'A = 65
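For reference, the decimal values quoted for the based-integer examples can be verified with any language that parses arbitrary bases, e.g. in Python:

```python
# Quick check of the base'digits examples using Python's int(..., base):
print(int("888", 9))        # 728
print(int("1FA4", 16))      # 8100
print(-int("A0", 12))       # -120
```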
Shawn K.
I graduated from the University of Chicago with a PhD in physics in 2011, specializing in experimental particle physics. My background in teaching includes being a teaching assistant (TA) teaching
basic 1st year undergraduate physics. However, my expertise involves various fields.
My education in physics includes subjects typically taught at an advanced (AP) high school and undergraduate level. These include classical mechanics, electricity and magnetism, and quantum mechanics.
My mathematics education includes calculus at the undergraduate level, both single- and multi-variable, as well as pre-calculus, trigonometry, and algebra. These subjects are a necessary foundation for physics. For
example, the calculus of vector fields is necessary to understand how electric and magnetic fields behave.
My undergraduate degrees are in both physics and astronomy; I graduated from the University of Maryland (College Park) with degrees in both subjects.
Probability and statistics:
Particle physics research is mostly statistical data analysis. Thus, I am thoroughly versed in statistics topics such as fitting, chi-square testing, and hypothesis testing.
After completing my PhD, I worked for two years at a hedge fund in Chicago. There my work was mostly with linear regression and time-series analysis. In addition, I became familiar with basic options mathematics.
Patent US5617030 - Method and apparatus for gradient power supply for a nuclear magnetic resonance tomography apparatus
The problem of fast gradient switching is especially pronounced in the EPI (echo planar imaging) method. This method is therefore explained in brief with reference to FIGS. 1 through 5. According to
FIG. 1, an excitation pulse RF is emitted into the examination subject together with a gradient SS of FIG. 2 in the z-direction. Nuclear spins in a slice of the examination subject are thus excited.
Subsequently, the direction of the gradient SS is inverted, whereby the negative gradient SS cancels the dephasing of the nuclear spins caused by the positive gradient SS.
After the excitation, a phase-encoding gradient PC according to FIG. 3 and a readout gradient RO according to FIG. 4 are activated. The phase-encoding gradient PC is composed of short, individual
pulses ("blips") that are activated at each polarity change of the readout gradient RO. The phase-encoding gradients PC are respectively preceded by a pre-phasing gradient PCV in negative
phase-encoding direction.
The readout gradient RO is activated with periodically changing polarity, as a result of which the nuclear spins are dephased and in turn rephased in alternation. In a single excitation, so many
signals are acquired that the entire Fourier k-space is sampled, i.e. the existing information suffices for reconstructing an entire tomogram. An extremely fast switching of the readout gradient with
high amplitude is required for this purpose; this being virtually incapable of being realized with the square-wave pulses and conventional, controllable gradient amplifiers otherwise usually employed
in MR imaging. A standard solution of the problem is to operate the gradient coil that generates the readout gradients RO in a resonant circuit, so that the readout gradient RO has a sinusoidal form.
The arising nuclear magnetic resonance signals S are sampled in the time domain, digitized, and the numerical values acquired in this way are entered into a raw data matrix. The raw data matrix can
be considered as being a measured data space, a measured data plane given the two-dimensional case of the exemplary embodiment. This measured data space is referred to as k-space in nuclear magnetic
resonance tomography. The position of the measured data in the k-space is schematically illustrated by dots in FIG. 6. The information about the spatial origin of the signal contributions required
for imaging is coded in the phase factors, whereby the relationship between the locus space (i.e., the image) and the k-space exists mathematically via a two-dimensional Fourier transformation. The
following is valid:
$$S(k_x, k_y) = \int\!\!\int \rho(x,y)\, e^{\,i(k_x x + k_y y)}\, dx\, dy$$

The following definitions thereby apply:

$$k_x(t) = \gamma \int_0^t G_x(t')\, dt', \qquad k_y(t) = \gamma \int_0^t G_y(t')\, dt'$$

γ = gyromagnetic ratio
ρ = nuclear spin density
G_x = value of the readout gradient RO
G_y = value of the phase-encoding gradient PC
Extremely high gradient amplitudes are required in the EPI method for the location encoding of the radio-frequency signals. These high gradient amplitudes must be activated and deactivated at short time intervals, so that the required information can be acquired before the nuclear magnetic resonance signal decays. If it is assumed that a pulse duration T of one millisecond is required for a projection (i.e. for an individual signal under an individual pulse of the readout gradient RO), an overall readout time T_acq of 128 ms derives for a 128-line image matrix. If one were to use conventional square-wave pulses having a duration of one millisecond and were to assume a field of view (FOV) of 40 cm, then typical gradient amplitudes G_x for the readout pulse RO for square-wave pulses would be: ##EQU1## Even larger gradient pulses G_T derive for trapezoidal pulses having a rise time of T_rise = 0.5 ms and without readout of the signals on the ramps. ##EQU2##
The demands made on the electric strength of the gradient amplifier in the gradient power supply become increasingly problematical with decreasing rise time. If it is assumed that a current I_max is required for reaching the maximum gradient strength G_max, then the voltage required due to the inductance L of the gradient coil is calculated as: ##EQU3##
The ohmic voltage drop at the gradient coil has not yet been taken into account. For a gradient coil inductance of 1 mH and a maximum current I_max of 200 A, the voltage required at the output of the gradient amplifier would assume the following values dependent on the rise time T_rise of the gradient current:

T_rise = 0.5 ms:  U = 400 V
T_rise = 0.25 ms: U = 500 V
T_rise = 0.1 ms:  U = 2000 V

Without a resonant circuit, these requirements can only be met with significant component outlay given short rise times; typically, at best, by a parallel and series connection of modular gradient amplifiers.
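A small sketch of the voltage-demand estimate behind the table above, assuming the ideal inductive relation U = L·I_max/T_rise with the ohmic drop neglected, so the computed values are only estimates and need not match the table exactly:

```python
L = 1e-3          # gradient coil inductance [H]
I_max = 200.0     # maximum gradient current [A]

for t_rise in (0.5e-3, 0.25e-3, 0.1e-3):
    U = L * I_max / t_rise            # ideal inductive voltage, ohmic drop neglected
    print(f"T_rise = {t_rise*1e3:.2f} ms -> U = {U:.0f} V")
```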
The problem of the short switching times can be more simply solved when the gradient coil in question is operated together with a capacitor in a resonant circuit, whereby a sinusoidal curve of the
readout gradient RO shown, for example, in FIG. 4 is then obtained. A disadvantage, however, is that an equidistant sampling in the k-space is not obtained in the sampling of the signal in temporally
constant intervals, this being indicated in the raw data matrix RD by means of the non-equidistant dots in the k-space illustration of FIG. 6.
As initially mentioned, it is required in many instances to superimpose a constant gradient DC current on the actual gradient pulses, for the purpose of compensating linear inhomogeneity terms of the
basic magnetic field.
FIG. 7 shows an exemplary embodiment of an inventive circuit wherein such offset currents can also be generated in the resonant mode, given an appropriate drive.
The gradient current I_G is controlled by a gradient amplifier GV. This current flows in a series circuit composed of a gradient coil G and a bridge circuit, which is, in turn, composed of four switches T1-T4 with respective free-running (unbiased) diodes D1-D4 connected in parallel. This bridge circuit has one bridge diagonal lying in the current path of the gradient current I_G; a capacitor C lies in the other bridge diagonal. The gradient current I_G is acquired and supplied to the gradient amplifier GV as an actual value. The current acquisition -- as shown in the exemplary embodiment of FIG. 7 -- ensues most simply with a shunt resistor R in the current path.
A reference value for the gradient current I_G is prescribed to the gradient amplifier GV by a gradient control circuit SG. This gradient control circuit SG also controls a driver circuit ST via which the individual switches T1-T4 are driven. The reference value of the current I_G and the switching times for the switches T1-T4 are set dependent on a desired, selectable pulse sequence. An offset current is prescribed for the gradient control circuit SG by means of an offset current control stage SO. The offset current required for the compensation of a linear field inhomogeneity can be determined, for example, with a method as disclosed in U.S. Pat. No. 5,345,178.
FIGS. 8-14 show a switch control sequence and the curve of the gradient current I_G resulting therefrom, as well as the voltage U_C at the capacitor C, without offset current. In FIGS. 10-14, the darker, thicker portions represent times during which a switch is on or closed, or a diode is conducting (forward biased). According to FIG. 10, the switches T1 and T3 are first switched on, so that a linearly rising current I_G flows through the gradient coil G given a constant output voltage at the gradient amplifier GV. At time t_1, switches T1 and T3 are opened. The inductive energy
E_L = (1/2)·L·I_gmax²
is stored in the gradient coil G with the inductance L at this point in time. At time t_1, the diodes D1 and D4 receive the current driven by the gradient coil G, so that the capacitor C is charged. The inductive energy stored in the gradient coil G has been fully transferred to the capacitor C at time t_2, so that, given a capacitance C, this stores the capacitive energy
E_C = (1/2)·C·U_cmax².
At this time, the gradient current I_G drops to zero and the voltage at the capacitor C proceeds to U_cmax. At time t_2, the switches T2, T3 are closed, so that the capacitor C is discharged and the stored energy is supplied to the gradient coil G. A negative current I_G up to a maximum value -I_gmax thereby flows. At time t_3, the entire energy has again been transferred from the capacitor C into the gradient coil G, so that the voltage U_C drops to zero. At time t_3, the free-wheeling diodes D2 and D3 receive the current driven by the gradient coil G, so that the capacitor C is again charged.
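The peak capacitor voltage follows from equating the two stored energies, U_cmax = I_gmax·√(L/C); a brief sketch with the coil inductance and current from the earlier example and an assumed capacitance value:

```python
import numpy as np

L = 1e-3       # gradient coil inductance [H] (value from the example above)
C = 100e-6     # resonant capacitor [F] -- assumed, not given in the text
I_gmax = 200.0 # maximum gradient current [A]

# Equating 0.5*L*I_gmax**2 (coil) with 0.5*C*U_cmax**2 (capacitor):
U_cmax = I_gmax * np.sqrt(L / C)
print(f"U_cmax = {U_cmax:.0f} V")     # peak capacitor voltage after full energy transfer
```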
The illustrated switching cycle is continued, causing a sinusoidal curve of the gradient current I_G to arise due to the charge transfer between capacitor C and gradient coil G. It is important, for the application of an offset current as described below, that the voltage at the capacitor C is unipolar, i.e. that the voltage U_C fluctuates between zero and a positive maximum value U_cmax.
The above illustration proceeded on the basis of a loss-free resonant circuit. Ohmic losses are compensated by the gradient amplifier GV re-supplying the dissipated energy, so that, given deviations between the actual value of the gradient current I_G and the reference value thereof, the gradient amplifier GV supplies a voltage at its output that compensates the energy losses.
The bridge circuit of FIG. 7 also offers the possibility of connecting the gradient coil G directly to the gradient amplifier GV, bypassing the capacitor C, and thus driving a direct current of unlimited duration, which would not be possible in resonant mode. According to FIG. 14, this operating condition is achieved in that the switches T1 and T3 are switched on at time t_7. The previously existing current I_G according to FIG. 8 thus continues to flow constantly, and only the ohmic voltage drop need be compensated with a corresponding voltage at the output of the gradient amplifier GV. Nothing regarding the charge condition of the capacitor C changes in this operating condition.
The possibility of having a direct current of unlimited duration flow through the gradient coil by closing the switches T1 and T3, however, is not suitable for generating the desired offset current, since this direct current cannot be superimposed on the alternating current in the resonant mode.
A drive of the circuit according to FIGS. 15-21, by contrast, is required for generating an offset current I_off. According to FIG. 17, the switches T1 and T3 are again closed first, and a linear current rise is generated at the gradient amplifier GV on the basis of a constant output voltage. Given the same desired current amplitude through the gradient coil G of I_gmax, a charging of the gradient coil up to the current I_gmax + I_off is required here. The switches T1 and T3 are opened at time t_1, so that the current I_G driven by the inductance of the gradient coil G now flows across the capacitor C and charges it. At time t_2, the gradient current I_G has dropped to zero, and the inductive energy of the gradient coil G has been converted into capacitive energy in the capacitor C. Since the gradient coil was charged with a current that was higher by the offset current I_off compared to the exemplary embodiment of FIG. 22, a higher maximum voltage U_cmax is also established. This condition is referenced 1 in FIG. 16.
At time t_2 of the zero-axis crossing of the gradient current I_G, the switches T2 and T3 are closed. The capacitor C is thereby in turn discharged and the gradient current I_G rises; however, the discharge of the capacitor C thereby does not ensue completely, since the switches T2 and T3 are already switched off at time t_3, at which time the capacitor C still has a residual charge, and thus a voltage U_CR. This condition is referenced 2 in the voltage diagram of FIG. 16. The energy stored in the capacitor C at time t_2 is thus not completely transferred into the gradient coil G.
When the switches T2 and T3 are switched off at time t_3, the diodes D2 and D3 become conductive. The current driven by the gradient coil G thereby flows across the capacitor C and again charges it. At time t_4 of the zero-axis crossing (i.e. at point 3 according to the voltage diagram of FIG. 16), the capacitor again has the same charge as at point 1, and thus again has the voltage U_cmax, since the energy taken from the capacitor between t_2 and t_3 is exactly resupplied between t_3 and t_4.
The switches T1 and T4 are switched on at time t_4, so that a rising current is driven through the gradient coil G by the voltage U_C at the capacitor C. Since the same energy is available in the capacitor C at point 3 as at point 1, the current I_gmax + I_off is again achieved in the gradient coil G until the capacitor voltage U_C drops to zero at point 4 according to FIG. 16, at time t_5. The switching cycle thus begins again.
A characteristic of the described drive is that the capacitor voltage U_C is allowed to proceed to zero only at every other half-wave, with a residual voltage U_CR remaining at the half-waves in between before the capacitor C is again charged by the current I_G driven by the gradient coil G.
FIG. 16 shows that with the described drive a gradient current I_G is set that is composed of a constant offset current I_off as its constant part and a superimposed alternating-current part that has an approximately sinusoidal shape.
It is also possible in this operating case to connect the gradient coil directly to the gradient amplifier GV by switching on the switches T1 and T3 of FIG. 17 in, for example, a time interval between t_7 and t_8. This permits an arbitrarily long direct current flow independently of the sinusoidal current and the offset current.
Given the operation with offset current shown herein, the gradient amplifier GV, due to a deviation which may occur between the reference value and the actual value of the gradient current I_G, supplies a voltage at its output that acts in addition to the resonant circuit, for example for compensating ohmic losses.
FIGS. 22-28 show current and voltage curves as well as switching states for the case wherein the switching of the switches T1-T4 does not ensue at the point in time of the zero-axis crossing of the gradient current I_G, but instead when the gradient current I_G reaches the offset current I_off, i.e. when it would pass through zero without taking the offset current I_off into consideration. Given direct drive by a gradient current control circuit, this can be simpler, since the actual zero-axis crossing need not be acquired.
The charging event by the switches T1 and T3 and the transfer of the current by the diodes D1 and D4 ensue as in the exemplary embodiment set forth before. The switches T2 and T3, however, are switched on somewhat earlier and the switches T1 and T4 are switched on somewhat later than in the above-described exemplary embodiment. This leads to the fact that the maximum voltage in every other half-wave (i.e. at point 3 in the voltage diagram of FIG. 23) is somewhat lower than in the other half-waves (i.e. at point 1 in the exemplary embodiment). As in the preceding exemplary embodiment, the turn-off time of the transistors T2 and T3 is selected such that the voltage U_C at the capacitor C does not proceed entirely to zero in every other half-wave (i.e., for example, at points 2 and 5 according to the voltage diagram of FIG. 23). The setting of an offset current I_off is thus also possible with this drive.
With the described arrangement, one thus surprisingly succeeds in setting a constant DC part of the gradient current I_G, i.e. an offset current I_off, even in resonant operation of a gradient coil. Linear inhomogeneity terms in the three spatial directions of a magnet can thus be corrected by means of the gradient coils. Separate shim coils are not required for this compensation. The amplitude of the offset current is in fact limited, since the capacitor C must also be discharged to a certain extent in every other half-wave (i.e., at points 2 and 5 according to FIGS. 16 and 23). In practice, however, the offset currents are far lower than the gradient pulses, so that the range of adjustment is completely adequate for the offset current I_off.
Although modifications and changes may be suggested by those skilled in the art, it is the intention of the inventors to embody within the patent warranted hereon all changes and modifications as
reasonably and properly come within the scope of their contribution to the art.
FIGS. 1-5 illustrate a known EPI sequence for explaining the problem which is solved by the present invention.
FIG. 6 shows the position of the sampled signals in the k-space given a sequence according to FIGS. 1-5.
FIG. 7 shows an exemplary embodiment of a circuit arrangement constructed in accordance with the principles of the present invention.
FIG. 8 shows the curve of the current I_G in the gradient coil G in the circuit of FIG. 7 without the offset current.
FIG. 9 shows the corresponding curve of the voltage U_C at the capacitor C in the circuit of FIG. 7 without the offset current.
FIGS. 10-14 respectively show the corresponding switching times for the switches T1-T4 and the diodes D1-D4 in the circuit of FIG. 7 without the offset current.
FIG. 15 shows the curve of the current I_G in the gradient coil G in the circuit of FIG. 7 with the offset current I_OFF.
FIG. 16 shows the corresponding curve of the voltage U_C at the capacitor C.
FIGS. 17-21 respectively show the corresponding switching times of the switches T1-T4 and of the diodes D1-D4 in the circuit of FIG. 7 with the offset current I_OFF.
FIG. 22 shows the curve of the current I_G for the switches being switched when the current I_G has reached the offset current I_OFF.
FIG. 23 shows the corresponding curve of the voltage U_C for the switches being switched when the current I_G has reached the offset current I_OFF.
FIGS. 24-28 respectively show the corresponding switching times of the switches T1-T4 and of the diodes D1-D4 for the switches being switched when the current I_G has reached the offset current I_OFF.
1. Field of the Invention
The present invention is directed to a method and apparatus for gradient power supply in a magnetic resonance imaging apparatus, for producing a gradient offset current.
2. Description of the Prior Art
U.S. Pat. No. 5,245,287 discloses a gradient power supply for a nuclear magnetic resonance tomography apparatus wherein the gradient coil is operated in a resonant circuit. Fast changes in current can be realized by the resonant operation of the gradient coil; these could not be realized with only a linear gradient amplifier, or could be realized therewith only with great outlay.
A setting possibility for shim currents is often provided in nuclear magnetic resonance tomography apparatus in order to improve the homogeneity. For example, this can be required before every measurement given high demands. Linear inhomogeneity terms can thereby be simply compensated in that a constant offset current is supplied to the gradient coils present for all three spatial directions, in addition to the gradient current that is predetermined by a sequence controller. This does not seem possible given operation of the gradient coil in a resonant circuit since, of course, the DC current cannot be conducted over a capacitor.
It is an object of the present invention to provide a method and an apparatus for gradient power supply wherein an offset current can also be set in resonant mode of the gradient coil.
The above object is achieved in a method and apparatus constructed and operating in accordance with the principles of the present invention wherein a bridge circuit is provided, having four switches,
each switch being bridged by an unbiased diode and the bridge circuit having a first bridge diagonal and a second bridge diagonal containing a capacitor, a gradient coil being connected to the output
of a controllable gradient amplifier through the first bridge diagonal, the capacitor and gradient coil thereby forming a resonant circuit. The four switches are operated in a sequence for producing
an alternating current through the gradient coil as a gradient current. The gradient current is conducted across the capacitor and periodically charges and discharges the capacitor with the same
voltage polarity. An offset current is generated through the gradient coil, superimposed on the gradient current, by operating the four switches so as to incompletely discharge the capacitor at every
other half wave of the gradient current. The incomplete discharge of the capacitor produces a voltage at the gradient coil. The gradient amplifier is operated to compensate for an ohmic voltage drop
across the gradient coil and the bridge circuit, as well as to compensate for the voltage at the gradient coil produced by incompletely discharging the capacitor.
The offset current is preferably generated at a level which compensates for the linear terms of an inhomogeneity in the static magnetic field of the magnetic resonance tomography apparatus.
Algebra: is this right?
1. 5*3 - 20 = -5 2. right 3. (3x-12)/2 need parenthesis 4. –3(x – 3) + 8(x – 3) = x – 9 -3x+9+8x-24=x-9 4x = -42 x = -21/2 or - 10 1/2 5. right 6. right 7. right 8. right 9. right 10. right 1/2 = -1/
Sunday, December 23, 2007 at 1:48pm by Damon
algebra write word phrase
1. wrong + is not a multiplication sign 2. right 3. wrong 91, not 9.1 4. wrong h less than 6.5 5. right 6. right 7. right 8. right
Monday, October 10, 2011 at 11:29am by drwls
College Algebra
Thank you very much. I'm not doing very good understanding this algebra right now so I will definately have other questions tonight. I'm working on getting a tutor here in my town because right after
this class is over Sunday, I go into Algebra two.
Wednesday, November 18, 2009 at 7:41pm by LeAnn/Please help me
algebra w o r d Problem
so i answered all 3 questions right. i wasnt sure about when it said estimate the number of things. but i did this right? right?
Thursday, October 13, 2011 at 8:21pm by marko
score<10*right-4(50-right) 350<right(10+4) -50 300<right(14) solve for right right>300/14 right>21 right=22 check all that.
Tuesday, October 4, 2011 at 4:31pm by bobpursley
Algebra 2
Here is an Algebra two question that confuses me. Question:: What are the left and right behaviors of f(x)=-3x^5+x^4-9x a.) Rises both to the right and to the left b.)Falls both to the right and left
c.)Cannot be determined d.)Rises to the right and falls to the left e.)Rises ...
Tuesday, November 6, 2007 at 8:43pm by Jazz
Algebra- is my answer right
disagree on the first one -x^3 = -x (x)^2 x^2 is always + -x is positive left and negative right rises to left, falls to right
Thursday, February 3, 2011 at 7:01pm by Damon
grammar..subjects and predicates
1. When a person's entire name is listed, then the entire name is treated as ONE noun. Everything else here is right. 2. right 3. right 4. right 5. Rethink the CS; the rest is right. 6. right 7.
right 8. What will you do with "Unfortunately"? And "never" is an adverb; the rest...
Wednesday, January 30, 2008 at 2:38am by Writeacher
Solve the puzzle =RIGHT(LEFT($B$2,84),1) =RIGHT(LEFT($B$2,17),1) =RIGHT(LEFT($B$2,138),1) =RIGHT(LEFT($B$2,2),1) =RIGHT(LEFT($B$2,234),1) =RIGHT(LEFT($B$2,138),1) =RIGHT(LEFT($B$2,4),1) =RIGHT(LEFT
($B$2,173),1) =RIGHT(LEFT($B$2,2),1) =RIGHT(LEFT($B$2,209),1) =RIGHT(LEFT($B$2,4...
Friday, November 9, 2012 at 8:40pm by Suzanne
Right. You can prove it's right by substituting 4 for x in your original equation.
Sunday, February 1, 2009 at 8:51pm by Ms. Sue
Algebra- is my answer right
Thanks, I see what I did, I squared when adding to the right side
Tuesday, February 1, 2011 at 7:28pm by Jen
Since (0,9) is on the line, you know that it will be "+9" in the function So, what + 9 = 5? -4, right? So, what times 2 = -4? -2, right? y = -2x+9
Friday, November 8, 2013 at 6:39pm by Steve
False. It would be a right triangle if c were 52. a^2 + b^2 = c^2 for right triangles.
Tuesday, May 4, 2010 at 3:55pm by drwls
algebra am i right
John has 75 books in his store. If he originally had 200 books, what percent of the original number of books remains ? 37.5 % am i right? 75/200=.375=37.5% Yep, you're right yes yes its right
Wednesday, September 27, 2006 at 11:49am by cassii
yea, reiny is right ......the answer is 9/sqrt(2)....the procedure is also right.....
Wednesday, May 8, 2013 at 8:03pm by sayan chaudhuri
drwls is right - Algebra
of course you are right, x = -5 works. don't know what I was thinking!
Wednesday, January 21, 2009 at 8:18am by Reiny
i did the work on them...they werent right...i just want to know if im right after i try to solve the problem...thanks
Tuesday, February 9, 2010 at 9:21am by Dawn
Okay that looks right :) Could tell me your steps though? I can see that its right, I just don't really know how you got there.
Thursday, February 16, 2012 at 10:07pm by Betta
college Algebra
Reiny is right, this formula can't be right. Raising it to the power of x makes more sense.
Monday, November 4, 2013 at 4:32pm by Erin
College Algebra
yes, you are right, For these kind of problems why don't you just expand your answer using FOIL to check if you are right?
Sunday, November 18, 2007 at 10:04pm by Reiny
algebra II
i agree and drwls is right.and person is right they teach us and show is harder and confussing than is.
Sunday, January 4, 2009 at 12:09am by sherri
A. Please do the multiplication again. You're close but not right. B. I mistyped the formula above. It should be: A = 3.14*2*2 C. You're right! :-)
Monday, September 17, 2007 at 5:28pm by Ms. Sue
Algebra 1
Graph the equation and identify the y intercept y= 3/2x -4 Would this be the right answer? (-4,0) 3/2 three to the right and 2 up, therefore 3,2?. Thank you.
Monday, January 3, 2011 at 1:48pm by Esther
Hello Reiny, Thanks for the help however it's a multiple choice and ther is non like that. I have: A. 1/4 to the left of the e symbol. 5 on top n=1 on the bottom and n to the right. b. 1/4 to the
left, 6 on top, n to the right and n=1 on the bottom. c. 5 on the top, n=1 on the...
Tuesday, April 9, 2013 at 6:37pm by Anonymous
Not sure on a few of my answers, can you help? 1. Find the slope and y intercept : f(x)=-5x-7 slope = -5 intercept = (0,-7). Is this right? -1/2x=-5/6 my answer was 5/3 was this right? If I have
these two sets of coordinate (-4,0) and (0,2) will my slope be -1/2? Same question...
Friday, January 8, 2010 at 10:06pm by HGO
algebra writeing equations
1. is right. 2. The cost per ticket is $2.83 I think you can find a better inequality. 3. right
Sunday, November 6, 2011 at 6:07pm by Ms. Sue
next definit. please revise
7. Class 8. Probably social construction of race 9. Right 10. Right 11. Right 12. Subordinate group 13. Right 14. Right 15. Right
Thursday, June 12, 2008 at 11:08pm by Ms. Sue
well you got it right in the second try good job thats what we just studied so yes thats correct y=x-4 even though it looks different its right
Sunday, May 4, 2008 at 5:26pm by Kaylee
I meant youd go up 4 and over 2 looking at graph would be on right side. Was just making sure I was doing it right.
Saturday, November 3, 2012 at 11:20am by Lee
X=2 Is that right Ms. Sue? I will spell Algebra correctly from now on thanks for your help.
Tuesday, March 26, 2013 at 5:03pm by Eric
Which of the following describes the end behavior of the graph of the function f(x) = –2x^5 – x^3 + x – 5? A. Downward to the left and upward to the right B.Upward to the left and downward to the
right C. Downward to the left and downward to the right D. Upward to the left and...
Tuesday, July 21, 2009 at 10:52am by Rachale
solve 8x -(5x +4)= 5 The solution is x= 8 is this right
Saturday, September 12, 2009 at 1:13pm by algebra yuck
4th grade algebra
i need help right know in algebra
Friday, November 12, 2010 at 11:03am by dianna
yes 2/7 is right - i get now so r/6 = 8 i got r = 48 is correct right? 2b/9 = 4 i got: b = 13 right? 3y = 4/5 y= 4/15 right? 5g = 5/6 got : g = 1/6 right? 3k = 1/9 i got: k = 1/27 right? 3x/5 = 6 i
got x = 10 right?
Monday, January 23, 2012 at 9:04pm by marko
A fundamental right is a right that has its origin in a country's constitution or that is necessarily implied from the terms of that constitution. These fundamental rights usually encompass those
rights considered natural human rights. Some rights generally recognized as ...
Sunday, July 20, 2008 at 3:18am by Anonymous
Which of the following describes the end behavior of the graph of the function f(x) = –x6 – 3x4 + 7x – 5? a. downward to the left and upward to the right b.upward to the left and downward to the
right c.downward to the left and downward to the right d.upward to the left and ...
Monday, November 15, 2010 at 5:12pm by Math is TOUGH
science(CHECK ANSWERS)
1. wrong, C is right 2. right 3. VERY WRONG!!!! C is Right 4. right 8. right 9. right 10. right
Friday, January 10, 2014 at 2:33pm by Damon
The intercept would be 5 right?
Saturday, September 12, 2009 at 4:46pm by algebra yuck
Given a right triangle whose side lengths are integral multiples of 7, how many units are in the smallest possible perimeter
Thursday, January 20, 2011 at 5:15pm by algebra
Algebra- is my answer right
Choose the end behavior of the graph of each polynomial function. A f(x)= -5x^3-4x^2+8x+5 B f(x)= -4x^6+6x^4-6x^3-2x^2 C f(x)= 2x(x-1)^2(x+3) A= falls to the left and rises to the right B= Falls to
the left and right C=Rises to the right and left.
Thursday, February 3, 2011 at 7:01pm by Rachal
How do I identify an exponential growth/decay function on a graph? I'm assuming that if the curve moves from left to right, upwards, it is growth, and if it moves left to right, downwards, then it is
decay. Please tell me whether this reasoning is right or wrong.
Thursday, January 30, 2014 at 5:04pm by Anonymous
33 & 1/3 of $90 = $21.00 -- Wrong 100% of 500 = $500 -- Right 1% of 500 = $5.00 -- Right 50% of 70 = $35.00 -- Right 100% of 70 = $70.00 -- Right
Sunday, October 25, 2009 at 10:35pm by Ms. Sue
OH i get it, ah alright. Now to get y=x-9 i just plug in x=7 right? and would leave me with y= -2 right? and it would look like this y=7-9 and y=-2
Monday, January 14, 2008 at 4:39pm by Chris
NEVERMIND your somewhat right,yet wrong answer helped me and I got the right answer.
Thursday, April 17, 2008 at 8:05pm by Katie
is 5x + 2y = 8 look like horizontal, vertical, slanted right upwards,slanted right downwards?
Tuesday, March 26, 2013 at 12:48am by jr
Algebra- is my answer right
Simplify. (2b^-4 c^6)^5 I get b^20/32c^30 Is that right? Write your answer using only positive exponents.
Tuesday, February 1, 2011 at 8:02pm by Jen
intermidate algebra
I believe you are right. WHatever you do to one side of the equation, you have to do to the other, and it seems like you followed that algebra rule correctly.
Tuesday, May 18, 2010 at 9:54pm by Anonymous
First, I really stink @ Algebra. How do combination of signed numbers work? I keeping getting 6 but for some reason I don't think its right. -1-(-6+-1) -1-6+1 -5+1=6
Sunday, March 20, 2011 at 12:41pm by Jeannie
That does not come out right.
Friday, September 11, 2009 at 9:44pm by algebra
Which of the following describes the end behavior of the graph of the function f(x) = –2x^5 – x^3 + x – 5? A. Downward to the left and upward to the right B. Upward to the left and downward to the
right C. Downward to the left and downward to the right D. Upward to the left ...
Monday, July 20, 2009 at 4:14pm by Audrey
6th grade grammar
1. Right. (After is not a preposition in this sentence.) 2. Right. BUT -- there's another prepositional phrase in this sentence. 3. Right. 4. Before is not a preposition in this sentence. The other
phrase is correct. 5. Right. However, up is not a preposition in this sentence...
Tuesday, October 1, 2013 at 8:18pm by Ms. Sue
my teacher gave it to me an i strictly copied an pasted. i think that i have to solve the equation using algebra.by finding the value for A,plug in in an so forth..is that right???
Sunday, January 27, 2008 at 6:46pm by Ash
my teacher gave it to me an i strictly copied an pasted. i think that i have to solve the equation using algebra.by finding the value for A,plug in in an so forth..is that right???
Sunday, January 27, 2008 at 6:46pm by Ash
If it is a right triangle, you can use the c^2=a^2 + b^2. But if it is not a right triangle, it cannot be determined.
Monday, May 4, 2009 at 9:46pm by bobpursley
answer to this problem is: (x+6)(x+2) **you may want to learn how to factor, it is very easy to do so, but it may catch up with you in the long run bc you do alot of factoring in algebra 2...i know!
im currently in it right now:/
Tuesday, January 11, 2011 at 3:09pm by AHS
College Algebra
Choose the end behavior of the polynomial function. f(x)= -5x^5-9x^2+6x+1. I got left end up right end down. Is this right? Thanks
Saturday, January 22, 2011 at 6:56am by Valerie
Bob has balanced what I suspect is an incorrect equation incorrectly. Check it out. 4 Na on left and right. OK. 2 S on left and right. OK. 12 O on left but 10 on right. not ok. 6 H on left and 4 on
right. not ok. I think you (Amanda) must have made a typo in the problem. Are ...
Thursday, January 3, 2008 at 6:40pm by DrBob222
Can anyone help me to understand algebra? Right now I am working on Systems of Linear Equations in Two Variables and I do not understand any of it. Thank you.
Thursday, March 31, 2011 at 2:15pm by patiance
math 116 algebra 1A
can someon help please. why is it important that you follow the steps rather than solve the problem from left to right?I have no clue. this is my first time taking algebra
Thursday, September 17, 2009 at 10:44am by Lea
Thank you Christina.. I am taking Algebra online...something I would never recommend....How would I write the equation of the line passing through (6,37) and (1,12)? the most common way is to use y=
mx + b where m is the slope and b is the y-intercept so the first thing you ...
Friday, July 27, 2007 at 3:11pm by Misty
Algebra 1: 8th Grade
6) is wrong too (I didn't get any further). 6) 4^11/4^8*4^-2 = 4^5 PEMDAS (Please excuse my dear aunt sally). Parentheses exponentiation multiplication & division (left to right) addition &
subtraction (left to right) Because multiplication and division have the same ...
Tuesday, April 28, 2009 at 12:35am by RickP
Now about yesterday : # They're saying 2 is right. I'm so confused. # Algebra - Damon, Sunday, January 25, 2009 at 6:34am MINE: 3y-18=-6y 3y+6y-18=-6y+6y 9y-18 BUT continue with right 9 y - 18 = 0 9y
/9 = -18/9 BUT then add 18 both sides 9 y=18 y=-2 BUT I GET then y = 2
Sunday, January 25, 2009 at 8:55pm by Damon
ALGebra Simplify
I cant figure this out 6 /2 * 8-8 /2 i got 6/2 * 8-8 /2 3*8 4 24 -4 = 20 is this right way? 29 - (3+5) *2 +3 i got 16 is that right? 5-[6+3(-40/8) + (-2)exponent 2 i got 9 x5 = 45 + 4 = 5 -49 = -44
is that correct?
Tuesday, October 11, 2011 at 7:50pm by Marko
abnormal psych
I will only comment on answers I know, especially since I haven't done clinical work in decades. Also, it would make it easier for us to respond, if the answers were given with the question rather
than at the end. 44. He seems to be ignoring the variability among patients. 45...
Tuesday, November 27, 2012 at 1:11pm by PsyDAG
Algebra II
Solve each equation. Check for extraneous solutions. Solve:|x+5|=3x-7 My answers were: x=6 and x=.5 I dont think they are right though, but I should get 2 answers right???....Help!
Tuesday, February 8, 2011 at 7:12pm by Attalah
pre-algebra-is my answer right
Rewrite the expression without using a negative exponent. 4z^-4 Simplify your answer as much as possible. is this right? (1/4z^4)
Wednesday, February 9, 2011 at 4:19am by Rachal
Math (Algebra/Pre-Algebra)
Ok I think I get it... So it would be D then? Because if you got more people in the stands you would have less empty seats. Right?
Thursday, June 4, 2009 at 9:09pm by Samantha
woops. this one, if I'm seeing it right... you would have (7-2x^4)(7-2x^4), and then you have to use the "FOIL" method...right?
Thursday, February 7, 2008 at 7:14pm by DanH
The chemistry is right. It's an algebra problem now. Solve for X. Chopsticks is right. Expand the denominator by using FOIL, gather terms and solve the quadratic equation.
Thursday, February 5, 2009 at 4:30pm by DrBob222
Pre-Algebra [repost]
Oh, you want whole number solutions? The slope of this line is 4/3 so from your first point (3,2) go right 3 and up 4 (6,6) again go right 3 and up 4 (9,10)
Thursday, May 7, 2009 at 5:01pm by Damon
I have two problems that I need some help with. I believe I got to the answer but not quite sure if its right. I need to expand and evaluate each series. 1. There is an E like symbol that has a 7 at
the top. On the bottom k=1 and on the right side (-1)^k K 2. There is an E ...
Monday, April 1, 2013 at 12:08pm by John
Algebra- is my answer right
The one-to-one function f is defined by f(x)=(4x-1)/(x+7). Find f^-1, the inverse of f. Then, give the domain and range of f^-1 using interval notation. f^-1(x)= Domain (f^-1)= Range (f^-1)= Any help
is greatly appreciated. Algebra - helper, Wednesday, February 2, 2011 at 7:...
Wednesday, February 2, 2011 at 8:24pm by Rachal
Okay. I went ahead and did this. I found out that with a negative slope the line goes down from left to right, and with a positive slope it goes up from left to right. Thanks for the help.
Sunday, April 18, 2010 at 10:36am by Carrie
Algebra (College)
I need to divide this polynomial synthetically. I think I did it right, but can someone confirm if my answer is right or wrong? Polynomial: x^4 - x^3 + x^2 - x + 2 Divisor: x-2 My answer: x^3 + x^2 +
3x + 5 + 12/(x-2) (remainder 12)
Sunday, April 18, 2010 at 11:08pm by Sarah
1. right 2. right 3. of those answers - right 4. nope - look at the word "like" 5. ( I don't know this poem) 6. right 7. right 8. right 9. a tree/ whose hun/ gry mouth/ is prest/ 10. Remember poetry
was not written at first... but memorized. 11. right 12. right 13.remember ...
Wednesday, May 28, 2008 at 3:56pm by GuruBlue
When the exponent is positive, you will need to move the decimal point to the right. In this case, 10^7 means you need to move the decimal point 7 places to the right of 3.4, giving 34000000 as the correct answer.
Tuesday, July 21, 2009 at 11:35pm by MathMate
I don't know if this is right but this is what I came up with. f^-1=(-7x+1)/(x-4) domain f(^-1)=(-inf,-7)U(-7,inf) range f(^-1)=(-inf,4)U(4,inf) Let me know if it looks right. Thanks
Wednesday, February 2, 2011 at 6:27pm by Rachal
Algebra -- Ms. Sue can you help
I don't understand your first two problems. The next is right. (s/4)- 8 = -5 The last is also right.
Tuesday, November 8, 2011 at 1:19pm by Ms. Sue
solve (x+21)/(x+3) < 2. (x+21)/(x+3) - 2(x+3)/(x+3) < 0, (x+21-2(x-3))/(x+3) < 0, answer (15-x)/(x-3) < 0 (is this right?) For Further Reading Algebra - Anonymous, Thursday, April 3, 2008 at 5:07pm You are close.
They want you to find the values of x so that the equation the...
Friday, April 4, 2008 at 4:05pm by Cynthia
3y+6y-18=-6y+6y Right here you added 6 y to both sides to make the right hand side of the equation zero. You still have the equal sign and the zero that results from -6y+6y so on the right you have
to include that =0 The whole thing with algebra is you work with both sides of ...
Sunday, January 25, 2009 at 8:55pm by Damon
Algebra II
ok now this is an algebra 2 problem... I have gotten far down to here: e^(ix) = (4i +/- 3)/5. Now how do I solve for x? I feel really stupid now as this is algebra 2... I can't use the natural log, right, or the
common log, as ln( e^(ix) ) does not equal ix, so what do I do?
Tuesday, July 20, 2010 at 9:47pm by Kate
math 116 algebra 1A
The language of Algebra is precise, and built into it is an expectation that the user will follow the order of operations rules. Going from left to right is not in those rules, and can lead to
erroneous results.
Thursday, September 17, 2009 at 10:44am by bobpursley
Algebra II
Evaluate: ƒ(2) + 5 It says to use this graph to solve the equation: http://i55.tinypic.com/2qdvfur.jpg Right now we're on Algebra and Composition with Functions. I have no idea what I'm
supposed to do, and there are no instructions for any of these problems in the...
Sunday, September 18, 2011 at 8:06pm by Matt
(8a + 8b)/(12c + 12d) factor to (a + b)/(c + d) Is this right? No, what happened to the 8/12? would it be 2(4a + 4b)/(6(2c + 2d)) Is this right??? No. what about... let's see,
8/12 reduces to 2/3, so 2(a+b)/3(c+d)
Tuesday, January 30, 2007 at 8:10am by deborah
1/3x > 2 and 1/4x > 2 both are less than one so how do i graph this on the number line? I'm confused. I will assume you meant (1/3)x>2 and not 1/(3x)>2 With that assumption, you would get x>6 AND x>8
x>6 is a line to the right of 6, excluding the 6 x>8 ...
Monday, August 20, 2007 at 5:22pm by Shawn - tricky question
mrs SUe Algebra check
I cant figure this out 6 /2 * 8-8 /2 +4 i got 6/2 * 8-8 /2 3*8 4 24 -4 = 20 is this right way? 29 - (3+5) *2 +3 i got 16 is that right? 5-[6+3(-40/8) + (-2)exponent 2 i got 9 x5 = 45 + 4 = 5 -49 =
-44 is that correct? [-6*(2-4)]/(-2)-(-9) i got -1.71
Tuesday, October 11, 2011 at 9:06pm by marko
Algebra 1
8.A number raised to a negative exponent is negative. Always, never , or sometimes. 7. Evaluate 1/2a^-4b^2 a = -2 and b = 4. 6. Simplify. Write in scientific notation. (9 x 10^-3)^2 Thanks to anyone
who helps right away and need right answer from the people who have already ...
Monday, February 4, 2013 at 5:41pm by Dave
Is this RIGHT or wrong ?
I have no idea which is the RIGHT answer. Use the figure your text states. And please learn to spell RIGHT. Your answers are RIGHT, but you WRITE words.
Tuesday, September 27, 2011 at 3:57pm by Ms. Sue
College Algebra
It does not happen with equations because, with an equation, there is no sense in which one side is to the right or left of the other. To convince yourself of how this works, draw a number line. Mark 3 on it and 4 on
it. Then no doubt 3 is to the left, less than 4, so 3<4. Now multiply both ...
Monday, February 22, 2010 at 12:54pm by Damon
ms.sue 5 grade math
1. You left something out of the problem. How many cups are in each serving of soup? 2. No. 3. Right. 4. No. 5. No. 6. Right. 7. No. 8. Right. 9. Right. 10. Right.
Tuesday, March 27, 2012 at 7:28pm by Ms. Sue
algebra II
multiplication of polynomials by polynomials. 2^(n-3) b^(2n-2) (b^2 - 2^(5-n) b) I get 2^(n-3) b^(2n) - 4^2 b^(2n-1) but that is not right. I am stuck on the -4^2 part only. The rest is right.
Tuesday, September 2, 2008 at 8:30pm by mike
Algebra- is my answer right
The function g is defined by g(x)=3x^2+6. Find g(5y) Algebra - drwls, Friday, February 4, 2011 at 11:07am 3*(5y)^2 + 6 = __? Algebra - Janet, Friday, February 4, 2011 at 3:56pm I get 75y+6
Friday, February 4, 2011 at 4:56pm by Janet
5y + y = 26.4 6y = 26.4 y = 26.4/6 y = 4.4 ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Let's see if that's right. 4.4 + (5*4.4) = 26.4 4.4 + 22 = 26.4 Right!
Thursday, September 13, 2012 at 5:23pm by Ms. Sue
Area of a square find a polynominal A(X) that represents the area of the shaded region u have a square with an arrow pointing left and right at top of the square with x @ the top and a arrow up and
down on the right side of the square.The #3 on the right and left of the square...
Sunday, January 24, 2010 at 8:23pm by Tanya
Aren't both terms negative? I was going to say the answer was B. Upward to the left and downward to the right but you make it sound like A.Downward to the left and upward to the right
Thursday, July 23, 2009 at 9:53pm by Crystal
Algebra 1A
You're correct, but draw them on the number line to make sure you understand the answer. -∞----+0------+5---------+7.5----------+∞ Numbers on the right are greater than the numbers on the left. So if
7.5 is situated on the right of 5, then it is greater than 5.
Thursday, October 8, 2009 at 1:54pm by MathMate
I need to evaluate the following expression. Can anyone tell me if I am right on this or on the right track? Thanks 6(x+3) - 4(x+2) = (6x+18) - (4x+8) = 6x + 18 - 4x - 8 = 2x + 10
Thursday, January 6, 2011 at 8:12am by Mary
Math education: you’re doing it wrong
Recent discussion about the problems with our educational system reminded me about the story of why my friend J almost didn’t make it into his high school honors math class. Now, to clarify, J is
easily one of the smartest people I know. But he is also a smartass, and thirteen-year-old J was certainly no different.
On the entrance exam for his honors math class, several of the problems asked you to fill in the next number in the sequence, such as: 2, 4, 8, 16, _?_. Obviously, whoever wrote the exam wanted you
to complete that sequence with “32,” because the pattern they’re thinking of is powers of 2. For n = 1, 2, 3, 4, 5, the formula 2^n = 2, 4, 8, 16, 32. But J didn’t write “32.” He wrote “π.”
When his teacher marked that problem wrong (as well as all of the other sequence questions, which J had answered in similar fashion), J explained that there are literally an infinite number of
numbers that could complete that sequence, because there are an infinite number of curves which go through the points (1, 2), (2, 4), (3, 8), and (4, 16). Sure, he said, one of those curves is the
obvious one which also goes through (5, 32), but you can also derive a curve which goes through (5, π). He showed her an example:
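One curve that does the trick (an illustrative stand-in, not necessarily the exact formula J used) is f(x) = 2^x + (pi - 32)(x - 1)(x - 2)(x - 3)(x - 4)/24, since the correction term vanishes at x = 1, 2, 3, 4 and moves x = 5 to pi. A quick check:

```python
import math

def f(x):
    # 2^x plus a correction that vanishes at x = 1, 2, 3, 4 and shifts x = 5 to pi
    return 2**x + (math.pi - 32) * (x - 1) * (x - 2) * (x - 3) * (x - 4) / 24

print([round(f(x), 6) for x in range(1, 6)])   # [2.0, 4.0, 8.0, 16.0, 3.141593]
```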
As you can see if you try plugging in the numbers 1, 2, 3, 4, and 5, to the equation above, you get the sequence 2, 4, 8, 16, π. Here are the two curves plotted on a graph, both the “correct” curve
and J’s smartass curve (hat tip to the mathematician at www.askamathematician.com for graphing this for me in Mathematica):
Anyway, after thirteen-year-old J explained the math behind his unconventional, but admittedly accurate, answer to the original problem, his teacher replied, “Oh come on, you knew what it was asking
for!” and refused to give him any credit. I can’t think of a better illustration of the triumph of the stick-to-the-book method of teaching over kids’ innate creativity… or of the triumph of math
education over actual math skills.
34 Responses to Math education: you’re doing it wrong
1. How absurd. If the teacher wants to say “Oh come on, you knew what it was asking for!” it’s admitting that J understands already! Not only does he understand her question and the “right” answer,
he gave another answer to show how the question was poorly asked.
He should have gotten extra credit, not zero.
*re-aggravates my longstanding frustration with poor education systems*
2. Very similar to the smartass method of estimating the height of a tower using a barometer: drop it from the top, count how many seconds it takes to hit the ground, and solve d = 16t^2. Again,
“Hey, Teach — this problem has such an obvious answer that it’s boring! Please let me have some fun with it!” As for the specific answer to the question “How can we fill in the blank: 2,4,8,16,
___ ?” — uh, wouldn’t Occam’s Razor impel you toward '32'?
□ Or does this example point out the limitations of Occam’s Razor?
☆ If climate scientists take temperature or CO2 data that shows an exponential increase, and they fit a curve that takes a sharp downturn, they better have a good reason for it.
☆ @Max: Actually, if climatologists fit data to an exponential curve that goes to infinity, they will also need a good reason such as “if we ignore…” or “in an idealized world…” Exponential
growth in the physical world always either tapers off or “hits a wall” (showing a sharp downturn) due to resource depletion, negative feedbacks, and so on. A model that doesn’t include
such a mechanism is bound to be unreliable for long-term predictions. Just one more issue that should receive more attention from educators in several different subjects. How many
students finish Econ 101 still believing 3% growth is something that can potentially continue forever?
3. I was one of those annoying students in HS physics classes, ha. I was told that a big part of what students are supposed to learn in high school is actually how to follow the rules around them,
for future success in society. :(
4. IMO by J’s standards, his response ought to be something to the effect of “not enough information”, then no? By his own admission, π would only be one possible answer, not really *the* answer.
Furthermore, if he sincerely was interested in finding the right answer, he also presumably could have asked the teacher for clarification instead of being confrontational. Just sayin’
PS I do agree that the US educational system is much too heavy on emphasizing blind obedience to authority and not respecting the opinions of students.
5. What’s truly alarming is that this was the entrance exam for an “honors math class,” since it’s obvious that the teachers couldn’t appreciate the caliber of the 13-year old mind they were dealing
with. It would be kind of like blackballing a student from enrolling in an “Ancient Studies Honors” class because he answered all the questions on the entrance test in Greek or Latin.
6. That is not the problem with today’s math curriculum..I can assure you
7. I’m reminded of something in the children’s book A Swiftly Tilting Planet, by Madeleine L’Engle. Speaking of one of the characters, a brilliant child named Charles Wallace Murray, the observation
was made that he would have to live in a world full of people whose minds didn’t work at all in most of the ways his did, and with whom he would have to learn to get along.
In J’s case, the teacher was obviously in the wrong; and I think most would feel a sense of indignation on J’s behalf. Ideally, a kid of J’s caliber would be praised by his teachers for his
creativity and given every encouragement to develop it, even while being taught the at times galling lesson that other people’s often seemingly arbitrary rules really do sometimes have to be
respected if one is going to get along in society.
□ Explain how the teacher was “obviously wrong”.
8. If J were really smart, he’d first give the obvious answer of 32, and then explain why any number would work.
□ If really smart, he’d have just answered 32 and moved on. I know this to my cost, having myself been a smartarse at that age and never having quite grown out of it. There is still a point of
contention between me and my (presumably long dead) chemistry teacher about elephants and electrical conductivity. Perhaps I’ll let it go one day.
But if J really wanted to be a smartarse he should have explained his answer in the exam paper rather than afterwards. It’s hard to see how he can get credit after the paper was already
marked. A good teacher with a sense of humour and a genuine desire to teach would recognise that J understood the problem and would hopefully reward it as long as it was done on the actual
exam paper. Special pleading after the fact can hardly count toward performance in the exam. The ironic grin on your face as you write down the smartarse answer without explanation doesn’t
count for much in the marking scheme.
I have a vaguely similar anecdote, where I was the smartarse. It shows how old I am, I’m afraid. It was a LISP exam. We had to write a program in LISP and run some test data through it to
prove it worked in the space of 2 hours. The first hour was to work out how to do it on paper and write down our workings and the second was to write and test the code. It was really, really
easy. It took me about five minutes to write down the method. I’m not boasting or anything, it really was stupidly easy. So with 55 minutes to go, I wrote down three other solutions and wrote
a brief essay about which I was going to implement and why. Then in the implementation phase I wrote all four programs and tested them and plotted the results in a table and suggested this
was evidence that my assumption was likely correct.
I was pretty pleased with myself, as we smartarses usually are. And I got the highest mark on that exam. But I only got 2% more than someone who just answered the question mechanically and
obviously and as expected. And someone else who clearly cheated by typing in every single LISP program we’d written in tutorials (presumably from a crib sheet), one of which was similar to
the exam question, also, somehow, passed the exam.
My conclusion is that you shouldn’t bother to be smart in exams. Do enough to get the highest mark you can and save any cleverness for when you need it. You’re going to need it to stop every
boss you’ll ever have doing stupider things than they’d do if you weren’t there. If that isn’t a noble use for superior intelligence, I don’t know what is.
9. These sequence questions are more appropriate for an IQ test, which tests pattern recognition skills more than math skills.
Answering pi for every sequence doesn’t demonstrate pattern recognition skills. J should’ve at least given the formula on the exam, not wait until he fails the exam.
10. No mention of Wittgenstein, Kripkenstein, and the rule-following problem? Really?
11. Individual teachers can choose to “bend” the system and actually think – like we teach our kids to do. I LOVE those “smartass” kids! They are terrific role models for other kids and they make my
day! Besides, it is fun to have those really bright thinkers and watch them go off into their areas of interest then share it with us all. No, they don’t “fit in” to every teacher’s class, but
some of us are truly grateful for those wonderful smartasses!
12. If a genius kid completely blows off homework and exams, and instead goes off and does some brilliant project that no one asked for, should he get an A?
□ He should drop out of school and start a company.
☆ Putting aside that he’s thirteen years old, who would invest in a company started by a smart-ass high school dropout? Will the smart-ass ignore his customers to do what he feels like?
When he gets vague requirements from users, will he work with them to understand what they really want, or will he show them how stupid they are by producing something that technically
meets their requirements but isn’t what they want?
He’d probably be better off pursuing an academic career, which is what I’m guessing he did.
☆ (You’re being a bit hasty to assume that a 13-year-old’s smartassery isn’t just a phase, no?)
If he’s a “genius” and the project is “brilliant” then it ought to have a market somewhere. And if he chooses to seriously dedicate himself to success in the market (i.e. to understand
his failures as learning experiences), it’ll wring the smartass out of him.
You’re right that another option is academia, but it depends on the kid’s personal ambitions.
13. I’m not sure this is an indictment of our educational system as much as it is a story about a teacher and a student, with the commenters taking one side or the other.. A similar situation could
have happened in an open-ended college program where each student was free to do their own thing and Pass/Fail would be determined solely on the basis of the completion of student-written goals.
Or so it seems. Even here, the Pass/Fail would be influenced by the achievements of others in the program. Why? Because the program has an interest in survival, and could be damaged by individual
human creativity.
J was working towards his own ends, not the class goals. On the opposite end of the intellectual spectrum, I remember at that age a pop quiz on some Easter story that we were supposed to have read,
and I had no idea where this story came from, let alone read it. Most of the questions were of the form “What was the …….”, and I answered to each one, “An egg”. I did not mean to be a smartass, I just
felt I could not hand in a blank paper. I, like J, HAD to create.
But it seems that all educational systems can handle just so much creativity or they will fail, bankrupted by the replacement cost of all of Barry’s broken barometers.
But answering “egg” is not a creative act, especially if you put the same thing for every answer. If you’d come up with reasons why the answer should be egg for each one, then I’d grant that
you’d been creative. However, unless it was a creativity exam, you should still have got zero marks.
This is not because educational systems ‘can’t handle creativity’ it’s because somebody has to mark your test. That person has to justify the marks given. That person can’t give marks because
she happens to know that the student is actually intelligent and knowledgeable, because that’s her personal, subjective opinion and not fair on the other students.
Every single person on the planet knows that standard tests are only any good at testing how good you are at the standard tests. But this doesn’t make them worthless. Some people’s abilities
will be harder than others to measure this way and some people indeed slip through the gaps. It’s a real shame. But the answer is NOT to allow teachers to assign marks based on what they
think someone deserves rather than what questions they actually got right.
This has NOTHING to do with creativity. Creativity should indeed be encouraged. However, I’d suggest that a student who decides to express creativity in an exam – when everyone knows what the
exam is about and its purpose and how it will be marked – has made a poor choice.
14. My own calculus professor in college hated those “complete the pattern” problems for precisely this reason. “No sequence can be completely determined by a finite set of its terms,” he used to say.
□ The real problem statement is to find the simplest rule, or the formula with the fewest parameters that fits the sequence. Think of it as data compression. In the above case, 2^n is more
compact than 2, 4, 8, 16, whereas the smartass formula is less compact.
Finding a compact representation demonstrates familiarity with a variety of functions and concepts: polynomials, exponentials, sinusoids, prime numbers, etc.
☆ Well put. But there is a smartass solution which is more ‘compact’ … depending again on how you define ‘compact’. I believe a continuous solution with the lowest total power – sum of y^2
from -inf to +inf – is the one below which gives a result of zero as the next item… (but then, this approach gives you zero as the next element for all sequences which are assumed to
occur at integer values of x).
☆ The square brackets in the url got mangled by the blog (but it’s a nice picture). This is what was meant:
15. On the general subject of killing mathematical creativity with teaching, I would really recommend A Mathematician’s Lament by Paul Lockhart – a brilliant little essay describing much of whats
wrong with K-12 math education today.
16. Enh, I’m actually not 100% sure I agree with you here. I think it was clever, but on the other hand, recognizing obvious patterns is a useful skill in math. I also think there’s a difference
between being a smartass on an individual project, and on a test, where administrability matters.
I think your larger point that clearly J didn’t deserve to be kept out of higher level math seems obvious enough, but if smart-alecking your way through one question is enough to keep you in
on-level classes, that displays some pretty big problems with the whole testing system in general. (Granted, I say this as someone who was always near the top of her classes in elementary and
middle school but who always choked on standardized tests – which I think informs my skepticism about things like NCLB.)
17. I hope your friend becomes a high school math teacher, math needs out-of-the-box thinking. In the real world, things are not always so straightforward, there’s no ‘back of the book” answer – you
have to sit down and actually THINK. The US education system needs to wake up if this economy is going to ever recover. We can’t afford to raise dumb kids anymore!
18. People are making much ado about nothing. Here is the clue in the statement “my friend J almost didn’t make it into his high school honors math class.” Notice the word “almost”. One can assume
that young J was able to accurately determine which of the other questions were answered correctly and would also be able to establish how many questions he could answer incorrectly for the sake
of humor. A smartass must keep up appearances you know.
When I first went to take a driver’s license exam that was given on a computer terminal, a friend who was taking the test at the same time and I staged a race to see who could complete the test
fastest. We were not worried about missing one or two questions as it was a ridiculously easy test.
I once confused an EE professor by solving simultaneous equations representing a network on a test using Gaussian elimination instead of using determinants as the syllabus taught. It did not get
marked wrong but he came and asked me what I had done as he was unfamiliar with the method.
19. When I was in 3rd grade, I overheard my teacher telling another girl that you can’t divide 1 by 2. “Yes, you can,” says I. “it’s 1/2.” All I did was think of one candy bar, 2 people, and it was
obvious. But the teacher said, “No, you can’t. We haven’t gotten there yet.”
□ I had a similar experience, a girl who could count by 5 (and I could too) was told sit on a chair and be quiet by my grade 2 teacher.
20. Pingback: Friday Links (13-Apr-12) -- a Nadder!
21. Honestly, I've got to say I need to look up stuff smartasses did more often; this conversation throughout the comments has been something to read, and something I've been missing. You guys
have no idea how mind-boggling these comments have been to try and read through.
Cracking The Da Vinci Code
13 – 3 – 2 – 21 – 1 – 1 – 8 – 5
O, Draconian devil!
Oh, lame saint!
Langdon read the message again and looked up at Fache.
“What the hell does this mean?”
Harvard University Professor Robert Langdon, the hero of Dan Brown’s best-selling novel The Da Vinci Code, is initially baffled by the message, scrawled in invisible ink on the floor of the Louvre in
Paris by a dying man with a passion for secret codes.
Langdon, whose specialty is religious symbology, soon figures out that the words are a pair of anagrams for “Leonardo da Vinci” and “the Mona Lisa.” But what about those numbers? They may puzzle
Langdon for a while, but any mathematician will recognize them at once. They are the first eight members of the Fibonacci sequence, written in a jumbled order. A young French code breaker named
Sophie Neveu makes the same observation and explains that the Fibonacci sequence is one of the most famous mathematical progressions in history.
Having cracked the first two of what turn out to be a whole sequence of secret codes, Langdon and Neveu find themselves on a fast-paced adventure that eventually threatens their lives as they uncover
a sinister conspiracy within the Roman Catholic Church. It’s a fantastic plot that intertwines art history and 2,000 years of church politics.
But what of the mathematical clue? In Chapter 20, Langdon recalls a lecture he gave at Harvard on the Fibonacci numbers and the closely related constant that is his favorite number: the golden ratio,
also known as the divine proportion. In his lecture, Langdon makes a series of amazing claims about the prevalence of the divine proportion in life and nature, and I suspect many readers tacitly
assume most of it is fiction. That is not the case. As with the novel’s many religious, historical, and art references, some of the things Langdon says about the golden ratio are false—or at least
stretch the truth. But some are correct.
The divine proportion—which is sometimes represented by the Greek letter φ, generally written in English as phi and pronounced “fie”—is one of nature’s own mysteries, a mystery that was fully
unraveled only 10 years ago. The quest to uncover the φ Code, as I’ll call it, provides a story with almost as many surprising turns, puzzles, and false leads as The Da Vinci Code.
The story of φ begins, like so many mathematical tales, in ancient Greece. The Greeks, with their love for symmetry and geometric order, searched for what they felt was the most pleasing rectangle.
Believing that the purest and most aesthetically pleasing form of thought was mathematics, they used math to come up with an answer (see “How the Greeks Found φ,” page 69).
When Langdon begins his Harvard lecture on the divine proportion, he begins by writing the number 1.618 on the chalkboard. Strictly speaking, this is not exactly the golden ratio. The true value is
given by the formula
φ = (1 + √5) / 2
Unlike authors of best-selling novels, when Mother Nature writes a mystery, she often keeps us from finding the whole answer. Like the ancient Hebrews who could never know the true name of God, we
will never know the true numerical value of φ. If you try to use the formula to calculate its value, you will discover that the decimals keep on appearing. The process never stops. In mathematician’s
language, the number φ is “irrational.”
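A quick numerical check (not taken from the article, just the standard definitions) shows both the value given by the formula and why the jumbled numbers on the Louvre floor point toward it: ratios of consecutive Fibonacci numbers close in on the golden ratio.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2
print(phi)                       # 1.618033988749895

a, b = 1, 1                      # consecutive Fibonacci numbers
for _ in range(20):
    a, b = b, a + b
print(b, a, b / a - phi)         # 17711 10946; the ratio differs from phi by < 1e-8
```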
As an irrational number, φ is like that other mathematical constant π, whose infinite decimal expansion begins 3.14159 . . . Of the two numbers, mathematicians would say that π is more important than
φ. But I have a lot of sympathy with the math major in Langdon’s class who raises his hand and says, “Phi is one H of a lot cooler than pi.” π is hot, but φ is cool.
Welcome to techInterview, a site for technical interview questions, brain teasers, puzzles, quizzles (whatever the heck those are) and other things that make you think!
This is difficult to describe in words, so read this carefully, lest there be any confusion. You have a normal six-sided cube. I give you six different colors that you can paint each side of the cube
with (one color to each side). How many different cubes can you make?
Different means that the cubes can not be rotated so that they look the same. This is important! If you give me two cubes and I can rotate them so that they appear identical in color, they are the
same cube.
Let X be the number of “different” cubes (using the same definition as in the problem). Let Y be the number of ways you can “align” a given cube in space such that one face is pointed north, one is
east, one is south, one is west, one is up, and one is down. (We’re on the equator.) Then the total number of possibilities is X * Y. Each of these possibilities “looks” different, because if you
could take a cube painted one way, and align it a certain way to make it look the same as a differently painted cube aligned a certain way, then those would not really be different cubes. Also note
that if you start with an aligned cube and paint it however you want, you will always arrive at one of those X * Y possibilities.
How many ways can you paint a cube that is already “aligned” (as defined above)? You have six options for the north side, five options for the east side, etc. So the total number is 6! (that’s six
factorial, or 6 * 5 * 4 * 3 * 2 * 1). Note that each way you do it makes the cube “look” different (in the same way the word is used above). So 6! = X * Y.
How many ways can you align a given cube? Choose one face, and point it north; you have six options here. Now choose one to point east. There are only four sides that can point east, because the side
opposite the one you chose to point north is already pointing south. There are no further options for alignment, so the total number of ways you can align the cube is 6 * 4.
Remember, Y is defined as the number of ways you can align the cube, so Y = 6 * 4. This gives us 6! = X * 6 * 4, so X = 5 * 3 * 2 * 1 = 30.
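As a sanity check on the counting argument, here is a small brute-force sketch (my own, assuming nothing beyond the puzzle statement): it builds the 24 rotations of the cube from two quarter turns and counts the 6! colorings up to rotation.

```python
from itertools import permutations

# Faces: 0=Up, 1=Down, 2=North, 3=South, 4=East, 5=West.
# A rotation is a tuple r where face f moves to position r[f].
ROT_Z = (0, 1, 5, 4, 2, 3)   # quarter turn about the vertical axis
ROT_X = (3, 2, 0, 1, 4, 5)   # quarter turn about the east-west axis

def compose(a, b):
    """Apply rotation b first, then a."""
    return tuple(a[b[f]] for f in range(6))

# Generate the cube's rotation group by closure; it should have 6 * 4 = 24 elements.
group = {tuple(range(6))}
frontier = set(group)
while frontier:
    new = {compose(g, r) for g in frontier for r in (ROT_Z, ROT_X)} - group
    group |= new
    frontier = new
assert len(group) == 24

def rotated(coloring, rot):
    out = [None] * 6
    for f in range(6):
        out[rot[f]] = coloring[f]
    return tuple(out)

# Canonicalize each of the 6! = 720 colorings by its lexicographically smallest rotation.
distinct = {min(rotated(c, g) for g in group) for c in permutations(range(6))}
print(len(distinct))   # 30
```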
4 Sep
o RATIO- The ratio of two quantities of the same kind is the fraction that one quantity is of the other; in other words, it says how many times one given number is in comparison to another number. The
ratio between two numbers A and B is denoted by A/B
o Some of the points to be remembered :
1. The two quantities must be of the same kind.
2. The units of the two quantities must be the same.
3. The ratio has no measurement.
4. The ratio remains unaltered even if both the antecedent(A) and the consequent(B)are multiplied or divided by the same no.
o If two different ratios (say A/B and C/D) are to be combined into a single (compound) ratio, we follow this rule:
The required ratio is AC / BD
o The duplicate ratio of A/B is A^2/B^2; the triplicate ratio of A/B is A^3/B^3
o The subduplicate ratio of A/B is sq.root of A/ sq.root of B
o The subtriplicate ratio of A/B is cube root of A/ cube root of B
o To determine which of two given ratios A/B and C/D is greater or smaller, we compare A x D and B x C, provided B>0 and D>0;
if A x D > B x C then A/B > C/D and vice versa, but if A x D = B x C then A/B = C/D
1. Inverse ratios of two equal ratios are equal, if A/B=C/D then B/A = D/C.
2. The ratios of antecedents and consequents of two equal ratios are equal if A/B=C/D then A/C=B/D
3. If A/B=C/D THEN A+B/B=C+D/D
4. If A/B=C/D THEN A-B/B=C-D/D
5. If A/B=C/D THEN A+B/A-B=C+D/C-D
6. If A/B=C/D=E/F..... and so on, then each of the ratios (A/B, C/D, .....etc) is equal to the
sum of the numerators/sum of the denominators = (A+C+E.....)/(B+D+F......) = k
o If the ratio of two terms is equal to the ratio of two other terms, then these four terms are said to be in proportion, i.e. if A/B=C/D then A, B, C and D are in proportion.
A,B,C and D are called first, second,third and fourth proportionals respectively.
A and D are called Extremes and B and C are called the Means
and it follows that A xD=B xC
o Continued proportion: when A/B=B/C then A, B and C are said to be in continued proportion and B is called the geometric mean of A and C so it follows,
A xC=B^2 ,OR square root of (A xC)=B
o Direct proportion: if two quantities A and B are related so that an increase in A produces a proportional increase in B, and vice-versa, then A and B are said to be in direct proportion. "A is directly proportional to B" is
written as A ∝ B; when the proportionality sign is removed the equation becomes
A = kB, where k is a constant.
o Inverse proportion: if two quantities A and B are related so that an increase in A produces a proportional decrease in B, and vice-versa, then A and B are said to be in inverse proportion. "A is inversely proportional to B" is
written as A ∝ 1/B or, A = k/B, where k is a constant.
It simply means a method by which a quantity may be divided into parts which bear a given ratio to one another. The parts are called proportional parts.
e.g. divide a quantity "y" in the ratio a:b:c; then
first part = [a/(a+b+c)] x y, second part = [b/(a+b+c)] x y, third part = [c/(a+b+c)] x y
Now let us work out some questions to understand the underlying concept.
Q1. Find the three numbers in the ratio of 1:2:3 so that the sum of their squares is equal to 504?
Ans:let 1st no. be 1x,2nd no. be 2x and 3rd no. be 3x
their squares- x^2 , (2x)^2 and (3x)^2
as per the question, x^2 + (2x)^2 + (3x)^2 = 504, i.e. 14x^2 = 504, so x^2 = 36 and x = 6
So the three numbers are 1x = 6, 2x = 12 and 3x = 18
Q2. A, B, C and D are four quantities of the same kind such that A:B=3:4, B:C=8:9 and C:D=15:16. Find the ratio a) A:D b) A:B:C:D
ans: a) A/D = A/B x B/C x C/D = 3/4 x 8/9 x 15/16 = 5/8
b) in A:B:C:D, the value of A is given by the product 3 x 8 x 15,
the value of B by 4 x 8 x 15,
the value of C by 4 x 9 x 15,
and the value of D by 4 x 9 x 16,
so A:B:C:D is 3x8x15 : 4x8x15 : 4x9x15 : 4x9x16 = 360:480:540:576 = 30:40:45:48
Q3.if a carton containing a dozen mirrors is dropped, which of the following cannot be the ratio of broken mirrors to unbroken mirrors?
options:a)2:1 b)3:1 c)3:2 d) 1:1 e)7:5
There are 12 mirrors in the carton. In the given options the antecedents give the broken mirrors and the consequents give the unbroken mirrors, so the sum of antecedent and consequent in each ratio should
divide the number of mirrors exactly. Out of the given options, option 'c', which totals 5, cannot divide 12, and so cannot be the ratio of broken mirrors to unbroken mirrors.
Q4.find the fourth proportional to the numbers 6,8 and 15?
ans: let K be the fourth proportional, then 6/8=15/K
solving it we get K=(8×15)/6= 20
Q5. find the mean proportion between 3 and 75?
ans. This is related to continued proportion. Let x be the mean proportional; then we have 3/x = x/75, so x^2 = 3 x 75 = 225 and x = 15.
Q6.divide Rs 1350 into three shares proportional to the numbers 2, 3 and 4?
ans: 1st share = Rs 1350 x (2/(2+3+4)) = Rs 300
2nd share = Rs 1350 x (3/(2+3+4)) = Rs 450
3rd share = Rs 1350 x (4/(2+3+4)) = Rs 600
Q7. a certain sum of money is divided among A, B and C such that for each rupee A has, B has 65 paise and C has 40 paise. If C's share is Rs 8, find the sum of money?
ans: here A:B:C = 100:65:40 = 20:13:8
as 8/41 of the whole sum = Rs 8
so, the whole sum = Rs 8 x 41/8 = Rs 41
Q8.in 40 litres mixture of milk and water the ratio of milk and water is 3:1. how much water should be added in the mixture so that the ratio of milk to water becomes 2:1.?
ans:here only amount of water is changing. the amount of milk remains same in both the mixtures. so, amount of milk before addition of water =(3/4)X40=30 ltrs. so amount of water is 10 ltrs.
After addition of water the ratio changes to 2:1.here the mixture has two ltrs of milk for every 1 ltr of water. since amount of milk is 30 ltrs the amount of water has to be 15 ltr so that the ratio
is 2:1. so the amount of water to be added is 15-10=5 ltrs.
Q9. three quantities A, B and C are such that AB=kC, where k is constant. When A is kept constant, B varies directly as C; when B is kept constant, A varies directly as C; and when C is kept constant, A
varies inversely as B.
initially A was at 5 and A:B:C was 1:3:5. Find the value of A when B equals 9 at constant C.
solution: initial values are A=5, B=15 and C=25.
thus we have 5 x 15 = k x 25, hence k = 3
thus the equation becomes AB=3C.
for the problem C is kept constant at 25. Then A x 9 = 3 x 25 = 75, so A = 75/9 = 25/3.
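The worked answers above are easy to check mechanically; here is a short Python sketch (mine, using only the numbers stated in the questions):

```python
from fractions import Fraction as F

# Q1: x, 2x, 3x with x^2 + (2x)^2 + (3x)^2 = 504  ->  14 x^2 = 504
x = (504 / 14) ** 0.5
print(x, 2 * x, 3 * x)          # 6.0 12.0 18.0

# Q7: A:B:C = 20:13:8 and C's share is Rs 8, so one "part" is Re 1
print(20 + 13 + 8)              # whole sum = Rs 41

# Q8: 40 L in ratio 3:1 -> 30 L milk, 10 L water; need milk:water = 2:1
milk, water = 30, 10
print(milk / 2 - water)         # 5.0 litres of water to add

# Q9: A*B = k*C with (A, B, C) = (5, 15, 25) -> k = 3; then C = 25, B = 9
k = F(5 * 15, 25)
print(k, k * 25 / F(9))         # 3 and A = 25/3
```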
14. Axisymmetric & Triaxial Models
Astronomy 626: Spring 1997
Axisymmetric two-integral models of many elliptical galaxies don't fit the observed kinematics, implying that these systems have a third, non-classical integral. Schwarzschild's method permits the
construction of both axisymmetric and triaxial models which implicitly depend on a third integral. But minor orbit families and chaotic orbits both limit the range of axial ratios permitted for
`cuspy' elliptical galaxies; systems with steep power-law cusps probably evolve toward more axisymmetric shapes over tens of orbital times.
14.1 Axisymmetric Models
Numerical calculations show that most orbits in `plausible' axisymmetric potentials have a third integral, since their shapes are not fully specified by the classical integrals, energy and the
z-component of angular momentum. No general expressions for this non-classical integral are known, although for nearly spherical systems it is approximated by |J|, the magnitude of the total angular
momentum, while in very flattened systems the energy invested in vertical motion may be used.
Despite the existence of third integrals in most axisymmetric potentials, it is reasonable to ask if models based on just two integrals can possibly describe real galaxies. In such models the
distribution function has the form
(1) f = f(E,J_z) .
One immediate result is that the distribution function depends on the R and z components of the velocity only through the combination v_R^2+v_z^2; thus in all two-integral models the velocity
dispersions in the R and z directions must be equal:
(2) <v_R^2> = <v_z^2> ,
where angle brackets denote local mean values.
This equality does not hold in the Milky Way; for disk stars, the radial dispersion is about twice that in the vertical direction (MB81). Thus our galaxy can't be described by a two-integral model.
For other galaxies, however, the situation is not so clear, and a two-integral model may suffice.
Distribution functions
From f(E,J_z) to rho(R,z): Much as in the spherically symmetric case described before, one may adopt a plausible guess for f(E,J_z), derive the corresponding density rho(R,Phi), and solve Poisson's
equation for the gravitational potential. Perhaps the most interesting example of this approach is a series of scale-free models with r^-2 density profiles (Toomre 1982); however, these models are
somewhat implausible in that the density vanishes along the minor axis.
From rho(R,z) to f(E,J_z): Conversely, one may try to find a distribution function which generates a given rho(R,z). This problem is severely underconstrained because a star contributes equally to
the total density regardless of its sense of motion about the z-axis; formally, if f(E,J_z) yields the desired rho(R,z), then so does f(E,J_z)+f_o(E,J_z), where f_o(E,J_z) = -f_o(E,-J_z) is any odd
function of J_z. The odd part of the distribution function can be found from the kinematics since it determines the net streaming motion in the phi direction (BT87, Chapter 4.5.2(a)).
An `unbelievably simple' and analytic distribution function exists for the mass distribution which generates the axisymmetric logarithmic potential (Evans 1993). This potential, introduced to
describe the halos of galaxies (Binney 1981, BT87, Chapter 2.2.2), has the form
(3) Phi = - (1/2) v_0^2 ln( R_c^2 + R^2 + z^2/q^2 ) ,
where v_0 is the velocity scale, R_c is the core scale radius, and q is the flattening of the potential (the mass distribution is even flatter). The corresponding distribution function has the form
(4) f(E,J_z) = A J_z^2 exp( 4E/v_0^2 ) + B exp( 4E/v_0^2 ) + C exp( 2E/v_0^2 ) ,
where A, B, and C are constants. Evans also divides this distribution function up into `luminous' and `dark' components to obtain models of luminous galaxies embedded in massive dark halos; his
results illustrate a number of important points, including the non-gaussian line profiles which result when the luminous distribution function is anisotropic.
But even if kinematic data is available, this approach is not very practical for modeling observed galaxies. The reason is that the transformation from density (and streaming velocity) to
distribution function is unstable; small errors in the input data can produce huge variations in the results (e.g. Dejonghe 1986, BT87). A few two-integral distribution functions are known for
analytic density distributions, and recent developments have removed some mathematical obstacles to the construction of more models (Hunter & Qian 1993).
Jeans-equation models
Since we can't construct distribution functions for real galaxies, consider the simpler problem of modeling observed systems using the Jeans equations. If we assume that the underlying distribution
function depends only on E and J_z we can simplify the Jeans equations, since the radial and vertical dispersions must be everywhere equal; thus
(5) d/dR [ nu <v_R^2> ] + (nu/R) ( <v_R^2> - <v_phi^2> ) = - nu dPhi/dR ,

(6) d/dz [ nu <v_R^2> ] = - nu dPhi/dz .
At each R one can calculate the mean squared velocity in the R direction by integrating Eq. 6 inward from z = infinity; the mean squared velocity in the phi direction then follows from Eq. 5.
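A minimal numerical sketch of this two-step procedure (mine; the potential and tracer density below, a Miyamoto-Nagai potential and a double-exponential density, are stand-ins chosen only for illustration and are not the models used in the papers cited in this section, nor a self-consistent pair):

```python
import numpy as np
from scipy.integrate import quad

G, M, a, b = 1.0, 1.0, 3.0, 0.3      # Miyamoto-Nagai potential parameters (assumed)
hR, hz = 3.0, 0.3                    # double-exponential tracer density (assumed)

def dPhi_dR(R, z):
    S = R**2 + (a + np.sqrt(z**2 + b**2))**2
    return G * M * R / S**1.5

def dPhi_dz(R, z):
    u = np.sqrt(z**2 + b**2)
    S = R**2 + (a + u)**2
    return G * M * (a + u) * z / (u * S**1.5)

def nu(R, z):
    return np.exp(-R / hR - abs(z) / hz)

def mean_vR2(R, z, zmax=50.0):
    """<v_R^2>(R,z) from Eq. (6): nu <v_R^2> = integral from z to infinity of nu dPhi/dz'."""
    I, _ = quad(lambda zz: nu(R, zz) * dPhi_dz(R, zz), z, zmax)
    return I / nu(R, z)

def mean_vphi2(R, z, dR=1e-3):
    """<v_phi^2>(R,z) from Eq. (5), using a centred difference for the radial derivative."""
    f = lambda RR: nu(RR, z) * mean_vR2(RR, z)
    dfdR = (f(R + dR) - f(R - dR)) / (2 * dR)
    return mean_vR2(R, z) + (R / nu(R, z)) * (dfdR + nu(R, z) * dPhi_dR(R, z))

print(mean_vR2(2.0, 0.0), mean_vphi2(2.0, 0.0))
```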
The Jeans equations don't tell how to divide up the azimuthal velocities into streaming and random components. One popular choice is
(7) <v_phi>^2 = k ( <v_phi^2> - <v_R^2> ) ,
where k is a free parameter (Satoh 1980). The dispersion in the phi direction is then
(8) sigma_phi^2 = <v_phi^2> - <v_phi>^2 .
Note that if k = 1 the velocity dispersion is isotropic and the excess azimuthal motion is entirely due to rotation, while for k < 1 the azimuthal dispersion exceeds the radial dispersion.
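In code the split of Eqs. (7) and (8) is a two-liner; the sketch below takes the free parameter exactly as it enters Eq. (7) above (some authors write it squared, which only relabels k):

```python
import numpy as np

def satoh_split(vphi2_mean, vR2_mean, k=1.0):
    """Split <v_phi^2> into streaming and random parts via Eqs. (7)-(8)."""
    vphi_bar = np.sqrt(k * (vphi2_mean - vR2_mean))   # Eq. (7)
    sigma_phi = np.sqrt(vphi2_mean - vphi_bar**2)     # Eq. (8)
    return vphi_bar, sigma_phi

# With k = 1 the azimuthal dispersion comes out equal to the radial one (isotropy).
print(satoh_split(vphi2_mean=4.0e4, vR2_mean=2.5e4, k=1.0))
```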
Application to elliptical galaxies
Jeans equations models have been constructed of a number of elliptical galaxies (Binney, Davies, & Illingworth 1990, van der Marel, Binney, & Davies 1990, van der Marel 1991). The procedure is
1. Observe the surface brightness Sigma(x',y');
2. Deproject to get the stellar density nu(R,z), assuming an inclination angle;
3. Compute the potential Phi(R,z), assuming a constant mass-to-light ratio;
4. Solve the Jeans equations for the mean squared velocities;
5. Divide the azimuthal motion into streaming and random parts;
6. Project the velocities back on to the plane of the sky to get the line-of-sight velocity and dispersion v_los(x',y') and sigma_los(x',y');
7. Compare the predicted and observed kinematics.
The inclination angle, mass-to-light ratio, and rotation parameter k are unknowns to be determined by trial and error.
Some conclusions following from this exercise are that:
• Isotropic oblate rotators (k = 1) generally don't fit, even though some galaxies lie close to the expected relation between v_0/sigma_0 and epsilon;
• Some galaxies (e.g. NGC 1052) are well-fit by two-integral Jeans equation models;
• The models predict major-axis velocity dispersions in excess of those observed in most galaxies;
• Consequently, most of the galaxies must have a third integral, or may even be triaxial.
14.2 Triaxial Models
The general problem of modeling triaxial galaxies is illustrated with a couple of special cases. In separable models the allowed orbits are fairly simple, and the job of populating orbits so as to
produce the density distribution is well-understood. In scale-free models the allowed orbits are more complex, and it's not clear if such models can be in true equilibrium.
Schwarzschild's Method
Schwarzschild (1979) invented a powerful method for constructing equilibrium models of galaxies without explicit knowledge of the integrals of motion. To use this method,
1. Specify the mass model rho(x) and find the corresponding potential;
2. Construct a grid of K cells in position space;
3. Chose initial conditions for a set of N orbits, and for each one,
□ integrate the equations of motion for many orbital periods, and
□ keep track of how much time the orbit spends in each cell, which is a measure of how much mass the orbit contributes to that cell;
4. Determine non-negative weights for each orbit such that the summed mass in each cell is equal to the mass implied by the original rho(x).
The last step is the most difficult. Formally, let P_i(c) be the mass contributed to cell c by orbit i; the task is then to find N non-negative quantities Q_i such that
(9) M(c) = sum_{i=1}^{N} Q_i P_i(c)
simultaneously for all cells, where M(c) is the integral of rho(x) over cell c. This may be accomplished by taking N > K, so as to obtain a reasonably rich set of `basis functions', and using any
one of a number of numerical techniques, including linear programming (Schwarzschild 1979), non-negative least squares (Pfenniger 1984), Lucy's method (Newton & Binney 1984), or maximum entropy
(Richstone & Tremaine 1988).
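As a toy illustration of that last step, the weight-fitting problem of Eq. (9) can be handed to an off-the-shelf non-negative least-squares routine. The orbit-versus-cell table below is random and stands in for the real P_i(c); it is only meant to show the shape of the calculation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
K, N = 50, 200                                  # K cells, N > K orbits
P = rng.random((K, N))                          # P[c, i] = mass of orbit i in cell c
true_w = rng.random(N) * (rng.random(N) < 0.3)  # a sparse set of "true" orbit weights
M = P @ true_w                                  # target cell masses M(c)

# Solve M(c) = sum_i Q_i P_i(c) subject to Q_i >= 0 (Eq. 9).
Q, rnorm = nnls(P, M)
print(rnorm, np.allclose(P @ Q, M, atol=1e-6))  # residual ~ 0, constraints satisfied
```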
In general Eq. 9 has many solutions, reflecting the fact that many different distribution functions are consistent with a given mass model. Some methods allow one to specify additional constraints so
as to select solutions with special properties (maximum rotation, radial anisotropy, etc.).
Separable Potentials
In three dimensions, a separable potential permits four distinct orbit families:
• box orbits,
• short-axis tube orbits,
• inner long-axis tube orbits, and
• outer long-axis tube orbits.
The time-averaged angular momentum of a star on a box orbit vanishes; such orbits therefore do not contribute to the net rotation of the system. Short-axis and long-axis tube orbits, on the other
hand, preserve a definite sense of rotation about their respective axes; consequently, their time-averaged angular momenta do not vanish. The total angular momentum vector of a non-rotating triaxial
galaxy may thus lie anywhere in the plane containing the short and long axes (Levison 1987).
Using Schwarzschild's method, it is possible to numerically determine orbit populations corresponding to separable triaxial models (Statler 1987). A somewhat more restricted set of models can be
constructed exactly; these models make use of all available box orbits, but only those tube orbits with zero radial thickness (Hunter & de Zeeuw 1992). Apart from the choice of streaming motion,
thin tube models are unique. One use of such models is to illustrate the effects of streaming motion by giving all tube orbits the same sense of rotation; the predicted velocity fields display a wide
range of possibilities (Arnold, de Zeeuw, & Hunter 1994). Non-zero streaming on the projected minor axis is a generic feature of such models; a number of real galaxies exhibit such motions and
thus must be triaxial (Franx, Illingworth, & Heckman 1989b).
Scale-Free Potentials
In scale-free models, box orbits tend to be replaced by minor orbital families known as boxlets (Gerhard & Binney 1985, Miralda-Escude & Schwarzschild 1989). Each boxlet family is associated
with a closed and stable orbit arising from a resonance between the motions in the x and y directions.
The appearance of boxlets instead of boxes poses a problem for model building because boxlets are `centrophobic' (meaning that they avoid the center) and do not provide the elongated density
distributions of the box orbits they replace. As a result, the very existence of scale-free triaxial systems is open to doubt (de Zeeuw 1995).
Moreover, some scale-free potentials have irregular orbits; these have no integrals of motion apart from the energy E. In principle, such an orbit can wander everywhere on the phase-space
hypersurface of constant E, but in actuality such orbits show a complicated and often fractal-like structure.
The scale-free elliptic disk is a relatively simple two-dimensional analog of a scale-free triaxial system. Because the model is scale-free, the radial dimension can be folded out when applying
Schwarzschild's method; thus the calculations are fast (Kuijken 1993). The result is that self-consistent models can be built using the available boxlets, loops, and irregular orbits, but the minimum
possible axial ratio b/a increases as the numerical resolution of the calculation is improved.
Similar results hold for scale-free models in three dimensions. Models have been constructed for triaxial logarithmic potentials with a range of axial ratios b/a and c/a (Schwarzschild 1993). Tubes
and regular boxlets provide sufficient variety to produce models with c/a > 0.5, but not flatter. However, over intervals of 50 dynamical times, irregular orbits behave like `fuzzy regular orbits',
and by including them it becomes possible to build near-equilibrium models as flat as c/a = 0.3. But these models are not true equilibria; over long times they will become rounder and less triaxial.
14.3 Rotation, Chaos, & Secular Evolution
Figure rotation adds a new level of complexity to the orbit structure of triaxial systems. A few models have been constructed using Schwarzschild's method, but little is known about the existence and
stability of such systems. N-body experiments indicate that at least some such systems are viable models of elliptical galaxies. Rotation tends to steer orbits away from the center and so may lessen
the effects of central density cusps.
Realistic potentials are likely to have some irregular or chaotic orbits, and there is no reason to think that such orbits are systematically avoided by processes of galaxy formation. Over 10^2 or
more dynamical times, such orbits tend to produce nearly round density distributions (Merritt & Valluri 1996).
Consequently, it is likely that secular evolution over timescales of 10^2 dynamical times may be changing the structures of elliptical galaxies (Binney 1982, Gerhard 1986). The outer regions are not
likely to be affected since dynamical times are long at large radii, but significant changes may occur in the central cusps where dynamical times are only 10^6 years (de Zeeuw 1995).
• Arnold, R.A., de Zeeuw, P.T., & Hunter, C. 1994, M.N.R.A.S., submitted.
• Binney, J.J. 1981, M.N.R.A.S. 196, 455.
• Binney, J.J. 1982, M.N.R.A.S. 201, 15.
• Binney. J.J., Davies R.L., & Illingworth, G.D. 1990, Ap.J. 361, 78.
• Dejonghe, H. 1986, Phys. Rep. 133, 218.
• de Zeeuw, P.T. 1995, in The Formation and Evolution of Galaxies, ed. C. Munoz-Tunon & F. Sanchez, p. 231.
• Evans, N.W. 1993, M.N.R.A.S. 260, 191.
• Franx, M., Illingworth, G.D. & Heckman, T.M. 1989b, Ap. J. 344, 613.
• Gerhard, O.E. 1986, M.N.R.A.S. 219, 373.
• Gerhard, O.E. & Binney, J.J. 1985, M.N.R.A.S. 216, 467.
• Hunter, C. & de Zeeuw, P.T. 1992, Ap.J 398, 79.
• Hunter, C. & Qian, E. 1993, M.N.R.A.S. 262, 401.
• Kuijken, K. 1993, Ap.J. 409, 68.
• Levison, H.F. 1987, Ap.J. 320, L93.
• Merritt, D. & Valluri, M. 1996, Ap.J. 471, 82.
• Miralda-Escude, J. & Schwarzschild, M. 1989, Ap.J. 339, 752.
• Newton, A. & Binney, J.J. 1984, M.N.R.A.S. 210, 711.
• Pfenniger, D. 1984, Astr.Ap. 141, 171.
• Richstone, D.O. & Tremaine, S.D. 1988, Ap.J. 327, 82.
• Satoh, C. 1980, P.A.S.J. 32, 41.
• Schwarzschild, M. 1979, Ap.J. 232, 236.
• Schwarzschild, M. 1993, Ap.J. 409, 563.
• Statler, T.S. 1987, Ap.J. 321, 113.
• Toomre, A. 1982, Ap.J. 259, 535.
• van der Marel, R. 1991, M.N.R.A.S. 253, 710.
• van der Marel, R., Binney, J.J., & Davies, R.L. 1990, M.N.R.A.S. 245, 582.
Joshua E. Barnes (barnes@galileo.ifa.hawaii.edu)
Last modified: March 6, 1997
Wayland, MA Geometry Tutor
Find a Wayland, MA Geometry Tutor
...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student
at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point.
16 Subjects: including geometry, French, elementary math, algebra 1
...My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. My strength is my
ability to look at a challenging concept from different angles.
8 Subjects: including geometry, calculus, algebra 1, algebra 2
...I currently have a Master's degree in Microbiology from Loyola University in Chicago. I also earned my Bachelor's degree in Biochemistry from Mount Holyoke College. Since leaving graduate school
I've worked in a lab at Brigham and Women's Hospital, where I expanded my knowledge of microbiology and molecular biology techniques and even got the chance to work with animals.
22 Subjects: including geometry, reading, algebra 1, writing
I am a retired university math lecturer looking for students who need an experienced tutor. Relying on more than 30 years of experience in teaching and tutoring, I strongly believe that my profile is
a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including geometry, calculus, statistics, algebra 1
...When he fell behind in Precalculus, I hired a tutor for him. Watching him work with the tutor as he blossomed and succeeded was very moving. My goal is to be an inspiration for you as well!
29 Subjects: including geometry, calculus, GRE, algebra 1
Controllability Matrix [Control Theory]
A slightly clearer but somewhat less rigorous connection (only controllability [itex]\Longrightarrow[/itex] rank condition!) can be made as follows. We can solve the differential equation system that you have provided and obtain
[tex]x(t) = e^{At}x(0) + \int_0^{t}e^{A(t-\tau)}Bu(\tau)\,d\tau[/tex]
Let's assume zero initial conditions for simplicity. Now, since controllability means that I can reach any x(t), the integral equals x(t) for a suitable input u(t). Let's use the Taylor series of the matrix exponential:
[tex]x(t) = \int_0^{t}\left(I + A(t-\tau) + \frac{A^2}{2!}(t-\tau)^2 + \cdots \right)Bu(\tau)\,d\tau[/tex]
You can take the constant matrices out and obtain a matrix-vector multiplication (though infinite dimensional):
[tex]x(t) = \begin{bmatrix}B & AB & A^2B & \cdots\end{bmatrix}\begin{pmatrix}\int_0^{t}u(\tau)\,d\tau \\ \int_0^{t}(t-\tau)u(\tau)\,d\tau \\ \int_0^{t}\frac{1}{2!}(t-\tau)^2u(\tau)\,d\tau \\ \vdots\end{pmatrix} = \mathcal{C}_\infty\,\mathcal{U}[/tex]
I denote the matrix part as [itex]\mathcal{C}_\infty[/itex]. Now, since we assume controllability, we should be able to obtain any x(t), hence [itex]\mathcal{C}_\infty[/itex] must have full row rank. But from the Cayley-Hamilton theorem we know that any power of A of degree n or higher can be rewritten as a linear combination of the powers of A up to degree n-1. This means that the blocks after [itex]A^{n-1}B[/itex] add no information about the rank of this matrix, since the remaining terms are linear combinations of the first n blocks. Thus,
[tex]rank(\mathcal{C}_\infty) = rank(\mathcal{C}) = rank\left(\begin{bmatrix}B & AB & A^2B & \cdots & A^{n-1}B\end{bmatrix}\right)[/tex]
In the case of SISO systems, [itex]\mathcal{C}[/itex] happens to be square, so the rank condition is equivalent to the determinant being nonzero.
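A quick numerical check of this rank condition (a NumPy sketch of my own, not part of the original post): build the finite matrix [itex]\mathcal{C}[/itex] and compare its rank with n.

import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example: a controllable single-input chain of two integrators.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = controllability_matrix(A, B)
print(C)                                         # [[0. 1.], [1. 0.]]
print(np.linalg.matrix_rank(C) == A.shape[0])    # True -> controllable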
Operating system for this investigation. (a) Geometry of the DF-CCP chamber. The LF (10 MHz) is applied on the lower electrode, and the HF (40 MHz) is applied on the upper electrode. One of the two
frequencies is operated in pulse mode with a few tens of kHz PRF. (b) Electrical schematic for the DF-CCP system. The BC is connected in series with the lower electrode.
(Color online) Electron density (left) and temperature (right) when pulsing the HF power at different times during the pulsed cycle (as indicated in the lower figure). (Ar/CF4/O2 = 75/20/5, 40 mTorr,
200 sccm, LF= 250 V at 10 MHz cw, HF = 250 V at 40 MHz in pulse mode with BC = 1 μF, PRF = 50 kHz and 25% duty-cycle.) The electron density is modulated by about 30% during the pulse cycle while the
electron temperature shows nearly instantaneous changes as the HF power toggles on and off, especially near the sheaths due to enhanced stochastic heating.
(Color online) Electron density and temperature when pulsing the LF power at different times during the pulsed cycle (as indicated in the lower figure). (Ar/CF4/O2 = 75/20/5, 40 mTorr, 200 sccm, LF =
250 V at 10 MHz in pulse mode with BC = 1 μF, PRF = 50 kHz and 25% duty-cycle, HF = 250 V at 40 MHz cw.) Pulsing the LF power produces nominal intercycle changes in electron density and temperature
over the pulse period as the majority of the LF power is dissipated in ion acceleration.
(Color online) Plasma potential, VP , and dc-bias, Vdc , during one pulse period when pulsing the HF power (PRF = 50 kHz, 25% duty-cycle). (a) BC = 10 nF and (b) BC = 1 μF. The sheath potential is VS
= VP − Vdc . The LF power is always on and the HF power is on only during the pulse window of 25%. Due to the smaller RC time constant with the small BC, the dc-bias responds more quickly. Since the
voltage amplitude of the LF power rides on the dc-bias, the maximum envelope of the plasma potential has the same shape as the dc-bias.
(Color online) Plasma potential, VP , and dc-bias, Vdc , during one period when pulsing the LF power (PRF = 50 kHz, 25% duty-cycle). (a) BC = 10 nF and (b) BC = 1 μF. The sheath potential is VS = VP
− Vdc . The HF power is always on and the LF power is on only during the pulse window of 25%. The plasma potential is mainly determined throughout the pulse period by the voltage amplitude of the cw
HF power. The dynamic range of dc-bias is larger with the smaller BC.
(Color online) Total IEDs for all ions with different sizes of the BC for the base case (40 mTorr, 250 V at 10 MHz, 250 V at 40 MHz). (a) cw operation, (b) pulsing HF power, and (c) pulsing LF power.
Pulsing has a PRF of 50 kHz and 25% duty-cycle. The IED is insensitive to the size of BC with cw operation while its shape depends on the size of BC with pulsed operation.
(Color online) Total IEDs for all ions for different PRFs when pulsing the HF power with a 25% duty-cycle. (a) BC = 10 nF and (b) BC = 1 μF. The IED becomes single-peaked in appearance with the
smaller BC while the IED maintains a multiple-peaked shape with the larger BC. The IEDs with larger PRFs extend to the higher energies.
(Color online) DC-bias as a function of normalized time (which is time divided by the length of each pulse period) with different PRFs when pulsing the HF power with a 25% duty-cycle. (a) BC = 10 nF
and (b) BC = 1 μF. The LF power is cw. During power-on period, the dc-bias becomes less negative with some overshoot with smaller PRFs.
(Color online) Ion energy distributions for O+, Ar+, and CF3 + when pulsing the HF power. (a) BC = 10 nF and (b) BC = 1 μF.
(Color online) Total IEDs for all ions for different PRFs when pulsing the LF power with a 25% duty-cycle. (a) BC = 10 nF and (b) BC = 1 μF. The IED extends to higher energies with the smaller BC.
(Color online) DC-bias as a function of the normalized time (which is time divided by the length of each pulse period) with different PRFs when pulsing the LF power with a 25% duty-cycle. (a) BC = 10
nF and (b) BC = 1 μF. The HF power is cw. If the size of BC is small enough for the dc-bias to respond to the voltage on the electrode, the temporal behavior of the dc-bias is similar for different PRFs.
(Color online) IEDs for O+, Ar+, and CF3 + when pulsing the LF power. (a) BC = 10 nF and (b) BC = 1 μF.
(Color online) Total IEDs for all ions for different duty-cycles when pulsing the HF power with a PRF of 50 kHz. (a) BC = 10 nF and (b) BC = 1 μF. The LF power is cw. The smaller duty-cycle tends to
produce an extended energy range in the IED.
(Color online) Temporal behavior of dc-bias with different duty-cycles when pulsing the HF power with a PRF of 50 kHz. (a) BC = 10 nF and (b) BC = 1 μF. The LF power is cw. The dynamic range of the
dc-bias is from 0 V to −200 V with the smaller BC while the range is only from −60 to −90 V with larger BC.
(Color online) Total IEDs for all ions for different duty-cycles when pulsing the LF power with a PRF of 50 kHz. (a) BC = 10 nF and (b) BC = 1 μF. The HF power is cw. The amplitude of the low energy
peak diminishes while the amplitude of the high energy peak increases as the duty-cycle increases. The IED becomes similar to that of the cw case with further increase of the duty-cycle.
(Color online) Temporal behavior of dc-bias with different duty-cycles when pulsing the LF power with a 50 kHz PRF. (a) BC = 10 nF and (b) BC = 1 μF. The HF power is cw. The dynamic range is from −40
to +80 V with the smaller BC while the range is at most ±15 V at 25% duty-cycle with the larger BC. Note that the range of oscillation of the dc-bias is similar for different duty-cycles with the smaller BC
while the range is shifted by duty-cycle with the larger BC.
Scitation: Role of the blocking capacitor in control of ion energy distributions in pulsed capacitively coupled plasmas sustained in Ar/CF4/O2
Santa Fe, TX Algebra Tutor
Find a Santa Fe, TX Algebra Tutor
...I have experience tutoring in phonics, reading, reading comprehension and math, including elementary math, pre-algebra, algebra & geometry. I possess a special talent for making learning fun by
utilizing creative approaches to which each student can relate. I also have experience tutoring for the Texas St...
12 Subjects: including algebra 1, reading, English, geometry
I'm an early admissions student at San Jacinto College. My best subjects to teach are English and social studies, but I am good with most basic subjects. If you are requesting more advanced math or
science tutoring (anatomy & physiology, physics, algebra, etc.), I would appreciate a heads up sometime before the lesson, so I can refresh my memory.
25 Subjects: including algebra 1, reading, English, physics
...I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for groups of students to give them extra
practice on their course material and help to answer any questions that they might have, I have tutored ...
7 Subjects: including algebra 1, algebra 2, calculus, statistics
...I have mentored graduate, undergraduate, and high school students in Biochemistry research projects. I served as a mentor for the American Chemical Society's NC Project SEED. I served as a
teaching assistant for an undergraduate Biochemistry course at Duke.
10 Subjects: including algebra 1, algebra 2, geometry, general computer
...I then did an interdisciplinary Master of Arts in Arid and Semi-arid Land Studies at Texas Tech. At the time, my GRE score was the highest they had ever seen. To relax, I solve logic puzzles,
but also like classical music and film.I am a teacher in the state of Texas, certified to teach grades 4-8, since 2003.
41 Subjects: including algebra 2, English, writing, algebra 1
Report of Sales in cash register
For a cash register program that I am working on for a class, I need to display a report of sales that shows the total quantity of each item ordered during the day (updated as orders are
entered), the total sales for each item, and the grand total of sales for the day. Right now it displays the report of sales, but everything is at 0. Can anyone help me with this?
Code: Select all
# Menu
def menu():
    """provides a menu for a restaurant"""
    items = [("H", "Hamburger", "$1.29"), ("O", "Onion Rings", "$1.09"), ("C", "Cheeseburger", "$1.49"),
             ("S", "Small Drink", "$0.79"), ("F", "Fries", "$0.99"), ("L", "Large Drink", "$1.19")]
    for x in items:
        letter, name, prices = x
        print letter, "\t", name, prices
    print "A\tEnd Order"
    print "R\tReport of Sales"

# Input Function
def input():
    """user inputs what they want"""
    choice = "again"
    subtotal = 0
    htotal = 0
    ototal = 0
    ctotal = 0
    stotal = 0
    ftotal = 0
    ltotal = 0
    while choice.upper() != "A" and choice.upper() != "R":
        choice = raw_input("\nEnter a letter that corresponds to what you would like to order: ")
        if choice.upper() == "H":
            print "Hamburger\t$1.29"
            subtotal = subtotal + 1.29
            htotal = htotal + 1
        elif choice.upper() == "O":
            print "Onion Rings\t$1.09"
            subtotal = subtotal + 1.09
            ototal = ototal + 1
        elif choice.upper() == "C":
            print "Cheeseburger\t$1.49"
            subtotal = subtotal + 1.49
            ctotal = ctotal + 1
        elif choice.upper() == "S":
            print "Small Drink\t$0.79"
            subtotal = subtotal + .79
            stotal = stotal + 1
        elif choice.upper() == "F":
            print "Fries\t$0.99"
            subtotal = subtotal + .99
            ftotal = ftotal + 1
        elif choice.upper() == "L":
            print "Large Drink\t$1.19"
            subtotal = subtotal + 1.19
            ltotal = ltotal + 1
        elif choice.upper() == "A":
            subtotal = subtotal
        else:
            print "Please enter a correct choice."
    grandtotal = htotal + ototal + ctotal + stotal + ftotal + ltotal
    return choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal

# calc function
def calc(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal):
    tax = subtotal * .05
    total = subtotal + tax
    print "Subtotal: ", subtotal
    print "Tax: ", tax
    print "Total: $", total
    amount = float(raw_input("\nEnter the amount collected: "))
    if amount >= total:
        change = amount - total
        print "Change: $", change
    elif amount < total:
        amount = float(raw_input("That is not enough money. Please reenter the amount collected: "))
        change = amount - total
        print "Change: $", change
    return tax

# report function
def report(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal, tax):
    """displays the report of sales"""
    print "Item\t\t\tQuantity\tSales"
    print "Hamburgers\t\t", htotal, "\t\t", htotal * 1.29
    print "Cheeseburgers\t\t", ctotal, "\t\t", ctotal * 1.49
    print "Fries\t\t\t", ftotal, "\t\t", ftotal * .99
    print "Onion Rings\t\t", ototal, "\t\t", ototal * 1.09
    print "Small Drink\t\t", stotal, "\t\t", stotal * .79
    print "Large Drink\t\t", ltotal, "\t\t", ltotal * 1.19
    print "Total Sales for Day:\t\t\t", grandtotal
    print "Total Tax for Day:\t\t\t", grandtotal * tax
    print "Total:\t\t\t\t\t", (grandtotal * tax) + grandtotal

# main scope
choice = "yes"
while choice == "yes":
    choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal = input()
    if choice.upper() == "A":
        tax = calc(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal)
    if choice.upper() == "R":
        report(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal, tax)
    choice = raw_input("\nWould you like to enter a new customer(yes/no)? ")
Re: Report of Sales in cash register
My advice before doing anything else would be... learn to use dictionaries.
This kind of thing:
Code: Select all
menu = {"H":("Hamburger",1.29),"O":("Onion Rings",1.09),"C":("Cheeseburger",1.49),
        "S":("Small Drink",0.79),"F":("Fries",0.99),"L":("Large Drink",1.19)}
history = {}
subtotal = 0
again = "y"
while again == "y":
    choice = ""
    while choice not in menu:
        choice = raw_input("Select from the menu: ").upper()
    item,price = menu[choice]
    print("{:15} ${:.2f}".format(item,price))
    subtotal += price
    history[item] = history.get(item,0)+1
    again = raw_input("Press 'y' to add another item: ").lower()
print("\nSubtotal: ${}\n".format(subtotal))
for food,amount in history.items():
    print("{:15}: {}".format(food,amount))
Would replace all this:
Code: Select all
if choice.upper() == "H":
    print "Hamburger\t$1.29"
    subtotal = subtotal + 1.29
    htotal = htotal + 1
elif choice.upper() == "O":
    print "Onion Rings\t$1.09"
    subtotal = subtotal + 1.09
    ototal = ototal + 1
elif choice.upper() == "C":
    print "Cheeseburger\t$1.49"
    subtotal = subtotal + 1.49
    ctotal = ctotal + 1
elif choice.upper() == "S":
    print "Small Drink\t$0.79"
    subtotal = subtotal + .79
    stotal = stotal + 1
elif choice.upper() == "F":
    print "Fries\t$0.99"
    subtotal = subtotal + .99
    ftotal = ftotal + 1
elif choice.upper() == "L":
    print "Large Drink\t$1.19"
    subtotal = subtotal + 1.19
    ltotal = ltotal + 1
Re: Report of Sales in cash register
I would not name a function raw_input() or input(), regardless of version, simply because of the confusion, not to mention that it overwrites the input built-in function. This wouldn't really matter
in 2.x, but let's assume you converted it to 3.x: the program would overwrite the input() built-in and, in the worst case, that results in an error; in other cases it just wouldn't take the input at all.
In 3.x terms, it is essentially doing this (written in 2.x form):
Code: Select all
def raw_input(var):
    return True
choice = raw_input('test')
Has the class covered classes yet? That would make this a breeze, but the way it is coded now is quite confusing:
returning that many values
Code: Select all
choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal = input()
and that many args:
Code: Select all
report(choice, subtotal, htotal, ototal, ctotal, stotal, ftotal, ltotal, grandtotal, tax)
I believe your problem is in the last while loop. All those variables do not retain their information between customers. Once the loop ends, all that data is gone, so when you call report() with those
values, it shows the defaults of 0. If you change the defaults to some other number, you will see that number in the report instead.
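To make that advice concrete, here is a rough sketch (mine, not from the thread) of keeping running totals that persist between customers by accumulating them outside the per-customer loop; the item names and prices simply mirror the posted menu.

PRICES = {"Hamburger": 1.29, "Onion Rings": 1.09, "Cheeseburger": 1.49,
          "Small Drink": 0.79, "Fries": 0.99, "Large Drink": 1.19}
day_totals = dict((item, 0) for item in PRICES)   # item -> quantity sold today

def record_sale(item):
    """Add one sale of `item` to the day's running totals."""
    day_totals[item] += 1

def report():
    """Print quantity and dollar sales per item, plus the grand total."""
    grand = 0.0
    print "Item\t\tQuantity\tSales"
    for item, qty in day_totals.items():
        sales = qty * PRICES[item]
        grand += sales
        print "%-14s\t%d\t\t$%.2f" % (item, qty, sales)
    print "Total sales for day:\t\t$%.2f" % grand

# Two customers' orders accumulate into the same day_totals dictionary.
for order in (["Hamburger", "Fries"], ["Cheeseburger", "Large Drink", "Fries"]):
    for item in order:
        record_sale(item)
report()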
Re: Report of Sales in cash register
Yeah... all those different variable names to keep track of each item are just bad.
Also... why did you make a new thread about this? The last topic seems like it should have sufficed.
Case Study: Using Coconut to Optimise non-Cartesian MRI for the Cell BE
Christopher Kumar Anand, McMaster University and Optimal Computational Algorithms, Inc.
Coauthors: Wolfram Kahl
We will discuss the opportunities presented by novel multi-core architectures and the Cell BE in particular, and the challenges in harnessing their potential. We will then describe the support
for code optimisation provided by the tool Coconut (COde CONstructing User Tool). Finally, we will demonstrate the use of this tool in optimising non-Cartesian magnetic resonance image
reconstruction. While Coconut makes it easy to develop complicated implementations correctly, it does not automate the replacement of essentially serial algorithms with parallel algorithms. This
is the fun, creative part of software development, which is highly variable from application to application. In the MRI case, we will show how to modify the basic reconstruction processes to tune
the algorithm to the local memory bandwidth, cache size and processor FLOPs.
Adaptive and high-order methods for American option pricing
Duy Minh Dang, University of Toronto
Coauthors: Christina Christara
We present an adaptive and high-order method for pricing American vanilla options written on a single asset with constant volatility and interest rate. The high-order discretization in space is based
on optimal quadratic spline collocation (QSC) methods while the timestepping is handled by the Crank-Nicolson (CN) technique. To control the space error, the mesh redistribution strategy based on
an equidistribution principle is used. At each timestep, solution of only one tridiagonal linear system is required. Numerical results indicate the efficiency of the adaptive techniques and
high-order discretization methods.
Programming and Mathematical Software for Multicore Architectures
Craig C. Douglas, University of Wyoming and Yale University
Essentially all computers sold in 2008 will have multiple cores. The vast majority of multicore CPUs will have identical cores, though not all. In 2009, there will be several CPUs available with
different types of cores. In this talk, I will review multicore architectures briefly and will discuss different ways to program them and provide pointers to mathematical software that now exists
to help use them for scientific programming.
This talk is based on the speaker's experiences teaching a semester course on the topic during the spring 2008 semester as a visiting professor at Texas A&M University.
The Cell BE: architecture/programming, vector elementary functions, and other numerical software
Robert F. Enenkel, IBM Canada Ltd.
Coauthors: Christopher K. Anand, McMaster University and Optimal Computational Algorithms, Inc.
My talk will give an introduction to the Cell BE heterogeneous multiprocessor computer, including topics such as hardware architecture, instruction set and programming considerations, floating
point characteristics, and development/simulation tools. I will then discuss the design of SPU MASS, a high-performance SIMD/vector elementary function library for the Cell BE SPU. If time
allows, I will also give a survey of other numerical libraries and/or scientific computing applications on Cell BE.
Wayne Enright, University of Toronto
Developing Easy to use ODE Software: The associated Cost/Reliability Trade offs
In the numerical solution of ODEs, it is now possible to develop efficient techniques that compute approximate solutions that are more convenient to interpret and use by researchers who are
interested in accurate and reliable simulations of their mathematical models. To illustrate this we have developed a class of numerical ODE solvers that will deliver a piecewise polynomial as the
approximate (or numerical solution) to an ODE problem.
The resulting methods are designed so that the resulting piecewise polynomial will satisfy a perturbed ODE with an associated defect (or residual) that is directly controlled in a consistent way.
We will discuss the quality/cost trade off that one faces when implementing and using such methods. Numerical results on a range of problems will be summarized for methods of orders five through
A Numerical Scheme for the Impulse Control Formulation for Pricing Variable Annuities with a Guaranteed Minimum Withdrawal Benefit (GMWB)
Peter Forsyth, University of Waterloo
Coauthors: Zhuliang Chen
In this paper, we outline an impulse stochastic control formulation for pricing variable annuities with a Guaranteed Minimum Withdrawal Benefit (GMWB) assuming the policyholder is allowed to
withdraw funds continuously. We develop a numerical scheme for solving the Hamilton-Jacobi-Bellman (HJB) variational inequality corresponding to the impulse control problem. We prove the
convergence of our scheme to the viscosity solution of the continuous withdrawal problem, provided a strong comparison result holds. The scheme can be easily generalized to price discrete
withdrawal contracts. Numerical experiments are conducted, which show a region where the optimal control appears to be non-unique.
Dynamically-consistent Finite-Difference Methods for Disease Transmission Models
Abba Gumel, University of Manitoba
Models for the transmission dynamics of human diseases are often formulated in the form of systems of nonlinear differential equations. The (typically) large size and high nonlinearity of some of
these systems make their analytical solutions almost impossible to obtain. Consequently, robust numerical methods must be used to obtain their approximate solutions. This talk addresses the
problem and challenges of designing discrete-time models (finite-difference methods) that are dynamically consistent with the continuous-time disease transmission models they approximate (in
particular in preserving some of the key properties of the continuous-time models such as positivity, boundedness, mimicking correct bifurcations etc.).
Portable Software Development for Multi-core Processors, Many-core Accelerators, and Heterogeneous Architectures
Michael McCool, RapidMind/University of Waterloo
New processor architectures, including many-core accelerators like GPUs, multi-core CPUs, and heterogeneous architectures like the Cell BE, provide many opportunities for improved performance.
However, programming these architectures productively in a performant and portable way is challenging. We have developed a software development platform that uses a common SPMD parallel
programming model for all these processor architectures. The RapidMind platform allows developers to easily create single-source, single-threaded programs with an existing, standard C++ compiler
that can target all the processing resources in such architectures. When compared to tuned baseline code using the best optimizing C++ compilers available, RapidMind-enabled code can demonstrate
speedups of over an order of magnitude on x86 dual-processor quad-core systems and two orders of magnitude on accelerators.
Deterministic and stochastic models in synthetic biology
David McMillen
Dept of Chemical and Physical Sciences, University of Toronto Mississauga
Synthetic biology is an emerging field in which we seek to design and implement novel biological systems, or change the properties of existing ones. Closely related to systems biology, one of the
recurring themes of synthetic biology is the use of cellular models as an aid in design: we want to be able to gain some predictive insight into cellular processes, rather than simply using
A variety of modelling methods have been used to approach this problem, and I will present a non-specialist introduction to the biological issues, and a survey of some popular techniques
currently being applied in biological modelling work.
Modern Radiation Therapy: Image is Everything
Douglas Moseley, Princess Margaret Hospital
Coauthors: Michael B. Sharpe, David A. Jaffray
Recent technical advances in Image-Guided Radiation Therapy (IGRT) have motivated many applications in high-performance computing . These include the addition of a kV x-ray tube and an amorphous
silicone flat-panel detector to the gantry of a medical linear accelerator. A series of 2D kV projection images of the patient are captured as the gantry rotates. These 2D projection images are
then reconstructed into a 3D volumetric image using a filtered back projection technique. The algorithm, a modified Feldkamp, is CPU intensive but highly parallelizable and hence can take
advantage of multi-process architectures. At Princess Margaret Hospital about 25% of our patients receive daily on-line image guidance.
In this talk an introduction to the field of radiation therapy will be presented. The back-projection algorithm will be discussed including an extension of the reconstruction algorithm which in
the presence of patient motion allows for the generation of 4D images (3D in space and 1D in time). Other computing intensive applications in radiation therapy such as image registration (rigid
and non-rigid) will be introduced.
A User-Friendly Fortran 90/95 Boundary Value ODE Solver
Paul Muir, Saint Mary's University
Coauthors: Larry Shampine
This talk will describe a recently developed user-friendly Fortran 90/95 software package for the numerical solution of boundary value ordinary differential equations (BVODEs). This package,
called BVP_SOLVER, was written by the author and Larry Shampine, of Southern Methodist University, and has evolved from an earlier Fortran 77 package called MIRKDC, written by the author and
Wayne Enright, of the University of Toronto. This new solver takes advantage of a number of language features of Fortran 90/95 to provide a substantially simpler experience for the user, while at
the same time employing the underlying robust, high quality algorithms of the original MIRKDC code. BVP_SOLVER has a simple, flexible interface that is related to that of the Matlab BVODE solver,
bvp4c (Kierzenka and Shampine), and at the same time, it has the fast execution speed of Fortran. BVP_SOLVER implements several numerical algorithms that in fact represent improvements over those
employed in MIRKDC. Examples are presented to demonstrate the capabilities of the new solver. Several current projects associated with enhancing the capabilities of BVP_SOLVER will also be discussed.
HPC, an effective tool for three-dimensional simulations of nonlinear internal waves
Van Thinh Nguyen, University of Waterloo
Coauthors: K.G. Lamb
Stratified flows over topographies present challenging geophysical fluid dynamic problems with far reaching implications for circulation and mixing in oceans and the Great Lakes. The complexity
of natural boundaries and topographies means that a three-dimensional model is often necessary. In addition, the hydrostatic approximation breaks down for small-scale processes such as the
steepening, formation and breaking of non linear internal and non-hydrostatic waves; therefore, a non-hydrostatic three-dimensional model is required to study such phenomena. Finally, in order to
simulate small-scale processes, high resolutions are necessary; and as a result, this requires use of High Performance Computers (HPC).
Nonlinear internal waves play an important role in redistribution of nutrients, pollutants and sediments in oceans and lakes. In this study, large scale simulations of nonlinear internal waves
generated by tidal and wind forcing over three-dimensional topographies in the St. Lawrence Estuary (SLE) and Lake Erie are described. The simulations are based on the MIT general circulation
model (MITgcm) developed by Marshall et al., (1997) with some modifications for quasi-two layer fluid and open boundary conditions. The code is designed using a multi-level parallel decomposition
to facilitate efficient execution on all foreseeable parallel computer architectures and processor types (Wrappable Application Parallel Programming Environment Resources Infrastructure).
The simulation domain has been divided into a number of subdomains as tiles. Tiles consist of an interior region and an overlap region with an adjacent tile. The resulting tiles are owned by an
individual processor and it is possible for a processor to own several tiles. The owning processors perform the arithmetic operations associated with a tile. Except for periods of communication
or coordination, each processor computes autonomously, working only with data from the tile (or tiles) that the processor owns. The code has been scaled on different parallel platforms (shared
and distributed memories) provided by SHARCNET (Ontario) and RQCHP (Quebec).
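A toy sketch of the tile-plus-overlap idea described above (my own illustration, not MITgcm code): each tile keeps a halo that is refreshed from its neighbours before the next local update.

import numpy as np

def exchange_halos(tiles, halo=1):
    """Refresh halo cells of 1-D tiles from neighbouring interiors (periodic)."""
    n = len(tiles)
    for k, tile in enumerate(tiles):
        left, right = tiles[(k - 1) % n], tiles[(k + 1) % n]
        tile[:halo] = left[-2 * halo:-halo]    # left halo <- left neighbour's interior edge
        tile[-halo:] = right[halo:2 * halo]    # right halo <- right neighbour's interior edge

# Four tiles, each with one halo cell on either side of a 5-cell interior.
tiles = [np.full(7, float(k)) for k in range(4)]
exchange_halos(tiles)
print(tiles[1])   # interior still 1.0; halos now hold 0.0 (left) and 2.0 (right)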
The numerical results are compared with data from direct observations in the St. Lawrence Estuary (SLE) and Lake Erie. In addition, a comparison of performances between different platforms will
be presented.
Real-Time Big Physics Applications In National Instruments LabVIEW
Darren Schmidt, National Instruments, et al.
Coauthors: Bryan Marker, Lothar Wenzel, Louis Giannone, Bertrand Bauvir, Toomas Erm
Many big physics applications require an enormous amount of computational power to solve large mathematical problems with demanding real-time constraints. Typically, the mathematical model is fed
data from real-world acquisition channels and required to generate outputs in less than a 1 ms for problem sizes of dimension 1,000 to 100,000 and beyond.
We report on an approach for tokamaks and extremely large telescopes that utilizes an application development environment called LabVIEW. This environment offers algorithm development that
naturally exposes parallel processing through its graphical programming language and takes advantage of multi-core processors without user intervention. We show that the required computational
power is accessible through LabVIEW via multi-core implementations targeting a combination of processors including CPUs, FPGAs, and GPUs deployed on standard PCs, workstations, and real-time
systems. Several benchmarks illustrate what is achievable with such an approach.
The Numerical Mathematics Consortium - Establishing a Verification Framework for Numerical Algorithm Development
Darren Schmidt, National Instruments
The Numerical Mathematics Consortium (NMC) is a group of numerics experts from academia and industry working to address some of the challenges faced by developers and users of mathematical
software. The goal of the group is to address artifacts of implementation decisions made by library and system developers in the absence of a standard or even a consensus. The lack of a published
standard means that the user of a mathematical software library must rely on their skill, experience and wits, or on the possibly weakly supported opinions of others, to assess the quality of
such a library or its compatibility with other libraries.
The emergence of new technologies, such as multicore processors, further exacerbates the situation. Developers are racing to develop libraries which can take full advantage of these technologies,
and again the lack of a reference standard will seriously pollute the results.
For the library or system developer, the NMC Standard will provide essential information with respect to mathematical decisions (where the mathematical definition is not unique or does not
uniquely determine the computer library definition), calling sequences, return types and algorithm parameters, among other characteristics. It will also provide accuracy and consistency
benchmarks and evaluations. For the user, the successful evaluation of a numerics library or system against the NMC Standard will provide assurance that the library or system can be used as is
and will provide results consistent with other similarly evaluated libraries or systems.
Data analysis for the Microwave Limb Sounder instrument on the EOS Aura Satellite
Van Snyder, Jet Propulsion Laboratory, California Institute of Technology
A brief discussion of the goals of the Microwave Limb Sounder (MLS) instrument on NASA's Earth Observing System (EOS) Aura satellite, which was launched on 15 July 2004, is followed by a
discussion of the organization and mathematical methods of the software used for analysis of data received from that instrument.
The MLS instrument measures microwave thermal emission from the atmosphere by scanning the earth's limb 3500 times per day. The limb tangent altitude varies from 8 to 80 km. Roughly 600 million
measurements in five spectral bands are returned every day. From these, estimates of the concentration of approximately 20 trace constituents, and temperature, are formed at 70 pressure levels on
each of these scans - altogether roughly 5 million results per day. The program is organized to process "chunks" consisting of about 20 scans, with a five-scan overlap at both ends of the chunk,
giving 350 chunks per day. Each chunk requires about 15 hours on a 3.6 GHz Pentium Xeon workstation. Although one chunk can be processed on a workstation, processing all of the data requires the
attention of 350 such processors.
The software is an interpreter of a "little language, " as described in April 2008 Software: Practice and Experience. This confers substantial benefits for organization, development, maintenance
and operation of the software. Most importantly, it separates the responsibilities, and the requirements for expertise, of the software engineers who develop and maintain the software, from the
scientists who configure the software.
Mathematically, the problem consists of tracing rays through the atmosphere, roughly corresponding to limb views of the instrument. The radiative transfer equation, together with variational
equations with respect to the parameters of interested, are then integrated along these rays. A Newton method is then used to solve for the parameters of interest.
Parallel Option Pricing with Fourier Space Time-stepping Method on Graphics Processing Units
Vladimir Surkov, University of Toronto
With the evolution of Graphics Processing Units (GPUs) into powerful and cost-efficient computing architectures, their range of application has expanded tremendously, especially in the area of
computational finance. Current research in the area, however, is limited in terms of options priced and complexity of stock price models. We present algorithms, based on the Fourier Space
Time-stepping (FST) method, for pricing single and multi-asset European and American options with Levy underliers on a GPU. Furthermore, the single-asset pricing algorithm is parallelized to
attain greater efficiency.
Computational challenges in time-frequency analysis
Hongmei Zhu, York University
Time-varying frequencies is one of the most common features in the signals found in a wide range of applications such as biology, medicine and geophysics. Time-frequency analysis provides various
techniques to represent a signal simultaneously in time and frequency, with the aim of revealing how the frequency content evolves over time. Often the time-frequency representation of an N-point
time series is stored in an N-by-N matrix. It presents computational challenges as the size or the dimensionality of data increases. Therefore, the availability of efficient algorithms or
computing schemes is the key to fully utilize the properties of the time-frequency analysis for practical applications. This talk addresses these computational challenges arising in biomedical
applications and summarizes recent progresses in this area.
The Design and Implementation of a Modeling Package
Hossein ZivariPiran, University of Toronto
Designing a modeling package with different functionalities (simulation, sensitivity analysis, parameter estimation) is a challenging and difficult process. Time consuming task of implementing
efficient algorithms for doing core computations, designing a user-friendly interface, balancing generality and efficiency, and manageability of the code are just some of the issues. In this
talk, based on our recent experience in developing a package for modeling delay differential equations, we discuss some techniques that can be used to overcome some of these difficulties. We
present a process for incorporating existing codes to reduce the implementation time. We discuss the object-oriented paradigm as a way of having a manageable design and also a user-friendly interface.
aker Program
Hey, I'm new to this forum. I'm in grade 10, taking programming for the first time, and after 2 weeks of mindf***ing stuff I was given our first project: making a change maker. Basically what it's
supposed to do is take an amount of money, e.g. 15.65, and convert that amount to the least amount of change, so it'll be like 7 toonies, 1 loonie, 2 quarters, 1 dime and 1 nickel. After
constantly bugging my teacher she's given me a hint: you can't divide decimals, so multiply by 100 to remove any decimals. After getting decently far (I think?) I was stumped when my program
didn't do what I wanted it to do. I have basic knowledge that Mod means the remainder, \ is like integer division (I think it rounds the result, or something), and / is regular division. I know you
don't do homework, but can you give me more hints or fix my error? Don't laugh please, I'm a programming virgin.
Dim num1 As Double
Dim num2 As Double
Dim num3 As Double
Dim num4 As Double
Dim num5 As Double
Dim num6 As Double
Dim num7 As Double
Dim num8 As Double
Dim num9 As Double
Private Sub Command1_Click()
    num1 = Text1
    num2 = (num1 * 100)
    num3 = (num2 \ 200)
    num4 = (num2 Mod 200)
    num5 = (num4 Mod 25)
    Label1.Caption = num3
    Label2.Caption = num4
    Label3.Caption = num5
End Sub
By the way, my error is that num3 in Label1 correctly displays how many toonies, but num4 in Label2 is supposed to display how many loonies and it doesn't, and num5 in Label3 is supposed to display
how many quarters, and so on. Thanks!
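No reply is included in this extract, so here is a rough sketch (in Python rather than Visual Basic) of the greedy approach the teacher's hint points at: convert the amount to whole cents, then peel off each denomination with integer division and keep the remainder with mod.

# Greedy change-making: work in whole cents so integer division and mod apply.
DENOMINATIONS = [("toonies", 200), ("loonies", 100), ("quarters", 25),
                 ("dimes", 10), ("nickels", 5), ("pennies", 1)]

def make_change(amount_dollars):
    remaining = int(round(amount_dollars * 100))   # e.g. 15.65 -> 1565 cents
    counts = {}
    for name, value in DENOMINATIONS:
        counts[name] = remaining // value          # how many of this coin fit
        remaining = remaining % value              # what is left for smaller coins
    return counts

print(make_change(15.65))
# {'toonies': 7, 'loonies': 1, 'quarters': 2, 'dimes': 1, 'nickels': 1, 'pennies': 0}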
Specifying Derivatives
The function FindRoot has a Jacobian option; the functions FindMinimum, FindMaximum, and FindFit have a Gradient option; and the Newton method has a Hessian method option. All these derivatives are specified with
the same basic structure. Here is a summary of ways to specify derivative computation methods.
Methods for computing gradient, Jacobian, and Hessian derivatives.
The basic specification for a derivative is just the method for computing it. However, all of the derivatives take options as well. These can be specified by giving a list containing the method together with its options. Here is a summary of the
options for the derivatives.
Options for computing gradient, Jacobian, and Hessian derivatives.
A few examples will help illustrate how these fit together.
With just Method->"Newton", FindMinimum issues an lstol message because it was not able to resolve the minimum well enough due to lack of good derivative information.
The following describes how you can use the gradient option to specify the derivative.
Symbolic derivatives are not always available. If you need extra accuracy from finite differences, you can increase the difference order from the default of 1 at the cost of extra function evaluations.
Note that the number of function evaluations is much higher because function evaluations are used to compute the gradient, which is used to approximate the Hessian in turn. (The Hessian is computed
with finite differences since no symbolic expression for it can be computed from the information given.)
The information given from about the number of function, gradient, and Hessian evaluations is quite useful. The EvaluationMonitor options are what make this possible. Here is an example that simply
counts the number of each type of evaluation. (The plot is made using Reap and Sow to collect the values at which the evaluations are done.)
Using such diagnostics can be quite useful for determining what methods and/or method parameters may be most successful for a class of problems with similar characteristics.
When Mathematica can access the symbolic structure of the function, it automatically does a structural analysis of the function and its derivatives and uses SparseArray objects to represent the
derivatives when appropriate. Since subsequent numerical linear algebra can then use the sparse structures, this can have a profound effect on the overall efficiency of the search. When Mathematica
cannot do a structural analysis, it has to assume, in general, that the structure is dense. However, if you know what the sparse structure of the derivative is, you can specify this with the method
option and gain huge efficiency advantages, both in computing derivatives (with finite differences, the number of evaluations can be reduced significantly) and in subsequent linear algebra. This
issue is particularly important when working with vector-valued variables. A good example for illustrating this aspect is the extended Rosenbrock problem, which has a very simple sparse structure.
For a function with simple form like this, it is easy to write a vector form of the function, which can be evaluated much more quickly than the symbolic form can, even with automatic compilation.
The solution with the function, which is faster to evaluate, winds up being slower overall because the Jacobian has to be computed with finite differences since the pattern makes it opaque to
symbolic analysis. It is not so much the finite differences that are slow as the fact that it needs to do 100 function evaluations to get all the columns of the Jacobian. With knowledge of the
structure, this can be reduced to two evaluations to get the Jacobian. For this function, the structure of the Jacobian is quite simple.
When a sparse structure is given, it is also possible to have the value computed by a symbolic expression that evaluates to the values corresponding to the positions given in the sparse structure
template. Note that the values must correspond directly to the positions as ordered in the SparseArray (the ordering can be seen using ArrayRules). One way to get a consistent ordering of indices is
to transpose the matrix twice, which results in a SparseArray with indices in lexicographic order.
In this case, using the sparse Jacobian is not significantly faster because the Jacobian is so sparse that a finite difference approximation can be found for it in only two function evaluations and
because the problem is well enough defined near the minimum that the extra accuracy in the Jacobian does not make any significant difference.
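The structured finite-difference idea is not tied to Mathematica. Here is a rough NumPy sketch (function names and grouping choices are mine) showing how knowledge of the sparsity pattern lets the whole Jacobian of the extended Rosenbrock residuals be recovered from two extra function evaluations.

import numpy as np

def rosenbrock_residuals(x):
    """Extended Rosenbrock residuals: pairs (10*(x[2i+1] - x[2i]**2), 1 - x[2i])."""
    f = np.empty_like(x)
    f[0::2] = 10.0 * (x[1::2] - x[0::2] ** 2)
    f[1::2] = 1.0 - x[0::2]
    return f

def grouped_fd_jacobian(f, x, groups, rows_touched, h=1e-7):
    """Forward-difference Jacobian using column groups that share no rows.
    Each group costs one extra evaluation of f, however many columns it holds."""
    J = np.zeros((x.size, x.size))
    f0 = f(x)
    for cols in groups:
        xp = x.copy()
        xp[cols] += h
        df = (f(xp) - f0) / h
        for j in cols:
            J[rows_touched[j], j] = df[rows_touched[j]]
    return J

n = 10
x = np.full(n, 0.5)
groups = [np.arange(0, n, 2), np.arange(1, n, 2)]       # even columns, odd columns
rows_touched = {j: [j, j + 1] if j % 2 == 0 else [j - 1] for j in range(n)}
J = grouped_fd_jacobian(rosenbrock_residuals, x, groups, rows_touched)
print(np.round(J[:4, :4], 3))    # block-diagonal 2x2 structure, as expected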
2005.48: Iterative Solution of a Nonsymmetric Algebraic Riccati Equation
2005.48: Chun-Hua Guo and Nicholas J. Higham (2005) Iterative Solution of a Nonsymmetric Algebraic Riccati Equation.
We study the nonsymmetric algebraic Riccati equation whose four coefficient matrices are the blocks of a nonsingular $M$-matrix or an irreducible singular $M$-matrix $M$. The solution of practical
interest is the minimal nonnegative solution. We show that Newton's method with zero initial guess can be used to find this solution without any further assumptions. We also present a qualitative
perturbation analysis for the minimal solution, which is instructive in designing algorithms for finding more accurate approximations. For the most practically important case, in which $M$ is an
irreducible singular $M$-matrix with zero row sums, the minimal solution is either stochastic or substochastic and the Riccati equation can be transformed into a unilateral matrix equation by a
procedure of Ramaswami. The minimal solution of the Riccati equation can then be found by computing the minimal nonnegative solution of the unilateral equation using the Latouche--Ramaswami
algorithm. We show that the Latouche--Ramaswami algorithm, combined with a shift technique suggested by He, Meini, and Rhee, is breakdown-free in all cases and is able to find the minimal solution more
efficiently and more accurately than the algorithm without a shift. Our approach is to find a proper stochastic solution using the shift technique even if it is not the minimal solution. We show how
we can easily recover the minimal solution when it is not the computed stochastic solution.
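As a rough illustration of the Newton iteration with zero initial guess (a sketch that assumes the Riccati equation is written in the common form XCX - XD - AX + B = 0 with M = [[D, -C], [-B, A]]; this is not the authors' code), each step amounts to solving one Sylvester equation:

import numpy as np
from scipy.linalg import solve_sylvester

def newton_nare(A, B, C, D, tol=1e-12, max_iter=50):
    """Newton iteration for R(X) = X C X - X D - A X + B = 0, starting from X = 0.
    Each step solves the Sylvester equation (X C - A) H + H (C X - D) = -R(X)."""
    X = np.zeros_like(B)
    for _ in range(max_iter):
        R = X @ C @ X - X @ D - A @ X + B
        if np.linalg.norm(R, 1) < tol:
            break
        H = solve_sylvester(X @ C - A, C @ X - D, -R)
        X = X + H
    return X

# Toy data: blocks of a nonsingular M-matrix M = [[D, -C], [-B, A]].
M = 2.0 * np.eye(4) - 0.3 * np.ones((4, 4))
D, C = M[:2, :2], -M[:2, 2:]
B, A = -M[2:, :2], M[2:, 2:]
X = newton_nare(A, B, C, D)
print("residual norm:", np.linalg.norm(X @ C @ X - X @ D - A @ X + B))
print("computed solution is nonnegative:", bool(np.all(X >= 0)))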
Item Type: MIMS Preprint
Uncontrolled Keywords: nonsymmetric algebraic Riccati equation, $M$-matrix, minimal nonnegative solution, perturbation analysis, Newton's method, Latouche--Ramaswami algorithm, shifts
Subjects: MSC 2000 > 15 Linear and multilinear algebra; matrix theory
MSC 2000 > 65 Numerical analysis
MIMS number: 2005.48
Deposited By: Nick Higham
Deposited On: 16 December 2005
What is taller Mount Everest or K2?
Mount Everest
Mount Everest - also called Qomolangma Peak, Mount Sagarmāthā, Chajamlungma, Zhumulangma Peak or Mount Chomolungma - is the highest mountain on Earth above sea level, and the highest point on the
Earth's continental crust.
MathFiction: Milo and Sylvie (Eliot Fintushel)
Contributed by "William E. Emba"
"Shapeshifting is treated as a form of Banach-Tarski equidecomposition. And part of a Zorn's Lemma proof is given explicitly."
This story appeared in the March 2000 issue of Asimov's Science Fiction. A sequel appeared in the same magazine almost three years later.
For those who may not know, the Banach-Tarski Theorem is a real, surprising, and somewhat disturbing theorem of geometry. What it says, essentially, is that any sphere can be broken apart into a
finite number of pieces and then reassembled into another sphere of any desired volume. Certainly this is disturbing: one is inclined either to be impressed that mathematics has shown us that volume
is not what we think it is, or perhaps one will conclude that mathematics doesn't make sense after all! [See Division by Zero]. When I learned about it as an undergraduate (back in the 20th century)
we were told that this was an indication of possible problems with the Axiom of Choice (an axiom of set theory that is not universally popular), but that viewpoint seems to be out of date. This is
now seen as just one of many indications that volume is a slipperier topic than one might expect. In particular, as this theorem and others like it show, volume cannot be extended to arbitrary sets in a
finitely additive way: in dimensions 3 and higher there is no finitely additive, rotation- and translation-invariant measure defined on all sets that agrees with ordinary volume. In other words, when it comes to volume, the whole may NOT be equal to the sum of its parts.
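For readers who want a precise statement behind the informal description above, one standard formulation (the "doubling" form, from which the any-desired-volume version quoted here follows) is the following:

\textbf{Theorem (Banach--Tarski, doubling form).} Let $B$ be a closed ball in $\mathbb{R}^3$.
Then for some $n$ there are pairwise disjoint sets $A_1,\dots,A_n$ with
$B = A_1 \cup \cdots \cup A_n$, isometries $g_1,\dots,g_n$ of $\mathbb{R}^3$,
and an index $1 \le k < n$ such that
\[
  g_1 A_1 \cup \cdots \cup g_k A_k = B
  \qquad \text{and} \qquad
  g_{k+1} A_{k+1} \cup \cdots \cup g_n A_n = B .
\]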
Now, please forgive me for being too serious, but it annoyed me that the story misuses the theorem. If one were to ignore the atomist view of matter, and if one had a way to break matter (even your
own body) up into pieces of arbitrary shape, then the Banach-Tarski theorem WOULD give you a way to reassemble those pieces into something of a different volume. However, this story makes it sound as
if it is the theorem that gives them the power to break their body into pieces, and that's just silly. (Sorry.)
Business Calculus
A promissory note will pay $50,000 at maturity 5 years from now. The note has an interest rate of 6.4% compounded continuously...
1. What is the note worth right now?
2. You bought the note and cashed it after 5 years. How much interest did you earn?
I set up the equation as 50,000=Ce^.064*0, but how do I solve that? And then I have no idea how to do the second part of the question... PLEASE HELP!
A promissory note will pay $50,000 at maturity 5 years from now. The note has an interest rate of 6.4% compounded continuously...
1. What is the note worth right now?
2. You bought the note and cashed it after 5 years. How much interest did you earn?
I set up the equation as 50,000=Ce^.064*0, but how do I solve that? And then I have no idea how to do the second part of the question... PLEASE HELP!
The equation you require is:
V(t)=Ce^{0.064 t},
where t is in years. So in this case:
C = 50000/e^{0.064*5}.
The interest paid is 50000 - C.
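A quick sketch to check the arithmetic (rounding shown to the cent):

import math

face_value = 50000.0
rate, years = 0.064, 5

present_value = face_value / math.exp(rate * years)   # C = 50000 / e^(0.064*5)
interest = face_value - present_value                  # interest earned over 5 years

print("Present value now: $%.2f" % present_value)
print("Interest earned at maturity: $%.2f" % interest)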
LA_GGES computes, for a pair of n-by-n real or complex matrices (A, B), the generalized eigenvalues, the generalized real or complex Schur form (S, T), and, optionally, the matrices of left and/or right Schur vectors.
The complex Schur form is a pair of matrices (S, T) in which both S and T are upper triangular; in the real Schur form, T is upper triangular and S is quasi-upper triangular with 1-by-1 and 2-by-2 blocks on its diagonal.
In both cases the columns of the matrices of Schur vectors are orthonormal (unitary in the complex case), and they reduce the pair (A, B) to generalized Schur form.
A generalized eigenvalue of the pair (A, B) is, roughly speaking, a scalar λ = α/β such that the matrix A - λB is singular.
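For a concrete picture of what this routine produces, here is a small sketch using SciPy (scipy.linalg.qz wraps the same underlying LAPACK gges routines; this is an illustration, not LA_GGES itself):

import numpy as np
from scipy.linalg import qz, eigvals

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Generalized real Schur form: A = Q @ S @ Z.T and B = Q @ T @ Z.T, with T upper
# triangular and S quasi-upper triangular (1x1 and 2x2 diagonal blocks).
S, T, Q, Z = qz(A, B, output="real")
print(np.allclose(A, Q @ S @ Z.T), np.allclose(B, Q @ T @ Z.T))

# Generalized eigenvalues: scalars lambda with det(A - lambda*B) = 0.
print(np.sort_complex(eigvals(A, B)))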
Northern Prairie Wildlife Research Center
Proceedings of the North Dakota Academy of Science
Carnivore Scent-station Surveys: Statistical Considerations
Glen A. Sargeant* and Douglas H. Johnson
Department of Wildlife Ecology, University of Wisconsin,
226 Russell Labs, 1630 Linden Drive, Madison, WI 53706 (GAS)
and United States Geological Survey, Biological Resources Division,
Northern Prairie Science Center, 8711 37th St. SE, Jamestown, ND (DHJ)
This resource is based on the following source (Northern Prairie Publication 1015):
Sargeant, Glen A., and Douglas H. Johnson. 1997. Carnivore scent-station surveys:
statistical considerations. Proceedings of the North Dakota Academy of
Science 51:102-104.
This resource should be cited as:
Sargeant, Glen A., and Douglas H. Johnson. 1997. Carnivore scent-station surveys:
statistical considerations. Proceedings of the North Dakota Academy of
Science 51:102-104. Jamestown, ND: Northern Prairie Wildlife Research Center
Online. http://www.npwrc.usgs.gov/resource/mammals/carnivor/index.htm
(Version 17DEC97).
Scent-station surveys are a popular method of monitoring temporal and geographic trends in carnivore populations. We used customary methods to analyze field data collected in Minnesota during 1986-93
and obtained unsatisfactory results. Statistical models fit poorly, individual carnivores had undue influence on summary statistics, and comparisons were confounded by factors other than abundance.
We conclude that statistical properties of scent-station data are poorly understood. This fact has repercussions for carnivore research and management. In this paper, we identify especially important
aspects of the design, analysis, and interpretation of scent-station surveys.
Animal abundance is one appropriate measure for gauging the success of wildlife management, monitoring the status of threatened and endangered species, and determining the outcome of many
experiments. Thus, estimates of abundance are among the most important information needs of wildlife managers. Unfortunately, many carnivores are cryptic, secretive, and occur at low density.
Accurate estimates of abundance are seldom obtainable for such species, so indices of relative abundance often substitute (see species accounts in Novak et al. [1]). Carnivore scent-station surveys
are one such index.
We used standard methods to analyze scent-station data collected in Minnesota during 1986-93. Although our data set was among the largest in existence, we were frustrated by inadequate sample sizes.
The most popular statistical model for scent-station data fit poorly. Anomalous data had undue influence on summary statistics and affected results of statistical comparisons. To overcome these
problems, we devised improved methods for using scent-station surveys to monitor temporal and geographic trends in carnivore populations.
The difficulties we encountered can be traced to a few key features of survey designs and methods of analysis. These include the spatial distribution of scent stations, the experimental unit chosen
for analyses, the statistic used to summarize results, the statistical model underlying analyses, and confounding of statistical comparisons. In this paper, we discuss these aspects of the design and
analysis of carnivore scent-station surveys. Our presentation will demonstrate the use of field data to resolve issues raised in this paper.
Survey Methods
The carnivore survey conducted annually by the Minnesota Department of Natural Resources and the U.S. Fish and Wildlife Service was the source of field data for our presentation. Each scent station
consisted of a 0.9-m diameter circle of smoothed earth with a scented lure placed at the center. Stations were grouped in lines to simplify data collection. Ten scent stations placed along an unpaved
road at 480 m intervals comprised a line. Minimum spacing between lines was 5 km. Sampling was non-random, but 441 lines were distributed throughout the state. Each line was operated for one night
each year between late August and mid-October, though not all lines were operated every year. Presence or absence of tracks was recorded, by species, at each station when it was checked the day after
Choosing An Experimental Unit
Scent-station surveys vary in design. Sometimes stations are not grouped, as they were in Minnesota. The dispersion of stations should determine how stations are treated in analyses: in some cases,
stations may reasonably be treated as independent samples; in others, they should be considered correlated samples or subsamples. Usually these issues are given inadequate consideration.
Closely spaced stations produce correlated data, but how far correlations extend is unknown. Stations placed too close to one another produce redundant data. Spacing stations more widely than
necessary increases the cost of surveys and precludes intensive sampling of small areas. Subjective estimates of optimum spacing are inconsistent. Some investigators (e.g., Smith et al. [2]) have
treated stations within 320 m of one another as independent samples. Others (e.g., Morrison et al. [3]) thought it necessary to separate stations by as much as 1.6 km. We have used variograms to show
that correlations between stations often extend to 2000 m or more. Separating stations by this great a distance is seldom practical, so we have pursued the development of summary statistics and
methods of analysis that are robust to correlations between stations.
Summary Statistics
Results of scent-station surveys are almost always summarized by visitation rates (p[s]= stations visited/stations operated). As a summary statistic, visitation rates have two serious deficiencies.
First, visitation rates are not directly related to abundance because each station has the capacity for only one detection. When visitation rates are high, many individual carnivores encounter
stations that have already been visited. These additional visits have no effect on visitation rates. The result is a nonlinear relationship between visitation rate and abundance. The form of the
curve is unknown, except for the y-intercept (0) and asymptote (y=1), so visitation rates can be used only to rank abundances.
Second, visitation rates are easily influenced by factors other than abundance, especially when sample sizes are small or visitation rates are low. These may include weather, season, human activity,
or other factors that influence animal behavior. An ideal summary statistic would be robust to such effects. We will use examples to demonstrate the poor performance of visitation rates and present
two alternative summary statistics: the proportion of lines that are visited (p[1]) and the negative natural logarithm of the proportion of lines that are not visited (-ln[1-p[1]]).
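A minimal sketch (ours, not from the paper) of how the three summary statistics could be computed from line-level counts; the toy data and variable names are assumptions made purely for illustration:

import math

# Toy data: number of stations visited (out of 10) on each of several lines.
visited_per_line = [0, 2, 0, 1, 5, 0, 3]
stations_per_line = 10

n_lines = len(visited_per_line)
n_stations = n_lines * stations_per_line

p_s = sum(visited_per_line) / n_stations              # station visitation rate
p_l = sum(v > 0 for v in visited_per_line) / n_lines  # proportion of lines visited
log_index = -math.log(1.0 - p_l)                      # -ln(1 - p_l); undefined if every line is visited

print(p_s, p_l, log_index)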
Statistical Models
For analytical convenience, some investigators treat stations as independent Bernoulli trials: a visit by one or more individuals of a species is a "success." This model leads naturally to convenient
methods of analyzing binomial data, including logistic regression and log-linear models. The benefit of this approach is the ability to investigate variables that affect visitation probabilities of
individual stations (e.g., habitat characteristics). Aggregating stations into groups--lines, in our example--and treating each group as an experimental unit is a more conservative approach. Group
visitation rates are treated as independent random samples from unknown distributions. This approach has been used by investigators (e.g., Roughton and Sweeny [4]) who were unwilling to assume
stations were independent.
To our knowledge, the fit of the binomial model has never been tested. We devised a goodness-of-fit test and found the binomial distribution to be a poor model for visitation rates, but an adequate
one for the proportion of lines with one or more stations visited.
Statistical Comparisons
With few exceptions, statistical analyses of scent-station data have been limited to pairwise comparisons (e.g., of years, seasons, or geographic locations). Significant differences faithfully
reflect changes in abundance only if other factors that affect visitation are relatively constant over time and through space. Some investigators are unaware of the possible confounding effect of
other factors (e.g. weather). Most often, however, only two or three years of data are available and are inadequate for testing the significance of long-term trends. Long-term data sets and careful
analysis are required for separating changes in abundance from changes in confounding factors. We advocate testing for trends by simple linear regression of rank-transformed data: the method is easy
to apply and interpret and is robust to confounding.
Scent-station surveys are widely viewed as an accurate and inexpensive means of simultaneously gaining reliable information about the distribution and relative abundance of several species of
carnivores (Johnson and Pelton [5]). Whether a particular scent-station survey will meet these high expectations depends largely on how the following issues are resolved:
1. Sampling: How should stations be spatially distributed?
2. Response variables: Is p[s] a suitable summary statistic?
3. Statistical models: The binomial distribution has convenient properties, but does it adequately describe field data?
4. Statistical comparisons: Are comparisons confounded by unidentified factors?
We thank the Minnesota Department of Natural Resources, especially W. E. Berg, and the U.S. Fish and Wildlife Service for generously providing survey data. Funding for manuscript preparation was
provided by the Northern Prairie Science Center and the Wisconsin Cooperative Wildlife Research Unit of the Biological Resources Division, U.S. Geological Survey, and by the Graduate School,
Department of Wildlife Ecology, and College of Agriculture and Life Sciences at the University of Wisconsin-Madison.
Literature Cited
1. Novak, M., Baker, J.A., Obbard, M.E. and Malloch, B., eds. (1987) Wild furbearer management and conservation in North America. Ontario Trapper's Association, North Bay, 1150 pp.
2. Smith, W.P., Borden, D.L. and Endres, K.M. (1994) Scent-station visits as an index to abundance of raccoons: an experimental manipulation. J Mammal 75, 637-647.
3. Morrison, D.W., Edmunds, R.M., Linscombe, G. and Goertz, J.W. (1981) Evaluation of specific scent station variables in northcentral Louisiana. Proc Annu Conf of Southeast Assoc Fish and Wildl
Agencies 35, 281-291.
4. Roughton, R.D., and Sweeny. M.D. (1982) Refinements in scent-station methodology for assessing trends in carnivore populations. J Wildl Manage 46, 217-229.
5. Johnson, K.G., and Pelton, M.R. (1981) A survey of procedures to determine relative abundance of furbearers in the southeastern United States. Proc Annu Conf Southeast Assoc Fish Wildl Agencies
35, 261-272.
|
{"url":"http://www.npwrc.usgs.gov/resource/mammals/carnivor/index.htm","timestamp":"2014-04-19T17:10:46Z","content_type":null,"content_length":"16472","record_id":"<urn:uuid:94b740fd-3579-43e0-89c1-668460ba84dd>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
January 2
The latest paper by Eric Verlinde on gravity as an entropic force makes me wonder whether I am getting old: let me admit it, I just don't get it. Is this because I am conservative, lack imagination, or am too narrow-minded? If it were not for the author, I would have rated it as pure crackpottery. But maybe I am missing something. Today, there were three follow-ups dealing with cosmological consequences (the idea being roughly that Verlinde uses the equipartition of energy between degrees of freedom, each getting a share of 1/2 kT, which is not true quantum mechanically at low temperatures, as there the system is in the ground state with the ground state energy. As in this business temperature equals acceleration a la Unruh, this means the argument is modified for small accelerations, which is a modification of MOND type).
Maybe later I will try once more to get into the details and might have some more sensible comments then, but right now the way different equations from all kinds of different settings (Unruh temperature was already mentioned, E=mc^2, one bit per Planck area, etc.) are assembled reminds me of this:
Today, in the "Mathematical Quantum Mechanics" lecture, I learned that the QED vacuum (or at least the quantum mechanical sector of it) is unstable when the fine structure constant gets too big.
To explain this, let's go back to a much simpler problem: Why is the hydrogen-like atom stable? Well, a simple answer is that you just solve it and find the spectrum to be bounded from below
First of all, what is the problem we are considering? It's the potential energy of the electron which in natural (for atomic physics) units is
Close to the nucleus however, the momentum can be so big that you have to think relativistically. But then trouble starts as at large momenta the energy grows only linearly with momentum and thus the
kinetic energy only scales like
Luckily, nuclei with large enough
But now comes QED with the possibility of forming electron-positron pairs out of the vacuum. The danger I am talking about is the fact that they can form a relativistic, hydrogen like bound state.
And both are (as far as we know) point like and thus there is no smearing out of the charge. It is only that
Some things come to my mind which in principle could help but which turn out to make things worse:
We know, QED has other problems like the Landau pole (a finite scale where
Any ideas or comments?
|
{"url":"http://atdotde.blogspot.com/2010_01_01_archive.html","timestamp":"2014-04-20T20:55:04Z","content_type":null,"content_length":"69166","record_id":"<urn:uuid:4cd4bf1a-bccc-4612-b869-29a5407e588b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We use uppercase letters to represent matrices, lowercase letters to represent vectors, and subscripted lowercase letters to represent matrix or vector elements. Thus, the matrix A has elements
denoted a[ij] and the vector v has elements v[j].
A banded matrix has its non-zero elements within a `band' about the diagonal. The bandwidth of a matrix A is defined as the maximum of |i-j| for which a[ij] is nonzero. The upper bandwidth is the
maximum j-i for which a[ij] is nonzero and j>i. See diagonal, tridiagonal and triangular matrices as particular cases.
The condition number of a matrix A is the quantity ||A||[2] ||A^-1||[2]. It is a measure of the sensitivity of the solution of Ax=b to perturbations of A or b. If the condition number of A is
`large', A is said to be ill-conditioned. If the condition number is one, A is said to be perfectly conditioned. (The Matrix Market provides condition number estimates based on Matlab's condest()
function which uses Higham's modification of Hager's one-norm method.)
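For readers who want to compute these quantities themselves, here is an illustrative NumPy snippet (not part of the glossary); np.linalg.cond with p=1 is in the same spirit as the one-norm estimate mentioned above, though it computes the exact value rather than an estimate:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

kappa_2 = np.linalg.cond(A)      # ||A||_2 * ||A^-1||_2 (2-norm condition number)
kappa_1 = np.linalg.cond(A, 1)   # 1-norm condition number

print(kappa_2, kappa_1)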
A defective matrix has at least one defective eigenvalue, i.e. one whose algebraic multiplicity is greater than its geometric multiplicity. A defective matrix cannot be transformed to a diagonal
matrix using similarity transformations.
A matrix A is positive definite if x^T A x > 0 for all nonzero x. Positive definite matrices have other interesting properties such as being nonsingular, having its largest element on the
diagonal, and having all positive diagonal elements. Like diagonal dominance, positive definiteness obviates the need for pivoting in Gaussian elimination. A positive semidefinite matrix has x^T
A x >= 0 for all nonzero x. Negative definite and negative semidefinite matrices have the inequality signs reversed above.
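As an aside (not part of the glossary), one common practical test for positive definiteness of a symmetric matrix is to attempt a Cholesky factorization, which succeeds exactly when the matrix is positive definite:

import numpy as np

def is_positive_definite(A):
    """Return True if the symmetric matrix A is positive definite."""
    try:
        np.linalg.cholesky(A)   # succeeds only for (Hermitian) positive definite A
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True
print(is_positive_definite(np.array([[1.0,  2.0], [ 2.0, 1.0]])))  # False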
A diagonal matrix has its only non-zero elements on the main diagonal.
A matrix is diagonally dominant if the absolute value of each diagonal element is greater than the sum of the absolute values of the other elements in its row (or column). Pivoting in Gaussian
elimination is not necessary for a diagonally dominant matrix.
A matrix A is a Hankel matrix if the anti-diagonals are constant, that is, a[ij] = f[i+j] for some vector f.
A Hessenberg matrix is `almost' triangular, that is, it is (upper or lower) triangular with one additional off-diagonal band (immediately adjacent to the main diagonal). A nonsymmetric matrix can
always be reduced to Hessenberg form by a finite sequence of similarity transformations.
A Hermitian matrix A is self adjoint, that is A^H = A, where A^H, the adjoint, is the complex conjugate of the transpose of A.
The Hilbert matrix A has elements a[ij] = 1/(i+j-1). It is symmetric, positive definite, totally positive, and a Hankel matrix.
A matrix is idempotent if A^2 = A.
An ill-conditioned matrix is one where the solution to Ax=b is overly sensitive to perturbations in A or b. See condition number.
A matrix is involutary if A^2 = I.
The Jordan normal form of a matrix is a block diagonal form where the blocks are Jordan blocks. A Jordan block has its non-zeros on the diagonal and the first upper off diagonal. Any matrix may
be transformed to Jordan normal form via a similarity transformation.
A matrix is an M-matrix if a[ij] <= 0 for all i different from j and all the eigenvalues of A have nonnegative real part. Equivalently, a matrix is an M-matrix if a[ij] <= 0 for all i different
from j and all the elements of A^-1 are nonnegative.
A matrix is nilpotent if there is some k such that A^k = 0.
A matrix is normal if A A^H = A^H A, where A^H is the conjugate transpose of A. For real A this is equivalent to A A^T = A^T A. Note that a complex matrix is normal if and only if there is a
unitary Q such that Q^H A Q is diagonal.
A matrix is orthogonal if A^T A = I. The columns of such a matrix form an orthogonal basis.
The rank of a matrix is the maximum number of independent rows or columns. A matrix of order n is rank deficient if it has rank < n.
A singular matrix has no inverse. Singular matrices have zero determinants.
A symmetric matrix has the same elements above the diagonal as below it, that is, a[ij] = a[ji], or A = A^T. A skew-symmetric matrix has a[ij] = -a[ji], or A = -A^T; consequently, its diagonal
elements are zero.
A matrix A is a Toeplitz if its diagonals are constant; that is, a[ij] = f[j-i] for some vector f.
A matrix is totally positive (or negative, or non-negative) if the determinant of every submatrix is positive (or negative, or non-negative).
An upper triangular matrix has its only non-zero elements on or above the main diagonal, that is a[ij]=0 if i>j. Similarly, a lower triangular matrix has its non-zero elements on or below the
diagonal, that is a[ij]=0 if i<j.
A tridiagonal matrix has its only non-zero elements on the main diagonal or the off-diagonal immediately to either side of the diagonal. A symmetric matrix can always be reduced to a symmetric
tridiagonal form by a finite sequence of similarity transformations.
A unitary matrix has A^H = A^-1.
The Matrix Market is a service of the Mathematical and Computational Sciences Division / Information Technology Laboratory / National Institute of Standards and Technology
[ Home ] [ Search ] [ Browse ] [ Resources ]
Last change in this page : December 3, 1999. [ ].
|
{"url":"http://math.nist.gov/MatrixMarket/glossary.html","timestamp":"2014-04-17T01:02:38Z","content_type":null,"content_length":"9910","record_id":"<urn:uuid:90f2f6ee-9cf4-4c0f-8529-9102b3f6ed27>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: reduce needs too much memory
Replies: 0
reduce needs too much memory
Posted: Jul 16, 2013 5:54 AM
I have five integer variables j,k,i,nu,kappa with the following restrictions: 1 <= j <= q, 1 <= k <= q, 1 + q <= i <= n, 0 <= nu <= q,
0 <= kappa <= q, where 1 < q < n.
I would like to know for which values we have that:
-j + r < r - nu < i + r - kappa < i - k + r
where r is an integer too but irrelevant since it appears everywhere.
I need the result to be a disjunction (possibly not too long) of conditions of the form:
a1 <= v1 <= b1 && ... && a5 <= v5 <= b5
where v1, ... v5 are the variables j,k,i,nu,kappa in some order and the bounds depend only on the previous variables.
In order to get this I am using Reduce as follows:
Reduce[1 < q < n && 1 <= j <= q && 1 <= k <= q && 1 <= i <= n && 0 <= nu <= q && 0 <= kappa <= q && -j + r < r - nu < i + r - kappa < i - k + r, {v1,v2,v3,v4,v5}, Reals]
The result of this Reduce can be easily brought in the form I need. The only problem is that its computation can take a lot of memory (and time) depending on the order of the variables. In
particular, when v5 is i (which is the case I am interested in), the Mathematica kernel was taking 33Gb of ram when I decided to stop it (the pc had only 24Gb of physical ram, so it was thrashing).
I also tried to use Reduce with Integers instead of Reals (after all everything is integer) and while the computation is much faster, the result is full of parameters and hard to make sense of, in
particular I do not know how to translate that in ranges for my variables.
While I tried to be specific, this is part of a bigger problem. If you think you need more details on the big picture ask away.
Any suggestion? Am I doing something wrong? Should I give up on Reduce on reals and try to make sense of the result of Reduce on integers?
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2582175","timestamp":"2014-04-19T23:08:26Z","content_type":null,"content_length":"15465","record_id":"<urn:uuid:837096eb-543c-45f6-9ccb-df83a41b8d34>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Non-trivial representation of second-smallest dimension
The complex simple algebraic group $Sp_{m,\mathbb{C}}$ of $2m$-dimensional space $V$ has, for $m≥2$, an irreducible representation of dimension $m(2m−1)−1$ in a subspace of codimension $1$ of the
space $\Lambda^2V$. Is it the irreducible representation of smallest dimension after $V$ itself?
Thank you.
rt.representation-theory lie-groups
It follows from the Weyl dimension formula that the fundamental representations have minimal dimensions. So you only have to check the dimensions of these. – Vít Tuček Jan 10 '13 at 17:52
Well, minimal was not the right word. What I meant to say that the representation with minimal dimension among all representations is the same as the representation which has the smallest
dimension amongst fundamental representations. – Vít Tuček Jan 10 '13 at 17:55
@robot: OP wants to know the second smallest dimension of a nontrivial irreducible representation of a group of type $C_m$. – Mikhail Borovoi Jan 10 '13 at 18:59
The question itself just involves classical ideas, so it might be answered in the literature(?); anyway it's really about the Lie algebra, which may be a tag to add. As robot observes, Weyl's
dimension polynomial gives bigger values for non-fundamental weights. The fundamental irreducibles are close to the exterior powers, with dimensions given as a difference of two binomial
coefficients. A quick calculation for $m=3$ gives (I hope) dimensions 6, 14, 14, but after that the later ones grow faster. Presumably the answer to your question is yes, but it needs an
argument. – Jim Humphreys Jan 10 '13 at 21:08
1 The answer is yes, see my answer to mathoverflow.net/questions/118472/…. – Mikhail Borovoi Jan 10 '13 at 22:27
2 Answers
The irreducible complex representations of the simply connected simple group $G=Sp_{r,{\mathbb C}}$ of type $C_r$, for $r>1$, of dimension $n<{\rm dim}\ G$ are listed in the paper of
Andreev, Vinberg, and Elashvili, Table 1 (see also the Russian version). They are the fundamental irreducible representations $R(\pi_1)$ of dimension $2r$, $R(\pi_2)$ of dimension $2r^2-r-1$, and, for $r=3$, $R(\pi_3)$ of dimension 14. For all $r\ge 2$, $r\neq 3$, we have ${\rm dim}\ R(\pi_1)=2r<2r^2-r-1={\rm dim}\ R(\pi_2)$, hence $R(\pi_2)$ is the nontrivial
irreducible representation of second smallest dimension. For $r=3$, as Jim Humphreys noted, the dimensions are $6,14,14$, so ${\rm dim}\ R(\pi_2)={\rm dim}\ R(\pi_3)>{\rm dim}\ R(\pi_1)$,
and $R(\pi_2)$ is a nontrivial irreducible representation of second smallest dimension.
@Gabriel-Kj: Since you have accepted my answer and have thanked Jim, you can also vote up both answers... – Mikhail Borovoi Jan 15 '13 at 19:01
It may be useful to expand my comments. The question involves Lie type $C_m$ with $m \geq 2$. Without developing Lie group or algebraic group language, it's enough to work with a simple Lie
algebra over $\mathbb{C}$ of this type. Using the standard numbering of vertices in the Dynkin diagram, let $E_i$ be the fundamental representation of highest weight $\varpi_i$ for $i= 1, \
dots, m$. Here $E_1$ is the standard module of dimension $2m$. For the others, there are numerous classical references. There is a thorough discussion of the construction in Bourbaki Groupes
et algebres de Lie (also in English translation), Chap. VIII, $\S13$, no. 3, (IV). In particular, the well-known dimension formula is made explicit:
$$\dim E_i = \binom{2m}{i} - \binom{2m}{i-2} \text{ for } i \geq 2$$.
Clearly $\dim E_1 < \dim E_2$. The claim is that $\dim E_2 \leq \dim E_j$ for all $j >2$. This should require an elementary combinatorial comparison, not involving any Lie theory, though it
would be interesting to see a conceptual argument.
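As a numerical sanity check of this claim (our addition, not part of the thread; the helper name is ours), one can tabulate the dimensions from the displayed formula:

from math import comb

def fundamental_dims(m):
    """Dimensions of the fundamental representations E_1, ..., E_m of type C_m."""
    dims = [2 * m]  # dim E_1 = 2m
    dims += [comb(2 * m, i) - comb(2 * m, i - 2) for i in range(2, m + 1)]
    return dims

for m in range(2, 9):
    d = fundamental_dims(m)
    # E_2 should have the smallest dimension among E_2, ..., E_m (ties allowed, e.g. m = 3).
    assert all(d[1] <= x for x in d[1:]), (m, d)
    print(m, d)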
Granted this inequality, Weyl's dimension formula (as already noted) will complete the desired argument for $E_2$ being the second smallest nontrivial irreducible representation. The formula involves a fraction, whose denominator can be ignored. The numerator is an integral polynomial in the highest weight, which obviously grows larger as the coordinates of that weight
increase relative to the $\varpi_i$.
P.S. I don't want to leave the impression that I've written down a formal proof. It's only a proof-scheme, but should be fairly easy to complete using straightforward methods. For the
comparison between fundamental and non-fundamental weights, you'd need to look at the root system $C_m$ (say at the end of Chapters IV-VI of Bourbaki): a rough comparison of how often $\alpha_1, \alpha_2$ occur in each positive root shows for instance how the Weyl dimension for $2\varpi_1$ exceeds the dimension for $\varpi_2$, etc. I don't recall seeing all of this written
down anywhere, but if there is motivation to do so it should be elementary to complete.
Thanks to your great comments. – purelymath Jan 14 '13 at 8:54
1 @Gabriel-Kj: I don't have ready access to the paper cited by Mikhail, so was giving a more hands-on approach to the question. In any case, it might be appropriate to upvote at least one
of our answers ;-) (Though I did resolve quite recently to avoid all future use of smileys.) – Jim Humphreys Jan 15 '13 at 16:56
@Jim: You can find the Russian version for free in mathnet.ru/links/419af2a1d9d33839a49ff8898866b056/faa2839.pdf . There is just a table, no details of calculations. – Mikhail Borovoi Jan
15 '13 at 18:53
1 @Mikhail: The question and the two answers was very helpful for me. Thanks to all of you. As you said there is no details to the table in the cited article. Could you tell me where I can
find some details to the table. I need to check possibility of some embeddings like the question. – Nrd-Math Jan 15 '13 at 22:47
1 @Nerd-Math: For split groups (and I guess compact groups), it's easy to decide via the highest weight, with triviality on the center iff the weight is in the root lattice. In your
example, usually not. – Jim Humphreys Jan 26 '13 at 14:49
|
{"url":"http://mathoverflow.net/questions/118554/non-trivial-representation-of-second-smallest-dimension?answertab=active","timestamp":"2014-04-16T19:36:29Z","content_type":null,"content_length":"70941","record_id":"<urn:uuid:34da5993-377c-408a-83b3-a8ddf4fd5780>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Game Programming Tutorials
Vectors are a very important part of any game. They are used primarily in graphics, collision detection, physics engines and ai. A vector can be represented graphically as follows:
Vectors have two properties that make them very useful; specifically, magnitude (length) and direction. You will see moving forward how these two properties of vectors make them invaluable in game programming.
We will represent 2-dimensional vectors as an [x,y] value pair and 3-dimensional vectors as an [x,y,z] value pair.
What can we do with vectors you ask? Well, for starters we can add them together. Given two vectors A and B, we can add them together to get a vector C. The following picture illustrates this
The green line represents vector A, the red line represents vector B, and the black line represents vector C. It is important to notice in this picture that the order in which we add the two vectors
does not matter (A + B = B + A). This is called the commutative property.
Vectors can also be subtracted from one another:
The green line represents vector A and the red line represents vector B and the black line represents the new vector C. Here we have B - A. It is important to notice that the vector is drawn from the
tip of A to the tip of B. If we were to subtract vector B from A, vector C would face the opposite direction. We can then state that vector subtraction is non-commutative.
A vector can be multiplied with a scalar value. For instance, say we have a 2-dimensional vector A = [3, 5] and we multiply it with the scalar value 2. Vector A is now twice as long and A = [6, 10].
If we wanted to reverse the direction of the vector we could multiply it with the scalar value -1. Two vectors can be multiplied together but I will save this for part two.
These are the most important basic properties of vectors. In the next tutorial I will explain the dot product, the cross product, vector length, and vector projection.
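A small code illustration of these three operations (our sketch; the tutorial itself is language-agnostic, and the function names are ours):

# 2D vectors as (x, y) tuples: addition, subtraction, scalar multiplication.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    # b - a: the vector drawn from the tip of a to the tip of b
    return (b[0] - a[0], b[1] - a[1])

def scale(a, k):
    # multiplying by a scalar stretches the vector; a negative k reverses it
    return (a[0] * k, a[1] * k)

A = (3, 5)
B = (1, -2)
print(add(A, B))     # (4, 3)
print(sub(A, B))     # B - A = (-2, -7)
print(scale(A, 2))   # (6, 10), twice as long
print(scale(A, -1))  # (-3, -5), reversed direction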
Continue to Vectors - part 2 Back to Main Page
|
{"url":"http://gameprogrammingtutorials.blogspot.com/2009/11/vectors-part-one.html","timestamp":"2014-04-17T01:18:27Z","content_type":null,"content_length":"59072","record_id":"<urn:uuid:d9cf17ee-deb9-44b4-ac81-1b3349b2eb86>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical and Statistical Models Examples
Results 1 - 10 of 22 matches
US Historical Climate: Excel Statistical part of Examples
Students import US Historical Climate Network mean temperature data into Excel from a station of their choice and use Excel for statistical calculations, graphing, and linear trend estimates.
Earth System Topics: Climate, Atmosphere
Vostok Ice Core: Excel (Mac or PC) part of Examples
Students use Excel to graph and analyze Vostok ice core data (160,000 years of Ice core data from Vostok Station). Data includes ice age, ice depth, carbon dioxide, methane, dust, and deuterium
isotope relative abundance.
Earth System Topics: Solid Earth:Earth Materials, Climate, Atmosphere
Energy Balance Climate Model: Stella Mac and PC part of Examples
Students explore a Global Energy Balance Climate Model Using Stella II. Response of surface temperature to variations in solar input, atmospheric and surface albedo, atmospheric water vapor and
carbon dioxide, volcanic eruptions, and mixed layer ocean depth. Climate feedbacks such as water vapor or ice-albedo can be turned on or off.
Earth System Topics: Atmosphere, Climate
Waves Through Earth: Interactive Online Mac and PC part of Examples
Students vary the seismic P and S wave velocity through each of four concentric regions of Earth and match "data" for travel times vs. angular distance around Earth's surface from the source to the receiver.
Earth System Topics: Solid Earth:Deformation
Daisyworld: Stella Mac or PC part of Examples
After constructing a Stella model of Daisyworld students perform guided experiments to explore the behavior of Daisyworld to changes in model parameters and assumptions.
Earth System Topics: Atmosphere, Climate, Biosphere, Atmosphere:Weather, Earth's Cycles:Carbon Cycle, Biosphere:Ecology
Daisyworld: Interactive On-line PC and Mac part of Examples
Students use a JAVA interface design by R.M. MacKay to explore the Daisy World model. The JAVA interface comes with a link to a 6-page student activity page in PDF format.
Earth System Topics: Atmosphere, Biosphere:Ecology, Biosphere, Climate, Earth's Cycles:Carbon Cycle
Mass Balance Model part of Examples
Students are introduced to the concept of mass balance, flow rates, and equilibrium using an online interactive water bucket model.
Earth System Topics: Earth's Cycles, Climate, Biosphere:Ecology, Atmosphere
World Population Activity II: Excel part of Examples
(Activity 2 of 2)In this intermediate Excel tutorial students import UNEP World population data/projections, graph this data, and then compare it to the mathematical model of logistic growth.
Earth System Topics: Human Dimensions:Population
World Population Activity I: Excel part of Examples
(Activity 1 of 2) This activity is primarily intended as an introductory tutorial on using Excel. Students use Excel to explore population dynamics using the Logistic equation for (S-shaped)
population growth.
Earth System Topics: Human Dimensions:Population
Wind Surge: Interactive On-line Mac and PC part of Examples
Wind surge is a JAVA based applet for exploring how water level on the windward and leeward side of a basin depends on wind speed, basin length, water depth, and boundary type.
Earth System Topics: Human Dimensions:Natural Hazards, Atmosphere, :Weather
|
{"url":"http://serc.carleton.edu/nnn/mathstatmodels/examples.html","timestamp":"2014-04-16T07:16:12Z","content_type":null,"content_length":"27085","record_id":"<urn:uuid:6cb4ed52-45be-4be8-8440-8eae10a16db7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how to avoid this stackoverflow?
at this point
now obviously adding
will result in a stack overflow,since
now is there a way to avoid that?
if that helps:i encountered that problem as i was trying to make a BST.i wanted each node of the tree to have 3 fields,one to point to its parent,
two to point to its children...
thanks a lot for your help!
Re: how to avoid this stackoverflow?
The stack overflow is caused by the REPL trying to print the object (but it's fine to have it like you designed it, it doesn't cause stack overflow on its own). You will need to define method
print-object to prevent recursive printing of A. While defining it, you will need to take care of *print-circle* - it must be set to T as shown here: http://clhs.lisp.se/Body/v_pr_cir.htm#
STprint-circleST so that objects aren't printed repeatedly.
Re: how to avoid this stackoverflow?
Thanks a lot for your help wvxvw!
|
{"url":"http://lispforum.com/viewtopic.php?f=2&t=3399","timestamp":"2014-04-21T09:37:47Z","content_type":null,"content_length":"16803","record_id":"<urn:uuid:da978250-784e-414e-9456-9707d81b3c10>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CGTalk - How to check a given axis if it's pointing straight up.
View Full Version : How to check a given axis if it's pointing straight up.
03-10-2009, 07:47 PM
Is Maxscript provide some convienient way to check if a given axis (such as Y) is pointing up?
03-11-2009, 10:09 AM
You could probably use the dot product of the two vectors to work it out:
dot <Point3> [0,0,1]
where <Point3> is the point3 vector of the axis you want to check, eg $.transform.row2
The closer the result is to 1.0 then the closer to 'Up' the vector is pointing. If the result is negative then the vector is pointing 'Down'. If the result is 0.0 then the two vectors are perpendicular.
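The same idea in a language-neutral sketch (Python/NumPy rather than MAXScript, purely for illustration; the function name and tolerance are ours). Comparing against a tolerance sidesteps the floating-point equality problem discussed later in the thread:

import numpy as np

def points_up(axis, tol=1e-6):
    """True if a 3D axis vector points straight up (world +Z), within a tolerance."""
    axis = np.asarray(axis, dtype=float)
    d = np.dot(axis / np.linalg.norm(axis), [0.0, 0.0, 1.0])
    return d > 1.0 - tol

print(points_up([0.0, 0.0, 1.0]))   # True
print(points_up([0.0, 0.1, 1.0]))   # False: slightly tilted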
03-11-2009, 11:16 AM
This may be a bit sloppy, but it should work
03-11-2009, 11:30 AM
That will only tell you where the Z_Axis of the object is pointing, and due to floating point errors won't always work. A tiny variance in the vector in memory means that the expression will evaluate
to false, even though in the listener it prints out correctly, though I've got around this in the past by converting the value to a string first and comparing that.
03-11-2009, 11:51 AM
I did find the answer. Which did give me a slight shock.
Hua*MuLan~ which is me
Thank you for involving.
03-12-2009, 03:08 PM
That will only tell you where the Z_Axis of the object is pointing, and due to floating point errors won't always work. A tiny variance in the vector in memory means that the expression will evaluate
to false, even though in the listener it prints out correctly, though I've got around this in the past by converting the value to a string first and comparing that.
yes! it's true! it's very sloppy! whaaa!! :cry:
You're correct, using the dot product is the best way.
J < :
CGTalk Moderation
03-12-2009, 03:08 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
|
{"url":"http://forums.cgsociety.org/archive/index.php/t-740191.html","timestamp":"2014-04-21T09:48:13Z","content_type":null,"content_length":"6793","record_id":"<urn:uuid:aec57cda-0aa5-42e5-a2a1-d341a80cba26>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: probit residuals
Re: st: probit residuals
From Paul Millar <paul.millar@shaw.ca>
To statalist@hsphsun2.harvard.edu
Subject Re: st: probit residuals
Date Thu, 17 May 2007 02:52:24 -0600
You can easily generate the residuals manually. I assume that you want the squared residual based on the probability value (as opposed to the actual prediction of zero or one).
sysuse auto
probit foreign price mpg weight
predict phat
gen resid=(foreign-phat)^2
- Paul
At 02:34 PM 16/05/2007, you wrote:
Maarten L. Buis wrote
--- Philip Ender wrote:
Stata's logit camand can compute several types of residuals but
probit does not compute any residuals at all. Why is probit
different from logit with respect to residuals?
--- Maarten L. Buis wrote
You can get those residuals when estimating that model with -glm-.
--- Philip Ender wrote:
Thanks Maarten, that helps a lot. But it still leaves unanswered why
you can't compute residuals after the probit command.
I know that my answer didn't answer your question, and neither does
this answer, the true answer is that I don't know the answer, just a
quick workaround.
Maarten "doesn't know all the answers" Buis
You are being much too harsh on yourself. So far, there is only one
question that you don't know the answer to. Maybe your signature could
Maarten "knows all but one answer" Buis
Phil Ender
Statistical Consulting Group
UCLA Academic Technology Services
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2007-05/msg00565.html","timestamp":"2014-04-18T14:35:24Z","content_type":null,"content_length":"8782","record_id":"<urn:uuid:27e49d19-11f2-4ce3-984f-e30a52968f6d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: constrained sum of random values
Re: st: constrained sum of random values
From Eva Poen <eva.poen@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: constrained sum of random values
Date Wed, 25 Mar 2009 19:11:46 +0000
If you put such a constraint on your random values, it means that at
most 76 of them can be random; you need one value to bring the sum up
to 9900. This is very simple to do:
set obs 77
set seed 123
gen n = rnormal(125,25) in 1/76
qui sum n
replace n = 9900-r(sum) in 77
You didn't mention what kind of random values you want, i.e. which
distribution. I assumed normal distribution with mean 125 and sd of
2009/3/25 Carlo Lazzaro <carlo.lazzaro@tiscalinet.it>:
> Dear Statalisters,
> is there any way for imposing Stata 9.2/SE to create a variable composed of
> 77 random values so that the sum of these random values is 9900?
> Thanks a lot for your kindness and for your time.
> Kind Regards,
> Carlo
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-03/msg01348.html","timestamp":"2014-04-20T20:57:16Z","content_type":null,"content_length":"6416","record_id":"<urn:uuid:2005efd0-434f-41e7-b55c-5de10f7c92b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Re: In "square root of -1", should we say "minus 1" or "negative 1"?
Replies: 0
Re: In "square root of -1", should we say "minus 1" or "negative 1"?
Posted: Dec 8, 2012 7:40 AM
Well, without wishing to appear rude, DrRocket's explanation had me tugging my beard for a while. Of course he is correct, but perhaps a little opaque.
Try this instead. Consider the polynomial x^2 + 1. One says that the "zeros" of this polynomial are those values of x for which x^2 + 1 = 0 is true.
First note that there are implied integer coefficients in this polynomial - in this case 1.
So a number is defined to be an algebraic number iff it is a zero of some polynomial with integer coefficients.
Now there is a theorem, called the Fundamental Theorem of Algebra, that states that any polynomial of degree n has at most n zeros and moreover that, over the complex numbers, it has exactly n (counted with multiplicity). (The proof is hard!)
So we seek at least one algebraic number x such that x^2 + 1 = 0. In fact there are 2, which is the usual case for degree 2 polynomials, as indeed for those of higher degree.
So one defines the objects ±i as the 2 zeros for our polynomial. Since these are quite obviously not real numbers (in the usual sense of the word) our i is called the "imaginary unit". (Notice that i^2 = -1.)
Further, one says that, given the field R of real numbers, there is an extension C of this field by our imaginary unit, in which i^2 = -1.
Now it is not hard to show (using the field axioms on R) that C is also a field, whose elements must be of the form a + bi with a, b real.
And so (finally!!) one says that the complex numbers are the algebraic completion of the reals, which quite simply means that there is no polynomial whose zeros cannot be found in C or its subfield R
(usually both)
OMG, I had intended to add clarity to this thread - I now see I have done the opposite.
- ---------------
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2419404&messageID=7934164","timestamp":"2014-04-19T17:31:30Z","content_type":null,"content_length":"15380","record_id":"<urn:uuid:7f7f674f-ae0e-41b1-b6ae-64de0d38c7b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discrete Math
Posted by Francesca on Monday, March 21, 2011 at 12:42pm.
Use mathematical induction to prove the truth of each of the following assertions for all n ≥1.
n³ + 5n is divisible by 6
I really do not understand this to much. This is what I have so far:
n = 1, 1³ + 5(1) = 6, which is divisible by 6
Then I really don't know how to go on from here. I appreciate any helpful replies. Thank you!
• Discrete Math - Francesca, Monday, March 21, 2011 at 8:39pm
• Discrete Math - MathMate, Monday, March 21, 2011 at 11:15pm
The next step is to assume the proposition is true for n.
The task is to show that if the proposition is true for n, then it would be true for n+1. Once that is established, then the proof is complete.
Base case: for n = 1, n^3 + 5n = 1 + 5 = 6, which is divisible by 6.
Assume 6|n^3+5n is true for n; then
for n+1:
(n+1)^3 + 5(n+1)
= n^3+5n + 3n^2+3n+6
= n^3+5n + 3n(n+1) + 6
We now examine the three terms:
n^3+5n is divisible by 6 by initial assumption.
6 is divisible by 6.
3n(n+1) falls into two cases:
1. n is odd, then n+1 is even, therefore 6 divides 3*(n+1)
2. n is even, then 6 divides 3n.
Since all three terms are divisible by 6, we only have to extract the factor of 6 from each term and declare the expression (n+1)^3+5(n+1) is also divisible by 6.
By the principle of mathematical induction, the proposition is proved. QED.
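Not a substitute for the induction proof above, but a quick numerical sanity check (our addition) that the statement holds for small n:

# Check that n^3 + 5n is divisible by 6 for n = 1..1000.
assert all((n**3 + 5 * n) % 6 == 0 for n in range(1, 1001))
print("n^3 + 5n is divisible by 6 for n = 1..1000")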
• Discrete Math - Francesca, Tuesday, March 22, 2011 at 8:37am
Thank you so much for your response! But I have completed that particular question. However, can you please help with this one? I am confused. . .
Use mathematical induction to establish the following formula.
Σ i² / [(2i-1)(2i+1)] = n(n+1) / 2(2n+1)
Thanks for any helpful replies :)
• Discrete Math - Francesca, Tuesday, March 22, 2011 at 3:56pm
Any suggestions?
• Discrete Math - Francesca, Tuesday, March 22, 2011 at 3:57pm
Any suggestions?
• Discrete Math - MathMate, Tuesday, March 22, 2011 at 5:58pm
There are three steps:
1. Basis:
Test case for n=1 (or any other finite number):
Σ i² / [(2i-1)(2i+1)] = n(n+1) / 2(2n+1)
for n=1,
Left hand side = 1/[(2*1-1)(2*1+1)] = 1/3
Right hand side=1(1+1)/[2(2*1+1)]=1/3
So formula is established for n=1.
2. Assume
formula is valid for case n.
3. Show that the formula is valid for case n+1.
Left hand side (sum from i=1 to n+1):
Σ i² / [(2i-1)(2i+1)]
= Σ i² / [(2i-1)(2i+1)]  (sum to n)
  + (n+1)² / [(2(n+1)-1)(2(n+1)+1)]
= n(n+1) / [2(2n+1)] + (n+1)² / [(2n+1)(2n+3)]
= [(n²+n)(2n+3) + 2(n+1)²] / [2(2n+1)(2n+3)]
= (n+1)(n+2)(2n+1) / [2(2n+1)(2n+3)]
= (n+1)(n+2) / [2(2n+3)],
which is precisely the right hand side with n replaced by n+1.
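Again purely as a numerical cross-check of the identity (our addition, using exact rational arithmetic so no rounding is involved):

from fractions import Fraction

def lhs(n):
    return sum(Fraction(i * i, (2 * i - 1) * (2 * i + 1)) for i in range(1, n + 1))

def rhs(n):
    return Fraction(n * (n + 1), 2 * (2 * n + 1))

assert all(lhs(n) == rhs(n) for n in range(1, 50))
print("formula verified exactly for n = 1..49")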
|
{"url":"http://www.jiskha.com/display.cgi?id=1300725772","timestamp":"2014-04-16T04:27:16Z","content_type":null,"content_length":"11520","record_id":"<urn:uuid:11bb7729-b581-4e8a-b551-aaf6b0025f36>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Free Math Worksheets: 5x Tables | Free Math Worksheets from Classroom Professor
Video: How to Teach the 5x Tables
The video explains two ways to think about 5x tables and work out the answer:
• multiply the number by 10, then halve the result
• Halve the number first, then make it a number of tens. Of course, an odd multiplier will require some extra thought when halving
Supporting Resources
The worksheets come from Ten Minutes a Day: Level 2, Book 2, written for students in Grade 3 (US), Year 3 (UK) or Year 4 (Australia).
Go to http://store.classroomprofessor.com/Ten_Min_a_Day_Lev_2_Book_2_Multiplication_12x_p/tmad2_2_mult_12x.htm for more information or to purchase your own copy of the full eBook. As a subscriber to
the Free Math Worksheets email list, you can use the voucher included in the worksheets PDF to access a discount on this resource.
Download the Worksheets:
Strategy for Teaching the 5x Tables:
Teaching the 5x tables is easiest if you connect them with the 10x tables.
Since 5 is half of 10, when you have pairs of 5 (eg, 6 fives), the answer will be half that number of tens (eg, 3 tens, or 30).
The bottom line here is to help students to understand the numbers, and work out the patterns themselves. We really do want students to construct their own understanding of the concepts, make sense
of the operation, and so reach the answer. This avoids any tendency to teach students a series of steps – an algorithm – which they will probably forget pretty soon anyway.
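A quick worked example of both strategies: for 7 × 5, multiply by ten to get 70, then halve it to get 35. Halving first also works: half of 7 is 3.5, and 3.5 tens is 35.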
Watch the Video for Students:
Download the Worksheets:
Download link: Free Math Worksheets – 5x Tables
|
{"url":"http://freemathworksheets.classroomprofessor.com/5x-tables/","timestamp":"2014-04-25T00:41:45Z","content_type":null,"content_length":"31922","record_id":"<urn:uuid:2b20cc86-9a82-4170-a745-c931bee12e5c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Compressing probabilistic Prolog programs
Luc De Raedt, Kristian Kersting, Angelika Kimmig, Kate Revoredo and Hannu Toivonen
Machine Learning, Volume 70, Number 2-3, 2008. ISSN 0885-6125
ProbLog is a recently introduced probabilistic extension of Prolog (De Raedt, et al. in Proceedings of the 20th international joint conference on artificial intelligence, pp. 2468–2473, 2007). A
ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually
independent. The semantics of ProbLog is then defined by the success probability of a query in a randomly sampled program. This paper introduces the theory compression task for ProbLog, which
consists of selecting that subset of clauses of a given ProbLog program that maximizes the likelihood w.r.t. a set of positive and negative examples. Experiments in the context of discovering links
in real biological networks demonstrate the practical applicability of the approach.
|
{"url":"http://eprints.pascal-network.org/archive/00005922/","timestamp":"2014-04-16T04:14:12Z","content_type":null,"content_length":"7607","record_id":"<urn:uuid:dff6b4e9-9a81-4ff1-a54c-6eeddcc7a932>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cubic meter per day to milliliters per week
Amount: 1 cubic meter per day (m3/d) of flow rate
Equals: 7,000,000.00 milliliters per week (mL/wk) in flow rate
TOGGLE : from milliliters per week into cubic meters per day in the other way around.
CONVERT : between other flow rate measuring units - complete list.
Flow rate
This unit-to-unit calculator is based on conversion for one pair of two flow rate units. For a whole set of multiple units for volume and mass flow on one page, try the Multi-Unit converter tool
which has built in all flowing rate unit-variations. Page with flow rate by mass unit pairs exchange.
Convert flow rate measuring units between cubic meter per day (m3/d) and milliliters per week (mL/wk), or in the reverse direction from milliliters per week into cubic meters per day.
conversion result for flow rate:
From Symbol Equals Result To Symbol
1 cubic meter per day m3/d = 7,000,000.00 milliliters per week mL/wk
Converter type: flow rate units
This online flow rate from m3/d into mL/wk converter is a handy tool not just for certified or experienced professionals.
First unit: cubic meter per day (m3/d) is used for measuring flow rate.
Second: milliliter per week (mL/wk) is unit of flow rate.
7,000,000.00 mL/wk is converted to 1 of what?
The milliliters per week unit number 7,000,000.00 mL/wk converts to 1 m3/d, one cubic meter per day. It is the EQUAL flow rate value of 1 cubic meter per day but in the milliliters per week flow rate
unit alternative.
How to convert 2 cubic meters per day (m3/d) into milliliters per week (mL/wk)? Is there a calculation formula?
Multiply the value in cubic meters per day by the conversion factor 7,000,000 - for example:
7,000,000 * 2 = 14,000,000 mL/wk (multiplying by 2 is the same as dividing by 0.5)
1 m3/d = ? mL/wk
1 m3/d = 7,000,000.00 mL/wk
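A minimal conversion helper (our sketch, not part of the converter page; the names are ours):

ML_PER_WEEK_PER_M3_PER_DAY = 1_000_000 * 7   # 1 m3 = 1,000,000 mL and 1 week = 7 days

def m3_per_day_to_ml_per_week(m3_per_day):
    return m3_per_day * ML_PER_WEEK_PER_M3_PER_DAY

def ml_per_week_to_m3_per_day(ml_per_week):
    return ml_per_week / ML_PER_WEEK_PER_M3_PER_DAY

print(m3_per_day_to_ml_per_week(1))   # 7000000
print(m3_per_day_to_ml_per_week(2))   # 14000000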
Other applications for this flow rate calculator ...
With the above mentioned two-units calculating service it provides, this flow rate converter proved to be useful also as a teaching tool:
1. in practicing cubic meters per day and milliliters per week ( m3/d vs. mL/wk ) values exchange.
2. for conversion factors training exercises between unit pairs.
3. work with flow rate's values and properties.
International unit symbols for these two flow rate measurements are:
Abbreviation or prefix ( abbr. short brevis ), unit symbol, for cubic meter per day is:
Abbreviation or prefix ( abbr. ) brevis - short unit symbol for milliliter per week is:
One cubic meter per day of flow rate converted to milliliter per week equals to 7,000,000.00 mL/wk
How many milliliters per week of flow rate are in 1 cubic meter per day? The answer is: 1 m3/d (cubic meter per day) equals 7,000,000.00 mL/wk (milliliters per week), the equivalent measure for the same flow rate.
In any measuring task, professionals depend on getting the most precise conversion results every time; a rough guess is usually not good enough. If an exact measure in m3/d (cubic meters per day) of flow rate is known, the rule is that the cubic-meter-per-day number converts exactly into mL/wk (milliliters per week) or any other flow rate unit.
|
{"url":"http://www.traditionaloven.com/tutorials/flow-rate/convert-m3-cubic-meter-per-day-to-ml-milliliter-per-week.html","timestamp":"2014-04-19T09:39:19Z","content_type":null,"content_length":"29098","record_id":"<urn:uuid:5c542474-7a86-4939-9521-ec938c1b03f3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Trying to find a teaching kids math video I liked
Date: Apr 6, 2012 11:26 AM
Author: Ben Claar
Subject: Trying to find a teaching kids math video I liked
I'm trying to re-find a great math talk I liked.
It was a college-aged guy, modern video -- in it he said we need to be teaching kids math differently (yeah, yeah, I know).
In the video, he shows many geometry examples -- finding the area of a shape by slicing it in certain ways and reasoning about the two sides, using symmetry to reason about a shape's properties, etc.
His (obvious) point was that forcing kids to work with numbers and formulas without letting them use their minds to reason about math doesn't let them get excited about reasoning through the problems.
One phrase I remember is something close to, "If I find myself thinking about triangles, and I often do, ..."
If anyone knows this talk and could post/send me a link, I'd appreciate it!
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7763281","timestamp":"2014-04-17T01:11:02Z","content_type":null,"content_length":"1787","record_id":"<urn:uuid:bc8b323d-e47b-4e15-9ba9-238e685ace1c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the formula for converting grams to atoms or atoms to grams?How many atoms are in 878g of fluorine? - Homework Help - eNotes.com
What is the formula for converting grams to atoms or atoms to grams?
How many atoms are in 878g of fluorine?
You cannot directly convert grams to atoms. First you must convert your grams to moles, then you can take the moles and convert to atoms. Take your 878 grams of fluorine and look at the
atomic mass: dividing 1 by the atomic mass (about 19 g/mol), you find that 1 gram of fluorine is equal to about 0.0526 moles. Then you multiply that by your 878 grams. After you get that answer you can use Avogadro's number,
6.022 x 10^23, to find the atoms. To get moles from atoms, divide the number of atoms by 6.022 x 10^23. To get atoms from moles, multiply the number of moles by 6.022 x 10^23.
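A small Python version of the same two-step calculation (grams to moles to atoms); the molar mass used below is the standard value for fluorine, roughly 19.00 g/mol, rather than a figure taken from the answer above:

```python
# Grams -> moles -> atoms for fluorine.
AVOGADRO = 6.022e23       # atoms per mole
MOLAR_MASS_F = 19.00      # g/mol, approximate standard atomic weight of fluorine

grams = 878.0
moles = grams / MOLAR_MASS_F   # about 46.2 mol
atoms = moles * AVOGADRO       # about 2.78e25 atoms
print(f"{moles:.1f} mol -> {atoms:.2e} atoms")
```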
|
{"url":"http://www.enotes.com/homework-help/what-formula-grams-atoms-atoms-grams-62777","timestamp":"2014-04-20T00:46:58Z","content_type":null,"content_length":"26109","record_id":"<urn:uuid:f4b7b865-10a4-4904-b478-f3961cec0aa5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Safe Haskell Safe-Inferred
The module containing the AlloyA type-class for working with effectful functions (of the type a -> m a). This module is an analogue to Data.Generics.Alloy.Pure that supports functions that result in
a monadic or applicative functor type.
All the functions in this module have versions for Applicative and for Monad. They have the same behaviour, and technically only the Applicative version is necessary, but since not all monads have
Applicative instances, the Monad versions are provided for convenience.
class AlloyA t o o' whereSource
The Alloy type-class for effectful functions, to be used with sets of operations constructed from BaseOpA and :-*. You are unlikely to need to use transform directly; instead use makeRecurseA / makeRecurseM and makeDescendA / makeDescendM.
The first parameter to the type-class is the type currently being operated on, the second parameter is the set of operations to perform directly on the type, and the third parameter is the set of
operations to perform on its children (if none of the second parameter operations can be applied).
type RecurseA f opT = forall t. AlloyA t opT BaseOpA => t -> f tSource
A type representing a monadic/applicative functor modifier function that applies the given ops (opT) in the given monad/functor (f) directly to the given type (t).
makeRecurseA :: Applicative f => opT f -> RecurseA f opTSource
Given a set of operations (as described in the AlloyA type-class), makes a recursive modifier function that applies the operations directly to the given type, and then to its children, until it has
been applied to all the largest instances of that type.
type DescendA f opT = forall t. AlloyA t BaseOpA opT => t -> f tSource
A type representing a monadic/applicative functor modifier function that applies the given ops (opT) in the given monad/functor (f) to the children of the given type (t).
makeDescendA :: Applicative f => opT f -> DescendA f opTSource
Given a set of operations, makes a descent modifier function that applies the operation to the type's children, and further down, until it has been applied to all the largest instances of that type.
data BaseOpA m Source
The terminator for effectful opsets. Note that all effectful opsets are the same, and both can be used with the applicative functions or monad functions in this module. Whereas there is, for example,
both makeRecurseA and makeRecurseM, there is only one terminator for the opsets, BaseOpA, which should be used regardless of whether you use makeRecurseA or makeRecurseM.
data (t :-* opT) m Source
The type that extends an opset (opT) in the given monad/applicative-functor (m) to be applied to the given type (t). This is for use with the AlloyA class. A set of operations that operates on Foo,
Bar and Baz in the IO monad can be constructed so:
ops :: (Foo :-* Bar :-* Baz :-* BaseOpA) IO
ops = doFoo :-* doBar :-* doBaz :-* baseOpA
doFoo :: Foo -> IO Foo
doBar :: Bar -> IO Bar
doBaz :: Baz -> IO Baz
The monad/functor parameter needs to be given when declaring an actual opset, but must be omitted when using the opset as part of a type-class constraint such as:
f :: AlloyA a (Foo :-* Bar :-* Baz :-* BaseOpA) BaseOpA => a -> IO a
f = makeRecurse ops
|
{"url":"http://hackage.haskell.org/package/alloy-1.2.0/docs/Data-Generics-Alloy-Effect.html","timestamp":"2014-04-18T15:40:13Z","content_type":null,"content_length":"14999","record_id":"<urn:uuid:696268f3-f36c-4ecf-b91c-9fd08f11660d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A particle moves in the xy-plane so that at any time t its coordinates are x = α cos βt and y = α sin β t , where α and β are constants. The y-component of the acceleration of the particle at any
time t is
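A quick worked check (not taken from the original thread): differentiate $y$ twice with respect to $t$.
$$y = \alpha \sin \beta t, \qquad \frac{dy}{dt} = \alpha\beta\cos\beta t, \qquad \frac{d^2y}{dt^2} = -\alpha\beta^2\sin\beta t = -\beta^2 y,$$
so the $y$-component of the acceleration at any time $t$ is $-\alpha\beta^2\sin\beta t$, which can also be written as $-\beta^2 y$.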
|
{"url":"http://openstudy.com/updates/516b438de4b0be6bc9547135","timestamp":"2014-04-18T23:29:08Z","content_type":null,"content_length":"34907","record_id":"<urn:uuid:0b20e864-3bc3-4cd3-9d0e-1d5ce65d9c9d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: margins using weights in calculation?
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
Re: st: margins using weights in calculation?
From Steve Samuels <sjsamuels@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: margins using weights in calculation?
Date Sat, 11 May 2013 12:49:05 -0400
Please tell us about the design that produced weights with average value
Something else is puzzling here beyond the unusual weights. A regression
with pweights would show 13,000 as the number of obs with the sum of the
weights listed as "sum of wgt". If you did -svy: reg-, then the sum of
the weights would be reported as "Population size". If the weighted
regression with pweights is showing 7,701 as the "Number of obs",
instead of 13,000, then about 5,300 observations are being excluded.
Much will be clearer if, as the FAQ requests, you show us the
actual code that you wrote and the Stata results. See Nick Cox's summary in
On May 11, 2013, at 11:31 AM, Richard Williams wrote:
I am curious how your number of cases goes down when using pweights. But in any event the help for margins says "By default, margins uses the weights specified on the estimator to average responses and to compute summary statistics. If weights are specified on the margins command, they override previously specified weights." So, I think margins is doing it fine, and there is no need for you to repeat the weight specification on the margins command.
At 02:12 PM 5/10/2013, Brent Gibbons wrote:
> When i run a weighted OLS regression (using either iweight or pweight) with about 13,000 cases, I get a reported number of observations of about 7,701 (which is what it should be given the values of the weights. But when I then run a margins command to compute dydx(*) on these data, with the same weight specified, I get the original unweighted number of cases (about 13,000) as the reported # of observations. Does this mean that when "margins" is averaging marginal effects across all cases, it is disregarding the weights and taking the simple unweighted average (i.e., giving each case a weight = 1)?
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/faqs/resources/statalist-faq/
> * http://www.ats.ucla.edu/stat/stata/
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME: (574)289-5227
EMAIL: Richard.A.Williams.5@ND.Edu
WWW: http://www.nd.edu/~rwilliam
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2013-05/msg00418.html","timestamp":"2014-04-16T19:07:44Z","content_type":null,"content_length":"10757","record_id":"<urn:uuid:b9514c2d-6785-4b22-ace3-2699983f08b7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resistivity of doped Si
How do I calculate the resistivity of doped Si if I have two dopants of the same type, say a P- and As-doped Si? Is the mobility given by [itex]\mu_n(N_d(P) + N_d(As))[/itex], or should I do this in
another way? Further, when I calculate the resistivity, is the concentration [itex]N_{tot} = N_d(P)+N_d(As)[/itex]?
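A hedged sketch of the usual textbook answer (assuming both dopants are shallow donors and fully ionised; this is an outline, not a reply from the original thread):
$$n \simeq N_{tot} = N_d(\mathrm{P}) + N_d(\mathrm{As}), \qquad \rho \simeq \frac{1}{q\, n\, \mu_n(N_{tot})},$$
with the electron mobility $\mu_n$ evaluated at the total ionised impurity concentration, since carriers are scattered by both donor species.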
|
{"url":"http://www.physicsforums.com/showthread.php?s=2214fbc95c2e81e5c755e9d95d3c754c&p=4630635","timestamp":"2014-04-24T17:35:13Z","content_type":null,"content_length":"22578","record_id":"<urn:uuid:3da54996-4985-4fb4-ba78-75701a4b9d88>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the metamathematical interpretation of knot diagrams?
I am not a geometric topologist, but from looking over papers in the field, it's clear that knot diagrams are a major tool and we know how to use them in a way that is rigorous and trustworthy. My
background is in model theory and I am having trouble fitting them into that framework. I'm hoping for pointers to references or a quick sketch of the logical status of these things.
Specific points that I'm hung up on: knot diagrams have nice properties analogous to terms in formulas, like substitutability a la planar algebras. On the other hand, they are relation-like, relating
segments of a link to each other. On the third hand, the Reidemeister moves seem like a set of formulas in a model-theoretic interpretation (of a "theory" of knot isotopy into a "theory" of graphs.)
Finally, there's the standard trick of calculating invariants by recursively applying certain skein relations to get to the unknot.
In other words, I can't see that a knot diagram is always being used as a relation, term, formula, or substructure - it seems like none of these is adequate to fully describe their use.
I can see this playing out in a number of ways:
• Knot diagrams are a tool that can be completely subsumed by algebra in an algorithmic way, and are just a convenience;
• There is a theorem that says proofs using diagrams can be "un-diagrammed", but it's an existence proof
• People keep finding new ways to use knot diagrams in proofs, often by decorating the diagrams with new features like orientations - reproving via alternate techniques is then a useful
contribution, but always tends to work out; e.g., we don't completely understand the metamathematics of knot diagrams, but the general shape of things is clear;
• There are important theorems with no known proof except via diagrams, and nobody knows why;
• It will somehow all become obvious if I take the right course on planar algebras, or o-minimal structures, or category theory;
• It's subtle, but was all cleared up by Haken in the 70's;
• Dude, it's just Reidemeister's Theorem, and you need to go away and think about it some more.
Community wiki, in case the right answer is a matter of opinion.
Update - Just to be clear, this is not in any way a brief for eliminating knot diagrams - quite the opposite. Knot diagrams are honest mathematical objects, while also serving as syntax for other
objects. That seems like a ripe area for mathematical logic.
Also, I'm including partial diagrams, as in skein relations, when I use the phrase "knot diagram".
lo.logic knot-theory
I have trouble figuring out what your difficulty is here. The theory rests on three things which are discussed in any good book on knot theory (eg Burde and Zieschang). First, there is a
straightforward way of obtaining a tame knot in S3 from a diagram. Second, this process produces ambient isotopic knots from two different diagrams if and only if those diagrams differ by the
Reidemeister moves (this is the deepest part). Three, any tame knot can be so obtained. Using these three things, one can prove things about tame knots by proving things about diagrams. What more
is there to say? – Andy Putman Jan 30 '12 at 4:28
1 "There are important theorems with no known proof except via diagrams, and nobody knows why" --- this is very true if you replace "theorems" by "invariants" – John Pardon Jan 30 '12 at 5:10
What is your take on commutative diagrams in category theory and in applications of category theory? Gerhard "To Chase Elements Or Not" Paseman, 2012.01.29 – Gerhard Paseman Jan 30 '12 at 5:21
@Andy Putman: Does the syntactic/semantic connection established in Reidemeister's theorem need to be rebuilt from scratch for things like knotted trivalent graphs, or virtual knots, even though
the situations are closely related? Or is there a theory that relates all of these parts of geometric topology, in the same way that universal algebra relates situations that look like "functions
on sets obeying first-order axioms", like groups and rings? – Scott McKuen Jan 30 '12 at 6:29
@Gerhard Paseman: Not sure what you mean. Commutative diagrams are a big deal. The standard way of laying them out is flexible and useful - it adds something for the human mind that you don't get
easily from a symbolic language with sorts for object and morphism, and functions for domain/codomain. But would the logical content of category theory be reduced if we couldn't draw arrows?
Conversely, I don't know how to reduce knot diagrams to a set of function and relation symbols in a way that captures all the math, or if that can be done at all. That interests me. – Scott McKuen
Jan 30 '12 at 7:46
5 Answers
Knot diagrams are a special sort of tangle diagrams, so I will reinterpret your question as being about tangle diagrams. Tangle diagrams are a "planar algebra" generated by $\{\text
{overcrossing},\text{undercrossing}\}$, so every tangle can be drawn by taking a finite collection of generators, arranging them in a plane, and connecting each of the four "loose ends" on
each generator by "bridge arcs" to another of the "loose ends" (of the same or a different generator), or leaving "the loose end" "loose". The relations are Reidemeister relations. If you
allow "bridge arcs" to cross (and allow virtual Reidemeister moves), you get virtual tangles, and if not, you get usual tangles.
This is already algebra, but it's algebra in a different sense from "x+3=2" because it takes place in the plane. You could introduce a height function and translate tangles into "algebra" in
the old sense, as some other answers suggest, but surely to do so would constitute an act of violence. Maybe it's better (philosophically at least) to widen one's perspective on what
constitutes "algebra".
I certainly think that yes, "there are important theorems with no known proof except via diagrams, and nobody knows why". Anything proven by using skein relations fits the bill. Nobody
really knows what quantum invariants have to do with 3d topology (other than the Alexander polynomial for links, but the tangle version of the Alexander polynomial also fits the bill), but
it's quite clear what they have to do with diagram algebras if they are defined via linear skein relations.
Surely more than that is true- many invariants of knots extend naturally to invariants of more general "diagrammatic algebras", and maybe this wider context is where we can understand those
invariants and where they make more conceptual sense. Maybe coming to terms with "the metamathematics of diagrams" (tangle diagrams, and more general classes of diagrams as well) as a brave
new algebra is a fruitful direction of research. I interpret current work of Dror Bar-Natan in this vein.
As a concrete example of where this concept has proven useful, see Zsuzsanna Dancso's thesis, which (building on ideas of Bar-Natan and D. Thurston) explains how considering diagrams of
knotted trivalent graphs (a larger "brave new algebra") helps us to understand how the Kontsevich invariant of a framed link changes under handle slides (Kirby 2 moves). Even more so,
Bar-Natan and Dancso's forthcoming w-knotted objects project is an example of a setting in which taking "the metamathematics of diagrams" seriously, treating them with respect as a genuine
form of algebra, motivates the project and yields substantial dividends, at least in the form of better understanding the Alexander polynomial of tangles.
I'm having trouble deciding between this and Qiaochu's answer - they both give me something concrete to look at and are right on point. Thanks. – Scott McKuen Jan 31 '12 at 6:12
I don't know what you mean by "substitutability a la planar algebras," since I don't know anything about planar algebras, but here's my take. Knot diagrams can be interpreted as
(representatives of) certain morphisms in the category $\text{Tang}$ of tangles, which can be succinctly described as the free braided monoidal category with duals on a self-dual unframed
object. More precisely, this category has a distinguished set of generators given by all of the structure I just described (the braiding, the self-duality, etc.), and a knot diagram is a
description of a certain type of morphism $0 \to 0$ in terms of these generators.
they are relation-like, relating segments of a link to each other.
The category of tangles is analogous in some ways to the category $\text{Rel}$ of sets and relations; in particular, they are both dagger categories.
On the third hand, the Reidemeister moves seem like a set of formulas in a model-theoretic interpretation (of a "theory" of knot isotopy into a "theory" of graphs.)
I admit I don't really know what you mean by this either. The Reidemeister moves describe certain relations that hold in $\text{Tang}$ between the generators.
Finally, there's the standard trick of calculating invariants by recursively applying certain skein relations to get to the unknot.
By the universal property of $\text{Tang}$, any self-dual unframed object in a braided monoidal category gives rise to a braided monoidal functor from $\text{Tang}$, which imposes some
relations (such as skein relations) on the generators.
From my perspective the situation is at heart no more complicated than describing a group by generators and relations and naming elements of that group in terms of products of the generators
(provided that you've accepted Reidemeister's theorem).
This might be going the right way. Consider knotted surfaces in four dimensions. There are the Roseman moves, analogous to the Reidemeister moves. Is there a suitable dagger category for
this situation? Or for isotopy of knots embedded in some other fixed 3-manifold? If I name a class of manifolds with specified extra structure (e.g., a framing), embedded in a particular
ambient manifold, and ask for a characterization of the isotopies, does this automatically generate a dagger category with a finite set of relations? – Scott McKuen Jan 30 '12 at 6:53
Two related papers by Joyal and Street should be mentioned: The Geometry of Tensor Calculus, I (ivanych.net/doc/GeometryOfTensorCalculusI_Joyal_Street.pdf) and The Geometry of Tensor
Calculus, II (maths.mq.edu.au/~street/GTCII.pdf). – David Corfield Jan 21 at 11:28
I see this as a general problem in higher dimensional algebra, that there will need to be "higher dimensional rewriting". John Baez has illustrated the higher dimensional thinking by
displaying the picture
$$||| \;\; |||||$$ $$||| \;\; |||||$$
which is easily seen to illustrate $2 \times (3+5)= 2 \times 3 + 2 \times 5$ but the 1-dimensional formula involves various conventions, and is less transparent. We have found
diagrammatic rewriting useful in dealing with rotations in double groupoids (with connections), and there is a 3-dimensional rewriting argument in Section 5 of
F.-A. Al-Agl, R. Brown, R. Steiner, `Multiple categories: the equivalence between a globular and cubical approach', Advances in Mathematics, 170 (2002) 71-118,
which proves a key braid relation (Theorem 5.2).
So there is the interesting question of how to cope say with a 5-dimensional rewrite? Maybe computers could handle it?
These situations could well occur in algebras with partial operations whose domains are defined by geometric conditions, and with strict axioms.
If you want to consider knot diagrams as finitistic algebraic objects, it is not hard to show that they can be encoded as sets. For example, you may choose to label crossings and line
segments between them, then encode the over/under behavior, incidence, orientations, and other decorations using tuples, and finally introduce a notion of equivalence under relabelings. Any
proof using diagrams then has a translation into the algebraic world, but often that translation is too cumbersome to reproduce, and in the real world, you may encounter incomplete proofs.
I agree with your first option in your list, but I feel that the phrase "just a convenience" does not do justice to the power of linguistic and notational choices. It is often very
difficult to find proofs of theorems, and it helps to use any tool available to ease the mental burden.
Hmmm - did not mean to disparage knot diagrams! Fixed above. As to the existence of such a translation to algebra, do you have a reference where this gets carried out and nothing is lost?
E.g., you don't end up with countably many Reidemeister relations, or lose the ability to distinguish non-isomorphic knots? – Scott McKuen Jan 30 '12 at 8:17
You certainly end up with many diagrams that are equivalent under Reidemeister moves and renamings of vertices and edges. However, it is similar to the situation where any isomorphism
class of groups forms a proper class under most set-theoretic foundations. We manage to do mathematics despite this impossibly large array of choices. The fact that nothing is lost and
that non-isomorphic knots remain distinguishable is somewhat more basic than Reidemeister's theorem - it is just the observation that the three types of moves do not change the
topological type of the knot. – S. Carnahan♦ Jan 31 '12 at 0:53
It seems that you may trust algebra more as a solid foundation for how to describe a mathematical object, so you may be interested in the classification of knot diagrams by knot
polynomials, such as the Jones Polynomial.
Additionally, you can note by Alexander's Theorem that every knot can be created by the closure of some braid. Since braids can be defined by a braid group "word", we can describe a
particular knot diagram by a word in this language. For example, if we have $\sigma_{1} \sigma_{2}^{-1} \sigma_{1} \sigma_{1} \sigma_{2}$, then it describes some knot diagram and is
easier to work with algebraically.
These are, to my understanding, two common approaches to employing the power of algebra to analyze knots by converting knot diagrams into some algebraic representation.
|
{"url":"http://mathoverflow.net/questions/87002/what-is-the-metamathematical-interpretation-of-knot-diagrams/87004","timestamp":"2014-04-17T12:58:13Z","content_type":null,"content_length":"87841","record_id":"<urn:uuid:dec986af-ff63-4499-b29b-cb20abb0171a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algorithmic Information Theory
Peter Grünwald and Paul Vitányi
In: Handbook of the Philosophy of Science, Volume 8: Philosophy of Information (2008) Elsevier Science .
We introduce *algorithmic information theory*, also known as the theory of *Kolmogorov complexity*. We explain the main concepts of this quantitative approach to defining `information'. We discuss
the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to
formally distinguish between `structural' (meaningful) and `random' information as measured by the *Kolmogorov structure function*, which leads to a mathematical formalization of Occam's razor in
inductive inference. We end by discussing some of the philosophical implications of the theory.
|
{"url":"http://eprints.pascal-network.org/archive/00004592/","timestamp":"2014-04-17T13:09:17Z","content_type":null,"content_length":"7018","record_id":"<urn:uuid:bc1c17b3-01e9-4620-9257-3ef6bf335f04>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Two-Soliton Collision for the Gross-Pitaevskii Equation in the Causal Interpretation
Under certain simplified assumptions (small amplitudes, propagation in one direction, etc.), various dynamical equations can be solved, for example, the well-known nonlinear Schrödinger equation
(NLS), also known as the Gross–Pitaevskii equation. It has a soliton solution, whose envelope does not change in form over time. Soliton waves have been observed in optical fibers, optical solitons
being caused by a cancellation of nonlinear and dispersive effects in the medium. When solitons interact with one another, their shapes do not change, but their phases shift. The two-soliton
collision shows that the interaction peak is always greater than the sum of the individual soliton amplitudes. The causal interpretation of quantum theory is a nonrelativistic theory picturing point
particles moving along trajectories, here, governed by the nonlinear Schrödinger equation. It provides a deterministic description of quantum motion by assuming that besides the classical forces, an
additional quantum potential acts on the particle and leads to a time-dependent quantum force . When the quantum potential in the effective potential is negligible, the equation for the force will
reduce to the standard Newtonian equations of classical mechanics. In the two-soliton case, only two of the Bohmian trajectories correspond to reality; all the others represent possible alternative
paths depending on the initial configuration. The trajectories of the individual solitons show that in the two-soliton collision, amplitude and velocity are exchanged, rather than passing through
one another. On the left you can see the position of the particles, the wave amplitude (blue), and the velocity (green). On the right the graphic shows the wave amplitude and the complete
trajectories in (, ) space.
With the potential , the Gross–Pitaevskii equation is the nonlinear version of the Schrödinger equation , where is the complex conjugate and the density of the wavefunction.
The exact two-soliton solution is:
, with
and , and here .
There are two ways to derive the velocity equation: (1) directly from the continuity equation, where the motion of the particle is governed by the current flow; and (2) from the eikonal
representation of the wave, , where the gradient of the phase is the particle velocity. Therefore, the quantum wave guides the particles. The origin of the motion of the quantum particle is the
effective potential , which is the quantum potential plus the potential , . The effective potential is a generalization of the quantum potential in the case of the Schrödinger equation for a free
quantum particle, where . The system is time reversible. In the source code the quantum potential is deactivated, because of the excessive computation time.
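The Demonstration's exact two-soliton formula and parameters are not reproduced here; the sketch below simply integrates a generic focusing one-dimensional nonlinear Schrödinger equation, $i\psi_t = -\tfrac{1}{2}\psi_{xx} - |\psi|^2\psi$, with a standard split-step Fourier scheme, which is enough to watch two sech-shaped solitons collide and re-emerge with their shapes intact:

```python
import numpy as np

# Split-step Fourier integration of i psi_t = -0.5 psi_xx - |psi|^2 psi
# (focusing 1-D NLS; all parameters are illustrative, not the Demonstration's).
N, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt, steps = 0.005, 4000

def soliton(x0, v, eta=1.0):
    """A bright soliton of amplitude eta, centred at x0, moving with velocity v."""
    return eta / np.cosh(eta * (x - x0)) * np.exp(1j * v * x)

psi = soliton(-15.0, +1.0) + soliton(+15.0, -1.0)   # two solitons heading at each other

half_linear = np.exp(-0.5j * k**2 * dt / 2)          # half step of the dispersive part
for _ in range(steps):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    psi *= np.exp(1j * np.abs(psi)**2 * dt)          # full step of the nonlinear part
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))

print("peak |psi| after the collision:", np.abs(psi).max())
```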
J. P. Gordon, "Interaction Forces among Solitons in Optical Fibers",
Optics Letters, 8
(11), 1983 pp. 596–598.
P. Holland,
The Quantum Theory of Motion
, Cambridge: Cambridge University Press, 1993.
|
{"url":"http://demonstrations.wolfram.com/TwoSolitonCollisionForTheGrossPitaevskiiEquationInTheCausalI/","timestamp":"2014-04-19T04:26:54Z","content_type":null,"content_length":"47590","record_id":"<urn:uuid:96fe0a64-204e-4beb-bc24-879879838715>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Implicit Differentiation
3.3: Implicit Differentiation
Created by: CK-12
This activity is intended to supplement Calculus, Chapter 2, Lesson 6.
Problem 1 – Finding the Derivative of $x^2 + y^2 = 36$
The relation $x^2 + y^2 = 36$ implicitly defines two functions of $x$, $f_1(x) = y$ and $f_2(x) = y$. Solve $x^2 + y^2 = 36$ for $y$ to write these two functions explicitly.
$f_1(x) = \qquad\qquad f_2(x) =$
Substitute the above functions in the original relation and then simplify.
$x^2 + (f_1(x))^2 = 36 \qquad x^2 + (f_2(x))^2 = 36$
This confirms that $f_1(x)$ and $f_2(x)$ explicitly define the relation $x^2 + y^2 = 36$.
Graph $f_1(x)$ and $f_2(x)$. What is the slope of the line tangent to the circle at $x = 2$?
• Why might this question be potentially difficult to answer?
• What strategies or methods could you use to answer this question?
One way to find the slope of a tangent drawn to the circle at any point $(x,y)$ on it is to find the derivatives of $f_1(x)$ and $f_2(x)$ directly.
$\frac{dy}{dx}f_1(x)= \qquad\qquad \frac{dy}{dx}f_2(x)=$
Check that your derivatives are correct by using the Derivative command (press F3:Calc > 1:d( differentiate)) on the Calculator screen.
Substitute $2$ for $x$ to find the slopes of the tangents to $x^2 + y^2 = 36$ at $x = 2$. $\frac{dy}{dx}f_1(2)= \qquad\qquad \frac{dy}{dx}f_2(2)=$
Another way to find the slope of a tangent is by finding the derivative of $x^2 + y^2 = 36$ using implicit differentiation. On the Calculator screen press F3:Calc > D:impDif( to access the impDif command.
Enter impDif$(x^2 + y^2 = 36, x, y)$.
Use this result to find the slopes of the tangents to $x^2 + y^2 = 36$ at $x = 2$, using the $y$-values that correspond to $x = 2$.
$\frac{dy}{dx}(2,y)= \qquad\qquad \frac{dy}{dx}(2,y)=$
• Is your answer consistent with what was found earlier?
• Rewrite the implicit differentiation derivative in terms of $x$ alone by substituting $f_1(x)$ and $f_2(x)$ for $y$, and compare the result with the derivatives of $f_1(x)$ and $f_2(x)$ and with the output of the impDif command.
Problem 2 – Finding the Derivative of $x^2 + y^2 = 36$
To find the derivative of a relation $F(x, y)$, differentiate each term with respect to $x$, treating $y$ as a function of $x$. For $x^2 + y^2 = 36$:
$$\frac{d}{dx}(x^2 + y^2) = \frac{d}{dx}(36)$$
$$\frac{d}{dx}(x^2) + \frac{d}{dx}(y^2) = \frac{d}{dx}(36)$$
Evaluate the following by hand.
$\frac{d}{dx}(x^2) = \qquad\qquad \frac{d}{dx}(36)=$
Use the Derivative command to find $\frac{d}{dx}(y^2)$, entering it as $\frac{d}{dx}(y(x)^2)$; writing $y(x)$ tells the calculator that $y$ is a function of $x$.
$\frac{d}{dx}(y^2) =$
You have now evaluated $\frac{d}{dx}(x^2)$, $\frac{d}{dx}(y^2)$, and $\frac{d}{dx}(36)$. Substitute these into $\frac{d}{dx}(x^2) + \frac{d}{dx}(y^2) = \frac{d}{dx}(36)$ and solve for $\frac{dy}{dx}$.
Compare your result to the one obtained using the impDif command.
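The activity above uses the calculator's impDif command; as a cross-check, the same implicit derivative can be computed in Python with SymPy (a small sketch of ours, not part of the worksheet):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)          # treat y as a function of x

# Differentiate x^2 + y^2 = 36 implicitly and solve for dy/dx.
relation = x**2 + y**2 - 36
dydx = sp.solve(sp.diff(relation, x), sp.Derivative(y, x))[0]
print(dydx)                      # -x/y(x)

# Slopes of the two tangents at x = 2, where y = +/- sqrt(32).
for yval in (sp.sqrt(32), -sp.sqrt(32)):
    print(sp.simplify(dydx.subs(y, yval).subs(x, 2)))   # -sqrt(2)/4 and sqrt(2)/4
```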
Problem 3 – Finding the Derivative of $y^2 + xy = 2$
The relation $y^2 + xy = 2$ also implicitly defines two functions, $f_1(x)$ and $f_2(x)$, that explicitly define it.
• What strategy can be used to solve $y^2 + xy = 2$ for $y$?
Solve $y^2 + xy = 2$ for $y$ by hand, then use the Solve command (press F2:Algebra > 1:solve()) to check your answer.
The derivative of $y^2 + xy = 2$ can be found either by differentiating $f_1(x)$ and $f_2(x)$ or by implicit differentiation.
Use implicit differentiation to find the derivative of $y^2 + xy = 2$, and check your result with the impDif command. (Hint: The product rule must be used to find the derivative of $xy$.)
Use the derivative you found for $y^2 + xy = 2$ to compute the slopes of the tangents at $x = -6$, using the $y$-values that correspond to $x = -6$.
$\frac{dy}{dx}(-6,y)= \qquad\qquad \frac{dy}{dx}(-6,y)=$
Verify your result graphically. Graph the two functions $f_1(x)$ and $f_2(x)$.
Extension – Finding the Derivative of $x^3 + y^3 = 6xy$
The relation $x^3 + y^3 = 6xy$ is difficult to solve explicitly for $y$.
• Find the derivative of $x^3 + y^3 = 6xy$ by implicit differentiation, and use the impDif command to verify your result.
Use this result to find the slopes of the tangents to $x^3 + y^3 = 6xy$ at $x = 1$. (Hint: Use the solve command to find the $y$-values that correspond to $x = 1$.)
|
{"url":"http://www.ck12.org/book/Texas-Instruments-Calculus-Student-Edition/r1/section/3.3/","timestamp":"2014-04-18T21:44:07Z","content_type":null,"content_length":"121791","record_id":"<urn:uuid:7b9df58c-4af3-451b-baab-ca49be13eee9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
can someone help me write a program for MATLAB for cos(theta)? SO FAR I HAVE: clc clear theta=0:pi:4*pi; plot(theta,cos(theta)) xlabel('theta') ylabel('cos(theta)') grid off. For some reason it's
displaying the graph but not a 'curve', and it's not displaying the x values as 0,pi,2pi,3pi,4pi... someone help please :/
|
{"url":"http://openstudy.com/updates/508b3dbee4b0d596c460eb5b","timestamp":"2014-04-17T21:40:26Z","content_type":null,"content_length":"64698","record_id":"<urn:uuid:333aa883-a6c8-41a7-a52e-545798381aff>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In this section we describe some of the mathematical details required for the investigation of the stochastic and shock acceleration and transport of charged particles, and the steps and
conditions that lead to the specific kinetic equations (Eq. 10) used in this and the previous chapters.
A.1. Stochastic acceleration by turbulence
In strong magnetic fields, the gyro-radii of particles are much smaller than the scale of the spatial variation of the field, so that the gyro-phase averaged distribution of the particles depends
only on four variables: time, spatial coordinate z along the field lines, the momentum p, and the pitch angle cosµ. In this case, the evolution of the particle distribution, f(t, z, p, µ), can be
described by the Fokker-Planck equation as they undergo stochastic acceleration by interaction with plasma turbulence (diffusion coefficients D[pp], D[µµ] and D[pµ]), direct acceleration (with rate
[G]), and suffer losses (with rate [L]) due to other interactions with the plasma particles and fields:
Here c is the velocity of the particles, and the last term, a function of (t, z, p, µ), is a source term, which could be the background plasma or some injected spectrum of particles. The kinetic coefficients in the Fokker-Planck
equation can be expressed through correlation functions of stochastic electromagnetic fields (see e.g. Melrose 1980, Berezinskii et al. 1990, Schlickeiser 2002). The effect of the mean magnetic field
convergence or divergence can be accounted for by adding an appropriate term
to the right hand side.
Pitch-angle isotropy: At high energies and in weakly magnetised plasmas, where the Alfvén velocity satisfies v[A] / c << 1, the ratio of the energy and pitch angle diffusion rates D[pp] / (p^2 D[µµ]) << 1,
and one can use the isotropic approximation which leads to the diffusion-convection equation (see e.g. Dung & Petrosian 1994, Kirk et al. 1988):
At low energies, as shown by Pryadko & Petrosian (1997), especially for strongly magnetised plasmas ([A] > 1), D[pp] / p^2 >> D[µµ], and then stochastic acceleration is more efficient than
acceleration by shocks (D[pp] / p^2 >> [G]). In this case the pitch angle dependence may not be ignored.
However, Petrosian & Liu (2004) find that these dependences are in general weak and one can average over the pitch angles.
A.2. Acceleration in large scale turbulence and shocks
In an astrophysical context it often happens that the energy is released at scales much larger than the mean free path of energetic particles. If the produced large scale MHD turbulence is supersonic
and superalfvénic then MHD shocks are present in the system. The particle distribution within such a system is highly intermittent. Statistical description of intermittent systems differs from the
description of homogeneous systems. There are strong fluctuations of particle distribution in shock vicinities. A set of kinetic equations for the intermittent system was constructed by Bykov &
Toptygin (1993), where the smooth averaged distribution obeys an integro-differential equation (due to strong shocks), and the particle distribution in the vicinity of a shock can be calculated once
the averaged function was found.
The pitch-angle averaged distribution function N(r, p, t) of non-thermal particles (with energies below some hundreds of GeV range in the cluster case) averaged over an ensemble of turbulent motions
and shocks satisfies the kinetic equation
The source term, a function of (t, r, p), is determined by injection of particles. The integro-differential operators
The averaged kinetic coefficients A, B, D, G, and [] = [] are expressed in terms of the spectral functions that describe correlations between large scale turbulent motions and shocks, the particle
spectra index Bykov & Toptygin 1993). The kinetic coefficients satisfy the following renormalisation equations:
Here G = ( 1 / [sh] + B). T(k, S(k, k, µ(k,
The test particle calculations showed that the low energy branch of the particle distribution would contain a substantial fraction of the free energy of the system after a few acceleration times.
Thus, to calculate the efficiency of the shock turbulence power conversion to the non-thermal particle component, as well as the particle spectra, we have to account for the backreaction of the
accelerated particles on the shock turbulence. To do that, Bykov (2001) supplied the kinetic equations Eqs. 24 - 29 with the energy conservation equation for the total system including the shock
turbulence and the non-thermal particles, resulting in temporal evolution of particle spectra.
|
{"url":"http://ned.ipac.caltech.edu/level5/March09/Petrosian2/Petrosian_appendix.html","timestamp":"2014-04-18T10:39:03Z","content_type":null,"content_length":"11929","record_id":"<urn:uuid:aac6fbbe-cacb-4c8e-b64d-a121ed598159>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This series covers all areas of research at Perimeter Institute, as well as those outside of PI's scope.
One of the cool, frustrating things about quantum theory is how the once-innocuous concept of "measurement" gets really complicated. I'd like to understand how we find out about the universe around
us, and how to reconcile (a) everyday experience, (b) experiments on quantum systems, and (c) our theory of quantum measurements. In this talk, I'll try to braid three [apparently] separate research
projects into the beginnings of an answer.
Quantum field theory in curved spacetime (QFTCS) is the theory of quantum fields propagating in a classical curved spacetime, as described by general relativity. QFTCS has been applied to describe
such important and interesting phenomena as particle creation by black holes and perturbations in the early universe associated with inflation. However, by the mid-1970\'s, it became clear from
phenomena such as the Unruh effect that \'particles\' cannot be a fundamental notion in QFTCS.
I will discuss an alternative approach to simulating Hamiltonian flows with a quantum computer. A Hamiltonian system is a continuous time dynamical system represented as a flow of points in phase
space. An alternative dynamical system, first introduced by Poincare, is defined in terms of an area preserving map. The dynamics is not continuous but discrete and successive dynamical states are
labeled by integers rather than a continuous time variable. Discrete unitary maps are naturally adapted to the quantum computing paradigm. Grover's
Modern motivations for extra spacetime dimensions will be presented, in particular the surprising AdS/CFT connection to particle compositeness. It will be shown how highly curved, "warped",
extra-dimensional geometries can naturally address several puzzles of fundamental physics, including the weakness of gravity, particle mass hierarchies, dark matter, and supersymmetry breaking. The
possibility of direct discovery of warped dimensions at
The progress in neutrino physics over the past ten years has been tremendous: we have learned that neutrinos have mass and change flavor. I will pick out one of the threads of the story-- the
measurement of flavor oscillation in neutrinos produced by cosmic ray showers in the atmosphere, and its confirmation in long distance beam experiments. I will present the history, the current state
of knowledge, and how the next generation of high intensity beam experiments will address some of the remaining puzzles.
Among the possible explanations for the observed acceleration of the universe, perhaps the boldest is the idea that new gravitational physics might be the culprit. In this colloquium I will discuss
some of the challenges of constructing a sensible phenomenological extension of General Relativity, give examples of some candidate models of modified gravity and survey existing observational
constraints on this approach.
The theory of strong interactions is an elegant quantum field theory known as Quantum Chromodynamics (QCD). QCD is deceptively simple to formulate, but notoriously difficult to solve. This simplicity
belies the diverse set of physical phenomena that fall under its domain, from nuclear forces and bound hadrons, to high energy jets and gluon radiation.
Shear viscosity is a transport coefficient in the hydrodynamic description of liquids, gases and plasmas. The ratio of the shear viscosity and the volume density of the entropy has the dimension of
the ratio of two fundamental constants - the Planck constant and the Boltzmann constant - and characterizes how close a given fluid is to a perfect fluid. Transport coefficients are notoriously
difficult to compute from first principles.
Theories of physics beyond the Standard Model predict the existence of relativistic strings, either as composite objects, or as fundamental constituents of matter. If they were created in the Big
Bang, they would very likely still be present in the universe today. This talk reviews the thirty year history of cosmic strings, and describes the latest work which finds intriguing hints in the
Cosmic Microwave Background data that the universe is filled with string.
|
{"url":"http://www.perimeterinstitute.ca/video-library/collection/colloquium?page=21&qt-seminar_series=1","timestamp":"2014-04-18T07:30:13Z","content_type":null,"content_length":"62377","record_id":"<urn:uuid:9a68ff5a-c62e-4371-b360-df9aba36098a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
|
integral x^3*cos(x^2)
my sheet says integrate by substitution then by parts. feeling a little stumped by the substitution. integral (x^3)(cos(x^2))dx
integral (x^3)cos(x^2) = (1/2) integral u cos(u) du
let u = cos(u), du = -u sin(u), dv = u, v = 1/2 u^2
= (1/2)((1/2 u^2)cos(u) - integral (((1/2)u^2)(-u sin(u)) du)
= (1/2)((1/2 u^2)cos(u) + (1/2) integral ((u^2)(u sin(u)) du)
= (1/2)((1/2)(((u^2)cos(u) + integral ((u^3)sin(u) du)
then what comes after that? or am i missing something here? also how do i make the integral sign and put things to a power?
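For reference, a worked version of the intended substitution-then-parts route (substituting $w = x^2$ rather than the $u$, $v$ choices above):
$$\int x^3\cos(x^2)\,dx = \tfrac{1}{2}\int w\cos w\,dw \quad\text{with } w = x^2,\ dw = 2x\,dx,$$
$$= \tfrac{1}{2}\left(w\sin w + \cos w\right) + C = \tfrac{1}{2}\left(x^2\sin(x^2) + \cos(x^2)\right) + C,$$
which can be checked by differentiating the result.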
|
{"url":"http://mathhelpforum.com/calculus/87154-integral-x-3-cos-x-2-a.html","timestamp":"2014-04-18T15:56:20Z","content_type":null,"content_length":"43013","record_id":"<urn:uuid:861e3c2a-76e7-4d85-a4e2-aeabe20ff5ca>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Things You Need To Know About
Your Math 30-1 Diploma Exam
Before you write your Math 30-1 Diploma there are a few things you should know about the format and set-up of the exam. By understanding how your Math 30-1 diploma exam is structured, you’ll feel
more confident and prepared to write it.
How Many Questions Are On It?
Question Format Number of Questions Percentage Emphasis
Multiple Choice 28 70%
Numerical Response 12 30%
The Math 30-1 Diploma contains a total of 40 questions. Of these 28 questions are multiple choice and 12 of them are numerical response.
How Long Do I Have To Write the Exam?
The allotted time for the Math 30-1 Diploma Examination is two and a half hours long, with an additional half hour. This means that you’ll have at maximum three hours to complete the exam.
How is The Mathematical Understanding of the Exam Divided Up?
Mathematical Understanding Emphasis
Conceptual 34%
Procedural 30%
Problem Solving 36%
Conceptual Understanding-This means that you know more than just the definitions and being able to recall simple examples. You’re able to understand what mathematical concepts are being used, and you
can recognize the various meanings and interpretations of the questions.
Procedural Understanding-This means that you know how to carry out all of the mathematical steps in a question, and you’re able to do so efficiently.
Problem Solving Understanding-This means that you can solve unique and unfamiliar problems based on what you know. You should be able to explain the process that you used your mathematical solution.
How Is The Exam Content Divided Up?
Diploma Exam Content Percentage
Relations and Functions 55%
Trigonometry 29%
Permutations, Combinations, and Binomial Theorem 16%
Relations and functions-You must be aware of how to use your calculator properly, laws of logarithms, growth and decay formulas, and the general form of transformed functions.
Trigonometry-You need to know the arc length formula, and trigonometric identities that involve sine, cosine and tangent. You should also be aware of the other identities such as the Pythagorean
trigonometric identity. An understanding of how trigonometric functions are transformed is also required.
Perms, Combs, and Binomial Theorem-You must know the difference between permutations and combinations, and which one to use in counting problems. You must also understand the fundamental counting
principle, and how to find terms and expand out the binomial theorem.
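As a small illustration of the permutation/combination distinction described above (generic numbers, not actual exam items), Python's standard library (3.8+) has both counts built in:

```python
import math

# Order matters -> permutations; order does not matter -> combinations.
print(math.perm(5, 2))   # 20 ways to pick a 1st and 2nd place from 5 runners
print(math.comb(5, 2))   # 10 ways to pick a committee of 2 from 5 people

# Binomial theorem: the coefficients of (x + y)^4 are C(4, k).
print([math.comb(4, k) for k in range(5)])   # [1, 4, 6, 4, 1]
```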
|
{"url":"http://www.bestdiplomaprep.com/math-30-1-articles/math-30-1-diploma-structure.php","timestamp":"2014-04-19T14:46:07Z","content_type":null,"content_length":"13501","record_id":"<urn:uuid:5ab92a29-99b3-4cd7-a084-ec0d2c88a77a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stephen Haas
Harvey Mudd College Mathematics 2003
The Hausdorff Dimension of the Julia Set of Polynomials of the Form z^d+c
In recent years there has been a fair amount of research about the Hausdorff dimension of Julia sets. Among the most interesting is whether the dimension of the Julia set varies continuously with the
polynomial generating it. We examine this problem and give a near-complete characterization of the answer for polynomials of the form z^d+c.
Read the final thesis (PDF).
Pictures related to my thesis topic:
• Successive zooms on M_3:
• Julia sets for:
|
{"url":"http://www.math.hmc.edu/seniorthesis/archives/2003/shaas/","timestamp":"2014-04-19T20:02:31Z","content_type":null,"content_length":"4586","record_id":"<urn:uuid:e7f68e3d-d2b2-4fae-90c0-1d753a0b8f2a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numbers News, Videos, Reviews and Gossip - Gizmodo
Some people gobble up algebra and calculus like their life depended on it; others would rather poke pins into their eyes than solve a simultaneous equation. But why is that? »
The sum of 1 + 2 + 3 + 4 + 5 + ... until infinity is somehow -1/12
Here's a fun little brain wrinkle pinch for all you non-math people out there (that should be everyone in the world*): the sum of all natural numbers, from one to infinity, is not a ridiculously big
number like you would expect but actually just -1/12. Yes, the sum of every number from one to infinity is some weird… »
You've Never Seen Pi Look So Interesting in So Many Ways
Martin Krzywinski is an artist. No, wait, he's a mathematician. Actually, scratch that: he's both, and he can make the number Pi look insanely beautiful. »
Why Times and Timezones Still Confuse the Hell Out of Developers
There have been no end of time and calendar mess-ups in software over the years, and they still seem to keep happening. So why is it that times and timezones still confuse the hell out of developers?
The Math Behind the NSA's Email Hacks
We're all outraged by the NSA's invasions of privacy, sure—but we don't perhaps understand exactly how it managed it. This video explains the maths behind the agency's surveillance. »
When Did There Become Too Many Books to Read in One Lifetime?
We've all done it: stood in a library, looking around, we've been confronted by the fact that there are way, way too many books in existence for us to ever read. But when in history did that happen?
Polynesian People Were Using Binary 600 Years Ago
Binary lies at the heart of our technological lives: those strings of ones and zeroes are fundamental to the way all our digital devices function. But while the invention of binary is usually
credited to German mathematician Gottfried Leibniz in the 18th Century, it turns out the Polynesians were using it as far back… »
It's Always 10:10 in Watch Ads
It's always 10:10 in watch ads, as this video shows. What the hell? »
The Math Hidden in Futurama
You might just watch Futurama and chuckle deeply to yourself—as you should!—but if you study it a little more closely, you'll find that it's stuffed full of numbers and math. »
This Video Finally Makes Sense of Logarithms
If math brings you out in a cold sweat, then logarithms surely leave you in a sobbing heap. But no longer, thanks to the wonderful Vi Hart. »
The Weird Math Behind Paper Sizes
Despite all the talk of the paperless office, for some reason most of us still seem to drown under piles of dead tree. But while we're all intimately familiar with the stuff, understanding where
those weird sizing conventions came from never seems to get any easier. »
What Does a Quadrillion Sour Patch Kids Look Like?
There are depressing moments. There are dark places. And then there's being a 31-year-old man carefully stacking Sour Patch Kids on the kitchen counter in a silent apartment at 2:00am. »
Look at the Insane Number Button Layouts Our Telephones Could Have Had
The year was 1960, and phones were changing. It was the beginning of the end for rotary dialing, and buttons were the future. But engineers faced an important, looming question: what order do you put
those buttons in? »
How Credit Card Numbers Are Created
If you thought the sprawl of 16 numbers across the front of your credit was randomly generated, think again: like any good string of numbers, an algorithm was involved in its creation. »
Four Infinity Puzzles to Melt Your Monday Mind
It's Monday morning and the work week ahead seems infinite. It's not though, and you should be glad because infinity isn't just long, it's also confusing. Take for instance this quartet of infinite
paradoxes that will blow your groggy mind. »
This Simple Math Puzzle Will Melt Your Brain
Adding and subtracting ones sounds simple, right? Not according to the old Italian mathematician Grandi—who showed that a simple addition of 1s and -1s can give three different answers. »
What The Hell Is a Transcendental Number?
There are some mathematical concepts that seem straightforward, but once you dig deeper seem to make less and less sense. Transcendental numbers are one of 'em—but what the hell are they? »
Population Growth and Climate Change Explained Using Lego
There's seemingly no limit to the power of Lego, and in this video Hans Rosling uses it with great panache to explain the problems of population growth and climate change. »
Wait a Minute, Does Math Actually Exist?
If you're studying for the algebra test tomorrow or thinking about how little you use math now after you failed it a million times in high school, here's something to melt your brain with just a tad:
math might not actually exist. It's not an actual thing of the universe, it's just something humans invented. Or is it… »
Why the Googolplex Number Is Insanely Difficult to Visualize, in Song
You might know that a googol—the digit 1 followed by 100 zeroes—is a very large number indeed. You might even know that a googolplex—a 1 followed by a googol of zeros—is an even bigger number. But
this video helps explain why it's such an insane concept to get your head round. »
|
{"url":"http://gizmodo.com/tag/numbers","timestamp":"2014-04-19T18:12:13Z","content_type":null,"content_length":"250948","record_id":"<urn:uuid:344558d9-0790-4571-af76-7ebdf770494a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reply to comment
January 1999
Since the dawn of time mankind has searched for ever quicker ways to travel from A to B. First it was the wheel; then horses and other animals were drafted in to help, and in the last century steam
engines started to power locomotives. Aeroplanes took off in this century, and in 1976 Concorde was the first passenger transport to break the sound barrier. The fastest method of transport in modern
times is the spacecraft, such as NASA's Space Shuttle.
Figure 1: The blue route?
But speed isn't the only consideration in travel: it's also important to make sure that the route chosen is the shortest. Imagine you were piloting Concorde from London to San Francisco, and you had
to choose a route on the map. Would you choose the straight line, marked in blue, or the long curved line marked in yellow? Well the curved line is the shortest one!
Why's that? It's because the Earth isn't flat, but maps are, so maps are always distorted. The shortest route between two points on a globe is along part of a great circle, which is a large circle
going all the way round the globe with the centre of the Earth at the centre of the circle. You can see on the picture of the world why the great circle route (in yellow) is shorter than the route
which looked straight on the flat map. (You can imagine the yellow line as being part of a bigger circle going all the way round the Earth - or you can convince yourself by looking at a toy globe!)
In general, on a surface that isn't flat, a line between two points on the surface which is as short as possible is called a geodesic. On the Earth, all geodesics are parts of great circles.
Calculating the distance
How do we calculate the great circle distance between two airports? It's important to know how far it is so that the airline companies know how much fuel to load onto their planes! It's easy to work
it out using ideas of longitude and latitude, and the scalar product (also called the dot product) of two vectors.
Figure 3: Latitude and Longitude
Think about a point $P$ on the Earth's surface with latitude $\phi$ and longitude $\lambda$, and let $R$ be the radius of the Earth.
It's easy to see that the position vector of $P$ measured from the centre of the Earth is $R(\cos\phi\cos\lambda,\ \cos\phi\sin\lambda,\ \sin\phi)$.
We can therefore calculate the dot product of the position vectors $\mathbf{a}$ and $\mathbf{b}$ of two points with latitudes $\phi_1$, $\phi_2$ and longitudes $\lambda_1$, $\lambda_2$: it is $R^2\left(\cos\phi_1\cos\phi_2\cos(\lambda_1-\lambda_2) + \sin\phi_1\sin\phi_2\right)$.
But there is another formula for the dot product: $\mathbf{a}\cdot\mathbf{b} = R^2\cos\theta$, where $\theta$ is the angle between the two position vectors, so equating the two expressions gives $\theta$.
Now we use the well-known formula for the length of an arc of a circle: it is $R\theta$, with $\theta$ measured in radians, and this is the great circle distance.
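A minimal Python version of this calculation, using the spherical-law-of-cosines form just derived (the Earth radius and the sample coordinates below are approximate values of ours, not figures from the article):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius, approximate

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great circle distance between two points given in degrees of latitude/longitude."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    cos_theta = (math.cos(p1) * math.cos(p2) * math.cos(l1 - l2)
                 + math.sin(p1) * math.sin(p2))
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp against rounding error
    return R_EARTH_KM * theta

# Roughly London Heathrow to San Francisco (approximate coordinates).
print(round(great_circle_km(51.5, -0.45, 37.6, -122.4)), "km")  # about 8600 km
```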
London to San Francisco
London Heathrow airport has latitude
Look up some locations in an atlas and work out the distances between them - for example, you could work out the distance from your home town to London. Does it differ much from the distance you'd
get if you just measured it on a map? Would the same happen for all pairs of locations?
The real routes that aeroplanes take aren't always the great circle ones, for various reasons. There are sometimes problems with flying over other countries' airspaces. Wind speed and direction make
a difference as well: it can be quicker to deviate from the great circle route in order to pick up a beneficial tailwind!
If you'd like to plot your own great circle routes there's a nifty page at gc.kls2.com which does just that. The same site has a useful help file on the subject.
Travelling at the speed of light?
Einstein's theory of relativity tells us that it's impossible for us (or anything else that has a mass!) to travel at the speed of light. But at the dawn of the new Millennium, some people are
determined to try - after a fashion!
The inhabited land area which will first officially see a dawn on the 1st of January, 2000, is Pitt Island, east of New Zealand in the Pacific Ocean. As the Earth turns, the Sun will rise in each
country in turn going West. Some intrepid travellers plan to be in the Pacific for the sunrise and then fly West at just the right speed so that the Sun is always just rising on the horizon behind
them. Of course they're not really travelling at the speed of light, but from where they are it'll look like they're managing to stay ahead of the Sun! For them, it'll be dawning on the Millennium
continuously for 24 hours.
Figure 6: Pacific sunrise
How fast would they need to fly to accomplish this? Fortunately it turns out to be a sensible speed! If they were flying around the equator, they would need to travel the circumference of the Earth,
a length of roughly 40,000 kilometres, in 24 hours. That works out to about 1,700 kilometres per hour (just over 1,000 mph), which is within the capabilities of a supersonic aircraft such as Concorde.
About the author
Dr. Robert Hunt is the Editor of PASS Maths. He is a Lecturer in the Department of Applied Mathematics and Theoretical Physics at Cambridge University, and a Fellow of Christ's College.
|
{"url":"http://plus.maths.org/content/comment/reply/2383","timestamp":"2014-04-20T08:29:21Z","content_type":null,"content_length":"37639","record_id":"<urn:uuid:21486862-56e3-409e-abc9-fea559f1820e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GLSL & Matrix Math
12-29-2012, 11:13 PM #1
Junior Member Newbie
Join Date
Dec 2012
GLSL & Matrix Math
I'm new to 3-D math so I'm not sure if I'm doing this correctly.
I currently have a GLfloat[ 36 ] pyramid set to the buffer, and that data is sent to an vec4 Vertex in the shader.
Code :
#version 400 core
layout( location = 0 ) in vec4 Vertex;
uniform mat4 Model;
void main()
{
    gl_Position = Model * Vertex;
}
I have a 4x4 matrix to rotate around the y-axis, and send that as a uniform to the "Model" variable in the shader.
Code :
glm::mat4 ModelR(
cos( rStepAngle ), 0.f, -sin( rStepAngle ), 0.f,
0.f, 1.f, 0.f, 0.f,
sin( rStepAngle ), 0.f, cos( rStepAngle ), 0.f,
0.f, 0.f, 0.f, 0.f );
GLint ModelLoc = glGetUniformLocation( programs.basicShader, "Model" );
glUniformMatrix4fv( ModelLoc, 1, GL_FALSE, glm::value_ptr( ModelR ) );
I'm getting nothing on my screen except the clear color. I think I may be misunderstanding something here. Vertex is going to be 3 components (technically 4, with 1 at the end), that gets
multiplied by the Model rotation. How many times is the main() of the shader being run? My understanding of it was that main() is run until all 12 vertices of the pyramid are multiplied by the
Model matrix. Is this correct?
There's probably something wrong with my math...
Last edited by Qoheleth; 12-30-2012 at 12:27 AM.
You have no projection matrix nor view-to-camera matrix - this code will only move the vertex in world space.
Vertex is going to be 3 components (technically 4, with 0 at the end),
Vertices have 1 at the end so translations can happen; vectors have a 0 so only scaling and rotation can happen.
thanks tonyo. I just wanted to do a rotation in an orthographic projection. I thought by default opengl does that automatically if a perspective projection is not specified. For example with just
gl_Position = vertex, I am able to get a triangle on the screen with orthographic projection.
To see your object, your gl_Position xyz values must stay within -1 to 1 and the w value must be 1. There are no defaults when using shaders.
You need a projection matrix. For ortho, try
[1, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, -1, 0]
[0, 0, 0, 1]
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, FALSE, inverse_matrix);
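To make the advice above concrete, here is a small numeric check, sketched in Python with NumPy rather than taken from this thread. It applies the suggested orthographic matrix after a simple y-axis rotation and shows that a vertex whose coordinates are already in the -1 to 1 range, with w equal to 1, stays inside the visible volume.
import numpy as np

# The orthographic projection suggested above: identity apart from a z flip.
ortho = np.diag([1.0, 1.0, -1.0, 1.0])

# A rotation of 30 degrees about the y axis.
angle = np.radians(30.0)
c, s = np.cos(angle), np.sin(angle)
rot_y = np.array([
    [  c, 0.0,   s, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [ -s, 0.0,   c, 0.0],
    [0.0, 0.0, 0.0, 1.0],   # a 1 in this corner keeps w = 1 after the multiplication
])

vertex = np.array([0.5, 0.5, 0.0, 1.0])   # homogeneous position with w = 1
clip = ortho @ rot_y @ vertex
print(clip)   # x, y, z stay within [-1, 1] and w is still 1, so the vertex is visible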
|
{"url":"http://www.opengl.org/discussion_boards/showthread.php/180634-GLSL-Matrix-Math?p=1246540&viewfull=1","timestamp":"2014-04-19T04:42:43Z","content_type":null,"content_length":"50772","record_id":"<urn:uuid:762422e2-97a2-4a25-9b34-b79abfbc10ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Navajo Rug Weaver
Weaving Coordinates
Before creating the design, the weaver marks two places on the rug (usually using chalk):
1) By counting the weft strands and dividing by two, she finds the horizontal center of the rug.
This is the equivalent of finding X = 0.
2) Using a loose strand of yarn, she determines the total height that the rug will be. She then folds the yarn in half, and uses this measure to mark the vertical center of the rug.
This is the equivalent of finding Y = 0.
Therefore, for the rug on the right, the center of the rug is defined where X = 0 (determined by weft count) and Y = 0 (determined by yarn length), which is the Origin on a cartesian grid.
While weaving, the weaver will often count the number of wefts in each design element to ensure that the rug to the right of the horizontal center is an exact reflection of the rug's left side. At
the vertical center, the weaver often reflects the design across the X axis. This process of counting to keep track of where design elements are to be placed is crucial for making the rugs
symmetrical in appearance.
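As a small illustration of the counting described above (a sketch for illustration, not part of the original page), the weaver's two markings amount to expressing every position on the rug relative to the centre lines:
def rug_coordinates(weft_index, total_wefts, height, total_height):
    """Map a weft position and a height above the bottom edge to (x, y) coordinates.

    x = 0 at the horizontal centre (total_wefts / 2), found by counting weft strands;
    y = 0 at the vertical centre (total_height / 2), found by folding the measuring yarn.
    """
    x = weft_index - total_wefts / 2
    y = height - total_height / 2
    return x, y

print(rug_coordinates(weft_index=30, total_wefts=60, height=45, total_height=90))  # (0.0, 0.0), the origin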
|
{"url":"http://csdt.rpi.edu/na/rugweaver/geometry/Coordinates_5.html","timestamp":"2014-04-19T11:57:18Z","content_type":null,"content_length":"4365","record_id":"<urn:uuid:031f3b7e-2497-486c-96e9-80e24ad86f5b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Word Problems: Vectors - Non-Right Triangle Relationships AlgebraLAB: Word Problems
In order to solve problems involving vectors, their resultants, and dot products, it is necessary to be comfortable working with vector components, the Law of Sines, and the Law of Cosines.
A typical problem of this kind can be solved by adding and/or subtracting vectors, using a dot product, or applying the Law of Sines or Law of Cosines to give us information about vectors that form a non-right triangle. Usually you are asked to find information about unmeasured components and angles. Note that we will represent vectors either by using bold type (for example, OX) or by using arrow vector notation.
Let's look at two introductory examples of this type of problem.
First we make a diagram in standard position. The attempted flight path of the plane is represented by the vector OX, drawn with a bearing of 40° and a magnitude of 400 mph, the plane's air speed. Notice that the angle OX makes with the horizontal axis as shown is 50°, because the 40° bearing is measured clockwise from North. The vector XY represents the 50 mph wind from the South. The vector OY is the resultant path of the plane, affected by the wind. In the diagram shown below, the known magnitudes and angles are labeled. It is useful to put such a diagram in "standard position" with North being at the top of the vertical axis. We then measure a bearing clockwise from North to label the measurement of an angle.
We need to find the vector OY (ground speed), which is the sum of OX (air speed) and XY (wind speed). One way to do this is to add the components of OX and XY and find the magnitude and direction of OY, their resultant. We have the following:
OX = (400 cos 50°, 400 sin 50°) ≈ (257.1, 306.4)
XY = (0, 50)
OY = OX + XY ≈ (257.1, 356.4)
We find the magnitude of OY, the plane's ground speed, using the Pythagorean Theorem:
|OY| = √(257.1² + 356.4²) ≈ 439.479 mph
The angle that OY makes with the horizontal axis in the diagram can be found as follows:
arctan(356.4 / 257.1) ≈ 54.194°
The plane has been pushed (54.194° − 50°) = 4.194° off its attempted path if no corrections are made for the wind. The plane's new bearing is
90° − 54.194° = 35.806°
The final bearing and ground speed of the plane are 35.806° at 439.479 mph.
We first make a diagram in standard position. The angle made by the vector OY with the horizontal axis is 50°, because bearings are measured clockwise from North. The new bearing is the direction the plane should take in order to allow the wind to push it up to the desired 40° bearing, which is the angle made by the vector OY with the vertical (North) axis.
We do not know the angle formed by the vector OX and the horizontal axis, labeled as (50° − θ) in our diagram.
In our diagram, we labeled the angle YOX as θ. Using triangle YOX, we can apply the Law of Sines as follows:
sin θ / 50 = sin 40° / 400, so sin θ = 50 sin 40° / 400 ≈ 0.080 and θ ≈ 4.6°.
The pilot must therefore aim 4.6° East of the desired 40° bearing, or at a new bearing of
(40° + θ) = 44.6°
To find the magnitude of the vector OY, we must first know the measure of the angle OXY in the triangle OXY. To do this, we will start with the fact that the sum of the angles in a triangle equals 180°. Note that the angle at Y is b = 40° due to vertical angles being equal.
∠OXY = 180° − b − θ = 180° − 40° − 4.6° = 135.4°
We will now use the Law of Sines once again to find the magnitude of the vector OY:
|OY| / sin 135.4° = 400 / sin 40°, so |OY| ≈ 436.9 mph.
Remember that when working vector problems that involve currents, the vector equation is:
air speed + wind speed = ground speed
OX + XY = OY
In order to reach its desired destination, our plane will fly with an air speed of 400 mph on a bearing of 44.6°. Its resultant ground speed will be 436.9 mph at a bearing of 40°.
Now that we can determine the components of the vectors OX and OY, we can use their dot product to check our calculations and verify that the angle YOX (θ in our diagrams) is 4.6°.
OY = (436.9cos(50°), 436.9sin(50°)) = (280.8, 334.7)
OX = (400cos(45.4°), 400sin(45.4°)) = (280.9, 284.8)
Recall that in general, if v = (v₁, v₂) and u = (u₁, u₂) are two vectors, their dot product can be expressed as either
u · v = u₁v₁ + u₂v₂ or u · v = |u| |v| cos(θ),
where |u| and |v| represent the magnitudes of the vectors u and v, and θ is the angle between them.
Applying the second formula to our situation results in the formula
cos(θ) = (OX · OY) / (|OX| |OY|) = (280.9 × 280.8 + 284.8 × 334.7) / (400 × 436.9)
Since this is a cumbersome calculation, you should use a calculator.
The result is cos(θ) ≈ 0.9968, so θ ≈ 4.6°, which confirms our earlier answer.
In some situations, the Laws of Sines and Cosines work very well. In other situations, it might be necessary to first calculate the components of the vectors and then use the dot product.
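The worked examples above can also be checked numerically. The short Python sketch below (an illustration, not part of the original page) rebuilds the two vectors from the figures quoted in the text and recovers the 4.6° angle from the dot product.
from math import radians, degrees, cos, sin, acos, hypot

def from_bearing(magnitude, bearing_deg):
    """Convert a magnitude and a compass bearing (clockwise from North) to (x, y) components."""
    a = radians(90.0 - bearing_deg)   # a bearing from North becomes an angle from the horizontal axis
    return magnitude * cos(a), magnitude * sin(a)

OX = from_bearing(400.0, 44.6)        # air speed on the corrected heading
OY = from_bearing(436.9, 40.0)        # resultant ground speed on the desired track

dot = OX[0] * OY[0] + OX[1] * OY[1]
theta = degrees(acos(dot / (hypot(*OX) * hypot(*OY))))
print(round(theta, 1))                # about 4.6, matching the Law of Sines result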
|
{"url":"http://www.algebralab.org/Word/Word.aspx?file=Trigonometry_ResultantsDotProducts.xml","timestamp":"2014-04-18T11:11:32Z","content_type":null,"content_length":"43318","record_id":"<urn:uuid:077ab710-c806-477b-9204-1b9e8f772ca9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A simple Big Data analysis using the RevoScaleR package in Revolution R
This post from Stephen Weller is part of a series from members of the Revolution Analytics Engineering team. Learn more about the RevoScaleR package, available free to academics as part of Revolution
R Enterprise — ed.
The RevoScaleR package, installed with Revolution R Enterprise, offers parallel external memory algorithms that help R break through memory and performance limitations.
RevoScaleR contains:
• The .xdf data file format, designed for fast processing of blocks of data, and
• A growing number of external memory implementations of the statistical algorithms most commonly used with large data sets
Here is a sample RevoScaleR analysis that uses a subset of the airline on-time data reported each month to the U.S. Department of Transportation (DOT) and Bureau of Transportation Statistics (BTS) by
the 16 U.S. air carriers. This data contains three columns: two numeric variables, ArrDelay and CRSDepTime, and a categorical variable, DayOfWeek. It is located in the SampleData folder of the
RevoScaleR package, so you can easily run this example in your Revolution R Enterprise session.
1. Import the sample airline data from a comma-delimited text file to an .xdf file. When we import the data, we convert the string variable to a (categorical) factor variable using the stringsAsFactors argument:
inFile <- file.path(rxGetOption("sampleDataDir"), "AirlineDemoSmall.csv")
rxTextToXdf(inFile = inFile, outFile = "airline.xdf", stringsAsFactors = T, rowsPerRead = 200000)
There are a total of 600,000 rows in the data file. Specifying the argument rowsPerRead allows us to read and write the data in 3 blocks of 200,000 rows each.
2. View basic data information. The rxGetInfoXdf function allows you to quickly view some basic information about the data set and variables.
rxGetInfoXdf("airline.xdf", getVarInfo = TRUE, numRows = 20)
Setting the 'numRows' argument allows you to retrieve and display the first portion of the data.
3. Explore the data. Use the rxHistogram function to show the distribution of flight delay by the day of week
rxHistogram( ~ ArrDelay|DayOfWeek, data = "airline.xdf")
Next, we compute summary statistics for the arrival delay variable
rxSummary( ~ ArrDelay, data = "airline.xdf")
4. Estimating a Linear Model. Next, we fit a linear model in RevoScaleR using the 'rxLinMod()' function, passing as input the newly created XDF datafile. The purpose for fitting the model is
to compute group means of arrival delay for each scheduled departure hour for both weekdays and weekends. We use this information subsequently to create a 'lattice-style' conditioned lineplot of the group means.
We use the RevoScaleR 'F()' function here, which tells the rxLinMod() function to treat a variable as a 'factor' variable. We also use the ability to create new variables "on-the-fly" by using the
transforms argument to create the variable "Weekend":
test.linmod.fit <- rxLinMod(ArrDelay ~ F(Weekend) : F(CRSDepTime),
transforms=list(Weekend = (DayOfWeek == "Saturday") | (DayOfWeek == "Sunday")),
cube = TRUE, data = "airline.xdf")
The 'test.linmod.fit$countDF' component, contains the group means and cell counts. Since the independent variables in our regression were all categorical, the group means are the same as the
coefficients. We can do a quick check by taking the sum of the differences:
linModDF <- test.linmod.fit$countDF
sum(linModDF$ArrDelay - coef(test.linmod.fit))
The output from our linear model estimation includes standard errors of the coefficient estimates. We can use these to create confidence bounds around the estimated coefficients. Let's add them as
additional variables in our data frame:
linModDF$coef.std.error <- as.vector(test.linmod.fit$coef.std.error)
linModDF$lowerConfBound <- linModDF$ArrDelay - 2*linModDF$coef.std.error
linModDF$upperConfBound <- linModDF$ArrDelay + 2*linModDF$coef.std.error
We'll make two more changes before exploring the data graphically: create an integer variable from the factor variable created by the F() function, and give labels to the "weekend" factor variable.
linModDF$DepartureHour <- as.integer(levels(linModDF$F.CRSDepTime.))[linModDF$F.CRSDepTime.]
levels(linModDF$F.Weekend.) = c("Weekday", "Weekend")
5. Plot the results. We can use rxLinePlot to create a conditioned plot, with weekdays shown in one panel and weekends the other. Here is the call to produce the lineplot:
rxLinePlot( lowerConfBound + upperConfBound + ArrDelay ~ DepartureHour | F.Weekend.,
data = linModDF, lineColor = c("Blue1", "Blue2", "Red"),
title = "Arrival Delay by Departure Hour: Weekdays and Weekends")
The line plot is informative, as it clearly shows that our estimates of arrival delays in the early hours of the morning are not very precise because of the small number of observations.
A comparison of timing between Open Source R and proprietary Revolution R would have been nice...specially since 600K X 3 data can be crunched in GNU R anyways. And, what about a real big data
problem...for eg 3000 columns instead of 3 and 6 million rows instead of 600K? I'd like to see that.
I second Nick's comments.
You can run the same analysis on the large airline data, which contains 123,534,969 observations and 30 variables.
The large airline data can be downloaded here:
The extracted zip contents of the XDF datafile is 13.4 GB large.
|
{"url":"http://blog.revolutionanalytics.com/2011/05/big-data-analysis-in-revolution-r.html","timestamp":"2014-04-19T15:11:43Z","content_type":null,"content_length":"34848","record_id":"<urn:uuid:81f76f39-9b16-43b2-81f0-2871e2a7dc9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
and survey-related effects, and interpreting the results appropriately. Assessment of groups for the adequacy of intake also involves choosing between two methods: (1) the probability approach or (2)
the Estimated Average Requirement (EAR) cut-point method. Both are presented in detail in Chapter 4.
Individuals in a group vary both in the amounts of a nutrient they consume and in their requirements for the nutrient. If information were available on both the usual intakes and the requirements of
all individuals in a group, determining the proportion of the group with intakes less than their requirements would be straightforward. One would simply observe how many individuals had inadequate
intakes. Unfortunately, collecting such data is impractical. Therefore, rather than being observed directly, the prevalence of inadequate intakes in the group can only be approximated by using other methods.
Using the EAR to Assess Groups
Regardless of the method chosen to actually estimate the prevalence of inadequacy, the EAR is the appropriate DRI to use when assessing the adequacy of group intakes. To demonstrate the pivotal
importance of the EAR in assessing groups, the probability approach and the EAR cut-point method are described briefly below.
The Probability Approach
The probability approach is a statistical method that combines the distributions of requirements and intakes in the group to produce an estimate of the expected proportion of individuals at risk for
inadequacy (NRC, 1986). For this method to perform well, little or no correlation should exist between intakes and requirements in the group. The concept is simple: at very low intakes the risk of
inadequacy is high, whereas at very high intakes the risk of inadequacy is negligible. In fact, with information about the distribution of requirements in the group (median, variance, and shape), a
value for risk of inadequacy can be attached to each intake level. Because there is a range of usual intakes in a group, the prevalence of inadequacy—the average group risk—is estimated as the
weighted average of the risks at each possible intake level. Thus, the probability approach combines the two distributions: the requirement distribution which provides the risk of inadequacy at each
intake level, and the usual intake distribution which provides the intake levels for the group and the frequency of each.
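As a rough numerical illustration of the two methods (a sketch, not taken from the report; the intake and requirement figures below are invented purely for illustration), the probability approach averages the risk of inadequacy over the observed usual intakes, while the EAR cut-point method simply counts intakes below the EAR.
import numpy as np
from scipy.stats import norm

# Invented requirement distribution for a nutrient: the median is the EAR.
ear, requirement_sd = 60.0, 10.0

# Invented usual intakes for a small group of individuals.
usual_intakes = np.array([45.0, 55.0, 62.0, 70.0, 85.0, 90.0, 52.0, 66.0])

# Probability approach: the risk at intake x is P(requirement > x); the group
# prevalence of inadequacy is the average of these risks over the intake distribution.
risk = 1.0 - norm.cdf(usual_intakes, loc=ear, scale=requirement_sd)
prevalence_probability_approach = risk.mean()

# EAR cut-point method: the proportion of usual intakes that fall below the EAR.
prevalence_cut_point = np.mean(usual_intakes < ear)

print(prevalence_probability_approach, prevalence_cut_point)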
|
{"url":"http://www.nap.edu/openbook.php?record_id=9956&page=8","timestamp":"2014-04-18T13:27:17Z","content_type":null,"content_length":"41228","record_id":"<urn:uuid:5cbccd5a-9a1e-431d-abb6-e79da787017a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Constructions in Set
Eric: How is this (note: terminal object) the universal cone over the empty diagram?
Toby: It seems to me that this is really a question about terminal objects in general than about terminal objects in $Set$. A cone over the empty diagram is simply an object, and a morphism of cones
over the empty diagram is simply a morphism. A universal cone over a diagram $J$ is a cone $T$ over $J$ such that, given any cone $C$, there is a unique cone morphism from $C$ to $T$. So a univeral
cone over the empty diagram is an object $T$ such that, given any object $C$, there is a unique morphism from $C$ to $T$. In other words, a universal cone over the empty diagram is a terminal object.
I don't see the point of the last paragraph before this query box. Already at the end of the previous paragraph, we've proved that $\bullet$ is a terminal object, since there is a unique function
(morphism) to $\bullet$ from any set (object) $C$. It almost looks like you wrote that paragraph by modifying the paragraph that I had written in that place, but that paragraph did something
different: it proved that ${!}$ was unique. Apparently, you thought that this was obvious, since you simply added the word ‘unique’ to the previous paragraph.
Alternatively, if you want to keep a paragraph that proves unicity, then you can remove ‘unique’ and rewrite my original unicity proof in terminology more like yours, as follows:
Now let ${!}'\colon C \to \bullet$ be any function. Then
${!}'(z) = * = {!}(z)$
for any element $z$ of $C$, so ${!}' = {!}$.
Eric: Thanks Toby. I think what I’m looking for is a way to understand that a singleton set is the universal cone over the empty diagram. All these items should be seen as special cases of limit.
Unfortunately, I don’t understand limit well enough to explain it. In fact, that is one of the reasons to create this page, i.e. so that I can understand limits :)
The preceding paragraph was my attempt to make it look like a limit, but I obviously failed :)
Ideally, this section would show how terminal object is a special case of limit somehow.
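One way to spell this out, added here only as a sketch using the general definition of a limit: a cone over a diagram $F\colon J \to Set$ with vertex $C$ consists of $C$ together with one morphism $C \to F(j)$ for each object $j$ of $J$, compatible with the arrows of $J$. When $J$ is empty there are no objects $j$, so a cone with vertex $C$ is no data at all beyond $C$ itself, and a morphism of cones is just a morphism. The limit is the universal such cone: an object $T$ with a unique morphism $C \to T$ from every $C$, which is exactly a terminal object. In $Set$ the singleton $\bullet = \{*\}$ has exactly one function into it from any set (send everything to $*$), so the limit of the empty diagram is $\{*\}$.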
|
{"url":"http://ncatlab.org/nlab/show/Understanding+Constructions+in+Set","timestamp":"2014-04-21T02:02:00Z","content_type":null,"content_length":"18615","record_id":"<urn:uuid:c5af1588-2a69-4dd9-b8b6-0e7af3cc794c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What question would you ask to identify whether or not you were chatting with a well developed software or a person? | A conversation on TED.com
What question would you ask to identify whether or not you were chatting with a well developed software or a person?
Imagine an experiment where you are asked to chat with one hundred people online, with no sound or image, just text. Three of them are actually not real; they are extremely good automated response systems. Your task is to identify those three. You are allowed to ask everyone only one question, and it must be the same question for all. The people on the other end are specifically chosen such that no two of them have similar personalities. The programs are also each given a unique personality. The only trick is that, while you ask questions, the programs observe the responses of everybody else and may or may not change their behavior based on that. What would your question be?
P.S. If you would like to be sure how good 'extremely good' is for the automated response systems in the thought experiment above, you may consider them to be the best such systems you think are possible.
Closing Statement from Farrukh Yakubov
Now that the conversation is over I would like to leave you with more thoughts.
Imagine, this experiment took place and you asked your question, and indicated three of the participants as programs. What if this experiment was not what you thought it was, and after the experiment
you were told that 100 participants were all human or all programs, or even a single person answering 100 different ways? What if the purpose of the experiment was not about the capabilities of
programs, but about the people - to see how people perceive intelligent software? Did you think about this possibility?
On the other hand, if the experiment was to test the programs, how effective do you think it would be to use this same question of the experiment? i.e. asking "What question would you ask to identify whether or not you were chatting with a well developed software or a person?" of each of the 100 participants.
It is up to you to choose the post-experiment scenario, and you would be correct. Because the experiment can work both ways, whether you decide to look at this experiment as an attempt to test programs, or a way of understanding people's understanding of programs.
Jan 22 2014: You are in a prison with two other people, one always lies and the other always tells the truth. There are two doors in the prison, one leads to sudden death, the other to freedom.
Both people know what is behind each door. You may ask one question to either person and walk out to freedom, what is the question?
Jan 23 2014: The comment above was just how I would answer to Keith's question if I was one of the 100 on the other end of the network. But if you mean how would I judge if I was the
one asking questions, then I would not expect everyone to answer the right way, because there might not be single correct way to answer. Instead, one way I could judge is to get all
100 answers, then compare them.
Jan 23 2014: You and I have very similar interests and knowledge backgrounds I see, my guess is less than 1 in a billion can solve that problem without looking it up on the internet. That
one was easy, would you like to try this one? Can you tell me how to sort data without moving it? That took IBM's best over thirty years, I did it over a weekend 46 years ago. If you get
that one I will give you a really hard one about quantum physics. I am curious to see if you have any limitations.
Jan 23 2014: Concise way of explaining this, is that those two (people in the question) behave like quantum entangled particles. Longer verbal explanation is below:
Hi Yoka, I think Keith said it's wrong because just asking any one of them about the safe way out does not provide sufficient information to identify where the doors lead. You may
just get lucky and ask the person that knows the truth, or it may turn out otherwise. The trick is to assume the person you are going to talk to could be both of them. If you asked
any one of them which way the other one would point to as safe, they could either lie or tell the truth. But if they lie, then the other person would tell the truth, and vice versa.
While having only two possible answer choices, opposite of lie is the truth, of truth is a lie.
Therefore, no matter who you ask, you either get truth about the lie, or a lie about the truth. Thus you get a lie. Now you can be sure about where the doors lead.
Jan 23 2014: Thank you for your elaboration. I think I understand it. But I meant to take all of us three to get out of the prison. I can just let them open the door and follow them to go
out. So if the liar wants to survive, he has to tell the truth.
And actually, I don't think this kind of question can help me judge a person on the internet in our real life. I'd be too lazy to answer it and pass my attention to chat with other
Jan 23 2014: Thanks for your patience Yoka, I figured Farrukh could help better than I could, he is a smart and gentle guy. The question is a brain teaser and not at all easy to solve
but you just plowed into it anyway and I give you a thumbs up for trying. I enjoy your comments and think you have a lot to offer so hang in there and fire away anytime you like.
Jan 23 2014: The heap sort is a comparative sort, and still an incredibly slow sort compared to mine. I'll give you a hint: my sort does not sort anything. It operates as fast as the records can be read, with no data movement and near zero cpu time. It was ingenious 46 years ago and as far as I know it is still the fastest sort in the world. Some Professors at Stanford challenged me to beat their sort version because theirs is the fastest sort ever published. My sort has never been published and, aside from my Professor, a retired Air Force Mathematician, no one has ever seen my code. It was my first program, a simple assignment for class, and it was supposed to be written in COBOL; however, I wrote it in Fortran, which I taught myself, and he did not understand the code.
By the way I had a good laugh about your "quantum entangled particles" explanation. By the way if you have not seen Princess Bride by all means watch it some time.. 3 min. part on logic-
Jan 24 2014: The first time I thought about this I assumed no data movement meant that using any memory (other than where the data already resides) for structures holding the sorting information is not allowed. Also, I'm assuming linear complexity when you say "zero cpu time". Please let me know if you meant something else other than the above. Also, does your design work with any type of data with the same efficiency? From what you describe it sounds as if it's a method of accessing data as if it were sorted, while the order of the data entries remains unchanged.
If the purpose is just to provide the sorted index of a requested entry, the Selection algorithm for finding the kth smallest item in a set has linear complexity. But it is not ideal if
random kth items are being continuously accessed.
Thus I have a solution in mind, that modifies, reuses and combines existing methods to create generic non-comparative sorting that works with a set of data (let size of the set be
'n'), where each item has arbitrary length, and does so in linear time.
Edit: I don't expect it to be same or similar to what you have in mind, its just another way of doing things.
Algorithm is explained on the next comment. This is going to be divided into few chunks due to limits of this conversation platform.
Jan 24 2014: Continuation of my previous comment:
It does not modify the original data set, but produces an array of pointers (referred as the map) of length n. Other memory that will be used is of size 256 integers (referred as the
workspace), which is no longer required after completion of the algorithm. I'm going to start describing it from the lowest component to highest. Also, I'll use C notation to avoid
wordy sentences.
The first component takes advantage of pointer manipulation and the underlying architecture. It is a partial Counting sort. This stage takes in only a set of bytes.
1.Reset workspace to zeros.
2.for each item e in the input set, perform workspace[e]++ //offset of each entry in workspace represents a value of an item; value at the offset represents the # of items in the set
that are equal to the item.
3.for i=1 to 'size of workspace', perform workspace[i]+=workspace[i-1] //value of each entry in workspace represents #of items in the set that are less than or equal to the item with
value 'offset'.
4. The first component does not proceed with constructing a sorted array, but instead provides a way to find the index of each item as if the set were sorted. The index of 'someItem' from the input set in a sorted set would be workspace[someItem]. The higher level component will obtain the index for each item exactly once.
The second component is a radix sort, but bytes will be used for grouping instead of bits. The map is initialized such that map[i] contains the address of set[i]. At each iteration, the first component is used to divide each subsequent set into up to 256 groups, until the groups no longer need sorting, i.e. are of length 1. Also, the actual items in the set will not be moved around; instead only the pointers in the map are modified such that map[i] is the index of the ith item in a "sorted" set.
Complexity is explained on the next comment.
Jan 24 2014: Continuation of my previous comment:
Counting sort (the first subcomponent) has complexity O(n+k), where k is the maximum possible value of each integer item (256 in this case) and n is the length of the current subset. This is a stable non-comparative sort.
Radix sort using a stable non-comparative sort has an execution time of Θ(d(n+k)), where d is the length of the items in the set and n is the size of the set. For arbitrary length items, the upper bound should be O(p(n+k)), where p is the average length of the items in the set. P.S. Items of length less than p will no longer be in subsets of size larger than 1 after p iterations.
I may not use the same method if the nature of the input is known beforehand.
Final comment, in the process I discovered this platform does not include anything after 'less than' symbol.
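For readers trying to follow the description above, here is a rough Python sketch of the same general idea. It is a reconstruction for illustration only, not Farrukh's actual code, and it uses a simpler least-significant-byte radix pass rather than the recursive grouping he describes; the original items are never moved, and only an array of indices (the 'map') is produced.
def counting_index(values, k=256):
    # A stable counting pass over byte-sized keys: histogram, running totals,
    # then one sorted position per item, mirroring steps 1-4 above.
    workspace = [0] * k
    for v in values:
        workspace[v] += 1
    for i in range(1, k):
        workspace[i] += workspace[i - 1]
    order = [0] * len(values)
    for idx in range(len(values) - 1, -1, -1):   # reverse scan keeps the sort stable
        workspace[values[idx]] -= 1
        order[workspace[values[idx]]] = idx
    return order                                  # order[j] is the original index of the j-th smallest key

def argsort_bytes(items):
    # Byte-wise radix wrapper: repeated counting passes from the last byte position
    # to the first. The items themselves never move; only the index map changes.
    order = list(range(len(items)))
    width = max(len(b) for b in items)
    for pos in reversed(range(width)):
        keys = [items[i][pos] if pos < len(items[i]) else 0 for i in order]
        order = [order[j] for j in counting_index(keys)]
    return order

data = [b"delta", b"alpha", b"charlie", b"bravo"]
print([data[i] for i in argsort_bytes(data)])     # [b'alpha', b'bravo', b'charlie', b'delta']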
Jan 25 2014: Farrukh - can you tell me if I'm not understanding your algorithm properly? I believe your algorithm is n-log-n and not linear, because the original question placed no
constraints on the size of each input element.
If each input element were allowed to be a random 64-bit integer, the size of your work space would be 16 quintillion bytes, which would be an issue.
Am I missing something?
Jan 23 2014: I'm assuming you don't know which is which. It took me some time, but I think I figured it out. You ask either of them: what will the other guy say is the door to sudden death?
The person will indicate a door, that's the one you want to take. Alternatively, you can ask which will the other guy say is the door to freedom, and take the door not indicated.
Jan 23 2014: Very good but Farrukh posted the answer 25 minutes earlier. Did you look it up or figure it out?
Another way to phrase it is: Which door will the other guy tell me go through? and then go through the other one.
Good work out Farrukh, Yoka and Timo... remember it is the journey that is most important and all of you took the same journey. Because you got different answers should in no way spoil
your journey because there "is" no destination, the destination is an illusion. Buddha put it this way:
"Nirvana is this moment seen directly. There is no where else than here. The only gate is now. The only doorway is your own body and mind. There’s nowhere to go. There’s nothing else to
be. There’s no destination. It’s not something to aim for in the afterlife. It’s simply the quality of this moment."
□ Jan 23 2014: The question doesn't matter.
If I choose life I'll go to the door that the liar will show me, no matter what I asked.
Freedom is a lie; the liar will show me the door to freedom if I ask for it. If I ask for the door to sudden death, he will show me the same door, to freedom, because he is a liar.
If I ask the person who always tells the truth where the door to freedom is, he will show me the door to sudden death, because it's a real freedom. If I ask him where the door to sudden death is, he will show me the door to sudden death, because he always tells the truth.
Jan 24 2014: Interesting way of putting things together. If I were to further analyze this, under the above explained conditions, choosing a random door would be as good as talking to anyone. However, under this concept of the world, there is still a solution that leads to certainty. It's actually more efficient than the one in the standard concept. If you ask either of them which way the other one would point to as freedom, they will always point to freedom. It's guaranteed by the design of the preconditions; there is no post-thinking to be done as in the other case. Since a liar always points to freedom, the truthful person would not alter the liar's decision. On the other hand, the liar would point to any door other than the one the truthful person believes to be freedom. What I find amazing is that formulating different preconditions allows formulating a logic that does not contradict those of different setups.
Jan 25 2014: "What do you think is the biggest problem in the world today...?"
This must be one of the hardest questions to answer, due to many problems and not all of them can be compared. Perhaps this would be a good question of choice for the above thought
experiment. :)
Science is key to move everything forward, and computer science seems to be the beating heart of the current era. I am not sure what I would want to tackle first, but I would let my
interests lead the way.
Jan 24 2014: Farrukh I copied your last response to a word file and will go over it next week, I'm not as sharp as I used to be so it will take me a while to figure out your method. I
tried to email you but the link did not work for me. Here is my email (keithwhenline@gmail.com) drop me a line and I will tell you as best I can remember how my sort works for your
information and you can do whatever you like with it. I am curious about your background I assume you spent time or was raised in the Kazakhstan area and moved to the US to further
your education. Also wondering what kind of impact you want to have in the world, with your knowledge you obviously have a wide range of possiblities. What do you think is the biggest
problem in the world today and are you willing to tackle it?
Jan 25 2014: Natasha you are right of course, I have no right to give anyone any more attention than someone else and I apologize for offending you. It was totally my fault. The riddle I
proposed was my way of telling if I was speaking to a bot or a very smart person and was another version of his original Turing-type suggestions. Upon reading Farrukh's background which
is very similar to mine I wanted to see how deep the rabbit hole goes and I found it has no bottom to my delight. I got caught up in that as you witnessed and forgot my manners and you
have every right to call me on it, thank you. I hope you can forgive me and I will try not to ever do that again.
○ Jan 25 2014: No worries, you don't have any chance to offend me !
I mean, my ego is thin enough :)
Your riddle and that episode from "The Princess Bride" gave me an aha moment and I am grateful for that. Actually, those two are in perfect congruence. Probably I was a bit upset that there seemed to be nobody who was interested, but on the other hand, it's not easy to language what I've got, so it's OK anyway.
Thank you !
Jan 23 2014: The question about a 100% liar and 100% truth teller assumes that you can find two people, such that one always tells the truth and one always lies. I don't think that ever
happens in reality, and it's not a premise of the original question, so I can't see how this classic logic puzzle is a solution to this Turing test.
Am I missing something?
Jan 23 2014: "Would the other person tell me that the left-hand door leads to freedom?"
If the person says 'Yes' and is lying, then the other person would truthfully say 'No,' so the right-hand door leads to freedom.
If the person says 'Yes' and is truthful, then the other person would deceitfully say 'Yes,' so the right-hand door leads to freedom.
This is a logic question which would be much easier for an advanced computer to figure out than a person.
If the person says 'No' and is truthful, then the other person would deceitfully say 'No' and the left-hand door leads to freedom.
If the person says 'No' and is lying, then the other person would truthfully say 'Yes' and the left-hand door leads to freedom.
So, if the person says 'Yes,' then the right-hand door leads to freedom. And if the person says 'No,' then the left-hand door leads to freedom.
|
{"url":"http://www.ted.com/conversations/22663/what_question_would_you_ask_to.html?c=812497","timestamp":"2014-04-20T21:43:52Z","content_type":null,"content_length":"101429","record_id":"<urn:uuid:203c59be-e7d1-4883-abd2-95d9325c8cd8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Series Expansion of Uniform MGF
August 7th 2009, 02:46 AM #1
Junior Member
May 2009
Series Expansion of Uniform MGF
X has uniform dist (0,1).
I found the MGF to be:
$\frac{e^s - 1}{s}$
Now I need to expand the MGF in powers of s up to $s^2$ and use this to find the mean and variance of X. I know the series expansion for $e^s = \Sigma \frac{s^r}{r!}$, but I'm not sure how to
apply this in this situation.
$e^s=\sum_{n=0}^\infty \frac{s^n}{n!}=1+\sum_{n=1}^\infty \frac{s^n}{n!}$
So $\frac{e^s-1}{s}=\sum_{n=1}^\infty \frac{s^{n-1}}{n!}$
by changing the indice, this is $\sum_{n=0}^\infty \frac{s^n}{(n+1)!}$
EDIT: I think I understand what you did there.
But, I'm still stuck on another part of this question:
In adding n real numbers, each is rounded to the nearest integer. Assume that the round-off errors, $X_{i}$, i = 1,...,n, are independently distributed as U(-0.5,+0.5). What is the approximate
distribution of the total error, $\Sigma X_{i}$, in the sum of the n numbers?
How would I approach this question?
Last edited by Zenter; August 9th 2009 at 03:40 AM.
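In case it helps later readers, here is one way to finish both parts (a worked completion under the usual independence assumptions, not taken from the original posters). Matching the expansion $\sum_{n\geq 0} \frac{s^n}{(n+1)!} = 1 + \frac{s}{2} + \frac{s^2}{6} + \cdots$ against the general series $M_X(s) = \sum_{n\geq 0} E[X^n]\frac{s^n}{n!}$ gives
$E[X] = \frac{1}{2}, \qquad E[X^2] = 2\cdot\frac{1}{6} = \frac{1}{3}, \qquad Var(X) = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}.$
For the round-off errors, each $X_i \sim U(-0.5, 0.5)$ has mean $0$ and variance $\frac{1}{12}$, so by the Central Limit Theorem the total error $\Sigma X_{i}$ is approximately $N\left(0, \frac{n}{12}\right)$ for large $n$.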
|
{"url":"http://mathhelpforum.com/advanced-statistics/97238-series-expansion-uniform-mgf.html","timestamp":"2014-04-19T09:14:52Z","content_type":null,"content_length":"36630","record_id":"<urn:uuid:b4e50467-da20-410d-a088-8d7a4fee37fd>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pleasantville, NY Prealgebra Tutor
Find a Pleasantville, NY Prealgebra Tutor
...I also tutored GED math at Mercy Learning Center in Bridgeport this summer. My modus operandi is to be well prepared for each tutoring session by doing any homework in advance of the student
to enable helping him/her gain a better understanding of the assigned work. All students I have tutored have significantly improved their test scores and overall grades.
6 Subjects: including prealgebra, algebra 1, algebra 2, SAT math
...I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum. I have also been an adjunct professor at
the College of New Rochelle, Rosa Parks Campus. As for teaching style, I feel that the concept drives the skill.
26 Subjects: including prealgebra, physics, calculus, statistics
...I hope this helps you to decide if I am the right kind of tutor for you. Good luck with the studying!I studied Physics with Astronomy at undergraduate level, gaining a master's degree at upper
2nd class honors level (approx. 3.67 GPA equivalent). I then proceeded to complete a PhD in Astrophysic...
8 Subjects: including prealgebra, physics, geometry, algebra 1
The challenging new common core state exams in math and ELA are rapidly approaching. As you can see by my ratings and reviews, I am a very experienced, patient and passionate tutor. I am very
familiar with the new common core standards that students are expected to demonstrate.
29 Subjects: including prealgebra, reading, biology, ASVAB
...I have devoted over 35 years helping students achieve success on standardized tests and in all subjects from elementary school through high school. My passion is for mathematics and the
sciences, particularly biology and environmental science. I hold teaching certifications in both NJ & NY, K-12, and am highly qualified in math and science.
16 Subjects: including prealgebra, reading, geometry, biology
|
{"url":"http://www.purplemath.com/pleasantville_ny_prealgebra_tutors.php","timestamp":"2014-04-19T09:51:15Z","content_type":null,"content_length":"24507","record_id":"<urn:uuid:9aa7a39c-dadf-4068-9494-e719429ff1f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fachbereich Physik
Superselection rules induced by the interaction with a mass zero Boson field are investigated for a class of exactly soluble Hamiltonian models. The calculations apply to discrete as well as to
continuous superselection rules. The initial state (reference state) of the Boson field is either a normal state or a KMS state. The superselection sectors emerge if and only if the Boson field
is infrared divergent, i. e. the bare photon number diverges and the ground state of the Boson field disappears in the continuum. The time scale of the decoherence depends on the strength of the
infrared contributions of the interaction and on properties of the initial state of the Boson system. These results are first derived for a Hamiltonian with conservation laws. But in the most
general case the Hamiltonian includes an additional scattering potential, and the only conserved quantity is the energy of the total system. The superselection sectors remain stable against the
perturbation by the scattering processes.
|
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15998/start/0/rows/10/yearfq/2004/subjectfq/Mathematical+Physics","timestamp":"2014-04-21T09:50:37Z","content_type":null,"content_length":"15746","record_id":"<urn:uuid:f71f98a9-12e3-4abb-9f29-d4d7b3f61c36>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra: Linear Inequalities Help and Practice Problems
Find study help on linear inequalities for algebra. Use the links below to select the specific area of linear inequalities you're looking for help with. Each guide comes complete with an explanation,
example problems, and practice problems with solutions to help you learn linear inequalities for algebra.
The most popular articles in this category
|
{"url":"http://www.education.com/study-help/study-help-algebra-linear-inequalities/page2/","timestamp":"2014-04-18T20:30:36Z","content_type":null,"content_length":"96389","record_id":"<urn:uuid:9e68005e-3313-47b8-8183-24a6a834cae4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lakewood Village, TX Math Tutor
Find a Lakewood Village, TX Math Tutor
...With me they will know that they are safe and that they are talking with someone who knows and understands what they are going through. My goal as I pursue my degrees is to finish with my
Ph.D. in Organic Chemistry. I completed both semesters of organic with an "A," as well as, receiving an "A" in both semesters of lab.
17 Subjects: including algebra 1, algebra 2, biology, SAT math
...A little more about me: I have lived in Houston, Dallas, and Charlotte, NC. My parents home schooled me through the seventh grade. Then I attended public school at Downing Middle School and
Marcus High School in Flower Mound, TX.
7 Subjects: including algebra 1, algebra 2, prealgebra, SAT math
...I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced coursework, and I am well-versed in the current research regarding
learning, memory, and instructional practices. I utilize this knowledge to identify underlying process...
39 Subjects: including ACT Math, English, statistics, reading
...I have a passion for teaching and make anyone understand Mathematical concepts in a clear, effective and simple manner. I am also extremely proficient in helping students prepare for exams
under pressure. Thanks for your interest and please contact me for additional information!I have conducted my Doctoral research simulations mostly in a MATLAB environment.
23 Subjects: including discrete math, SPSS, career development, computer programming
...I want each of my students to have the clearest advantage on test day. The SAT: For each section of the test, I teach only proven methods for raising a score. This includes an easy-to-learn
Challenger Thesaurus.
5 Subjects: including SAT math, GRE, GMAT, SAT reading
|
{"url":"http://www.purplemath.com/Lakewood_Village_TX_Math_tutors.php","timestamp":"2014-04-17T13:19:10Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:1e8e0738-3e67-48fe-af3b-b0e000e8c3cc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometry Portal
Geometry (Greek γεωμετρία; geo = earth, metria = measure) is a part of mathematics concerned with questions of size, shape, and relative position of figures and with properties of space.
geometry -
(1) [Euclidean geometry] The measures and properties of points, lines, and surfaces. In a GIS, geometry is used to represent the spatial component of geographic features.
Geometry and the Imagination in Minneapolis
John Conway Peter Doyle Jane Gilman Bill Thurston
June 1991
Version 0.91 dated 12 April 1994 ...
Geometry-symbol layer combinations
For example, adding a fill layer to a line representation rule would generate a warning because there is no polygon geometry to fill.
Close the geometry_columns table and right click the spatial_ref_sys table and select View Data > View All Rows.
coordinate geometry traverse
[ESRI software] In Survey Analyst, a process of computing a sequence of survey point locations starting from an initial known point.
Coordinate Geometry
Used to construct mathematical/geometric models of a design and its environment. See also COGO.
Geometry Converters
PostGIS with shp2pgsql:
shp2pgsql -D lakespy2 lakespy2 test lakespy2.sql
e00pg: E00 to PostGIS filter, see also v.in.e00.
Geometry service expanded to facilitate Web editing
The geometry service exposes a number of new methods to help with geographic feature editing. These are especially useful in Web editing scenarios.
geometry - Geometry deals with the measures and properties of points, lines and surfaces. In ARC/INFO, geometry is used to represent the spatial component of geographic features.
Geometry and trigonometry
Hipparchus is recognised as the first mathematician who compiled a trigonometry table, which he needed when computing the eccentricity of the orbits of the Moon and Sun.
Civil Geometry tools are coordinate geometry (COGO) tools that utilize a heads-up interface, preserving user input and design intent. Results of these tools are intelligent graphic elements stored in
the DGN. Civil Geometry includes: ...
Link (geometry): An element of geometry that connects nodes. In a polygon topology, a link defines a polygon edge. Links can contain vertices and true arcs, and can be represented as a line,
polyline, or arc.
Geometry > Projective Geometry > Map Projections >
Conic Projection
A conic projection of points on a unit sphere centered at consists of extending the line for each point until it intersects a cone with apex which tangent to the sphere along a ...
Geometry Alignment removes discrepancies between geometries.
Information Transfer involves updating one dataset with information from the other.
Geometry of color elements of various CRT and LCD displays; phosphor dots in a color CRT display (top row) bear no relation to pixels or subpixels.
Geometry ratio
numeric values are proportional to the original area or length of the feature
Each field in the layer attribute table can have split policies applied.
Geometry and topology; Greater than; Geologian Tutkimuskeskus = Geological Survey of Finland
Gas tank ...
Geometry; Imperial guards
Geography and Map; geography and Map Div.
The geometry of the earth has been discussed, studied, and imagined forever. The ancient Greek philosophers tried to picture a pure geometrical model.
The geometry of the thematic data is entirely or partially described by the base geometry dataset.
The thematic dataset holds information that also describes objects of the base dataset.
The geometry of GIS data is referenced in coordinates that are embedded in the data file. Shape files that you get from ESRI StreetMap and from the ESRI Data CDs use Latitude and Longitude as the
coordinate system.
In the geometry of the sphere, great circles play the part of straight lines. They represent the shortest distance between two points. Every great circle is determined by a plane that contains the
center of the sphere.
About the Geometry of Datums
In order to calculate where latitudes and longitudes occur on the surface of the Earth a number of fundamental geometric concepts and practices need to be applied. In simple terms these include: ...
coordinate geometry (COGO) This refers to a data conversion process in which a digital map is constructed from written descriptions, such as legal descriptions of land parcel boundaries.
Coordinate Geometry
Key Sources are small scale (>=1:50,000) / focus is regional planning
BC Albers (UTM) ...
Coordinate geometry - The methods used to construct graphics mathematically in engineering design. It is usually referred to as COGO.
Coordinate system - A system used to register and measure horizontal and vertical distances on a map.
Coordinate Geometry - COGO
A method of defining geometric features through the input of bearing and distance measurements.
Coordinate Geometry
A third technique for the input of spatial data involves the calculation and entry of coordinates using coordinate geometry (COGO) procedures.
Computational Geometry
This is the discipline of developing efficient algorithms to solve problems of spatial analysis.
Figure 5-9. Geometry of an EDM (Basic Example)
While the vector geometry has a large amount of models the raster model even with the newly added extensions [16] does not contain conceptually new ideas (not even the tesseral indexing is allowed).
COGO: CO-ordinate GeOmetry. Algorithms for handling basic two and three dimensional vector entities built into all surveying, mapping and GIS software. Co-ordinate Numbers representing the position
of a point relative to an origin.
AGG Anti-Grain Geometry A high quality graphics rendering engine that MapServer 5.0+ can use. It supports sub-pixel anti-aliasing, as well as many more features. CGI Wikipedia provides excellent
coverage of CGI. EPSG ...
COGO - Coordinate Geometry: A set of procedures for encoding and manipulating bearings, distances and angles of survey data into co-ordinate data. COGO is frequently a subsystem of GIS.
A feature is not defined in terms of a single geometry, but rather as a conceptually meaningful object within a particular domain of discourse, one or more of whose properties may be geometric.
Basic components that are sufficient to build a larger system; the primitives of two-dimensional geometry are points, lines, and areas.
COGO See coordinate geometry. Colour composite In remote sensing, a colour image composed of three bands projected in the red, point and green guns. Column A vertical field in a relational database
management system data file.
Abbreviation of the term COordinate GeOmetry. Land surveyors use COGO functions to enter survey data, to calculate precise locations and boundaries, to define curves, and so on.
2. The name of the ArcInfo coordinate geometry software product.
The shape of Earth's surface or the geometry of landforms in a geographic area.
Trace Element:
An element that is present in very small quantities.
Traction: ...
Geodatabases support large collections of objects in a database table and features with geometry. The feature classes and tables contained in geodatabases can be related to one another.
The Coordinate Geometry (COGO) process includes COGO commands that when executed accomplish meaningful functions for professional surveying and civil engineering applications.
Such a map database is a vector representation of a given road network including road geometry (segment shape), network topology (connectivity) and related attributes (addresses, road class, etc).
- GML parser that handles complex feature data and metadata, including support for multiple geometry types in a nested structure. As any developer can tell you, parsing GML can be a challenge and
this tool takes care of the work for you.
In GIS the shapes and locations of things are stored as coordinate geometry. GIS data is often stored in a database, either storing the coordinates as numbers or using special geometry data types.
View angle is an important component of the imaging geometry. View angle and illumination geometry (solar zenith and azimuth angles) are important determinants of the measured reflectance since
adjustments in observation and illumination geometry ...
Because of the alignment limitations of the squares and cubes used in Cartesian geometry, ...
By 1980, the Loran-C User Handbooks provided separate charts of coverage for each chain based on geometry, noise, and signal strength. These charts assumed a three station, Master-Secondary-
Secondary receiver.
Event features, the segmentation points, are not stored in the geometry of the coverage but are derived as needed.
DOP is an indicator of the quality of the geometry of the satellite constellation. Your computed position can vary depending on which satellites you use for the measurement.
Geodatabases store geometry, a spatial reference system, attributes, and behavioral rules for data.
" These databases contain first the geometry element layer of the roadway that contains links, nodes, shape points, relative elevations, and connectivity.
GIS data used in the model can be raster, vector, textual and hybrid types from many diverse sources using state-of-the-art techniques: geometry from CAD systems; video and slide images; commercially
available digital data (vector and/or raster); ...
A measure of the GPS receiver/satellite geometry. A low DOP value indicates better relative geometry and higher corresponding accuracy.
Acronym for Coordinate Geometry, COGO is a subsystem of CAD or GIS made up of a set of standard procedures for processing survey data such as bearings, ...
Warping: Any process in which an object is stretched differentially so as to change its internal geometry.
page coordinates: The set of coordinate reference values used to place the map elements on the map, and within the map's own geometry rather than the geometry of the ground that the map represents.
helpful in studying the effects of geometry and spatial arrangement of habitat
e.g. size and shape of woodlots on the animal species they can sustain
e.g. value of linear park corridors across urban areas in allowing migration of animal species ...
This field represents the geometry of the features in the GeoDataset. The programmer can use "Shape" to get the collected spatial data and display it by the Map.DrawShape method. The attribute data
can be displayed by Map.DrawText method.
Several optional extension products add application-specific tools to ARC/INFO, including: NETWORK (network modeling), TIN (surface modeling and terrain analysis), COGO (interactive coordinate
geometry data entry and management), ...
Continuous Everywhere but Differentiable Nowhere
I’m soon going to embark on teaching the chain rule in calculus. I have found ways to help kids remember the chain rule (“the outer function is the mama, the inner function is the baby… when you take
the derivative, you derive the mama and leave the baby inside, and then you multiply by the derivative of baby”), ways to write things down so their information stays organized, and I have shown
them enough patterns to let them see it’s true. But I have never yet found a way to conceptually get them to understand it without confusing them. (The gear thing doesn’t help me get it… Although I
understand the analogy, it feels divorced from the actual functions themselves… and these functions have a constant rate of change.)
I think I now have a way that might help students conceptually understand what’s going on. I only had the insight 10 minutes ago so I’m going to use this blogpost to see if I can’t get the
ideas straight in my head… The point of this post is not to share a way I’ve made the chain rule understandable. It’s for me to work through some unformed ideas. I am not yet sure if I have a way to
turn this into something that my kids will understand.
So here’s where I’m starting from. Every “nice” function (and those are the functions we’re dealing with) is basically like an infinite number of little line segments connected together. Thus, when
we take a derivative, we’re pretty much just asking “what’s the slope of the little line segment at $x=3$?” for example.
Now here’s the magic. In my class, we’ve learned that whatever transformations a function undergoes, the tangent line undergoes the same transformations! If you want to see that, you can check it out
For a quick example, let’s look at $f(x)=\sin{x}$ and $g(x)=2\sin{(5x)}+1$.
We see that $g(x)$ is secretly $f(x)$ which has undergone a vertical stretch of 2, a horizontal shrink of 1/5, and has been moved up 1.
Let’s look at the tangent line to $f(x)$ at $x=\pi/3$. It is approximately $y=0.5x+0.34$.
Now let’s put that tangent line through the transformations:
Vertical Stretch of 2: $y=2(0.5x+0.34)=x+0.68$
Horizontal shrink of 1/5: $y=5x+0.68$
Shift up 1: $y=5x+1.68$
Now let’s plot $g(x)$ and our transmogrified tangent line:
Yay! It worked! (But of course we knew that would happen.)
The whole point of this is to show that tangent lines undergo the same transformations as the functions — because the functions themselves are pretty much just a bunch of these infinitely tiny
tangent line segments all connected together! So it would actually be weird if the tangent lines didn’t behave like the functions.
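As a quick numerical sanity check of that claim (a minimal sketch of my own, not part of the original post), we can compare the transformed tangent line $y=5x+1.68$ against $g(x)=2\sin(5x)+1$ at the transformed point $x=\pi/15$ (the image of $x=\pi/3$ under the horizontal shrink of 1/5); the tiny mismatch in values comes from the rounded intercept 0.34.

    import math

    def g(x):
        return 2 * math.sin(5 * x) + 1

    def transformed_tangent(x):
        return 5 * x + 1.68          # tangent of f, pushed through the transformations

    x0 = math.pi / 15                # x = pi/3 after the horizontal shrink of 1/5
    h = 1e-6
    numeric_slope = (g(x0 + h) - g(x0 - h)) / (2 * h)

    print(g(x0), transformed_tangent(x0))   # agree to ~0.005 (0.34 was rounded)
    print(numeric_slope)                    # ~5, matching the transformed line's slope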
My Thought For Using This for The Chain Rule
So why not look at function composition in the same way?
We can look at a composition of functions at a point as simply a composition of these little line segments.
Let’s see if I can’t clear this up by making it concrete with an example.
Let’s look at $m(x)=\sqrt{x^3+1}$.
And so we can be super concrete, let’s try to find $m'(2)$, which is simply the slope of the tangent line of $m(x)$ at $x=2$.
I’m going to argue that just as $\sqrt{x}$ and $x^3+1$ are composed to get our final function, we can compose the tangent lines to these two functions to get the final tangent line at $x=2$.
Let’s start with the $x^3+1$. At $x=2$, the tangent line is $y_{inner}=12x-15$ (I’m not showing the work, but you can trust me that it’s true, or work it out yourself.)
Now let’s start with the square root function. We have to be thoughtful about this. We are dealing with $m(2)$ which really means that we’re taking the square root of 9. We we want the tangent line
to $\sqrt{x}$ at $x=9$. That turns out to be (again, trust me?): $y_{outer}=\frac{1}{6}x+\frac{3}{2}$.
So now we have our two line segments.
We have to compose them: $y_{composed}=\frac{1}{6}(12x-15)+\frac{3}{2}$
This simplifies to: $y_{composed}=2x-1$
Let’s look at a graph of $m(x)$ and our tangent line:
Where did we ultimately get the slope of 2 from? When we composed the two lines together, we multiplied the slope of the inner function (12) by the slope of the outer function (1/6). And that became
our new line’s slope.
Chain rule!
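Here is a quick numerical check of that example (my own sketch, not from the original post): the slope of $m(x)=\sqrt{x^3+1}$ at $x=2$ really is the inner slope 12 times the outer slope 1/6.

    def m(x):
        return (x**3 + 1) ** 0.5

    h = 1e-6
    numeric_slope = (m(2 + h) - m(2 - h)) / (2 * h)

    inner_slope = 12        # slope of x^3 + 1 at x = 2
    outer_slope = 1 / 6     # slope of sqrt(x) at x = 9
    print(numeric_slope, inner_slope * outer_slope)   # both are 2 (up to rounding)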
How we generalize this to the chain rule
For any composition of functions, we are going to have an inner and an outer function. Let’s write $c(x)=o(i(x))$ where we can clearly remember which one is the inner and which one is the outer
functions. Let’s pick a point $x_0$ where we want to find the derivative.
We are going to have to find the little line segment of the inner function and compose that with the little line segment of the outer function, both at $x_0$. That will approximate the function $c(x)$ at $x_0$.
The line segment of the inner function is going to be $y_{inner}=i'(x_0)x+blah1$
The line segment of the outer function is going to be $y_{outer}=o'(i(x_0))x+blah2$
I am going to keep those terms blah1 and blah2 only because we won’t really need them. Let’s remember we only want the derivative (the slope of the tangent line), not the tangent line itself. So our
task becomes easier.
Let’s compose them: $y_{composed}=o'(i(x_0))[i'(x_0)x+blah1]+blah2$
This simplifies to $y_{composed}=o'(i(x_0))i'(x_0)x+blah3$
And since we only want the slope of this line (the derivative is the slope of the tangent line, remember), we have: $c'(x_0)=o'(i(x_0))\,i'(x_0)$
Of course we chose an arbitrary point $x_0$ to take the derivative at. So we really have: $c'(x)=o'(i(x))\,i'(x)$
Which is the chain rule.
I got rid* of Limits in Calculus (*almost entirely)
I’ve been meaning to write this post for a while. I teach non-AP Calculus. My goal in this course is to get my kids to understand calculus with depth — that means my primary focus is on conceptual
understanding, where facility with fancy-algebra things is secondary. Now don’t go thinking my kids come out of calculus not knowing how to do real calculus. They do. It is just that I pare things
down so that they don’t have to find the derivatives of things like $y=\cot(x)$. Why? Because even though I could teach them that (and I have in the past), I would rather spend my time doing less
work on moving through algebraic hoops, and more work on deep conceptual understanding.
Everything I do in my course aims for this. Sometimes I succeed. Sometimes I fail. But I don’t lose sight of my goal.
Each year, I have parts of the calculus curriculum I rethink, or have insights on. In the past few years, I’ve done a lot of thinking about limits and where they fit in the big picture of things.
Each year, they lose more and more value in my mind. I used to spend a quarter of a year on them. In more recent years, I spent maybe a sixth of a year on them. And this year, I’ve reduced the time I
spend on limits to about 5 minutes.*
*Okay, not really. But kinda. I’ll explain.
First I’ll explain my reasoning behind this decision. Then I’ll explain how I did it.
Reasoning Behind My Decision to Eliminate Limits
For me, calculus has two major parts: the idea of the derivative, and the idea of the integral.
Limits show up in both [1]. But where do they show up in derivatives?
• when you use the formal definition of the derivative
and… that’s pretty much it. And where do they show up in integrals?
• when you say you are taking the sum of an infinite sum of infinitely thin rectangles
and… that’s pretty much it. I figure if that’s all I need limits for, I can target how I introduce and use limits to really focus on those things. Do I really need them to understand limits at
infinity of rational functions? Or limits of piecewise functions? Or limits of things like $y=\sin(1/x)$ as $x\rightarrow 0$?
Nope. And this way I’m not wasting a whole quarter (or even half a quarter) with such a simple idea. All I really need — at least for derivatives — is how to find the limit as one single variable
goes to 0. C’est tout!
How I did it
This was our trajectory:
(1) Students talked about average rate of change.
(2) Students talked about the idea of instantaneous rate of change. They saw it was problematic, because how can something be changing at an instant? If you say you’re travelling “58 mph at 2:03pm,”
what exactly does that mean? There is no time interval for this 58mph to pop out of, since we’re talking about an instant, a single moment in time (of 2:03pm). So we problematized the idea of
instantaneous rate of change. But we also recognized that we understand that instantaneous rates of change do exist, because we believe our speedometers in our car which say 60mph. So we have
something that feels philosophically impossible but in our guts and everyday experience feels right. Good. We have a problem we need to resolve. What might an instantaneous rate of change mean? Is it
an oxymoron to have a rate of change at an instant?
(3) Students came to understand that we could approximate the instantaneous rate of change by taking the slope of two points really really really close to each other on a function. And the closer
that we got, the better our approximation was. (Understanding why we got a better and better approximation was quite hard conceptual work.) Similarly students began to recognize graphically that the
slope of two points really close to each other is actually almost the slope of the tangent line to the function.
(4) Now we wanted to know if we could make things exact. We knew we could make things exact if we could bring the two points infinitely close to each other. But each time we tried that, we either
got two points pretty close to each other or the two points lay directly on top of each other (and you can't find the slope between a point and itself). So still we have a problem.
And this is where I introduced the idea of introducing a new variable, and eventually, limits.
We encountered the question: “What is the exact instantaneous rate of change for $f(x)=x^2$ at $x=3$?”
We started by picking two points close to each other: $(3,9)$ and $(3+h,(3+h)^2)$
This was the hardest thing for students to understand. Why would we introduce this extra variable $h$? But we talked about how $(3.0001,3.0001^2)$ wasn’t a good second point, and how $(3.0000001,3.0000001^2)$ also wasn’t a good second point. But if they trusted me on using this variable thingie, they would see how our problems would be resolved.
We then found the average rate of change between the two points, recognizing that the second point could be really faraway from the first point if $h$ were a large positive or negative number… or
close to the first point if $h$ were close to 0.
Yes, students had to first understand that $h$ could be any number. And they had to come to the understanding that $h$ represented where the second point was in relation to the first point (more
specifically: how far horizontally the second point was from the first point).
And so we found the average rate of change between the two points to be: $AvgRateOfChange=\frac{(3+h)^2-9}{(3+h)-3}$
We then said: how can we make this exact? How can we bring the two points infinitely close to each other? Ahhh, yes, by letting $h$ get infinitely close to 0.
And so I introduce the idea of the limit as such:
If I have $\lim_{h\rightarrow 0} blah$, it means what blah gets infinitely close to if $h$ gets infinitely close to 0 but is not equal to 0. That last part is key. And honestly, that’s pretty much
the entirety of my explanation about limits. So that’s the 5 minutes I spend talking about limits.
So to find the instantaneous rate of change, we simply have:
$InstRateOfChange=\lim_{h\rightarrow0} \frac{(3+h)^2-9}{(3+h)-3}$
This is simply the slope between two points which have been brought infinitely close together. Yes, that’s what limits do for you.
And then we simplify:
$InstRateOfChange=\lim_{h\rightarrow0} \frac{9+6h+h^2-9}{h}$
$InstRateOfChange=\lim_{h\rightarrow0} \frac{6h+h^2}{h}$
$InstRateOfChange=\lim_{h\rightarrow0} \frac{h(6+h)}{h}$
$InstRateOfChange=\lim_{h\rightarrow0} \frac{h}{h} \frac{(6+h)}{1}$
Now because we know that $h$ is close to 0, but not equal to 0, we can say with confidence that $\frac{h}{h}=1$. Thus we can say:
$InstRateOfChange=\lim_{h\rightarrow 0} (6+h)$
And now as $h$ goes to 0, we see that $6+h$ gets infinitely close to 6.
Done. (Here’s a do now I did in class.)
We did this again and again to find the instantaneous rate of change of various functions at a point. For example, functions like:
$f(x)=x^3-2x+1$ at $x=1$
$g(x)=\sqrt{2-3x}$ at $x=-2$
$h(x)=\frac{5}{2-x}$ at $x=1$
For these, the algebra got more gross, but the idea and the reasoning was the same in every problem. Notice to do all of these, you don’t need any more knowledge of limits than what I outlined above
with that single example. You need to know why you can “remove” the $\frac{h}{h}$ (why it is allowed to be “cancelled” out), and then what happens as $h$ goes to 0. That’s all.
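For the record, a computer algebra system confirms the same three limits. Here is a sketch using SymPy (my choice of tool, not the author's) that evaluates each difference quotient as $h \rightarrow 0$:

    import sympy as sp

    h = sp.symbols('h')

    examples = [
        (lambda x: x**3 - 2*x + 1, sp.Integer(1)),     # f(x) = x^3 - 2x + 1 at x = 1
        (lambda x: sp.sqrt(2 - 3*x), sp.Integer(-2)),  # g(x) = sqrt(2 - 3x) at x = -2
        (lambda x: 5 / (2 - x), sp.Integer(1)),        # h(x) = 5 / (2 - x) at x = 1
    ]

    for f, x0 in examples:
        rate = sp.limit((f(x0 + h) - f(x0)) / h, h, 0)
        print(x0, rate)    # 1, -3*sqrt(2)/8, and 5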
Yup, again, notice I only needed to rely on this very basic understanding of limits to solve these three problems algebraically: $\lim_{h\rightarrow 0} blah$ means what blah gets infinitely close to
if $h$ gets infinitely close to 0 but is not equal to 0.
(5) Eventually we generalize to find the instantaneous rate of change at any point, using the exact same process and understanding. At this point, the only difference is that the algebra gets
slightly more challenging to keep track of. But not really that much more challenging.
(6) Finally, waaaay at the end, I say: “Surprise! The instantaneous rate of change has a fancy calculus word — derivative.“
Apologies in advance if any of this was unclear. I feel I didn’t explain things as well as I could have. I also want to point out that I understand if you don’t agree with this approach. We all have
different thoughts about what we find important and why. I can (and in fact, in the past, I have) made the case that going into depth into limits is of critical importance. I personally just don’t
see things the same way anymore.
Now I should also say that there have been a few downsides to this approach, but on the whole it’s been working well for me so far. I would elaborate on the downsides but right now I’m just too
exhausted. Night night!
[1] Okay, I should also note that limits show up in the definition for continuity. But since in my course I don’t really focus on “ugly” functions, I haven’t seen the need to really spend time on the
idea of continuity except in the conceptual sense. Yes, I can ask my kids to draw the derivative of $y=|x|$ and they will be able to. They will see there is a jump at $x=0$. I don't need more than that.
A couple years ago, Kate Nowak asked us to ask our kids:
“What is 1 Radian?” Try it. Dare ya. They’ll do a little better with: “What is 1 Degree?”
I really loved the question, and I did it last year with my precalculus kids, and then again this year. In fact, today I had a mini-assessment in precalculus which had the question:
What, conceptually, is 3 radians? Don’t convert to degrees — rather, I want you to explain radians on their own terms as if you don’t know about degrees. You may (and are encouraged to) draw pictures
to help your explanation.
My kids did pretty well. They still were struggling with a bit of the writing aspect, but for the most part, they had the concept down. Why? It’s because my colleague and geogebra-amaze-face math
teacher friend made this applet which I used in my class. Since this blog can’t embed geogebra files, I entreat you to go to the geogebratube page to check it out.
Although very simple, I dare anyone to leave the applet not understanding: “a radian is the angle subtended by the bit of the circumference of a circle that has a length of a single radius.” What makes it so powerful is that it shows radii being pulled out of the center of the circle, like a clown pulls a colorful neverending set of handkerchiefs out of his pocket.
If you want to see the applet work but are too lazy to go to the page, I have made a short video showing it work.
PS. Again, I did not make this applet. My awesome colleague did. And although there are other radian applets out there, there is something that is just perfect about this one.
Mission #8: Sharing is Caring in the MTBoS
Here I’m reblogging our last mission from the Explore the #MTBoS!
Exploring the MathTwitterBlogosphere:
It’s amazing. You’re amazing. You joined in the Explore the MathTwitterBlogosphere set of missions, and you’ve made it to the eighth week. It’s Sam Shah here, and whether you only did one or two
missions, or you were able to carve out the time and energy to do all seven so far, I am proud of you.
I’ve seen so many of you find things you didn’t know were out there, and you tried them out. Not all of them worked for you. Maybe the twitter chats fell flat, or maybe the whole twitter thing wasn’t
your thang. But I think I can be pretty confident in saying that you very likely found at least one thing that you found useful, interesting, and usable.
With that in mind, we have our last mission, and it is (in my opinion) the best mission. Why? Because you get to do something…
Trig War
This is going to be a quick post.
Kate Nowak played “log war” with her classes. I stole it and LOVED it. Her post is here. It really gets them thinking in the best kind of way. Last year I wanted to do “inverse trig war” with my
precalculus class because Jonathan C. had the idea. His post is here. I didn’t end up having time so I couldn’t play it with my kids, sadly.
This year, I am teaching precalculus, and I’m having kids figure out trig on the unit circle (in both radians and degrees). So what do I make? The obvious: “trig war.”
The way it works…
I have a bunch of cards with trig expressions (just sine, cosine, and tangent for now) and special values on the unit circle — in both radians and degrees.
You can see all the cards below, and can download the document here (doc).
They played it like a regular game of war:
I let kids use their unit circle for the first 7 minutes, and then they had to put it away for the next 10 minutes.
And that was it!
An expanded understanding of basic derivatives – graphically
The guilt that I feel for not blogging more regularly this year has been considerable, and yet, it has not driven me to post more. I’ve been overwhelmed and busy, and my philosophy about blogging
is: do it when you feel motivated. And so, I haven’t.
Today, I feel a slight glimmer of motivation. And so here I am.
Here’s what I want to talk about.
In calculus, we all have our own ways of introducing the power rule for derivatives. Graphically. Algebraically. Whatever. But then, armed with this knowledge…
that if $f(x)=x^n$, then $f'(x)=nx^{n-1}$
…we tend to drive forward quickly. We immediately jump to problems like:
take the derivative of $g(x)=4x^3-3x^{-5}+2x^7$
and we hurtle on, racing to the product and quotient rules… We get so algebraic, and we go so quickly, that we lose sight of something beautiful and elegant. This year I decided to take an extra
few days after the power rule but before problems like the one listed above to illustrate the graphical side of things.
Here’s what I did. We first got to the point where we comfortably proved the power rule for derivatives (for n being a counting number). Actually, before I move on and talk about the crux of this
post, I should show you what we did…
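For reference, one standard way to get there with the limit definition (for $n$ a counting number), using the binomial expansion - not necessarily the exact write-up we used in class - is:

$f'(x)=\lim_{h\rightarrow 0}\frac{(x+h)^n-x^n}{h}=\lim_{h\rightarrow 0}\frac{nx^{n-1}h+\binom{n}{2}x^{n-2}h^2+\cdots+h^n}{h}=\lim_{h\rightarrow 0}\left(nx^{n-1}+\binom{n}{2}x^{n-2}h+\cdots+h^{n-1}\right)=nx^{n-1}$

Every term after the first still carries a factor of $h$, so they all vanish as $h$ gets infinitely close to 0.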
Okay. Now I started the next class with kids getting Geogebra out and plotting in two graphics windows the following: in the first window, $f(x)=x^2$ together with $x^2+1$; in the second window, $x^2$ together with $\frac{1}{4}x^2$ and $3x^2$.
At this point, we saw the transformations. On the left hand graph, we saw that the function merely shifted up one unit. On the right hand graphs, we saw a vertical stretch for one function, and a
vertical shrink for the other.
Here’s what I’m about to try to illustrate for the kids.
Whatever transformation a function undergoes, the tangent lines to the function also undergoes the exact same transformation.
What this means is that if a function is shifted up one unit, then all tangent lines are shifted up one unit (like in the left hand graph). And if a function undergoes vertical stretching or
shrinking, all tangent lines undergo the same vertical stretching or shrinking.
I want them to see this idea come alive both graphically and algebraically.
So I have them plot all the points on the functions where $x=1$. And all the tangent lines.
For the graph with the vertical shift, they see:
The original tangent line (to $f(x)=x^2$) was $y=2x-1$. When the function moved up one unit, we see the tangent line simply moved up one unit too.
Our conclusion?
Yup. The tangent line changed. But the slope did not. (Thus, the derivative is not affected by simply shifting a function up or down. Because even though the tangent lines are different, the slopes
are the same.)
Then we went to the second graphics view — the vertical stretching and shrinking. We drew the points at $x=1$ and their tangent lines…
…and we see that the tangent lines are similar, but not the same. How are they similar? Well the original function’s tangent line is the red one, and has the equation $y=2x-1$. Now the green function
has undergone a vertical shrink of 1/4. And lo and behold, the tangent line has also!
To show that clearly, we did the following. The original tangent line has equation $y=2x-1$. So to apply a vertical shrink of 1/4 to this, you are going to see $y=\frac{1}{4}(2x-1)$ (because you are
multiplying all y-coordinates by 1/4). And that simplifies to $y=0.5x-0.25$. Yup, that’s what Geogebra said the equation of the tangent line was!
Similarly, for the blue function with a vertical stretch of 3, we get $y=3(2x-1)=6x-3$. And yup, that’s what Geogebra said the equation of the tangent line was.
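A quick numerical check of that claim (my own sketch, not part of the post): the tangent line to $3x^2$ at $x=1$ should come out to exactly $3(2x-1)=6x-3$.

    def f(x):
        return 3 * x**2

    h = 1e-6
    slope = (f(1 + h) - f(1 - h)) / (2 * h)
    intercept = f(1) - slope * 1

    print(slope, intercept)   # ~6 and ~-3, i.e. y = 6x - 3 = 3(2x - 1)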
What do we conclude?
And in this case, with the vertical stretching and shrinking of the functions, we get a vertical stretching and shrinking of the tangent lines. And unlike moving the function up or down, this
transformation does affect the slope!
I repeat the big conclusion:
Whatever transformation a function undergoes, the tangent lines to the function also undergoes the exact same transformation.
I didn’t actually tell this to my kids. I had them sort of see and articulate this.
Now they see that if a function gets shifted up or down, they can see that the derivative stays the same. And if there is a vertical stretch/shrink, the derivative is also vertically stretched/shrunk by the same factor.
The next day, I started with the following “do now.” We haven’t learned the derivative of $\sin(x)$, so I show them what Wolfram Alpha gives them.
For (a), I expect them to give the answer $g'(x)=3\cos(x)$ and for (b), $h'(x)=-\cos(x)$.
The good thing here is now I get to go for depth. WHY?
And I hear conversations like: “Well, g(x) is a transformation of the sine function which gives a vertical stretch of 3, and then shifts the function up 4. Well since the function undergoes those
transformations, so does the tangent lines. So each tangent line is going to be vertically stretched by 3 and moved up 4 units. Since the derivative is only the slope of the tangent line, we have to
see what transformations affect the slope. Only the vertical stretch affects the slope. So if the original slope of the sine function was $\cos(x)$, then we know that the slope of the transformed
function is $3\cos(x)$.”
That’s beautiful depth. Beautiful.
For (b), I heard talk about how the negative sign is a reflection over the x-axis, so the tangent lines are reflected over the x-axis also. Thus, the slopes are the opposite sign… If the original
sine function’s tangent-line slopes were $\cos(x)$, then the new slopes are going to be $-\cos(x)$.
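The same kind of numerical check works for part (a) of the “do now” (again a sketch of mine, leaning on the quoted Wolfram Alpha fact that the derivative of $\sin(x)$ is $\cos(x)$):

    import math

    def g(x):
        return 3 * math.sin(x) + 4

    x0 = 1.2
    h = 1e-6
    numeric_slope = (g(x0 + h) - g(x0 - h)) / (2 * h)

    print(numeric_slope, 3 * math.cos(x0))   # both ~1.087: the +4 shift never touches the slope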
This isn’t easy for my kids, so when I saw them struggling with the conceptual part of things, I whipped up this sheet (.docx).
And here are the solutions
And here is a Geogebra sheet which shows the transformations, and the new tangent line (and equation), for this worksheet.
Now to be fair, I don’t think I did a killer job with this. It was my first time doing it. I think some kids didn’t come out the stronger for this. But I do feel that the kids who do get it have a
much more intuitive understanding of what’s going on.
I am much happier to know that if I ask kids what the derivative of $q(x)=6x^9$ is, they immediately think (or at least can understand) that we get $q'(x)=6*9x^8$, because…
our base function is $x^9$ which has derivative (aka slope of the tangent line) $9x^8$… The transformed function $6x^9$ is a vertical stretch by a factor of 6, so all the tangent lines are going to
be stretched vertically by a factor of 6 too… thus the derivative of this (aka the slope of the tangent line) is $q'(x)=6*9x^8$.
To me, that sort of explanation for something super simple brings so much graphical depth to things. And that makes me feel happy.
Infinite Geometric Series
I did a bad job (in my opinion) of teaching infinite geometric series in precalculus in my previous class. I told them I did a bad job. I was rushing. They were confused. (One of them said: “you did
a fine job, Mr. Shah” which made me feel better, but I still felt like they were super confused.)
At the start of the lesson, I gave each group one colored piece of paper. (I got this idea last year from my friend Bowen Kerins on Facebook! He is not only a math genius but he’s also a five-time
world pinball champion. Seriously.) I don’t know why but it was nice to give each group a different color piece of paper. Then I had them designate one person to be the “paper master” and two people
to be the friends of the paper master. Any group with a fourth person simply had to have the fourth person be the observer.
I did not document this, so I have made photographs to illustrate ex post facto.
I started, “Paper master, you have a whole sheet of paper! One whole sheet of paper! And you have two friends. You feel like being kind, sharing is caring, so why don’t you give them each a third of
your paper.”
The paper master divided the paper in thirds, tore it, and shared their paper.
Then I said: “Your friends loveeeed their paper gift. They want just a little bit more. Why don’t you give them each some more… Maybe divide what you have left into thirds so you can keep some too.”
And the paper master took what they had, divided it into thirds, and shared it.
To the friends, I said: “Hey, friends, how many of you LOOOOOVE all these presents you’re getting? WHO WANTS MORE?” and the friends replied “MEEEEEEEEEEEEEEE!”
“Paper master, your friends are getting greedy. And they demand more paper. They said you must give them more or they won’t be your friends. And you are peer pressured into giving them more. So
divide what little you have left and hand it to them.”
They do.
“Now do it again. Because your greedy friends are greedy and evil, but they’re still your friends.”
Here we stop. The friends have a lot of slips of paper of varying sizes. The paper master has a tiny speck.
I ask the class: “If we continue this, how much paper is the paper master going to eventually end up with?”
(Discussion ensues about whether the answer is 0 or super duper super close to 0.)
I ask the class: “If we continue this, how much paper are each of the friends going to have?”
(A more lively short discussion ensues… Eventually they agree… each friend will have about 1/2 the paper, since there was a whole piece of paper to start, each friend gets the same amount, and the
paper master has essentially no paper left.)
I then go to the board.
I write $\frac{1}{2}=$
and then I say: “How much paper did you get in your initial gift, friends?”
I write $\frac{1}{2}=\frac{1}{3}+$
and then we continue, until I have: $\frac{1}{2}=\frac{1}{3}+\frac{1}{9}+\frac{1}{27}+\cdots$
Ooohs and aahs.
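For the record, the algebra behind the oohs and aahs: each friend's pile is a geometric series with ratio 1/3, and

$\sum_{n=1}^{\infty}\left(\frac{1}{3}\right)^n=\frac{1/3}{1-1/3}=\frac{1}{2}$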
Next year I am going to task each student to do this with two friends or people from their family, and have them write down their friends/family member’s reactions…
I love this.
matrix proof 1
Can someone please help me with this? I really hate and struggle with doing matrix proofs. (Angry) Attachment 9795
Correct. The example given does not answer the question. Also, since one of them is the identity matrix they obviously commute ..... Surely it's not too hard to find two matrices that satisfy the
requirements of (b). In fact, I bet you could choose two 2x2 matrices at random that would work ..... Then the answer to c is obvious. As for (a), two trivial matrices that satisfy the requirements are

    [1, 1]        [2, 2]
    [1, 1]        [2, 2]

It is not too hard to construct less trivial examples .....
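Assuming part (a) asks for two matrices that commute (the exact wording is in Attachment 9795, not reproduced in the thread), here is a quick plain-Python check that the suggested trivial pair does:

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))]
                for i in range(len(a))]

    A = [[1, 1], [1, 1]]
    B = [[2, 2], [2, 2]]

    print(matmul(A, B))                    # [[4, 4], [4, 4]]
    print(matmul(A, B) == matmul(B, A))    # True: the pair commutes (B is just 2A)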
Fractions: On the Order of Operations and Simplifying
Date: 02/27/2010 at 19:48:22
From: Terri
Subject: how fractions fit in to 2nd rule in the order of operations
The 2nd rule in the order of operations says to multiply and divide left to right. I've
been thinking that the only reason for this "left to right" part is so I don't divide by
the wrong amount.
For example, in the problem 3 / 6 * 4, if I didn't follow the order of operations, but
instead did the 6 * 4 first, I'd get a wrong answer.
Now, my text says I can avoid having to work left to right if I convert division to
multiplication by the reciprocal. This makes sense.
My question is: when I write a division problem using the fraction line, do I ever
have to worry about following the left to right rule, or does writing it as a fraction
void the need for this rule just as writing division as multiplication of the reciprocal
did? It seems that in my math text, when it comes to fractions such as

   24(3)x
   ------
   8(3)y

... they cancel and do the division and multiplication within a fraction in any order.
For example, I would cancel the 3's and divide the 24 by 8, which isn't doing
division and multiplication from left to right, nor does that treat the fraction line as a
grouping symbol. Even multiplication of fractions doesn't seem to go by the left to
right rule, because we're multiplying numerators first before we're dividing the
numerator by the denominator of each particular fraction. I can write the problem
above as multiplication by the reciprocal and see that I can divide and multiply in
any order.
So I'm wondering if I can make this a general rule: in fractions, the left to right order
is not an issue.
Of course, it seems that just when I think I can generalize about something, there's a
case where it doesn't hold true, and I'm wondering why, if this is the case, I've never
seen it written anywhere.
I've been looking on the Internet and in algebra books to see if anyone addresses
this particular part of the order of operations in detail, and it seems that most just
generalize about the order of operations. I'm wondering if there is an unwritten rule
that when you write division using the fraction line, you no longer need to do the
division and multiplication from left to right.
Another math website stated the order of operations and then said there are a lot
of shortcuts that a person can use because of the associative and commutative
rules, but the site didn't elaborate. Is writing division using the fraction line one of
these shortcuts that allows you to avoid the left to right rule when multiplying and dividing?
Thank you for taking the time to read this problem. Sorry to be so long-winded. I
appreciate your time and help very much.
Date: 02/27/2010 at 21:02:19
From: Doctor Peterson
Subject: Re: how fractions fit in to 2nd rule in the order of operations
Hi, Terri.
As Terri wrote to Dr. Math
On 02/27/2010 at 19:48:22 (Eastern Time),
>The 2nd rule in the order of operations says to multiply and divide left to right.
>I've been thinking that the only reason for this 'left to right' part is so I don't divide
>by the wrong amount.
>For example, in the problem 3 / 6 * 4, if I didn't follow the order of operations,
>but instead did the 6 * 4 first, I'd get a wrong answer.
>Now, my text says I can avoid having to work left to right if I convert division to
>multiplication by the reciprocal. This makes sense.
Yes, I've said the same thing; in a sense this is the reason for the left-to-right rule,
since a right-to-left or multiplication-first rule would give different results.
>My question is when it comes to fractions... when I write a division problem using
>the fraction line, do I ever have to worry about following the left to right rule or
>does writing it as a fraction void the need for this rule just as writing division as
>multiplication of the reciprocal did? It seems that in my math text, when it comes
>to fractions such as ...
> 24(3)x
> ------
> 8(3)y
>... they cancel and do the division and multiplication within a fraction in any order.
>For example, I would cancel the 3's and divide the 24 by 8, which isn't doing
>division and multiplication from left to right, nor does that treat the fraction line as
>a grouping symbol. Even multiplication of fractions doesn't seem to go by the left
>to right rule, because we're multiplying numerators first before we're dividing the
>numerator by the denominator of each particular fraction. I can write the problem
>above as multiplication by the reciprocal and see that I can divide and multiply in
>any order.
>So I'm wondering if I can make this a general rule: in fractions, the left to right
>order is not an issue.
You're partly confusing order of operations (which applies to EVALUATING an
expression -- that is, to what it MEANS) with techniques for simplifying or carrying
out operations in practice. Properties of operations are what allow us to simplify, or
to find simpler ways to evaluate an expression than doing exactly what it says. For
example, the commutative property says that if the only operation in a portion of an
expression is multiplication, you can ignore order.
>I've been looking on the Internet and in algebra books to see if anyone addresses
>this particular part of the order of operations in detail, and it seems that most just
>generalize about the order of operations. I'm wondering if there is an unwritten
>rule that when you write division using the fraction line, you no longer need to do
>the division and multiplication from left to right.
In a fraction, the bar acts as a grouping symbol, ensuring that you evaluate the
entire top and the entire bottom before doing the division. Thus, the division is out
of the "left-to-right" picture entirely. In fact, since here the division involves top and
bottom rather than left and right, I'm not sure what it would even mean to do it left
to right.
>Another math website stated the order of operations and then said there are a lot
>of shortcuts that a person can use because of the associative and commutative
>rules, but the site didn't elaborate. Is writing division using the fraction line one of
>these shortcuts that allows you to avoid the left to right rule when multiplying and dividing?
Yes, that's what you're talking about -- shortcuts that essentially rewrite an
expression (without actually doing so) as an equivalent expression that you can
evaluate easily. Again, that is outside of the order of operations.
As an example, multiplying fractions is explained here in terms of the
properties on which it is based:
Deriving Properties of Fractions
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
Date: 02/27/2010 at 22:00:45
From: Terri
Subject: how fractions fit in to 2nd rule in the order of operations
Thank you for your time in answering my question. I appreciate it.
If you have time, I have just two more questions to make sure I can get this straight
in my head...
You mentioned that, for a fraction, the division is out of the "left-to-right" picture
entirely. So, I'm guessing that I can safely say that the left-to-right rule applies only
to division that is written on one line.
Last question: another website says that if I have the problem ...

   4(12)
   -----
     3

... then I need to multiply the 4 and 12 first before dividing by the 3, according to
the order of operations, using the fraction line as a grouping symbol. But when I
cancel, of course, I'm not doing it in this order. So is canceling one of those
"properties of operations" you mentioned that allows us to evaluate this without
having to stick to the order of operations?
Thank you again. Have a good weekend.
Date: 02/27/2010 at 22:27:33
From: Doctor Peterson
Subject: Re: how fractions fit in to 2nd rule in the order of operations
Hi, Terri.
As Terri wrote to Dr. Math
On 02/27/2010 at 22:00:45 (Eastern Time),
>Thank you for your time in answering my question. I appreciate it.
>If you have time, I have just two more questions to make sure I can get this
>straight in my head...
>You mentioned that, for a fraction, the division is out of the "left-to-right" picture
>entirely. So, I'm guessing that I can safely say that the left-to-right rule applies
>only to division that is written on one line.
Right. When division is written as a fraction, the order is forced by the grouping-
symbol aspect of the fraction bar; it's as if division were always written like
(a * b) / (c * d)
Mathematicians rarely write division in the horizontal form, probably because
indicating it vertically makes it so much clearer what order is intended.
>Last question: another website says that if I have the problem ...
> 4(12)
> ----
> 3
>... then I need to multiply the 4 and 12 first before dividing by the 3, according to
>the order of operations, using the fraction line as a grouping symbol. But when I
>cancel, of course, I'm not doing it in this order. So is canceling one of those
>"properties of operations" you mentioned that allows us to evaluate this without
>having to stick to the order of operations?
Again, canceling is not the same thing as evaluating; the order of operations only
applies to what an expression MEANS, not to how you must actually carry it out.
To EVALUATE this expression, in the sense of doing exactly what it says, I get 48/3
which becomes 16. I followed all the rules.
To SIMPLIFY the expression, I can follow the rule of simplification. This says that if I
divide ANY factor of the numerator (wherever it falls -- it doesn't matter because of
commutativity) and ANY factor of the denominator by the same number, the
resulting fraction is equivalent. The reason I can use the properties is because the
canceling is equivalent to this sequence of transformations:
4(12)   4 * 4 * 3    4     4     3     4     4
----- = --------- = --- * --- * --- = --- * --- * 1 = 16
  3     1 * 1 * 3    1     1     3     1     1
All sorts of properties of multiplication come into play here, but the idea of
canceling wraps it all into a simple process in which, again, the order doesn't
matter. But that only works when it is ONLY multiplication in either part.
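A small sketch with Python's exact-arithmetic Fraction type (my own illustration, not Dr. Peterson's) shows that the face-value evaluation and the regrouped, canceled form agree:

    from fractions import Fraction

    face_value = Fraction(4 * 12, 3)                              # do exactly what it says: 48/3
    canceled = Fraction(4, 1) * Fraction(4, 1) * Fraction(3, 3)   # regroup the factors first

    print(face_value, canceled, face_value == canceled)   # 16 16 True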
- Doctor Peterson, The Math Forum
Date: 03/01/2010 at 02:47:02
From: Terri
Subject: Thank you (how fractions fit in to 2nd rule in the order of operations)
Thank you very much for your help.
I guess my questions must have sounded very confusing; I was confused, looking at
the expression ...

   10
   -- * 2
    5

... as being 2 steps in the order of operations -- a division of 10 by 5 and a
multiplication -- like the expression 10 divided by 5 times 2 written all on one line
(with no fractions). But now I see that in my first example above, the fraction is
considered to be just one number for the purposes of the order of operations so
there is just 1 step -- a multiplication of the fraction times 2. Even though the
fraction line means division, it doesn't count as division in the order of operations.
Hope I got this right. A HUGE thank you for taking the time to make sense out of my
confusion!!! Have a great week!!
Date: 03/01/2010 at 10:11:53
From: Doctor Peterson
Subject: Re: Thank you (how fractions fit in to 2nd rule in the order of operations)
Hi, Terri.
For many purposes it is easiest to say that a fraction is just treated as a number in
the order of operations (in fact, I usually do that); but you don't have to, and that
isn't what I've been saying, because I don't think it's what you've been asking about.
Your example certainly CAN be treated as a division followed by a multiplication,
and it doesn't violate anything; you are still working left to right. What's different
from the horizontal expression 10 / 5 * 2 is just that everything isn't left or right of
everything else, so left-to-right isn't the only rule applied.
The fraction bar primarily serves to group the numerator and the denominator, as
I've said; I suppose, though I haven't said this, that it also groups the entire division
relative to anything to its left or right, since it forces you to do the division first. A
clearer example would be ...

       10
   2 * --
        5

... which amounts to 2 * (10 / 5), where we technically have to divide first (so in a
sense we are deviating from the left to right order). However, this is one of those
cases where it turns out not to matter, because the commutative property and
others conspire to make that expression EQUIVALENT to ...

   2 * 10
   ------
     5

... and therefore if you multiply first and then divide, you get the same answer. But
this is NOT really left-to-right, because the 5 is not "to the right of" the division in
the original form. It's just a simplified version -- a NEW expression that has the
same value, not the way you directly evaluate it. And that's been my main point:
HOW you actually evaluate something need not be identical to WHAT the expression
means, taken at face value.
Your questions until now were about something different -- where the numerator or
denominator was not just a single number -- so it couldn't really be considered a
mere fraction. For example, you asked about

   24(3)x
   ------
   8(3)y
There, you can't just say the fraction is treated as a single number; you have to use
the grouping properties of the fraction bar to determine the meaning of the expression.
To summarize, the fraction bar groups at two levels, first forcing the numerator and
denominator to be evaluated separately, and then forcing the entire division to be
done before anything to the left or right. Thus, this expression ...

       2 + 3
   1 + ----- * 6
       4 + 5

... means the same as this:
1 + ((2 + 3) / (4 + 5)) * 6
In simple cases, where the numerator and denominator are single numbers, this
implies that the one will be divided by the other before anything else, so for all
practical purposes you can think of the fraction as a single number (the result of
that division).
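To make the two-level grouping concrete, here is a short Python sketch (mine, not part of the original exchange) that evaluates that last example with exact fractions:

    from fractions import Fraction

    grouped = 1 + Fraction(2 + 3, 4 + 5) * 6      # 1 + ((2 + 3) / (4 + 5)) * 6
    print(grouped)                                # 13/3

    # Without the grouping the fraction bar provides, a flat reading like
    # 1 + 2 + 3 / 4 + 5 * 6 would mean something entirely different:
    print(1 + 2 + Fraction(3, 4) + 5 * 6)         # 135/4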
- Doctor Peterson, The Math Forum
Date: 03/02/2010 at 01:29:59
From: Terri
Subject: Thank you (how fractions fit in to 2nd rule in the order of operations)
Thank you for your patience in answering my questions which I'm guessing were a
headache to answer. I apologize for my inconsistency and confusion in writing them.
I have not seen "spelled out" in my algebra books the relationship between order of
operations and evaluating versus shortcuts like simplifying.
I've read and reread your answers, and I think I'm hopefully understanding it.
Thanks again. Have a good week!
Gyroscope Physics
Gyroscope physics is one of the most difficult concepts to understand in simple terms. When people see a spinning gyroscope precessing about an axis, the question is inevitably asked why that
happens, since it goes against intuition. But as it turns out, there is a fairly straightforward way of understanding the physics of gyroscopes without using a lot of math.
But before I get into the details of that, it's a good idea to see how a gyroscope works (if you haven't already). Click on the link below to see a video of a toy gyroscope in action.
(opens in new window)
As you've probably noticed, a gyroscope can behave very similar to a spinning top. Gyroscope physics can therefore be applied directly to a spinning top.
To start off, let's illustrate a typical gyroscope using a schematic as shown below.
In the schematic:
$\omega_s$ is the constant rate of spin of the wheel, in radians/second
$\omega_p$ is the constant rate of precession, in radians/second
$L$ is the length of the rod
$r$ is the radius of the wheel
$\theta$ is the angle between the vertical and the rod (a constant)
As the wheel spins at a rate $\omega_s$, the gyroscope precesses at a rate $\omega_p$ about the pivot at the base (with $\theta$ constant).
The question is, why doesn't the gyroscope fall down due to gravity?!
The reason is this:
Due to the combined rotation (spin plus precession), the particles in the top half of the spinning wheel experience a component of acceleration normal to the wheel (with the distribution shown in the figure below), and the particles in the bottom half of the wheel experience a component of acceleration normal to the wheel in the opposite direction (with the distribution shown). By Newton's second law, this means that a net force must act on the particles in the top half of the wheel, and a net force must act on the particles in the bottom half of the wheel. These forces act in opposite directions, so a clockwise torque is needed to sustain them. The force of gravity pulling down on the gyroscope supplies exactly that clockwise torque.

In other words, due to the nature of the kinematics, the particles in the wheel experience acceleration in such a way that the force of gravity is able to maintain the angle $\theta$ of the gyroscope as it precesses. This is the most basic explanation behind the gyroscope physics.
As an analogy, consider a particle moving around in a circle at a constant velocity. The acceleration of the particle is towards the center of the circle (centripetal acceleration), which is
perpendicular to the velocity of the particle (tangent to the circle). This may seem counter-intuitive, but the lesson here is that the acceleration of an object can act in a direction that is very
different from the direction of motion. This can result in some interesting physics, such as a gyroscope not falling over due to gravity as it precesses.
So now that we have an intuitive "feel" for gyroscope physics, we can analyze it in full using a mathematical approach. We will hence determine the equation of motion for the gyroscope.
Gyroscope Physics — Analysis
The general schematic for analyzing gyroscope physics is shown below.
In it, $g$ is the acceleration due to gravity, point $G$ is the center of mass of the wheel, and point $O$ is the pivot location at the base. The global $XYZ$ axes are fixed to ground and have their origin at $O$; $I$, $J$, and $K$ are defined as unit vectors pointing along the positive $X$, $Y$, and $Z$ axes, respectively.
The angular velocity of the wheel with respect to ground is the vector sum of the precession ($\omega_p$ about the vertical $K$ direction) and the spin ($\omega_s$ about the rod axis); its angular acceleration with respect to ground follows by differentiating that sum term by term. The angular velocity of the rod with respect to ground is just the precession, and its angular acceleration with respect to ground is zero, since $\omega_p$ is constant and does not change direction. Note that these terms are calculated using vector differentiation; to learn more about it, visit the vector derivative page.
Gyroscope Physics — Wheel Analysis
Let's analyze the forces and moments acting on the wheel, due to contact with the rod. A free-body diagram of the wheel (isolated from the rod) is given below. Note that a local $xyz$ axes is defined as shown, and is attached to the wheel so that it moves with the wheel, and has origin at point $G$. In the free-body diagram:

$M_x$ is the moment acting in the local $x$-direction, at point $G$
$M_y$ is the moment acting in the local $y$-direction, at point $G$
$M_z$ is the moment acting in the local $z$-direction, at point $G$
$F_{GX}$ is the force acting in the global $X$-direction, at point $G$
$F_{GY}$ is the force acting in the global $Y$-direction, at point $G$
$F_{GZ}$ is the force acting in the global $Z$-direction, at point $G$
Apply Newton's Second Law to the wheel in each of the global directions, where $m_w$ is the mass of the wheel and $a_{GX}$, $a_{GY}$, $a_{GZ}$ are the accelerations of point $G$ in the global $X$, $Y$, and $Z$ directions. Since point $G$ is traveling in a horizontal circle at constant velocity there is no tangential acceleration, so the tangential (first) acceleration and force components are both zero, and we only need to consider the second and third equations. For the second equation: since point $G$ is traveling in a horizontal circle at constant velocity it has a centripetal acceleration, which points towards the center of rotation; this determines the horizontal force acting on the wheel at $G$. For the third equation: since point $G$ is traveling in a horizontal circle at constant velocity its vertical acceleration is zero, so the vertical forces acting on the wheel must balance.
Next, apply the Euler equations of motion for a rigid body, given that the local $xyz$ axes is aligned with the principal directions of inertia of the wheel (treated as a solid disk). For this we need the angular velocity of the wheel (with respect to ground) resolved along the local axes, and the angular acceleration of the wheel (with respect to ground) resolved along the local axes. With these, the second and third of Euler's equations are equal to zero, therefore $\Sigma M_y = 0$ and $\Sigma M_z = 0$, and as a result the second and third equations do not contribute to the solution. (Note that $F_{GX}$, $F_{GY}$, and $F_{GZ}$ do not exert a moment (torque) about point $G$, since they are defined as coincident with point $G$ - i.e. the length of the moment arm is zero.)

Therefore, we only need to consider the first equation, for $\Sigma M_x$, the sum of the moments about point $G$ in the local $x$-direction. Here $I_x$, $I_y$, and $I_z$ are the principal moments of inertia of the wheel about point $G$, about the local $x$, $y$, and $z$ directions (respectively). By symmetry (treat the wheel as a thin circular disk), two of these principal moments of inertia are equal.
Gyroscope Physics — Rod Analysis
In this part of the gyroscope physics analysis we analyze the moments acting on the rod about point $O$. A free-body diagram of the rod (isolated from the wheel) is given below. Note that a local axes is defined as shown, attached to the rod so that it moves with the rod, with origin at point $O$ and aligned with the principal directions of inertia of the rod.

Note that point $O$ is treated as a frictionless pivot; therefore it exerts no moment (torque) on the rod. Since we are summing moments about $O$ (which is a fixed point) we can use the moment (Euler) equations directly. We need the angular velocity of the rod (with respect to ground) resolved along this local axes, and the angular acceleration of the rod (with respect to ground) resolved along it. As before, the second and third of Euler's equations are equal to zero, and they do not contribute to the solution.

Therefore, we only need to consider the first equation, for the sum of the moments about point $O$ in the first local direction, where the principal moments of inertia of the rod about point $O$ appear about the three local directions (respectively); by symmetry, two of them are equal. Here $m_r$ is the mass of the rod.
Combining equations (1)-(4) gives a single compact equation describing the gyroscope physics. We can solve it for any one of the values $\omega_s$, $\omega_p$, or $\theta$ if the other two values are known. We can write a more general version of the equation in which we replace the gyroscope wheel with any axisymmetric rotating body (with symmetry about its spin axis). If we assume the mass of the rod is negligible, then $m_r = 0$, and the equation simplifies further to a general equation for uniform gyroscopic motion with negligible rod mass.
In the next section on gyroscope physics we will look at gyroscopic stability, which is a very important and practical application of gyroscopes.
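As a rough numerical companion to this result, one commonly quoted approximation (valid when the spin is much faster than the precession, the rod mass is negligible, and the wheel is treated as a thin disk so that $I_s = \frac{1}{2} m r^2$ about its spin axis) is $\omega_p \approx m g L / (I_s \omega_s)$. This is a standard textbook limit, not necessarily the exact equation derived above; the function name and parameters below are illustrative.

    import math

    def precession_rate(mass_kg, radius_m, rod_length_m, spin_rad_s, g=9.81):
        """Approximate steady precession rate (rad/s) of a fast-spinning
        thin-disk gyroscope on a light rod: omega_p ~ m*g*L / (I_s * omega_s)."""
        I_spin = 0.5 * mass_kg * radius_m**2   # thin disk about its spin axis
        return mass_kg * g * rod_length_m / (I_spin * spin_rad_s)

    # Example: 0.2 kg wheel, 5 cm radius, 8 cm rod, spinning at 100 rev/s
    omega_s = 100 * 2 * math.pi
    omega_p = precession_rate(0.2, 0.05, 0.08, omega_s)
    print(f"precession rate: {omega_p:.3f} rad/s "
          f"({omega_p / (2 * math.pi):.3f} rev/s)")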
Gyroscope Physics — Gyroscopic Stability
From the angular momentum page we derived the following equation for a rigid body:

$\int_{t_1}^{t_2} \Sigma M \, dt = H_2 - H_1$

The term on the left is defined as the external impulse acting on the rigid body (between initial time $t_1$ and final time $t_2$), due to the sum of the external moments (torque) acting on the rigid body. The terms on the right are the final angular momentum vector ($H_2$) and the initial angular momentum vector ($H_1$).
Although the above equation was derived for a rigid body it also applies to any system of particles (whether they comprise a rigid or non rigid body). The proof of this is commonly found in classical
mechanics textbooks.
As is explained on the angular momentum page, the above equation applies for two cases: where the local axes has its origin at the center of mass $G$ of the rigid body, or at a fixed point $O$ on the rigid body (if there is one). In the remainder of this section on gyroscope physics, we will apply the former, so the moments, inertia terms, and angular momentum are all with respect to $G$.

To illustrate the concept of gyroscopic stability let's say we have an axisymmetric rigid object (such as a wheel) spinning in space with angular velocity $\omega$, at a given instant.
In the above figure, the change in the angular momentum vector between time $t_1$ and $t_2$ is given by $\Delta H = H_2 - H_1$, and according to the above equation $\Delta H$ is equal to the external impulse (due to the sum of the external moments acting between time $t_1$ and $t_2$).

For a given $\Delta H$ (which is equal to the external impulse), the angle between $H_1$ and $H_2$ decreases as the magnitude of $H_1$ increases. This means that the greater the magnitude of the initial angular momentum ($H_1$), the smaller that angle is for a given external impulse. Now, the magnitude of the angular momentum vector is proportional to the magnitude of the angular velocity vector $\omega$. Therefore, the faster the object is spinning, the smaller the resulting angle is for a given external impulse.

If there are no external moments (torque) acting on the object then we say that the object is experiencing torque free motion. Thus, from the above equation, $H_2 = H_1$ and $\Delta H = 0$. Therefore, the angular momentum vector has constant magnitude and direction, and angular momentum is conserved.

For an axisymmetric rigid object experiencing torque free motion, the precession axis is seen (from the point of view of an observer) to coincide with the angular momentum vector, and this precession axis defines the average orientation of the object. And since this precession axis defines the average orientation of the object, a small change in direction of the angular momentum vector (corresponding to a small angle, due to an external impulse) means a small change in the average orientation of the object. And once the external impulse has ended, the object is once again experiencing torque free motion.

Hence, a fast spinning axisymmetric object, experiencing torque free motion, is able to maintain its precession axis (and hence average orientation) with very little change, if an external impulse is applied.
Gyroscope physics sheds light on why mounting a spinning wheel (powered by a motor) in a gimbal (metal frame) is so useful for navigation. The spinning wheel is mounted in the gimbal so as to be free
of external torque. Therefore, given its already inherent orientation stability (as well as the fact that external torque is almost completely eliminated), the gyroscope experiences extremely little
orientation change as a result. This is why gyroscopes are commonly used in navigation, such as in boats and ships. They tend to remain level even if the boat or ship changes orientation (either by
pitching or rolling). The figure below illustrates a gyroscope-gimbal unit.
Gyroscopic stability also explains why a spinning axisymmetric projectile, such as a football, can have its symmetric (long) axis stay aligned with its flight trajectory, without tumbling end over
end when in flight. The spin imparts a gyroscopic response to the aerodynamic forces acting on the projectile, which results in the projectile long axis aligning itself with the flight trajectory.
The physics involved here is a combination of gyroscopic analysis and aerodynamic force analysis due to drag and (potentially) the Magnus effect. This is quite complicated and will not be discussed
here. However, there is a lot of literature available online on gyroscope physics, as related to projectile spin and gyroscopic stability, if one wishes to study this topic further.
Next in the gyroscope physics analysis, we will show that for an axisymmetric rigid body experiencing torque free motion, the precession axis is seen (from the point of view of an observer in the
inertial reference frame) to coincide with the angular momentum vector, which we know is fixed in inertial (ground) space, with constant magnitude and direction.
Gyroscope Physics — Torque Free Motion
Consider the figure below with local
axes as shown.
Let's find an equation that relates the angle shown in the figure to the vectors involved. A simple way to do this is with the vector dot product. In that expression the angular momentum is taken about the center of mass of the object, the unit vector points along the positive local axis shown, and the denominator is the magnitude of the angular momentum vector.
Differentiate the above equation with respect to time, which gives an equation for the rate of change of the angle. From the vector derivative page we know how to express the time derivative of the unit vector, and from the angular momentum page we know how to express the angular momentum in terms of unit vectors pointing along the positive local axes, the components of the angular velocity vector of the object (with respect to ground) resolved along those directions, and the principal moments of inertia about those directions. Substitute these equations into the equation for the rate of change of the angle and we get the following.
This is an informative equation coming out of the gyroscope physics analysis done here. It tells us under what condition the rate of change of the angle is zero (as the object rotates through space). If we choose the precession axis so that this condition is satisfied, then the precession axis coincides with the angular momentum vector, and as a result the rate of change of the angle is zero (which simplifies the calculations). Hence, the angle
is constant and this is why, from the point of view of an observer in the inertial reference frame, the precession axis appears to coincide with the angular momentum vector. But mathematically
speaking it does not matter what axis we choose as the precession axis, since it is simply a component of rotation. Being able to arbitrarily choose the precession axis is similar to how you can
arbitrarily choose the x,y directions for a force calculation. Ultimately the answer is the same and the resultant force is not going to change. To understand this better you can read up on
Euler angles
which are commonly used to define the angular orientation of a body, using the concept of precession, spin, and nutation (which have been used in the gyroscope physics analysis presented here).
Using the above result, let's now find an equation relating the precession and spin rates. Since the angle is always constant we can express the angular momentum in terms of its components. Now, from before, this can be written (to match notation used previously) component by component. We can equate components to give one set of equations, but from geometry we can also write another. Solving the above equations, we find that the resulting rates are constant. If we eliminate the common term from the above two equations we get a single relation between them.
Note that this is the same as the equation given previously for uniform gyroscopic motion with negligible rod mass, for the case where the applied moment is zero; with zero applied moment, that equation reduces to torque free motion for an axisymmetric body.
The next section contains some additional information related to gyroscope physics, that is worth mentioning.
Gyroscope Physics — Additional Information
An axisymmetric object, experiencing torque free motion, that is purely spinning about its symmetry axis (with no precession) will have its angular momentum vector aligned with the spin axis, which is easy to understand. However, if this object is temporarily subjected to an external moment it will likely begin to precess as well as spin, and its (new) precession axis will coincide with the new angular momentum vector, which will no longer coincide with the spin axis. To calculate the new motion of the object due to the applied external moment, you need to solve the Euler equations of motion. These will allow you to mathematically determine the new motion quantities due to the applied external moment. After the external moment has been applied, these quantities will correspond to torque free motion.
In problems such as gyroscope physics analysis, solving the Euler equations of motion is necessary when moments are applied, since these equations directly account for them.
In torque free motion, the only external force acting on an object is at most gravity, which acts through the center of mass of the object. The object is said to be experiencing torque free motion, since no torque (moment) is able to rotate the object about its center of mass, and thus the angular momentum about the center of mass does not change. It can therefore be assumed (for visualization purposes) that the center of rotation of the object is located at its center of mass, since the angular momentum calculations (about the center of mass) are not affected by this assumption. This is why in torque free motion problems the angular velocity vector is typically shown passing through the center of mass of the object being analysed.
In the next section on gyroscope physics we will analyze a general case of gyroscope motion. This is undoubtedly very useful since it can apply to many different problems.
Gyroscope Physics — General Gyroscope Motion
In this final section on gyroscope physics we shall analyze a general case of gyroscope motion, as shown on the
gyro top page
. The gyro top shown there illustrates a state of general motion. The kinematic equations are already derived on the gyro top page so we can use those directly.
Assume that there is no friction anywhere.
Gyroscope Physics — Wheel Analysis
From the gyro top page, the angular velocity of the gyroscope wheel is given by equation (1) on that page:
where the variables in this equation are defined in the gyro top page. Note that the term on the left has been replaced with
in order to match the notation used here.
From the gyro top page, the angular acceleration of the gyroscope wheel is given by equation (2) on that page:
where the variables in this equation are defined in the gyro top page. Note that the term on the left has been replaced with
in order to match the notation used here.
From the gyro top page, the reference point on the wheel can be treated as the center of mass of the wheel. Therefore, for notation purposes, that point on the left side of equation (5) on the gyro top page can be replaced with the center of mass.
After applying some considerable algebra to equation (5), and simplifying, we get the acceleration of the center of mass of the gyroscope wheel:
where the variables in this equation are defined in the gyro top page.
Consider next the following schematic of the gyroscope wheel (used previously), with variables previously defined. The same basic gyroscope physics analysis as used before will be used here. Now,
even though the gyroscope wheel rotates through space, the setup below can be used with the local axes always oriented as shown, for every stage of the motion. This can be done because the wheel is axisymmetric, so that the principal moments of inertia do not change relative to these axes as the wheel rotates.
By Newton's second law:
Substitute the acceleration terms for the center of mass into the above three equations, and we get the following force equations for the gyroscope wheel.
The angular velocity and angular acceleration of the gyroscope wheel are given with respect to the global axes. Using trigonometry we will resolve these onto the local axes of the gyroscope wheel, which gives the resolved components used below.
Apply the Euler equations of motion to the gyroscope wheel:
Next, consider the following schematic of the rod, with variables previously defined. The same basic analysis method as used before will be used here.
Gyroscope Physics — Rod Analysis
Since the local axes for the rod and wheel have the same orientation, we can find the resolved components of the angular velocity and angular acceleration of the rod by simply setting the spin rate and spin acceleration to zero in the equations for the angular velocity and angular acceleration of the wheel. This gives us the corresponding expressions for the rod.
Note that since the pivot point is treated as a frictionless pivot it exerts no moment (torque) on the rod, and since we are summing moments about the pivot (which is a fixed point) we can use the moment (Euler) equations directly. In the moment equation about the spin axis, note that gravity does not exert a moment about that local axis.
Substitute the force equations and Euler equations for the gyroscope wheel into the above three equations and simplify. We then get the final three equations with which to solve for the general
gyroscope motion:
According to the sign convention used, the signs of the terms in these equations follow the directions shown in the figures.
Let's assume we have a massless rod (so the rod mass is zero). We can then rewrite equation (6) to show that one of the quantities in it is a constant, and we can rewrite equation (7) to show that another is a constant. From equations (8) and (9) we get a further relation.
To solve this problem we first need to set boundary conditions. Let's choose that the gyroscope is released from rest at a given initial angle, with the precession rate initially set to zero. This results in equation (10) for the precession rate.
Substitute equation (10) into equation (5), set the rod mass to zero, simplify equation (5), and then perform (very tedious) integration to obtain an equation for the remaining rate of change of the angle.
The three angular velocities in equations 10-12 can be numerically integrated with respect to time to determine the corresponding orientation angles as a function of time. The name given to these three orientation angles (corresponding to precession, nutation, and spin) is Euler angles, which are a common way to define the orientation of any rigid body in three-dimensional space.
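As a rough illustration of this numerical integration (a minimal sketch in Python, not part of the original analysis; the three rate functions are assumed to be supplied by the user from equations 10-12):

    def integrate_euler_angles(rate_precession, rate_nutation, rate_spin,
                               t_end, dt=1e-4, angles0=(0.0, 0.0, 0.0)):
        # Step the three orientation (Euler) angles forward in time with a
        # simple fixed-step scheme, treating the angular rates as known
        # functions of time.
        psi, theta, phi = angles0
        t = 0.0
        history = [(t, psi, theta, phi)]
        while t < t_end:
            psi += rate_precession(t) * dt
            theta += rate_nutation(t) * dt
            phi += rate_spin(t) * dt
            t += dt
            history.append((t, psi, theta, phi))
        return history

A smaller step size (or a higher-order integrator) can be used if more accuracy is needed.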
Equations 10-12 mathematically describe the motion of any axisymmetric body attached to and rotating on a massless rod attached to a frictionless pivot, with axis of symmetry of the body pointing in
the direction of the rod axis. These are undoubtedly a nice set of equations coming out of the gyroscope physics analysis for general motion.
Equations 10-12 also apply for an axisymmetric top pivoted about a point lying on its symmetry axis. In this case the relevant length is the distance from the pivot point to the center of mass of the top, and the inertia terms are calculated about the center of mass of the top (as was done for the gyroscope wheel).
The motion predicted by equations 10-12 is known as cuspidal motion, based on the boundary conditions given previously.
Gyroscope Physics — Closing Remarks
This completes the gyroscope physics analysis. As you can see, gyroscope physics is a complex subject worthy of deeper understanding. It is hoped that you have gained a real sense of how gyroscopes
work, as well as inspired some curiosity to look into them further on your own.
An intuitive explanation for gyroscope physics was given near the start of the page. This explanation works well to explain how a gyroscope experiencing constant rates of precession and spin can maintain a constant angle of tilt. But in the gyroscope physics analysis given above, the gyroscope first falls from its initial angle and then starts to precess, while also experiencing a cyclical change in that angle. Intuition fails to explain why this happens, which is often the case in physics when dealing with complex problems. But perhaps a decent explanation for this is to use a spring analogy. If a mass
is hanging from a spring while in equilibrium, the vertical position of the mass will not change. But if that mass is raised and then released from rest it will oscillate vertically up and down. The
system is no longer in equilibrium and the oscillatory motion is simply a physical way to "correct" an imbalance of forces. The same basic idea applies to a gyroscope that is released from rest,
which can perhaps help you understand the gyroscopic physics taking place.
NAME
    Math::MagicSquare - Magic Square Checker and Designer

SYNOPSIS
    use Math::MagicSquare;
    $a = Math::MagicSquare -> new ([num,...,num], ..., [num,...,num]);
    $a->print("string");
    $a->printhtml();
    $a->printimage();
    $a->check();
    $a->rotation();
    $a->reflection();

DESCRIPTION
    The following methods are available:

    new
        Constructor. Arguments are a list of references to arrays of the same length.
        $a = Math::MagicSquare -> new ([num,...,num], ..., [num,...,num]);

    check
        This function can return 4 values:
        * 0: the Square is not Magic
        * 1: the Square is a Semimagic Square (the sum of the rows and the columns is equal)
        * 2: the Square is a Magic Square (the sum of the rows, the columns and the diagonals is equal)
        * 3: the Square is a Panmagic Square (the sum of the rows, the columns, the diagonals and the broken diagonals is equal)

    print
        Prints the Square on STDOUT. If the method has additional parameters, these are printed before the Magic Square is printed.

    printhtml
        Prints the Square on STDOUT in an HTML format (as a TABLE).

    printimage
        Prints the Square on STDOUT in png format.

    rotation
        Rotates the Magic Square by 90 degrees clockwise.

    reflection
        Reflects the Magic Square.

REQUIRED
    GD perl module.

EXAMPLE
    use Math::MagicSquare;
    $A = Math::MagicSquare -> new ([8,1,6], [3,5,7], [4,9,2]);
    $A->print("Magic Square A:\n");
    $A->printhtml;
    $i=$A->check;
    if($i == 2) {print "This is a Magic Square.\n";}
    $A->rotation();
    $A->print("Rotation:\n");
    $A->reflection();
    $A->print("Reflection:\n");
    $A->printimage();

    This is the output:

    Magic Square A:
    8 1 6
    3 5 7
    4 9 2
    This is a Magic Square.
    Rotation:
    4 3 8
    9 5 1
    2 7 6
    Reflection:
    8 3 4
    1 5 9
    6 7 2

AUTHOR
    Fabrizio Pivari fabrizio@pivari.com http://www.pivari.com/

Copyright
    Copyright 2003, Fabrizio Pivari fabrizio@pivari.com
    This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
    Are you interested in a Windows cgi distribution? Test http://www.pivari.com/squaremaker.html and contact me.

Availability
    The latest version of this library is likely to be available from
    http://www.pivari.com/magicsquare.html and at any CPAN mirror.

Information about Magic Square
    Do you like Magic Square? Do you want to know more information about Magic Square? Try to visit:
    A very good introduction on Magic Square: http://mathworld.wolfram.com/MagicSquare.html
    Whole collections of links and documents in Internet: http://mathforum.org/alejandre/magic.square.html and http://mathforum.org/te/exchange/hosted/suzuki/MagicSquare.html
    A good collection of strange Magic Square: http://www.geocities.com/pivari/examples.html
5th grade math (word problems)
Posted by Alex on Thursday, May 2, 2013 at 7:03am.
I'm not sure how to solve this: A group of friends went to an amusement park. 10 of them rode the ferris wheel, 15 rode carousel, and 11 rode the roller coaster. 7 of them rode both the ferris wheel
& roller coaster. 4 rode both the ferris wheel & carousel. 5 rode both the carousel and roller coaster. 3 rode all three rides. HOw many friends went to the park?
Ferris wheel & roller coaster 7-2 = 5
Ferris wheel & carousel 4-2 = 2
carousel & roller 5-3 = 2
carousel, ferris wheel, & coaster = 3
5 + 2 +2 +3 = 12 people?
• 5th grade math (word problems) - Reiny, Thursday, May 2, 2013 at 7:56am
These are best done by using Venn diagrams
(google 'Venn diagram' to get a better idea of that)
Here is a nice short youtube video of a problem that resembles yours, and explains it step by step.
draw the 3 circles within the rectangles and follow the steps outlined in the video
you should get 23 going to the park
let me know if you need further help
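For reference, the total of 23 can also be checked directly with the inclusion-exclusion principle (add the three ride totals, subtract each pairwise overlap, then add back the number who rode all three rides):
10 + 15 + 11 - 7 - 4 - 5 + 3 = 23 friends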
• 5th grade math (word problems) - Alex, Thursday, May 2, 2013 at 8:27am
Redefining the kilogram
New research, published by the National Physical Laboratory (NPL), takes a significant step towards changing the international definition of the kilogram which is currently based on a lump of
platinum-iridium kept in Paris. NPL has produced technology capable of accurate measurements of Planck's constant, the final piece of the puzzle in moving from a physical object to a kilogram based
on fundamental constants of nature. The techniques are described in a paper published in Metrologia on the 20th February.
The international system of units (SI) is the most widely used system of measurement for commerce and science. It comprises seven base units (meter, kilogram, second, Kelvin, ampere, mole and
candela). Ideally these should be stable over time and universally reproducible, which requires definitions based on fundamental constants of nature. The kilogram is the only unit still defined by a
physical artifact.
In October 2011, the General Conference on Weights and Measures (CGPM) agreed that the kilogram should be redefined in terms of Planck's constant (h). It deferred a final decision until there was
sufficient consistent and accurate data to agree a value for h. This paper describes how this can be done with the required level of certainty. It provides a measured value of h and extensive
analysis of possible uncertainties that can arise during experimentation. Although these results alone are not enough, consistent results from other measurement institutes using the techniques and
technology described in this paper will provide an even more accurate consensus value and a change to the way the world measures mass possibly as soon as 2014.
Planck's constant is a fundamental constant of nature which relates the frequency (colour) of a particle of light (a photon) to its energy. By using two quantum mechanical effects discovered in the
last 60 years: the Josephson effect and the quantum Hall effect, electrical power can be measured in terms of Planck's constant (and time).
A piece of kit called the watt balance - first proposed by Brian Kibble at the National Physical Laboratory in 1975 - relates electrical power to mechanical power. This allows it to make very
accurate measurements of Planck's constant in terms of the SI units of mass, length and time. The SI units of length and time are already fixed in terms of fundamental and atomic constants. If the
value of h is fixed, the watt balance would provide a method of measuring mass.
Dr Ian Robinson, who leads the project at the National Physical Laboratory, explains how the watt balance works: "The watt balance divides its measurement into two parts to avoid the errors which
would arise if real power was measured. The principle can be illustrated by considering a loudspeaker placed on its back. Placing a mass on the cone will push it downwards and it can be restored to
its former position by passing a current through the speaker coil. The ratio of the force generated to the current is fixed for a particular loudspeaker coil and magnet and is measured in the second
part of the experiment by moving the speaker cone and measuring the ratio of the voltage produced at the speaker terminals to the velocity of the cone.
When the results of the two parts of the experiment are combined, the product of voltage and current (electrical power) is equated to the product of weight and velocity (mechanical power) and the
properties of the loudspeaker coil and magnet are eliminated, leaving a measurement of the weight of the mass which is independent of the particular speaker used."
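As a rough numerical illustration of this idea (a sketch with made-up values, not figures from NPL): in weighing mode the balance gives m*g = B*L*I, in moving mode the coil gives U = B*L*v, so the geometry factor B*L cancels and m = U*I / (g*v).

    def watt_balance_mass(U, I, v, g=9.80665):
        # Mass inferred from the two watt-balance modes: electrical power U*I
        # is equated to mechanical power m*g*v.
        return U * I / (g * v)

    # purely illustrative numbers, chosen so the result comes out near 1 kg
    print(watt_balance_mass(U=0.5, I=0.0196133, v=0.001))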
Measurements of h using watt balances have provided uncertainties approaching the two parts in one hundred million level, which is required to base the kilogram on Planck's constant. Thanks to
improvements highlighted in the paper published today, measurements at the National Research Council in Canada, which is now using the NPL equipment, look set to provide considerably greater accuracy.
Another set of data comes from NIST, the USA's measurement institute. Currently the watt balance at NIST is showing slightly different results and the differences are being investigated. If the
results are found to be consistent, it will be the start of the end for the physical kilogram.
A Planck based kilogram would mean a universal standard that could be replicated anywhere at any time. It will also bring much greater long-term certainty to scientists who rely on the SI for precise
measurements, or on h itself. The watt balance would provide a means of realising and disseminating the redefined unit of mass.
Dr Robinson concludes: "This is an example of British science leading the world. NPL invented the watt balance and has produced an apparatus and measurements which will contribute to the
redefinition. The apparatus is now being used by Canada to continue the work, and we anticipate their results will have lower uncertainties than we achieved, and the principle is used by the US and
other laboratories around the world to make their own measurements."
"This research will underpin the world's measurement system and ensure the long term stability of the very top level of mass measurement. Although the man on the street won't see much difference -
you'll still get the same 1kg bag of potatoes - these standards will ultimately be used to calibrate the world's weighing systems, from accurate scientific instruments, right down the chain to
domestic scales."
More information: The paper: Toward the redefinition of the kilogram: A measurement of the Planck constant using the NPL Mark II watt balance is published in Metrologia, the leading international
measurement science journal, published by IOP Publishing on behalf of Bureau International des Poids et Mesures (BIPM).
4 / 5 (1) Feb 20, 2012
A Planck based kilogram would mean a universal standard that could be replicated anywhere at any time.
As long as you're in a lab with million dollar equipment.
It's all well and good, but in practice no-one can check how long a meter is either, by using a laser and an atomic clock, because such tools are simply not available to everyone. You'll still have
to send your weights and your measuring sticks to be calibrated in some special laboratory just like before.
1 / 5 (6) Feb 20, 2012
In my theory the changing mass of kilogram prototype is connected with uncertainties of gravitational constant and dilatation of iridium meter prototype recently observed. http://
www.physor...s64.html IMO the solar system is passing through a dense cloud of dark matter or maybe gravitational shadow of galactic center. The increased density of low energy neutrinos and
gravitational waves makes the vacuum more dense and massive objects are swelling in it and become less heavy. These changes are minute, but the replacement of one prototype with another one will not
help us with this situation, until the experimental apparatus cannot account to the changes of vacuum density. For example, if we fix the kilogram definition with Watt's balance, the definition of
meter will fluctuate instead.
1 / 5 (5) Feb 20, 2012
When the massive object appears inside of dense cloud of dark matter, it will expand, i.e. it becomes less dense and heavy (of lower gravity) and more transparent (fine structure constant will
increase and converge to the unitary value). On the other hand, the contemporary SI definition of meter is based on the wavelength of light in vacuum, so that the time will slow down with the same
rate, like the speed of light, so we cannot observe any difference with it. But the length of measures based on iridium meter prototype will expand with compare to laser meter prototype in the same
way, like the arms of Watt balance, so we will be forced to recalibrate them often. On the other hand, this recalibration will enable us to observe the changes of vacuum density more reliably. The
only problem is, the contemporary accuracy of Watt's balance is one and a half orders of magnitude below the value required.
3 / 5 (2) Feb 20, 2012
"In my theory the changing mass of kilogram prototype" - Kinedryl
I thought that was part of your theory of advanced hyperfoombidic flush toilets.
Please familiarize yourself with what is required before an idea becomes a scientific theory.
1 / 5 (5) Feb 20, 2012
Vacuum density is indeed a concept of dense aether theory. But this theory doesn't imply, such a density must change in historical perspective. I cannot predict, whether the density of vacuum will
change in the further moment positively or negatively. But if it changes into some direction, we can predict the sign of related effects and events and judge, if they're really related each other. In
this sense this theory is testable.
I don't care whether some model is considered a "scientific" with scientific establishment, because this establishment has an apparent tendency to ignore all uncomfortable ideas and findings for
years, so that it cannot serve as a criterion of itself. For this reason I do care only if my ideas are correct or wrong. If they will be proven correct, I presume, they will be labelled
"scientific" automatically, when all their opponents will die out.
1 / 5 (3) Feb 20, 2012
For example, the concept of black hole has been originally predicted with geologist John Michell in a personal letter written to Henry Cavendish in 1783, where he proposed an idea of a body so
massive that even light could not escape. Is such idea "scientific enough"? Isn't it the whole basis of the later model of black holes in general relativity theory?
not rated yet Feb 20, 2012
"In my theory the changing mass of kilogram prototype" - Kinedryl
I thought that was part of your theory of advanced hyperfoombidic flush toilets...
I had exactly the same thought except I was thinking it was about what was being flushed into the hyperspace created by hyperfoombidic flush toilets.
1 / 5 (3) Feb 21, 2012
In my theory the global warming effects (which are observable across whole solar system) result from the same source, like the dilatation and loss of weight of iridium prototypes, recent fluctuations
of gravity constant and speed of light constants, the recent increasing of asteroid density and volcanic activity, etc...
Dense aether model enables to explain, how these phenomena are related mutually - but it cannot predict them. After all, general relativity theory predicts the fall of meteorite, when it appears at
the proximity of Earth, but it cannot predict, when/how such a meteorite emerges. You should have additional theory / model for it.
1 / 5 (3) Feb 21, 2012
The characteristic for these phenomena is their low degree of correlation. Their connection will emerge just after consideration of sufficiently general theory. This is the problem for contemporary
science, which is A) overspecialized and fragmented, we have many experts, but they're all dealing with narrow area of physics B) it requires relatively high degree of correlation for claiming some
connection as a real. We could get sufficiently high degree of correlation, if we would consider all these phenomena together as a whole - but we cannot, because every expert is able to handle only
limited number of correlations in qualified manner. In this way many boundary phenomena, which consist of many correlations may escape the attention of specialized experts. The dark matter origin of
global warming is not only phenomena of this category: the cold fusion or various psychic effects which do require a broad multidisciplinary qualification have the same problem with their acceptance.
Pacifica Algebra Tutor
Find a Pacifica Algebra Tutor
...I have taught economics, operations research and finance related courses. I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong
background in statistics and econometrics.
49 Subjects: including algebra 1, algebra 2, calculus, physics
...I have the 5th edition of the Stewart textbook, an excellent book used by many schools, and the student solutions manual 5th edition. I also enjoy the fact that as complicated as calculus is
mathematically, many of its concepts and theorems are intuitively clear, such as continuity & differentiability, IVT, MVT, extrema, and some infinite series tests. These aid in teaching to
14 Subjects: including algebra 1, algebra 2, physics, calculus
...I have spent the past 2 years as a volunteer math instructor. I am very patient and have lots of experience helping struggling math students. My hours are flexible.
7 Subjects: including algebra 1, algebra 2, calculus, SAT math
...I also helped students from time to time to solve differential equation problems while tutoring them other high-level math and physics courses. I studied Linear Algebra first as an
undergraduate student and then as a graduate student. My research utilized Linear Algebra intensively for many years as well.
8 Subjects: including algebra 2, physics, geometry, calculus
I have a Masters of Arts degree in English Composition and a post secondary reading certificate from San Francisco State University, where I also received a multiple subjects teaching credential
and studied theatre, film and multi-media as an undergraduate. I have taught all levels of high school English for over twenty years. I have also taught college English and elementary school.
15 Subjects: including algebra 1, reading, English, elementary (k-6th)
Shifts of Square Root Functions
11.2: Shifts of Square Root Functions
Created by: CK-12
Practice Shifts of Square Root Functions
What if you had the square root function $y=\sqrt{x}$ and wanted to shift its graph up, down, left, or right by 3 units? After completing this Concept, you'll be able to identify various shifts in square root functions.
Watch This
CK-12 Foundation: Shifts of Square Root Functions
We will now look at how graphs are shifted up and down in the Cartesian plane.
Example A
Graph the functions $y=\sqrt{x}, y=\sqrt{x} + 2$ and $y=\sqrt{x} - 2$
When we add a constant to the right-hand side of the equation, the graph keeps the same shape, but shifts up for a positive constant or down for a negative one.
Example B
Graph the functions $y=\sqrt{x}, y=\sqrt{x - 2},$ and $y = \sqrt{x + 2}$
When we add a constant to the argument of the function (the part under the radical sign), the function shifts to the left for a positive constant and to the right for a negative constant.
Now let’s see how to combine all of the above types of transformations.
Example C
Graph the function $y = 2\sqrt{3x - 1} + 2$
We can think of this function as a combination of shifts and stretches of the basic square root function $y = \sqrt{x}$.
If we multiply the argument by 3 to obtain $y = \sqrt{3x}$, the graph is stretched vertically by a factor of $\sqrt{3}$ (since $\sqrt{3x} = \sqrt{3}\sqrt{x}$).
Next, when we subtract 1 from the argument to obtain $y = \sqrt{3x - 1}$, the graph shifts to the right.
Multiplying the function by a factor of 2 to obtain $y = 2 \sqrt{3x - 1}$ stretches the graph vertically by a factor of 2.
Finally we add 2 to the function to obtain $y = 2 \sqrt{3x - 1} + 2$, which shifts the graph up by 2 units.
Each step of this process is shown in the graph below. The purple line shows the final result.
Now we know how to graph square root functions without making a table of values. If we know what the basic function looks like, we can use shifts and stretches to transform the function and get to
the desired result.
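As a quick numerical check of Example C (a short Python sketch, with sample values chosen only for illustration), we can build the function up step by step and confirm that the combined transformations agree with evaluating $y = 2\sqrt{3x - 1} + 2$ directly:

    import math

    def f(x):
        # y = 2*sqrt(3x - 1) + 2, evaluated directly
        return 2 * math.sqrt(3 * x - 1) + 2

    for x in [1, 2, 5, 10]:
        step1 = math.sqrt(3 * x)        # multiply the argument by 3
        step2 = math.sqrt(3 * x - 1)    # subtract 1 from the argument
        step3 = 2 * step2               # stretch vertically by a factor of 2
        step4 = step3 + 2               # shift up by 2
        print(x, round(step4, 3), round(f(x), 3))   # the two columns match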
• For a square root function of the form $y = a \sqrt{f(x)} + c$, the constant $c$ shifts the graph vertically: up if $c$ is positive and down if $c$ is negative.
Guided Practice
Graph the function $y = -\sqrt{x + 3} - 5$.
We can think of this function as a combination of shifts and stretches of the basic square root function $y = \sqrt{x}$.
Next, when we add 3 to the argument to obtain $y = \sqrt{x + 3}$, the graph shifts to the left by 3 units.
Multiplying the function by $-1$ to obtain $y = -\sqrt{x + 3}$ reflects the graph across the $x$-axis.
Finally we subtract 5 from the function to obtain $y = -\sqrt{x + 3} - 5$, which shifts the graph down by 5 units.
Graph the following functions.
1. $y = \sqrt{2x - 1}$
2. $y = \sqrt{x - 100}$
3. $y = \sqrt{4x + 4}$
4. $y = \sqrt{5 - x}$
5. $y = 2\sqrt{x} + 5$
6. $y = 3 - \sqrt{x}$
7. $y = 4 + 2 \sqrt{x}$
8. $y = 2 \sqrt{2x + 3} + 1$
9. $y = 4 + \sqrt{2 - x}$
10. $y = \sqrt{x + 1} - \sqrt{4x - 5}$
Subring of R
February 10th 2010, 01:56 PM #1
Let S be the set of all elements of the form a+(b*cubed root of 2)+(c*cubed root of 4) with a,b,c rationals. Show that S is a subring of R.
My book does not require multiplicative identity.
I think I can handle showing the zero element is there, and closure under addition, but I'm having trouble with closure under multiplication and additive identity. Thanks.
Let S be the set of all elements of the form a+(b*cubed root of 2)+(c*cubed root of 4) with a,b,c rationals. Show that S is a subring of R.
My book does not require multiplicative identity.
I think I can handle showing the zero element is there, and closure under addition, but I'm having trouble with closure under multiplication and additive identity. Thanks.
I'm assuming that this says $S=\left\{a+b\sqrt[3]{2}+c\sqrt[3]{4}:a,b,c\in\mathbb{Q}\right\}$. Closure under mult. is just a lot of work. But, what do you mean by additive identity? Isn't the
zero element the additive identity?
Yes, that is the way it looks. Sorry, I meant additive inverse, not additive identity.
I didn't think it would be that easy lol, but thanks...the multiplication is just troublesome.
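For the multiplication step (sketched here for reference): write $t = \sqrt[3]{2}$, so $t^2 = \sqrt[3]{4}$ and $t^3 = 2$. Then
$(a + bt + ct^2)(d + et + ft^2) = (ad + 2bf + 2ce) + (ae + bd + 2cf)t + (af + be + cd)t^2$,
and since all the new coefficients are rational, the product is again in $S$. The additive inverse is simply $(-a) + (-b)t + (-c)t^2$, which has the same form.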
J.A. Moorer - Analog Vocoder Info
31, RUE ST. MERRI
PARIS F75004
PUBLISHED 6/78
• One of the results of the science of estimation theory has been the development of the linear prediction algorithms. This allows us to compute the coefficients of a time-varying filter which
simulates the spectrum of a given sound at each point in time. This filter has found uses in many fields, not the least of which is speech analysis and synthesis as well as computer music. The
use of the linear predictor in musical applications allows us to modify speech sounds in many ways, such as changing the pitch without altering the timing, changing timing without changing pitch,
or blending the sounds of musical instruments and voices. This paper is concerned with the fine details of the many choices one must make in the implementation of a linear prediction system and
how to make the sound as clean and crisp as possible.
• Linear prediction is a method of designing a filter to best approximate, in a mean squared error sense, the spectrum of a given signal. Although the approximation gives as results a filter valid
over a limited time, it is often used to approximate time-variant waveforms by computing a filter at certain intervals in time. This gives a series of filters, each one of which best approximates
the signal in its neighbourhood. The uses for such a filter are manifold, ranging from geological and seismological applications (Burg) to radar and sonar (Robinson), to speech analysis and
synthesis (Atal, et al, 1970, 1971, Itakura and Saito 1971, Makhoul 1975, Markel and Gray 1976), and to computer music (Dodge, Moorer 1977, Petersen 1975, 1976, 1977). We shall concentrate here
on the usage of linear prediction as a method of capturing, simulating, and applying the sounds of the human voice in high-fidelity musical contexts. Even more specifically, we will concentrate
on applications using only the digital computer as the medium.
• This paper reports the results of work done over the last two years in searching for ways to improve the quality of speech synthesis. The findings were determined by largely informal listening
tests with trained musicians.
• This description is taken largely from Makhoul (1977). We start by modeling the sound of the human voice as an all-pole spectrum with a transfer function given by H(z) = G / A(z), where A(z) = 1 + a1*z^-1 + a2*z^-2 + ... + ap*z^-p. Here G is a gain factor, the ak are the predictor coefficients, and p is the number of poles or predictor coefficients in the model. If H(z) is stable (minimum phase), A(z) can be implemented as a lattice filter (Itakura and Saito) as shown in
figure 1. The reflection (or partial correlation) coefficients Km in the lattice are uniquely related to the predictor coefficients. For a stable H(z), we must have |Km| < 1 for all m.
• H(z) can also be implemented as a lattice form as shown in figure 2, as well as a product of first and second order sections by factoring A(z) and combining complex conjugate roots to form second
order sections with all real coefficients as shown in figure 3. Finally, the filter can be implemented in direct form as shown in figure 4.
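• As a rough illustration of the all-pole lattice structure (our own sketch in Python, not code from this project; variable names are ours, and figure 2 remains the authority for the signal flow), one output sample can be computed per excitation sample from the reflection coefficients as follows:

    def lattice_synthesize(excitation, k):
        # k[m] holds the reflection coefficients K1..Kp (|k[m]| < 1 for stability).
        p = len(k)
        b = [0.0] * p                 # delayed backward prediction errors
        out = []
        for e in excitation:
            f = e                     # forward error entering the top stage
            new_b = [0.0] * p
            for m in range(p - 1, -1, -1):
                f = f - k[m] * b[m]   # f_m = f_{m+1} - K_{m+1} * (delayed b_m)
                if m + 1 < p:
                    new_b[m + 1] = b[m] + k[m] * f
            new_b[0] = f              # b_0 equals the output sample
            b = new_b
            out.append(f)
        return out

For a first-order filter this reduces to y(n) = e(n) - K1*y(n-1), i.e. a single pole, which is a useful sanity check on the sketch.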
• We are excluding for the time being models which include both poles and zeros since we have not as yet investigated a satisfactory method to compute both poles and zeros reliably.
• To actually synthesize a speech sound, one must drive this filter with something. This is called the excitation, and it too must be modeled to provide a reasonable representation of the speech
excitation. We usually choose the excitation to be either white noise for unvoiced sounds, a wide-band pulse train for voiced sounds, or silence (for silence) as shown in figure 5, although this
simplification should be discussed further.
• Thus to summarize, if we wish to synthesize speech, the process from analysis to synthesis might be the following:
• 1) Extract the pitch of the original sound.
• 2) Compute the linear prediction coefficients for the original sound at selected points in time.
• 3) Decide at each point in time whether the signal is voiced, unvoiced, or silence.
• 4) Compute the gain factor for the original sound.
• 5) Create an excitation from the pitch and the voiced/unvoiced/silence decision.
• 6) Scale it with the computed gain factor.
• 7) Filter it with the computed predictor coefficients.
• This is roughly the outline of the process from start to finish, but the order is not necessarily rigid. For example, we may decide not to compute a gain factor, but merely to scale the energy of
the synthesized signal to correspond to the original energy in the signal.
• Readers wishing to know more about the subject of linear prediction of speech should refer to the literature (Markel and Gray 1976, Makhoul 1975).
• There are numerous other decisions to be made, such as choosing a method for doing each of these things, choosing the order of the filter, deciding what form the filter should be in, how to
interpolate the parameters between frames. We will attempt to comment on each of these.
• The most usual application of this technique is with respect to speech communication. The idea there is to reduce the data involved in the transmission of speech. Indeed, using linear prediction,
one can quantize the various parameters and obtain striking reductions in the amount of data involved (Markel and Gray). For musical purposes, however, we cannot generally afford the loss of
quality implied by this quantization. Although even in speech communication the quality is important, it is not as critical as in the case of music production. At each point, we must ask
ourselves "Would I pay $ 5.95 for a record of this voice?".
• In general, there is no point in directly resynthesizing a piece of speech or singing. One could just use directly the original segment. The only point is to be able to modify the speech in ways
that would be difficult or impossible for the speaker to do. These include modifications of the driving function, such as changing the pitch or using more complex signals, changing the timing of
the speech, or actually altering the spectral composition. Thus we will concentrate here not only on methods that preserve the speech quality in an unmodified reconstruction, but also that are
less sensitive to modification, that can preserve the quality over a wide range of modifications.
• The first step in the process is to detect the pitch. This is not a simple problem, but has been well studied (Noll, Gold and Rabiner, Sondhi, Moorer 1974). In the musical case, we have more
information beforehand than one would in the speech communication case, in that we can allow the program some amount of information about the speaker. In specific, if the range of frequencies can
be bounded or identified beforehand, this eliminates immediately most of the gross errors that pitch detectors usually commit. What few gross errors remain can be corrected automatically by
heuristic means. We have found that if the pitch at any given time can be limited to a range of only one octave, most of the pitch detectors reported in the literature seem to work adequately.
The only question is how often should the pitch be determined. We are currently using pitch determination every 5 milliseconds and this seems to give fine enough resolution for most purposes.
• Next is the voiced-unvoiced-silence decision. This seems to be the most difficult part to automate. So difficult, in fact, that we have taken to using a graphics program to allow the composer to
go through and mark the segments himself. We use a decision theoretic procedure for the initial labeling (Atal and Rabiner 1976, Rabiner and Sambur 1977). We then synthesize an unmodified trial
replica of the original sound. Using graphics, 150 millisecond windows of both the original and the replica are presented. The voiced-unvoiced-silence decision as determined by the computer is
listed below the images. When the differences between the original and the replica seem to indicate an error in the decision, it is easily corrected by hand. It takes about 15 minutes to go
through a 12-second segment of speech this way, which represents, for instance, about one stanza of a poem (between 35 and 40 words).
• Usually in speech analysis, the analysis window is stepped by a fixed time, such as 10 or 15 milliseconds, and takes a fixed number of samples at each step, such as 25 milliseconds worth. This
has the problem of inconsistency. A 25 millisecond window for a male voice will sometimes capture two speech pulses and sometimes three, depending on the pitch and phasing of the speech. This
gives a large frame-to-frame variability in the spectral estimate. The result is a "roughness" that depends on the relation between the instantaneous pitch and the frame width.
• One can decrease the effect of this phenomenon in several ways. First, by use of an all-pass filter, one may distort the phase of the speech to largely eliminate the prominence of the glottal
pulse (Rabiner, et al, 1977). One can also use a larger analysis window such that more main pulses are incorporated, such that the omission or inclusion of one pulse does not perturb the filter
so strongly. Both of these remedies have the effect of blurring what are often quite sharp boundaries between voiced and unvoiced sounds. The problem is that if the analysis window overlaps
significantly an unvoiced region, the extreme bandwidth of the unvoiced signals contributes to a filter that passes a great deal of high frequencies. If this filter is then used to synthesize a
voiced sound, a strong buzzy quality is heard. The overall effect was that just around fricatives, the voice before and after had a strongly buzzy quality.
• Another problem with using analysis windows larger than a single period is that the filter begins to pick up the fine structure of the spectrum. The fine structure is composed of those features
that contribute to the excitation, notably the pitch of the sound. At the high order required for high-quality sound on wide-bandwidth original signals (we are using 55th order filters for a deep
male voice with a sampling rate of 25600 Hertz), the filter seems to capture some of the pitch of the original signal from overlapping several periods at once. The result is that even though
reasonable unmodified synthesis can be obtained, the sound deteriorates greatly when the pitch is changed. This, then, is a case where the unique musical application of modification implies a
more substantial change from speech communication techniques.
• The solution that we adopted was the use of pitch synchronous analysis, where the analysis window is set to encompass exactly one period, and it is stepped in time by exactly one period. This
prevents any fine structure representing the pitch from being incorporated into the filter itself. It also provides that in the case of the borders between voiced and unvoiced regions, no more
than one period will overlap the border itself. The step size and window width is not so critical in the unvoiced portions, so we simply invent a fictitious pitch by interpolating between the
known frequencies nearest the unvoiced region. There does remain a slow variation in the filters presumably caused by inaccuracies in the pitch detection process. This can be somewhat lessened by
the all-pass filter approach (Rabiner, et al, 1977), but it does not seem to be terribly annoying in musical contexts.
• Note that the adoption of pitch-synchronous analysis has implications for the type of prediction used. The most popular method is the autocorrelation method, but its necessary windowing is not
appropriate for pitch-synchronous analysis. Some kind of covariance or lattice method is then required. What we have chosen is Burg's method (Burg 1967) because it does correspond to the
minimization of an error criterion, the filter is unconditionally stable, and there exists a relatively efficient computational technique (Makhoul 1977). We have tried straight covariance methods
with the result that the instabilities of the filters are inherent at high orders in certain circumstances and somewhat difficult to cure. One can always factor the polynomial and replace the
ailing root by its inverse, then reassemble the filter, but besides being expensive, there is another reason to be discussed subsequently that is even more compelling.
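• For concreteness, a bare-bones transcription of Burg's recursion (our own illustrative sketch in Python, not the code actually used in this work) looks like the following; it returns both the direct-form predictor coefficients and the reflection coefficients:

    def burg(x, p):
        # x: one analysis window of samples; p: prediction order.
        n = len(x)
        f = list(x)                 # forward prediction errors
        b = list(x)                 # backward prediction errors
        a = [1.0]                   # A(z) coefficients, a[0] = 1
        ks = []                     # reflection coefficients
        for m in range(1, p + 1):
            num = 0.0
            den = 0.0
            for i in range(m, n):
                num += f[i] * b[i - 1]
                den += f[i] * f[i] + b[i - 1] * b[i - 1]
            k = -2.0 * num / den if den > 0.0 else 0.0
            ks.append(k)
            prev = a + [0.0]        # Levinson-style step-up of A(z)
            a = [prev[i] + k * prev[m - i] for i in range(m + 1)]
            for i in range(n - 1, m - 1, -1):   # update error sequences in place
                fi = f[i]
                f[i] = fi + k * b[i - 1]
                b[i] = b[i - 1] + k * fi
        return a, ks

The |Km| < 1 property mentioned earlier falls out of this choice of k, which is what guarantees a stable synthesis filter.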
• There are also a number of recursive estimation techniques (Morgan and Craig 1976, Morf 1974, Morf, et al, 1977) which allow one to compute the coefficients from the previous coefficients and the
new signal points. This has the advantage that no division of the signal into discrete windows is necessary. In fact, no division is possible. The problem is, again, that if the "memory" of the
recursive calculation is short enough to track the rapid changes, such as from an unvoiced region to a voiced region, then it also tracks the variation of spectrum throughout a single period of
the speech sound. The short-term spectrum changes greatly as the glottis opens and closes. If the memory of the calculation is long enough to smooth out the intra-period variations, then it also
tends to mix the spectra of the adjacent regions.
• To resynthesize the signal, either at the original pitch or at an altered pitch, one must synthesize an excitation function that drives the computed filters that embodies both the pitch and the
voiced-unvoiced-silence decision. The most common method is to use a single impulse for each period in the voiced case and uniform noise of some sort in the unvoiced case. In the case of silence,
the transient response of the filters is allowed to unwind naturally.
• The problem with the single pulse is that it is not a band-limited signal. In places where the pitch is changing rapidly, this produces a roughness in the sound that is quite annoying. For this
reason, it is generally preferable to use a band-limited pulse of some sort (Wynam and Steiglitz 1970). One can further improve the sound by scrambling somewhat the phases of the components of
the band-limited pulse to prevent the highly "peaky" appearance, but this is frosting on the cake that is not clearly perceived by most listeners. It is audible, but it is not the dramatic
transformation from harsh to mellifluous that one might hope. We synthesize the pulse by an inverse fast Fourier transform. This allows us to set the phases of each component independently. We
found that a slight deviation from zero phase was desirable and easily accomplished: adding a random number corresponding to -.5 to +.5 (radians) into the phase seemed sufficient to "round off"
the peak. Since using the FFT is a somewhat expensive way to compute the excitation, we computed it only when the frequency changed enough that a harmonic had to be omitted or added. The
synthesized driving signal was kept in a table and sampled at the appropriate rate to generate the actual excitation. This provided another benefit that we will discuss presently. Also, since
recomputing the driving function using semi-random phases can give discontinuities when changing from one function to a new one, we used a raised cosine to round the ends of the driving function
to zero. If a DC term is present, this is known to leave the spectrum unchanged, except for the highest harmonics, so we are assured that the driving spectrum is exactly flat up to near the
maximum harmonic. The raised cosine was applied just at the beginning 10 percent and ending 10 percent of the function.
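• A rough sketch of such an excitation table (our own illustration in Python/NumPy, with parameter names of our choosing) builds the half-spectrum, randomizes the phases slightly, inverse-transforms, and applies the raised-cosine taper to the ends:

    import numpy as np

    def pulse_table(nharm, length=1024, jitter=0.5, taper_frac=0.10, seed=0):
        # nharm must not exceed length // 2.
        rng = np.random.default_rng(seed)
        spec = np.zeros(length // 2 + 1, dtype=complex)
        for h in range(1, nharm + 1):
            # unit-magnitude harmonic with a small random phase offset
            spec[h] = np.exp(1j * rng.uniform(-jitter, jitter))
        table = np.fft.irfft(spec, n=length)
        # raised-cosine taper over the first and last 10 percent of the table
        ntap = max(1, int(taper_frac * length))
        ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(ntap) / ntap))
        table[:ntap] *= ramp
        table[-ntap:] *= ramp[::-1]
        return table

The table would then be resampled at the desired pitch to generate the actual excitation, and recomputed only when a harmonic has to be added or dropped.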
• The production of the noise for the unvoiced regions does not seem to be highly critical. We are using Gaussian noise (Knuth 1969).
• One might ask why we attempt to synthesize the driving function. Why not use the residual of the original signal directly? This indeed has the advantage that there is no pitch detection involved
and no voiced-unvoiced-silence decision at all. The problem is that then for musical purposes, one must be able to modify the residual itself. There exist methods for doing this using the phase
vocoder as a modification tool (Moorer 1976, Portnoff 1976). There is even a recent study about making the phase vocoder more resistant to degradation from modification (Allen 1977).
• The problem is that to produce the residual, it is often necessary to amplify certain parts of the spectrum that might have been very weak in the original. The very definition of "whitening" the
signal is to bring all parts of the spectrum up to a uniform level. There are inevitable weak parts of the spectrum - nasal zeros or some such. If there is precious little energy at a certain
band of frequencies, then the whitening process will simply amplify whatever noise was present in the recording process. If this resulting noise then falls under a strong resonance when the
prediction filter is then applied, this filter then just amplifies that noise. The perceptual effect is that the signal loses its "crispness" and becomes "fuzzy", and sometimes even downright noisy.
• In discussing excitation functions, one must mention a particular application that has been found quite useful for the production of new musical timbres, and that is the operation of
cross-synthesis (Petersen 1975, 1976, 1977). Here we use another musical sound as excitation, rather than attempting to model the speech excitation. What results is a bizarre but often
interesting combination of the source sound and the speech sound. In this manner we can realize the sounds of "talking violin" or "talking trumpet", in a manner of speaking. In fact, if one uses
the musical signal directly as excitation, quite often the result is not highly intelligible. This is because most musical signals are not spectrally flat wide-band signals, but instead have
complicated spectra. For this reason, it is usually good to whiten the source sound. One can do this also with a low-order linear predictor as shown in figure 6. The error signal of a 4th to 6th
order linear prediction process is usually sufficiently whitened to improve the intelligibility greatly, but at the expense of the clarity of the original musical source. In fact, one can choose
from a continuum of sounds between the original musical source and the speech sound. Depending on the compositional goals, one might choose something more instrument-like and scarcely
intelligible, progressing through stages of increasing intelligibility, or whatever.
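• A minimal sketch of this whitening step (our own illustration, reusing the burg() routine sketched earlier) simply runs the source through the low-order inverse filter A(z):

    def whiten(x, order=6):
        # Inverse-filter the source with a 4th-6th order predictor so that its
        # spectrum is roughly flat before it is used as an excitation.
        a, _ = burg(x, order)
        n = len(x)
        e = [0.0] * n
        for i in range(n):
            acc = 0.0
            for j, aj in enumerate(a):
                if i - j >= 0:
                    acc += aj * x[i - j]
            e[i] = acc
        return e

Lower orders leave more of the source's own spectral character in place; higher orders flatten it more completely.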
• One can use any sound as excitation with varying results. For instance, if the source sound has very band limited spectral characteristics, such as an instrument with a very small number of
harmonics like a flute, the whitening process will just amplify whatever noise happens to be present in the recording process, producing an effect somewhat like a "whispering" instrument, where
the pitch and articulation of the instrument is clearly audible, but the speech sounds distinctly whispered.
• One can also deliberately defeat the pitch synchronicity of the analysis to capture the fine structure of the spectrum. If one then filters a wide band sound, such as the sound of ocean waves,
one can as the order increases impose the complete sound of the voice on the source. We can in this manner realize something like the sounds of the sirens on the waves, or the "singing ocean". In
general, this sort of effect takes a very high order filter. For example, if the vocal signal were in steady-state, it would require one second-order section for each harmonic of the signal. Thus
for a low male voice of 40 to 60 harmonics, an order of 80 to 120 would be required. Indeed, our experiments have shown that as the order approaches 100 (or 50 for female voice), the pitch of the
vocal sound becomes more and more apparent.
• Using the lattice form for the synthesis filter gives us a convenient way of adjusting the order of the filter continuously. Since with the lattice methods, the first N sections (coefficients) of
the filter are optimal for that order, we can just add one section after another to augment the order. Setting coefficients to zero, starting from the highest, still produces an optimal filter of
a lower order.
• This is not true of the direct form or the factored form. Throwing out one coefficient requires changing all the other coefficients to render the filter optimal again. Indeed, instead of just
turning a coefficient on or off in the lattice form, we may also turn it up or down. That is to say, when we add a new coefficient, we may add it gradually, starting at zero, and slowly advancing
to its final value (presumably precomputed). This allows us to "play" the order of the filter, causing the vocal quality to strengthen and fade at will in a continuous manner.
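• A minimal sketch of that idea follows, assuming reflection coefficients defined so that the step-up recursion is a_m(i) = a_{m-1}(i) + k_m a_{m-1}(m-i); flip the signs of k if your analysis uses the opposite convention. The per-section fade array is an illustrative device, not the exact control scheme used here.
```python
import numpy as np

def lattice_allpole(excitation, k, fade=None):
    # All-pole lattice synthesis filter: stable whenever every |k[i]| < 1, so the
    # top sections can be scaled by fade[i] in [0, 1] to "play" the filter order.
    # k[m-1] is the reflection coefficient of section m under the step-up
    # convention a_m(i) = a_{m-1}(i) + k_m a_{m-1}(m-i).
    k = np.asarray(k, dtype=float)
    if fade is not None:
        k = k * np.asarray(fade, dtype=float)   # scaled coefficients stay below 1 in magnitude
    p = len(k)
    d = np.zeros(p)                             # delayed backward errors b_0(n-1) .. b_{p-1}(n-1)
    y = np.empty(len(excitation))
    for n, x in enumerate(excitation):
        f = x
        b_new = np.zeros(p)
        for m in range(p, 0, -1):               # run down the ladder
            f = f - k[m - 1] * d[m - 1]
            if m < p:
                b_new[m] = d[m - 1] + k[m - 1] * f
        b_new[0] = f
        d = b_new
        y[n] = f
    return y
```
Setting fade to, for example, eight ones followed by a single 0.5 and zeros leaves an optimal 8th-order filter plus a half-strength ninth section, which is the kind of continuous order control described above.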
• Cross-synthesis between musical instruments and voice seems to make the most sense if the two passages are in some way synchronized. We have done this in two different ways to date: one is to
record some speech, poetry, or whatever, performed by a professional speaker to achieve the desired presentation, then using synchronized recording and playing, either through a multi-track tape
recorder or through digital recording techniques, have the musician(s) play musical passages exactly synchronized with the vocal sounds. This takes a bit of practice for the musician, in that
speech sounds in English are not typically rhythmically precise, but the synchronization can nonetheless be done quite precisely. The other avenue is to record the music first to achieve some musical
performance goal, then have the speaker synchronize the speech with the musical performance. Either one of these approaches achieves synchronicity at the expense of naturalness in one or the
other of the performances, vocal or musical, but renders the combination much more convincing.
• To make the resulting speech as smooth as possible, virtually everything must change smoothly from one point to the next. For instance, if the filter coefficients are changed abruptly at the
beginning of a period, there is a perceivable roughness produced. If we wish to interpolate the filter coefficients, however, we must be careful about the choice of a filter structure.
• We can envision at least three filter structures: direct form, factored form, and lattice form. In factored form, the filter is realized using first- and second-order sections. The direct form is just a single high-order tapped delay line. The problem with the direct form is that its numerical properties are somewhat less than ideal and that one cannot necessarily interpolate the coefficients directly. If you interpolate linearly between the coefficients of two stable polynomials, the resulting intermediate polynomials are not necessarily stable. Indeed, if the roots of the
polynomials are very similar, the intermediate polynomials will probably be stable, but if the roots are very different, the intermediate polynomials are quite likely to be unstable. Thus the
direct form is not suitable for interpolation without further thought.
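• A small probe of that effect can be written as below, assuming NumPy; the two example denominators are invented, and whether an unstable intermediate actually appears depends entirely on how different the two sets of roots are, which is exactly the point made above.
```python
import numpy as np

def is_stable(a):
    # a = [1, a1, ..., ap]: stable if and only if all roots lie inside the unit circle
    return bool(np.all(np.abs(np.roots(a)) < 1.0))

# Two stable all-pole denominators whose resonances sit at very different frequencies.
a_start = np.poly([0.97 * np.exp(1j * 0.25), 0.97 * np.exp(-1j * 0.25),
                   0.95 * np.exp(1j * 2.70), 0.95 * np.exp(-1j * 2.70)]).real
a_end   = np.poly([0.97 * np.exp(1j * 2.90), 0.97 * np.exp(-1j * 2.90),
                   0.95 * np.exp(1j * 0.15), 0.95 * np.exp(-1j * 0.15)]).real

for t in np.linspace(0.0, 1.0, 11):
    a_mid = (1.0 - t) * a_start + t * a_end     # naive coefficient-wise interpolation
    print(f"t = {t:.1f}   stable: {is_stable(a_mid)}")
```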
• The factored form can be interpolated directly in a stable manner. In a second-order section representing a complex conjugate pole pair, the stability depends largely on the term of delay two. As
long as this term is less than unity, the section will probably be stable, depending on the remaining term. There are two problems remaining, though, in the use of the factored form. The first is
that the polynomial must be factored. With 55th order polynomials, this is a non-trivial task. There is no estimation technique known that can produce the linear prediction filter in already
factored form. Although factoring polynomials is an established science, it is still quite time-consuming, especially with high order. In addition to that, one must also group the roots such that
each section changes only between roots that are very similar. Since there is no natural ordering of the roots, one must invent a way of so grouping them. We have tried techniques of minimizing
the Euclidean distance on the Z-plane between pairs of roots, and this seems to give reasonable results, except in certain cases when, for instance, real roots collide and form complex conjugate
pairs. There is no telling at any given time how many real roots a polynomial will have, and quite often there are one or two real roots that move around in seemingly random fashion. The factored
form does, however, have one strong advantage, which is that it is very clear how to directly modify the spectrum at any given point. Since the roots are already factored, it is quite clear which
sections control which parts of the spectrum. Moreover, when the roots are interpolated, they form clear, well-defined patterns that have well-defined effects on the spectrum. Except for the
inefficiencies involved in factoring and ordering the roots, the factored form seems ideal.
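• One way the distance-based pairing could be written is sketched below, assuming NumPy. The greedy nearest-neighbour matching and the two example frames are illustrative only, and, as noted above, any such scheme degrades when real roots collide and split into a complex pair.
```python
import numpy as np

def pair_roots(prev_roots, curr_roots):
    # Reorder curr_roots so that each entry is the as-yet-unused root closest
    # (in Euclidean distance on the z-plane) to the corresponding previous root.
    remaining = list(curr_roots)
    ordered = []
    for r in prev_roots:
        j = int(np.argmin([abs(r - c) for c in remaining]))
        ordered.append(remaining.pop(j))
    return np.array(ordered)

# Hypothetical denominators for two successive frames.
prev = np.roots([1.0, -1.20, 0.90, -0.30, 0.20])
curr = np.roots([1.0, -1.10, 0.85, -0.25, 0.18])
print(np.column_stack([prev, pair_roots(prev, curr)]))
```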
• With the lattice form, there is no problem in interpolation. The reflection coefficients can be interpolated directly without fear of instabilities because the condition for stability is simply that each coefficient be of magnitude less than unity. When one interpolates, however, between reflection coefficients of two stable filters, the roots follow very complex paths; thus, if the
filters are not already very similar, one can only expect that the intermediate filters will be only very loosely related to the original filters.
• If one wishes to modify the spectrum, however, one must convert the reflection coefficients into direct form and then factor the polynomial. This can be done without problem with the expenditure
of sufficient quantities of computer time, but the inverse process, converting the direct form back into reflection coefficients, cannot be done accurately. The only process for doing so is
highly numerically unstable (Markel and Gray), so that for higher orders, it simply cannot be done in reasonable amounts of time. Thus, once factored, the polynomial must stay factored for all
time henceforth.
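• For reference, the cheap direction (reflection coefficients to direct form) is just the standard step-up recursion; a sketch assuming NumPy and the same sign convention as above. The numerically delicate reverse (step-down) recursion is deliberately not shown.
```python
import numpy as np

def reflection_to_direct(k):
    # Step-up recursion: build A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p from the
    # reflection coefficients, one order at a time:
    #   a_m(i) = a_{m-1}(i) + k_m * a_{m-1}(m - i),   a_m(m) = k_m
    a = np.array([1.0])
    for km in k:
        a_prev = a
        a = np.concatenate([a_prev, [0.0]])
        a[1:] += km * a_prev[::-1]
    return a

print(reflection_to_direct([0.5, -0.3]))   # [1.0, 0.35, -0.3]
```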
• For our own synthesis system, we currently use the lattice form because it uses directly the output of the analysis technique and because interpolation can be used easily on the reflection coefficients.
• Filter coefficients are, of course, not the only things that must be interpolated. The frequency must also be continuously interpolated for the most smooth sounding results. This is where the
advantage of using table lookup for the excitation occurs. With table lookup, one can continuously vary the rate at which the table is scanned. If one uses interpolation on the table itself, the
resulting process can be made very smooth indeed. Again, the table must be regenerated each time the frequency changes significantly, but this seems to occur seldom enough to allow the usage of
the FFT for generating the excitation function.
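• A sketch of that table-lookup excitation follows, assuming NumPy; the table contents, sizes, and the 110-130 Hz glissando are made-up values, and in practice the table would be rebuilt with the inverse FFT whenever the pitch moved far enough to change the usable number of harmonics.
```python
import numpy as np

def table_excitation(table, freqs, fs):
    # Scan one period stored in `table` at a continuously varying rate,
    # reading between adjacent entries with linear interpolation.
    n = len(table)
    phase = 0.0
    out = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        i0 = int(phase)
        frac = phase - i0
        out[i] = (1.0 - frac) * table[i0 % n] + frac * table[(i0 + 1) % n]
        phase = (phase + f * n / fs) % n
    return out

fs = 16000
spectrum = np.zeros(513, dtype=complex)
spectrum[1:40] = 1.0                          # a flat band-limited pulse: 39 harmonics
table = np.fft.irfft(spectrum, 1024)
freqs = np.linspace(110.0, 130.0, fs)         # one second of smoothly gliding pitch
excitation = table_excitation(table, freqs, fs)
```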
• The amplitude of the synthetic signal should be controlled to produce a loudness contour that corresponds as much as possible to the original loudness. As mentioned by Moorer (1976), what would
be ideal is some kind of direct loudness normalization, using possibly a model of human loudness perception (Zwicker and Scharf 1965). Unfortunately, this computation is so unwieldy as to render
it virtually useless at this time, so some other methods must be chosen.
• Atal (Atal and Hanauer 1971) used a method of normalization of energy such that the energy of the current frame (period) is scaled to correspond exactly to the original energy. Although this
sounds like the right thing to do, it has several problems. First is just the way it is calculated. The filter has presumably been run on the previous frame and now has a non-zero "memory". That
means that even with zero input this frame, it will emit a certain response that will presumably die away. We seek, then, to scale the excitation for this frame such that the combination of the
remaining response from the previous frame and the response for this frame (starting with a fresh filter this frame) will have the correct energy. Since the criterion is energy, a squared value,
this reduces to the solution of a quadratic equation for the gain factor. The problem comes when the energy represented by the tail of the filter response from the previous frame already exceeds
the desired energy of this frame. In this case, the solution of the quadratic is, of course, complex. What this means is that the model being used is imperfect. Either the filter or the
excitation is not an accurate model of the input signal. This is possible since the modeling process, especially for the excitation, is not an exact procedure. There are even instabilities that
can result in the computation of the gain. For instance, if the response from the last frame is large, but not quite as large as the desired energy, then a very small value of gain will be
computed. That means that in the next frame, there will be very little contribution from the previous frame and the gain factor will be quite large. As the model deteriorates, this oscillation in
the gain increases until no solution is possible. The only hope is that this occurs sufficiently rarely as to not be a detriment. Experience, however, seems to indicate the contrary: that this
failure in modeling is something that happens even in quite normal speech and must be taken into account. Besides all that, even if you do normalize the energy, the perceived loudness will often be found to change noticeably over the course of the utterance. This is especially true during voiced fricatives, although the theoretical explanation for this phenomenon is not clear at this time.
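• The computation described reduces to a single quadratic per frame. The sketch below (assuming NumPy/SciPy, a direct-form denominator a with a[0] = 1, and a filter state zi carried over from the previous frame) is one way it could be written, and it makes the failure mode explicit by returning None when the discriminant goes negative, i.e. when the ringing left over from the previous frame already carries too much energy.
```python
import numpy as np
from scipy.signal import lfilter

def frame_gain(a, excitation, zi, target_energy):
    # Split the frame's output into the zero-input ringing from the previous frame
    # and the zero-state response to a unit-gain excitation, then solve
    #     || ring + g * drive ||^2 = target_energy
    # for the excitation gain g.
    ring, _ = lfilter([1.0], a, np.zeros(len(excitation)), zi=zi)
    drive, _ = lfilter([1.0], a, excitation, zi=np.zeros(len(a) - 1))
    A = np.dot(drive, drive)
    B = 2.0 * np.dot(ring, drive)
    C = np.dot(ring, ring) - target_energy
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None, None, zi                  # no real gain can reach the target energy
    g = (-B + np.sqrt(disc)) / (2.0 * A)       # which root to take is a policy decision
    out, zf = lfilter([1.0], a, g * excitation, zi=zi)
    return g, out, zf
```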
• The method of amplitude control that we have chosen is two-fold: for cross-synthesis, we choose a two-pass post-normalization scheme that computes the energies of the original signal and the
synthesized signal. The synthesized signal is then multiplied by a piecewise-linear function, the breakpoints of which are the gain factors required to normalize the energy at the points where
the energies were computed.
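• A sketch of that two-pass scheme, assuming NumPy; the frame length is arbitrary, and np.interp supplies the piecewise-linear gain curve through the breakpoints.
```python
import numpy as np

def post_normalize(original, synthetic, frame=1024):
    # Pass 1: measure the energy of both signals at a grid of points and form
    # the gain needed at each point.  Pass 2: multiply the synthetic signal by
    # the piecewise-linear curve through those breakpoints.
    n = min(len(original), len(synthetic))
    centers, gains = [], []
    for start in range(0, n - frame + 1, frame):
        eo = float(np.dot(original[start:start + frame], original[start:start + frame]))
        es = float(np.dot(synthetic[start:start + frame], synthetic[start:start + frame]))
        centers.append(start + frame // 2)
        gains.append(np.sqrt(eo / es) if es > 0.0 else 1.0)
    curve = np.interp(np.arange(n), centers, gains)
    return synthetic[:n] * curve
```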
• For resynthesis of the vocal sounds, we use an open-loop method of just driving the filter with an excitation that corresponds in energy to the energy of the error signal of the inverse filter. This is, of course, only an approximation because the excitation never corresponds to the actual error signal, but in practice it seems to produce the smoothest, most naturally varying sounds. Note also that this does not guarantee any correspondence between the energies of the original and the synthetic signals. With the autocorrelation method of linear prediction, the error energy is
easily obtained as an automatic result of the filter computation. For other methods, it is generally necessary to actually apply the filter to the original signal to obtain the error energy. As
with all other parameters, we interpolate the gain in a continuous piecewise-linear manner throughout the synthesis.
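• For analysis methods that do not hand back the error energy as a by-product, the measurement can simply be made by running the inverse filter over the original frame; a short sketch assuming NumPy/SciPy, with the per-frame gains then interpolated the same piecewise-linear way as everything else.
```python
import numpy as np
from scipy.signal import lfilter

def residual_energy(frame, a):
    # Apply the inverse (FIR) filter A(z) to the original frame and measure
    # the energy of the prediction residual directly.
    resid = lfilter(a, [1.0], frame)
    return float(np.dot(resid, resid))

def gain_contour(frame_centers, frame_energies, n_samples):
    # Piecewise-linear interpolation of the per-frame excitation gains.
    return np.interp(np.arange(n_samples), frame_centers, np.sqrt(frame_energies))
```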
WHERE TO FROM HERE?
• Problems remain in certain areas, such as the synthesis of nasal consonants and the voiced/unvoiced/silence decision. With nasal consonants, it is theorized that the presence of the nasal zero
must be simulated in the filter. This cannot be entirely true because some nasals can be synthesized quite well and some cannot. Additional work must be done to try to distinguish the features of
the nasals that do not adapt well to the linear prediction method and decide what is to be done about them. Some amount of work has been done on the simultaneous estimation of poles and zeros
(Steiglitz 1977, Tribolet 1974), and we will be very interested to examine the results in critical listening tests. The voiced/unvoiced/silence decision may well require hand correction for the foreseeable future.
• These techniques have been embodied in a series of programs that allow the composer to specify transformations on the timing, pitch, and other parameters in terms of piecewise-linear functions
that can be defined directly in terms of their breakpoints, graphically or implicitly in terms of resulting contours of time, pitch, or whatever. More work must be done in arranging these in a
more convenient package for smoothly carrying the system through from start to finish without excessive juggling and hit-or-miss estimation.
• FIGURE CAPTIONS
• Figure 1 - Lattice form of inverse filter.
• Figure 2 - Lattice form of all-pole filter. The filter is unconditionally stable if all the coefficients are of magnitude less than one.
• Figure 3 - Factorization of all-pole filter into second order sections.
• Figure 4 - Direct form for the realization of an all-pole filter.
• Figure 5 - Schema of the synthesis of speech using as excitation either a pulse train or white noise.
• Figure 6 - Diagram of cross synthesis. The source signal, X(n), might be a musical instrument. Its spectrum is whitened by a low-order optimum inverse filter, then filtered by a high-order
all-pole filter representing the spectrum of another signal, Y(n), which is presumably a speech signal of some kind.
• REFERENCES
• ALLEN (J.B.), Short Term Spectral Analysis, Synthesis, and Modification by Discrete Fourier Transform, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 3, June 1977.
• ATAL (B.S.), SCHROEDER (M.R.), Adaptive Predictive Coding of Speech Signals, Bell Syst. Tech. J., vol. 49, 1970, pp. 1973-1986.
• ATAL (B.S.), HANAUER (S.L.), Speech Analysis and Synthesis by Linear Prediction of the Speech Wave, J. Acoust. Soc. Amer., vol. 50, Feb. 1971, pp. 637-655.
• ATAL (B.S.), RABINER (L.R.), A Pattern Recognition Approach to Voiced-Unvoiced-Silence Classification with Applications to Speech Recognition, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-24, June 1976, pp. 201-211.
• BURG (J.P.), Maximum Entropy Spectral Analysis, presented at the 37th Annual Meeting Soc. Explor. Geophys., Oklahoma City, OK, 1967.
• DODGE (C.), Synthetic Speech Music, Composer's Recordings, Inc., New York, CRI-SD-348, 1975 (disk).
• GOLD (B.), RABINER (L.R.), Parallel Processing Techniques for Estimating Pitch Periods of Speech in the Time Domain, J. Acoust. Soc. Amer., vol. 46, no. 2, August 1969, pp. 442-448.
• ITAKURA and SAITO, Digital Filtering Techniques for Speech Analysis and Synthesis, presented at the 7th International Congress on Acoustics, Budapest, 1971, Paper 25-C-1.
• MAKHOUL (J.), Lattice Methods for Linear Prediction, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 5, October 1977, pp. 423-428.
• MAKHOUL (J.), Linear Prediction: A Tutorial Review, Proceedings of the IEEE, vol. 63, April 1975, pp. 561-580.
• MARKEL (J.D.), GRAY (A.H.), Linear Prediction of Speech, Springer-Verlag, Berlin/Heidelberg, 1976.
• McGONEGAL (C.A.), RABINER (L.R.), ROSENBERG (A.E.), A Subjective Evaluation of Pitch Detection Methods Using LPC Synthesized Speech, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 1, June 1977, pp. 221-229.
• MOORER (J.A.), The Optimum Comb Method of Pitch Period Analysis of Continuous Digitized Speech, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-22, October 1974, pp. 330-338.
• MOORER (J.A.), The Synthesis of Complex Audio Spectra by Means of Discrete Summation Formulas, J. Aud. Eng. Soc., vol. 24, no. 9, November 1976, pp. 717-727.
• MOORER (J.A.), Signal Processing Aspects of Computer Music: A Survey, Proc. of the IEEE, vol. 65, no. 8, August 1977, pp. 1108-1137.
• MORF (M.), Fast Algorithms for Multivariable Systems, PhD thesis, Dept. of Electrical Engineering, Stanford University, Stanford, California, 1974.
• MORF (M.), VIEIRA (A.), LEE (D.T.), KAILATH (T.), Recursive Multichannel Maximum Entropy Method, Proc. 1977 Joint Automatic Control Conf., San Francisco, California, 1977.
• MORGAN (D.R.), CRAIG (S.E.), Real-Time Adaptive Linear Prediction Using the Least Mean Square Gradient Algorithm, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-24, no. 6, December 1976, pp. 494-507.
• NOLL (A.M.), Cepstrum Pitch Determination, J. Acoust. Soc. Amer., vol. 41, February 1967, pp. 293-309.
• PETERSEN (T.L.), Vocal Tract Modulation of Instrumental Sounds by Digital Filtering, presented at the Music Computation Conf. II, School of Music, Univ. Illinois, Urbana-Champaign, Nov. 7-9, 1975.
• PETERSEN (T.L.), Dynamic Sound Processing, in Proc. 1976 ACM Computer Science Conf. (Anaheim, California), February 10-12, 1976.
• PETERSEN (T.L.), Analysis-Synthesis as a Tool for Creating New Families of Sound, presented at the 54th Conv. Audio Eng. Soc. (Los Angeles, California), May 4-7, 1976.
• PORTNOFF (M.R.), Implementation of the Digital Phase Vocoder Using the Fast Fourier Transform, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-24, no. 3, June 1976, pp. 243-248.
• RABINER (L.R.), CHENG (M.J.), ROSENBERG (A.E.), McGONEGAL (C.A.), A Comparative Performance Study of Several Pitch Detection Algorithms, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-24, no. 5, October 1976, pp. 399-418.
• RABINER (L.R.), SAMBUR (M.R.), Application of an LPC Distance Measure to the Voiced-Unvoiced-Silence Detection Problem, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 4, August 1977, pp. 338-343.
• RABINER (L.R.), ATAL (B.S.), SAMBUR (M.R.), LPC Prediction Error - Analysis of Its Variation with the Position of the Analysis Frame, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 5, October 1977, pp. 434-442.
• ROBINSON (E.A.), Statistical Communication and Detection, Hafner, New York, 1967.
• SONDHI (M.M.), New Methods of Pitch Extraction, IEEE Trans. Audio Electroacoust., vol. AU-16, June 1968, pp. 262-266.
• STEIGLITZ (K.), On the Simultaneous Estimation of Poles and Zeros in Speech Analysis, IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 3, June 1977, pp. 229-234.
• TRIBOLET (J.M.), Identification of Linear Discrete Systems with Applications to Speech Processing, MS Thesis, MIT Department of Electrical Engineering, January 1974.
• WINHAM (G.), STEIGLITZ (K.), Input Generators for Digital Sound Synthesis, J. Acoust. Soc. Amer., vol. 47, no. 2 (part 2), 1970, pp. 665-666.
• ZWICKER (E.), SCHARF (B.), A Model of Loudness Summation, Psychol. Rev., vol. 1, 1965, pp. 3-26.
Evaluating Expressions Involving Fractions
This short note is intended to remind you of principles already described under the heading Order of Operations, and to illustrate their application to arithmetic expressions involving fractions.
In evaluating an arithmetic expression, the order in which operations are done is:
(1) bracketed expressions first, starting with the innermost pair of brackets
(2) powers second
(3) multiplications and divisions third
(4) additions and subtractions last.
Within each priority level, operations are done from left to right.
These rules apply to all arithmetic expressions, including those which involve fractions.
The expression in brackets gets evaluated first:
Now, the multiplication in the second term gets done, because it has a higher priority than the subtraction:
Finally, we do the subtraction:
Thus, our final answer is
A common error is to start by doing the subtraction:
This is an error, because it carries out the subtraction ahead of the higher priority brackets and multiplication. If carried to completion this will give an incorrect final answer.
The expression in brackets has the highest priority, so do it first:
So, the original expression becomes
Of the remaining operations, the multiplication has the highest priority, so do it next:
Now, we're left with subtraction and addition. Both of these operations have the same priority, so we work from left to right. First, subtract 5/16 from 7/8:
so that
Since 67 is a prime number, this fraction cannot be simplified further, so the final answer here is
Note that if the addition and subtraction were done in the reverse order (thus violating the priority rules), we would get
and then
which is very different from the correct answer. So, to get the correct answer, the conventional priority rules must be followed very carefully.
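Since the worked expressions above were lost in transcription, the short Python check below uses an invented expression of the same shape (brackets, a multiplication, and a left-to-right subtraction and addition) with exact fractions; the point is only that applying the priority rules in the wrong order gives a different answer.
```python
from fractions import Fraction as F

# Correct order: brackets, then multiplication, then subtraction/addition left to right.
correct = F(7, 8) - F(5, 16) + F(1, 2) * (F(2, 3) + F(1, 6))
# Doing the subtraction and addition first (violating the priority rules):
wrong = (F(7, 8) - F(5, 16) + F(1, 2)) * (F(2, 3) + F(1, 6))
print(correct, wrong)   # 47/48 versus 85/96
```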
Setting up a double integral
I have been thinking about this double integral for a couple hours and I can't seem to recall how to properly set it up.
I am attempting to integrate something under the surface of z = 1 / (y+2) and over the area which is bounded by y=x and y^2+x=2.
I initially figured that this integral would be simpler if you integrated with respect to 'x' first. Making the lower bound for x, y; and the upper bound 2-y^2. From this I got the y lower
boundary to be -sqrt 2 and upper bound sqrt 2. I have attempted multiple variations and cannot seem to arrive at the correct answer.
Can anyone assist me?
The solution to $x^2 + x = 2$ is x = 1 and x = -2.
I did consider that, but to clarify...
Would the bounds for x be -2, 1 (lower/upper)
Bounds for y be 0 and sqrt(2-y) ?
I thought you were going to integrate with respect to x first? Have you drawn the region of integration in the xy-plane and labelled the coordinates of the intersection points of the line and the parabola?
Yes, well my initial thought was to set x=0 to determine the y bounds: y^2+0=2, y=sqrt(2). I have also thought about the parabola being greater than y=-2, since otherwise the problem would be undefined as a result of the denominator. I am still somewhat at a loss as to how to progress further.
I finally figured the problem out, the bounds for x were not as you wrote them. The bounds for x are y and 2-y^2 (lower, upper).
The bounds for y are -2 and 1 (lower, upper).
This yields the correct result of 9/2.
I never said that the bounds for x were -2 and 1. I thought it would be obvious, especially at this level of question, that in my first reply the solutions I gave were for the x-coordinates of
the intersection of the line and the parabola, from which the correct y-coordinates could be got. I thought my second reply made that particularly clear.
Neither of your replies were clear or helpful, but thanks anyways!
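For anyone wanting to check the setup numerically, here is a short SciPy sketch of the iterated integral as finally set up in the thread (outer variable y from -2 to 1, inner variable x from y to 2 - y^2); note that scipy.integrate.dblquad passes the inner variable as the first argument of the integrand.
```python
from scipy import integrate

val, err = integrate.dblquad(lambda x, y: 1.0 / (y + 2.0),   # integrand 1/(y+2)
                             -2.0, 1.0,                      # outer: y from -2 to 1
                             lambda y: y,                    # inner lower limit: x = y
                             lambda y: 2.0 - y**2)           # inner upper limit: x = 2 - y^2
print(val)   # approximately 4.5, i.e. 9/2
```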
Posts by nina
Total # Posts: 453
write conclusion:if i place a pencil in a bucket of water what will happen?conclusion is?
write a hypothesis for questio:if i drop a full toilet paper roll and an empty toilet paperroll will they land on the floor at the same time ;
write a hypothesis for question:if i drop 2 objects at the same time will they land at the same time?
How many pounds of hamburger that costs $1.60 per pound must be mixed with 70 pounds of hamburger that costs $2.10 per pound to make a mixture that costs $1.70 per pound.
(Tell whether each expression would give you the area or the perimeter of the rectangle) 3) xy 4) y + 2x +y
Because it is August,the weather is quite unpredictable. The question is-Why is the comma needed after "August"?
math-116 final is it a timed test?
College Algebra
Suppose that the length of a rectangle is 3 inches longer than the width and that the perimeter of the rectangle is 78. Set up an equation involving only W, the width of the rectangle.
Solve for x in the interval 0<x<(pi/2): 2sin(9x)+cos(10x)=cos(8x) PLEASE HELP!!!
Could you help explain how to find the Metric Units of Mass?
To conclude, early adopters are opinion leaders when a new product just launched into the market. They spread after buying experience and affect the majority of buyers' attitudes (Rogers). As a
result, collect and manage their opinions are important to firms. Mar...
As has been discussed above, the experiences from early adopters can help marketers to understand the advantages and drawbacks of the product. Therefore, an interview of focus group may be one
effective method to listen to consumers' voice. It has been wildly acc...
Can some one help me in English and to see if I answered the question? According to Rogers, consumers can be grouped according to how quickly they adopt a new product. Some consumers adopt the
product as soon as it becomes available. On the other hand, some consumers are among...
Discuss the likely considerations and strategy for a company aiming its market launch at Everett Rogers' 'early adopters' category.
English checking
Thanks for SraJMcGin's proofreading and GuruBlue's information~ This is for the coming exam, hope I can pass without any question.^_^ ############################# The case study is "Seabiscuit - How to market a hit movie" and several key success f...
English checking
Hi, is there anyone who can help me with this essay's English checking? Thank you very much! ################################## TITLE:Based on your analysis for the group presentation, what are the
most important implications that you would draw from this case study for su...
When I get to the series part of Microsoft Excel 2003 during graphing, what am I supposed to do?
microsoft excel 2003
Okay, thank you.
microsoft excel 2003
How would I graph equations in Microsoft Excel 2003 and the residuals? I want to graph linear, quadratic and exponential equations but I don't know how.
Thank you.
how to do division.2 goes into 628 how many times
Can someone please tell me what part of a cell are sacs that contain digestive material? Also, what part of a cell is a network of canals?
Maybe I didn't phrase the question correctly. Sorry if anyone is confused. What I meant is, what part of a cell are sacs that contain digestive material? Also, what part of a cell is a network of
canals? That's what I meant to put.
What is the network of canals organelle and the group of sacs that contain digestive material? These are supposed to be a part of a cell.
energy 3 kg as heat is transfered to 0.22 kg co2 at constant pressure at an initial temperature of 30 degree estimate the final temperature of the gas
energy 3 kg as heat is transfered to 0.22 kg co2 at constant pressure at an initial temperature of 30 degree estimate the final temperature of the gas
4th grade question
How do I draw a picture to show how my government system is organized?
A quarter and a nickel. The other one is a nickel :)
how can i advise Robert about the proposed meeting to change the constitution of Wellbuilt Pty Ltd with particular reference to his rights in relation to the his shares? Wellbuilt Pty Ltd is a large
proprietary company running a medium sized commercial construction business in...
law help plz
what would be the argument Susan Smith (the company accountant) has been approached by XYZ Ltd, a major client of Wellbuilt Pty Ltd, with a business proposal that appeared too attractive to refuse.
Susan incorporated her own company to enter this contract. Her company made a p...
corporate law help
what would be the revelent issue Tom, a new director has found he has not been re-imbursed for expenses he incurred when he went on a familiarisation trip to various company premises. What did his
employment contract say? What did his company's written policies say on the ...
jewelry markings
I just purchased a piece of costume jewelry. It has 5 large, rectangular stones with large links between each stone. The clasp has what appears to be 885 with a fish symbol behind it. Any ideas on
what this means? These sites might give you some information: http://www.google....
what ways ae companies to be distinguished from partnership Thank you for using the Jiskha Homework Help Forum. Here is some information for you: 1. (Broken Link Removed) 2. (Broken Link Removed) 3.
what are the methods by which a corporation can be created http://www.google.com/search?q=form+corporation&rls=com.microsoft:en-us:IE-SearchBox&ie=UTF-8&oe=UTF-8&sourceid=ie7&rlz=1I7SUNA There should
be some answers here. I'm sure it varies, state by state. =)
how ca i create a brand name and hw can i say that the name is relevant to the business Thank you for using the Jiskha Homework Help Forum. First of all, you need to pick a business. The yellow pages
can give you lots of ideas.
what can be a new type of office paper recyling service in australia Thank you for using the Jiskha Homework Help Forum. Although I didn't see anything specifically referring to Australia, here is
something to check: http://www.dse.vic.gov.au/DSE/nrence.nsf/LinkView/7F5422...
inro to marketing
what will be a three product layers in recycling Thank you for using the Jiskha Homework Help Forum. Hopefully the following link is referring to y our post: http://www.toyoda-gosei.com/products/
us history intro and conclusion
i posted earlier about my DBQ for my us history class, and i just need to know how i should go about opening and closing what i am saying. it's the only part i have trouble on. heres the question
again: In what ways and to what extent did constitutional and social develope...
us history
i have a DBQ that i'm not sure how to introduce (the first paragraph). the question is: in what ways and to what extent did constitutional and social developements between 1860 and 1877 amount to a
revolution? thanks in advance You might consider something like this: The C...
main legal issues
CATCHWORDS: CORPORATIONS - statutory derivative action - application by 35% shareholder/director to bring derivative proceedings after company's assets were transferred to a company from which the
applicant is excluded - inadequacies of proposed points of claim - whether t...
corporate law
what would be the major issues in this CORPORATIONS LIST AUSTIN J FRIDAY 1 SEPTEMBER 2006 5189/05
Duke Maverick worked for a packaging company. One day, Duke received four separate orders and accidentally mixed up the addresses, so he applied the address labels at random. What is the probability that exactly three packages were correctly labeled? If three were labeled correctly, the fourth would have to be correct as well, so the probability of exactly three being correct is zero.
Which one of the following words does not belong with the others and why? Father Aunt Sister Cousin Mother Uncle You and your sister have the same parents. Your father, mother, aunt, and uncle have
the same sets of parents -- your two pairs of grandparents. However, your cous...
3 Years.
What do race and ethnicity mean to you? Since this is your question, I suggest you answer it. Your teacher wants to know what YOU think -- not what some anonymous person online thinks about it. It
might help to think of your friends and what would happen to your feelings towar...
Why are the concepts of race and ethnicity important to United States Society? The concepts of race and ethnicity are important to U.S. society for two main reasons. First, the U.S. enslaved
African-Americans for over 200 years. Their descendants still suffer the effects of th...
Low recovery
Thank you so much. I'm struggling for this question from last night.
what kind of bone comprises the proximal epiphysis of the femur? spongy bone
Math: Direct Variation
how do you solve a direct variation problem?
Suppose f(x)=Ax+B for real constants A and B. If f(60)=70 and f(72)=79, find k such that f(k)=k. PLEASE HELP :] Ok, we have f(x)=Ax+B, which is a line. We also know (1) f(60)=70, which means A*60+B=70, and (2) f(72)=79, which means A*72+B=79. Subtracting (1) from (2) gives A*12=9, so A=3/4 and B=70-60*(3/4)=25. Then f(k)=k means (3/4)k+25=k, so k=100.
A ghastly mistake (A) surprising (B)horrible (C)pale (D)original "horrible" can be considered a synonym for "ghastly". The other words cannot. To find synonyms, it's quite fast to use any of the
following: http://www.dictionary.com http://www.thesaurus....
Find the base three representation of 237. PLEASE HELP. I HAVE NO IDEA. 3^4=81, so there must be two of those (2*81=162, 237-162=75). 3^3=27, so there must be two of those (2*27=54, 75-54=21). 3^2=9, so there are two of them (2*9=18, 21-18=3), which leaves one three and no ones. That gives 22210 base 3, since 2*3^4 + 2*3^3 + 2*3^2 + 1*3 + 0 = 237.
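A quick way to check a conversion like this is repeated division by 3 (a small Python sketch, just for verification):
```python
def to_base_3(n):
    # Repeated division by 3; remainders come out least-significant digit first.
    digits = []
    while n > 0:
        digits.append(n % 3)
        n //= 3
    return "".join(str(d) for d in reversed(digits)) or "0"

print(to_base_3(237))   # 22210, since 2*81 + 2*27 + 2*9 + 1*3 + 0 = 237
```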
question 7 a
Google Answers: factorials for addition?
I know what a factorial is, but I can't remember whether there is a
word and/or formula for taking a number, let's say 5, and adding 5+4+3+2+1,
which is 15. Does this function (that's probably not the right word,
I'm not a mathematician) have a word, like "factorial"? If so, what
is it?
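For what it's worth, the quantity described (5 + 4 + 3 + 2 + 1 = 15) is usually called a triangular number, with the closed form n(n+1)/2; a two-line check:
```python
def triangular(n):
    # Sum 1 + 2 + ... + n, via the closed form n*(n+1)/2.
    return n * (n + 1) // 2

print(triangular(5), sum(range(1, 6)))   # both print 15
```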