Given the expression 3sin(2x-pi), how would I go about determining the phase angle?
Does it have to do with physics (oscillation)?
This is a homework problem, in which I am given a list of parameters including phase angle, and told to determine those parameters, then sketch the graph of the expression.
Phase angle was not covered in lecture or recitation. As best I can recall, I have never been asked to determine phase angle of a sine function, even back in trig, and do not know how to go about
that. Any help would be appreciated.
hm the only time i had to solve for phase angle was in physics, finding it by using the equation \[x(t)=A\cos(\omega t+\phi)\] which will give you the position of a mass attached to a spring at time t. however, it turns into a sine function when determining the velocity by differentiating, so \[x'(t)=-A\omega\sin(\omega t+\phi)\]
A was amplitude (max displacement)
did u guys get a web reloading notice?
I believe the amplitude is 3, the period is pi, and the phase shift would be pi/2 to the right. The solution sheet for this problem says the phase angle is pi/2, so maybe phase angle is
synonymous with phase shift.
that's what i was thinking except i thought pi would be the phase angle. how did you get pi/2?
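For reference (a worked note, not part of the original thread), the π/2 comes from factoring the 2 out of the argument:
\[3\sin(2x-\pi)=3\sin\!\left(2\left(x-\frac{\pi}{2}\right)\right)\]
Matching the general form \[A\sin(B(x-C)),\] the amplitude is A = 3, the period is 2π/B = π, and the phase shift is C = π/2 to the right. Some texts instead call the raw offset φ = −π in A sin(Bx + φ) the phase angle, so the terminology varies by textbook.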
|
{"url":"http://openstudy.com/updates/4eeff08ae4b0367162f64784","timestamp":"2014-04-19T02:26:30Z","content_type":null,"content_length":"47293","record_id":"<urn:uuid:6de939a0-1b0e-4092-baee-d445cbee7b53>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Efficient Hashing with Lookups in Two Memory Accesses
Results 1 - 10 of 13
- 14th Annual European Symposium on Algorithms, LNCS 4168 , 2006
"... Abstract. A counting Bloom filter (CBF) generalizes a Bloom filter data structure so as to allow membership queries on a set that can be changing dynamically via insertions and deletions. As
with a Bloom filter, a CBF obtains space savings by allowing false positives. We provide a simple hashing-bas ..."
Cited by 31 (3 self)
Add to MetaCart
Abstract. A counting Bloom filter (CBF) generalizes a Bloom filter data structure so as to allow membership queries on a set that can be changing dynamically via insertions and deletions. As with a
Bloom filter, a CBF obtains space savings by allowing false positives. We provide a simple hashing-based alternative based on d-left hashing called a d-left CBF (dlCBF). The dlCBF offers the same
functionality as a CBF, but uses less space, generally saving a factor of two or more. We describe the construction of dlCBFs, provide an analysis, and demonstrate their effectiveness experimentally.
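As a point of reference for what the dlCBF improves on, here is a minimal plain counting Bloom filter (an illustrative sketch with made-up parameters, not the paper's d-left construction):

```python
import hashlib

class CountingBloomFilter:
    # Plain counting Bloom filter: an array of counters instead of bits,
    # so deletions are supported at the cost of extra space.
    def __init__(self, m=1024, k=4):
        self.counts = [0] * m
        self.m, self.k = m, k

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def insert(self, item):
        for h in self._hashes(item):
            self.counts[h] += 1

    def delete(self, item):
        for h in self._hashes(item):
            self.counts[h] -= 1

    def query(self, item):
        # May return false positives, never false negatives.
        return all(self.counts[h] > 0 for h in self._hashes(item))
```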
"... Abstract. The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U → {0, 1} r that has specified values on the
elements of a given set S ⊆ U, |S | = n, but may have any value on elements outside S. All known methods (e. g. ..."
Cited by 13 (6 self)
Add to MetaCart
Abstract. The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U → {0, 1}^r that has specified values on the elements of a given set S ⊆ U, |S| = n, but may have any value on elements outside S. All known methods (e.g. those based on perfect hash functions) induce a space overhead of Θ(n) bits over the optimum, regardless of the evaluation time. We show that for any k, query time O(k) can be achieved using space that is within a factor 1 + e^{-k} of optimal, asymptotically for large n. The time to construct the data structure is O(n), expected. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits whp. A general reduction transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. Thus we obtain space bounds arbitrarily close to the lower bound for this problem as well. The evaluation procedures of our data structures are extremely simple. For the results stated above we assume free access to fully random hash functions. This assumption can be justified using space o(n) to simulate full randomness on a RAM.
"... Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However,
with a noticeable probability during the insertion of n elements some insertion requires Ω(log n) time. Whe ..."
Cited by 10 (3 self)
Add to MetaCart
Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However, with a
noticeable probability during the insertion of n elements some insertion requires Ω(log n) time. Whereas such an amortized guarantee may be suitable for some applications, in other applications (such
as high-performance routing) this is highly undesirable. Kirsch and Mitzenmacher (Allerton ’07) proposed a de-amortization of cuckoo hashing using queueing techniques that preserve its attractive
properties. They demonstrated a significant improvement to the worst case performance of cuckoo hashing via experimental results, but left open the problem of constructing a scheme with provable
properties. In this work we present a de-amortization of cuckoo hashing that provably guarantees constant worst case operations. Specifically, for any sequence of polynomially many operations, with
overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time. In addition, we present a general approach for proving that the performance
guarantees are preserved when using hash functions with limited independence.
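For context, a minimal textbook two-table cuckoo hash table (the amortized variant being de-amortized above; the table size and give-up threshold are illustrative):

```python
import random

class CuckooHash:
    def __init__(self, m=16):
        self.t = [[None] * m, [None] * m]   # two tables, one hash each
        self.m = m

    def _h(self, i, x):
        return hash((i, x)) % self.m

    def lookup(self, x):
        # Worst-case constant time: only two possible locations.
        return any(self.t[i][self._h(i, x)] == x for i in (0, 1))

    def insert(self, x, max_kicks=32):
        for _ in range(max_kicks):
            for i in (0, 1):
                p = self._h(i, x)
                if self.t[i][p] is None:
                    self.t[i][p] = x
                    return True
            i = random.randint(0, 1)
            p = self._h(i, x)
            self.t[i][p], x = x, self.t[i][p]  # evict occupant, re-place it
        return False  # a full implementation would rehash/grow here
```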
"... Abstract Hashing is an extremely useful technique for a variety of high-speed packet-processing applications in routers. In this chapter, we survey much of the recent work in this area, paying
particular attention to the interaction between theoretical and applied research. We assume very little bac ..."
Cited by 9 (1 self)
Add to MetaCart
Abstract Hashing is an extremely useful technique for a variety of high-speed packet-processing applications in routers. In this chapter, we survey much of the recent work in this area, paying
particular attention to the interaction between theoretical and applied research. We assume very little background in either the theory or applications of hashing, reviewing the fundamentals as
necessary.
"... Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization. Various
experiments demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distribute ..."
Cited by 9 (4 self)
Add to MetaCart
Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization. Various experiments
demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes. In this work we
construct a practical history-independent dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on
the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and
was also found useful in several algorithmic settings. Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates
are performed in expected amortized constant time, and membership queries are performed in worst case constant time. Moreover, with high probability, the lookup procedure queries only two memory
entries which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial
randomness, each set of elements has a unique memory representation.
- In Proc. 7th Symposium on Discrete Algorithms (SODA , 2006
"... It is well known that if n balls are inserted into n bins, with high probability, the bin with maximum load contains (1 + o(1))log n / loglog n balls. Azar, Broder, Karlin, and Upfal [1] showed
that instead of choosing one bin, if d ≥ 2 bins are chosen at random and the ball inserted into the least ..."
Cited by 9 (2 self)
Add to MetaCart
It is well known that if n balls are inserted into n bins, with high probability, the bin with maximum load contains (1 + o(1)) log n / log log n balls. Azar, Broder, Karlin, and Upfal [1] showed that instead of choosing one bin, if d ≥ 2 bins are chosen at random and the ball inserted into the least loaded of the d bins, the maximum load reduces drastically to log log n / log d + O(1). In this paper, we study the two-choice balls and bins process when balls are not allowed to choose any two random bins, but only bins that are connected by an edge in an underlying graph. We show that for n balls and n bins, if the graph is almost regular with degree n^ε, where ε is not too small, the previous bounds on the maximum load continue to hold. Precisely, the maximum load is
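The classic unrestricted d = 2 process that this result builds on is easy to simulate (a sketch of the complete-graph case, not the paper's restricted-graph variant):

```python
import random

def two_choice_max_load(n, balls):
    # "Power of two choices": pick two bins at random, place the ball
    # in the less loaded one, and report the final maximum load.
    load = [0] * n
    for _ in range(balls):
        a, b = random.randrange(n), random.randrange(n)
        load[min(a, b, key=lambda i: load[i])] += 1
    return max(load)

print(two_choice_max_load(10_000, 10_000))  # typically a small constant, ~ log log n / log 2
```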
, 2010
"... The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that
guarantee constant-time operations in the worst case with high probability, and in terms of space consumption ..."
Cited by 7 (3 self)
Add to MetaCart
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee
constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two
fundamental open problems: • We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1+ϵ)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ϵ = Ω((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ϵ. The construction is based on augmenting cuckoo hashing with a “backyard” that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ϵ.
- in Proc. 32nd Australasian Conf. Comput. Sci. (ACSC’09), 2009
"... A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cacheconscious array hash
table. Although fast with strings, there is currently no information in the research literature on its perfor ..."
Cited by 4 (1 self)
Add to MetaCart
A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cache-conscious array hash table. Although fast with strings, there is currently no information in the research literature on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance, with respect to time and space, for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.
"... In this paper we relate the problem of finding structures related to perfect matchings in bipartite graphs to a stochastic process similar to throwing balls into bins. Given a bipartite graph
with n nodes on each side, we view each node on the left as having balls that it can throw into nodes on the ..."
Cited by 1 (0 self)
Add to MetaCart
In this paper we relate the problem of finding structures related to perfect matchings in bipartite graphs to a stochastic process similar to throwing balls into bins. Given a bipartite graph with n
nodes on each side, we view each node on the left as having balls that it can throw into nodes on the right (bins) to which it is adjacent. If each node on the left throws exactly one ball and each
bin on the right gets exactly one ball, then the edges represented by the ball-placement form a perfect matching. Further, if each thrower is allowed to throw a large but equal number of balls, and
each bin on the right receives an equal number of balls, then the set of ball-placements corresponds to a perfect fractional matching – a weighted subgraph on all nodes with nonnegative weights on
edges so that the total weight incident at each node is 1. We show that several simple algorithms based on throwing balls into bins deliver a near-perfect fractional matching. For example, we show
that by iteratively picking a random node on the left and throwing a ball into its least-loaded neighbor, the distribution of balls obtained is no worse than randomly throwing kn balls into n bins.
Another algorithm is based on the d-choice load-balancing of balls and bins. By picking a constant number of nodes on the left and appropriately inserting a ball into the least-loaded of their
neighbors, we achieve a smoother load distribution on both sides – maximum load is at most log log n / log d + O(1). When each vertex on the left throws k balls, we obtain an algorithm that achieves
a load within k ± 1 on the right vertices.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4666912","timestamp":"2014-04-21T07:45:47Z","content_type":null,"content_length":"38909","record_id":"<urn:uuid:61cf1920-a773-4f4e-802d-8e01c92a722f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing power leads to new insights
In his 1989 book "The Emperor's New Mind", Roger Penrose commented on the limitations of human knowledge with a striking example: he conjectured that we would most likely never know whether a string of 10 consecutive 7s appears in the decimal expansion of the number pi. Just 8 years later, Yasumasa Kanada used a computer to find exactly that string, starting at the 22,869,046,249th digit of pi.
Penrose was certainly not alone in his inability to foresee the tremendous power that computers would soon possess. Many mathematical phenomena that not so long ago seemed shrouded and unknowable can now be brought into the light, with tremendous precision.
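As a small-scale illustration of this kind of digit hunting, here is a sketch using Python's mpmath library; it searches only the first 100,000 digits, many orders of magnitude short of Kanada's computation:

```python
from mpmath import mp, nstr

# Compute pi to ~100,000 decimal places, then look for the longest
# run of consecutive 7s among those digits.
mp.dps = 100_010
digits = nstr(mp.pi, 100_000)[2:]  # drop the leading "3."

for run in range(10, 0, -1):
    pos = digits.find("7" * run)
    if pos != -1:
        print(f"first run of {run} sevens starts at digit {pos + 1}")
        break
```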
In their article "Exploratory Experimentation and Computation," to appear in the November 2011 issue of the Notices of the American Mathematical Society, David H. Bailey and Jonathan M. Borwein
describe how modern computer technology has vastly expanded our ability to discover new mathematical results. "By computing mathematical expressions to very high precision, the computer can discover
completely unexpected relationships and formulas," says Bailey.
Mathematics, the Science of Patterns
A common misperception is that mathematicians' work consists entirely of calculations. If that were true, computers would have replaced mathematicians long ago. What mathematicians actually do is to
discover and investigate patterns—patterns that arise in numbers, in abstract shapes, in transformations between different mathematical objects, and so on. Studying such patterns requires subtle and
sophisticated tools, and, until now, a computer was either too blunt an instrument, or insufficiently powerful, to be of much use in mathematics. But at the same time, the field of mathematics grew
and deepened so much that today some questions appear to require additional capabilities beyond the human brain.
"There is a growing consensus that human minds are fundamentally not very good at mathematics, and must be trained," says Bailey. "Given this fact, the computer can be seen as a perfect complement to
humans—we can intuit but not reliably calculate or manipulate; computers are not yet very good at intuition, but are great at calculations and manipulations."
|
{"url":"http://machineslikeus.com/news/computing-power-leads-new-insights","timestamp":"2014-04-20T13:59:22Z","content_type":null,"content_length":"32961","record_id":"<urn:uuid:9037ade3-05ff-45fe-ab83-6263474330b4>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Call for Bayesian case studies
January 7, 2013
By Allen Downey
(This article was originally published at Probably Overthinking It, and syndicated at StatsBlogs.)
It's been a while since the last post because I have been hard at work on
Think Bayes
. As always, I have been posting drafts as I go along, so you can read the current version at
I am teaching Computational Bayesian Statistics in the spring, using the draft edition of the book. The students will work on case studies, some of which will be included in the book. And then I hope
the book will be published as part of the
Think X
series (for all
). At least, that's the plan.
In the next couple of weeks, students will be looking for ideas for case studies. An ideal project has at least some of these characteristics:
• An interesting real-world application (preferably not a toy problem).
• Data that is either public or can be made available for use in the case study.
• Permission to publish the case study!
• A problem that lends itself to Bayesian analysis, in particular if there is a practical advantage to generating a posterior distribution rather than a point or interval estimate.
Examples in the book include:
• The hockey problem: estimating the rate of goals scored by two hockey teams in order to predict the outcome of a seven-game series (a grid-approximation sketch of this kind of update appears just after this list).
• The paintball problem, a version of the lighthouse problem. This one verges on being a toy problem, but recasting it in the context of paintball got it over the bar for me.
• The kidney problem. This one is as real as it gets -- it was prompted by a question posted by a cancer patient who needed a statistical estimate of when a tumor formed.
• The unseen species problem: a nice Bayesian solution to a standard problem in ecology.
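As a taste of what such a case study involves, here is a minimal grid-approximation sketch in the spirit of the hockey problem; the data, grid, and flat prior are invented for illustration, and Think Bayes develops this machinery properly:

```python
import numpy as np
from scipy.stats import poisson

# Grid of hypothetical goal-scoring rates (goals per game).
lams = np.linspace(0.01, 8.0, 400)
prior = np.ones_like(lams)            # flat prior over the grid

goals = [2, 3, 1, 4]                  # made-up game results for one team

posterior = prior.copy()
for g in goals:
    posterior *= poisson.pmf(g, lams) # Bayes: multiply in each likelihood
posterior /= posterior.sum()          # normalize over the grid

print(lams[posterior.argmax()])       # MAP estimate of the scoring rate
```

The payoff of keeping the whole posterior (rather than a point estimate) is that quantities like "probability team A outscores team B in game 7" fall out by simple summation over the grid.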
So far I have a couple of ideas prompted by questions on Reddit:
But I would love to get more ideas. If you have a problem you would like to contribute, let me know!
Please comment on the article here: Probably Overthinking It
|
{"url":"http://www.statsblogs.com/2013/01/07/call-for-bayesian-case-studies/","timestamp":"2014-04-20T20:56:29Z","content_type":null,"content_length":"35842","record_id":"<urn:uuid:3da0f5dd-84ff-45d9-8a4e-a27416a807b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vedic Mathematics
One of the foremost exponents of Vedic mathematics, the late Bharati Krishna Tirtha Maharaja, author of Vedic Mathematics, has offered a glimpse into the sophistication of Vedic mathematics. Drawing
from the Atharva-veda, Tirtha Maharaja points to many sutras (codes) or aphorisms which appear to apply to every branch of mathematics: arithmetic, algebra, geometry (plane and solid), trigonometry
(plane and spherical), conics (geometrical and analytical), astronomy, calculus (differential and integral), etc.
Utilising the techniques derived from these sutras, calculations can be done with incredible ease and simplicity in one's head in a fraction of the time required by modern means. Calculations
normally requiring as many as a hundred steps can be done by the Vedic method in one single simple step. For instance the conversion of the fraction 1/29 to its equivalent recurring decimal notation
normally involves 28 steps. Utilising the Vedic method, it can be calculated in one simple step.
Secular and spiritual life were so intertwined in Vedic India that mathematical formulas and laws were often taught within the context of spiritual statements (mantras). Thus while learning spiritual
lessons, one could also learn mathematical rules. The Vedic mathematicians prefer to use the devanagari letters of Sanskrit to represent the various numbers in their numerical notations rather than
the numbers themselves, especially where large numbers are concerned. This made it much easier for the students of this mathematics to record the arguments and the appropriate conclusions. In order
to help the pupil to memorise the material studied and assimilated, they made it a general rule of practice to write even the most technical and abstruse textbooks in sutras or in verse (which is so
much easier - even for children - to memorise). And this is why we find not only theological, philosophical, medical, astronomical and other such treatises but even huge dictionaries, in Sanskrit
verse! So from this standpoint, they used verse, sutras and codes for lightening the burden and facilitating the work (by versifying scientific and even mathematical material in a readily assimilable form).
The code used is as follows:
The Sanskrit consonants
ka, ta, pa, and ya all denote 1;
kha, tha, pha, and ra all represent 2;
ga, da, ba, and la all stand for 3;
gha, dha, bha, and va all represent 4;
gna, na, ma, and sa all represent 5;
ca, ta, and sa all stand for 6;
cha, tha, and sa all denote 7;
ja, da, and ha all represent 8;
jha and dha stand for 9; and
ka means zero.
Vowels make no difference and it is left to the author to select a particular consonant or vowel at each step. This great latitude allows one to bring about additional meanings of his own choice. For
example kapa, tapa, papa, and yapa all mean 11. By a particular choice of consonants and vowels one can compose a poetic hymn with double or triple meanings. Here is an actual sutra of spiritual
content, as well as secular mathematical significance:
gopi bhagya madhuvrata
srngiso dadhi sandhiga
khala jivita khatava
gala hala rasandara
While this verse is a petition to Lord Krishna, when learning it one can also learn the value of pi/10 (i.e. the ratio of the circumference of a circle to its diameter divided by 10) to 32 decimal
places. It has a self-contained master-key for extending the review to any number of decimal places. The translation is as follows: "O Lord anointed with the yoghurt of the milkmaids' worship
(Krishna), O saviour of the fallen, O master of Shiva, please protect me."
At the same time, by application of the consonant code given above, this verse directly yields the decimal equivalent of pi divided by 10: pi/10 = 0.31415926535897932384626433832792. Thus, while
offering mantric praise to Godhead in devotion, by this method one can also add to memory significant secular truths.
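A rough sketch of the decoding for the opening phrase, using the consonant table above. The syllable segmentation and Latin transliteration are my own, only the first words are decoded, and the convention that only the final consonant of a conjunct syllable (e.g. "gya", "vra") counts is assumed:

```python
# Digits from the consonant table in the text, for the consonants needed here:
# ga -> 3, pa -> 1, bha -> 4, ya -> 1, ma -> 5, dha -> 9, ra -> 2, ta -> 6
digit = {'g': 3, 'p': 1, 'bh': 4, 'y': 1, 'm': 5, 'dh': 9, 'r': 2, 't': 6}

# "gopi bhagya madhuvrata" split as go-pi bha-gya ma-dhu-vra-ta,
# keeping the effective consonant of each syllable:
syllable_consonants = ['g', 'p', 'bh', 'y', 'm', 'dh', 'r', 't']

print(''.join(str(digit[c]) for c in syllable_consonants))  # -> 31415926
```

The output matches the first eight digits of the stated decimal expansion of pi/10 (after the leading 0.3).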
|
{"url":"http://archives.amritapuri.org/bharat/vedicmath.php","timestamp":"2014-04-18T19:00:56Z","content_type":null,"content_length":"21747","record_id":"<urn:uuid:2e488ce2-7bd1-4188-9923-ed4b94a1a916>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximizing Profit price
Hey guys, I got the first part done and most of the second part, but could someone check over my answers for the part after the YOU ARE CORRECT sign.
Also how do i solve the profit maximizing price (the last box)?
Nvm i solved it...made dumb errors lol
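The attached worksheet isn't reproduced here, so as a generic sketch of the usual approach (hypothetical linear demand q = a − bp and constant unit cost c; set dπ/dp = 0):

```python
from sympy import symbols, diff, solve

p, a, b, c = symbols('p a b c', positive=True)
q = a - b * p                    # hypothetical linear demand curve
profit = (p - c) * q             # profit = (price - unit cost) * quantity

print(solve(diff(profit, p), p)) # [(a + b*c)/(2*b)], the profit-maximizing price
```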
|
{"url":"http://mathhelpforum.com/calculus/164325-maximizing-profit-price.html","timestamp":"2014-04-16T07:45:06Z","content_type":null,"content_length":"30903","record_id":"<urn:uuid:83ab8681-36c7-40da-a2b4-658ab9f384f3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exponents - Simplifying
Simplify: (2xy^3)^2 (xy)^3. (Note: the original post wrote "\uparrow" to mean "to the power of".)
Is it this: $(2xy^{3})^{2}(xy)^{3}?$ If so, what have you tried so far?
Yes it is that. I have tried distributing the 2 to 2x and y, then distributing the 3 to the x and the y, which got me this: (2x^2 y^5)(x^3 y^3) → (4x y^5)(x^3 y^3). Then I multiplied the first parenthesis by the second parenthesis, and I got this: 4x^6 y^15. I checked at the back of my textbook but the answer is wrong.
Exponents multiply in some situations, and add in others. You have, for example, $(x^{2})^{3}=x^{2}x^{2}x^{2}=x^{2\times 3}=x^{6}.$ So they multiply in that circumstance. If I just have $x^{2}x^{3}=x
^{5},$ then they add. Do not confuse those two situations! You have made an error along these lines in your calculations. Try doing it again, with these rules in mind.
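For reference (a worked check, not quoted from the thread): applying those two rules gives $(2xy^{3})^{2}(xy)^{3} = 2^{2}x^{2}y^{6}\cdot x^{3}y^{3} = 4x^{5}y^{9}.$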
|
{"url":"http://mathhelpforum.com/algebra/180119-exponents-simplifying-print.html","timestamp":"2014-04-20T18:50:28Z","content_type":null,"content_length":"5774","record_id":"<urn:uuid:a9ed4499-e456-464a-9156-fa9a67a8095d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Engineering an External Memory Minimum Spanning Tree Algorithm
- In: Proc. of ESA 2005. Volume 3669 of LNCS , 2005
"... for processing huge data sets that can fit only on hard disks. It supports parallel disks, overlapping between disk I/O and computation and it is the first I/O-efficient algorithm library that
supports the pipelining technique that can save more than half of the I/Os. STXXL has been applied both in ..."
Cited by 38 (5 self)
Add to MetaCart
for processing huge data sets that can fit only on hard disks. It supports parallel disks, overlapping between disk I/O and computation and it is the first I/O-efficient algorithm library that
supports the pipelining technique that can save more than half of the I/Os. STXXL has been applied both in academic and industrial environments for a range of problems including text processing,
graph algorithms, computational geometry, Gaussian elimination, visualization, and analysis of microscopic images, differential cryptographic analysis, etc. The performance of STXXL and its applications is evaluated on synthetic and real-world inputs. We present the design of the library, how its performance features are supported, and demonstrate how the library integrates with STL. KEY WORDS: very large data sets; software library; C++ standard template library; algorithm engineering.
, 2009
"... We present Filter-Kruskal – a simple modification of Kruskal’s algorithm that avoids sorting edges that are “obviously” not in the MST. For arbitrary graphs with random edge weights
Filter-Kruskal runs in time O (m + n lognlog m n, i.e. in linear time for not too sparse graphs. Experiments indicate ..."
Cited by 6 (0 self)
Add to MetaCart
We present Filter-Kruskal – a simple modification of Kruskal’s algorithm that avoids sorting edges that are “obviously” not in the MST. For arbitrary graphs with random edge weights, Filter-Kruskal runs in time O(m + n log n log(m/n)), i.e. in linear time for not too sparse graphs. Experiments indicate that the algorithm has very good practical performance over the entire range of edge densities. An equally simple parallelization seems to be the currently best practical algorithm on multicore machines.
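For orientation, here is a plain in-memory Kruskal with the union-find "filter" step made explicit (a sketch; Filter-Kruskal's actual refinement partitions edges around a pivot weight so that filtered edges are never sorted at all):

```python
def find(parent, x):
    # Iterative find with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal(n, edges):
    # edges: iterable of (weight, u, v) with vertices 0..n-1.
    parent = list(range(n))
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:              # the "filter": skip intra-component edges
            parent[ru] = rv
            mst.append((u, v, w))
            if len(mst) == n - 1: # early exit once the tree is complete
                break
    return mst
```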
"... We report on initial experimental results for a practical I/O-efficient Single-Source Shortest-Paths (SSSP) algorithm on general undirected sparse graphs where the ratio between the largest and
the smallest edge weight is reasonably bounded (for example integer weights in {1,...,2 32}) and the reali ..."
Cited by 2 (0 self)
Add to MetaCart
We report on initial experimental results for a practical I/O-efficient Single-Source Shortest-Paths (SSSP) algorithm on general undirected sparse graphs where the ratio between the largest and the smallest edge weight is reasonably bounded (for example integer weights in {1, ..., 2^32}) and the realistic assumption holds that main memory is big enough to keep one bit per vertex. While our implementation only guarantees average-case efficiency, i.e., assuming randomly chosen edge weights, it turns out that its performance on real-world instances with non-random edge weights is actually even better than on the respective inputs with random weights. Furthermore, compared to the currently best implementation for external-memory BFS [6], which in a sense constitutes a lower bound for SSSP, the running time of our approach always stayed within a factor of five; for the most difficult graph classes the difference was even less than a factor of two. We are not aware of any previous I/O-efficient implementation for the classic general SSSP in a (semi) external setting: in two recent projects [10, 23], Kumar/Schwabe-like SSSP approaches on graphs of at most 6 million vertices have been tested, forcing the authors to artificially restrict the main memory size, M, to rather unrealistic 4 to 16 MBytes in order not to leave the semi-external setting or produce huge running times for larger graphs: for random graphs of 2^20 vertices, the best previous approach needed over six hours. In contrast, for a similar ratio of input size vs. M, but on a 128 times larger and even sparser random graph, our approach was less than seven times slower, a relative gain of nearly 20. On a real-world 24 million node street graph, our implementation was over 40 times faster. Even larger gains of over 500 can be estimated for random graphs.
"... Despite extensive study over the last four decades and numerous applications, no I/O-efficient al-gorithm is known for the union-find problem. In this paper we present an I/O-efficient algorithm
for the batched (off-line) version of the union-find problem. Given any sequence of N mixed union andfin ..."
Add to MetaCart
Despite extensive study over the last four decades and numerous applications, no I/O-efficient algorithm is known for the union-find problem. In this paper we present an I/O-efficient algorithm for the batched (off-line) version of the union-find problem. Given any sequence of N mixed union and find operations, where each union operation joins two distinct sets, our algorithm uses O(sort(N)) = O((N/B) log_{M/B} (N/B)) I/Os, where M is the memory size and B is the disk block size. This bound is asymptotically optimal in the worst case. If there are union operations that join a set with itself, our algorithm uses O(sort(N) + MST(N)) I/Os, where MST(N) is the number of I/Os needed to compute the minimum spanning tree of a graph with N edges. We also describe a simple and practical O(sort(N) log(N/M))-I/O algorithm, which we have implemented. The main motivation for our study of the union-find problem arises from problems in terrain analysis. A terrain can be abstracted as a height function defined over R^2, and many problems that deal with such functions require a union-find data structure. With the emergence of modern mapping technologies, huge amounts of data are being generated that are too large to fit in memory, thus I/O-efficient algorithms are needed to process this data efficiently. In this paper, we study two terrain analysis problems that benefit from a union-find data structure: (i) computing topological persistence and (ii) constructing the contour tree. We give the first O(sort(N))-I/O algorithms for these two problems, assuming that the input terrain is represented as a triangular mesh with N vertices. Finally, we report some preliminary experimental results, showing that our algorithms give order-of-magnitude improvement over previous methods on large data sets that do not fit in memory.
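For contrast, the classic in-memory union-find with path compression and union by rank, i.e. the pointer-chasing structure whose batched, external-memory analogue the paper develops:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:   # union by rank
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```

Its random pointer chasing is exactly what makes it I/O-unfriendly: each find can touch a different disk block, which is why the batched reformulation matters for data that does not fit in memory.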
"... Inverted index data structures are the key to fast search engines. The predominant operation on inverted indices asks for intersecting two sorted lists of document IDs which might have vastly
varying lengths. We compare previous theoretical approaches, methods used in practice, and one new algorithm ..."
Add to MetaCart
Inverted index data structures are the key to fast search engines. The predominant operation on inverted indices asks for intersecting two sorted lists of document IDs which might have vastly varying
lengths. We compare previous theoretical approaches, methods used in practice, and one new algorithm which exploits that the intersection uses small integer keys. We also take different data
compression techniques into account. The new algorithm is very fast, simple, has good space efficiency, and is the only algorithm that performs well over the entire spectrum of relative list length
ratios.
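A common baseline for this operation is binary-search-based intersection of the shorter list into the longer one (a sketch, not the paper's algorithm):

```python
from bisect import bisect_left

def intersect(small, large):
    # Intersect two sorted doc-ID lists; efficient when
    # len(small) << len(large), since each probe is a binary search
    # restricted to the not-yet-scanned suffix of the longer list.
    out, lo = [], 0
    for x in small:
        lo = bisect_left(large, x, lo)
        if lo == len(large):
            break
        if large[lo] == x:
            out.append(x)
    return out

print(intersect([3, 7, 42], list(range(0, 100, 7))))  # [7, 42]
```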
"... Network flows have many real-life applications and leased-line installation for telephone network is one of them. In the leased line a major concern is to provide connection of telephone to all
the locations. It is required to install the leased line network that reaches all the locations at the min ..."
Add to MetaCart
Network flows have many real-life applications, and leased-line installation for a telephone network is one of them. In a leased-line network, a major concern is to provide telephone connections to all the locations, and the network must reach every location at minimum cost. This chapter models the situation with a network diagram in which each node represents a location and each edge represents a leased line; each edge has a number attached to it which represents the cost of installing that link. The aim of the paper is to determine the leased-line network connecting all the locations at minimum installation cost.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.85.7721","timestamp":"2014-04-17T15:38:09Z","content_type":null,"content_length":"29080","record_id":"<urn:uuid:59b4898b-693e-45e1-949d-e91d1025a71f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The angular velocity associated with the optical flow field arising from motion through a rigid environment
Results 1 - 10 of 13
- International Journal of Computer Vision , 1995
"... In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrarily to what has been done in the past and is still done currently, for example in stereo or
motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of th ..."
Cited by 233 (14 self)
Add to MetaCart
In this paper we analyze in some detail the geometry of a pair of cameras, i.e. a stereo rig. Contrarily to what has been done in the past and is still done currently, for example in stereo or motion
analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixels aspect ratio and focal lengths). This is important for two reasons.
First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here, captures all the relevant information that
is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the
Essential matrix introduced by Longuet-Higgins [40]. This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3
× 3 matrix.
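The 3 × 3 matrix the abstract refers to (the fundamental matrix) can be estimated from point correspondences with the classic eight-point method; here is an unnormalized sketch, not the paper's own procedure:

```python
import numpy as np

def fundamental_8pt(x1, x2):
    # x1, x2: (N, 2) arrays of corresponding image points, N >= 8.
    # Solves x2_i^T F x1_i = 0 in least squares, then enforces rank 2.
    A = np.array([
        [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
        for (u1, v1), (u2, v2) in zip(x1, x2)
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of A, reshaped
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                        # rank-2 constraint on F
    return U @ np.diag(s) @ Vt
```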
- Computer Vision, Graphics, and Image Processing , 1983
"... A method is proposed for determining the motion of a body relative to a fixed environment using the changing image seen by a camera attached to the body. The optical flow in the image plane is
the input, while the instantaneous rotation and translation of the body are the output. If optical flow cou ..."
Cited by 168 (7 self)
Add to MetaCart
A method is proposed for determining the motion of a body relative to a fixed environment using the changing image seen by a camera attached to the body. The optical flow in the image plane is the
input, while the instantaneous rotation and translation of the body are the output. If optical flow could be determined precisely, it would only have to be known at a few places to compute the
parameters of the motion. In practice, however, the measured optical flow will be somewhat inaccurate. It is therefore advantageous to consider methods which use as much of the available information
as possible. We employ a least-squares approach which minimizes some measure of the discrepancy between the measured flow and that predicted from the computed motion parameters. Several different
error norms are investigated. In general, our algorithm leads to a system of nonlinear equations from which the motion parameters may be computed numerically. However, in the special cases where the
motion of the camera is purely translational or purely rotational, use of the appropriate norm leads to a system of equations from which these parameters can be determined in closed form.
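For the purely rotational special case mentioned above, the least-squares recovery is linear; a generic sketch using the standard pinhole flow equations with focal length 1 (not the paper's exact code):

```python
import numpy as np

def egomotion_rotation(points, flows):
    # points: [(x, y), ...] image coordinates; flows: [(u, v), ...]
    # measured optical flow. For pure rotation omega = (wx, wy, wz):
    #   u =  x*y*wx - (1 + x^2)*wy + y*wz
    #   v =  (1 + y^2)*wx - x*y*wy - x*wz
    # so omega solves a linear least-squares problem.
    A_rows, b = [], []
    for (x, y), (u, v) in zip(points, flows):
        A_rows.append([x * y, -(1 + x * x), y])
        A_rows.append([1 + y * y, -x * y, -x])
        b += [u, v]
    omega, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b), rcond=None)
    return omega  # estimated (wx, wy, wz)
```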
- In CVPR , 1996
"... We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties
of those algorithms that require numerical search. Our simulation results reveal some interesting and sur ..."
Cited by 59 (0 self)
Add to MetaCart
We evaluated six algorithms for computing egomotion from image velocities. We established benchmarks for quantifying bias and sensitivity to noise, and for quantifying the convergence properties of
those algorithms that require numerical search. Our simulation results reveal some interesting and surprising results. First, it is often written in the literature that the egomotion problem is
difficult because translation (e.g., along the X-axis) and rotation (e.g., about the Y-axis) produce similar image velocities. We found, to the contrary, that the bias and sensitivity of our six
algorithms are totally invariant with respect to the axis of rotation. Second, it is also believed by some that fixating helps to make the egomotion problem easier. We found, to the contrary, that
fixating does not help when the noise is independent of the image velocities. Fixation does help if the noise is proportional to speed, but this is only for the trivial reason that the speeds are
slower under fixation.
- International Journal of Computer Vision , 1987
"... There has been much concern with ambiguity in the recovery of motion and structure from time-varyin g images. I show here that the class of surfaces leading to ambiguous motion fields is
extremel y restricted-only certain hyperholoids of one sheet (and some degenerate forms) qualify. Furthermore, th ..."
Cited by 31 (1 self)
Add to MetaCart
There has been much concern with ambiguity in the recovery of motion and structure from time-varying images. I show here that the class of surfaces leading to ambiguous motion fields is extremely restricted – only certain hyperboloids of one sheet (and some degenerate forms) qualify. Furthermore, the viewer must be on the surface for it to lead to a potentially ambiguous motion field. Thus the motion field over an appreciable image region almost always uniquely defines the instantaneous translational and rotational velocities, as well as the shape of the surface (up to a scale factor). Research for this article was conducted while the author was on leave at the Department of Electrical Engineering, University of Hawaii at Manoa, Honolulu, Hawaii 96822.
- In European Conference on Computer Vision , 1994
"... A condensed version of this paper will be presented at ECCV'94 ..."
, 2002
"... Using zoom lenses in a computer vision system affects many aspects of the processing in the path from image formation to structure recovery. This thesis is concerned with understanding and
addressing the particular issues which arise when wishing to control zoom in an active vision system — one able ..."
Cited by 7 (1 self)
Add to MetaCart
Using zoom lenses in a computer vision system affects many aspects of the processing in the path from image formation to structure recovery. This thesis is concerned with understanding and addressing
the particular issues which arise when wishing to control zoom in an active vision system — one able to fixate upon and track objects in the scene. The optical properties of zoom lenses interact with
the imaging process in a number of ways. The first part of this work begins by confirming that the pinhole camera model is nonetheless valid for the cameras to be used. Then, using geometric
arguments, it is shown how zoom-varying lens distortion adversely affects camera auto-calibration techniques which rely on purely rotational motion. Whilst pin-cushion distortion is tolerable, it is
shown that barrelling distortion causes algorithm failure. The breakdown point is predicted, then verified using synthetic experiments. Suggestions for automatic recovery of the distortion parameters
are given. The lowest level of the visual processing involves detecting and matching image features before robust segmentation and motion estimation. Achieving robustness comes at high computational
cost, and the second part of this work addresses some of the theoretical and computational issues in Torr and Zisserman’s
- Laboratory, Massachusetts Institute of Technology , 1987
"... In this paper we study the conditions under which a perspective motion field can have multiple interpretations, and present analytical expressions for the relationship among these
interpretations. It is shown that, in most cases, the ambiguity in the interpretation of a motion field can be resolved ..."
Cited by 2 (0 self)
Add to MetaCart
In this paper we study the conditions under which a perspective motion field can have multiple interpretations, and present analytical expressions for the relationship among these interpretations. It
is shown that, in most cases, the ambiguity in the interpretation of a motion field can be resolved by imposing the physical constraint that depth is positive over the image region onto which the
surface projects.
"... Abstract — Interactive perception augments the process of perception with physical interactions. By adding interactions into the perceptual process, manipulating the environment becomes part of
the effort to learn task-relevant information, leading to more reliable task execution. Interactions inclu ..."
Add to MetaCart
Abstract — Interactive perception augments the process of perception with physical interactions. By adding interactions into the perceptual process, manipulating the environment becomes part of the
effort to learn task-relevant information, leading to more reliable task execution. Interactions include obstruction removal, object repositioning, and object manipulation. In this paper, we show how
to extract kinematic properties from novel objects. Many objects in human environments, such as doors, drawers, and hand tools, contain inherent kinematic degrees of freedom. Knowledge of these
degrees of freedom is required to use the objects in their intended manner. We demonstrate how a simple algorithm enables the construction of kinematic models for such objects, resulting in knowledge
necessary for the correct operation of those objects. The simplicity of the framework and its effectiveness, demonstrated in our experimental results, indicate that interactive perception is a
promising perceptual paradigm for autonomous mobile manipulation.
, 1988
"... A new method is described for interpreting image flow (or optical flow) in a small field of view produced by a rigidly moving curved surface. The equations relating the shape and motion of the
surface to the image flow are formulated. These equations are solved to obtain explicit analytic expression ..."
Add to MetaCart
A new method is described for interpreting image flow (or optical flow) in a small field of view produced by a rigidly moving curved surface. The equations relating the shape and motion of the
surface to the image flow are formulated. These equations are solved to obtain explicit analytic expressions for the motion, orientation and curvatures of the surface in terms of the spatial
derivatives (up to second order) of the image flow. We state and prove some new theoretical results concerning the existence of multiple interpretations. Numerical examples are given for some
interesting cases where multiple solutions exist. The solution method described here is simpler and more direct than previous methods. The method and the representation described here are part of a
unified approach for the interpretation of image motion in a variety of cases (e.g.: planar/curved surfaces, constant/accelerated motion, etc.). Thus the representation and the method of analysis
adopted here have some advanta...
"... A new method is described for interpreting image flow (or optical flow) in a small field of view produced by a rigidly moving curved surface. The equations relating the shape and motion of the
surface to the image flow are formulated. These equations are solved to obtain explicit analytic expression ..."
Add to MetaCart
A new method is described for interpreting image flow (or optical flow) in a small field of view produced by a rigidly moving curved surface. The equations relating the shape and motion of the
surface to the image flow are formulated. These equations are solved to obtain explicit analytic expressions for the motion, orientation and curvatures of the surface in terms of the spatial
derivatives (up to second order) of the image flow. We state and prove some new theoretical results concerning the existence of multiple interpretations. Numerical examples are given for some
interesting cases where multiple solutions exist. The solution method described here is simpler and more direct than previous methods. The method and the representation described here are part of a
unified approach for the interpretation of image motion in a variety of cases (e.g.: planar/curved surfaces, constant/accelerated motion, etc.). Thus the representation and the method of analysis
adopted here have some advantages in comparison with previous approaches.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1750488","timestamp":"2014-04-17T07:36:47Z","content_type":null,"content_length":"38293","record_id":"<urn:uuid:24c46bce-b411-4e83-aa3e-5e4db2ced6a5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Moved to homework area, sorry. -kiyoshi7
Study Triangle Similarity and study Supplementary Angles, from the standard high school Geometry course. The triangle problem with the single square relies on supplementary angles where the upper
right and left square vertices meet two sides of the larger triangle. You can conclude that all three of the smaller triangles are similar, and so the ratios of their corresponding sides are equal. A
proportion can be arranged for the two smaller left & right-hand triangles; and note too that all three of the smaller triangles are RIGHT triangles.
|
{"url":"http://www.physicsforums.com/showthread.php?s=4b5fd51f23c0e3810a99c1767e66a3f3&p=4486439","timestamp":"2014-04-19T12:40:13Z","content_type":null,"content_length":"26687","record_id":"<urn:uuid:c8b4ee2d-a3ae-4812-ac48-d848d816ef56>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mr. Bigler's Moodle
Physics I is a course designed for high school students in grades 11 & 12. Topics studied include motion, forces, momentum, energy, heat, electricity & magnetism, waves & optics, fluid mechanics, and
atomic & particle physics. The course requires that students be comfortable describing and solving real-world problems using algebra and basic trigonometry. The course also requires vector math, but
this topic is taught at the beginning of the course. The course is supported by an interactive, inquiry-based laboratory environment where students gain hands-on experience with the concepts being
studied. The content of the course exceeds the requirements of the Massachusetts Curriculum Frameworks for high school physics and is recommended for students who are planning to take AP
Physics and/or the SAT subject test in physics.
|
{"url":"http://www.mrbigler.com/moodle/","timestamp":"2014-04-19T04:19:57Z","content_type":null,"content_length":"34773","record_id":"<urn:uuid:768cc31c-21c2-4027-a5f4-0b9b2347347e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thinking Mathematically by Robert Blitzer
Chapter 11: Counting Methods and Probability Theory
Section 1: The Fundamental Counting Principle
The Fundamental Counting Principle
If you can choose one item from a group of
M items and a second item from a group of
N items, then the total number of two-item
choices is M · N.
You MULTIPLY the numbers!
The Fundamental Counting Principle
At breakfast, you can have eggs, pancakes or cereal.
You get a free juice with your meal: either OJ or apple
juice. How many different breakfasts are possible?
eggs → {OJ, apple}; pancakes → {OJ, apple}; cereal → {OJ, apple}
3 · 2 = 6 different breakfasts
Example: Applying the Fundamental
Counting Principle
• The Greasy Spoon Restaurant offers 6
appetizers and 14 main courses. How many
different meals can be created by selecting
one appetizer and one main course?
• Using the fundamental counting principle,
there are 6 · 14 = 84 different ways a
person can order a two-course meal.
Example: Applying the Fundamental
Counting Principle
• This is the semester that you decide to take your
required psychology and social science courses.
• Because you decide to register early, there are 15
sections of psychology from which you can
choose. Furthermore, there are 9 sections of social
science that are available at times that do not
conflict with those for psychology. In how many
ways can you create two-course schedules that
satisfy the psychology-social science requirement?
The number of ways that you can satisfy the
requirement is found by multiplying the
number of choices for each course.
You can choose your psychology course
from 15 sections and your social science
course from 9 sections. For both courses
you have:
15 · 9, or 135 choices.
The Fundamental Counting Principle
The number of ways a series of successive
things can occur is found by multiplying the
number of ways in which each thing can occur.
Example: Options in Planning a
Course Schedule
Next semester you are planning to take three
courses - math, English, and humanities. Based
on time blocks and highly recommended
professors, there are 8 sections of math, 5 of
English, and 4 of humanities that you find
suitable. Assuming no scheduling conflicts, there are
8 · 5 · 4 = 160 different three-course schedules.
Car manufacturers are now experimenting with
lightweight three-wheeled cars, designed for a
driver and one passenger, and considered ideal for
city driving. Suppose you could order such a car
with a choice of 9 possible colors, with or without
air-conditioning, with or without a removable
roof, and with or without an onboard computer. In
how many ways can this car be ordered in terms of these options?
This situation involves making choices with
four groups of items.
color · air-conditioning · removable roof · computer
9 · 2 · 2 · 2 = 72
Thus the car can be ordered in 72 different ways.
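A quick way to sanity-check this count by brute force (a sketch; the option lists mirror the slide):

```python
from itertools import product

colors = range(9)              # 9 possible colors
binary = [False, True]         # with / without

# air-conditioning, removable roof, onboard computer are each yes/no
options = list(product(colors, binary, binary, binary))
print(len(options))            # 72
```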
Example: A Multiple Choice Test
You are taking a multiple-choice test that
has ten questions. Each of the questions has
four choices, with one correct choice per
question. If you select one of these options
per question and leave nothing blank, in
how many ways can you answer the questions?
We DON’T blindly multiply the first two numbers
we see. The answer is not 10 · 4 = 40.
We use the Fundamental Counting Principle to
determine the number of ways you can answer the
test. Multiply the number of choices, 4, for each of
the ten questions: 4 · 4 · 4 · 4 · 4 · 4 · 4 · 4 · 4 · 4 = 4^10 = 1,048,576.
Example: Telephone Numbers in
the United States
Telephone numbers in the United States
begin with three-digit area codes followed
by seven-digit local telephone numbers.
Area codes and local telephone numbers
cannot begin with 0 or 1. How many
different telephone numbers are possible?
We use the Fundamental Counting Principle
to determine the number of different
telephone numbers that are possible:
8 · 10 · 10 · 8 · 10 · 10 · 10 · 10 · 10 · 10 = 6,400,000,000.
Section 2: Permutations
• A permutation is an arrangement of items in which:
– No item is used more than once.
– The order of arrangement makes a difference.
Example: Counting
Based on their long-standing contribution to
rock music, you decide that the Rolling
Stones should be the last group to perform
at the four-group Offspring, Pink Floyd,
Sublime, Rolling Stones concert. Given
this decision, in how many ways can you
put together the concert?
We use the Fundamental Counting Principle to
find the number of ways you can put together the
concert. Multiply the choices:
3 choices · 2 choices · 1 choice · 1 choice
(first slot: Offspring, Pink Floyd, or Sublime; second: whichever of the two remaining; third: the only one remaining; last: the Rolling Stones)
3 · 2 · 1 · 1 = 6
Thus, there are six different ways to arrange the concert if the Rolling Stones are the final group to perform.
Example: Counting
You need to arrange seven of your favorite
books along a small shelf. How many
different ways can you arrange the books,
assuming that the order of the books makes
a difference to you?
You may choose any of the seven books for the
first position on the shelf. This leaves six choices
for second position. After the first two positions
are filled, there are five books to choose from for
the third position, four choices left for the fourth
position, three choices left for the fifth position,
then two choices for the sixth position, and only
one choice left for the last position.
7 · 6 · 5 · 4 · 3 · 2 · 1 = 5040
There are 5040 different possible permutations.
Factorial Notation
If n is a positive integer, the notation n! is
the product of all positive integers from n
down through 1.
n! = n(n-1)(n-2)…(3)(2)(1)
Note that 0!, by definition, is 1.
Permutations of n Things Taken r at a Time
The number of permutations possible if r
items are taken from n items:
nPr = n! / (n – r)! = n(n – 1)(n – 2)(n – 3) . . . (n – r + 1)
since n! = n(n – 1)(n – 2)(n – 3) . . . (n – r + 1)(n – r)(n – r – 1) . . . (2)(1)
and (n – r)! = (n – r)(n – r – 1) . . . (2)(1)
Permutations of n Things Taken r at a Time
The number of permutations possible if
r items are taken from n items:
nPr: starting at n, write down r factors,
each one smaller than the last:
nPr = n(n – 1)(n – 2)(n – 3) . . . (n – r + 1)
(the factors are counted 1, 2, 3, 4, . . . , r)
A math club has eight members, and it must choose 5
officers --- president, vice-president, secretary, treasurer
and student government representative. Assuming that
each office is to be held by one person and no person can
hold more than one office, in how many ways can those
five positions be filled?
We are arranging 5 out of 8 people into the five distinct
offices. Any of the eight can be president. Once selected,
any of the remaining seven can be vice-president.
Clearly this is an arrangement, or permutation, problem.
8P5 = 8!/(8-5)! = 8!/3! = 8 · 7 · 6 · 5 · 4 = 6720
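Python's math module can verify these permutation counts directly; math.perm(n, r) computes n!/(n – r)! (Python 3.8+):

import math

print(math.perm(8, 5))     # five officers from eight members: 6720
print(math.factorial(7))   # seven books on a shelf: 5040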
Permutations with duplicates.
• In how many ways can you arrange the
letters of the word minty?
• That's 5 letters that have to be arranged, so
the answer is 5P5 = 5! = 120
• But how many ways can you arrange the
letters of the word messes?
• You would think 6!, but you'd be wrong!
Here are six permutations of messes, each just trading the
three s's among themselves:
m e s s e s (1)
m e s s e s (2)
m e s s e s (3)
m e s s e s (4)
m e s s e s (5)
m e s s e s (6)
Well, all 3! arrangements of the s's look the same to me!!!!
This is true for any arrangement of the six letters in messes,
so every six permutations should count only once. The same
applies to the 2! arrangements of the e's.
Permutations with duplicates.
• How many ways can you arrange the letters
of the word messes?
• The problem is that there are three s's and 2
e's. It doesn't matter in which order the s's
are placed, because they all look the same!
• This is called permutations with duplicates.
Permutations with duplicates.
• Since there are 3! = 6 ways to arrange the
s's, there are 6 permutations that should
count as one. Same with the e's. There are
2! = 2 permutations of them that should
count as 1.
• So we divide 6! by 3! and also by 2!
• There are 6!/3!2! = 720/12 = 60 ways to
arrange the word messes.
Permutations with duplicates.
• In general, if we want to arrange n items, of which
m1 are identical, another m2 are identical, and so on,
the number of permutations is
n!/(m1! · m2! · . . .)
A signal can be formed by running different
colored flags up a pole, one above the other.
Find the number of different signals
consisting of 6 flags that can be made if 3
of the flags are white, 2 are red, and 1 is blue.
6!/(3! · 2! · 1!) = 720/((6)(2)(1)) = 720/12 = 60
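A small sketch of the duplicate-permutation formula in Python, checked against both examples above:

from math import factorial, prod

def arrangements(*counts):
    # n! divided by the factorial of each repeat count
    n = sum(counts)
    return factorial(n) // prod(factorial(c) for c in counts)

print(arrangements(1, 2, 3))   # "messes": one m, two e's, three s's -> 60
print(arrangements(3, 2, 1))   # 3 white, 2 red, 1 blue flag -> 60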
Section 3: Combinations
Combination: definition
A combination of items occurs when:
• The items are selected from the same group.
• No item is used more than once.
• The order of the items makes no difference.
How to know when the problem is a
permutation problem or a
combination problem
• Permutation:
– arrangement, arrange
– order matters
• Combination
– selection, select
– order does not matter.
Example: Distinguishing between
Permutations and Combinations
• For each of the following problems, explain if the
problem is one involving permutations or combinations.
• Six students are running for student government
president, vice-president, and treasurer. The
student with the greatest number of votes becomes
the president, the second biggest vote-getter
becomes vice-president, and the student who gets
the third largest number of votes will be student
government treasurer. How many different
outcomes are possible for these three positions?
• Students are choosing three student
government officers from six candidates.
The order in which the officers are chosen
makes a difference because each of the
offices (president, vice-president, treasurer)
is different. Order matters. This is a
problem involving permutations.
Example: Distinguishing between
Permutations and Combinations
• Six people are on the volunteer board of
supervisors for your neighborhood park. A
three-person committee is needed to study
the possibility of expanding the park. How
many different committees could be formed
from the six people on the board of supervisors?
• A three-person committee is to be formed
from the six-person board of supervisors.
The order in which the three people are
selected does not matter because they are
not filling different roles on the committee.
Because order makes no difference, this is a
problem involving combinations.
Example: Distinguishing between
Permutations and Combinations
• Baskin-Robbins offers 31 different flavors
of ice cream. One of their items is a bowl
consisting of three scoops of ice cream,
each a different flavor. How many such
bowls are possible?
• A three-scoop bowl of three different flavors is to
be formed from Baskin-Robbin’s 31 flavors. The
order in which the three scoops of ice cream are
put into the bowl is irrelevant. A bowl with
chocolate, vanilla, and strawberry is exactly the
same as a bowl with vanilla, strawberry, and
chocolate. Different orderings do not change
things, and so this problem is combinations.
Combinations of n Things Taken r at a Time
(n over r) = nCr = n!/(r!(n – r)!)
Note that the sum of the two numbers on the bottom
(denominator) should add up to the number on the
top (numerator): r + (n – r) = n.
Computing Combinations
• Suppose we need to compute 9C3:
9C3 = 9!/(3!(9 – 3)!) = 9!/(3! 6!)
• r = 3, n – r = 6
• The denominator is the factorial of the smaller of
the two: 3!
Computing Combinations
• Suppose we need to compute 9C3:
9C3 = 9!/(3!(9 – 3)!) = 9!/(3! 6!)
• r = 3, n – r = 6
• In the numerator write (the product of) all the
numbers from 9 down to n – r + 1 = 6 + 1 = 7.
• There should be the same number of terms in
the numerator and denominator: 9 · 8 · 7 over 3 · 2 · 1.
Computing Combinations
• If called upon, there's a fairly easy way to
compute combinations.
– Given nCr , decide which is bigger: r or n – r.
– Take the smaller of the two and write out the
factorial (of the number you picked) as a product.
– Draw a line over the expression you just wrote.
Computing Combinations
• If called upon, there's a fairly easy way to
compute combinations.
– Now, put n directly above the line, directly above
the leftmost number below, and continue with
n – 1, n – 2, . . . above the rest.
– Eliminate common factors in the numerator and
denominator.
– Do the remaining multiplications.
– You're done!
Computing Combinations
• Suppose we need to compute 9C3.
– n – r = 6, and the smaller of 3 and 6 is 3.
9C3 = (9 · 8 · 7)/(3 · 2 · 1) = 3 · 4 · 7 = 84
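In Python, math.comb(n, r) evaluates n!/(r!(n – r)!) directly (Python 3.8+), so the shortcut above is easy to check:

import math

print(math.comb(9, 3))    # 84, matching the hand computation
print(math.comb(6, 3))    # committees of 3 from a 6-person board: 20
print(math.comb(31, 3))   # three-flavor bowls from 31 flavors: 4495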
Finding Probabilities from Odds
• If the odds in favor of an event E are a to b,
then the probability of the event is given by
P(E) = a/(a + b)
Finding Probabilities from Odds
• If the odds against an event E are a to b,
then the probability of the event is given by
P(E) = b/(a + b)
Finding Probabilities from Odds
• Example:
– Suppose Bluebell is listed as 7:1 in the third
race at the Meadowlands. (a:b against)
– The odds listed on a horse are odds against that
horse winning; that is, they are the odds of it losing.
– The probability of him losing is a/(a + b) = 7/(7 + 1) = 7/8.
– The probability of him winning is b/(a + b) = 1/(7 + 1) = 1/8.
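A sketch of the odds-to-probability conversion in Python, using exact fractions:

from fractions import Fraction

def prob_from_odds_against(a, b):
    # odds against of a:b -> probability of the event is b/(a + b)
    return Fraction(b, a + b)

print(prob_from_odds_against(7, 1))       # Bluebell wins: 1/8
print(1 - prob_from_odds_against(7, 1))   # Bluebell loses: 7/8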
Section 7:
Events Involving And;
Conditional Probability
Independent Events
• Two events are independent events if the
occurrence of either of them has no effect
on the probability of the other.
• For example, if you roll a pair of dice two
times, then the two events are independent.
What gets rolled on the second throw is not
affected by what happened on the first throw.
And Probabilities with
Independent Events
• If A and B are independent events, then
P(A and B) = P(A) P(B)
• The example of choosing from four pairs of
socks and then choosing from three pairs of
shoes (= 12 possible combinations) is an
example of two independent events.
Dependent Events
• Two events are dependent events if the occurrence
of one of them does have an effect on the
probability of the other.
• Selecting two Kings from a deck of cards by
selecting one card, putting it aside, and then
selecting a second card, is an example of two
dependent events.
• The probability of picking a King on the second
selection changes because the deck now contains
only 51, not 52, cards.
And Probabilities with
Dependent Events
• If A and B are dependent events, then
• P(A and B) =
P(A) P(B given that A has occurred)
• written as
P(A) P(B|A)
Conditional Probability
• The conditional probability of B, given A,
written P(B|A), is the probability that event
B will occur computed on the assumption
that event A has occurred.
• Notice that when the two events are
independent, P(B|A) = P(B).
Conditional Probability
• Example:
– Suppose you are picking two cards from a deck
of cards. What is the probability you will pick a
King and then another face card?
– The probability of a King is 4/52 = 1/13.
– Once the King is selected, there are 11 face cards
left in a deck holding 51 cards.
– P(A) = 1/13.  P(B|A) = 11/51.
– The probability in question is (1/13) · (11/51) = 11/663.
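The same card computation in Python with exact fractions:

from fractions import Fraction

p_king = Fraction(4, 52)              # 1/13
p_face_given_king = Fraction(11, 51)  # 11 face cards left among 51 cards
print(p_king * p_face_given_king)     # 11/663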
Applying Conditional
Probability to Real-World Data
P(B|A) =
observed number of times B and A occur together
observed number of times A occurs
P(not E) = 1 – P(E)
P(A or B) = P(A) + P(B) – P(A and B); if A and B are mutually exclusive: P(A) + P(B)
P(A and B) = P(A) · P(B|A); if A and B are independent: P(A) · P(B)
Odds in favor a:b means P(E)/P(not E) = a/b; the probability is a/(a + b)
Odds against a:b means P(not E)/P(E) = a/b; the probability is b/(a + b)
Section 8:
Expected Value
Expected Value
• Expected value is a mathematical way to use
probabilities to determine what to expect in various
situations over the long run.
• For example, we can use expected value to find the
expected outcome of the roll of a fair die.
• The outcomes are 1, 2, 3, 4, 5, and 6, each with a
probability of 1/6. The expected value, E, is computed
by multiplying each outcome by its probability and
then adding these products.
• E = 1 · (1/6) + 2 · (1/6) + 3 · (1/6) + 4 · (1/6) + 5 · (1/6) + 6 · (1/6)
= (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 3.5
Expected Value
E = 1 · (1/6) + 2 · (1/6) + 3 · (1/6) + 4 · (1/6) + 5 · (1/6) + 6 · (1/6)
= (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 3.5
Of course, you can't roll a 3½ . But the
average value of a roll of a die over a long
period of time will be around 3½.
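A minimal expected-value helper in Python, checked on the fair die:

from fractions import Fraction

def expected_value(outcomes_and_probs):
    # sum of (outcome * probability) over all outcomes
    return sum(x * p for x, p in outcomes_and_probs)

die = [(k, Fraction(1, 6)) for k in range(1, 7)]
print(expected_value(die))   # 7/2 = 3.5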
Example Expected Value and Roulette
A roulette wheel has 38 different numbers.
• One way to bet in roulette is to place $1
on a single number.
• If the ball lands on that number, you are
awarded $35 and get to keep the $1 that
you paid to play the game.
• If the ball lands on any one of the other
37 slots, you are awarded nothing and
the $1 you bet is collected.
Example Expected Value and Roulette
• 38 different numbers.
• If the ball lands on your number,
you win awarded $35 and you keep
the $1 you paid to play the game.
• If the ball lands on any of the other
37 slots, you are awarded nothing
and you lose the $1 you bet.
• Find the expected value for playing roulette if
you bet $1 on number 11 every time. Describe
what this means.
Outcome   Gain/Loss   Probability
11        +$35        1/38
Not 11    –$1         37/38
E = $35 · (1/38) + (–$1) · (37/38)
= $35/38 – $37/38 = –$2/38 ≈ –$0.05
This means that in the long run, a player can
expect to lose about 5 cents for each game played.
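The roulette expectation, computed exactly:

from fractions import Fraction

ev = 35 * Fraction(1, 38) + (-1) * Fraction(37, 38)
print(ev, float(ev))   # -1/19, about -$0.053 lost per game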
Expected Value
• A real estate agent is selling a house. She gets a 4-
month listing. There are 3 possibilities:
– she sells the house: (30% chance) earns $25,000
– another agent sells
the house: (20% chance) earns $10,000
– house not sold: (50% chance) loses $5,000
• What is the expected profit (or loss)?
• If the expected profit is at least $6000 she would
consider it a good deal.
Expected Value
Outcome        Probability   Profit or loss   Product
she sells      0.3           +$25,000         +$7,500
other sells    0.2           +$10,000         +$2,000
doesn't sell   0.5           –$5,000          –$2,500
The realtor can expect to make $7,000.
Make the deal!!!!
|
{"url":"http://www.docstoc.com/docs/108207408/Thinking-Mathematically-by-Robert-Blitzer","timestamp":"2014-04-23T08:25:52Z","content_type":null,"content_length":"74631","record_id":"<urn:uuid:d8f5f470-4afd-43dc-934c-8c9f839eebf0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Command >>> CAUCHY
Manual Page for Command >>> CAUCHY
>>> CAUCHY
Parent Command
>> OPTION
This command selects an objective function that corresponds to a Cauchy or Lorentzian distribution, i.e., the probability density function of the residuals r reads:
This distribution exhibits more extensive tails compared to the normal distribution, and leads therefore to a more robust estimation if outliers are present. The objective function to be minimized is
given by the following equation:
This objective function can be minimized using the standard Levenberg-Marquardt algorithm, which is designed for a quadratic objective function. The objective function can be reasonably well
approximated by a quadratic function, so the Levenberg-Marquardt algorithm is usually quite efficient.
>> OPTION
>>> assume measurement errors follow a CAUCHY distribution
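For readers outside iTOUGH2, the same robust-estimation idea is available in general-purpose tools; for example, SciPy's least_squares offers a Cauchy loss, rho(z) = ln(1 + z). The sketch below is illustrative only (the straight-line model and data are made up) and is not iTOUGH2 code:

import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, y):
    a, b = p
    return y - (a * t + b)          # residuals of a straight-line model

t = np.linspace(0.0, 10.0, 50)
y = 2.0 * t + 1.0 + np.random.normal(0.0, 0.1, t.size)
y[5] += 30.0                        # inject one gross outlier

fit = least_squares(residuals, x0=[1.0, 0.0], args=(t, y), loss='cauchy')
print(fit.x)                        # close to (2, 1) despite the outlier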
See Also
>>> ANDREW | >>> L1-ESTIMATOR | >>> LEAST-SQUARES | >>> QUADRATIC-LINEAR
Back to Command Index
Page updated: July 29, 1997
|
{"url":"http://esd.lbl.gov/iTOUGH2/Command/CAUCHY_3.HTML","timestamp":"2014-04-18T19:22:29Z","content_type":null,"content_length":"2713","record_id":"<urn:uuid:70375365-b5dd-471e-a424-c288fb12cddf>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Generating plane tilings with diagrams
I’ve finally set up a diagrams-contrib package to serve as a home for user contributions to the diagrams project—generation of specialized diagrams, fun or instructive examples, half-baked ideas,
stuff which is not sufficiently polished or general to go in the diagrams-lib package but is nonetheless worth sharing.
As the first “contribution” I put some code I wrote for fun that generates tilings of the Euclidean plane by regular polygons.
So how does it work? I’m sure there are more clever ways if you understand the mathematics better; but essentially it does a depth-first search along the edge graph, stopping when it reaches some
user-defined limit, and drawing polygons and edges along the way. This sounds quite simple on the face of it; but there are two nontrivial problems to be worked out:
1. How can we tell whether we’ve visited a given vertex before?
2. How do we represent a tiling in a way that lets us easily traverse its edge graph?
The first question is really a question of representation: how do we represent vertices in such a way that we can decide their equality? Representing them with a pair of floating point coordinates
does not work: taking two different paths to a vertex will surely result in slightly different coordinates due to floating point error. Another idea is to represent vertices by the path taken to
reach them, but now we have to deal with the thorny problem of deciding when two paths are equivalent.
But it turns out we can do something a bit more clever. The only regular polygons that can appear in plane tilings are triangles, squares, hexagons, octagons, and dodecagons. If you remember your
high school trigonometry, these all have “special” angles whose sines and cosines can be represented exactly using square roots. It suffices to work in $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$, that is, the
ring of rational numbers adjoined with $\sqrt{2}$ and $\sqrt{3}$. Put simply, we use quadruples of rational numbers $(a,b,c,d)$ which represent the real number $a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}$
. Now we can represent vertices exactly, so remembering which we’ve already visited is easy.
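The post's code is Haskell, but the exact-arithmetic idea is easy to sketch in any language; here it is in Python, representing a + b*sqrt(2) + c*sqrt(3) + d*sqrt(6) as a 4-tuple of exact fractions, so vertex coordinates can be compared exactly with no floating-point drift:

from fractions import Fraction

def mul(u, v):
    # (a, b, c, d) stands for a + b*sqrt2 + c*sqrt3 + d*sqrt6; the products
    # close up using sqrt2*sqrt3 = sqrt6, sqrt2*sqrt6 = 2*sqrt3, and so on
    a1, b1, c1, d1 = u
    a2, b2, c2, d2 = v
    return (a1*a2 + 2*b1*b2 + 3*c1*c2 + 6*d1*d2,
            a1*b2 + b1*a2 + 3*(c1*d2 + d1*c2),
            a1*c2 + c1*a2 + 2*(b1*d2 + d1*b2),
            a1*d2 + d1*a2 + b1*c2 + c1*b2)

half = Fraction(1, 2)
sin60 = (0, 0, half, 0)        # sin 60 deg = sqrt(3)/2, exactly
print(mul(sin60, sin60))       # (3/4, 0, 0, 0): equality tests are exact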
The other question is how to represent tilings. I chose to use this “zipper-like” representation:
data Tiling = Tiling [TilingPoly] (Int -> Tiling)
Intuitively, a Tiling tells us what polygons surround the current vertex (ordered counterclockwise from the edge along which we entered the vertex), as well as what configurations we can reach by
following edges out of the current vertex. Thanks to laziness and knot-tying, we can easily define infinite tilings, such as
t4 :: Tiling
t4 = Tiling (replicate 4 Square) (const t4)
This is a particularly simple example, but the principle is the same. You can look at the source for more complex examples.
Of course, this doesn’t really show off the capabilities of diagrams much (you can draw regular polygons with any old graphics library), but it sure was fun!
|
{"url":"http://byorgey.wordpress.com/2011/11/12/generating-plane-tilings-with-diagrams/","timestamp":"2014-04-17T18:41:50Z","content_type":null,"content_length":"71444","record_id":"<urn:uuid:637ab43b-d0f2-468c-a3c5-4f7534bccad0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Faith and Quantum Theory by Stephen M. Barr | First Things
Quantum theory is unsettling. Nobel laureate Richard Feynman admitted that it appears peculiar and mysterious to everyone-both to the novice and to the experienced physicist. Niels Bohr, one of its
founders, told a young colleague, "If it does not boggle your mind, you understand nothing." Physicists have been quarreling over its interpretation since the legendary arguments between Bohr and
Einstein in the 1920s. So have philosophers, who agree that it has profound implications but cannot agree on what they are. Even the man on the street has heard strange rumors about the Heisenberg
Uncertainty Principle, of reality changing when we try to observe it, and of paradoxes where cats are neither alive nor dead till someone looks at them.
Quantum strangeness, as it is sometimes called, has been a boon to New Age quackery. Books such as The Tao of Physics (1975) and The Dancing Wu Li Masters (1979) popularized the idea that quantum
theory has something to do with eastern mysticism. These books seem almost sober today when we hear of quantum telepathy, quantum ESP, and, more recently, quantum healing, a fad spawned by
Deepak Chopra’s 1990 book of that name. There is a flood of such quantum flapdoodle (as the physicist Murray Gell-Mann called it). What, if anything, does it all mean? Amid all the flapdoodle, what
are the serious philosophical ideas? And what of the many authors who claim that quantum theory has implications favorable to religious belief? Are they on to something, or have they been taken in by
fuzzy thinking and New Age nonsense?
It all began with a puzzle called wave-particle duality. This puzzle first appeared in the study of light. Light was understood by the end of the nineteenth century to consist of waves in the
electromagnetic field that fills all of space. The idea of fields goes back to Michael Faraday, who thought of magnetic and electrical forces as being caused by invisible lines of force stretching
between objects. He envisioned space as being permeated by such force fields. In 1864, James Clerk Maxwell wrote down the complete set of equations that govern electromagnetic fields and showed that
waves propagate in them, just as sound waves propagate in air.
This understanding of light is correct, but it turned out there was more to the story. Strange things began to turn up. In 1900, Max Planck found that a certain theoretical conundrum could be
resolved only by assuming that the energy in light waves comes in discrete, indivisible chunks, which he called quanta. In other words, light acts in some ways like it is made up of little particles.
Planck’s idea seemed absurd, for a wave is something spread out and continuous, while a particle is something pointlike and discrete. How can something be both one and the other?
And yet, in 1905, Einstein found that Planck’s idea was needed to explain another puzzling behavior of light, called the photoelectric effect. These developments led Louis de Broglie to make an
inspired guess: If waves (such as light) can act like particles, then perhaps particles (such as electrons) can act like waves. And, indeed, this proved to be the case. It took a generation of
brilliant physicists (including Bohr, Heisenberg, Schrödinger, Born, Dirac, and Pauli) to develop a mathematically consistent and coherent theory that described and made some sense out of
wave-particle duality. Their quantum theory has been spectacularly successful. It has been applied to a vast range of phenomena, and hundreds of thousands of its predictions about all sorts of
physical systems have been confirmed with astonishing accuracy.
Great theoretical advances in physics typically result in profound unifications of our understanding of nature. Newton’s theories gave a unified account of celestial and terrestrial phenomena;
Maxwell’s equations unified electricity, magnetism, and optics; and the theory of relativity unified space and time. Among the many beautiful things quantum theory has given us is a unification of
particles and forces. Faraday saw that forces arise from fields, and Maxwell saw that fields give rise to waves. Thus, when quantum theory showed that waves are particles (and particles waves), a
deep unity of nature came into view: The forces by which matter interacts and the particles of which it is composed are both manifestations of a single kind of thing- quantum fields.
The puzzle of how the same thing can be both a wave and a particle remains, however. Feynman called it "the only real mystery" in science. And he noted that, while we can tell how it works, we
cannot make the mystery go away by "explaining" how it works. Quantum theory has a precise mathematical formalism, one on which everyone agrees and that tells how to calculate right answers to the
questions physicists ask. But what really is going on remains obscure-which is why quantum theory has engendered unending debates over the nature of physical reality for the past eighty years.
The problem is this: At first glance, wave-particle duality is not only mysterious but inconsistent in a blatant way. The inconsistency can be understood with a thought experiment. Imagine a burst of
light from which a light wave ripples out through an ever-widening sphere in space. As the wave travels, it gets more attenuated, since the energy in it is getting spread over a wider and wider area.
(That is why the farther you are from a light bulb, the fainter it appears.) Now, suppose a light-collecting device is set up, a box with a shutter-essentially, a camera. The farther away it is
placed from the light burst, the less light it will collect. Suppose the light-collecting box is set up at a distance where it will collect exactly a thousandth of the light emitted in the burst. The
inconsistency arises if the original burst contained, say, fifty particles of light. For then it appears that the light-collector must have collected 0.05 particles (a thousandth of fifty), which is
impossible, since particles of light are indivisible. A wave, being continuous, can be infinitely attenuated or subdivided, whereas a particle cannot.
Quantum theory resolves this by saying that the light-collector, rather than collecting 0.05 particles, has a 0.05 probability of collecting one particle. More precisely, the average number of
particles it will collect, if the same experiment is repeated many times, is 0.05. Wave-particle duality, which gave rise to quantum theory in the first place, forces us to accept that quantum
physics is inherently probabilistic. Roughly speaking, in pre-quantum, classical physics, one calculated what actually happens, while in quantum physics one calculates the relative probabilities of
various things happening.
This hardly resolves the mystery. The probabilistic nature of quantum theory leads to many strange conclusions. A famous example comes from varying the experiment a little. Suppose an opaque wall
with two windows is placed between the light-collector and the initial burst of light. Some of the light wave will crash into the wall, and some will pass through the windows, blending together and
impinging on the light-collector. If the light-collector collects a particle of light, one might imagine that the particle had to have come through either one window or the other. The rules of the
quantum probability calculus, however, compel the weird conclusion that in some unimaginable way the single particle came through both windows at once. Waves, being spread out, can go through two
windows at once, and so the wave-particle duality ends up implying that individual particles can also.
Things get even stranger, and it is clear why some people pine for the good old days when waves were waves and particles were particles. One of those people was Albert Einstein. He detested the idea
that a fundamental theory should yield only probabilities. "God does not play dice!" he insisted. In Einstein's view, the need for probabilities simply showed that the theory was incomplete. History
supported his claim, for in classical physics the use of probabilities always stemmed from incomplete information. For example, if one says that there is a 60 percent chance of a baseball hitting a
glass window, it is only because one doesn’t know the ball’s direction and speed well enough. If one knew them better (and also knew the wind velocity and all other relevant variables), one could
definitely say whether the ball would hit the window. For Einstein, the probabilities in quantum theory meant only that there were as-yet-unknown variables: hidden variables, as they are called. If
these were known, then in principle everything could be predicted exactly, as in classical physics.
Many years have gone by, and there is still no hint from any experiment of hidden variables that would eliminate the need for probabilities. In fact, the famed Heisenberg Uncertainty Principle says
that probabilities are ineradicable from physics. The thought experiment of the light burst and light-collector showed why: If one and the same entity is to behave as both a wave and a particle, then
an understanding in terms of probabilities is absolutely required. (For, again, 0.05 of a particle makes no sense, whereas a 0.05 chance of a particle does.) The Uncertainty Principle, the bedrock of
quantum theory, implies that even if one had all the information there is to be had about a physical system, its future behavior cannot be predicted exactly, only probabilistically.
This last statement, if true, is of tremendous philosophical and theological importance. It would spell the doom of determinism, which for so long had appeared to spell the doom of free will.
Classical physics was strictly deterministic, so that (as Laplace famously said) if the state of the physical world were completely specified at one instant, its whole future development would be
exactly and uniquely determined. Whether a man lifts his arm or nods his head now would (in a world governed by classical physical laws) be an inevitable consequence of the state of the world a
billion years ago.
But the death of determinism is not the only deep conclusion that follows from the probabilistic nature of quantum theory. An even deeper conclusion that some have drawn is that materialism, as
applied to the human mind, is wrong. Eugene Wigner, a Nobel laureate, argued in a famous essay that philosophical materialism is not logically consistent with present quantum mechanics. And Sir
Rudolf Peierls, another leading physicist, maintained that the premise that you can describe in terms of physics the whole function of a human being . . . including its knowledge, and its
consciousness, is untenable.
These are startling claims. Why should a mere theory of matter imply anything about the mind? The train of logic that leads to this conclusion is rather straightforward, if a bit subtle, and can be
grasped without knowing any abstruse mathematics or physics.
It starts with the fact that for any physical system, however simple or complex, there is a master equation-called the Schrödinger equation-that describes its behavior. And the crucial point on which
everything hinges is that the Schrödinger equation yields only probabilities. (Only in special cases are these exactly 0, or 100 percent.) But this immediately leads to a difficulty: There cannot
always remain just probabilities; eventually there must be definite outcomes, for probabilities must be the probabilities of definite outcomes. To say, for example, there is a 60 percent chance that
Jane will pass the French exam is meaningless unless at some point there is going to be a French exam on which Jane will receive a definite grade. Any mere probability must eventually stop being a
mere probability and become a certainty or it has no meaning even as a probability. In quantum theory, the point at which this happens, the moment of truth, so to speak, is traditionally called the
collapse of the wave function.
The big question is when this occurs. Consider the thought experiment again, where there was a 5 percent chance of the box collecting one particle and a 95 percent chance of it collecting none. When
does the definite outcome occur in this case? One can imagine putting a mechanism in the box that registers when a particle of light has been collected by making, say, a red indicator light to go on.
The answer would then seem plain: The definite outcome happens when the red light goes on (or fails to do so). But this does not really produce a definite outcome, for a simple reason: Any mechanism
one puts into the light-collecting box is just itself a physical system and is therefore described by a Schrödinger equation. And that equation yields only probabilities . In particular, it would say
there is a 5 percent chance that the box collected a particle and that the red indicator light is on, and a 95 percent chance that it did not collect a particle and that the indicator light is off.
No definite outcome has occurred. Both possibilities remain in play.
This is a deep dilemma. A probability must eventually get resolved into a definite outcome if it is to have any meaning at all, and yet the equations of quantum theory when applied to any physical
system yield only probabilities and not definite outcomes.
Of course, it seems that when a person looks at the red light and comes to the knowledge that it is on or off, the probabilities do give way to a definite outcome, for the person knows the truth of
the matter and can affirm it with certainty. And this leads to the remarkable conclusion of this long train of logic: As long as only physical structures and mechanisms are involved, however complex,
their behavior is described by equations that yield only probabilities-and once a mind is involved that can make a rational judgment of fact, and thus come to knowledge, there is certainty.
Therefore, such a mind cannot be just a physical structure or mechanism completely describable by the equations of physics.
Has there been a sleight-of-hand? How did mind suddenly get into the picture? It goes back to probabilities. A probability is a measure of someone’s state of knowledge or lack of it. Since quantum
theory is probabilistic, it makes essential reference to someone's state of knowledge. That someone is traditionally called "the observer." As Peierls explained, "The quantum mechanical description is
in terms of knowledge, and knowledge requires somebody who knows."
I have been explaining some of the implications (as Wigner, Peierls, and others saw them) of what is usually called the traditional, Copenhagen, or standard interpretation of quantum theory. The term
Copenhagen interpretation is unfortunate, since it carries with it the baggage of Niels Bohr’s philosophical views, which were at best vague and at worst incoherent. One can accept the essential
outlines of the traditional interpretation (first clearly delineated by the great mathematician John von Neumann) without endorsing every opinion of Bohr.
There are many people who do not take seriously the traditional interpretation of quantum theory-precisely because it gives too great an importance to the mind of the human observer. Many arguments
have been advanced to show its absurdity, the most famous being the Schrödinger Cat Paradox. In this paradox one imagines that the mechanism in the light-collecting box kills a cat rather than merely
making a red light go on. If, as the traditional view has it, there is not a definite outcome until the human observer knows the result, then it would seem that the cat remains in some kind of limbo,
not alive or dead, but 95 percent alive and 5 percent dead, until the observer opens the box and looks at the cat-which is absurd. It would mean that our minds create reality or that reality is
perhaps only in our minds. Many philosophers attack the traditional interpretation of quantum theory as denying objective reality. Others attack it because they don’t like the idea that minds have
something special about them not describable by physics.
The traditional interpretation certainly leads to thorny philosophical questions, but many of the common arguments against it are based on a caricature. Most of its seeming absurdities evaporate if
it is recognized that what is calculated in quantum theory’s wavefunction is not to be identified simply with what is happening, has happened, or will happen but rather with what someone is in a
position to assert about what is happening, has happened, or will happen. Again, it is about someone’s (the observer’s) knowledge . Before the observer opens the box and looks at the cat, he is not
in a position to assert definitely whether the cat is alive or dead; afterward, he is-but the traditional interpretation does not imply that the cat is in some weird limbo until the observer looks.
On the contrary, when the observer checks the cat’s condition, his observation can include all the tests of forensic pathology that would allow him to pin down the time of the cat’s death and say,
for instance, that it occurred thirty minutes before he opened the box. This is entirely consistent with the traditional interpretation of quantum theory. Another observer who checked the cat at a
different time would have a different moment of truth (so the wavefunction that expresses his state of knowledge would collapse when he looked), but he would deduce the same time of death for the
cat. There is nothing subjective here about the cat’s death or when it occurred.
The traditional interpretation implies that just knowing A, B, and C, and applying the laws of quantum theory, does not always answer (except probabilistically) whether D is true. Finding out
definitely about D may require another observation. The supposedly absurd role of the observer is really just a concomitant of the failure of determinism.
The trend of opinion among physicists and philosophers who think about such things is away from the old Copenhagen interpretation, which held the field for four decades. There are, however, only a
few coherent alternatives. An increasingly popular one is the many-worlds interpretation, based on Hugh Everett’s 1957 paper, which takes the equations of physics as the whole story. If the
Schrödinger equation never gives definite and unique outcomes, but leaves all the possibilities in play, then we ought to accept this, rather than invoking mysterious observers with their minds’
moments of truth.
So, for example, if the equations assign the number 0.05 to the situation where a particle has been collected and the red light is on, and the number 0.95 to the situation where no particle has been
collected and the red light is off, then we ought to say that both situations are parts of reality (though one part is in some sense larger than the other by the ratio 0.95 to 0.05). And if an
observer looks at the red light, then, since he is just part of the physical system and subject to the same equations, there will be a part of reality (0.05 of it) in which he sees the red light on
and another part of reality (0.95 of it) in which he sees the red light off. So physical reality splits up into many versions or branches, and each human observer splits up with it. In some branches
a man will see that the light is on, in some he will see that the light is off, in others he will be dead, in yet others he will never have been born. According to the many-worlds interpretation,
there are an infinite number of branches of reality in which objects (whether particles, cats, or people) have endlessly ramifying alternative histories, all equally real.
Not surprisingly, the many-worlds interpretation is just as controversial as the old Copenhagen interpretation. In the view of some thinkers, the Copenhagen and many-worlds interpretation both make
the same fundamental mistake. The whole idea of wave-particle duality was a wrong turn, they say. Probabilities are needed in quantum theory because in no other way can one make sense of the same
entity being both a wave and a particle. But there is an alternative, going back to de Broglie, which says they are not the same entity. Waves are waves and particles are particles. The wave guides,
or pilots, the particles and tells them where to go. The particles surf the wave, so to speak. Consequently, there is no contradiction in saying both that a tiny fraction of the wave enters the
light collector and that a whole-number of particles enters-or in saying that the wave went through two windows at once and each particle went through just one.
De Broglie’s pilot-wave idea was developed much further by David Bohm in the 1950s, but it has only recently attracted a significant following. Bohmian theory is not just a different interpretation
of quantum theory; it is a different theory. Nevertheless, Bohm and his followers have been able to show that many of the successful predictions of quantum theory can be reproduced in theirs. (It is
questionable whether all of them can be.) Bohm’s theory can be seen as a realization of Einstein’s idea of hidden variables, and its advocates see it as a vindication of Einstein’s well-known
rejection of standard quantum theory. As Einstein would have wanted, Bohmian theory is completely deterministic. Indeed, it is an extremely clever way of turning quantum theory back into a classical
and essentially Newtonian theory.
The advocates of this idea believe that it solves all of the quantum riddles and is the only way to preserve philosophical sanity. However, most physicists, though impressed by its cleverness, regard
it as highly artificial. In my view, the most serious objection to it is that it undoes one of the great theoretical triumphs in the history of physics: the unification of particles and forces. It
gets rid of the mysteriousness of quantum theory by sacrificing much of its beauty.
What, then, are the philosophical and theological implications of quantum theory? The answer depends on which school of thought-Copenhagen, many worlds, or Bohmian-one accepts. Each has its strong
points, but each also has features that many experts find implausible or even repugnant.
One can find religious scientists in every camp. Peter E. Hodgson, a well-known nuclear physicist who is Catholic, insists that Bohmian theory is the only metaphysically sound alternative. He is
unfazed that it brings back Newtonian determinism and mechanism. Don Page, a well-known theoretical cosmologist who is an evangelical Christian, prefers the many-worlds interpretation. He isn’t
bothered by the consequence that each of us has an infinite number of alter egos.
My own opinion is that the traditional Copenhagen interpretation of quantum theory still makes the most sense. In two respects it seems quite congenial to the worldview of the biblical religions: It
abolishes physical determinism, and it gives a special ontological status to the mind of the human observer. By the same token, it seems quite uncongenial to eastern mysticism. As the physicist Heinz
Pagels noted in his book The Cosmic Code: "Buddhism, with its emphasis on the view that the mind-world distinction is an illusion, is really closer to classical, Newtonian physics and not to quantum
theory [as traditionally interpreted], for which the observer-observed distinction is crucial."
If anything is clear, it is that quantum theory is as mysterious as ever. Whether the future will bring more-compelling interpretations of, or even modifications to, the mathematics of the theory
itself, we cannot know. Still, as Eugene Wigner rightly observed, It will remain remarkable, in whatever way our future concepts develop, that the very study of the external world led to the
conclusion that the content of the consciousness is an ultimate reality. This conclusion is not popular among those who would reduce the human mind to a mere epiphenomenon of matter. And yet matter
itself seems to be telling us that its connection to mind is more subtle than is dreamt of in their philosophy.
Stephen M. Barr is a theoretical particle physicist at the Bartol Research Institute of the University of Delaware and the author of Modern Physics and Ancient Faith and A Student's Guide to Natural Science.
|
{"url":"http://www.firstthings.com/article/2007/02/faith-and-quantum-theory","timestamp":"2014-04-19T17:10:27Z","content_type":null,"content_length":"42487","record_id":"<urn:uuid:6ded0d20-8402-4598-968a-a32accc19bbe>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Descend project is now officially reactivated and yesterday I committed the current version of the prototype into SVN repository. The UI is very basic, but it does its job of drawing a 3D surface
based on mathematical equations. So far it consist of three important parts:
• A compiler and interpreter of Misc, a programming language designed for calculating geometry with high performance. You can read more about it in the previous post and I will keep returning to it
as it's quite an interesting subject.
• An adaptive tessellator which is described below in more details.
• A vertex and pixel shader that perform per-pixel Phong shading, which looks nice on the Barbie-pink surface :).
A parametric surface is described by a function V(p, q), where V is the vector describing the location of a point in 3D space and p and q are parameters (usually in [0, 1] or some other range).
Obviously the surface consists of an infinite number of points, so we must calculate a number of samples and join the resulting points into triangles. This process is called tessellation. If the
triangles are small enough, the Phong shading will create an illusion that the surface is smooth and curved.
The only difficulty is to determine what does "small enough" mean. Some surfaces are flat or almost flat and need just a few triangles to look good. Other surfaces are very curved and require
thousands of triangles. In practice most surfaces are flatter in some areas and more curved in other areas. Take a sphere for example. Its curvature is the same everywhere, but we must remember that
our samples are not distributed uniformly on its surface. Imagine a globe: meridians are located much closer to each other near the poles than near the equator. So in practice the distance between
two samples located near the equator is greater and the surface needs to be divided into more triangles. This way, the size of all triangles will be more or less the same. Without adaptive
tessellation, triangles would be closely packed near the pole and very large near the equator.
The tessellation algorithm works by first calculating four points at the corners of the surface. Wait, where does a sphere have corners? Just unwrap it mentally into a rectangular map, transforming
it from (x, y, z) space into the (p, q) space. This gives us a square divided diagonally into two triangles. Then we calculate a point in the middle of the diagonal and divide each triangle into two
smaller triangles. This process can be repeated recursively until the desired quality is reached.
How to measure the quality? The simplest method is to calculate the distance between the "new" point and the line that we are attempting to divide. The greater the distance, relatively to the length
of the line, the more curved the surface. If this distance is smaller than some threshold value, we simply assume that the point lays on the line and discard it. The smaller the threshold, the more
accurate the tessellation and the more triangles we get.
Unfortunately there are situations when this gives wrong results. If the curvature of the surface between two points resembles a sinusoid, then the third point in between appears to be located very
near the line drawn between those two points. The tessellation algorithm will assume that the surface is not curved in this area. This produces very ugly artifacts.
So I came up with a method which produces much more accurate results. In order to render the surface with lighting, we need to calculate normal vectors at every point. For the Phong shading to look
nice, those normals must be calculated very accurately. So two more points are calculated at a very small distance from the original one and the resulting triangle is used to calculate the normal.
Note that the angle between normals is a very accurate measure of the curvature. An algorithm which compares the angle between the normals of two endpoints and the normal of the "new" point with a
threshold angle can handle situations like the above much better. It's also more computationally expensive, because we must calculate three samples before we can decide if the point is rejected or not.
Of course this method can also be fooled in some specific cases, but in combination with the first one it works accurately in most cases. Experimentation shows that the threshold angle of 5° gives
excellent results for every reasonable surface I was able to come up with.
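A simplified sketch of the two subdivision tests in Python (illustrative code, not Descend's actual implementation; f and n are assumed callables returning a surface point and unit normal as NumPy arrays for parameters (p, q), the 5-degree angle is the one discussed above, and the flatness threshold is an illustrative value):

import numpy as np

FLATNESS = 0.01              # max midpoint-to-chord distance, relative to chord length
MAX_ANGLE = np.radians(5.0)  # max angle between normals

def needs_split(f, n, a, b):
    """Decide whether the edge between parameter pairs a and b must be split."""
    m = tuple((x + y) / 2.0 for x, y in zip(a, b))   # midpoint in (p, q) space
    pa, pb, pm = f(*a), f(*b), f(*m)
    chord = pb - pa
    # Test 1: distance of the midpoint sample from the chord between endpoints.
    t = np.dot(pm - pa, chord) / np.dot(chord, chord)
    dist = np.linalg.norm(pm - (pa + t * chord))
    if dist > FLATNESS * np.linalg.norm(chord):
        return True
    # Test 2: angle between each endpoint normal and the midpoint normal.
    for endpoint_normal in (n(*a), n(*b)):
        cosang = np.clip(np.dot(endpoint_normal, n(*m)), -1.0, 1.0)
        if np.arccos(cosang) > MAX_ANGLE:
            return True
    return False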
In practice we also have to introduce the minimum and maximum number of divisions. Up to a certain point we simply keep dividing the grid into smaller triangles without even measuring the curvature,
because otherwise the results would be very inaccurate. And since the curvature may be infinite in some points, we also must have some upper limit.
Final notes: Adaptive tessellation of parametric surfaces is the subject of many PhD dissertations and my algorithm is very simplistic, but it's just fast and accurate enough for the purposes of
Descend. Also it should not be confused with adaptive tessellation of displacement mapping, which is a different concept.
|
{"url":"http://www.mimec.org/taxonomy/term/34","timestamp":"2014-04-20T20:55:16Z","content_type":null,"content_length":"16886","record_id":"<urn:uuid:1ea6e8a3-09f5-481f-bf5e-45da6fd091b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cost-minimising strategies for data labelling: optimal stopping and active learning
Christos Dimitrakakis and Christian Savu-Krohn
In: Fifth International Symposium on Foundations of Information and Knowledge Systems, 11-15 Feb 2008, Pisa, Italy.
Supervised learning deals with the inference of a distribution over an output or label space Y conditioned on points in an observation space X, given a training dataset D of pairs in X × Y. However,
in a lot of applications of interest, acquisition of large amounts of observations is easy, while the process of generating labels is time-consuming or costly. One way to deal with this problem is
active learning, where points to be labelled are selected with the aim of creating a model with better performance than that of a model trained on an equal number of randomly sampled points. In this
paper, we instead propose to deal with the labelling cost directly: The learning goal is defined as the minimisation of a cost which is a function of the expected model performance and the total
cost of the labels used. This allows the development of general strategies and specific algorithms for (a) optimal stopping, where the expected cost dictates whether label acquisition should continue
(b) empirical evaluation, where the cost is used as a performance metric for a given combination of inference, stopping and sampling methods. Though the main focus of the paper is optimal stopping,
we also aim to provide the background for further developments and discussion in the related field of active learning.
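A hedged sketch of the expected-cost stopping idea described in the abstract (the paper's actual estimators are not given here, and all names, costs, and the one-step lookahead rule below are illustrative assumptions):

def should_stop(est_error, n_labels, label_cost, error_cost, lookahead_gain):
    """est_error: current estimated error rate of the model
    lookahead_gain: estimated error reduction from buying one more label"""
    cost_now = error_cost * est_error + label_cost * n_labels
    cost_next = error_cost * (est_error - lookahead_gain) + label_cost * (n_labels + 1)
    return cost_next >= cost_now   # another label is no longer worth its price

# Example: errors cost 1000, labels cost 1 each; stop once one more label
# is expected to shave less than 0.001 off the error rate.
print(should_stop(0.10, 500, 1.0, 1000.0, 0.0005))  # True -> stop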
|
{"url":"http://eprints.pascal-network.org/archive/00003090/","timestamp":"2014-04-21T02:02:05Z","content_type":null,"content_length":"8248","record_id":"<urn:uuid:c196c3e2-1df5-4c92-a7ad-a556f8ad7c18>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Assignment of Keyfitzed Conditional Probabilities for the Trial Urban District Assessment School Sample
Conditional probabilities were assigned to control overlap between samples while maintaining the desired probabilities of selection for each sample individually. This was done using a technique
called Keyfitzing. The original reference for Keyfitzing is Keyfitz (1951). Rust and Johnson (1992) discuss the method in its NAEP application. The desired probabilities of selection for the schools
in the TUDA district samples were taken as given. There are three cases of sampling status that determined the conditional probability of selection into the TUDA sample:
• School sampled in the α sample. In this case the conditional probability was increased.
• School sampled in the β sample and not in the α sample. In this case, the conditional probability was decreased.
• School not sampled in either the α or the β sample.
The goal was to maximize the overlap between TUDA district NAEP and α-sampled schools while minimizing the overlap between TUDA district NAEP and β-sampled schools.
Write π for the desired probability of the school being selected for district NAEP, A for the probability that the school is in the α sample, B for the probability that it is in the β sample but not the α sample, and C for the probability that it is in neither sample. With X, Y, and Z denoting the conditional probabilities of selection into district NAEP for those three cases, the requirement is AX + BY + CZ = π.
To recap, it was necessary to select a school for TUDA district NAEP with probability π while maximizing X and minimizing Y. As all the quantities are probabilities, they are restricted to be between 0 and 1. The task of maximizing X and minimizing Y will, by the algebra, separate out into three cases based on the interrelationships of π, A, B, and C.
For the first case, Y can be set to 0 (its absolute minimum), and X can be maximized then by making Z as small as possible (0). Setting Y and Z to 0 gives X = π/A. Note that X is less than or equal to 1 when π ≤ A.
If Y can be set to 0 (its best value) and X can be set to its largest value, 1 (its best value), the equation for Z is Z = (π – A)/C, which lies between 0 and 1 when A ≤ π ≤ A + C.
Otherwise, Y is minimized by making Z as large as possible. Setting Z to its maximum value of 1 gives Y = (π – A – C)/B. Note that in this case π ≥ A + C.
Collecting the three cases, the solution is:
Condition        X      Y                Z
π ≤ A            π/A    0                0
A < π ≤ A + C    1      0                (π – A)/C
π > A + C        1      (π – A – C)/B    1
Except for new or newly eligible schools, the β sample was selected to minimize overlap with the α sample; for new or newly eligible schools, the α and β samples were selected independently. In both situations the conditional probabilities follow from the same equation AX + BY + CZ = π and the same three cases, with A, B, and C computed under the appropriate relationship between the two samples.
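The two-sample version of Keyfitz's method is simpler and may help fix ideas; a sketch in Python (illustrative code, not the TUDA three-stratum computation) that preserves the new sample's marginal selection probability while maximizing overlap with the old sample:

def keyfitz(p_old, p_new):
    # conditional selection probabilities for the new sample,
    # given membership status in the old sample (0 < p_old < 1)
    if_selected = min(1.0, p_new / p_old)                # P(new | in old sample)
    if_not = max(0.0, (p_new - p_old) / (1.0 - p_old))   # P(new | not in old)
    return if_selected, if_not

# Check that the marginal probability is preserved:
p_old, p_new = 0.4, 0.3
a, b = keyfitz(p_old, p_new)
print(p_old * a + (1 - p_old) * b)   # 0.3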
Last updated 11 March 2009 (RF)
|
{"url":"http://nces.ed.gov/nationsreportcard/tdw/sample_design/2002_2003/sampdsgn_2002_state_tua_keyfitz.asp","timestamp":"2014-04-19T09:44:00Z","content_type":null,"content_length":"60685","record_id":"<urn:uuid:28822ccf-7089-40cc-8cdb-30e2ea5bf0c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivatives/Integrals, properties of sin/cos
July 19th 2008, 11:29 PM
Derivatives/Integrals, properties of sin/cos
Q: ʃʃ cos (x +2y) dy dx where 0 < x < pi; 0 < y < pi/2
When I integrate cos(x+2y) wrt y, do I get sin(x+2y)/2? How do I evaluate sin(x + pi)?
Does sin (x+c) = sin(x) + sin(c)?
July 20th 2008, 12:01 AM
1. check with the unit circle ;) ...or
2. plot the graph of sin(x+pi) , i.e. translation by pi to the left. It comes -sinx out :)
3. use trig identities (I don't recommend the last method)
July 20th 2008, 12:14 AM
How are you doing double integrals but you don't know the addition formulas for sin?
$\sin\left(A+B\right)=\sin\left(A\right)\cos\left(B \right)+\cos\left(A\right)\sin\left(B\right)$
July 20th 2008, 12:18 AM
The last math course I took was 6 years ago...and then I went straight to multi-variate calc without any review.
Anyway, the formula you gave is if A and B are both variables right? What about in the case that one is a variable and the other is a constant? Do I just treat the constant as a variable and plug
it through?
July 20th 2008, 12:43 AM
$\int_0^{\pi} \int_0^{\frac{\pi}{2}}\cos (x+2y)~dy~dx$
Ignore the outer integral and do the inner one first.
$\int_0^{\frac{\pi}{2}}\cos (x+2y)~dy$
You can treat x as a constant here. Let $u=x+2y$. Don't forget to change the integration limits. Then you can plug this in the outer integral and the rest is easy..
Also you can use the trigonometric addition formula and Fubini's theorem.
July 20th 2008, 12:44 AM
Would there really be a need for Fubini's Theorem here?
July 20th 2008, 01:41 AM
thank you both for your help. I was wondering if I have integrated the first part right. I know you recommended substitution to solve it, but it seems like it wouldn't be completely necessary.
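For anyone checking the thread's integral, a quick SymPy verification (the inner antiderivative is sin(x + 2y)/2, sin(x + pi) = -sin(x), and the full double integral over 0 < x < pi, 0 < y < pi/2 evaluates to -2):

import sympy as sp

x, y = sp.symbols('x y')
inner = sp.integrate(sp.cos(x + 2*y), (y, 0, sp.pi/2))   # simplifies to -sin(x)
total = sp.integrate(inner, (x, 0, sp.pi))
print(sp.simplify(inner), total)   # -sin(x)  -2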
|
{"url":"http://mathhelpforum.com/calculus/44101-derivatives-integrals-properties-sin-cos-print.html","timestamp":"2014-04-20T11:01:19Z","content_type":null,"content_length":"15909","record_id":"<urn:uuid:9b78ecbb-d39f-4045-9c4a-150cce21fe27>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
OpenGL GLM C++ vertex and matrix transformations confusion
01-11-2013, 09:08 PM #1
OpenGL GLM C++ vertex and matrix transformations confusion
I'm pretty confused now that I can no longer do a simple gl_Position = pos * MVP; in my vertex shader, and instead want to transform the vertices before I send them via glBufferSubData.
This is my poor attempt at trying to get the vertex coordinates to be pixel based. Lets say I want a rect to be 100 pixels from 0,0. I was hoping this would work, and I've been fiddling it with
for a while.
What am I doing wrong here, or what is it that I actually want to do?
Code :
glm::mat4 view = glm::translate(glm::mat4(), glm::vec3(0.0f, 0.0f, 0.0f));
glm::mat4 ortho = glm::ortho(0.0f, float(SCREEN_W), float(SCREEN_H), 0.0f, -1.0f, 1.0f);
glm::vec4 rect = glm::vec4(0.0f, 0.0f, 1.0f, 1.0f);
glm::vec4 transformed = rect * view * ortho;
std::cout << "x: " << transformed.x << " y: " << transformed.y << " w: " << transformed.w << " t: " << transformed.t << " z : " << transformed.z << "\n";
float x = transformed.x;
float width = transformed.w;
float y = transformed.y;
float height = transformed.z;
vertices[0][0] = x; // top left X
vertices[0][1] = y; //top left Y
vertices[1][0] = x; // bottom left X
vertices[1][1] = height; // bottom left Y
vertices[2][0] = width; // bottom right X
vertices[2][1] = height; //bottom right Y
vertices[3][0] = width; // top right X
vertices[3][1] = y; // top right Y
gl_Position = pos * MVP
That pretty much explains everything; your transforms are backwards. The matrix goes on the left; the position goes on the right. And if that actually worked in your shader, it's probably because
you transposed the matrix when you uploaded it with glUniformMatrix.
That's why your CPU equivalent is failing; there's no transposition correction being applied, so your math is backwards.
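The point about order is easy to see numerically; a NumPy sketch (GLM's m * v likewise treats v as a column vector with the matrix on the left):

import numpy as np

M = np.array([[1.0, 0.0, 0.0, 5.0],   # a translation by (5, 0, 0)
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
v = np.array([0.0, 0.0, 0.0, 1.0])    # the point (0, 0, 0) with w = 1

print(M @ v)   # [5. 0. 0. 1.] -- matrix on the left: translation applied
print(v @ M)   # [0. 0. 0. 1.] -- same as M.T @ v: the translation is lost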
{"url":"http://www.opengl.org/discussion_boards/showthread.php/180788-OpenGL-GLM-C-vertex-and-matrix-transformations-confusion?p=1247022&viewfull=1","timestamp":"2014-04-17T19:01:43Z","content_type":null,"content_length":"41882","record_id":"<urn:uuid:2470a193-19a5-40db-9bb7-8c4de21f5f4b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Work Tasks
Applied MATHEMATICIANS use theories and techniques, such as modeling and computational methods, to solve practical problems in business, government, engineering, and the physical, life, and social
sciences. They may analyze the most efficient way to schedule airline routes between cities, or the effect and safety of new drugs on disease. They work with others to achieve common solutions to
problems in various industries. Theoretical mathematicians advance mathematical knowledge. They seek to increase basic knowledge without considering its practical use. They are employed as university
faculty and teach and conduct research.
Salary, Size & Growth
• $78,000 average per year ($37.50 per hour)
• A small occupation (2,800 workers in 2010)
• Expected to grow rapidly (2.2% per year)
Entry Requirements
A doctoral degree in mathematics is usually the minimum education needed for MATHEMATICIANS, except in the federal government, where entry-level candidates usually must have a four-year degree with a
major in mathematics. In private industry, applicants generally need a master's or a Ph.D. degree. A master's degree in mathematics is sufficient for some research positions and teaching in some
community or 4-year colleges. However, in most 4-year colleges and universities and many research and development positions in private industry, a doctoral degree is required.
|
{"url":"http://www.act.org/world/occs/occ257.html?TB_iframe=true","timestamp":"2014-04-20T06:26:12Z","content_type":null,"content_length":"3149","record_id":"<urn:uuid:4369a9e1-24a1-4794-9c99-ef1216c442d4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] The value of a native Blas
Chris Barker Chris.Barker at noaa.gov
Thu Jul 29 12:01:05 CDT 2004
Hi all,
I think this is a nifty bit of trivia.
After getting my nifty Apple Dual G5, I finally got around to doing a
test I had wanted to do for a while. The Numeric package uses LAPACK for
the Linear Algebra stuff. For OS-X there are two binary versions
available for easy install:
One linked against the default, non-optimized version of BLAS (from Jack
Jansen's PackMan database)
One linked against the Apple Supplied vec-lib as the BLAS. (From Bob
Ippolito's PackMan database (http://undefined.org/python/pimp/)
To compare performance, I wrote a little script that generates a random
matrix and vector: A, b, and solves the equation: Ax = b for x
import time
import RandomArray                                 # Numeric-era module
from LinearAlgebra import solve_linear_equations   # also from Numeric

N = 1000
a = RandomArray.uniform(-1000, 1000, (N, N))
b = RandomArray.uniform(-1000, 1000, (N,))
start = time.clock()
x = solve_linear_equations(a, b)
print "It took %f seconds to solve a %ix%i system" % (
    time.clock() - start, N, N)
And here are the results:
With the non-optimized version:
It took 3.410000 seconds to solve a 1000X1000 system
It took 28.260000 seconds to solve a 2000X2000 system
With vec-Lib:
It took 0.360000 seconds to solve a 1000X1000 system
It took 2.580000 seconds to solve a 2000X2000 system
for a speed increase of over 10 times! Wow!
Thanks Bob, for providing that package.
I'd be interested to see similar tests on other platforms, I haven't
gotten around to figuring out how to use a native BLAS on my Linux box.
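For reference, a rough present-day equivalent of the same benchmark (NumPy long ago replaced Numeric; numpy.linalg.solve calls into whatever BLAS/LAPACK your NumPy build links against):

import time
import numpy as np

N = 1000
a = np.random.uniform(-1000, 1000, (N, N))
b = np.random.uniform(-1000, 1000, (N,))
start = time.perf_counter()
x = np.linalg.solve(a, b)
print("It took %f seconds to solve a %ix%i system" % (time.perf_counter() - start, N, N))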
Christopher Barker, Ph.D.
NOAA/OR&R/HAZMAT (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker at noaa.gov
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2004-July/003331.html","timestamp":"2014-04-21T07:14:48Z","content_type":null,"content_length":"4355","record_id":"<urn:uuid:08967b71-a3bf-4e48-b9c8-d7db48f44882>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introduction to designing crossovers without measurement - Page 9 - diyAudio
Thank you gpapag and shmb.
This is a good question, and I thought it would help if I showed an example. I used an example similar to the tutorial with a 6.8Ω resistor, an 8.2µF capacitor, which I then increased to 40µF which
seemed extreme enough to prove a point. In all cases the grey traces are the raw driver, green is the normal case, and magenta is the extreme case.
The first image shows the effect of the impedance compensation on the driver. There are six traces: the lower three are the impedance curves that are normally shown, and the upper three are the impedance phase (which is not the same as acoustic phase). Although technically both work together, it is sometimes easier to just look at the normal impedance curve. With regard to phase, just ask whether it is close to zero degrees or far from it: in other words, is it an easy 8 ohms to drive, or a not-so-easy 8 ohms to drive? If that helps.
So, the impedance drops to 4 ohms around 1kHz with the large value of capacitor, and as the phase is near zero degrees for that case, it will act similarly to a simple 4 ohm resistor, a load that
most amps will handle.
The second image shows the impedance once the 1mH inductor is added. This is the impedance that matters because it is what the amp will see. By adding the inductor, the effect of using the larger
capacitor is partially hidden from the amp... the impedance is well on the rise at 1kHz and upward.
The lowest impedance here is around 5 ohms. If you look at around 600Hz, the phase is a little further from 0 degrees although still less than 90 degrees. You could take a non-technical guess that
this makes it equivalent to maybe a 4 ohm load at that point.
Summarising the effect of the larger capacitor (looking at the first plot), there is little difference at the highest frequencies because all capacitors act like a short circuit at high frequencies.
The effect is to act on the impedance at a lower point than before, encroaching on the lower midrange as the capacitor grows in value.
I've also shown the frequency response on the second plot (the upper two traces) for interests sake. The end result it is showing in this case, is that the greatest effect of that inductor will be
above 1kHz, and the large capacitor is producing an effect both above and below 1kHz. The impedance reduction below 1kHz may be an unwanted side effect, but if it gets you the result you want then it
is valid, and in this case relatively harmless.
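For anyone who wants to reproduce the rough shape of these curves numerically, here is a sketch of the calculation. One big simplifying assumption: the driver is modelled as a flat 8 ohm resistor, whereas the real driver in the plots above has its own impedance peaks.

import numpy as np

f = np.logspace(1, 4, 500)                 # 10 Hz to 10 kHz
w = 2 * np.pi * f
Z_driver = 8.0                             # assumption: purely resistive driver
Z_zobel  = 6.8 + 1.0 / (1j * w * 8.2e-6)   # 6.8 ohm in series with 8.2 uF
Z_par    = 1.0 / (1.0 / Z_driver + 1.0 / Z_zobel)
Z_amp    = 1j * w * 1e-3 + Z_par           # add the 1 mH series inductor
magnitude = np.abs(Z_amp)                  # the usual impedance curve
phase_deg = np.degrees(np.angle(Z_amp))    # the impedance phase traces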
|
{"url":"http://www.diyaudio.com/forums/multi-way/189847-introduction-designing-crossovers-without-measurement-9.html","timestamp":"2014-04-17T04:54:09Z","content_type":null,"content_length":"85636","record_id":"<urn:uuid:03c6e96e-183c-41e5-b808-c558a2fb2c95>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometry and Connectedness of Heterotic String Compactifications with Fluxes
I will discuss the geometry of heterotic string compactifications with fluxes. The compactifications on 6 dimensional manifolds which preserve N=1 supersymmetry in 4 dimensions must be complex
manifolds with vanishing first Chern class, but which are not in general Kahler (and therefore not Calabi-Yau manifolds) together with a vector bundle on the manifold which must satisfy a complicated
differential equation. The flux, which can be viewed as a torsion, is the obstruction to the manifold being Kahler. I will describe how these compactifications are connected to the more traditional
compactifications on Calabi-Yau manifolds through geometric transitions like flops and conifold transitions. For instance, one can construct solutions by flopping rational curves in a Calabi-Yau
manifold in such a way that the resulting manifold is no longer Kahler. Time permitting, I will discuss open problems, for example the understanding of the moduli space of heterotic compactifications and the related problem of determining the massless spectrum in the effective 4 dimensional supersymmetric field theory. The study of these compactifications is interesting in its own right both in string theory, in order to understand more generally the degrees of freedom of these theories, and also in mathematics. For instance, the connectedness between the solutions is related to problems in mathematics like the conjecture by Miles Reid that complex manifolds with trivial canonical bundle are all connected through geometric transitions.
|
{"url":"http://www.perimeterinstitute.ca/fr/videos/geometry-and-connectedness-heterotic-string-compactifications-fluxes","timestamp":"2014-04-19T01:36:11Z","content_type":null,"content_length":"29799","record_id":"<urn:uuid:4e89834c-c642-431a-8b60-b3c0551b1f3e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A logic-Algebra problem...
Re: A logic-Algebra problem...
According to the equation, A^C must equal L * O * G * I * C
So, if, according to you, C = 5, then A ^ 5 is equal to L * O * G * I * 5.
What do you mean by A has a 7 next to it?
Your question doesn't make the most sense really.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=7086","timestamp":"2014-04-18T08:09:12Z","content_type":null,"content_length":"11853","record_id":"<urn:uuid:54ebf67e-e069-4c18-88cb-aa24da942df3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Impulsive force
So this is the same thing I did before, but updated with the new signs:
I_x = 0.2(8.48528 - 9.02723) = -0.10839
I_y = 0.2(8.48528 - (-11.9795)) = 4.092956
To solve for F_average I just divide by 0.05:
F_x = -0.10839/0.05 = -2.1678 N
F_y = 4.092956/0.05 = 81.8591 N
|
{"url":"http://www.physicsforums.com/showpost.php?p=1318602&postcount=5","timestamp":"2014-04-18T00:35:29Z","content_type":null,"content_length":"7023","record_id":"<urn:uuid:53d02d1e-2502-4be2-abb9-648c65a8f1ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analysis Help required
Analysis Help required
I need help with analysis, please!
How do you prove that a sequence $(x_n)$ diverges to infinity if and only if $(-x_n)$ diverges to $-\infty$?
It seems like quite a simple question, but I can't think of a rigorous enough proof. Thanks!
Re: Analysis Help required
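A standard way to argue it, for reference (just the usual definition-unpacking; the letters $M$ and $N$ below are generic, not from any particular textbook): by definition, $x_n \to \infty$ means that for every $M > 0$ there is an $N$ such that $x_n > M$ for all $n \ge N$. Multiplying that inequality by $-1$ gives $-x_n < -M$ for all $n \ge N$, and since every $K < 0$ can be written as $-M$ with $M = -K > 0$, this is exactly the statement that $-x_n \to -\infty$. For the converse, apply the same argument to the sequence $(-x_n)$, whose negation is $(x_n)$.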
|
{"url":"http://mathhelpforum.com/new-users/207111-analysis-help-required.html","timestamp":"2014-04-18T14:43:38Z","content_type":null,"content_length":"33471","record_id":"<urn:uuid:937e116f-d390-4e59-9f6c-ad07e64d7f43>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] CH and mathematics
Arnon Avron aa at tau.ac.il
Wed Jan 23 15:03:38 EST 2008
On Mon, Jan 21, 2008 at 04:08:37PM -0500, joeshipman at aol.com wrote:
> Some set theorists (for example, Patrick Dehornoy here:
> http://www.math.unicaen.fr/~dehornoy/Surveys/DgtUS.pdf.
> ) believe that recent work of Woodin makes CH a much more definite
> question.
> Woodin has argued, roughly speaking, that the properties of the sets of
> hereditary cardinality aleph_1 are invariant under forcing only if CH
> is false; therefore only a set theory where CH is false can settle many
> open questions about sets that are quite small and "early" in the
> set-theoretic hierarchy. If we believe those questions
> themselves have
> definite answers, then we should accept that CH is false.
First, I do not believe those questions have definite
answers either. Second, even if we do, the argument at most
implies that only if CH is false we may be able to
*know* their definite truth value. But it is quite
possible that there are definite questions whose definite
truth-value we shall never be able to actually know
(maybe the twin-primes problem or GC, or the consistency
of NF are such problems. Who knows).
It seems to me that you confuse "having a definite truth
value" with "it should be possible to determine this
truth-value". Perhaps for constructivists the two claims
are identical, but not for ordinary people.
> I don't necessarily buy this line of reasoning myself, but a lot of
> people regard it as genuine progress, so you should address it if you
> want to argue for CH's indefiniteness or our inability to settle it.
I do not buy this line at all, so I feel no need to address it.
Let me add that in general I cannot understand an argument
that something is true because things will look bad if it is not.
What kind of a *mathematical* argument is this?? (And
if productivity is the issue rather than meaningfulness and
truth, then why not accept as true V=L, which is a very
fruitful axiom?).
Arnon Avron
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2008-January/012574.html","timestamp":"2014-04-18T14:03:12Z","content_type":null,"content_length":"4546","record_id":"<urn:uuid:db889be1-5c6f-4ed7-9e7b-42b3a9bdfad7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diode voltage drop (connected in parallel with a resistor)
This means the answer in the book is wrong (when the ideal diode is considered).
They obtained a current through the diode less than the current through the series resistor.
Perhaps there's a slight confusion of what "ideal diode" means? The real diode has the current as an exponential function of the voltage; as that's very complicated, there are 3 popular
approximations (each one of them is an "ideal" diode thingie, but different ones)
(a) One is Vd = 0V for any Id > 0; that's a pretty poor model
(b) One is Vd = 0.6V for any Id > 0; that's a pretty popular model
(c) One is Vd = 0.6V + Id.Rd for any Id > 0; that's a less popular model
If one takes the ideal model (b), then you assume Vd = 0.6V, so the current on the second resistor (the one in series) is (Vo - 0.6) / R2. This is the current that passes between the diode and the
first resistor. The current on the first resistor is 0.6 / R1. Then you take the current from the second resistor (Vo - 0.6) / R2 and subtract the one on the first resistor - that will be the current
on the diode Id = (Vo - 0.6) / R2 - 0.6/R1
So the book makes sense, if that's what the book did.
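To make model (b) concrete, here is the arithmetic with made-up component values; the thread's actual R1, R2 and Vo are not given, so these numbers are purely illustrative:

Vo, R1, R2, Vd = 5.0, 1000.0, 1000.0, 0.6   # hypothetical values
I_R2 = (Vo - Vd) / R2    # current through the series resistor: 4.4 mA
I_R1 = Vd / R1           # current through the parallel resistor: 0.6 mA
I_d  = I_R2 - I_R1       # current left for the diode: 3.8 mA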
|
{"url":"http://www.physicsforums.com/showthread.php?t=573049","timestamp":"2014-04-19T17:31:18Z","content_type":null,"content_length":"63433","record_id":"<urn:uuid:3f6b5665-aeb0-4799-aa62-325f115f0aa4>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Type I and II error
Type I error
A type I error occurs when one rejects the null hypothesis when it is true. The probability of a type I error is the level of significance of the test of hypothesis, and is denoted by *alpha*.
Usually a one-tailed test of hypothesis is used when one talks about type I error.
If the cholesterol level of healthy men is normally distributed with a mean of 180 and a standard deviation of 20, and men with cholesterol levels over 225 are diagnosed as not healthy, what is the
probability of a type one error?
z=(225-180)/20=2.25; the corresponding tail area is .0122, which is the probability of a type I error.
If the cholesterol level of healthy men is normally distributed with a mean of 180 and a standard deviation of 20, at what level (in excess of 180) should men be diagnosed as not healthy if you want
the probability of a type one error to be 2%?
2% in the tail corresponds to a z-score of 2.05; 2.05 × 20 = 41; 180 + 41 = 221.
Type II error
A type II error occurs when one rejects the alternative hypothesis (fails to reject the null hypothesis) when the alternative hypothesis is true. The probability of a type II error is denoted by
*beta*. One cannot evaluate the probability of a type II error when the alternative hypothesis is of the form µ > 180, but often the alternative hypothesis is a competing hypothesis of the form: the
mean of the alternative population is 300 with a standard deviation of 30, in which case one can calculate the probability of a type II error.
If men predisposed to heart disease have a mean cholesterol level of 300 with a standard deviation of 30, but only men with a cholesterol level over 225 are diagnosed as predisposed to heart disease,
what is the probability of a type II error (the null hypothesis is that a person is not predisposed to heart disease).
z=(225-300)/30=-2.5 which corresponds to a tail area of .0062, which is the probability of a type II error (*beta*).
If men predisposed to heart disease have a mean cholesterol level of 300 with a standard deviation of 30, above what cholesterol level should you diagnose men as predisposed to heart disease if you
want the probability of a type II error to be 1%? (The null hypothesis is that a person is not predisposed to heart disease.)
1% in the tail corresponds to a z-score of 2.33 (or -2.33); -2.33 × 30 = -70; 300 - 70 = 230.
Conditional and absolute probabilities
It is useful to distinguish between the probability that a healthy person is dignosed as diseased, and the probability that a person is healthy and diagnosed as diseased. The former may be rephrased
as given that a person is healthy, the probability that he is diagnosed as diseased; or the probability that a person is diseased, conditioned on that he is healthy. The latter refers to the
probability that a randomly chosen person is both healthy and diagnosed as diseased. Probabilities of type I and II error refer to the conditional probabilities. A technique for solving Bayes rule
problems may be useful in this context.
If the cholesterol level of healthy men is normally distributed with a mean of 180 and a standard deviation of 20, but men predisposed to heart disease have a mean cholesterol level of 300 with a
standard deviation of 30, and the cholesterol level 225 is used to demarcate healthy from predisposed men; what fraction of the population are healthy and diagnosed as predisposed? What fraction of the
population are predisposed and diagnosed as healthy? Assume 90% of the population are healthy (hence 10% predisposed).
Let A designate healthy, B designate predisposed, C designate cholesterol level below 225, D designate cholesterol level above 225. P(D|A) = .0122, the probability of a type I error calculated above.
Hence P(AD)=P(D|A)P(A)=.0122 × .9 = .0110. P(C|B) = .0062, the probability of a type II error calculated above. Hence P(BC)=P(C|B)P(B)=.0062 × .1 = .00062.
A problem requiring Bayes rule, or the technique referenced above, is: what is the probability that someone with a cholesterol level over 225 is predisposed to heart disease, i.e., P(B|D)=? This is P(BD)/P(D) by the definition of conditional probability. P(BD)=P(D|B)P(B). For P(D|B) we calculate the z-score (225-300)/30 = -2.5; the relevant tail area is .9938 for the predisposed men; .9938 × .1 = .09938. P(D) = P(AD) + P(BD) = .0110 + .09938 = .11038 (the summands were calculated above). Inserting this into the definition of conditional probability we have .09938/.11038 = .9004 = P(B|D).
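These numbers can also be checked mechanically; here is a short script using SciPy, assuming the same parameters as above (healthy ~ N(180, 20), predisposed ~ N(300, 30), cutoff 225, 90% healthy):

from scipy.stats import norm

cutoff = 225.0
alpha = norm.sf(cutoff, loc=180, scale=20)    # P(type I error),  about .0122
beta  = norm.cdf(cutoff, loc=300, scale=30)   # P(type II error), about .0062
p_A, p_B = 0.9, 0.1                           # P(healthy), P(predisposed)
p_BD = norm.sf(cutoff, loc=300, scale=30) * p_B
p_D  = alpha * p_A + p_BD                     # P(diagnosed as predisposed)
print(p_BD / p_D)                             # P(B|D), about .90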
If there is a diagnostic value demarcating the choice of two means, moving it to decrease type I error will increase type II error (and vice-versa).
The power of a test is (1-*beta*), the probability of choosing the alternative hypothesis when the alternative hypothesis is correct.
The effect of changing a diagnostic cutoff can be simulated.
Applets: An applet by R. Todd Ogden also illustrates the relative magnitudes of type I and II error (and can be used to contrast one versus two tailed tests). [To interpret with our discussion of
type I and II error, use n=1 and a one tailed test; alpha is shaded in red and beta is the unshaded portion of the blue curve. Because the applet uses the z-score rather than the raw data, it may be
confusing to you. The alignment is also off a little.]
Competencies: Assume that the weights of genuine coins are normally distributed with a mean of 480 grains and a standard deviation of 5 grains, and the weights of counterfeit coins are normally
distributed with a mean of 465 grains and a standard deviation of 7 grains. Assume also that 90% of coins are genuine, hence 10% are counterfeit.
What is the probability that a randomly chosen genuine coin weighs more than 475 grains?
What is the probability that a randomly chosen counterfeit coin weighs more than 475 grains?
What is the probability that a randomly chosen coin weighs more than 475 grains and is genuine?
What is the probability that a randomly chosen coin weighs more than 475 grains and is counterfeit?
What is the probability that a randomly chosen coin which weighs more than 475 grains is genuine?
Reflection: How can one address the problem of minimizing total error (Type I and Type II together)?
|
{"url":"http://faculty.cns.uni.edu/~campbell/stat/inf5.html","timestamp":"2014-04-19T07:06:19Z","content_type":null,"content_length":"7579","record_id":"<urn:uuid:2ba291ee-02a9-43db-a112-4abf84c40ea6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Falling object
Q) A stone is thrown vertically upwards with a speed of 16m/s from a point h metres above the ground. The stone hits the ground 4s later. Find the value of h.
Using suvat equations, how is this done?
Thanks a lot
the SUVAT equation we need here is:
$s = vt - \frac {at^2}{2} = ut + \frac {at^2}{2}$
where $u$ is the initial velocity, $v$ is the final velocity, $s$ is the displacement or position, and $a$ is the acceleration.
However, since we did not start on the ground, $s$ has to be modified by some constant. that is, we will use
$s = ut + \frac {at^2}{2} + C$ where $C$ is the height we are when time, $t$, is zero.
now $u=16$, and $a = -9.8$
So, $s = 16t + \frac {-9.8t^2}{2} + C$
$\Rightarrow s = 16t - 4.9t^2 + C$
Now, after 4 seconds, the ball hits the ground, so it's height is zero.
$\Rightarrow s(4) = 0$
$\Rightarrow 0 = 16(4) - 4.9(4)^2 + C$
$\Rightarrow C = 14.4$
So, $s = -4.9t^2 + 16t + 14.4$
So $h = 14.4$, since that is what $s$ is when $t = 0$
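A quick numerical check of that result, for reference (same numbers as above, with $g = 9.8$):

u, a, t = 16.0, -9.8, 4.0
h = -(u * t + 0.5 * a * t**2)   # from s(4) = u*t + 0.5*a*t^2 + h = 0
print(h)                        # 14.4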
Thanks...I kept getting something like 13.06(my way). The solution in the paper returns the same answer as you, but it is done like this:
s=ut + 0.5at^2
s= ?
u= 16
v= -
a= -9.8
t= 4
What I don't understand is why, if it's done this way (using these values), it actually gives the value of h, as opposed to the distance from h to where v=0 and then to the ground, i.e., the whole journey of the rock minus the distance h.
Can you please explain how this method works?
Thanks...I kept getting something like 13.06(my way). The solution in the paper returns the same answer as you, but it is done like this:
s=ut + 0.5at^2
s= ?
u= 16
v= -
a= -9.8
t= 4
What I don't understand is why, if it's done this way (using these values), it actually gives the value of h, as opposed to the distance from h to where v=0 and then to the ground, i.e., the whole journey of the rock minus the distance h.
Can you please explain how this method works?
think of it this way. we can use $h$ instead of $C$.
Now we know that $s = ut + 0.5at^2$. however, this equation assumes that the displacement is zero when time is zero. (if you plug in $t=0$ we get 0). here that is not the case. when time is zero,
we are at a distance $h$ above the ground. so we must have:
$s = ut + 0.5at^2 + h$
Now if we plug in $t=0$, we get the distance $h$, which is what we want.
Now we simply must find $h$. we know that $s = 0$ when $t = 4$, so we simply plug those values in to find $h$
i'm not sure if i answered your question. did i?
Oh right, I see what you mean.
Thanks very much for your help
|
{"url":"http://mathhelpforum.com/math-topics/15695-falling-object.html","timestamp":"2014-04-18T01:58:40Z","content_type":null,"content_length":"49628","record_id":"<urn:uuid:1dad5bd2-5004-40b2-9f21-620c2b5ce997>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Puzzle Help - Twelve Crayons Solution
Home / Puzzle Help /
There are several possible ways to solve the puzzle.
Way 1. Making just plain figures like in Solutions 1, 2, and 5 below.
Way 2. Less obvious. You have to make your starting and finish shapes 3-dimensional. One of the most "classic" solutions is based on placing 9 crayons (which have to form the final shape with 3
squares) along edges of a cube without one corner - you need exactly 9 crayons for this. This shape has exactly 3 perfect squares.
Solutions 4 and 6 use this principle. Solution 6 describes how to build the final shape.
Way 3. Using crayons to form digits/numbers like in Solution 3.
We show all the six winning solutions.
Remove crayons a, b and c
Solution 1 by Nicole Takahashi
I would have had a hard time describing a soln to the twelve crayons, so I drew a picture (attached)*.
* Nice drawings Nicole! Thank you!
Solution 2 by Joao Paulo
Remove the red ones
Interesting puzzle I think the answer is correct
Thank you for the great site
Solution 3 by Jensen Lai
Arrange 12 crayons into 4 lines with 3 crayons in each line. This yields the numbers 1, 1, 1, and 1. Each is a perfect square. Take away three crayons and you are left with 1,1 and 1. Thus, you are
left with 3 perfect squares.
Solution 4 by Alex Packard
Arranging twelve matchsticks to give four perfect squares.
I cannot show this on a picture but it is a front square, a top square, a side square, and a square adjacent to the side square. This configuration is similar to a box with no bottom and missing
one side. This box has a 'lid' - the square adjacent to the side square.
Another way to put it is a die with two sides removed, and one more side flipped up; keep that side intact.
Removing the three crayons which comprise the 'lid' of the box (or the flipped up part that is not an integral part of the three sided die), leaves only 3 perfect squares left.
Solution 5 by Federico Bribiesca Argomedo
If we consider the following figure:
It's marking four different perfect squares with twelve crayons, if we remove the three crayons outside the 2x1 rectangle, we'll have three perfect squares and nine crayons, so we've accomplished
our goal.
Solution 6 by Jeffrey Czarnowski
Set the match sticks up in a cube. Then remove any three sticks that are perpendicular to each other (i.e. remove a corner).
Last Updated: July 8, 2007 | Posted: January 24, 2002
|
{"url":"http://puzzles.com/PuzzleHelp/TwelveCrayons/TwelveCrayonsSol.htm","timestamp":"2014-04-21T02:51:37Z","content_type":null,"content_length":"26323","record_id":"<urn:uuid:323fa8ef-cd88-4d83-a1ef-2d6fe3a06adc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classification of fat projective lines?
In section III.3.4 of Eisenbud & Harris's "The Geometry of Schemes," we/they construct an infinite family of double structures on $\mathbb{P}^1 \subset \mathbb{P}^3$ that are distinguished from each
other by their genus. Here is the construction (let's just worry about $\mathbb{C}$ for now):
Let $d$ be a non-negative integer, and let $S = \mathbb{C}[u,v,x,y]/(x^2, xy, y^2, u^dx - v^dy)$.
Then $X_d = Proj$ $S$ is a double line with arithmetic genus $-d$.
Eisenbud and Harris then go on to say that "every projective double line of genus $-d$, with $d \geq 0$, is isomorphic to $X_d$."
My question is: where can I find a proof of this statement?
More generally: if you fix a curve $C$ of genus $g$, how can I describe the moduli of genus $d$ double curves lying over top of $C$?
1 Answer
(Note: I don't assume $C$ is a curve, or even that it is smooth.)
Suppose $C'$ is a 2-fold thickening of $C$. Then $C'$ has the same underlying topological space as $C$. On that space, we have a short exact sequence of $\newcommand{\O}{\mathcal O}\O_
{C'}$-modules $$\newcommand{\L}{\mathcal L}\tag{$\dagger$} 0\to \L\to \O_{C'}\to \O_C\to 0 $$ with $\O_{C'}\to \O_C$ a ring homomorphism.
Since $C'$ is a square-zero thickening, the sheaf of ideals $\L$ squares to zero, so it is actually an $\O_C$-module. Since $C'$ is a 2-fold thickening, $\L$ is actually a line bundle
on $C$ (thus the suggestive notation). In fact, a 2-fold thickening of $C$ is no more than an extension of sheaves of rings $\O_{C'}\to \O_C$ whose kernel is a line bundle. An
isomorphism of thickenings will respect the map to $\O_C$, but need not induce the identity map on $\L$, so we care about parameterizing such extensions of algebras "up to scalar" in
some sense.
Given a line bundle $\L$, the set of extensions of algebras of the form $(\dagger)$ is parameterized by $Ext^1_{\O_C}(L_C,\L)$, where $L_C$ is the cotangent complex of $C$. This is a
special case of Theorem 1.2.3 in Chapter III of Illusie's Complexe Cotangent et Deformations I. The "up to scalar" in the previous paragraph should correspond to looking for elements of
this $Ext^1$ up to scalar. If $C$ is smooth, then $L_C\cong \Omega^1_C$ is locally free, so $Ext^1(L_C,\L)\cong H^1(C,\mathcal{T}\otimes \L)$, where $\mathcal T = (\Omega^1_C)^\vee$ is the tangent bundle of $C$.
Upshot: A 2-fold thickening is given by choosing some $\L\in Pic(C)$ and some element (up to scalar) of $Ext^1(L_C,\L)$. Note that in such a thickening $C'$, $\L$ is the conormal bundle
of $C$ in $C'$.
In the case $\newcommand{\P}{\mathbb P}C=\P^1$ and $\L=\O(d)$ ($d\ge 0$), we have that $\mathcal T=\O(2)$, and $H^1(\P^1,\O(2+d))=0$, so there is a unique extension for a given $d$. If
I haven't made a mistake in this answer, there should be two non-isomorphic thickenings of $\P^1$ with conormal bundle $\L=\O(-4)$, and an infinite number of non-isomorphic thickenings
with conormal bundle $\L=\O(d)$ if $d<-4$.
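(For the dimension count, one can use the standard fact $h^1(\P^1, \O(n)) = \max(0, -n-1)$: with $\L = \O(d)$ this gives $h^1(\P^1, \O(2+d)) = \max(0, -d-3)$, which is zero for $d \ge -3$, one-dimensional for $d = -4$, and larger for $d < -4$.)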
Special case: The element $0\in Ext^1(L_C,\L)$ corresponds to the first infintessimal neighborhood of $C$ in the total space $\newcommand{\V}{\mathbb V}\V(\L^\vee)$. Here, $C$ is
thought of as the zero section of $\V(\L^\vee)$.
A beautifully written answer, Anton! – Georges Elencwajg Mar 13 '11 at 9:35
|
{"url":"http://mathoverflow.net/questions/58288/classification-of-fat-projective-lines","timestamp":"2014-04-17T07:23:19Z","content_type":null,"content_length":"54244","record_id":"<urn:uuid:38f22e22-6841-42eb-9b62-2f13b5f6e47e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Eaton, CO Algebra Tutor
Find an Eaton, CO Algebra Tutor
...I have been a PowerPoint user since 1995. I have constructed presentations for simple group meetings, executive presentations and over 25 conference presentations. I understand not only how to develop all the features in PowerPoint but also how to teach someone what level of complexity is appropriate for the type of presentation you are giving.
14 Subjects: including algebra 1, algebra 2, physics, Microsoft Excel
...During this program I spent time as a teacher's assistant for a human anatomy course and also worked in a research lab on campus. I know that my success in college, my many years of teaching
experience, along with passing the Praxis II: General Science (with top honors of ETS Recognition of Exce...
26 Subjects: including algebra 2, algebra 1, reading, geometry
...Now I would love to help you succeed as well. Looking forward to working with you, Anne. I have experience with comprehensive math classes through calculus, including Nursing Math. I have also
excelled at a variety of science classes.
13 Subjects: including algebra 2, algebra 1, geometry, statistics
Hello everyone! My name is Scott and I have been a substitute teacher in Greeley, Colorado for three years. I have a dual major History/English undergraduate degree and a Masters degree in
28 Subjects: including algebra 1, algebra 2, English, reading
...During my undergrad I tutored in biology, chemistry, math, and physics. I now focus on biology because that is my specialty. I love to teach college courses.
13 Subjects: including algebra 1, biology, elementary (k-6th), anatomy
Related Eaton, CO Tutors
Eaton, CO Accounting Tutors
Eaton, CO ACT Tutors
Eaton, CO Algebra Tutors
Eaton, CO Algebra 2 Tutors
Eaton, CO Calculus Tutors
Eaton, CO Geometry Tutors
Eaton, CO Math Tutors
Eaton, CO Prealgebra Tutors
Eaton, CO Precalculus Tutors
Eaton, CO SAT Tutors
Eaton, CO SAT Math Tutors
Eaton, CO Science Tutors
Eaton, CO Statistics Tutors
Eaton, CO Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Eaton_CO_Algebra_tutors.php","timestamp":"2014-04-21T10:56:25Z","content_type":null,"content_length":"23516","record_id":"<urn:uuid:70ea5cbb-7d1e-4e78-8700-8a92681cdd2f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Response to Slate: How the recent article on technology misses the point.
Ah, summer. A great time to kick back, relax, and have time to write reactions to things that bug me.
I read through the article on Slate titled 'Why Johnny Can't Add Without a Calculator' and found it to be a rehashing of a whole slew of arguments that drive me nuts about technology in education. It
also does a pretty good job of glossing over a number of issues relative to learning math.
The problem isn't that Johnny can't add without a calculator. It's that we sometimes focus too much about turning our brain into one.
This was the sub-heading underneath the title of the article:
Technology is doing to math education what industrial agriculture did to food: making it efficient, monotonous, and low-quality.
The author then describes some ancedotes describing technology use and implementation:
• An experienced teacher forced to give up his preferred blackboard in favor of an interactive whiteboard, or IWB.
• A teacher unable to demonstrate the merits of an IWB beyond showing a video and completing a demo of an electric circuit.
• The author trying one piece of software and finding it would not accept an answer without sufficient accuracy.
I agree with the author's implication that blindly throwing technology into the classroom is a bad idea. I've said many times that technology is only really useful for teaching when it is used in
ways that enhance the classroom experience. Simply using technology for its own sake is a waste.
These statements are true about many tools though. The mere presence of one tool or another doesn't make the difference - it is all about how the tool is used. A skilled teacher can make the most of
any textbook - whether recently published or decades old - for the purposes of helping a student learn. Conversely, just having an interactive whiteboard in the classroom does not make students learn
more. It is all about the teacher and how he or she uses the tools in the room. The author acknowledges this fact briefly at the end in arguing that the "shortfall in math and science education can
be solved not by software or gadgets but by better teachers." He also makes the point that there is no "technological substitute for a teacher who cares." I don't disagree with this point at all.
The most damaging statements in the article surround how the author's misunderstanding of good mathematical education and learning through technology.
Statement 1: "Educational researchers often present a false dichotomy between fluency and conceptual reasoning. But as in basketball, where shooting foul shots helps you learn how to take a
fancier shot, computational fluency is the path to conceptual understanding. There is no way around it."
This statement gets to the heart of what the author views as learning math. I've argued in previous posts on how my own view of the relationship between conceptual understanding and learning
algorithms has evolved. I won't delve too much here on this issue since there are bigger fish to fry, but the idea that math is nothing more than learning procedures that will someday be used and
understood does the whole subject a disservice. This is a piece of the criticism of Khan Academy, but I'll leave the bulk of that argument to the experts.
I will say that I'm really tired of the sports skills analogy for arguing why drilling in math is important. I'm not saying drills aren't useful, just that they are never the point. You go through
drills in basketball not just to be able to do a fancier shot (as he says) but to be able to play and succeed in a game. This analogy also falls short in other subjects, a fact not usually brought up
by those using this argument. You spend time learning grammar and analysis in English classes (drills), but eventually students are also asked to write essays (the game). Musicians practice scales
and fingering (drills), but also get opportunities to play pieces of music and perform in front of audiences (the game).
The general view of learning procedures as the end goal in math class is probably the most destructive reason why people view math as something acceptable not to be good at. Learning math this way
can be low-quality because it is "monotonous [and] efficient", which is not technology's fault.
One hundred percent of class time can't be spent on computational fluency with the expectation that one hundred percent of understanding can come later. The two are intimately entwined, particularly
in the best math classrooms with the best teachers.
Statement 2: "Despite the lack of empirical evidence, the National Council of Teachers of Mathematics takes the beneficial effects of technology as dogma."
If you visit the link the author includes in his article, you will see that what NCTM actually says is this:
"Calculators and other technological tools, such as computer algebra systems, interactive geometry software, applets, spreadsheets, and interactive presentation devices, are vital components of a
high-quality mathematics education."
...and then this:
"The use of technology cannot replace conceptual understanding, computational fluency, or problem-solving skills."
In short, the National Council for Teachers of Mathematics wants both understanding and computational fluency. It really isn't one or the other, as the author suggests.
The author's view of what "technology" entails in the classroom seems to be the mere presence of an interactive whiteboard, new textbooks, calculators in the classroom, and software that teaches
mathematical procedures. This is not what the NCTM intends the use of technology to be. Instead the use of technology allows exploration of concepts in ways that cannot be done using just a
blackboard and chalk, or pencil and paper. The "and other technological tools next to calculators in the quote has become much more significant over the past five years, as Geometers Sketchpad,
Geogebra, Wolfram Alpha, and Desmos have become available.
Teachers must know how to use these tools for the nature of math class to change to one that emphasizes mathematical thinking over rote procedure. If they don't, then math continues as it has been
for many years: a set of procedures that students may understand and use some day in the future. This might be just fine for students that are planning to study math, science, or engineering high
school. What about the rest of them? (They are the majority, by the way.)
Statement 3: "...the new Common Core standards for math...fall short. They fetishize “data analysis” without giving students a sufficient grounding to meaningfully analyze data. Though not as
wishy-washy as they might have been, they are of a piece with the runaway adaption of technology: The new is given preference over the rigorous."
If "sufficient grounding" here means students doing calculations by hand, I completely disagree. Ask a student to add 20 numbers by hand to calculate an average, and you'll know what I mean. If calculation is the point of a lesson, I'll have students calculate. The point of data analysis is not computation. The fact that the tools take the rigor out of calculation does not diminish the mathematical thinking involved.
Statement 4: "Computer technology, while great for many things, is just not much good for teaching, yet. Paradoxically, using technology can inhibit understanding how it works. If you learn how
to multiply 37 by 41 using a calculator, you only understand the black box. You’ll never learn how to build a better calculator that way."
For my high school students, I am not focused on students understanding how to multiply 37 by 41 by hand. I do expect them to be able to do it. Usually when my students do get it wrong, it is because
they feel compelled to do it by hand because they are taught (in my view incorrectly) that doing so is somehow better, even when a calculator sits in front of them. As with Statement 3, I am not
usually interested in students focusing on the details of computation when we are learning difference quotients and derivatives. This is where technology comes in.
I tweeted a request to the author to check out Conrad Wolfram's TED Talk on using computers to teach math, and asked for a response. I still haven't heard back. I think it would be really revealing
for him to listen to Wolfram's points about computation, the traditional arguments against computation, and the reasons why computers offer students new opportunities to explore concepts in ways they
could not with mere pencil and paper. His statement that math is much more than computation has really changed the way I think about teaching my students math in my classroom.
Statement 5: "Technology is bad at dealing with poorly structured concepts. One question leads to another leads to another, and the rigid structure of computer software has no way of dealing with
this. Software is especially bad for smart kids, who are held back by its inflexibility."
Looking at computers used purely as rote instruction tools, I completely agree. That is a fairly narrow view of what learning mathematics can be about.
In reality, technology tools are perfectly suited for exploring poorly structured concepts because they let a student explore the patterns of the big picture. The situation in which "one question
leads to another" is exactly what we want students to feel comfortable exploring in our classroom! Finally, software that is designed for this type of exploration is good for the smart students (who
might quickly make connections between different graphical, algebraic, and numerical representations of functions, for example) and for the weaker students that might need different approaches to a
topic to engage with a concept.
The truly inflexible applications of technology are, sadly, the ones that are also associated with easily measured outcomes. If technology is only used to pass lectures and exercises to students so
they can perform well on standardized tests, it will be "efficient, monotonous, and low quality" as the author states at the beginning.
The hope that throwing calculators or computers in the classroom will "fix" problems of engagement and achievement without the right people in the room to use those tools is a false one, as the
author suggests. The move to portray mathematics as more than a set of repetitive, monotonous processes, however, is a really good thing. We want schools to produce students that can think
independently and analytically, and there are many ways that true mathematical thinking contributes to this sort of development. Technology enables students to do mathematical thinking even when
their computation skills are not up to par. It offers a different way for students to explore mathematical ideas when these ideas don't make sense presented on a static blackboard. In the end, this
gets more students into the game.
This should be our goal. We shouldn't go back to the most basic textbooks and rote teaching methods just because they have always worked for the strongest math students. There must have been a form of
mathematical Darwinism at work there - the students that went on historically were the ones that could manage the methods. This is why we must be wary of the argument often made that since a
pedagogical method "worked for one person" that that method should be continued for all students. We should instead be making the most of resources that are available to reach as many students as
possible and give them a rich experience that exposes them to the depth and variety associated with true mathematical thinking.
4 Responses to A Response to Slate: How the recent article on technology misses the point.
1. I wish that everyone would read this excellent analysis. Every day, I meet adults who proudly claim their dislike of or ineptitude in mathematics. I am certain that some of these feelings come
from the soul-killing, mind-numbing experience of calculation after calculation after calculation with no connection to anything meaningful. It's been a very long time since I've been in the
classroom (as a teacher), and I would be terrified to face an IWB. BUT that tool is merely a tool. And I would gladly use it, if it meant helping students understand the *concepts* and *meaning*
of the mathematics behind the computations. Students don't learn math from drills. They learn math from (duh) exploring mathematical concepts and applications.
Thanks for the comments!
I think the idea that technology is only a tool is frightening to those that are looking for easy ways to improve math achievement in a classroom. I found that teaching with it initially
provided more engagement to students because it was new and different. I made sure, however, to change the way I presented things so that I made the most of its capabilities in my using it to
teach. There are many things that it made much easier than just using a blackboard or whiteboard, the big one being the elimination of dead time spent erasing or drawing diagrams. It took a
lot of effort though to use it to change how ideas were presented, and this took time. I still required my students to practice their skills given the additional insight (I hope) my
presentation added using the technology.
For technology to be useful (and Kakaes definitely points this out in the article and one written since) the teacher must know how to use it. This requires serious investment on the part of
administrators for training and time spent playing around with the technology if it is to make a difference.
2. While there are parts of this response that I agree with, I'm still troubled by a few things:
1) The belief that there is a way to achieve understanding in mathematics without computational fluency. I've not yet seen a student in my 13 years of teaching who truly had one without the
other. I don't disagree with the statement that "One hundred percent of class time can’t be spent on computational fluency with the expectation that one hundred percent of understanding can come
later. The two are intimately entwined, particularly in the best math classrooms with the best teachers." But this gets to a bigger point in Kakaes' article, which this post ignores - there
aren't a whole lot of these "best teachers" out there, especially in K-8.
2) To point #3, on the Common Core Standards and data analysis - A better question might be: what's the purpose of teaching statistics (beyond mean, median, mode, and perhaps - perhaps - the normal curve and standard deviation) for most students? Is there one?
3) Point 4 - you expect your students to be able to multiply 37 by 41 by hand - and I agree with you. But by the time I see them, in high school, it has been such a long time since they've done
it that many don't remember the algorithm. And again, without well-prepared K-8 teachers, they won't.
4) Point 5 - I disagree that technology allows students to do mathematical thinking even when their computational skills are not up to par. As far as I can tell, the only part of mathematics that
can be suitably explored and understood via technology without a modicum of computational skills to back them up, in my experience, is geometry. And the technology for that subject is great - I
agree completely with you. It's just too bad that many of our students are spending time with Geometer's Sketchpad or Geogebra before they've had a chance to measure some angles for real with a
protractor or bisect some angles with a real (!) compass and straightedge. But I'm not convinced that allowing students to explore linear (or whatever type of) equations on Desmos.com or Wolfram|
Alpha accomplishes anything meaningful unless they have the number sense to comprehend the result.
5) To me, though, the real point of Kakaes' article, and one I'd love to hear your thoughts on, is the amazing lack of mathematical preparedness of many K-8 teachers, and the concomitant rush to
technology that schools are now going to in order to remedy this deficit. It's wonderful that high school teachers and teachers of other grades who are math experts can use this amazing new
technology in our classrooms. But until our students are coming to us with a stronger background - more computationally fluent AND with deeper basic understanding - we will be in the woods,
wondering why all of this new technology is not developing greater knowledge of and appreciation for mathematics in our kids.
3. I appreciate your thoughtful comments. I'll try to address them as best as I can.
The point about K-8 preparation is an important one. Schools are flocking to solutions like Khan Academy and instructional software because these options have the appearance of being
well-developed in both content and pedagogy, thereby bypassing any deficiencies teachers have in these areas. The reality is that these options are sorely lacking in both, and an effort is
currently underway to show precisely how Khan academy falls short. It is easy to sit students in front of these tools and see rows of students staring at screens as evidence that learning is
going on, so it's also easy to conclude that this is a productive way to apply technology funds. My response was not meant to address the reasons that technology is often thrown in classrooms in
inappropriate ways. I bring this up every time I see anyone excited about getting access to technology in classrooms without also knowing how it will change their instruction. Preparation
programs for mathematics teachers at universities are, more often than not, painfully inadequate in preparing teachers pedagogically and mathematically for teaching K-12 mathematics.
The problem I still see in starting sentences with "until students are coming to us with a stronger background" is the idea that students can't do without the background knowledge and can't
possibly even explore the mathematics that depends on it. Looking at the big picture through technology can give students the intuition to then understand the background knowledge through
observing patterns and testing theories - exactly what mathematics is supposed to be about. Conventional wisdom is that students have to memorize the unit circle before making a graph of f(x) =
sin(x). Why not have them make observations of the function graph and its properties? If we are willing to put students through drilling a concept or skill over and over again in order to follow
subsequent procedures that require this concept or skill, why not reverse the process? Technology makes this possible. It can be used to give the answer, and then have students figure out where
those answers come from. This is a much richer process than merely following procedures. With only a pencil and paper, this isn't possible.
As for using Geometer's Sketchpad or Geogebra - you are absolutely right that students should be trying to do things by paper and pencil. I go back and forth myself between technology and pencil
and paper activities. My students get the tactile experience of plotting by hand, bisecting angles and segments, and comparing triangles cut out of cardboard, not just on a screen. This keeps
things interesting, as having the same tool the whole time gets boring.
I also agree that conceptual understanding and computational abilities can go hand in hand, but they don't have to. They are definitely related. I have had students with 100% computational
ability with no conceptual understanding. These students typically lose most of their knowledge from my class after the final exam. The more typical example though is a student that makes
arithmetic mistakes but can explain mathematical concepts and ideas clearly and can figure out an appropriate solution method for a problem. If I'm testing students on algebra, I want to test
their abilities in algebra, not arithmetic. I think they are distinct. You can understand the idea of using inverse properties to solve an equation while also getting -18 + 14 incorrect. To show
that both are important, a teacher must devote appropriate time to both in the classroom.
Finally, on the point about data analysis and statistics: there is too much data out in the world to not have students learn some tools to understand what it means. Constructing models for the
real world is the most powerful thing that math allows us to do. Evaluating those models and knowing how to navigate the use of statistics in the real world is something our students must
understand to be literate in today's society.
I know I've said a lot here - I figure it is better to get it all out there. Skim as needed and call me out on anything you disagree with me on.
Filed under reflection, teaching philosophy, Uncategorized
|
{"url":"http://evanweinberg.com/2012/06/28/a-response-to-slate-how-the-recent-article-on-technology-misses-the-point/","timestamp":"2014-04-19T20:23:42Z","content_type":null,"content_length":"57514","record_id":"<urn:uuid:7bdeea81-0be6-4aa7-8bd7-405fa9e20b86>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Hi angie38;
Welcome to the forum.
First find the slope of the line you are given by solving for y.
The slope of this line is 2 / 3. For a line to be perpendicular to this one, its slope must be the negative reciprocal. So therefore the slope of the new line is -(3 / 2).
The general form of the line you want is y = mx + b. Since we know the slope, m = -(3 / 2), this becomes y = -(3 / 2)x + b.
You should be able to solve for b and get the equation of the line that is perpendicular to the given one and passes through the point (-2,-2).
|
{"url":"http://www.mathisfunforum.com/post.php?tid=18381&qid=239921","timestamp":"2014-04-20T16:56:12Z","content_type":null,"content_length":"20275","record_id":"<urn:uuid:6c77915a-215c-41a4-8cd3-c423e6109091>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hurry up please it’s time
June 1st, 2007 by Hancock
I mentioned the problem of representing time in a
(working) mathematical model.
For many purposes, the natural numbers are perfect. This time is discrete,
has an origin, and no end.
For other purposes, we may prefer the (positive and negative) integers
[no beginning]. Or the rational numbers [dense: between two moments,
a third]. Or maybe the real numbers, or some interval, or (who knows)
the irrationals,… whatever.
Yampa takes very seriously the idea of values that are a function of
continuous time – these are called signals, and are in some sense
virtual – the real things are signal transformers. Discrete streams
are captured by maybe-valued signals f : T -> A+1.
Now I think I understand T=natural numbers quite well, as a signal
transformer is then of type Str A -> Str B, and any such function
which is continuous has an implementation by an object of type
nu x. mu y. B * x + (A -> y)
But this depends on Str B being a final coalgebra, at least
weakly. How far can one push eating techniques to time-domains like
the real numbers, or whatever suits a digital-analog interface?
Lamport’s notion of stuttering
I mentioned an attractive and unusual approach to time taken by Leslie Lamport.
(Search for the word stuttering throughout the page.)
Lamport restricts his specification language TLA (a form of
linear-time temporal logic) so that the temporal properties it can
state (of streams of states) are all invariant under stuttering:
introducing or deleting finite repetition. The reason he does this is
deeply bound up with his notion of what is a (low-level)
implementation of a (high-level) specification.
Think of it like this: a formula of temporal logic describes a movie
of the system being modelled. The camera’s shutter speed must be fast
enough to capture every state-change that takes place, It can be
faster, in which case there will be repeated frames.
Because the formulas are stuttering-insensitive, they are not so much
properties of movies, as properties of what a movie is of. If you see
what I mean.
Thorsten held forth for a while about “special double relativity”,
or something like that.
|
{"url":"http://sneezy.cs.nott.ac.uk/fplunch/weblog/?p=62","timestamp":"2014-04-17T19:28:41Z","content_type":null,"content_length":"10782","record_id":"<urn:uuid:635228a5-edad-43fa-ab45-bc5a4c5b559b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Plainfield, IL Algebra 1 Tutor
Find a Plainfield, IL Algebra 1 Tutor
...I love teaching, and whilst I await registration at state level here I really want to continue to teach, tutor and help wherever possible. Secondary teaching Maths in Scotland includes
everything from basic skills (i.e., 5 year old ability due to our inclusive system) up to calculus, geometry, t...
9 Subjects: including algebra 1, calculus, geometry, SAT math
...I'm a life-long learner and desire to help others. I have years of experience helping children, teenagers and adults. My goal is to make you a better student, learner and person.
13 Subjects: including algebra 1, reading, geometry, AutoCAD
...I have a special talent for encouraging students who need extra help in American History. I have taught 3rd grade as a regular certified teacher and have given tutorial services to several
students this year. I also have passed the WyzAnt vocabulary and reading test with excellent result.
19 Subjects: including algebra 1, reading, English, physics
...Lastly, I have completed 31/41 credits for my M.S. Special Education degree from Western Governors University. I have one year's worth of experience working with students with disabilities,
including Autism, and I have practice Whole Brain Teaching by Chris Biffle, et al.
19 Subjects: including algebra 1, reading, English, writing
...I have successfully tutored students in Pre-Algebra, Algebra I & II, Geometry, College Algebra, and Biology. Students are more confident, parents are happier, and all are pleased with the
report card results. I usually meet with students in the evening at our local library.
13 Subjects: including algebra 1, geometry, biology, algebra 2
|
{"url":"http://www.purplemath.com/plainfield_il_algebra_1_tutors.php","timestamp":"2014-04-16T05:05:46Z","content_type":null,"content_length":"24015","record_id":"<urn:uuid:2f61e2fe-f1bb-49ed-9b59-150b7a5dc137>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
geometric problems of antiquity
geometric problems of antiquity, three famous problems involving elementary geometric constructions with straight edge and compass, conjectured by the ancient Greeks to be impossible but not proved
to be so until modern times. The three problems are: (1) the duplication of the cube, also known as the Delian problem because it is said to have originated with the task of constructing a cubical
altar at Delos having twice the volume of the original cubical altar; (2) the trisection of an arbitrary angle; (3) the squaring, or quadrature, of the circle, i.e., the construction of a square
whose area is equal to that of a given circle. These problems were solved in the 19th cent. by first transforming them into algebraic problems involving "constructible numbers." A constructible
number is one that can be obtained from a whole number by means of addition, subtraction, multiplication, division, or extraction of square roots. The problems of antiquity correspond to the
following algebraic problems: (1′) Is the cube root of 2 constructible? (2′) Given an angle A for which cos A is constructible, is cos (A/3) constructible? (3′) Is the area
π of a unit circle constructible? The number that is the cube root of 2 is not constructible, since it involves a cube root. (Note, however, that roots that are powers of 2,
e.g., 4th, 8th, 16th roots, are constructible because they can be expressed as combinations of square roots.) In problem (2′), certain special angles can be trisected, e.g., 90°, since both cos 90°
and cos 30° are constructible, but for most angles this is easily shown to be impossible. Finally, the solution of problem (3′) did not come until 1882, when the German Ferdinand Lindemann showed
that π is a transcendental number and thus cannot be expressed in terms of any roots of any rational numbers (see number). Although these problems cannot be solved using only straight edge and
compass, the Greeks developed methods of solving them using higher curves.
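(A worked illustration, not part of the encyclopedia entry: by the identity cos 3A = 4cos³A − 3cos A, trisecting a 60° angle amounts to constructing x = cos 20°, a root of 8x³ − 6x − 1 = 0. This cubic has no rational roots and is therefore irreducible over the rationals, so cos 20° is not a constructible number.)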
See F. Klein, Famous Problems of Elementary Geometry (1956).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
More on geometric problems of antiquity from Fact Monster:
See more Encyclopedia articles on: Mathematics
|
{"url":"http://www.factmonster.com/encyclopedia/science/geometric-problems-antiquity.html","timestamp":"2014-04-18T15:03:49Z","content_type":null,"content_length":"22408","record_id":"<urn:uuid:3abad51b-f1f2-478a-9c74-c5d92635ce9b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Microwave VNAs Add Nonlinear Network Analysis
Adverse effects of nonlinear behavior in active and passive high-frequency components can disrupt communications systems. As a result, it is desirable to understand such behavior, which requires the
right test equipment. To help, Agilent Technologies (www.agilent.com) has announced nonlinear vector network analyzer (NVNA) capability for its PNA-X series of microwave vector network analyzers
(VNAs). Working with new software, the analyzers can perform both nonlinear component characterization and directly measure nonlinear scattering parameters called X-parameters from 10 MHz to 26.5 GHz.
The Agilent NVNA software transforms one of the firm's four-port PNA-X VNA systems into a nonlinear VNA (see figure). The NVNA capability is based on a standard PNA-X microwave network analyzer, which
maintains its linear measurement capabilities.
A four-port PNA-X with NVNA capability measures all of the input and output spectra for a device under test (DUT), including fundamental input and output signals, harmonics, and cross-frequency
products generated by the DUT's intermodulation distortion (IMD). The analyzer can display all amplitudes and phases of these products, along with the relative amplitude and phase of any frequencies
of interest. Such information can be used to develop a matching circuit and/ or filter to minimize unwanted distortion from a power amplifier (PA). The test data can be shown in frequency, time, or
power domains, as well as in terms of user-defined ratios. For example, if a DUT's output is distorted in the time domain, a user can switch to the frequency domain to identify the amplitude and
phase of individual frequency components. Input test power can be varied to study the sensitivity of a DUT to different power levels.
The NVNA also provides a nonlinear pulse-envelope domain measurement, which enables researchers to gain a deeper understanding of the memory effects exhibited by their devices by displaying the
harmonic pulse envelopes. The pulse amplitude and phase can be displayed in the time domain, showing changes as a function of time.
The NVNA measures X-parameters to better understand a DUT under saturated signal conditions. In contrast to the four standard S-parameters, the number of X-parameters for a given measurement can be
much higher, since X-parameters represent a DUT's crossfrequency dependencies under nonlinear conditions. One X-parameter might be the gain of the output fundamental frequency as a function of the
input third-harmonic frequency. Because of the synergy of Agilent's test and computer-aided-engineering (CAE) tools, the X-parameters measured with the NVNA system can be used in Agilent's Advanced
Design System to accurately simulate and design using nonlinear components, module and systems. P&A: $56,000 and up (Agilent N5242A PNA-X with NVNA options); stock. Agilent Technologies, Electronic
Measurements Group, 5301 Stevens Creek Blvd., MS 54LAK, Santa Clara, CA 95052; Internet: www.agilent.com.
|
{"url":"http://mwrf.com/print/test-and-measurement/microwave-vnas-add-nonlinear-network-analysis","timestamp":"2014-04-17T04:21:39Z","content_type":null,"content_length":"16952","record_id":"<urn:uuid:07742469-1acb-4e12-bcfc-9087b457be61>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fin515 Final Exam
Fin515 Final Exam
Final Exam Page 1
1. (TCO A) Which of the following does NOT always increase a company's market value? (Points : 5)
Increasing the expected growth rate of sales
Increasing the expected operating profitability (NOPAT/Sales)
Decreasing the capital requirements (Capital/Sales)
Decreasing the weighted average cost of capital
Increasing the expected rate of return on invested capital
2. (TCO F) Which of the following statements is correct? (Points : 5)
The NPV, IRR, MIRR, and discounted payback (using a payback requirement of 3 years or less) methods always lead to the same accept/reject decisions for independent projects.
For mutually exclusive projects with normal cash flows, the NPV and MIRR methods can never conflict, but their results could conflict with the discounted payback and the regular IRR methods.
Multiple IRRs can exist, but not multiple MIRRs. This is one reason some people favor the MIRR over the regular IRR.
If a firm uses the discounted payback method with a required payback of 4 years, then it will accept more projects than if it used a regular payback of 4 years.
The percentage difference between the MIRR and the IRR is equal to the project’s WACC.
3. (TCO D) Church Inc. is presently enjoying relatively high growth because of a surge in the demand for its new product. Management expects earnings and dividends to grow at a rate of 25% for the
next 4 years, after which competition will probably reduce the growth rate in earnings and dividends to zero, i.e., g = 0. The company's last dividend, D0, was $1.25, its beta is 1.20, the market
risk premium is 5.50%, and the risk-free rate is 3.00%. What is the current price of the common stock?
a. $26.77
b. $27.89
c. $29.05
d. $30.21
e. $31.42
(Points : 20)
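A quick check of question 3 (a sketch, not part of the original exam paper) using the standard non-constant-growth dividend discount model in Python:

d0, g, years = 1.25, 0.25, 4
rs = 0.03 + 1.20 * 0.055          # CAPM: rf + beta * market risk premium = 9.6%

price, d = 0.0, d0
for t in range(1, years + 1):
    d *= 1 + g                    # dividend grows 25% a year for 4 years
    price += d / (1 + rs) ** t    # discount each dividend back to today
# After year 4 growth is zero, so the stock at t=4 is a perpetuity worth D4/rs:
price += (d / rs) / (1 + rs) ** years
print(round(price, 2))            # 29.05, i.e. answer (c)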
4. (TCO G) Singal Inc. is preparing its cash budget. It expects to have sales of $30,000 in January, $35,000 in February, and $35,000 in March. If 20% of sales are for cash, 40% are credit sales paid
in the...
|
{"url":"http://www.termpaperwarehouse.com/essay-on/Fin515-Final-Exam/184067","timestamp":"2014-04-19T14:56:35Z","content_type":null,"content_length":"19063","record_id":"<urn:uuid:eecedcc0-bdad-437b-a80c-178b8f7fb252>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comfortable with shehes
A Bag of Contradictions
From: F William Lawvere, To: Posina Venkata Rayudu, Date: 30 Aug 2011
The lack of structure expressed by “completely distinguished from each other but with no distinguishing properties” for the elements of an abstract set is of course not an end in itself, but an
attempt to insure that when we model mathematical objects as diagrams of maps in the category of sets, that background contaminates as little as possible and all properties are a result of the
explicit assumptions in a particular discussion. The ancient Greeks had a similar concept of arithmos, in some sense the minimal structure derived from a given situation to which one could assign a
number; in other words, an abstract set is just one hair less abstract than a cardinal number, but that enables them to potentially carry mappings, which cardinals are too abstract in concept (being
equivalence classes) to do.
This sharp contradiction becomes very objective whenever we have a topos defined over another one by a
Unity & Identity of Adjoint Opposites
Discrete …. Points …. Codiscrete
where each is left adjoint to the next. The two opposites map the lower more abstract category to opposite ends of the upper more cohesive or variable category, while the middle abstracts from each
object in the upper its denuded version.
1. To: categories@mta.ca, Subject: categories: Adjoint cylinders, From: F W Lawvere wlawvere@acsu.buffalo.edu, Date: 1 Nov 2000
I would be happy to learn the results which Till Mossakowski has found concerning those situations involving an Adjoint Unity and Identity of Opposites as I discussed in my “Unity and Identity of
Opposites in Calculus and Physics“, in Applied Categorical Structures vol.4, 167-174,
Two parallel functors are adjointly opposite if they are full and faithful and if there is a third functor left adjoint to one and right adjoint to the other; the two subcategories are opposite
as such but identical if one neglects the inclusions.
A simple example which I recently noted is even vs odd. That is, taking both the top category and the smaller category to be the poset of natural numbers, let L(n)=2n but R(n)=2n+1. Then the
required middle functor exists; a surprising formula for it can be found by solving a third-order linear difference equation.
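(An aside not in the original correspondence: writing F for the middle functor, the adjunction inequalities 2n ≤ m ⟺ n ≤ F(m) and F(m) ≤ n ⟺ m ≤ 2n+1 both force F(m) = ⌊m/2⌋, which has the closed form F(m) = (2m − 1 + (−1)^m)/4 — the sort of expression a linear difference equation would deliver.)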
I hope that Till Mossakowski’s results may help to compute some other number-theoretic functions that arise by confronting Hegel’s Aufhebung idea (or one mathematical version of it) with
multi-dimensional combinatorial topology. Consider the set of all such AUIO situations within a fixed top category. This set of “levels” is obviously ordered by any of the three equivalent conditions:
L1 belongs to L2, R1 belongs to R2, F2 depends on F1.
(Here “belongs” and “depends” just mean the existence of factorizations, but in dual senses). However there is also the stronger relation that
L1 might belong to R2;
for a given level, there may be a smallest higher level which is strongly higher in that sense, and if so it may be called the Aufhebung of the given level.
In case the given containing category is such that the set of all levels is isomorphic to the natural numbers with infinity (the top) and minus infinity (the initial object=L and terminal object=
R), then the Aufhebung exists, but the specific function depends on the category. Mike Roy in his 1997 U. of Buffalo thesis studied the topos of ball complexes, finding in particular that both
Aufhebung and coAufhebung exist and that they are both equal to the successor function on dimensions.
Still not calculated is that function for the topos of presheaves on the category of nonempty finite sets. This category is important logically because the presheaf represented by 2 is generic
among all Boolean algebra objects in all toposes defined over the same base topos of sets, and topologically because of its close relation with classical simplicial complexes. Here the levels or
dimensions just correspond to those subcategories of finite sets that are closed under retract. It is easy to see that the Aufhebung of dimension 0 (the one point set) is dimension 1 (the
two-point set and its retracts), but what is the general formula?
2. Elementary nature of the notion of adjointness
The motivation for introducing 40 years ago the construction, of which “categories of elements” is a special case, was to make clear the elementary nature of the notion of adjointness. Given an
opposed pair of functors between two arbitrary given categories, one obviously elementary way of providing them with an adjointness is to give two natural transformations satisfying two
equations; but very useful also is the definition in terms of bijections of hom sets which should be equivalent. The frequent mode for expressing the latter in terms of presheaf categories
involved the complicated logical notion of “smallness” and the additional axiom that a category of small sets actually exists, but had the disadvantage that it would therefore not apply to
arbitrarily given categories. By contrast, a formulation of this bijection in terms of discrete fibrations required no such additional apparatus and was universally applicable.
Unfortunately, since I had given the construction no name, people in reading it began to use the unfortunate term “comma”. It would indeed be desirable to have a more objective name for such a
basic construction. (The notation involving the comma was generalized from the very special case when the two functors to B, to which the construction is applied, both have the category 1 as
domain, and the result of the construction is the simple hom set in B between the two objects, which is often denoted by placing a comma between the names of the objects and enclosing the whole
in parentheses.)
One habit which it would be useful to drop is that of agonizing over the true definition of elements. In any category the elements of an object B are the maps with codomain B, these elements
having various forms which are their domains. For example, if the category has a terminal object, we have in particular the elements often called punctiform. On the other hand, it is often
appropriate to apply the term point to elements more general than that, for example, in algebraic geometry over a non-algebraically closed field, points are the elements whose forms are the
spectra of extensions of the ground field. As Volterra remarked already in the 1880s, the elements of a space are not only points, but also curves, etc.; it is often convenient to use the term
“figure” for elements whose forms belong to a given subcategory.
Date: 30 Sep 2003, From: F W Lawvere, Subject: categories: Re: Categories of elements (Pat Donaly)
Trackbacks & Pingbacks
|
{"url":"http://conceptualmathematics.wordpress.com/2012/09/23/comfortable-with-shehes/","timestamp":"2014-04-16T10:48:20Z","content_type":null,"content_length":"78892","record_id":"<urn:uuid:8b84b03f-af14-4123-8ca7-5fcba2602404>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PDEs solvable as ODEs
May 9th 2009, 12:06 AM #1
PDEs solvable as ODEs
We have started PDEs in our Math course in BE ME. We are following Kreyszig and I had problems understanding the basic concepts. It says "PDEs SOLVABLE AS ODEs: It happens if a PDE involves
derivatives with respect to one variable only (or can be transformed to such a form), so that the other variable(s) can be treated as parameter(s)"
There's an example I am confused in,
$u_{xy} = -u_x$ where $u=u(x,y)$
The steps go as,
$u_x = p$, $p_y = -1$, $\frac{p_y}{p}=-1$, $\ln p = -y + c(x)$ and by integration with respect to x,
$u(x,y) = f(x)e^{-y} + g(y)$ where $f(x) = \int c(x) dx$
I do not understand the bold part and the natural log step how does $y$ come in ?
We have started PDEs in our Math course in BE ME. We are following Kreyszig and I had problems understanding the basic concepts. It says "PDEs SOLVABLE AS ODEs: It happens if a PDE involves
derivatives with respect to one variable only (or can be transformed to such a form), so that the other variable(s) can be treated as parameter(s)"
There's an example I am confused in,
$u_{xy} = -u_x$ where $u=u(x,y)$
The steps go as,
$u_x = p$, $p_y = -1$, $\frac{p_y}{p}=-1$, $\ln p = -y + c(x)$ and by integration with respect to x,
$u(x,y) = f(x)e^{-y} + g(y)$ where $f(x) = \int c(x) dx$
I do not understand the bold part and the natural log step how does $y$ come in ?
Let's start with a simpler example. Suppose that $u = x^2 + f(y)$ where $f(y)$ is arbitrary. So taking the $x$ derivative gives $u_x = 2x$. Now given $u_x = 2x$ find $u$. So $\partial u = 2x\, \partial x$ (separable, just like ODEs) and integrating gives $u = x^2 + c$. This would be correct if it was an ODE but it's a PDE so instead of a constant of integration, we should have a
function of integration and with respect to the other variable. So in our case $u = x^2 + f(y)$
Now for your question, you have $\frac{p_y}{p} = - 1$ or $\frac{\partial p}{p} = - \partial y$ however we typically write $\frac{d p}{p} = - d y$ noting it's partial integration (integration wrt a
certain variable)
so $\ln p = - y + c(x)$ or $p = e^{-y + c(x)}$
$u_x = e^{-y} e^{c(x)}$
so $du = e^{-y} e^{ c(x)}dx \; \Rightarrow\; u = f(x) e^{-y} + g(y)$
where $f(x) = \int e^{c(x)}dx$ (Since $c(x)$ is arbitrary then $\int e^{c(x)}dx$ is arbitrary, which we call $f(x)$).
We have started PDEs in our Math course in BE ME. We are following Kreyszig and I had problems understanding the basic concepts. It says "PDEs SOLVABLE AS ODEs: It happens if a PDE involves
derivatives with respect to one variable only (or can be transformed to such a form), so that the other variable(s) can be treated as parameter(s)"
There's an example I am confused in,
$u_{xy} = -u_x$ where $u=u(x,y)$
The steps go as,
$u_x = p$,
$p_y = -1$
That is incorrect. You meant $p_y= -p$ didn't you?
, $\frac{p_y}{p}=-1$, $\ln p = -y + c(x)$ and by integration with respect to x,
Well, first take the exponential of each side: exp(ln p)= exp(-y+ c(x))= exp(c(x))exp(-y). And since "c(x)" is an unknown function, g(x)= exp(c(x)) is again an unknown function: $p(x)= g(x)e^{-y}$.
Now, $p(x)= du/dx= g(x)e^{-y}$
Integrating with respect to x gives $u(x)= \left(\int g(x)dx\right)e^{-y}$. And since g(x) is an unknown function, $\int g(x)dx$ is some unknown function: call it "f(x)". That gives
$u(x,y) = f(x)e^{-y} + g(y)$ where $f(x) = \int c(x) dx$
except that f(x) is not "$\int c(x) dx$". It is $\int e^{c(x)}dx$
I do not understand the bold part and the natural log step how does $y$ come in ?
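As a final sanity check (not part of the original thread), the general solution can be verified with SymPy:

# Check that u(x,y) = f(x) e^{-y} + g(y) satisfies u_xy = -u_x.
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')
u = f(x) * sp.exp(-y) + g(y)

lhs = sp.diff(u, x, y)         # u_xy
rhs = -sp.diff(u, x)           # -u_x
print(sp.simplify(lhs - rhs))  # prints 0, so the solution checks out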
{"url":"http://mathhelpforum.com/differential-equations/88209-pdes-solvable-odes.html","timestamp":"2014-04-19T10:04:11Z","content_type":null,"content_length":"49929","record_id":"<urn:uuid:de0a679a-4ef0-4728-866b-ef261030445a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LDT Seminar
The weight system derived from the Heisenberg Lie algebra and its polynomial representation is discussed. Although the use of the infinite-dimensional representation implies that the values of the weight system for web diagrams on circles can diverge, it is shown that the values for the diagrams on a circle are finite in our case. As a result, the induced knot invariant is proved to be the inverse of the Alexander-Conway polynomial.
|
{"url":"http://www.kurims.kyoto-u.ac.jp/~kenkyubu/proj01/abst18.html","timestamp":"2014-04-16T16:07:14Z","content_type":null,"content_length":"1395","record_id":"<urn:uuid:9dbcd03b-8dee-498a-84b5-3e9450d79040>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Preprints 2013
The three-letter code attached to the preprint number indicates the scientific programme during which the paper was written. Click on the code to see the programme details.
Preprint Author(s) Title and publication details
NI13001-TOD GV Levina Helical organization of tropical cyclones
NI13002-DAE P Goos and SG Gilmour Testing for Lack of Fit in Blocked and Split-Plot Response Surface Designs
NI13003-AMM M Ainsworth Dispersive behaviour of high order finite element schemes for the one-way wave equation
NI13004-SAS AG Melnikov and A Nies The classification problem for compact computable metric spaces
NI13005-SAS M Lauria, P Pudlák, V Rödl and N Thapen The complexity of proving that a graph is Ramsey
NI13006-GDO I Gálvez-Carrillo, L Lombardi and A Tonks An A$_\infty$ operad in spineless cacti
NI13007-NPA LO Müller and EF Toro A global multi-scale mathematical model for the human circulation with emphasis on the venous system
NI13008-INV J Eckhardt, F Gesztesy, R Nichols and G Teschl Weyl-Titchmarsh Theory for Sturm-Liouville operators with distributional coefficients
NI13009-GDO J Goodman and U Krähmer Untwisting a twisted Calabi-Yau algebra
NI13010-INI JF Toland Energy-minimising parallel flows with prescribed vorticity distribution
NI13011-INI JF Toland Non-existence of global energy-minimisers in Stokes wave problems
NI13012-SAS M Garlík Ajtai's completeness theorem for nonstandard finite structures
NI13013-MLC P Bauman, D Phillips and J Park Existence of solutions to boundary value problems for smectic liquid crystals
NI13014-GDO V Dotsenko, S Shadrin and B Vallette Givental action is homotopy gauge symmetry
NI13015-MLC M Arroyo and A De Simone Shape control of active surfaces inspired by the movement of euglenids
NI13016-TOD RV Buniy and TW Kephart Generalized helicity and Beltrami fields
NI13017-MLC I Busjatskaya and M Monastyrsky Immortality of platonic solids
NI13018-DAE RA Bailey and P Druilhet Optimal cross-over designs for full interaction models
NI13019-POP JB Lasserre Tractable approximations of sets defined with quantifiers
NI13020-CFM C Beaume, H-C Kao, E Knobloch and A Bergeon Localized rotating convection with no-slip boundary conditions
NI13021-TOD R Kerner Discrete groups and internal symmetries of icosahedral viral capsids
NI13022-MLC M Leoni and TB Liverpool Synchronisation and liquid crystalline order in soft active fluids
NI13023-MLC G Dhont and B Zhilinskií The action of the orthogonal group on planar vectors: invariants, covariants, and syzygies
NI13024-CFM HO Jacobs, TS Ratiu and M Desbrun On the coupling between an ideal fluid and immersed particles
NI13025-GDO J Giansiracusa and N Giansiracusa Equations of tropical varieties
NI13026-POP M Kocvara On robustness criteria and robust topology optimization with uncertain loads
NI13027-MLC VL Golo, EI Kats, AA Sevenyuk and DO Sinitsyn Twisted quasiperiodic textures of biaxial nematics
NI13028-CFM C Paterson, SK Wilson and BR Duffy Strongly coupled interaction between a ridge of fluid and an external airflow
NI13030-CFM D Tseluiko, M Galvagno and U Thiele Heteroclinic snaking near a heteroclinic chain in dragged meniscus problems
NI13031-CFM U Thiele Patterned deposition at moving contact lines
NI13032-CFM U Thiele, DV Todorova and H Lopez Gradient dynamics description for films of mixtures and suspensions - the case of dewetting triggered by coupled film height and
concentration fluctuations
NI13033-POP IM Bomze Copositivity for second-order optimality conditions in general smooth optimization problems
NI13034-CFM F Gay-Balmaz, TS Ratiu and C Tronci Equivalent theories of liquid crystal dynamics
NI13035-CFM AM Rubio, K Julien, E Knobloch and JB Weiss Upscale energy transfer in three-dimensional rapidly rotating turbulent convection
NI13036-CFM F Gay-Balmaz, DD Holm and TS Ratiu Integrable G-Strands on semisimple Lie groups
NI13037-DAE W Wang, R-B Chen, C-C Huang and WK Wong Particle swarm optimization techniques for finding optimal mixture designs
NI13038-DAE J Qiu, R-B Chen, W Wang and WK Wong Using animal instincts to design efficient biomedical studies
NI13039-DAE R-B Chen, S-P Chang, W Wang, H-C Tung and WK Wong Optimal Minimax Designs via Particle Swarm Optimization Methods
NI13040-DAE H-Q Li, M-L Tang and W-K Wong Confidence intervals for ratio of two Poisson rates using the method of variance estimates recovery
NI13041-DAE BPM Duarte and WK Wong A Semidefinite Programming based approach for finding Bayesian optimal designs for nonlinear models
NI13042-CFM BR Duffy, D Pritchard and SK Wilson The shear-driven Rayleigh problem for generalised Newtonian fluids
NI13043-MLC CL Kane and TC Lubensky Topological Boundary Modes in Isostatic Lattices
NI13044-GDO C Castańo Bernard and TM Gendron Modular invariant of quantum tori
NI13045-CFM G Grün On convergent schemes for diffuse interface models for two-phase flow of incompressible fluids with general mass densities
NI13046-SRO GR Barrenechea, L Boulton and N Boussai Eigenvalue enclosures and applications to the Maxwell operator
NI13047-POP JA De Loera, J Lee, S Margulies and J Miller Weak orientability of matroids and polynomial equations
NI13048-MLC M Trcek, G Cordoyiannis, V Tzitzios, S Kralj and G Nounesis et al Nanoparticle-induced twist-grain boundary phase
NI13049-AMM PS Peixoto and SRM Barros On vector field reconstructions for semi-Lagrangian transport methods on geodesic staggered grids
NI13050-HOL M Blake, D Tong and D Vegh Holographic lattices give the graviton a mass
NI13051-HOL A Buchel and DA Galante Cascading gauge theory on $\it dS_4$ and String Theory Landscape
NI13052-HOL K Kontoudi and G Policastro Flavor corrections to the entanglement entropy
NI13053-CFM D Pritchard, BR Duffy and SK Wilson Shallow flows of generalised Newtonian fluids on an inclined plane
NI13054-TOD J Cantarella and C Shonkwiler The symplectic geometry of closed equilateral random walks in 3-Space
NI13055-MLC A Ranjkesh, M Ambrozic, S Kralj and TJ Sluckin Computational studies of history-dependence in nematic liquid crystals in random environments
NI13056-POP J Fiala, M Kocvara and M Stingl PENLAB: A MATLAB solver for nonlinear semidefinite optimization
NI13057-MFE V Lucarini, R Blender, S Pascale, J Wouters and C Mathematical and physical ideas for climate science
NI13058-TOD Y Kimura and HK Moffatt Reconnection of skewed vortices
NI13059-MFE C Beck Possible resonance effect of axionic dark matter in S/N/S Josephson junctions
NI13060-MFE V Lucarini, D Faranda, J Wouters and T Kuna Towards a general theory of extremes for observables of chaotic dynamical systems
NI13061-MFE N Glatt-Holtz, V verák and V Vicol On inviscid limits for the stochastic Navier-Stokes equations and related models
NI13062-MFE J Földes, N Glatt-Holtz, G Richards and E Thomann Ergodic and mixing properties of the Boussinesq equations with a degenerate random forcing
NI13063-POP IM Bomze and ML Overton Narrowing the difficulty gap for the Celis-Dennis-Tapia problem
NI13064-POP IM Bomze Copositive relaxation beats Lagrangian dual bounds in quadratically and linearly constrained QPs
NI13065-MFE V Lucarini and S Pascale Entropy production and coarse graining of the climate fields in a general circulation model
NI13066-POP M Fampa, J Lee and W Melo On global optimization with indefinite quadratics
NI13067-HOL T Andrade, S Fischetti, D Marolf, SF Ross and M Rozali Entanglement and Correlations near Extremality: CFTs dual to Reissner-Nordström AdS$_5$
NI13068-MQI V Giovannetti, AS Holevo and R García-Patrón A solution of the Gaussian optimizer conjecture
NI13069-POP J Gouveia, PA Parrilo and RR Thomas Approximate cone factorizations and lifts of polytopes
|
{"url":"http://www.newton.ac.uk/preprints2013.html","timestamp":"2014-04-21T14:49:16Z","content_type":null,"content_length":"30366","record_id":"<urn:uuid:c4a87ad4-4270-4fb9-abed-6c9dff47ab7a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Happy birthday @Goodman
• one year ago
|
{"url":"http://openstudy.com/updates/502453afe4b09c3cae9dd977","timestamp":"2014-04-17T09:46:34Z","content_type":null,"content_length":"98646","record_id":"<urn:uuid:975c91fe-aa6d-49f2-a2ac-211700d5c127>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Whenever you pass coordinates to matplotlib, the question arises, what kind of coordinates you mean. Consider the following example
axes.text(x,y, "my label")
A label 'my label' is added to the axes at the coordinates x,y, or stated more clearly: The text is placed at the theoretical position of a data point (x,y). Thus we would speak of "data coords".
There are however other coordinates one can think of. You might e.g. want to put a label in the exact middle of your graph. If you specified this by the method above, then you would need to determine
the minimum and maximum values of x and y to determine the middle. However, using transforms, you can simply use
axes.text(0.5, 0.5, "middle of graph", transform=axes.transAxes)
There are four built-in transforms that you should be aware of (let ax be an Axes instance and fig a Figure instance):
matplotlib.transforms.identity_transform() # display coords
ax.transData # data coords
ax.transAxes # 0,0 is bottom,left of axes and 1,1 is top,right
fig.transFigure # 0,0 is bottom,left of figure and 1,1 is top,right
These transformations can be used for any kind of Artist, not just for text objects.
The default transformation for ax.text is ax.transData and the default transformation for fig.text is fig.transFigure.
Of course, you can define more general transformations, e.g. matplotlib.transforms.Affine, but the four listed above arise in a lot of applications.
xy_tup() is no more. Please see the official Matplotlib documentation at http://matplotlib.sourceforge.net/users/transforms_tutorial.html for further reference.
Example: tick label like annotations
If you find that the built-in tick labels of Matplotlib are not enough for you, you can use transformations to implement something similar. Here is an example that draws annotations below the tick
labels, and uses a transformation to guarantee that the x coordinates of the annotation correspond to the x coordinates of the plot, but the y coordinates are at a fixed position, independent of the
scale of the plot:
import matplotlib as M
import Numeric as N
import pylab as P
blend = M.transforms.blend_xy_sep_transform

def doplot(fig, subplot, function):
    ax = fig.add_subplot(subplot)
    x = N.arange(0, 2*N.pi, 0.05)
    ax.plot(x, function(x))

    trans = blend(ax.transData, ax.transAxes)

    for x,text in [(0.0, '|'), (N.pi/2, r'$\rm{zero\ to\ }\pi$'),
                   (N.pi, '|'), (N.pi*1.5, r'$\pi\rm{\ to\ }2\pi$'),
                   (2*N.pi, '|')]:
        ax.text(x, -0.1, text, transform=trans,
                horizontalalignment='center')

fig = P.figure()
doplot(fig, 121, N.sin)
doplot(fig, 122, lambda x: 10*N.sin(x))
P.show()
Example: adding a pixel offset to data coords
Sometimes you want to specify that a label is shown a fixed pixel offset from the corresponding data point, regardless of zooming. Here is one way to do it; try running this in an interactive
backend, and zooming and panning the figure.
The way this works is by first taking a shallow copy of transData and then adding an offset to it. All transformations can have an offset which can be modified with set_offset, and the copying is
necessary to avoid modifying the transform of the data itself. New enough versions of matplotlib (currently only the svn version) have an offset_copy function which does this automatically.
import matplotlib
import matplotlib.transforms
from pylab import figure, show

# New enough versions have offset_copy by Eric Firing:
if 'offset_copy' in dir(matplotlib.transforms):
    from matplotlib.transforms import offset_copy
    def offset(ax, x, y):
        return offset_copy(ax.transData, x=x, y=y, units='dots')
else: # Without offset_copy we have to do some black transform magic
    from matplotlib.transforms import blend_xy_sep_transform, identity_transform
    def offset(ax, x, y):
        # This trick makes a shallow copy of ax.transData (but fails for polar plots):
        trans = blend_xy_sep_transform(ax.transData, ax.transData)
        # Now we set the offset in pixels
        trans.set_offset((x,y), identity_transform())
        return trans

fig=figure()
ax=fig.add_subplot(111)

# plot some data
x = (3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,3)
y = (2,7,1,8,2,8,1,8,2,8,4,5,9,0,4,5)
ax.plot(x,y,'.')

# add labels
trans=offset(ax, 10, 5)
for a,b in zip(x,y):
    ax.text(a, b, '(%d,%d)'%(a,b), transform=trans)

show()
|
{"url":"http://wiki.scipy.org/Cookbook/Matplotlib/Transformations?action=subscribe","timestamp":"2014-04-18T16:47:25Z","content_type":null,"content_length":"40061","record_id":"<urn:uuid:5b9859b6-7eca-442e-a56f-2a32e554440e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"No Child left Behinds" Horrible logical conclusion.
02-01-2008, 08:04 PM
"No Child left Behinds" Horrible logical conclusion.
DeTurck is stirring the pot again, this time in a book scheduled to be published this year. Not only does he favor the teaching of decimals over fractions to elementary school students, he's also
taking on long division, the calculation of square roots and by-hand multiplication of long numbers.
Basically, the only conclusion I can reach from this is he is saying that, instead of teaching the students to work out problems by hand, he wants, instead, them to be taught how to do it by
calculators. Afterall, what other reason is there to remove square roots and long division by hand?
I see this as an insanity. America's education system is already a joke compared to the rest of the world. The idea that you can't let anybody feel inferior, or let anyone have less knowledge,
seems to have led American educators to the conclusion that everything should be dumbed down.
And don't get me started on the reliance on calculators. This laziness has been growing over the past decade, so that people in my math classes can't even seem to work out the most basic
algebraic equations unassisted by a calculator. I feel shame that we are losing the building blocks of mathematics for this. We are rotting our brains with convenience and malaise, with disuse.
Does anybody see this as I do? Does anybody else shudder to think of the next generation of American engineers? Or am I the lone voice in the wilderness on this issue?
02-01-2008, 08:16 PM
It seems awfully lazy to me. Fractions may be difficult to learn, but they're hardly obsolete even in the daily lives of people who don't work in math-based fields. What about in baking, for
example? Does he expect people to grab a calculator every time they need to add half a cup of something and then measure 0.5 of a cup? Does he work for a calculator company or something? (I'd
actually be surprised if he had no connections to at least one. Everyone is motivated by money these days.)
Besides, it's a well-known fact that you need to learn the basics of something in order to grasp it at all. Using calculators at that age means that the only education children will be getting is
in the use of calculators. The adults that they become won't understand math at all, and unfortunately, math is absolutely necessary these days from small tasks like buying food at the grocery
store to calculating finances and paying mortgages. If adults become unable to do simple calculations in their minds, who's to say that businesses won't take advantage of this and be able to
raise or falsely advertise prices/rates/etc. with the public being totally unaware?
02-01-2008, 08:29 PM
I agree people need to learn to do things by hand. But, when you take upper division courses in physics and mathematics, no one wants to spend hours and hours doing things by hand. Upper division mathematics is much more than being able to work things by hand. It's no longer just computing problems. It's more theory and working with abstract subjects. You can be really good at computing problems, but that won't do you much good if you're going for the graduate level.
02-01-2008, 08:35 PM
It's all right when you're taking a high-level university or even highschool course, though - some of those equations would just be ridiculous to do by hand. The article isn't about that, though;
it's about not teaching children these basic principles. The reason calculators are acceptable - and even required - in subsequent courses is because it's obvious the students already know those
basics perfectly. A child who hasn't been taught fractions or long division may not even be able to grasp equations in subjects like calculus.
02-01-2008, 08:43 PM
It's lame. Kids should learn how to solve problems. Fractions happen in real life.
02-01-2008, 09:27 PM
I agree with you guys that kids need to be able to do that, but still, it takes a while if you have to do 50 long division problems without one. They should first make sure kids can do the problems before letting them use calculators.
02-01-2008, 10:04 PM
Since when are little children doing 50 long division problems at one time?
And no, the fact is that they need to do those all by hand, not only to see that they know how to do the work, but so that they will have a firm grasp of it, I think we can all agree.
02-01-2008, 10:20 PM
This guy makes me want to cry.
In Pre-Calc, we need to have our answers correct to 3 decimal places, as per the AP calc standard. If we have a long fraction, along with some square roots in there, the simple fact is that I
have a far easier time with the problem when using fractions/square roots to solve it through, and only converting to decimals at the end.
I'm interested in how this guy wants to change something even as simple as the quadratic formula though, since that definitely involves fractions.
Yeah, basically in short, during high school, fractions and square roots are much more important than decimals during calculations. Without those, life becomes hell.
And how do you not understand fractions? How do you not get what they mean? I still can't understand how some people find that hard. Are the teachers too dumb to explain it?
02-01-2008, 10:26 PM
You never had my teachers or my mom. They made me work. My mom wanted me to have a job that involves math even though I hated it.
02-01-2008, 11:07 PM
This guy makes me want to cry.
In Pre-Calc, we need to have our answers correct to 3 decimal places, as per the AP calc standard. If we have a long fraction, along with some square roots in there, the simple fact is that I
have a far easier time with the problem when using fractions/square roots to solve it through, and only converting to decimals at the end.
Wait, what is this nonsense. You have to convert to decimals? I find fractions much easier to work with than decimals, to be honest - they're way easier to understand and manipulate, aren't they?
Not to mention more accurate... 1/3 is precise. 0.3333333... well, I know which looks better to me. (Of course, not all fractions look as pretty as that, I'll admit.)
Not really. Calculators only get you so far once you're at that level, since you tend to, you know, not use numbers in your problems as much. They're not going to get the degree if they don't
know what they're doing.
Teaching kids to use calculators is pointless, let them learn that on their own once they understand the mathematics - calculators aren't as useful at higher levels anyway.
|
{"url":"http://stoptazmo.com/chit-chat/27315-no-child-left-behinds-horrible-logical-conclusion-print.html","timestamp":"2014-04-24T20:12:53Z","content_type":null,"content_length":"24610","record_id":"<urn:uuid:cb2aa73d-40b8-4571-a52d-266885481033>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 8
Chapter 4 flashback ...
□ H[0] true H[0] false
□ Reject H[0] Type I error Correct
□ Fail to Correct Type II error
□ reject H[0]
□ Type I error is the probability of rejecting the null hypothesis when it is really true
□ The probability of making a type I error is denoted as
□ Type II error is the probability of failing to reject a null hypothesis that is really false
□ The probability of making a type II error is denoted as
In this chapter, you'll often see these outcomes represented with distributions
To make these representations clear, let's first consider the situation where H[0] is, in fact, true:
Now assume that H[0] is false (i.e., that some "treatment" has an effect on our dependent variable, shifting the mean to the right)
Thus, power can be defined as follows:
Assuming some manipulation effects the dependent variable, power is the probability that the sample mean will be sufficiently different from the mean under H[0] to allow us to reject H[0]
As such, the power of an experiment depends on three (or four) factors:
As alpha is moved to the left (for example, if one used an alpha of 0.10 instead of 0.05), beta would decrease, power would increase ... but, the probability of making a type I error would increase.
The further that H[1] is shifted away from H[0], the more power (and lower beta) an experiment will have
Standard error of the mean: σ/√N
The smaller the standard error of the mean (i.e., the less the two distributions overlap), the greater the power. As suggested by the CLT, the standard error of the mean is a function of the
population variance and N. Thus, of all the factors mentioned, the only one we can really control is N
Effect Size (d)
Most power calculations use a term called effect size which is actually a measure of the degree to which the H[0] and H[1] distributions overlap
As such, effect size is sensitive to both the difference between the means under H[0] and H[1], and the standard deviation of the parent populations
d = (μ[1] − μ[0]) / σ
In English then, d is the number of standard deviations separating the mean of H[0] and the mean of H[1]
Note: N has not been incorporated in the above formula. You'll see why shortly
Estimating the Effect Size
As d forms the basis of all calculations of power, the first step in these calculations is to estimate d
Since we do not typically know how big the effect will be a priori, we must make an educated guess on the basis of:
□ Prior research
□ An assessment of the size of effect that would be important
□ Rule of thumb:
small effect d=.20
medium effect d=.50
large effect d=.80
Bringing N back into the picture:
The calculation of d took into account 1) the difference between the means of H[0] and H[1] and 2) the standard deviation of the population
However, it did not take into account the third variable the effects the overlap of the two distributions; N
This was done purposefully so that we have one term that represents the relevant variables we, as experimenters, can do nothing about (d) and another representing the variable we can do
something about; N
The statistic we use to recombine these factors is called delta; it is computed as δ = d × f(N), where f(N) is a function of the sample size that depends on the particular test (given below for each case)
Power Calcs for One Sample t
In the context of a one sample t-test, delta is computed as δ = d√N.
Thus, when calculating the power associated with a one sample t, you must go through the following steps:
1) Estimate d, or calculate it using: d = (μ[1] − μ[0]) / σ
2) Calculate delta using: δ = d√N
3) Go to the power table, and find the power associated with the calculated δ
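If you prefer computing to table lookup, here is a rough Python approximation (not part of the handout) based on the normal distribution; for reasonably large N it tracks the delta-based power tables closely:

# Approximate power of a two-tailed test given delta, assuming normality.
from scipy.stats import norm

def approx_power(delta, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)   # critical z for a two-tailed alpha
    return norm.sf(z_crit - delta)     # P(reject H[0] | H[1] true)

print(round(approx_power(2.39), 2))    # the example below: about 0.67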
Say I find a new stats textbook and after looking at it, I think it will raise the average mark of the class by about 8 points. From previous classes, I am able to estimate the population
standard deviation as 15. If I now test out the new text by using it with 20 new students, what is my power to reject the null hypothesis (that the new students marks are the same as the old
students marks)
How many new students would I have to test to bring my power up to .90?
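Working it through (numbers not shown in the original notes): d = 8/15 ≈ 0.53, so δ = d√N = 0.53 × √20 ≈ 2.39, and a standard power table (α = .05, two-tailed) puts power at roughly .67. To reach power = .90 we need δ ≈ 3.25, so N = (δ/d)² = (3.25/0.53)² ≈ 38 students.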
Note: Don't worry about the bit on "noncentrality parameters" in the book
Power Calcs for Independent Samples t
When an independent t-test is used, the power calculations use the same computation for calculating d, but the calculations of delta differ.
When sample sizes are equal, you do the following:
1) Estimate d, or calculate it using: d = (μ[1] − μ[0]) / σ
2) Calculate delta using: δ = d√(N/2)
where N is the number of subjects in one of the samples
3) Go to the power table, and find the power associated with the calculated δ
More Examples:
Assume I am going to run two groups of 18 subjects through a non-smoking study. One group will receive the treatment of interest, the other will not. I expect the treatment to have a
medium effect, but I have nothing to go on other than that. Assuming there really is a medium effect, what is my power to detect it?
How many subjects would I need to run to increase my power to 0.80?
Unequal N
Power calculations for independent samples t-tests become slightly more complicated when Ns are unequal.
The proper way to deal with the situation is to do everything the same as above except to use the harmonic mean of the two Ns (N[1] & N[2]) in the place where you enter N
The harmonic mean of two Ns is denoted N̄h and computed as: N̄h = 2N[1]N[2] / (N[1] + N[2])
So, as a final example, reconsider the power of my smoking study if I had run 24 subjects in my stop smoking group, but only 12 in my control group.
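Working it through (not shown in the original notes): the harmonic mean is 2(24)(12)/(24 + 12) = 16, so δ = 0.50 × √(16/2) ≈ 1.41 and power ≈ .29 from the table — worse than the equal-N design, even though both use 36 subjects in total.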
|
{"url":"http://www.psych.utoronto.ca/courses/c1/chap8/chap8b.html","timestamp":"2014-04-18T02:59:28Z","content_type":null,"content_length":"11299","record_id":"<urn:uuid:913dfe18-8600-4936-b44e-130f95f2a14d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Excel CUMPRINC Function
Basic Description
The Excel CUMPRINC function calculates the cumulative payment on the principal of a loan or investment, between two specified periods.
The syntax of the function is :
CUMPRINC( rate, nper, pv, start_period, end_period, type )
Where the arguments are as follows:
rate - The interest rate, per period
nper - The number of periods over which the loan or investment is to be paid
pv - The present value of the loan / investment
start_period - The number of the first period over which the payment of the principal is to be calculated (must be an integer between 1 and nper)
end_period - The number of the last period over which the payment of the principal is to be calculated (must be an integer between 1 and nper)
type - An integer (equal to 0 or 1) that defines whether the payment is made at the start or the end of the period. The value 0 or 1 has the following meaning:
0 - the payment is made at the end of the period
1 - the payment is made at the beginning of the period
Cash Flow Convention :
Note that, in line with the general cash flow convention, outgoing payments are represented by negative numbers and incoming payments are represented by positive numbers. This is seen in the example
Excel Cumprinc Function Example
The following spreadsheet shows the Excel Cumprinc function used to calculate the cumulative payment on the principal, during each year of a loan of $50,000 which is to be paid off over 5 years.
Interest is charged at a rate of 5% per year and the payment to the loan is to be made at the end of each month.
The spreadsheet on the left shows the format of the functions, and the spreadsheet on the right shows the results.
Note that in this example :
• The payments are made monthly, so we have had to convert the annual interest rate of 5% into the monthly rate (=5%/12), and the number of years into months (=5*12).
• The calculated payments are negative values, as they represents outgoing payments (for the individual taking out the loan).
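As a cross-check, here is a small Python sketch (an illustration, not Excel's internal code) that reproduces the cumulative-principal figure for this loan by amortizing it month by month:

# Reproduce CUMPRINC(5%/12, 60, 50000, start, end, 0) by brute-force amortization.
def cumprinc(rate, nper, pv, start_period, end_period):
    pmt = -pv * rate / (1 - (1 + rate) ** -nper)  # fixed payment, as in Excel's PMT
    balance, total = pv, 0.0
    for period in range(1, end_period + 1):
        principal = pmt + balance * rate   # payment is negative, interest positive
        if period >= start_period:
            total += principal
        balance += principal
    return total

# Principal repaid during year 1 (periods 1 to 12) of the $50,000 loan:
print(round(cumprinc(0.05 / 12, 60, 50000, 1, 12), 2))  # about -9027.80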
Further examples of the Excel Cumprinc function are provided on the Microsoft Office website.
Cumprinc Function Errors
If you get an error from the Excel Cumprinc function, this is likely to be one of the following:
Common Errors
#NUM! - Occurs if any of the following:
- The supplied start_period or end_period is ≤ 0 or > nper
- The supplied start_period > end_period
- Any of the supplied rate, nper or pv arguments is ≤ 0
- The supplied type argument is not equal to 0 or 1
#VALUE! - Occurs if any of the supplied arguments are not recognised as numeric values
Also, the following problem is encountered by some users:
Common Problem
The result from the Excel Cumprinc function is much higher or much lower than expected.
Possible Reason
Many users, when calculating monthly or quarterly payments, forget to convert the interest rate or the number of periods to months or quarters.
Solve this problem by ensuring that the rate and the nper arguments are expressed in the correct units. i.e. :
months = 12 * years; monthly rate = annual rate / 12
quarters = 4 * years; quarterly rate = annual rate / 4
|
{"url":"http://www.excelfunctions.net/Excel-Cumprinc-Function.html","timestamp":"2014-04-19T04:25:41Z","content_type":null,"content_length":"18584","record_id":"<urn:uuid:9304f0ea-1644-480b-a465-c1c65b9b9f98>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
4.2.1 Boolean Type
f95 supports constants and expressions of Boolean type. However, there are no Boolean variables or arrays, and there is no Boolean type statement.
4.2.1.1 Rules Governing Boolean Type
• For masking operations, a bitwise logical expression has a Boolean result; each of its bits is the result of one or more logical operations on the corresponding bits of the operands.
• For binary arithmetic operators, and for relational operators:
□ If one operand is Boolean, the operation is performed with no conversion.
□ If both operands are Boolean, the operation is performed as if they were integers.
• No user-specified function can generate a Boolean result, although some (nonstandard) intrinsics can.
• Boolean and logical types differ as follows:
□ Variables, arrays, and functions can be of logical type, but they cannot be Boolean type.
□ There is a LOGICAL statement, but no BOOLEAN statement.
□ A logical variable, constant, or expression represents only two values, .TRUE. or .FALSE. A Boolean variable, constant, or expression can represent any binary value.
□ Logical entities are invalid in arithmetic, relational, or bitwise logical expressions. Boolean entities are valid in all three.
4.2.1.2 Alternate Forms of Boolean Constants
f95 allows a Boolean constant (octal, hexadecimal, or Hollerith) in the following alternate forms (no binary). Variables cannot be declared Boolean. Standard Fortran does not allow these forms.
ddddddB, where d is any octal digit
• You can use the letter B or b.
• There can be 1 to 11 octal digits (0 through 7).
• 11 octal digits represent a full 32-bit word, with the leftmost digit allowed to be 0, 1, 2, or 3.
• Each octal digit specifies three bit values.
• The last (rightmost) digit specifies the content of the rightmost three bit positions (bits 29, 30, and 31).
• If fewer than 11 digits are present, the value is right-justified—it represents the rightmost bits of a word: bits n through 31. The other bits are 0.
• Blanks are ignored.
Within an I/O format specification, the letter B indicates binary digits; elsewhere it indicates octal digits.
X’ddd’ or X"ddd", where d is any hexadecimal digit
• There can be 1 to 8 hexadecimal digits (0 through 9, A-F).
• Any of the letters can be uppercase or lowercase (X, x, A-F, a-f).
• The digits must be enclosed in either apostrophes or quotes.
• Blanks are ignored.
• The hexadecimal digits may be preceded by a + or - sign.
• 8 hexadecimal digits represent a full 32-bit word and the binary equivalents correspond to the contents of each bit position in the 32-bit word.
• If fewer than 8 digits are present, the value is right-justified—it represents the rightmost bits of a word: bits n through 31. The other bits are 0.
Accepted forms for Hollerith data are:
│ nH… │ ’…’H │ "…"H │
│ nL… │ ’…’L │ "…"L │
│ nR… │ ’…’R │ "…"R │
Above, “…” is a string of characters and n is the character count.
• If any character constant is in a bitwise logical expression, the expression is evaluated as Hollerith.
• A Hollerith constant can have 1 to 4 characters.
Examples: Octal and hexadecimal constants.
│ Boolean Constant │ Internal Octal for 32-bit Word │
│ 0B │ 00000000000 │
│ 77740B │ 00000077740 │
│ X"ABE" │ 00000005276 │
│ X"-340" │ 37777776300 │
│ X’1 2 3’ │ 00000000443 │
│ X’FFFFFFFFFFFFFFFF’ │ 37777777777 │
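As a cross-check of the table above, here is a short R sketch (base R only; the helper name to_octal32 is ours) that interprets a value as a 32-bit two's-complement word and prints the 11-digit octal form:

to_octal32 <- function(value) {
  word <- value %% 2^32                        # wrap into [0, 2^32) as two's complement
  digits <- integer(11)
  for (i in 11:1) { digits[i] <- word %% 8; word <- word %/% 8 }
  paste(digits, collapse = "")
}
to_octal32(strtoi("ABE", 16L))    # "00000005276"
to_octal32(-strtoi("340", 16L))   # "37777776300"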
Examples: Octal and hexadecimal in assignment statements.
│ │
│i = 1357B │
│j = X"28FF"│
│k = X’-5A’ │
Use of an octal or hexadecimal constant in an arithmetic expression can produce undefined results and does not generate a syntax error.
4.2.1.3 Alternate Contexts of Boolean Constants
f95 allows BOZ constants in the places other than DATA statements.
│ B’bbb’ │ O’ooo’ │ Z’zzz’ │
│ B"bbb" │ O"ooo" │ Z"zzz" │
If these are assigned to a real variable, no type conversion occurs.
Standard Fortran allows these only in DATA statements.
|
{"url":"http://docs.oracle.com/cd/E19205-01/819-5263/6n7c0ccbc/index.html","timestamp":"2014-04-18T12:40:47Z","content_type":null,"content_length":"11742","record_id":"<urn:uuid:fb2cafbe-ce4d-4ff2-b2e0-e331d35bea7b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Practice Problem (Intro to Discrete Math Class)
January 15th 2009, 08:32 AM
Practice Problem (Intro to Discrete Math Class)
Hey Everyone,
Just so you know I am in an Intro to Discrete Math class so this stuff should be pretty simple to the rest of you, but I'm not very good at it. This is an example question that is up to me if i
want to do it or not, the professor does not care. Any help would be appreciated.
Given that $A_n=5n+3$ for any $n \ge 1$, is it true that $A_n=5A_{n-1}-3$?
January 15th 2009, 08:37 AM
Hey Everyone,
Just so you know I am in an Intro to Discrete Math class so this stuff should be pretty simple to the rest of you, but I'm not very good at it. This is an example question that is up to me if i
want to do it or not, the professor does not care. Any help would be appreciated.
Given that $A_n=5n+3$ for any $n>=1$, is it true that $A_n=5*A_(n-1) -3$?
I'm new to this so not sure how to make (n-1) all subscript, but it should be.
To make all of some expression a subscript enclose it in { }.
To calculate that, think of $A_n$ as a function A(n).
If $A_n= 5n+3$, then $A_{n-1}= 5(n-1)+ 3= 5n- 2$.
Put 5n+3 in place of $A_n$ and 5n- 2 in place of $A_{n-1}$ in that expression and see if it is true.
Is 5n+3= 5(5n-2)-3?
January 15th 2009, 09:38 AM
To make all of some expression a subscript enclose it in { }.
To calculate that, think of $A_n$ as a function A(n).
If $A_n= 5n+3$, then $A_{n-1}= 5(n-1)+ 3= 5n- 2$.
Put 5n+3 in place of $A_n$ and 5n- 2 in place of $A_{n-1}$ in that expression and see if it is true.
Is 5n+3= 5(5n-2)-3?
ok so here is what I came up with:
5(5n-2)-3 = 25n-10-3 = 25n-13, which is not equal to 5n+3
so the answer is false.
is this correct?
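For the record, the conclusion is right: 5(5n-2)-3 = 25n-13, and 25n-13 = 5n+3 would require 20n = 16, which no integer n ≥ 1 satisfies. A quick numeric check in R (variable names arbitrary):

A <- function(n) 5 * n + 3
n <- 2:6
cbind(lhs = A(n), rhs = 5 * A(n - 1) - 3)   # the columns never agree, so the identity is false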
|
{"url":"http://mathhelpforum.com/discrete-math/68343-practice-problem-intro-discrete-math-class-print.html","timestamp":"2014-04-18T12:10:12Z","content_type":null,"content_length":"8997","record_id":"<urn:uuid:32437d68-4db1-41e5-8899-c588426c4557>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trigonometry H
Trigonometry Help
Math trigonometry help is available at Assignment Expert. We provide our clients with the best trigonometry solutions and can answer almost any question you have. If our trigonometry
help sounds useful to you, then we are waiting for you. Rest assured that we are real professionals in the subject and that no task is too difficult for us. Don't be shy about asking us for help.
Trigonometry help at Assignment Expert is highly demanded because:
• trigonometry homework demands specific knowledge about triangles nature;
• trigonometry assignment may be difficult due to its application in different math sciences;
• math trigonometry help may be necessary as trigonometry homework answers may require knowledge of new trends in mathematics;
• trigonometry homework help may be needed as student often have to know all the formulas.
Trigonometry is a very important and complicated field of study. It is a branch of mathematics that is closely related with triangles, if to be more specific plane triangles, where one angle has 90
degrees (right triangles). Trigonometry studies the relationship between the sides and the angles of the triangle, as well as trigonometric functions to illustrate those relationships. Trigonometry
acquires applications in both pure mathematics and in applied mathematics, which are used by many other branches of science and technology.
In order to make proper calculations in this sphere of mathematics, the student must possess not only basic knowledge but also familiarity with other branches of mathematics. Solving these types of
tasks can be difficult because of the complexity of the assignments, the need to know all the trigonometric formulas, and the need for practice in solving trigonometry problems. This is when our
service comes in handy.
Trigonometry homework online is provided by competent experts including other benefits:
• our services hire a team of degree-holding math experts for online math trigonometry help;
• every trigonometry assignment is carefully matched up with trigonometry homework helper suitable for trigonometry homework you need to be resolved;
• we even have PhD level solvers for trigonometry help online;
• we offer reliable payment for trigonometry homework help, feedback, and contact methods;
• your privacy is guaranteed – we never share your information with anyone.
Our experts will gladly assist you in completing trigonometry homework for an appropriate price. All our experts hold Master's and PhD degrees in mathematics, which is why completing
trigonometry assignments will not be a problem for them. Rest assured that your trigonometry homework will be completed according to all of your teacher's instructions and requirements.
Trigonometry help online with winning advantages:
• experts help you to oppose difficult challenges referring to trigonometry homework;
• we provide you with the highest quality trigonometry homework help, and timely delivery;
• charging sensible prices for math trigonometry help that fit into your needs and your budget;
• using the formats of trigonometry homework required by your school with careful attention to each detail.
With the help of our service you will be able to spend your time on things that matter more to you. We always aim to satisfy the needs of our customers so that they can achieve
strong results in their studies. If a customer is dissatisfied with the result of the work, he or she can always ask for a revision, which will be completed at absolutely no charge. Our
service will earn you appreciation among your fellow students and teachers, as the high school, college or even university trigonometry assignments completed by our professionals are of high quality
and always on time.
Our areas of expertise in Math:
|
{"url":"http://www.assignmentexpert.com/math/trigonometry.html","timestamp":"2014-04-20T18:25:51Z","content_type":null,"content_length":"30000","record_id":"<urn:uuid:5960da59-c2ac-4d96-8532-a98cf4c374c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Voorhees Kirkwood, NJ Math Tutor
Find a Voorhees Kirkwood, NJ Math Tutor
...While in my job search I want to help any kids in math who may need it, as well as provide help or advice to any high schoolers considering an engineering degree in college. I am
currently serving part time in the military with the New Jersey Air National Guard, Civil Engineering Squadron. I am also a soccer referee and will not be available most weekends.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...Essentially, I am an in-school tutor for students. Not only do I assist the teacher in the classroom, I also work with students both individually and in small groups. I work with a variety of
students, including those who are autistic, ADHD/ADD, or who are just struggling in one particular area.
7 Subjects: including algebra 1, logic, probability, prealgebra
...ANOVA (Analysis of Variance) I recently taught Introduction to Statistics at Burlington (NJ) County College (8 semesters from 2004 to 2006) and have successfully tutored statistics and
biostatistics up through the PhD level. You will find numerous statistics and biostatistics student testimoni...
22 Subjects: including algebra 1, algebra 2, grammar, geometry
...I am located in the South Jersey area and would be happy to tutor you in your home or at a location that is convenient to you. I look forward to hearing from you!I have been trained in
classical piano since the age of 4, and jazz piano since the age of 15. I have won national competitions and performed all across the country.
12 Subjects: including algebra 1, algebra 2, biology, prealgebra
Hi, my name is Ian. I'm an SAT tutor and instructor at American University and Philadelphia nonprofit Mighty Writers. I work with students to help them improve their Math skills and their scores
on the SAT.
11 Subjects: including prealgebra, algebra 1, algebra 2, GRE
Related Voorhees Kirkwood, NJ Tutors
Voorhees Kirkwood, NJ Accounting Tutors
Voorhees Kirkwood, NJ ACT Tutors
Voorhees Kirkwood, NJ Algebra Tutors
Voorhees Kirkwood, NJ Algebra 2 Tutors
Voorhees Kirkwood, NJ Calculus Tutors
Voorhees Kirkwood, NJ Geometry Tutors
Voorhees Kirkwood, NJ Math Tutors
Voorhees Kirkwood, NJ Prealgebra Tutors
Voorhees Kirkwood, NJ Precalculus Tutors
Voorhees Kirkwood, NJ SAT Tutors
Voorhees Kirkwood, NJ SAT Math Tutors
Voorhees Kirkwood, NJ Science Tutors
Voorhees Kirkwood, NJ Statistics Tutors
Voorhees Kirkwood, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Collingswood Math Tutors
Deptford Township, NJ Math Tutors
Echelon, NJ Math Tutors
Evesham Twp, NJ Math Tutors
Gibbsboro Math Tutors
Haddonfield Math Tutors
Hi Nella, NJ Math Tutors
Laurel Springs, NJ Math Tutors
Lindenwold, NJ Math Tutors
Mount Laurel Math Tutors
Pine Hill, NJ Math Tutors
Somerdale, NJ Math Tutors
Stratford, NJ Math Tutors
Voorhees Math Tutors
Voorhees Township, NJ Math Tutors
|
{"url":"http://www.purplemath.com/voorhees_kirkwood_nj_math_tutors.php","timestamp":"2014-04-18T18:57:12Z","content_type":null,"content_length":"24379","record_id":"<urn:uuid:a48a47cf-eb7f-4e3d-85ba-6d0a4e0cb9ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Total and Local Quadratic Indices of the Molecular Pseudograph's Atom Adjacency Matrix: Applications to the Prediction of Physical Properties of Organic Compounds
Author(s): Yovani Marrero Ponce
Journal: Molecules
ISSN: 1420-3049
Keywords: Molecular Vector Space; Total and Local Quadratic Index; Physical Property; Organic Compound
ABSTRACT
A novel topological approach for obtaining a family of new molecular descriptors is proposed. In this connection, a vector space E (molecular vector space), whose elements are organic molecules, is
defined as a “direct sum“ of different ℜi spaces. In this way we can represent molecules having a total of i atoms as elements (vectors) of the vector spaces ℜi (i=1, 2, 3,..., n;
where n is the number of atoms in the molecule). In these spaces the components of the vectors are atomic properties that characterize each kind of atom in particular. The total quadratic indices are
based on the calculation of mathematical quadratic forms. These forms are functions of the k-th power of the molecular pseudograph's atom adjacency matrix (M). For simplicity, canonical bases are
selected as the quadratic forms' bases. These indices were generalized to “higher analogues“ as number sequences. In addition, this paper also introduces a local approach (local invariant)
for molecular quadratic indices. This approach is based mainly on the use of a local matrix [Mk(G, FR)]. This local matrix is obtained from the k-th power (Mk(G)) of the atom adjacency matrix M. Mk
(G, FR) includes the elements of the fragment of interest and those that are connected with it, through paths of length k. Finally, total (and local) quadratic indices have been used in QSPR studies
of four series of organic compounds. The quantitative models found are significant from a statistical point of view and permit a clear interpretation of the studied properties in terms of the
structural features of molecules. External prediction series and cross-validation procedures (leave-one-out and leave-group-out) assessed model predictability. The reported method has shown similar
results, compared with other topological approaches. The results obtained were the following: a) Seven physical properties of 74 normal and branched alkanes (boiling points, molar volumes, molar
refractions, heats of vaporization, critical temperatures, critical pressures and surface tensions) were well modeled (R > 0.98, q² > 0.95) by the total quadratic indices. The overall MAEs of 5-fold cross-validation were 2.11 °C, 0.53 cm³, 0.032 cm³, 0.32 kJ/mol, 5.34 °C, 0.64 atm and 0.23 dyn/cm for each property, respectively; b) boiling points of 58 alkyl alcohols also were well described by the present approach; in this sense, two QSPR models were obtained; the first one was developed using the complete set of 58 alcohols [R = 0.9938, q² = 0.986, s = 4.006 °C, overall MAE of 5-fold cross-validation = 3.824 °C] and the second one was developed using 29 compounds as a training set [R = 0.9979, q² = 0.992, s = 2.97 °C, overall MAE of 5-fold cross-validation = 2.580 °C] and 29 compounds as a test set [R = 0.9938, s = 3.17 °C]; c) good relationships were obtained for the boiling points property (using 80 and 26 cycloalkanes in the training and test sets, respectively) using 2 and 5 total quadratic indices [Training set: R = 0.9823 (q² = 0.961 and overall MAE of 5-fold cross-validation = 6.429 °C) and R = 0.9927 (q² = 0.977 and overall MAE of 5-fold cross-validation = 4.801 °C); Test set: R = 0.9726 and R = 0.9927]; and d) the linear model developed to describe the boiling points of 70 organic compounds containing aromatic rings has shown good statistical features, with a squared correlation coefficient (R²) of 0.981 (s = 7.61 °C). Internal validation procedures (q² = 0.9763 and overall MAE of 5-fold cross-validation = 7.34 °C) allowed the predictability and robustness of the model found to be assessed. The predictive performance of the obtained QSPR model also was tested on an extra set of 20 aromatic organic compounds (R = 0.9930 and s = 7.8280 °C). The results obtained are valid to
establish that these new indices fulfill some of the ideal requirements proposed by Randić for a new molecular descriptor.
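To make the central construction concrete, here is a minimal R sketch of a total quadratic index q_k(x) = x' M^k x, where M is an adjacency matrix and x holds one atomic property per atom. The toy graph and property values below are ours and ignore the pseudograph refinements described in the abstract:

quadratic_index <- function(M, x, k) {
  Mk <- diag(nrow(M))
  for (i in seq_len(k)) Mk <- Mk %*% M        # k-th power of the adjacency matrix
  as.numeric(t(x) %*% Mk %*% x)
}

M <- matrix(c(0, 1, 0,  1, 0, 1,  0, 1, 0), nrow = 3)   # path graph on 3 atoms
x <- c(2.2, 2.2, 2.2)                                   # toy atomic property vector
sapply(0:3, function(k) quadratic_index(M, x, k))       # the "higher analogues" sequence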
|
{"url":"http://www.journaldatabase.org/articles/total_local_quadratic_indices.html","timestamp":"2014-04-20T20:58:04Z","content_type":null,"content_length":"13872","record_id":"<urn:uuid:5a754426-a398-4501-9955-4449fce3d95b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Case Study: Lamberton, MN
Formula for Fence Setback
D = H · sin(a) · (12 + 49P + 7P^2 - 37P^3)
• D is the setback (m).
• H is the height of the fence (m).
• a is the attack angle of the prevailing winter wind striking the road (sin(a) is its sine).
• P is the porosity of the fence (the open fraction of its area).
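As a worked example (an R sketch; the function name is ours, and we read P as a porosity fraction between 0 and 1, as in Tabler-style fence equations):

setback <- function(H, a_deg, P)
  H * sin(a_deg * pi / 180) * (12 + 49 * P + 7 * P^2 - 37 * P^3)
setback(H = 1.8, a_deg = 90, P = 0.5)   # about 60.5 m for a 1.8 m fence hit square-on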
|
{"url":"http://www.climate.umn.edu/snow_fence/Components/Lamberton/lamberton8.htm","timestamp":"2014-04-21T09:42:27Z","content_type":null,"content_length":"10340","record_id":"<urn:uuid:92adb854-bd99-4bcf-a839-44e8ba3869e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The parabola is the locus of points in the plane that are equidistant from a fixed point called the focus and a fixed line called the directrix.
Elements of the Parabola
The focus is the fixed point F.
The directrix is the fixed line d.
Focal Parameter
The focal parameter is the distance from the focus to the directrix. It is denoted by p.
The axis is the line perpendicular to the directrix that passes through the focus.
The vertex is the point of intersection of the parabola with its axis.
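With the vertex at the origin and the axis along the x-axis, one standard equation consistent with these definitions is y² = 2px, where p is the focal parameter defined above: the focus is then F = (p/2, 0) and the directrix is the line x = −p/2, so the focus-directrix distance is indeed p.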
|
{"url":"http://www.vitutor.com/geometry/conics/parabola.html","timestamp":"2014-04-17T00:48:12Z","content_type":null,"content_length":"13982","record_id":"<urn:uuid:57e1aa5e-c34e-4f1b-bff1-bf1085d438b2>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Complex AGM
John Cremona on Wed, 15 Feb 2012 17:20:57 +0100
• To: Andreas Enge <andreas.enge@inria.fr>
• Subject: Re: Complex AGM
• From: John Cremona <john.cremona@gmail.com>
• Date: Wed, 15 Feb 2012 16:20:45 +0000
• Cc: pari-dev@pari.math.u-bordeaux.fr
• Delivery-date: Wed, 15 Feb 2012 17:21:02 +0100
• In-reply-to: < 20120215155409.GA15994@debian>
• References: < 20120215150953.GA11263@yellowpig> < CAD0p0K7m1QPJyZHBdQU1-oCyicGmonnFEKKytiCwMsLvskuWbg@mail.gmail.com> < 20120215155409.GA15994@debian>
On 15 February 2012 15:54, Andreas Enge <andreas.enge@inria.fr> wrote:
> On Wed, Feb 15, 2012 at 03:24:21PM +0000, John Cremona wrote:
>> For a definition of what "optimal" means and why it matters for
>> elliptic curve period computations, see http://arxiv.org/abs/1011.0914
> In his PhD thesis, Dupont makes the additional assumption that if there
> are two "good" choices, then Im (b_n, a_n) > 0 (it could be that this is
> taken from Cox1984, I have not verified). So I would suggest to modify
> pari to compute the optimal sequence together with this normalisation.
Yes, that is essentially Cox's definition. But this ambiguous case
only happens at the first step of the algorithm anyway, and when it
does happen the two limits you get by making both choices have exactly
the same absolute value.
> Andreas
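For readers following the thread, here is a minimal R sketch of the complex AGM with the "good"/right choice of square root under discussion (we read the quoted condition as Im(b_n/a_n) > 0; the code and names are ours, not PARI's):

agm <- function(a, b, tol = 1e-12, maxit = 60) {
  for (i in seq_len(maxit)) {
    a1 <- (a + b) / 2
    s  <- sqrt(a * b)                 # principal square root of a_n * b_n
    # right choice (Cox): |a1 - b1| <= |a1 + b1|, ties broken so Im(b1/a1) > 0
    b  <- if (Mod(a1 - s) < Mod(a1 + s) ||
              (Mod(a1 - s) == Mod(a1 + s) && Im(s / a1) > 0)) s else -s
    a  <- a1
    if (Mod(a - b) <= tol * Mod(a)) break
  }
  (a + b) / 2
}
agm(1 + 0i, 0.5 + 0.3i)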
|
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-1202/msg00048.html","timestamp":"2014-04-16T13:18:52Z","content_type":null,"content_length":"6091","record_id":"<urn:uuid:a0f05a85-fe57-4c67-ac7f-b15e69a21f82>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parallel Session P11: 1545-1730, Tuesday 13th April 2010
The Dark Art of Dark Matter - Session 1
Location: Bute
The goal of this session is to understand the mysterious dark matter that comprises 95% of the matter content of the Universe. The UK is at the forefront of research in the fields of Cosmology and
Particle Physics with strengths in both theory and observations. We welcome any researcher in theory or observation, from all fields, working towards the goal of understanding dark matter. Together
in this session we will discuss key questions such as: “how can we better exploit synergies between direct and indirect detection?” To aid discussion, in addition to the two scheduled parallel
sessions there will be a 20-30 person break-out session, to be held on the morning of 14th April. Our aim is to foster a collaborative environment at the meeting that will lead to strong UK-led
research in this rapidly developing field.
• Catherine Heymans (IfA, University of Edinburgh)
• Richard Massey (IfA, University of Edinburgh)
• Tom Kitching (IfA, University of Edinburgh)
13 April, 15:45 Direct Detection of Dark Matter
Pawel Majewski (Rutherford Appleton Laboratory)
13 April, 16:10 Dark Matter in the Milky Way
Justin Read (University of Leicester)
13 April, 16:35 The impact of dark matter cusps and cores on the satellite galaxy population
Jorge Penarrubia (University of Cambridge, IoA)
13 April, 16:50 Applications of a New and Rapid Simulations Method for Weak Lensing Analysis
Alina Kiessling (University of Edinburgh)
13 April, 17:00 Wave-mechanics of Large Scale Structure
Edward Thomson (University of Glasgow)
13 April, 17:10 Weighing galaxies using gravitationally lensed SNLS supernovae
Jakob Jonsson (University of Oxford)
13 April, 17:25 Poster adverts
Sudden Future Singularity models as an alternative to Dark Energy?
Hoda Ghodsi (University of Glasgow)
LoCuSS: Weak Lensing Analysis of 21 Galaxy Clusters at z=0.15-0.3
Victoria Hamilton-Morris (University of Birmingham)
A New Pixon Weak Lensing Cluster Mass Reconstruction Method
Daniel John (Durham University)
Extreme value statistics: predicting the frequency of the densest clusters and sparsest voids
Olaf Davis (Oxford)
The impact of delensing gravitational wave standard sirens on determining cosmological parameters
Craig Lawrie (University of Glasgow)
Do dark matter halos have cusps?
Chris Brook (Jeremiah Horrocks Institute, UCLan)
Probing the dark matter halos of early-type galaxies via lensing
Ignacio Ferreras (MSSL/UCL)
Probing the Dark Universe with Weak Lensing Tomography and the CFHTLS
Catherine Heymans (IfA, University of Edinburgh)
Bright Ideas and Dark Thoughts: "Universal Baryonic Scale" at "Maximum Halo Gravity"
Hongsheng Zhao (U. of St Andrews (SUPA))
The new path to time delays?
Gülay Gürkan (The Universtiy of Manchester)
TeVeS and the straight arc of A2390
Martin Feix (University of St Andrews)
Talk Abstracts
Direct Detection of Dark Matter
Majewski, Pawel
Rutherford Appleton Laboratory
13 April, 15:45
Dark Matter is one of the greatest mysteries in science. Although it makes up five sixths of the matter content of the Universe, it has never been directly detected. For several decades, the hunt for
detection of the dark matter particle has accelerated and motivated many ingenious experiments around the world. I will review existing and planned dark matter direct detection experiments, focussing on
the variety of implemented experimental techniques.
Dark Matter in the Milky Way
Read, Justin
University of Leicester
13 April, 16:10
Experiments designed to detect a dark matter particle in the laboratory need to know the very local phase space density of dark matter, both to motivate detector design and to interpret any future
signal. I discuss recent progress on estimating this and its implications.
The impact of dark matter cusps and cores on the satellite galaxy population
Penarrubia, Jorge, A. Benson, M. Walker, G. Gilmore, A. McConnachie, L. Mayer
University of Cambridge, IoA
13 April, 16:35
In this talk I will show the results from N-body simulations that study the effects that a divergent (i.e. "cuspy") dark matter (DM) profile introduces on the tidal evolution of dwarf spheroidal
galaxies (dSphs). I will show that the resilience of dSphs to tidal stripping is extremely sensitive to the slope of the inner halo profile. I will also outline the results from calculations that
simulate the hierarchical build-up of spiral galaxies assuming different halo profiles and disc masses, which show that the size-mass relation established from Milky Way (MW) dwarfs strongly supports
the presence of cusps in the majority of these systems, as cored models systematically underestimate the masses of the known Ultra-Faint dSphs. These models also indicate that a massive M31 disc may
explain why many of its dSphs fall below the size-mass relationship derived from MW dSphs. We also use our models to constrain the mass threshold below which star formation is suppressed in DM
haloes, finding that luminous satellites must be accreted with masses above 10^8--10^9 M_sol in order to explain the size-mass relation observed in MW dwarfs.
Applications of a New and Rapid Simulations Method for Weak Lensing Analysis
Kiessling, Alina, Andy Taylor, Alan Heavens
University of Edinburgh
13 April, 16:50
Gravitational lensing is sensitive to all gravitating mass - both Baryonic and Dark Matter - making it the ideal tool to study Cosmology independently of any assumptions about the dynamical or
thermal state of objects. The Next Generation of Survey Telescope will observe more of the sky than ever before and the volume of data they will produce is unprecedented. To realise the potential of
these surveys, experiments require full large end-to-end simulations of the Surveys to test analysis methods and provide realistic errors. We have developed a new line-of-sight integration approach
to simulating 3-D Weak Gravitational Lens Shear and Convergence fields. These light cones are faster to generate than traditional ray-tracing, so we can run an ensemble of simulations allowing us to
generate covariance matricies for cosmological parameter estimation and statistical analysis. This presentation will introduce our new analysis method and discuss some of its many applications in
weak lensing experiments.
Wave-mechanics of Large Scale Structure
Thomson, Edward, Martin Hendry, Luis Teodoro
University of Glasgow
13 April, 17:00
Simulations of Large Scale Structure using N-Body codes have helped define the Lambda-CDM paradigm. While N-Body codes remain the most popular approach, a lesser known method was developed in the
early 90's that formulates the equations describing large scale structure (LSS) formation within a wave-mechanical framework. This method couples the Schroedinger equation with the Poisson equation
of gravity. The wavefunction encapsulates information about the density and velocity fields as a single continuous field with complex values.
In this presentation I will review some of the key features of the wave-mechanical approach to LSS. The method avoids the addition of an artificial smoothing parameter, as seen in N-body codes, and
is able to follow 'hot streams' - something that is difficult to do with phase space methods. The method is competitive with N-body codes in terms of processing time. The wave-mechanical approach can
be interpreted in two ways: (1) as a purely classical system that includes more physics than just gravity, or (2) as the representation of a dark matter field, perhaps an Axion field, where the de
Broglie wavelength of the particles is large.
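For reference, the coupled system this abstract refers to is usually written in Widrow-Kaiser form (the notation here is ours, with ν an effective ħ/m smoothing parameter):

  iν ∂ψ/∂t = −(ν²/2) ∇²ψ + V ψ
  ∇²V = 4πG (|ψ|² − ρ̄)

where ρ = |ψ|² plays the role of the matter density and ρ̄ is its mean; the Madelung substitution ψ = √ρ e^{iφ/ν} recovers the density and velocity fields mentioned above.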
Weighing galaxies using gravitationally lensed SNLS supernovae
Jonsson, Jakob, M. Sullivan, I. Hook, S. Basa, R. Carlberg, A. Conley, D. Fouchez, D.A. Howell, K. Perrett, C. Pritchet
University of Oxford
13 April, 17:10
Gravitational lensing by foreground matter can magnify or de-magnify background sources. Standard candles, like type Ia supernovae (SNe Ia), can therefore be used to weigh the foreground galaxies via
gravitational lensing. We present constraints on dark matter halo properties obtained using 175 SNe Ia from the first 3-years of the Supernova Legacy Survey (SNLS). The dark matter halo of each
galaxy in the foreground is modelled as a truncated singular isothermal sphere with velocity dispersion and truncation radius obeying luminosity dependent scaling laws. We cannot constrain the
truncation radius, but the best-fitting velocity dispersion scaling law agrees well with results from galaxy-galaxy lensing measurements. The normalisation of the velocity dispersion scaling laws is
furthermore consistent with empirical Faber-Jackson and Tully-Fisher relations. We have also measured the brightness scatter of SNe Ia due to gravitational lensing. This scatter contributes only
little to the SNLS sample (z < 1), but would contribute significantly at z > 1.6.
Poster Abstracts
Do dark matter halos have cusps?
Brook, Chris
Jeremiah Horrocks Institute, UCLan
Pure N-body simulations have shown that cold dark matter halos have steep inner density profiles, or "cusps". Yet observations of rotation curves of disk galaxies infer a flatter, cored inner density
profile. Using self consistent cosmological galaxy formation simulations, we show that the inclusion of baryons, which are dynamically significant in the inner regions of halos, can dramatically
alter the profile of the dark matter. Our simulations result in "bulgeless" disk galaxies with dark matter cores.
Extreme value statistics: predicting the frequency of the densest clusters and sparsest voids
Davis, Olaf, Stephane Colombi, Julien Devriendt, Joe Silk
One interesting property of random fields - such as the observed density field of the universe - is the distribution of their highest maxima and lowest minima. In particular, the maxima of the dark
matter field translate to the locations of the most massive clusters, which exist in the highly non-linear regime of gravitational clustering and probe the evolution of the power spectrum under gravity.
To relate the theoretical maxima of the DM density with observed maxima in a small region of the universe requires an understanding of the behaviour of sample maxima: this is the domain of extreme
value or Gumbel statistics.
We present analytical calculations which can predict the distribution of such maxima and minima from the underlying power spectrum, and demonstrate a good agreement with simulated Gaussian fields. We
also compare our predictions to the Horizon 4Pi simulation, a cosmological scale dark matter simulation containing 70 billion particles.
Potential applications will be discussed, including likelihood constraints on void cosmologies, and application to observed CMB anomalies such as the cold spot and 'Axis of Evil'.
TeVeS and the straight arc of A2390
Feix, Martin, HongSheng Zhao, Cosimo Fedeli, José Luis Garrido Pestaña, Henk Hoekstra
University of St Andrews
We suggest to test the combined framework of tensor-vector-scalar theory (TeVeS) and massive neutrinos in galaxy clusters via gravitational lensing, choosing the system A2390 with its notorious
straight arc as an example. Adopting quasi-equilibrium models for the matter content of A2390, we show that such configurations cannot produce the observed image. Generally, nonlinear effects induced
by the TeVeS scalar field are very small, meaning that curl effects are basically negligible. Based on this result, we outline a systematic approach on how to model strong lenses in TeVeS, which is
demonstrated for A2390. Compared to general relativity, we conclude that discrepancies between the independent mass estimates from lensing and X-ray observations are amplified. Finally, we address
the question of the model’s feasibility and possible implications/problems for TeVeS.
Probing the dark matter halos of early-type galaxies via lensing
Ferreras, Ignacio
The combination of gravitational lensing on galaxy scales and stellar population synthesis enables us to constrain the baryon fraction in galaxies, probing the interplay between the dark matter halo
and the baryon physics transforming gas into stars. I will present recent work based on a sample of strong (early-type) lenses from the Castles survey. The combination of a non-parametric approach to
the lensing data and the analysis of the HST/NICMOS images of the lens give a remarkably good agreement between baryon and lensing mass in the inner regions. The radial trend of the baryon fraction
out to 4-5 Re is shown, along with its connection with the Fundamental Plane. I will put this result in context with recent estimates of the global baryon fraction in galaxies.
Sudden Future Singularity models as an alternative to Dark Energy?
Ghodsi, Hoda, Dr Martin A. Hendry
University of Glasgow
One of the key challenges facing cosmologists today is the nature of the mysterious dark energy introduced in the standard model of cosmology to account for the current accelerating expansion of the
universe. In this regard, many other non-standard cosmologies have been proposed which would eliminate the need to explicitly include any form of dark energy. One such model is the Sudden Future
Singularity (SFS) model, in which no equation of state linking the energy density and the pressure in the universe is assumed to hold. In this model it is possible to have a blow up of the pressure
occurring in the near future while the energy density would remain unaffected. The particular evolution of the scale factor of the Universe in this model that results in a singular behaviour of the
pressure also admits acceleration in the current era as required. In this contribution I will present the results of the tests of an example SFS model against the current data from high redshift
supernovae, baryon acoustic oscillations (BAO) and the cosmic microwave background (CMBR). We explore the limits placed on the SFS model parameters by the current data through employing grid-based
and MCMC search methods. This lets us discuss the viability of the SFS model in question as an alternative to the standard concordance cosmology.
The new path to time delays?
Gürkan, Gülay, Neal Jackson
The Universtiy of Manchester
To better understand the universe and its dynamics, the Hubble constant is a crucial parameter which provides valuable information about the expansion rate of the universe. So far, the Hubble
constant has been determined by various methods such as Cepheid variables by utilizing HST Key Project data and WMAP. The accuracy of the Hubble constant value is not better than 10% due to intrinsic
constraints/assumptions of each method.
Gravitational lens systems provide another probe of the Hubble constant using time delay measurements. Current investigations of time delay lenses have resulted in different values of Ho ranging from
50-80 km/s/Mpc. The main problem in gravitational lens systems is that requires a mass model for the lens which is difficult to measure independently unless observational constraints are available.
Moreover, in order to see time delays clearly, fluxes of sources have to be variable. On the other hand, using a typical value of the Hubble constant and measured time delays enable us to determine a
better/more accurate mass model for the lens galaxy.
Here we attempt to develop a new and more efficient method for measuring time delays, which does not require regular monitoring with a high-resolution interferometer array or with optical telescopes.
Instead, the WSRT is used for flux monitoring of double image lens systems in which the brighter image is expected to vary first. Triggered VLA observations can then be used to catch the subsequent
variability of the fainter image. We present preliminary results from such a program.
LoCuSS: Weak Lensing Analysis of 21 Galaxy Clusters at z=0.15-0.3
Hamilton-Morris, Victoria, G.P. Smith, E. Egami, T. Targett, C. Haines, A. Sanderson...
University of Birmingham
The Local Cluster Substructure Survey (LoCuSS) is a multi-wavelength survey of 100 X-ray luminous galaxy clusters at 0.15 < z < 0.3.
Probing the Dark Universe with Weak Lensing Tomography and the CFHTLS
Heymans, Catherine, Emma Grocutt, Alan Heavens, Tom Kitching, CFHTLenS team
IfA, University of Edinburgh
Weak gravitational lensing is a powerful technique for measuring the properties of dark matter and dark energy from their gravitational effects alone. The Canada-France-Hawaii Telescope Legacy Survey
is currently the largest deep optical data set for weak lensing analysis covering 172 square degrees over 5 optical bands. We present an investigation into the optimal tomographic three-dimensional
analysis of the CFHTLS weak lensing signal that minimises the impact of systematics arising from intrinsic galaxy alignments. With systematics under control, since the influence of dark energy on
structure growth is redshift-dependent, tomographic analysis of redshift bins will allow us to constrain the properties of the Dark Universe.
A New Pixon Weak Lensing Cluster Mass Reconstruction Method
John, Daniel, V. R. Eke, L. F. A. Teodoro
Durham University
We present a new pixon-based method for cluster mass reconstructions using weak gravitational lensing. Pixons are an adaptive smoothing scheme for image reconstruction, where the local smoothing
scale is determined by the data. We also introduce a new goodness-of-fit statistic based on the autocorrelation of the residuals of the shear field. We test our algorithm on simulated lensing
datasets using NFW halos with and without substructure. We compare our results to previous methods such as Kaiser-Squires(KS), Maximum Entropy(ME) and the Intrinsic Correlation Function(ICF) and show
an increased accuracy in the mass reconstructions. We finally discuss future applications to data.
The impact of delensing gravitational wave standard sirens on determining cosmological parameters
Lawrie, Craig, Martin Hendry, Fiona Speirits, Joshua Logue
University of Glasgow
Recently there has been much attention in the cosmology literature on the potential future use of compact binary inspirals, so-called gravitational wave standard sirens, as high precision probes of
the luminosity distance redshift relation.
It has been recognised, however, that weak lensing due to intervening large scale structure will significantly degrade the precision of standard sirens. Shapiro et al (2010) present a method for
"de-lensing" sirens, by combining gravitational wave observations with maps of cosmic shear and flexion along each siren's line of sight.
In this presentation we explore the impact of this de-lensing procedure for constraining cosmological parameters. Using Monte Carlo simulations we investigate the accuracy with which the
dimensionless density parameters may be determined, before and after de-lensing, with future data from the proposed LISA satellite and Einstein Telescope.
Bright Ideas and Dark Thoughts: "Universal Baryonic Scale" at "Maximum Halo Gravity"
Zhao, Hongsheng, Gianfranco Gentile, Benoit Famaey, Paolo Salucci, Andrea Maccio, Baojiu Li, Henk Hoekstra, Martin Feix
U. of St Andrews (SUPA)
I will interpret a very curious conspiracy of dark-bright matter in galaxies (Gentile et al 2009 Nature), insensitive to the sizes and formation histories of the observed galaxies: the baryons are
concentrated to approximately the same surface density at the very position where the halo offers locally maximum gravity. While normal gravitational and gas feedback processes must always occur, it
is difficult to forge a feedback history-independent universal scale unless there is some help from possibly new physics in the Dark. A partial confirmation is seen in simulations of N-body where the
matter is coupled to a cosmological scalar field (Zhao et al. 2009, ApJ Letters).
|
{"url":"http://www.astro.gla.ac.uk/nam2010/p11.php","timestamp":"2014-04-19T15:04:45Z","content_type":null,"content_length":"27015","record_id":"<urn:uuid:5e4d9f7a-79cd-4375-afe3-9dd2d3942502>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 20
- Journal of Cryptology , 1991
"... We show how a pseudo-random generator can provide a bit commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the
assumption of the existence of pseudo-random generators suffices to assure amortized O(1) bits of communicat ..."
Cited by 228 (15 self)
Add to MetaCart
We show how a pseudo-random generator can provide a bit commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the
assumption of the existence of pseudo-random generators suffices to assure amortized O(1) bits of communication per bit commitment.
- In Crypto , 2002
"... In 1992, Dwork and Naor proposed that e-mail messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk e-mail, now known as spam. They proposed specific
CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across ma ..."
Cited by 82 (2 self)
Add to MetaCart
In 1992, Dwork and Naor proposed that e-mail messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk e-mail, now known as spam. They proposed specific
CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than
CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [8].
- Journal of Cryptology , 1993
"... We show very efficient constructions for a pseudo-random generator and for a universal one-way hash function based on the intractability of the subset sum problem for certain dimensions.
(Pseudo-random generators can be used for private key encryption and universal one-way hash functions for sign ..."
Cited by 78 (8 self)
Add to MetaCart
We show very efficient constructions for a pseudo-random generator and for a universal one-way hash function based on the intractability of the subset sum problem for certain dimensions.
(Pseudo-random generators can be used for private key encryption and universal one-way hash functions for signature schemes). The increase in efficiency in our construction is due to the fact that
many bits can be generated/hashed with one application of the assumed one-way function. All our construction can be implemented in NC using an optimal number of processors. Part of this work done
while both authors were at UC Berkeley and part when the second author was at the IBM Almaden Research Center. Research supported by NSF grant CCR 88 - 13632. A preliminary version of this paper
appeared in Proc. of the 30th Symp. on Foundations of Computer Science, 1989. 1 Introduction Many cryptosystems are based on the intractability of such number theoretic problems such as factoring and
discrete logarit...
- Journal of Cryptology , 1994
"... A signature scheme is existentially unforgeable if, given any polynomial (in the security parameter) number of pairs (m 1 ; S(m 1 )); (m 2 ; S(m 2 )); : : : (m k ; S(m k )) where S(m) denotes
the signature on the message m, it is computationally infeasible to generate a pair (m k+1 ; S(m k+1 )) fo ..."
Cited by 45 (5 self)
Add to MetaCart
A signature scheme is existentially unforgeable if, given any polynomial (in the security parameter) number of pairs (m_1, S(m_1)), (m_2, S(m_2)), ..., (m_k, S(m_k)), where S(m) denotes the signature on the message m, it is computationally infeasible to generate a pair (m_{k+1}, S(m_{k+1})) for any message m_{k+1} ∉ {m_1, ..., m_k}. We present an existentially unforgeable signature scheme that for a reasonable setting of parameters requires at most 6 times the amount of time needed to generate a signature using "plain" RSA (which is not existentially unforgeable). We point out applications where our scheme is desirable. Preliminary version appeared in Crypto '94.
- In Crypto '98, LNCS 1462 , 1998
"... Signature schemes that are derived from three move identification schemes such as the Fiat-Shamir, Schnorr and modified ElGamal schemes are a typical class of the most practical signature
schemes. The random oracle paradigm [1, 2, 12] is useful to prove the security of such a class of signature sche ..."
Cited by 40 (1 self)
Add to MetaCart
Signature schemes that are derived from three move identification schemes such as the Fiat-Shamir, Schnorr and modified ElGamal schemes are a typical class of the most practical signature schemes.
The random oracle paradigm [1, 2, 12] is useful to prove the security of such a class of signature schemes [4, 12]. This paper presents a new key technique, "ID reduction", to show the concrete
security result of this class of signature schemes under the random oracle paradigm. First, we apply this technique to the Schnorr and modified ElGamal schemes, and show the "concrete security
analysis" of these schemes. We then apply it to the multi-signature schemes.
- In Asiacrypt ’2000, LNCS 1976 , 2000
"... . Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive
chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target c ..."
Cited by 40 (3 self)
Add to MetaCart
. Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive
chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target ciphertext. We also prove security against the novel one-more-decyption attack. Our security
proofs are in a new model, corresponding to a combination of two previously introduced models, the Random Oracle model and the Generic model. The security extends to the distributed threshold version
of the scheme. Moreover, we propose a very practical scheme for private information retrieval that is based on blind decryption of ElGamal ciphertexts. 1 Introduction and Summary We analyse a very
practical public key cryptosystem in terms of its security against the strong adaptive chosen ciphertext attack (CCA) of [RS92], in which an attacker can access a decryption oracle on arbitrary
ciphertexts (ex...
- ICICS 2001, LNCS 2229 , 2001
"... We present a novel parallel one-more signature forgery against blind Okamoto-Schnorr and blind Schnorr signatures in which an attacker interacts some l times with a legitimate signer and
produces from these interactions l + 1 signatures. Security against the new attack requires that the following RO ..."
Cited by 24 (1 self)
Add to MetaCart
We present a novel parallel one-more signature forgery against blind Okamoto-Schnorr and blind Schnorr signatures in which an attacker interacts some l times with a legitimate signer and produces from these interactions l + 1 signatures. Security against the new attack requires that the following ROS-problem is intractable: find an overdetermined, solvable system of linear equations modulo q with random inhomogeneities (right sides). There is an inherent weakness in the security result of Pointcheval and Stern. Theorem 26 [PS00] does not cover attacks with 4 parallel interactions for elliptic curves of order 2^200. That would require the intractability of the ROS-problem, a plausible but novel complexity assumption. Conversely, assuming the intractability of the ROS-problem, we show that Schnorr signatures are secure in the random oracle and generic group model against the one-more signature forgery.
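Since several of these abstracts analyse (blind and plain) Schnorr signatures, a minimal R sketch of the plain scheme may help fix notation: the key equations are c = H(m, g^k) and s = k + c·x mod q, verified via g^s · y^(-c) = g^k. Toy parameters and a trivial stand-in "hash" only; nothing here is secure:

modpow <- function(base, exp, mod) {         # square-and-multiply
  r <- 1; base <- base %% mod
  while (exp > 0) {
    if (exp %% 2 == 1) r <- (r * base) %% mod
    base <- (base * base) %% mod
    exp <- exp %/% 2
  }
  r
}
p <- 2267; q <- 103                          # q divides p - 1 = 22 * 103
g <- modpow(2, (p - 1) %/% q, p)             # generator of the order-q subgroup
H <- function(m, r) (sum(utf8ToInt(m)) * 131 + r) %% q   # toy "hash", NOT cryptographic
x <- 47; y <- modpow(g, x, p)                # secret key x, public key y = g^x
k <- 29; r <- modpow(g, k, p)                # signer's nonce commitment r = g^k
ch <- H("hello", r); s <- (k + ch * x) %% q  # the signature is (ch, s)
r_check <- (modpow(g, s, p) * modpow(y, q - ch, p)) %% p  # y^(-ch) = y^(q-ch)
r_check == r && ch == H("hello", r_check)    # TRUE: the signature verifies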
- IN CRYPTO ’97: PROCEEDINGS OF THE 17TH ANNUAL INTERNATIONAL CRYPTOLOGY CONFERENCE ON ADVANCES IN CRYPTOLOGY , 1997
"... Blind digital signatures were introduced by Chaum. In this paper, we show how security and blindness properties for blind digital signatures, can be simultaneously defined and satisfied,
assuming an arbitrary one-way trapdoor permutation family. Thus, this paper presents the first complexity-ba ..."
Cited by 21 (0 self)
Add to MetaCart
Blind digital signatures were introduced by Chaum. In this paper, we show how security and blindness properties for blind digital signatures, can be simultaneously defined and satisfied, assuming an
arbitrary one-way trapdoor permutation family. Thus, this paper presents the first complexity-based proof of security for blind signatures.
- IN ADVANCES IN CRYPTOLOGY— CRYPTO ’00 , 2000
"... We introduce and construct timed commitment schemes, an extension to the standard notion of commitments in which a potential forced opening phase permits the receiver to recover (with effort)
the committed value without the help of the committer. An important application of our timed-commitment sche ..."
Cited by 13 (0 self)
Add to MetaCart
We introduce and construct timed commitment schemes, an extension to the standard notion of commitments in which a potential forced opening phase permits the receiver to recover (with effort) the
committed value without the help of the committer. An important application of our timed-commitment scheme is contract signing: two mutually suspicious parties wish to exchange signatures on a
contract. We show a two-party protocol that allows them to exchange RSA or Rabin signatures. The protocol is strongly fair: if one party quits the protocol early, then the two parties must invest
comparable amounts of time to retrieve the signatures. This statement holds even if one party has many more machines than the other. Other applications, including honesty preserving auctions and
collective coin-flipping, are discussed.
- In The Mathematics of Public-Key Cryptography. The Fields Institute , 1999
"... Based on a novel proof model we prove security for simple discrete log cryptosystems for which security has been an open problem. We consider a combination of the random oracle (RO) model and
the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal gro ..."
Cited by 10 (2 self)
Add to MetaCart
Based on a novel proof model we prove security for simple discrete log cryptosystems for which security has been an open problem. We consider a combination of the random oracle (RO) model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps including l sequential interactions with the signer cannot produce l+1 signatures with a better probability than (t choose 2)/q. We also characterize the different power of sequential and of parallel attacks. Secondly, we prove a simple ElGamal based encryption to be secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. This encryp...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1154626","timestamp":"2014-04-17T16:02:56Z","content_type":null,"content_length":"38533","record_id":"<urn:uuid:d1486118-6185-44b6-aca3-b779f7b57c4b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Where’s the Magic? (EMD and SSA in R)
September 17, 2013
By Wayne
When I first heard of SSA (Singular Spectrum Analysis) and the EMD (Empirical Mode Decomposition) I though surely I’ve found a couple of magical methods for decomposing a time series into component
parts (trend, various seasonalities, various cycles, noise). And joy of joys, it turns out that each of these methods is implemented in R packages: Rssa and EMD.
In this posting, I’m going to document some of my explorations of the two methods, to hopefully paint a more realistic picture of what the packages and the methods can actually do. (At least in the
hands of a non-expert such as myself.)
EMD (Empirical Mode Decomposition) is, as the name states, empirical. It makes intuitive sense and it works well, but there isn’t as yet any strong theoretical foundation for it. EMD works by finding
intrinsic mode functions (IMFs), which are oscillatory curves that have (almost) the same number of zero crossings as extrema, and where the average maxima and minima cancel to zero. In my mind,
they’re oscillating curves that swing back and forth across the X axis, spending an equal amount of time above and below the axis, but not having any fixed frequency or symmetry.
EMD is an iterative process, which pulls out IMFs starting with higher frequencies and leaving behind a low-passed time series for the next iteration, finally ending when the remaining time series
cannot contain any more IMFs — this remainder being the trend. Each step of the iteration begins with fitting curves to the maxima and minima of the remaining time series, creating an envelope. The
envelope is then averaged, resulting in a proto-IMF which is iteratively refined in a “sifting” process. There are a choice of stopping criteria for the overall iterations and for the sifting
iterations. Since the IMF’s are locally adaptive, EMD has no problems with with non-stationary and non-linear time series.
The magic of IMFs is that, being adaptive they tend to be interpretable, unlike non-adaptive bases which you might get from a Fourier or wavelet analysis. At least that’s the claim. The fly in the
ointment is mode mixing: when one IMF contains signals of very different scales, or one signal is found in two different IMFs. The best solution to mode mixing is the EEMD (Ensemble EMD), which
calculates an ensemble of results by repeatedly adding small but significant white noise to the original signal and then processing each noise-augmented signal via EMD. The results are then averaged
(and ideally subjected to one last sifting process, since the average of IMFs is not guaranteed to be an IMF). In my mind, this works because the white noise cancels out in the end, but it tends to
drive the signal away from problematic combinations of maxima and minima that may cause mode mixing. (Mode mixing often occurs in the presence of an intermittent component to the signal.)
The R package EMD implements basic EMD, and the R package hht implements EEMD, so you’ll want to install both of them. (Note that EMD is part of the Hilbert-Huang method for calculating instantaneous
frequencies — a super-FFT if you will — so these packages support more than just EMD/EEMD.)
As the Wikipedia page says, almost every conceivable use of EMD has been patented in the US. EMD itself is patented by NOAA scientists, and thus the US government.
SSA (Singular Spectrum Analysis) is a bit less empirical than EMD, being related to EOF (Empirical Orthogonal Function analysis) and PCA (Principal Component Analysis).
SSA is a subspace-based method which works in four steps. First, the user selects a maximum lag L (1 < L < N, where N is the number of data points), and SSA creates a trajectory matrix with L columns
(lags 0 to L-1) and N – L + 1 rows. Second, SSA calculates the SVD of the trajectory matrix. Third, the user uses various diagnostics to determine what eigenvectors are grouped to form bases for
projection. And fourth, SSA calculates an elementary reconstructed series for each group of eigenvectors.
The ideal grouping of eigenvectors is in pairs, where each pair has a similar eigenvalue, but differing phase which usually corresponds to sin-cosine-like pairs. The choice of L is important, and
involves two considerations: 1) if there is a periodicity to the signal, it’s good to choose an L that is a multiple of the period, and 2) L should be a little less than N/2, to balance the error and
the ability to resolve lower frequencies.
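The four steps are compact enough to sketch directly. The following Python outline (my own illustration, not the Rssa implementation) mirrors them with numpy; the elementary series sum back to the original signal exactly:

import numpy as np

def ssa_decompose(x, L):
    # Step 1: trajectory (Hankel) matrix with N-L+1 rows of lags 0..L-1
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.array([x[i:i + L] for i in range(K)])
    # Step 2: SVD of the trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Steps 3-4: one elementary reconstructed series per singular triple,
    # obtained by diagonal (Hankel) averaging of each rank-1 piece
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        flipped = Xk[::-1, :]   # anti-diagonals of Xk become diagonals
        comps.append(np.array([flipped.diagonal(j - (K - 1)).mean()
                               for j in range(N)]))
    return np.array(comps)      # comps.sum(axis=0) recovers x

The grouping just described is then done by hand, e.g. comps[0] + comps[1] for a trend.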
The two flies in SSA’s ointment are: 1) issues relating to complex trends, and 2) the inability to differentiate two components that are close in frequency. For the first problem, one proposed
solution is to choose a smaller L that is a multiple of any period, and use that to denoise the signal, with a normal SSA operating on the denoised signal. For the second problem, several iterative
methods have been proposed, though the R package does not implement them.
The R package Rssa implements basic SSA. Rssa is very nice and has quite a few visualization methods, and to be honest I prefer the feel I get from it over the EMD/hht packages. However, while it
allows for manually working around issue 1 from the previous paragraph, it doesn’t address issue 2 which puts more of the burden on the user to find groupings — and even then this often can’t
overcome this problem.
SSA seems to have quite a few patents surrounding it as well, though it appears to have deeper historical roots than EMD, so it might be a bit less encumbered overall than EMD.
Let’s decompose some stuff!
Having talked about each method, let’s walk through the decomposition of a time series, to see how they compare. Let’s use the gas sales data from the forecast package:
data (gas, package="forecast")
And we’ll use EMD first:
library (EMD)
library (hht)
ee <- EEMD (c(gas), time (gas), 250, 100, 6, "trials")
eec <- EEMDCompile ("trials", 100, 6)
I’m letting several important parameters default, and I’ll discuss some of them in the next section. We’ve run EEMD with explicit parameter choices of: noise amplitude of 250, ensemble size of 100,
up to 6 IMFs, and store each run in the directory trials. (EEMD is a bit awkward in that it stores these runs externally, but with a huge dataset or ensemble it’s probably necessary.) This yields a
warning message, I believe because some members of the ensemble have the requested 6 IMFs, but some only have 5, and I assume that it is leaving them out. I have encountered such issues when doing my
own EEMD before hht came out: not all members of each ensemble have the same number of IMFs, as the white noise drives them in more complex or simpler directions.
Let’s do the same thing with SSA:
library (Rssa)
gas.ssa <- ssa (gas, L=228)
gas.rec <- reconstruct (gas.ssa, list (T=1:2, U=5, M96=6:7, M12=3:4, M6=9:10, M4=14:15, M2.4=20:21))
We’ve chosen a lag of 228, which is the multiple of 12 (monthly data) just below half of the time series’ length. For the reconstruction, I’ve chosen the pair 1 and 2 (1:2) as the trend, pair 3:4
appears to be the yearly cycle, and so on, naming each one with names that make sense to me: “T” for “Trend”, “U” for “Unknown”, “M6” for what appears to be a 6-month cycle, etc. I’ll come back to
some diagnostic plots for SSA that gave me the idea to use these pairs, but first let’s compare results. The trends appear similar:
plot (eec$tt, eec$averaged.residue[,1], type="l")
lines (gas.rec$T, col="red")
though the SSA solution isn’t flat in the pre-1965 era, and shows some high-frequency mixing in the 1990s. The yearly cycles also appear similar:
plot (eec$tt, eec$averaged.imfs[,2], type="l")
lines (gas.rec$M12, col="red")
with the EEMD solution showing more variability from year to year, which might be more realistic or might simply be an artifact. We could compare other components, though there is not necessarily a one-to-one correspondence because I chose groupings in the SSA reconstruction. One last comparison is a roughly eight-year cycle that both methods found, where again the EEMD result is more variable:
plot (eec$tt, eec$averaged.imfs[,4], type="l")
lines (gas.rec$M96, col="red")
SSA requires more user analysis to implement, and also seems as if it would benefit more from domain knowledge. If I knew better how to trade off the various diagnostic outputs and knew a bit more
about the natural gas trade, I believe I could have obtained better results with SSA. As it stands, I applied both methods to a domain I do not know much about and EEMD seems to have defaulted
better. Rssa is also handicapped in comparison to EEMD via hht because basic SSA has similar problems to basic EMD, though hopefully Rssa will implement extensions to the algorithm, such as those
suggested in Golyandina & Shlemov, 2013, placing them on a more even footing.
Note from the R code for the graphs that SSA preserves the ts attributes of the original data, while EMD does not, which is one of several very convenient features.
OK, since I love graphs, let’s do one last comparison of denoising, where we skip my pair choices. The EEMD solution uses 6 components plus the residual (trend), for a total of 7. The rough
equivalent for SSA would then be 14 eigenvector pairs, so let’s just pick the first 14 eigenvectors and mix them all together and see what we get:
r <- reconstruct (gas.ssa, list (T=1:14))
plot (gas, lwd=3, col="gray")
lines (r$T, lwd=2)
Which matches well, except for the flat trend around 1965, and looks very smooth. The EEMD solution is (leaving out the first IMF to allow for a little smoothing):
plot (gas, lwd=3, col="gray")
lines (c(eec$tt), eec$averaged.residue + apply (eec$averaged.imfs[,2:6], 1, sum), lwd=2)
which is also reasonable, but it’s definitely not as smooth and has some different things happening around 1986. Is this more realistic or quirky? Unfortunately, I can’t tell you. Is this a fair
comparison? I believe so, since EEMD was attempting to break the signal down into 7 components, plus noise, and SSA ordered the eigenvectors and I picked the first N. Is it informative? I’m not sure.
Twiddly knobs and dials
Let’s consider the dials and knobs that we get for each method. With SSA, we have the lag, L, the eigenvector groupings, and some choices of methods for things like how the SVD is calculated. With
EEMD, we have the maximum number of IMFs we want, a choice of five different stopping rules for sifting, a choice of five different methods for handling curve fitting at the ends of the time series,
four choices for curve fitting, the size of the ensemble, and the size of the noise.
So EEMD has many more up-front knobs, though the defaults are good and the only knobs we need to be very concerned with are the boundary handling and the size of the noise. The default boundary for
emd is “periodic”, which is probably not a good choice for any non-stationary time series, but fortunately the default for EEMD is “wave”, which seems to be quite clever at time series endpoints. The
noise needs to be sufficiently large to actually push the ensemble members around without being so large as to swamp the actual data.
On the other hand, SSA has a whole series of diagnostic graphs that need to be interpreted (sort of like Box-Jenkins for ARIMA on steroids) in order to figure out what eigenvectors should be grouped
together. For example, a first graph is a scree plot of the singular values:
plot (gas.ssa)
From which we can see the trend at point 1 (and maybe 2), and the obvious 3:4 and 6:7 pairings. We can then look at the eigenvectors, the factor vectors, reconstructed time series for each
projection, and phase plots for pairings. (Phase plots that have a regular shape — circle, triangle, square, etc — indicate that the pair is working like sin-cosine pairs with a frequency related to
the number of sides. This is preferred.) Here’s an example of the reconstructions from the first 20 eigenvectors:
plot (gas.ssa, "series", groups=1:20)
You can see clearly that 6:7 are of similar frequency and are phase-shifted, as are 18:19. You can also see that 11 has a mixing of modes where a higher frequency is riding on a lower carrier, and 12
has a higher frequency at the beginning and a lower frequency at the end. An extended algorithm could minimize these kinds of issues.
There are several other SSA diagnostic graphs I could make — the graph at the top of the article is a piece of a paired phase plot — but let’s leave it at that. Rssa also has functions for attempting
to estimate the frequencies of components, for “clusterifying” raw components into groups, and so on. Note also that Rssa supports lattice graphics and preserves time series attributes (tsp info),
which makes for a more pleasant experience.
I prefer the Rssa package. It has more and better graphics, it preserves time series attributes, and it just feels more extensive. It suffers, in comparison to EMD/hht because it does not (yet)
implement extended methods (ala Golyandina & Shlemov, 2013), and because SSA requires more user insight.
Neither method appears to be “magical” in real-world applications. With sufficient domain knowledge, they could each be excellent exploratory tools since they are data-driven rather than having fixed
bases. I hope that I’ve brought something new to your attention and that you find it useful!
Filed under:
Data Science
time series
|
{"url":"http://www.r-bloggers.com/wheres-the-magic-emd-and-ssa-in-r/","timestamp":"2014-04-18T20:46:26Z","content_type":null,"content_length":"52707","record_id":"<urn:uuid:9d8c7182-5518-4d5b-b377-86fa8554f356>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tech Powered Math: News, Graphing Calculator Reviews, Math Education Apps, Learn Math
Drawing Connections with TI-Nspire Functions vs Derivatives
Posted on November 11, 2013 by Lucas Allen
Recently, I was very honored to have the opportunity to speak at a professional development day at Illinois Central College. The math department there asked me to introduce them to the TI-Nspire, as
they are starting to see a significant number of their students show up with the Nspire. In addition to a basic overview of the device, I presented the ICC math department with series of activities I
might use in algebra, calculus, and statistics.
I thought TPM readers might be interested in the calculus activity, which involved graphing the derivative of a function on the TI-Nspire. In my experience, there are always some students that
struggle to make the graphical connections between a function and its first derivative, despite possible problem solving methods such as curve sketching, sign tables, and matching the function with a
list of possible derivatives.
In this video I offer another method to add to your bag of tricks, which is the TI-Nspire’s dynamic geometry software. You can easily use its “construct perpendicular” feature to try to make clearer
to students exactly where the key points are in the derivative that align to critical points in the original functions. Check out the video below for a more thorough explanation.
Filed Under: Calculus, Featured, TI-Nspire Lessons
The TI-Nspire and Microsoft Excel
Posted on October 9, 2013 by Lucas Allen
As I’ve mentioned a couple of times on Tech Powered Math, I’m leading the charge on our new statistics class at my school. It’s the culmination of several years of effort, including a feasibility
study, search for textbooks, curriculum adoption, and now implementation. It’s very exciting and exhausting at the same time. The class is…
Filed Under: Featured, Pre-calculus, TI Lessons
TI-84+C Asymptote Detection
Posted on July 12, 2013 by Lucas Allen
This isn’t at all a post I was planning to do, but again tonight I had another question on the Tech Powered Math Facebook page about the TI-84+C and asymptotes. If you press 2nd and FORMAT, you’ll
find an option called “Detect Asymptotes” that can be turned on or off. The people I’ve heard from…
Filed Under: TI-84 Lessons
How to Replace a TI-Nspire CX Battery
Posted on April 10, 2012 by Lucas Allen
I’ve put together a video tutorial on how to change the battery on the TI-Nspire CX.…
Filed Under: TI-nspire, TI-Nspire Lessons
TI-Nspire Trig Formulas Document
Posted on October 25, 2011 by Lucas Allen
The first formula sheet document I am releasing for the TI-Nspire is one for trigonometry formulas. There are quite a few trig identities for students to memorize, so I hope this will serve as a good
reference for you. This file will work for any up-to-date version of the TI-Nspire, including CAS or non-CAS, CX…
Filed Under: TI Lessons, TI-Nspire Lessons
TI-Nspire Formula Documents
Posted on October 25, 2011 by Lucas Allen
Over the next few weeks, I’m going to be releasing a series of TI-Nspire formula “cheat sheets.” Of course, cheat sheet is just a nickname for this kind of document, don’t actually use them to cheat.
Having the formulas on your graphing calculator can serve as a very valuable study guide when you are either…
Filed Under: Featured, TI-Nspire Lessons
Math Nspired update includes Algebra II, Calculus
Posted on October 20, 2010 by Lucas Allen
Texas Instruments today announced a significant update for their Math Nspired website. If you’re not familiar with Math Nspired, it’s a resource site for the TI-Nspire, primarily for math teachers,
but to a lesser extent for students.…
Filed Under: News, Resources, TI-Nspire Lessons
Video Lesson: Tables on the TI-84
Posted on September 17, 2010 by Lucas Allen
The table feature allows you to quickly scroll through an x vs. y chart on your TI-84. This lesson demonstrates how the table can be used from beginning algebra graphs to finding a limit in calculus.…
Filed Under: Algebra, Calculus, Pre-calculus, TI-84 Lessons
Video lesson: Tables on the TI-Nspire
Posted on September 17, 2010 by Lucas Allen
The table feature allows you to quickly scroll through an x vs. y chart on your TI-Nspire. This lesson demonstrates how the table can be used from beginning algebra graphs to finding a limit in calculus.…
Filed Under: Algebra, Calculus, Pre-calculus, TI-Nspire Lessons
Video lesson: Finding the intersection of two graphs on the TI-Nspire
Posted on September 7, 2010 by Lucas Allen
In this video lesson, you’ll learn how to find the intersection of two graphs on the TI-Nspire graphing calculator.…
Filed Under: Algebra, Pre-calculus, TI-Nspire Lessons
|
{"url":"http://www.techpoweredmath.com/category/ti-calculator-lessons/","timestamp":"2014-04-18T18:21:15Z","content_type":null,"content_length":"47244","record_id":"<urn:uuid:49962166-9a27-4daf-b2b7-9a513459d69b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
calculating the power spectrum
When you have a periodic signal you use the Fourier series to approximate the signal as a sum of sinusoidal signals at frequencies of 0 (the DC component), F (the frequency of the periodic signal),
and positive integer multiples of F. The coefficients of the Fourier series are the complex amplitudes of these sinusoids. Knowing the complex amplitude at each frequency you can calculate the power at each frequency (considering a load of 1 ohm), and this would be the power spectrum. Of course you can't calculate the whole spectrum because it has an infinity of components, but in this problem you are asked to calculate it from DC (0 Hz) to 50 MHz.
If you had a non-periodic signal you would have used the Fourier transform to calculate its spectral density of complex amplitude, from which you would have calculated the power spectral density.
So, when you have a periodic signal you calculate its power spectrum, and when you have a non-periodic signal you calculate its power spectral density.
This is because the spectrum of a periodic signal is discrete as opposed to that of a non-periodic signal which is continuous (and in the case of a continuous spectrum it's not handy to tell the
power at each frequency).
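To make this concrete, here is a small Python sketch (mine, not from the thread; the 10 MHz signal and 200 MHz sampling rate are made-up values) that estimates the discrete power spectrum of a periodic signal up to 50 MHz into a 1-ohm load:

import numpy as np

fs = 200e6                          # sampling rate: 200 MHz (hypothetical)
F = 10e6                            # fundamental frequency: 10 MHz
N = 4000                            # exactly 200 periods -> no spectral leakage
t = np.arange(N) / fs
x = np.cos(2*np.pi*F*t) + 0.5*np.cos(2*np.pi*3*F*t)   # toy periodic signal

c = np.fft.rfft(x) / N              # Fourier-series coefficients (two-sided scale)
freqs = np.fft.rfftfreq(N, d=1/fs)
power = 2 * np.abs(c)**2            # fold the negative-frequency half in...
power[0] /= 2                       # ...except at DC

for f, p in zip(freqs, power):
    if f <= 50e6 and p > 1e-9:
        print(f"{f/1e6:5.1f} MHz : {p:.4f} W")  # 10 MHz: 0.5000 W, 30 MHz: 0.1250 W

The spectrum is a set of discrete lines at multiples of F, each carrying A^2/2 watts for a component of amplitude A — exactly the "power at each frequency" described above.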
|
{"url":"http://www.physicsforums.com/showthread.php?t=185105","timestamp":"2014-04-16T22:17:59Z","content_type":null,"content_length":"32117","record_id":"<urn:uuid:a93a171e-b9c5-4723-be4d-2a5594bbf693>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Visual Insight
Catacaustic of a Cardioid
This image, drawn by Greg Egan, shows a cardioid and its catacaustic.
The cardioid is a heart-shaped curve traced by a point on the perimeter of a circle that is rolling around a fixed circle of the same radius. The catacaustic of a curve in the plane is the envelope
of rays emitted from some source and reflected off that curve.
If we shine rays from the cusp of the cardioid, the resulting catacaustic is a curve called the nephroid. This is the curve traced by a point on the perimeter of a circle that is rolling around a
fixed circle whose radius is twice as big!
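For concreteness, the standard epicycloid parametrizations (textbook formulas, with $r$ the radius of the rolling circle) are

$$\text{cardioid:}\quad (x,y) = \big(r(2\cos t - \cos 2t),\; r(2\sin t - \sin 2t)\big),$$

$$\text{nephroid:}\quad (x,y) = \big(r(3\cos t - \cos 3t),\; r(3\sin t - \sin 3t)\big),$$

where the cardioid’s fixed circle has radius $r$ and the nephroid’s has radius $2r$.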
Does this pattern continue? What is the catacaustic of a nephroid? And in general, given an algebraic curve of degree $d$, what can we say about the degree of its catacaustic? If you get stuck on
these puzzles, see:
• Greg Egan, Catacaustics, resultants and kissing conics.
For more on cardioids, nephroids and other curves formed by rolling one circle on another, see:
• John Baez, Rolling circles and balls.
It’s worth mentioning that we can get the nephroid as a catacaustic in another way, too: by shining light on a circle from a point at infinity, and letting the rays bounce off the inside of the circle.
Visual Insight is a place to share striking images that help explain advanced topics in mathematics. I’m always looking for truly beautiful images, so if you know about one, please drop a comment
here and let me know!
|
{"url":"http://blogs.ams.org/visualinsight/2013/10/01/catacaustic-of-a-cardioid/","timestamp":"2014-04-17T04:12:38Z","content_type":null,"content_length":"27190","record_id":"<urn:uuid:5375b3c1-ed9c-4456-a299-055a6e8e8961>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chasing Bottoms: A Case Study in Program Verification in the Presence of Partial and Infinite Values
Nils Anders Danielsson and Patrik Jansson
Proceedings of the 7th International Conference on Mathematics of Program Construction, MPC 2004, LNCS 3125, 2004. Accompanying library: Chasing Bottoms. Extended version with most proofs available,
see the first part of my licentiate thesis. [ps.gz, pdf]
This work is a case study in program verification: We have written a simple parser and a corresponding pretty-printer in a non-strict functional programming language with lifted pairs and functions
(Haskell). A natural aim is to prove that the programs are, in some sense, each other's inverses. The presence of partial and infinite values in the domains makes this exercise interesting, and
having lifted types adds an extra spice to the task. We have tackled the problem in different ways, and this is a report on the merits of those approaches. More specifically, we first describe a
method for testing properties of programs in the presence of partial and infinite values. By testing before proving we avoid wasting time trying to prove statements that are not valid. Then we prove
that the programs we have written are in fact (more or less) inverses using first fixpoint induction and then the approximation lemma.
Nils Anders Danielsson
Last updated Sat Feb 16 14:24:13 UTC 2008.
|
{"url":"http://www.cse.chalmers.se/~nad/publications/danielsson-jansson-mpc2004.html","timestamp":"2014-04-19T12:23:10Z","content_type":null,"content_length":"3327","record_id":"<urn:uuid:999aef45-091c-4600-831f-b002375f92b0>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hypothesis Testing for Two Proportions
January 13th 2012, 02:10 PM #1
I have a problem I am doing for homework in my Data Analysis II course, which involves the use of Minitab, and I need help. I solved it, but need to know whether I did it correctly.
Problem: A random sample of 78 women ages 21-29 in Denver showed that 23 have a college degree. Another random sample of 73 men in Denver in the same age group showed that 20 have a college
degree. Based on information from Educational Attainment in the United States, Bureau of the Census, does this indicate that the proportion of Denver women ages 21-29 with college degrees is more
than Denver men in this same age group?
My Minitab output:
Test and CI for Two Proportions
Sample X N Sample p
1 23 78 0.294872
2 20 73 0.273973
Difference = p (1) - p (2)
Estimate for difference: 0.0208992
95% lower bound for difference: -0.0998659
Test for difference = 0 (vs > 0): Z = 0.28 P-Value = 0.388
My Answer:
Decision: 0.388 > 0.05, so fail to reject the null hypothesis
Conclusion: There exists sufficient evidence at the 5% level of significance that the true proportion of Denver women ages 21-29 with college degrees is more than the true proportion of Denver
men ages 21-29 with college degrees.
January 13th 2012, 02:31 PM #2
Re: Hypothesis Testing for Two Proportions
Decision is correct, fail to reject $H_0$ and therefore conclude there is no evidence to suggest the true proportion of women with college degrees is greater than the true proportion of men with college degrees.
January 14th 2012, 01:52 PM #3
Re: Hypothesis Testing for Two Proportions
Sorry, I meant to type insufficient evidence. My bad...thanks for the help!
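For readers without Minitab, the same test is easy to reproduce by hand. Here is a minimal Python sketch (not part of the original thread) using the pooled two-proportion z-statistic:

from math import sqrt, erf

x1, n1 = 23, 78          # women with degrees / sample size
x2, n2 = 20, 73          # men with degrees / sample size

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                  # pooled estimate under H0: p1 = p2
se = sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
z = (p1 - p2) / se

p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))      # one-sided P(Z > z)
print(f"z = {z:.2f}, p-value = {p_value:.3f}")  # z = 0.28, p-value = 0.388

This matches the Minitab output above: since 0.388 > 0.05, we fail to reject the null hypothesis.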
|
{"url":"http://mathhelpforum.com/advanced-statistics/195243-hypothesis-testing-two-proportions.html","timestamp":"2014-04-18T07:23:38Z","content_type":null,"content_length":"36736","record_id":"<urn:uuid:07e3a41f-40de-4ce5-8230-81c3bb3f3fd0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Beyond the adiabatic approximation: exponentially small coupling terms
For multi-level time-dependent quantum systems one can construct superadiabatic representations in which the coupling between separated levels is exponentially small in the adiabatic limit. We
explicitly determine the asymptotic behavior of the exponentially small coupling term for generic two-state systems with real-symmetric Hamiltonian. The superadiabatic coupling term takes a universal
form and depends only on the location and the strength of the complex singularities of the adiabatic coupling function. First order perturbation theory for the Hamiltonian in the superadiabatic
representation then allows one to describe the time-development of exponentially small adiabatic transitions and thus to rigorously confirm Michael Berry’s predictions on the universal form of adiabatic transition histories. (Joint work with V. Betz from Warwick.)
|
{"url":"http://www.newton.ac.uk/programmes/HOP/Abstract2/teufel.html","timestamp":"2014-04-17T00:52:27Z","content_type":null,"content_length":"3106","record_id":"<urn:uuid:b0a8900d-7c5b-4c07-88ec-97cccbc71438>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
La Honda Geometry Tutor
...I would like to identify the student's learning style and use different learning strategies to generate a dialogue. Re-working the school lesson and finding the point of confusion allow the
student to understand the concepts correctly and solve problems with simple guidance. Learning Prealgebra is like the learning of the roles, functions, and rules of the game pieces of the game of
5 Subjects: including geometry, Chinese, algebra 2, prealgebra
...I tutor middle school and high school math students. I can also teach Chinese at all levels. I am patient and kind.
11 Subjects: including geometry, calculus, statistics, Chinese
...Demento aka The Evil Mechzilla earned his baccalaureate (1988) and his PhD (1993) in Chemistry from the University of California, Santa Cruz. He is the author of 18 papers in the primary
literature, including four full papers in the Journal of Organic Chemistry and one in The Journal of the Amer...
14 Subjects: including geometry, chemistry, English, writing
...I have years of experience using STATA. I used this program in a number of settings and am well-versed in it uses. I have previously tutored a number of students in this application.
49 Subjects: including geometry, calculus, physics, statistics
...I love the look of excitement when a concept suddenly makes sense, or a new connection can be drawn between ideas. I believe all kids can understand math and science, if it is explained in a
way that makes sense to them. I have a Masters in Chemical Engineering from MIT, and over 10 years of industry experience.
26 Subjects: including geometry, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/La_Honda_Geometry_tutors.php","timestamp":"2014-04-20T11:09:38Z","content_type":null,"content_length":"23716","record_id":"<urn:uuid:df02bfd0-f770-4840-98dd-9eb1177a907a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advanced ABAP operations with elementary data types
In this excerpt from the ABAP Cookbook, a publication of SAP Press, you'll discover some SAP ABAP advanced operations that can be performed using elementary data. You'll also learn how to use these
features in your ABAP programs.
Working with Numbers, Dates, and Bytes
One of the nice things about working with an advanced programming language
like ABAP is that you don’t often have to worry about how that data is represented
behind the scenes at the bits and bytes level; the language does such a good job
of abstracting data that it becomes irrelevant. However, if you do come across a
requirement that compels you to dig a little deeper, you’ll find that ABAP also has
excellent support for performing more advanced operations with elementary data
types. In this chapter, we investigate some of these operations and show you techniques
for using these features in your programs.
2.1 Numeric Operations
Whether it’s keeping up with a loop index or calculating entries in a balance sheet,
almost every ABAP program works with numbers on some level. Typically, whenever
we perform operations on these numbers, we use basic arithmetic operators
such as the + (addition), - (subtraction), * (multiplication), or / (division) operators.
Occasionally, we might use the MOD operator to calculate the remainder of an
integer division operation, or the ** operator to calculate the value of a number
raised to the power of another. However, sometimes we need to perform more
advanced calculations. If you’re a mathematics guru, then perhaps you could come
up with an algorithm to perform these advanced calculations using the basic arithmetic
operators available in ABAP. For the rest of us mere mortals, ABAP provides
an extensive set of mathematics tools that can be used to simplify these requirements.
In the next two sections, we’ll examine these tools and see how to use
them in your programs.
2.1.1 ABAP Math Functions
ABAP provides many built-in math functions that you can use to develop advanced
mathematical formulas as listed in Table 2.1. In many cases, these functions can
be called using any of the built-in numeric data types in ABAP (e.g., the I, F, and P
data types). However, some of these functions require the precision of the floating
point data type (see Table 2.1 for more details). Because ABAP supports implicit
type conversion between numeric types, you can easily cast non-floating point
types into floating point types for use within these functions.
Function | Supported Numeric Types | Description
abs | (All) | Calculates the absolute value of the provided argument.
sign | (All) | Determines the sign of the provided argument. If the sign is positive, the function returns 1; if it’s negative, it returns -1; otherwise, it returns 0.
ceil | (All) | Calculates the smallest integer value that isn’t smaller than the argument.
floor | (All) | Calculates the largest integer value that isn’t larger than the argument.
trunc | (All) | Returns the integer part of the argument.
frac | (All) | Returns the fractional part of the argument.
cos, sin, tan | F | Implements the basic trigonometric functions.
acos, asin, atan | F | Implements the inverse trigonometric functions.
cosh, sinh, tanh | F | Implements the hyperbolic trigonometric functions.
exp | F | Implements the exponential function with base e ≈ 2.7182818285.
log | F | Implements the natural logarithm function.
log10 | F | Calculates a logarithm using base 10.
sqrt | F | Calculates the square root of a number.
Table 2.1 ABAP Math Functions
The report program ZMATHDEMO shown in Listing 2.1 contains examples of how to
call the math functions listed in Table 2.1 in an ABAP program. The output of this
program is displayed in Figure 2.1.
REPORT zmathdemo.
CONSTANTS: CO_PI TYPE f VALUE '3.14159265'.
DATA: lv_result TYPE p DECIMALS 2.
lv_result = abs( -3 ).
WRITE: / 'Absolute Value: ', lv_result.
lv_result = sign( -12 ).
WRITE: / 'Sign: ', lv_result.
lv_result = ceil( '4.7' ).
WRITE: / 'Ceiling: ', lv_result.
lv_result = floor( '4.7' ).
WRITE: / 'Floor: ', lv_result.
lv_result = trunc( '4.7' ).
WRITE: / 'Integer Part: ', lv_result.
lv_result = frac( '4.7' ).
WRITE: / 'Fractional Part: ', lv_result.
lv_result = sin( CO_PI ).
WRITE: / 'Sine of PI: ', lv_result.
lv_result = cos( CO_PI ).
WRITE: / 'Cosine of PI: ', lv_result.
lv_result = tan( CO_PI ).
WRITE: / 'Tangent of PI: ', lv_result.
lv_result = exp( '2.3026' ).
WRITE: / 'Exponential Function:', lv_result.
lv_result = log( lv_result ).
WRITE: / 'Natural Logarithm: ', lv_result.
lv_result = log10( '1000.0' ).
WRITE: / 'Log Base 10 of 1000: ', lv_result.
lv_result = log( 8 ) / log( 2 ).
WRITE: / 'Log Base 2 of 8: ', lv_result.
lv_result = sqrt( '16.0' ).
WRITE: / 'Square Root: ', lv_result.
Listing 2.1 Working with ABAP Math Functions
Figure 2.1 Output Generated by Report ZMATHDEMO
The values of the function calls can be used as operands in more complex expressions. For example, in Listing 2.1, notice how we’re calculating the value of
log( 8 ). Here, we use the change of base formula log( x ) / log( b ) (where
b refers to the target base, and x refers to the value applied to the logarithm function)
to derive the base 2 value. Collectively, these functions can be combined with
typical math operators to devise some very complex mathematical formulas.
2.1.2 Generating Random Numbers
Computers live in a logical world where everything is supposed to make sense.
Whereas this characteristic makes computers very good at automating many kinds of tasks, it can also make it somewhat difficult to model certain real-world phenomena.
Often, we need to simulate imperfection in some form or another. One
common method for achieving this is to produce randomized data using random
number generators. Random numbers are commonly used in statistics, cryptography,
and many kinds of scientific applications. They are also used in algorithm
design to implement fairness and to simulate useful metaphors applied to the
study of artificial intelligence (e.g., genetic algorithms with randomized mutations, etc.).
SAP provides random number generators for all of the built-in numeric data types
via a series of ABAP Objects classes. These classes begin with the prefix CL_ABAP_RANDOM (e.g., CL_ABAP_RANDOM_FLOAT, CL_ABAP_RANDOM_INT, etc.). Though none of
these classes inherit from the CL_ABAP_RANDOM base class, they do use its features
behind the scenes using a common OO technique called composition. Composition
basically implies that one class delegates certain functionality to an instance of
another class. The UML class diagram shown in Figure 2.2 shows the basic structure
of the provided random number generator classes.
[Figure 2.2 Basic UML Class Diagram for Random Number Generators: each CL_ABAP_RANDOM_* class exposes the class method CREATE( ) and the instance method GET_NEXT( ).]
Unlike most classes where you create an object using the CREATE OBJECT statement,
instances of random number generators must be created via a call to a factory class
method called CREATE(). The signature of the CREATE() method is shown in Figure
2.3. Here, you can see that the method defines an importing parameter called SEED
that seeds the pseudo-random number generator algorithm that is used behind the
scenes to generate the random numbers. In a pseudo-random number generator,
random numbers are generated in sequence based on some calculation performed
using the seed. Thus, a given seed value causes the random number generator to
generate the same sequence of random numbers each time.
The CREATE() method for class CL_ABAP_RANDOM_INT also provides MIN and MAX
parameters that can place limits around the random numbers that are generated
(e.g., a range of 1-100, etc.). The returning PRNG parameter represents the generated
random number generator instance. Once created, you can begin retrieving
random numbers via a call to the GET_NEXT() instance method.
Figure 2.3 Signature of Class Method CREATE()
To demonstrate how these random number generator classes work, let’s consider
an example program. Listing 2.2 contains a simple report program named
ZSCRAMBLER that defines a local class called LCL_SCRAMBLER. The LCL_SCRAMBLER class includes an instance method SCRAMBLE() that can be used to randomly scramble the characters in a string. This primitive implementation creates a random number generator to produce random numbers in the range of [0 ... {String Length} - 1]. Perhaps the most complex part of the implementation is related
to the fact that random number generators produce some duplicates along the
way. Therefore, we have to check that we haven’t already used a given randomly generated number, so that each character in the original string is copied into the new one exactly once.
REPORT zscrambler.

CLASS lcl_scrambler DEFINITION.
  PUBLIC SECTION.
    METHODS: scramble IMPORTING im_value TYPE clike
                      RETURNING VALUE(re_svalue) TYPE string
                      RAISING cx_abap_random.
  PRIVATE SECTION.
    CONSTANTS: CO_SEED TYPE i VALUE 100.
    TYPES: BEGIN OF ty_index,
             index TYPE i,
           END OF ty_index.
ENDCLASS.

CLASS lcl_scrambler IMPLEMENTATION.
  METHOD scramble.
*   Method-Local Data Declarations:
    DATA: lv_length  TYPE i,
          lv_min     TYPE i VALUE 0,
          lv_max     TYPE i,
          lo_prng    TYPE REF TO cl_abap_random_int,
          lv_index   TYPE i,
          lt_indexes TYPE STANDARD TABLE OF ty_index.
    FIELD-SYMBOLS:
          <lfs_index> LIKE LINE OF lt_indexes.

*   Determine the length of the string as this sets the
*   bounds on the scramble routine:
    lv_length = strlen( im_value ).
    lv_max = lv_length - 1.

*   Create a random number generator to return random
*   numbers in the range of 0..{String Length - 1}:
    CALL METHOD cl_abap_random_int=>create
      EXPORTING
        seed = CO_SEED
        min  = lv_min
        max  = lv_max
      RECEIVING
        prng = lo_prng.

*   Add the characters from the string in random order to
*   the result string (skipping indexes already used):
    WHILE strlen( re_svalue ) LT lv_length.
      lv_index = lo_prng->get_next( ).
      READ TABLE lt_indexes TRANSPORTING NO FIELDS
        WITH KEY index = lv_index.
      IF sy-subrc NE 0.
        CONCATENATE re_svalue im_value+lv_index(1)
          INTO re_svalue.
        APPEND INITIAL LINE TO lt_indexes
          ASSIGNING <lfs_index>.
        <lfs_index>-index = lv_index.
      ENDIF.
    ENDWHILE.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
* Local Data Declarations:
  DATA: lo_scrambler TYPE REF TO lcl_scrambler,
        lv_scrambled TYPE string.

* Use the scrambler to scramble around a word:
  CREATE OBJECT lo_scrambler.
  lv_scrambled = lo_scrambler->scramble( 'Andersen' ).
  WRITE: / lv_scrambled.
Listing 2.2 Using Random Number Generators in ABAP
Obviously, a simple scrambler routine like the one shown in Listing 2.2 isn’t production
quality. Nevertheless, it does give you a glimpse of how you can use random
number generators to implement some interesting algorithms. As a reader
exercise, you might think about how you could use random number generators to
implement an UNSCRAMBLE() method to unscramble strings generated from calls
to method SCRAMBLE().
This was first published in November 2010
|
{"url":"http://searchsap.techtarget.com/feature/Advanced-ABAP-operations-with-elementary-data-types","timestamp":"2014-04-20T13:19:52Z","content_type":null,"content_length":"97323","record_id":"<urn:uuid:24f5adf8-83e4-4bde-87ab-37236c70a56d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On a variant of the equation $\sigma(x)=2^n$
up vote 0 down vote favorite
It is a nice exercise to prove that the only solutions (positive integers $x$) of the equation in the title are products of distinct Mersenne primes, i.e. with all exponents equal to $1$.
(See also: A046528 in the OEIS.)
Question: Is it true that the only solutions $A \in GF(2)[x]$ to the equation $$ \sigma(A) = x^a(x+1)^b $$ are products of distinct Mersenne irreducible polynomials $M$, where this means $$ M = x^c (x+1)^d+1 $$ and $M \in GF(2)[x]$ is irreducible?
Trivial example: $$ \sigma(x^2+x+1)=x(x+1). $$ As usual $\sigma(n)$ is the sum of all positive divisors of the positive integer $n$ and $\sigma(A)$ is the sum of all divisors (including $1$ and $A$)
of the polynomial $A$ in $GF(2)[x]$.
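A quick brute-force check of the integer version (a Python sketch of mine, not part of the original question) makes the pattern visible:

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_pow2(m):
    return m & (m - 1) == 0        # valid for m >= 1

print([x for x in range(1, 1000) if is_pow2(sigma(x))])
# [1, 3, 7, 21, 31, 93, 127, 217, 381, 651, 889]
# i.e. products of distinct Mersenne primes 3, 7, 31, 127, ...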
nt.number-theory polynomials
Is this exercise in a book somewhere, so I can refresh my memory on how to go about solving it? – Robert K Apr 23 '11 at 20:04
Just try an induction on $\omega(x)$ – Luis H Gallardo Apr 24 '11 at 1:21
In the OEIS page that I included in the question you may find a link to a proof, but it is amusing to try it yourself! – Luis H Gallardo Apr 24 '11 at 9:17
|
{"url":"http://mathoverflow.net/questions/62721/on-a-variant-of-the-equation-sigmax-2n","timestamp":"2014-04-16T22:38:53Z","content_type":null,"content_length":"48601","record_id":"<urn:uuid:46fb250c-7086-490c-bf55-1b9fb0725ff8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do people understand algebra if they never encounter it?
June 30, 2011 Posted by News under Darwinism, Mind, Neuroscience 5 Comments
In “Geometric Principles Appear Universal in Our Minds” (Wired Science, May 24, 2011) , Bruce Bower reflects on the fact that research among peoples who do not even count suggests that abstract
geometric principles are probably innate in humans:
If geometry relies on an innate brain mechanism, it’s unclear how such a neural system generates abstract notions about phenomena such as infinite surfaces and why this system doesn’t fully kick
in until age 7. If geometry depends on years of spatial learning, it’s not known how people transform real-world experience into abstract geometric concepts — such as lines that extend forever or
perfect right angles — that a forest dweller never encounters in the natural world.
As always, we needn’t wait long for a Darwin answer:
Whatever the case, the Mundurucú’s keen grip on abstract geometry contrasts with past evidence from Izard’s group that these Amazonian villagers cannot add or otherwise manipulate numbers larger
than five. Geometry may have a firmer evolutionary basis in the brain than arithmetic, comments cognitive neuropsychologist Brian Butterworth of University College London.“If so, this would
support recent findings that people who fail to learn arithmetic, or ‘dyscalculics,’ can still be good at geometry,” Butterworth says.
Some find the leap here puzzling: That “dyscalculics” can learn geometry better than arithmetic sheds no light in particular on evolution: both branches of mathematics are abstract arts.
It’s one thing to say that evolution explains why wolves are better at sniffing than at the times tables; another to simply plunk an assertion about “evolution” in the midst of discussion of variable
human groups’ performance with abstractions, providing no evidence apart from the humble assent of millions that “evolution is true.” But Butterworth is on safe ground. Hard to imagine Bower asking
him to explain.
These data are interesting in the light of the fact that the historical development of mathematics puts geometry about a millennium and a half before the development of algebra, implying that the
latter follows the former in a hierarchy of concepts. Thoughts?
5 Responses to How do people understand algebra if they never encounter it?
1. But alas, besides the profound mystery of us innately understanding such ‘abstract mathematical constructs, why should reality itself be reducible to, and governed by such ‘abstract’ mathematical
constructs??? The only answer is that the ‘abstract mathematical constructs’ are not abstract, in the sense of being separate from concrete physical reality, but that concrete physical reality
does in fact derive its very being from the ‘immaterial’, abstract, realm of ideas in the first place!!! Moreover, this ‘anomaly’ is very strong evidence that man was indeed made in the ‘image of
God’ to have a intimate relationship with the Creator, and ‘sustainer’, of this universe!!!
Finely Tuned Big Bang, Elvis In The Multiverse, and the Schroedinger Equation – Granville Sewell – audio
At the 4:00 minute mark of the preceding audio, Dr. Sewell comments on the ‘transcendent’ and ‘constant’ Schroedinger’s Equation;
‘In chapter 2, I talk at some length on the Schroedinger Equation which is called the fundamental equation of chemistry. It’s the equation that governs the behavior of the basic atomic particles
subject to the basic forces of physics. This equation is a partial differential equation with a complex valued solution. By complex valued I don’t mean complicated, I mean involving solutions
that are complex numbers, a+b^i, which is extraordinary that the governing equation, basic equation, of physics, of chemistry, is a partial differential equation with complex valued solutions.
There is absolutely no reason why the basic particles should obey such a equation that I can think of except that it results in elements and chemical compounds with extremely rich and useful
chemical properties. In fact I don’t think anyone familiar with quantum mechanics would believe that we’re ever going to find a reason why it should obey such an equation, they just do! So we
have this basic, really elegant mathematical equation, partial differential equation, which is my field of expertise, that governs the most basic particles of nature and there is absolutely no
reason why, anyone knows of, why it does, it just does. British physicist Sir James Jeans said “From the intrinsic evidence of His creation, the great architect of the universe begins to appear
as a pure mathematician”, so God is a mathematician to’.
i.e. the Materialist is at a complete loss to explain why this should be so, whereas the Christian Theist presupposes such ‘transcendent’ control,,,
John 1:1
In the beginning was the Word, and the Word was with God, and the Word was God.
of note; ‘the Word’ is translated from the Greek word ‘Logos’. Logos happens to be the word from which we derive our modern word ‘Logic’.
To solidify Dr. Sewell’s observation that transcendent ‘math’ is found to be foundational to reality, I note this equation:
0 = 1 + e ^(i*pi) — Euler
Believe it or not, the five most important numbers in mathematics are tied together, through the complex domain in Euler’s number, And that points, ever so subtly but strongly, to a world of
reality beyond the immediately physical. Many people resist the implications, but there the compass needle points to a transcendent reality that governs our 3D ‘physical’ reality.
God by the Numbers – Connecting the constants
Excerpt: The final number comes from theoretical mathematics. It is Euler’s (pronounced “Oiler’s”) number: e^(pi*i). This number is equal to -1, so when the formula is written e^(pi*i)+1 = 0, it
connects the five most important constants in mathematics (e, pi, i, 0, and 1) along with three of the most important mathematical operations (addition, multiplication, and exponentiation). These
five constants symbolize the four major branches of classical mathematics: arithmetic, represented by 1 and 0; algebra, by i; geometry, by pi; and analysis, by e, the base of the natural log.
e^(pi*i)+1 = 0 has been called “the most famous of all formulas,” because, as one textbook says, “It appeals equally to the mystic, the scientist, the philosopher, and the mathematician.”
(of note; Euler’s Number (equation) is more properly called Euler’s Identity in math circles.)
Moreover Euler’s Identity, rather than just being the most enigmatic equation in math, finds striking correlation to how our 3D reality is actually structured,,,
The following picture, Bible verse, and video are very interesting since, with the discovery of the Cosmic Microwave Background Radiation (CMBR), the universe is found to actually be a circular
sphere which ‘coincidentally’ corresponds to the circle of pi within Euler’s identity:
Picture of CMBR
Proverbs 8:26-27
While as yet He had not made the earth or the fields, or the primeval dust of the world. When He prepared the heavens, I was there, when He drew a circle on the face of the deep,
The Known Universe by AMNH – video – (please note the ‘centrality’ of the Earth in the universe in the video)
The flatness of the ‘entire’ universe, which ‘coincidentally’ corresponds to the diameter of pi in Euler’s identity, is found on this following site; (of note this flatness of the universe is an
extremely finely tuned condition for the universe that could have, in reality, been a multitude of different values than ‘flat’):
Did the Universe Hyperinflate? – Hugh Ross – April 2010
Excerpt: Perfect geometric flatness is where the space-time surface of the universe exhibits zero curvature (see figure 3). Two meaningful measurements of the universe’s curvature parameter, ½k,
exist. Analysis of the 5-year database from WMAP establishes that -0.0170 < ½k < 0.0068. Weak gravitational lensing of distant quasars by intervening galaxies places -0.031 < ½k < 0.009. Both
measurements confirm the universe indeed manifests zero or very close to zero geometric curvature,,,
This following video shows that the universe also has a primary characteristic of expanding/growing equally in all places,, which 'coincidentally' strongly corresponds to e in Euler's identity. e
is the constant used in all sorts of equations of math for finding what the true rates of growth and decay are for any given problem trying to find as such:
Every 3D Place Is Center In This Universe – 4D space/time – video
Towards the end of the following video, Michael Denton speaks of the square root of negative 1 being necessary to understand the foundational quantum behavior of this universe. The square root of
-1 is 'coincidentally' found in Euler's identity:
Michael Denton – Mathematical Truths Are Transcendent And Beautiful – Square root of -1 is built into the fabric of reality – video
I find it extremely strange that the enigmatic Euler's identity would find such striking correlation to reality. In pi we have correlation to the 'sphere of the universe' as revealed by the
Cosmic Background radiation, as well pi correlates to the finely-tuned 'geometric flatness' within the 'sphere of the universe' that has now been found. In e we have the fundamental constant that
is used for ascertaining exponential growth in math that strongly correlates to the fact that space-time is 'expanding/growing equally' in all places of the universe. In the square root of -1 we
have what is termed a 'imaginary number', which was first proposed to help solve equations like x2+ 1 = 0 back in the 17th century, yet now, as Michael Denton pointed out in the preceding video,
it is found that the square root of -1 is required to explain the behavior of quantum mechanics in this universe. The correlation of Euler's identity, to the foundational characteristics of how
this universe is constructed and operates, points overwhelmingly to a transcendent Intelligence, with a capital I, which created this universe! It should also be noted that these universal
constants, pi,e, and square root -1, were at first thought by many to be completely transcendent of any material basis, to find that these transcendent constants of Euler's identity in fact
'govern' material reality, in such a foundational way, should be enough to send shivers down any mathematicians spine. Further discussion can be found here relating Euler's identity to General
Relativity and Quantum Mechanics:
Here is a very well done video, showing the stringent 'mathematical proofs' of Euler's Identity:
Euler's identity – video
2. f/n; The mystery doesn’t stop there, this following video shows how pi and e are found in Genesis 1:1 and John 1:1
Euler’s Identity – God Created Mathematics – video
This following website, and video, has the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages respectively, for Genesis 1:1 and John 1:1:
Fascinating Bible code – Pi and natural log – Amazing – video (of note: correct exponent for base of Nat Log found in John 1:1 is 10^40, not 10^65 as stated in the video)
Another transcendent mathematical structure that is found imbedded throughout our reality is Fibonacci’s Number;
Fibonacci Numbers – Euler’s Identity – The Fingerprint of God – video – (See video description for a look at Euler’s Identity)
Researchers Succeed in Quantum Teleportation of Light Waves – April 2011
Excerpt: In this experiment, researchers in Australia and Japan were able to transfer quantum information from one place to another without having to physically move it. It was destroyed in one
place and instantly resurrected in another, “alive” again and unchanged. This is a major advance, as previous teleportation experiments were either very slow or caused some information to be
Explaining Information Transfer in Quantum Teleportation: Armond Duwell †‡ University of Pittsburgh
Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are
required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) — Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of
information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport.
Quantum no-hiding theorem experimentally confirmed for first time
Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor
destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem,
addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment),
then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This
experiment provides experimental proof that the teleportation of quantum information in this universe must be complete and instantaneous.)
The following articles show that even atoms (Ions) are subject to teleportation:
Of note: An ion is an atom or molecule in which the total number of electrons is not equal to the total number of protons, giving it a net positive or negative electrical charge.
Ions have been teleported successfully for the first time by two independent research groups
Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is
enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was
Atom takes a quantum leap – 2009
Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,,
“What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two
particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
3. ba77 I think you run a cut-and-paste database! I don’t know how you have time to write all that stuff for every blog!
Here’s my attempt to answer Denyse’s question. In Gen 1:26 we see the creation of male & female, which I have previously identified with the CroMagnon appearance about 60,000 BC.
DNA analysis reveals that CroMagnon is 100% modern DNA, which is to say, “in the image of God”.
Gen 2:7 is the creation of Adam, whom all agree is homo sapiens, and I identify with the Neolithic Revolution about 10,000 BC. It is also associated with the development of language, and you will
note that everything after Gen 2:7 has a name, whereas nothing before that was named.
But it is CroMagnon cave paintings that astonished the world. So here we have a dilemma. Before Adam, before language, there was no moral injunction, no original sin, no Fall. Somehow, sin is
related to language. But language is not related to art, since CroMagnon could certainly paint.
Therefore sin is a word that didn’t exist before language, and algebra is a language of math. But art is in our genes, and like geometry, requires no words.
4. Dr. Sheldon, interesting ‘spiritual’ insight as to the ancient art.,,, as to my ‘cut-and-paste database’,, Yes, I have collected quite a few references over the years in defense of ID, Theism in
general, as well as Christianity,,,,
5. I’ve encountered algebra and I still don’t understand it.
|
{"url":"http://www.uncommondescent.com/darwinism/how-do-people-understand-algebra-if-they-never-encounter-it/","timestamp":"2014-04-17T18:27:50Z","content_type":null,"content_length":"72315","record_id":"<urn:uuid:6d5004fd-76e3-4fa3-b6bc-de6fe5341f9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
no sets of six integers with every pair summing to a square
Supposing that there is a set (a,b,c,d,e,f) and a+b, a+c, a+d, b+c... are all squares. Then each member of the set can be paired with 5 others but a+b=b+a, so there are 5*6/2=15 combinations. 5
(a+b+c+d+e+f)=sum of all 15 squares.
If a, b and c are odd, then a+b, b+c and a+c are even, and must be multiples of 4 since they are square. But a+b+c is odd, and a+b+c = ((a+b)+(b+c)+(a+c))/2, which is even. Therefore at most 2 of the integers can be odd.
If a is odd, and the rest are even, a+b,b+c≡1(mod4), b+c≡2-a(mod4), a≡2(mod4).
If a and b are odd, and the rest are even, a+b≡0(mod4), a+c,b+c≡1(mod4), a+2c+b≡2(mod4), c≡1(mod4) but c is meant to be even.
The only remaining possibility is that all 6 are even. Every pair a, b is either both ≡ 2 (mod 4) or both ≡ 0 (mod 4), since (2x)^2 ≡ 0 (mod 4). If they are all ≡ 0 (mod 4), they can all be divided by 4 until a set is obtained which is all ≡ 2 (mod 4).
2 (mod 4) means ≡ 2 or 6 (mod 8). If a and b are both ≡ 2 (mod 8), a+b ≡ 4 (mod 8) and cannot be square. If a and b are both ≡ 6 (mod 8), a+b ≡ 12 ≡ 4 (mod 8) and cannot be square. In any set of 6 integers ≡ 2 (mod 4), there will always be a pair both ≡ 2 (mod 8) or both ≡ 6 (mod 8).
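For small positive integers the situation can also be explored by brute force. A quick Python sketch (the bound of 200 is arbitrary, and it only considers positive integers) that grows sets whose pairwise sums are all perfect squares and reports the largest one it finds:

from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

def extend(s, bound):
    """Collect all increasing tuples of positive integers below `bound`
    whose pairwise sums are perfect squares, by depth-first extension."""
    found = [s]
    start = s[-1] + 1 if s else 1
    for x in range(start, bound):
        if all(is_square(x + y) for y in s):
            found += extend(s + (x,), bound)
    return found

sets = extend((), 200)
print(max(sets, key=len))  # largest square-sum set found below the bound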
Question: What is the largest set possible such that every 3 sum to a square? Are there any sets of 4 such that every 3 sum to a cube?
Seven Cubes Solution
Let n be a positive integer. Define the function f from Z^n to Z by f(x) = x_1+2x_2+3x_3+...+nx_n. For x in Z^n, say y is a neighbor of x if y and x differ by one in exactly one coordinate. Let S(x)
be the set consisting of x and its 2n neighbors. It is easy to check that the values of f(y) for y in S(x) are congruent to 0,1,2,...,2n+1 (mod 2n+1) in some order. Using this, it is easy to check
that every y in Z^n is a neighbor of one and only one x in Z^n such that f(x) is congruent to 0 (mod 2n+1). So Z^n can be tiled by clusters of the form S(x), where f(x) is congruent to 0 mod 2n+1.
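The residue claim is easy to machine-check. A small Python sketch (random probes; the ranges and trial counts are arbitrary choices):

import random

def covers_all_residues(n, trials=200):
    m = 2 * n + 1
    f = lambda v: sum((i + 1) * v[i] for i in range(n)) % m
    for _ in range(trials):
        x = [random.randrange(-10, 11) for _ in range(n)]
        vals = {f(x)}
        for i in range(n):          # the 2n neighbors of x
            for d in (-1, 1):
                y = list(x)
                y[i] += d
                vals.add(f(y))
        if len(vals) != m:          # should be 0, 1, ..., 2n in some order
            return False
    return True

print(all(covers_all_residues(n) for n in range(1, 6)))  # True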
Historical question in analytic number theory
The analytic continuation and functional equation for the Riemann zeta function were proved in Riemann's 1859 memoir "On the number of primes less than a given magnitude." What is the earliest
reference for the analytic continuation and functional equation of Dirichlet L-functions? Who first proposed that they might satisfy a Riemann hypothesis? Dirichlet did none of these things; his
paper dates from 1837, and as far as I know he only considered his L-functions as functions of a real variable.
ho.history-overview nt.number-theory analytic-number-theory
I would imagine it is Hecke. But I think Dirichlet himself did make use of the fact that the L-function has a simple pole at $s=1$. So in a sense he was complex analytic. – Anweshi Jan 27 '10 at
I think it was earlier than Hecke, but my knowledge of the 19th century literature is very poor. Dirichlet's L-functions do not have any poles. You mean that the zeta function has a pole? Euler
and everyone after him knew that it diverged like 1/(s-1) as s--->1 for s real, which is weaker than knowing anything about a pole, but sufficient for some applications ("Dirichlet density" of
primes in arithmetic progressions). – David Hansen Jan 27 '10 at 14:48
I mean the Dedekind zeta function of the cyclotomic field, which is a product of the Dirichlet $L$-functions for various characters of its Galois group which is isomorphic to $(\mathbb{Z}/n\mathbb
{Z})^*$, has a simple pole at $s = 1$. This is a significant step in the proof of Dirichlet's theorem. The answer to your question could be Dedekind, since the name occurs to me, and the zeta
function of the number fields are named after him. – Anweshi Jan 27 '10 at 14:56
Dedekind did not get analytic continuation of "his" zeta-functions beyond Re(s) > 1. The first person to prove analytic continuation of Dedekind zeta-functions (in general) a bit to the left of Re(s) = 1 was Landau, in 1903 I believe. He got continuation as far as Re(s) > 1 - 1/[K:Q], where K is the number field whose zeta-function you're dealing with. This is treated in Lang's Algebraic
Number Theory. Before Landau, the density business involved limits as s approaches 1 from the right, as David writes. That one-sided limit does not imply anything about complex-analyticity around
s = 1. – KConrad Jan 28 '10 at 5:24
I had the impression that Dedekind knew how to meromorphically continue just a teensy bit, even before Landau, but I may be mistaken. – paul garrett Jul 23 '11 at 18:31
Riemann was the first person who brought complex analysis into the game, but if you ask just about functional equations then he was not the first. In the 1840s, there were proofs of the
functional equation for the $L$-function of the nontrivial character mod 4, relating values at $s$ and $1-s$ for real $s$ between 0 and 1, where the $L$-function is defined by its
Dirichlet series. In particular, this happened before Riemann's work on the zeta-function. The proofs were due independently to Malmsten and Schlomilch. Eisenstein had a proof as well
(unpublished) which was found in his copy of Gauss' Disquisitiones. It involves Poisson summation. Eisenstein's proof is dated 1849 and Weil suggested that this might have motivated Riemann in his work on the zeta-function.
For more on Eisenstein's proof, see Weil's "On Eisenstein's Copy of the Disquisitiones" pp. 463--469 of "Alg. Number Theory in honor of K. Iwasawa" Academic Press, Boston, 1989.
To extend on Matt's comment about Euler, here is something I wrote up some years ago about Euler's discovery of the functional equation only at integral points. I hope there are no typos.
Although Euler never found a convergent analytic expression for $\zeta(s)$ at negative numbers, in 1749 he published a method of computing values of the zeta function at negative integers by
a precursor of Abel's Theorem applied to a divergent series. The computation led him to the asymmetric functional equation of $\zeta(s)$.
The technique uses the function $$ \zeta_{2}(s) = \sum_{n \geq 1} \frac{(-1)^{n-1}}{n^s} = 1 - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \dots. $$ This looks not too different from $\
zeta(s)$, but has the advantage as an alternating series of converging for all positive $s$. For $s > 1$, $\zeta_2(s) = (1 - 2^{1-s})\zeta(s)$. Of course this is true for complex $s$, but
Euler only worked with real $s$, so we shall as well.
Disregarding convergence issues, Euler wrote $$ \zeta_{2}(-m) = \sum_{n \geq 1} (-1)^{n-1}n^m = 1 - 2^m + 3^m - 4^m + \dots, $$ which he proceeded to evaluate as follows. Differentiate the
equation $$ \sum_{n \geq 0} X^n = \frac{1}{1-X} $$ to get $$ \sum_{n \geq 1} nX^{n-1} = \frac{1}{(1-X)^2}. $$ Setting $X = -1$, $$ \zeta_{2}(-1) = \frac{1}{4}. $$ Since $\zeta_{2}(-1) = (1-2^
2)\zeta(-1)$, $\zeta(-1) = -1/12$. Notice we can't set $X = 1$ in the second power series and compute $\sum n = \zeta(-1)$ directly. So $\zeta_2(s)$ is nicer than $\zeta(s)$ in this Eulerian sense.
Multiplying the second power series by $X$ and then differentiating, we get $$ \sum_{n \geq 1} n^2X^{n-1} = \frac{1+X}{(1-X)^3}. $$ Setting $X = -1$, $$ \zeta_{2}(-2) = 0. $$ By more
successive multiplications by $X$ and differentiations, we get $$ \sum_{n \geq 1} n^3X^{n-1} = \frac{X^2+4X+1}{(1-X)^4}, $$ and $$ \sum_{n \geq 1} n^4X^{n-1} = \frac{(X+1)(X^2+10X+1)}{(1-X)^
5}. $$ Setting $X = -1$, we find $\zeta_{2}(-3) = -1/8$ and $\zeta_{2}(-4) = 0$. Continuing further, with the recursion $$ \frac{d}{dx} \frac{P(x)}{(1-x)^n} = \frac{(1-x)P'(x) + nP(x)}{(1-x)^
{n+1}}, $$ we get $$ \sum_{n \geq 1} n^5X^{n-1} = \frac{X^4+26X^3+66X^2 + 26X +1}{(1-X)^6}, $$ $$ \sum_{n \geq 1} n^6X^{n-1} = \frac{(X+1)(X^4 + 56X^3 + 246X^2 + 56X+1)} {(1-X)^7}, $$ $$ \
sum_{n \geq 1} n^7X^{n-1} = \frac{X^6 + 120X^5 + 1191X^4 + 2416X^3 + 1191X^2 + 120X + 1}{(1-X)^8}. $$ Setting $X = -1$, we get $\zeta_{2}(-5) = 1/4, \ \zeta_{2}(-6) = 0, \ \zeta_{2}(-7) = -17/16$.
Apparently $\zeta_{2}$ vanishes at the negative even integers, while $$ \frac{\zeta_{2}(-1)}{\zeta_{2}(2)} = \frac{1}{4}\cdot\frac{6\cdot 2}{\pi^2} = \frac{3\cdot 1!}{1\cdot \pi^2}, \ \ \ \ \frac{\zeta_{2}(-3)}{\zeta_{2}(4)} = -\frac{1}{8}\cdot\frac{30\cdot24}{7\pi^4} = -\frac{15\cdot 3!}{7\cdot \pi^4}, $$ $$ \frac{\zeta_{2}(-5)}{\zeta_{2}(6)} = \frac{1}{4}\cdot \frac{42\cdot 6!}{31\pi^6} = \frac{63 \cdot 5!}{31\cdot \pi^6}, \ \ \ \ \frac{\zeta_{2}(-7)}{\zeta_{2}(8)} = -\frac{17}{16}\cdot \frac{30\cdot 8!}{127\cdot \pi^8} = -\frac{255\cdot 7!}{127\pi^8}. $$
The numbers $1, 3, 7, 15, 31, 63, 127, 255$ are all one less than a power of 2, so Euler was led to the observation that for $n \geq 2$, $$ \frac{\zeta_{2}(1-n)}{\zeta_{2}(n)} = \frac{(-1)^{n
/2+1}(2^n-1)(n-1)!}{(2^{n-1}-1)\pi^n} $$ if $n$ is even and $$ \frac{\zeta_{2}(1-n)}{\zeta_{2}(n)} = 0 $$ if $n$ is odd. Notice how the vanishing of $\zeta_{2}(s)$ at negative even integers
nicely compensates for the lack of knowledge of $\zeta_2(s)$ at positive odd integers $> 1$ (which is the same as not knowing $\zeta(s)$ at positive odd integers $> 1$).
Euler interpreted the $\pm$ sign at even $n$ and the vanishing at odd $n$ as the single factor $-\cos(\pi n/2)$, and with $(n-1)!$ written as $\Gamma(n)$ we get $$ \frac{\zeta_{2}(1-n)}{\
zeta_{2}(n)} = -\Gamma(n)\frac{2^n-1}{(2^{n-1}-1)\pi^n} \cos\left(\frac{\pi n}{2}\right). $$ Writing $\zeta_{2}(n)$ as $(1 - 2^{1-n})\zeta(n)$ gives the asymmetric functional equation $$ \
frac{\zeta(1-n)}{\zeta(n)} = \frac{2}{(2\pi)^n} \Gamma(n)\cos\left(\frac{\pi n}{2}\right). $$ Euler applied similar ideas to $L(s,\chi_4)$ and found its functional equation. You can work this
out yourself in Exercise 2 below.
1. Show that Euler's computation of zeta values at negative integers can be put in the form $$ (1 - 2^{n+1})\zeta(-n) = \left.\left(u\frac{d}{du}\right)^{n}\right\vert_{u=1}\left(\frac{u}
{1+u} \right) = \left.\left(\frac{d}{dx}\right)^{n}\right\vert_{x=0} \left(\frac{e^x}{1+e^x}\right). $$
2. To compute the divergent series
$$ L(-n,\chi_4) = \sum_{j \geq 0} (-1)^{j}(2j+1)^n = 1 - 3^n + 5^n - 7^n - 9^n + 11^n - \dots $$ for nonnegative integers $n$, begin with the formal identity $$ \sum_{j \geq 0} X^{2j} = \
frac{1}{1-X^2}. $$ Differentiate and set $X = i$ to show $L(0,\chi_4) = 1/2$. Repeatedly multiply by $X$, differentiate, and set $X = i$ in order to compute $L(-n,\chi_4)$ for $0 \leq n \
leq 10$. This computational technique is not rigorous, but the answers are correct. Compare with the values of $L(n,\chi_4)$ for positive $n$, if you know those, to get a formula for $L
(1-n,\chi_4)/L(n,\chi_4)$. Treat alternating signs like special values of a suitable trigonometric function.
Thanks! I always wondered, but was apprehensive of looking into the collected works of Euler. – Anweshi Jan 29 '10 at 21:14
Davenport (Chapter 9 in Multiplicative Number Theory) claims that the functional equation for Dirichlet L-functions was first given by Hurwitz in 1882 (Werke I, pp.72-88), though only for
quadratic characters. The proof uses what we now call the Hurwitz zeta function.
I was told just yesterday that some people refer to the Riemann Hypothesis for Dirichlet L-functions as the Piltz Hypothesis. This is confirmed in the wikipedia article.
Ah, I suspected that it may have been Hurwitz, and that Davenport would have something to say, but did not have the book handy. :) BTW, once I was skimming Hardy's collected works and
found a sentence where he blithely asserted that RH for Dirichlet L-functions will be proven "within a week" of the original RH... – David Hansen Jan 27 '10 at 15:22
As a grad student I ran across a paper of McCurley that referenced Piltz for GRH. Or maybe it was this one: ams.org/journals/mcom/1987-48-177/S0025-5718-1987-0866095-8/… The reference to
Piltz (missing from the above wikipedia advert) is A. Piltz, Über die Häufigkeit der Primzahlen in arithmetischen Progressionen und über verwandte Gesetze, A. Neuenhahn, Jena, 1884.
flipkart.com/book/ber-die-hufigkeit-der-primzahlen/1113365641 (also on GoogleBooks). This was his dissertation, and he also conjectures that $p_n - p_{n-1} < p^\alpha$ for all $\alpha >
0$. – Junkie Apr 26 '10 at 1:48
According to Wikipedia, "an equivalent relationship [equivalent to the functional equation] was conjectured by Euler in 1749". I've seen mention of this in other places too, but of course, that doesn't prove anything.
Euler knew how to evaluate zeta at negative integers via abel summation, and of course also how to compute it at positive even integers. He observed some form of the functional equation
relating the values, but didn't have an overall notion of zeta as a function of a complex variable, as far as I know. It seems possible that Riemann was also influenced by these ideas of
Euler; does anyone know whether this is the case? – Emerton Jan 29 '10 at 15:17
Knowing the functional equation at all the positive integers is enough to recover the functional equation for all complex z :) But the proof uses machinery well beyond Euler's time, and a bound on the growth of the Riemann zeta function usually derived from the functional equation. In any case, by a theorem of Carlson, two entire functions that don't grow faster than exp(c*|z|) (c < pi/2) (which is the case for (z-1)zeta(z) and hence a fortiori the functional equation term (z-1)2^z*pi^(z-1)*(blah blah)) and that agree on all positive integers must be equal for all complex z. – maks Jan 29 '10 at 18:43
Btw, in Carlson's theorem you just need c < pi (and that is optimal, because of the function sin(pi*z)), not c < pi/2, as I wrote earlier. – maks Jan 29 '10 at 18:46
Concerning the statement "An equivalent relationship [equivalent to the functional equation] was conjectured by Euler in 1749": this is discussed in Weil's book "Basic number theory." It concerns only the values at integral points: Euler understood $\zeta(1-2k)$ by a simple regularization, and noticed the relation to $\zeta(2k)$.
identify the distribution given the generating function
I have derived a formula for the generating function as (2/3)x^3 / (1 - (1/3)x^3) and am asked to manipulate this so that it follows the general formula for a distribution (Geometric/Poisson/Binomial/Negative Binomial/...).
I have a feeling that it is a negative binomial distribution, since (2/3)x^3 = (0.8736x)^3, but I don't know how to rewrite the denominator in the form (1 - ax)^3.
If I can do that, then I can rewrite the whole thing as ((0.8736x)/(1 - ax))^3, which is a general negative binomial with r = 3, p = 0.8736 and q = a, right?
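One way to sanity-check the guess is to expand the series. A SymPy sketch, assuming the expression is meant to be read as (2/3)x^3 / (1 - (1/3)x^3):

import sympy as sp

x = sp.symbols('x')
G = sp.Rational(2, 3) * x**3 / (1 - sp.Rational(1, 3) * x**3)
print(sp.series(G, x, 0, 13))  # roughly 2*x**3/3 + 2*x**6/9 + 2*x**9/27 + ...
print(G.subs(x, 1))            # 1, so the coefficients are probabilities

The mass sits at 3, 6, 9, ... with weights (2/3)(1/3)^(k-1), which is the pattern of three times a geometric(p = 2/3) variable. For the negative binomial reading one would instead need the denominator to factor as (1 - ax)^3, which 1 - (1/3)x^3 does not (expanding (1 - ax)^3 produces x and x^2 terms that 1 - (1/3)x^3 lacks).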
Would an energy diffraction ring in five space form a Minkowski space?
None of this makes even the remotest bit of sense. This is rather a feature of many of your posts, where you mix jargon from entirely unrelated areas of physics and expect others to make sense of the
resulting mess for you.
Please, if you have a serious, well-defined question, ask it. But questions like this serve only to waste everyone's time.
I'm sorry to waste your time.
Doesn't a metric provide the distance between two vectors?
Can't standing waves and diffraction patterns be considered vectors?
Isn't Minkowski space comprised of three elliptical and one hyperbolic dimension?
In Euclidean space, you can measure the distance between any two identical but spatially separated vectors by the distance between them at any two points, but you can't do that in a coordinate system where the parallel postulate doesn't hold.
If one dimension is expanding (the vectors are getting farther apart), then the dimension will be hyperbolic and the parallel postulate will not hold.
Since the universe is expanding, parallel vectors in x,y,z become divergent vectors as they move through T.
Is there any other mixed jargon that I used?
If you consider vectors painted on an inflating balloon, the distance between the vectors should follow a Minkowski-like metric in three dimensions; and if you correlate energy density to rubber density (elasticity), the vectors on the inflating dimple-shaped balloon should have a GR-type metric.
So if you think of energy as being the inverse dimension of time, it is analogous to the top half of a diffraction pattern being time and the bottom half being energy, where a vector is a wave pattern distributed between the upper hyperbolic half diffraction pattern and a lower elliptical diffraction pattern.
Although I'm sorry about asking a question before I had taken more time to think it through. Sometimes I get stuck on an idea and I want to find a resolution, but what I really need to do is take a walk or a nap to let my mind reconfigure its understanding.
Rosemead Prealgebra Tutor
...My teaching methods ensure that my students feel confident in their Algebra 2 abilities by making the subject as simple as possible. I was nominated for the Pursuit of Excellence award (the
highest award recognized at my school) in math due to my academic achievements in my honors Algebra 2 clas...
22 Subjects: including prealgebra, reading, English, Spanish
...As a patient and experienced tutor, I will help students to deepen their understanding of the concepts. More importantly, I will help them develop useful study skills that will serve them well
into the future. For example, great skills to have are getting organised and prepared for classes and revise concepts regularly.
25 Subjects: including prealgebra, calculus, statistics, geometry
...Pre-Algebra competency applies not just to middle schoolers, but even to teens and adults looking to move on in academic and professional lives as they have to be better armed to translate real
life word problems into workable equations and solutions, so I am here to help make it fun and effectiv...
18 Subjects: including prealgebra, reading, writing, ASVAB
...Since the start of college, I have tutored almost every year, ranging from tutoring elementary students with their homework to helping students specifically with their Language Arts course. I
also have experience working with special needs students as part of my teaching credential program. As ...
12 Subjects: including prealgebra, reading, English, writing
...Rather than ask questions in a straight forward manner, the test is constantly asking students to dive in and use what they know -- and figure out the problem as they go. I help teach this
problem solving method to students who are more used to math from school-- which requires them to look at a...
49 Subjects: including prealgebra, English, reading, writing
Linear Algebra - Matrix with given eigenvalues
1. The problem statement, all variables and given/known data
Come up with a 2 x 2 matrix with 2 and 1 as the eigenvalues. All the entries must be positive.
Then, find a 3 x 3 matrix with 1, 2, 3 as eigenvalues.
3. The attempt at a solution
I found the characteristic equation for the 2x2 would be λ^2 - 3λ + 2 = 0. But then I couldn't get a matrix with positive entries to work for that.
Re: Linear Algebra - Matrix with given eigenvalues
Pick a diagonal matrix.
Re: Linear Algebra - Matrix with given eigenvalues
Does that count for the entries being positive though?
Re: Linear Algebra - Matrix with given eigenvalues
Quote by roto25 (Post 3849064)
Does that count for the entries being positive though?
Not really, no. Sorry. Better give this more thought than I gave this response.
Re: Linear Algebra - Matrix with given eigenvalues
thanks though!
micromass Apr4-12 06:12 AM
Re: Linear Algebra - Matrix with given eigenvalues
The 2x2-case is not so difficult. Remember (or prove) that the characteristic polynomial of a 2x2-matrix A is $\lambda^2 - \mathrm{tr}(A)\,\lambda + \det(A)$.
By the way, I think your characteristic polynomial is wrong.
HallsofIvy Apr4-12 06:17 AM
Re: Linear Algebra - Matrix with given eigenvalues
??? Why do the diagonal matrices
[tex]\begin{bmatrix}1 & 0 \\ 0 & 2\end{bmatrix}[/tex]
[tex]\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{bmatrix}[/tex]
NOT count as "all entries postive"?
micromass Apr4-12 06:18 AM
Re: Linear Algebra - Matrix with given eigenvalues
He probably doesn't consider 0 to be positive.
HallsofIvy Apr4-12 06:22 AM
Re: Linear Algebra - Matrix with given eigenvalues
But it is much easier to claim that 0 is positive!:tongue:
Re: Linear Algebra - Matrix with given eigenvalues
Oh, I had typed 3 instead of 2 for the characteristic polynomial. I ended up looking at this from a Hermitian matrix point of view.
And then I got the matrix:
0 i +1
i-1 3
And I did get the right eigenvalues from that. Does that work?
micromass Apr4-12 08:09 AM
Re: Linear Algebra - Matrix with given eigenvalues
You still have 0 as an entry, you don't want that.
Re: Linear Algebra - Matrix with given eigenvalues
Yeah, I didn't realize that at first. :/
Re: Linear Algebra - Matrix with given eigenvalues
Quote by roto25 (Post 3849508)
Oh, I had typed 3 instead of 2 for the characteristic polynomial. I ended up looking at this from a Hermitian matrix point of view.
And then I got the matrix:
0 i +1
i-1 3
And I did get the right eigenvalues from that. Does that work?
I don't think i+1 would be considered a positive number either. Stick to real entries. Your diagonal entries need to sum to 3, and their product should be greater than 2. Do you see why?
Re: Linear Algebra - Matrix with given eigenvalues
Yes. Theoretically, I know what it should do. I just can't actually find the right values to do it.
Re: Linear Algebra - Matrix with given eigenvalues
Quote by roto25 (Post 3849655)
Yes. Theoretically, I know what it should do. I just can't actually find the right values to do it.
Call one diagonal entry x. Then the other one must be 3-x. Can you find a positive value of x that makes x*(3-x)>2? Graph it.
Re: Linear Algebra - Matrix with given eigenvalues
Well, any value of x between 1 and 2 (like 1.1) work.
Re: Linear Algebra - Matrix with given eigenvalues
Quote by roto25 (Post 3849670)
Well, any value of x between 1 and 2 (like 1.1) work.
Ok, so you just need to fill in the rest of the matrix.
Re: Linear Algebra - Matrix with given eigenvalues
but if I set x to be 1.1, my matrix would be
1.1 __
__ 1.9
And those two spaces have to multiply to 1.1*1.9 - 2, right?
because no matter what values I try, when the eigenvalues are getting closer to 1 and 2, the matrix is just getting closer to the matrix of:
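For what it's worth, any positive off-diagonal pair whose product is 1.1*1.9 - 2 = 0.09 works; here is a quick numeric check with both entries set to 0.3 (a sketch using NumPy):

import numpy as np

A = np.array([[1.1, 0.3],
              [0.3, 1.9]])   # trace 3, det = 2.09 - 0.09 = 2
print(np.linalg.eigvals(A))  # approximately [1. 2.]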
Question about an early result on the mixing of geodesic flows
Let $T_t$ be the geodesic flow on a surface $S$ of constant negative curvature, and let $M(f,t) := \langle \bar f \cdot (f \circ T_t) \rangle$, where $\langle f \rangle := \int_S f(x) d\mu(x)$ and
where $\mu$ is the natural invariant measure.
A 1984 paper by Collet, Epstein, and Gallavotti (PDF here) shows (prop. 5, p. 90) that for $f$ nice (in a sense defined at the bottom of page 71), $\lvert M(f,t) \rvert < \lVert f \rVert^2_\xi C( \
xi) \cdot t^{b(\xi)}\exp(-t/2)$, where $\lVert \cdot \rVert^2_\xi$ is a certain rather complicated norm (defined in equation 3.5 of the paper) and $C$, $b$ do not depend on $f$.
I have two related questions about this result which hopefully someone here already knows (the paper is quite technical and I really don't need to know its details if I can get a bit of clarification
□ This result seems to imply that the rate of mixing is 1/2. How can this be? (see also this question)
□ How does this result (for which the "decay is not exponential") square with the results of Dolgopyat and Liverani that give exponential decay of correlations for reasonably nice Anosov flows?
ds.dynamical-systems ergodic-theory anosov-systems
Fixed, thanks... – Steve Huntsman Dec 28 '10 at 7:27
In the first paragraph do you mean to say $M(f,t):=\langle \bar f \cdot(f \circ T_{t}) \rangle$? Because that's how you use it in the second paragraph. In other words, what is $A$? – drbobmeister Dec 28 '10 at 7:27
My pleasure, Steve. – drbobmeister Dec 28 '10 at 7:29
Dear Steve,
In general the "precise" rate of mixing depends on the class of functions one considers. In particular, the authors of the paper you quoted treat only "analytic" functions (with some
fixed band of analyticity) and they show that you get a rate of $t^{b(\xi)}e^{-t/2}$. In this sense, the rate of mixing is exponential and essentially equal to 1/2: I said essentially
because the factor $t^{b(\xi)}$ can't be removed in general, so that's why the authors said that the rate of mixing is not "genuinely exponential" (i.e., the sharp bound that they
obtain is not exactly an exponential function of time).
On the other hand, the results of Dolgopyat and Liverani concern smooth but non-analytic functions and contact Anosov flows (and not only the ones coming from constant negative curvature surfaces). In this context, they show "exponential mixing" in the sense that the correlations can be bounded by some exponential function of $t$ (say $e^{-\sigma t}$ for some $\sigma>0$). Of course, they don't know whether their "rate" (i.e., the number $\sigma$ they obtain in the end of the calculations) is optimal (for instance, there is a subsequent work of Tsujii improving on Liverani's work). In particular, although Dolgopyat and Liverani obtain "exponential mixing", it is not in the same "sense" as Collet, Epstein and Gallavotti.
In summary, I guess that the confusion comes from the distinct uses of the term "exponential mixing" in the works of Collet, Epstein, Gallavotti, and Dolgopyat and Liverani (and Tsujii also).
Alsip Trigonometry Tutor
Find an Alsip Trigonometry Tutor
...Most importantly, I thoroughly enjoy tutoring, building a rapport with students and parents and helping students to develop and understand concepts and skills that they were not able to
before. I have a particular knack for figuring out what is holding students back from securing the goal score...
20 Subjects: including trigonometry, reading, English, writing
...I have since become a professional engineer with a Master of Science in Mechanical Engineering. I received an A in calculus my senior year of high school and 4 on my calculus AP test. I have
since improved upon my math skills while obtaining a BS in mechanical engineering and an MS in mechanical engineering.
20 Subjects: including trigonometry, physics, statistics, calculus
...I provide Excel tutoring for students who seek to learn for Academic classes. I also provide Excel tutoring to working professionals and Small Businesses that seek to learn Excel for normal
Business use to the creation of advanced Excel workbooks that are aimed at automation of repetitive tasks ...
18 Subjects: including trigonometry, geometry, algebra 2, study skills
...My best students are those that desire to learn and I seek to cultivate that attitude of growth and learning through a zest and enthusiasm for learning. Eventually, I began teaching ACT
Reading/English, began to teach math and reading to all ages, and eventually became a sought after subject tutor. Later, I would become Exam Prep Coordinator and Managing Director of the Learning
26 Subjects: including trigonometry, chemistry, Spanish, reading
...It is like blood flowing in all the veins of the body. No scientist/economist has done his research without mathematics. The foundation of it is laid in elementary/middle level classes.
14 Subjects: including trigonometry, calculus, geometry, statistics
A steam turbine has an inlet condition of P1 = 30 bar, T1 = 400 °C, V1 = 160 m/s. Its exhaust condition is T2 = 100 °C, V2 = 100 m/s, x2 = 1. The work for the turbine is w_cv = 540 kJ/kg. Find σ̇_cv/ṁ. The surroundings are at 500 K.
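A sketch of one way to set this up (first-law energy balance for the heat, then an entropy balance). The enthalpy and entropy numbers below are approximate steam-table values and are assumptions to be checked against your own tables:

# Assumed steam-table values (kJ/kg and kJ/(kg*K)):
h1, s1 = 3230.9, 6.9212   # superheated steam at 30 bar, 400 C
h2, s2 = 2676.1, 7.3549   # saturated vapor (x2 = 1) at 100 C
w_cv, T_surr = 540.0, 500.0

# Energy balance per unit mass: q = (h2 - h1) + (V2^2 - V1^2)/2 + w
q = (h2 - h1) + (100.0**2 - 160.0**2) / 2000.0 + w_cv   # kJ/kg
# Entropy balance: sigma/mdot = (s2 - s1) - q/T_surr
sigma = (s2 - s1) - q / T_surr                          # kJ/(kg*K)
print(round(q, 1), round(sigma, 3))   # about -22.6 and 0.479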
Trinary Search
I am trying to convert a binary search into a trinary search. I have to keep both functions in the program, and I cannot figure out how to continue on to the trinary search after the binary has finished.
How does a trinary search work, exactly? I've never heard of it.
It is just a function that searches for a user given number in a list. It is just like a modified binary search. The list is divided into 3 parts instead of 2.
Thank you, but I am looking for a TRINARY function. Ternary is a completely different thing.
I am trying to convert this code into a trinary search:
int searchlist(const int list[], int numElems, int value)
{
    int first = 0,
        last = numElems - 1,   // was "Max - 1"; use the array size that was passed in
        middle,                // midpoint index (declaration was missing)
        position = -1;
    bool found = false;

    while (!found && first <= last)
    {
        middle = (first + last) / 2;
        if (list[middle] == value)
        {
            found = true;
            position = middle;
        }
        else if (list[middle] > value)
            last = middle - 1;
        else
            first = middle + 1;
    }
    return position;
}
ah, well, never mind, my ternary function has a few run time bugs. It doesn't ternary some numbers correctly... such as 25, 16...
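For comparison, here is a sketch of the three-way split in Python (names are illustrative, not from the assignment): pick two midpoints, check both, then shrink into whichever of the three sub-ranges can contain the value.

def trinary_search(lst, value):
    first, last = 0, len(lst) - 1
    while first <= last:
        third = (last - first) // 3
        m1, m2 = first + third, last - third   # two cut points
        if lst[m1] == value:
            return m1
        if lst[m2] == value:
            return m2
        if value < lst[m1]:
            last = m1 - 1                 # search the left third
        elif value > lst[m2]:
            first = m2 + 1                # search the right third
        else:
            first, last = m1 + 1, m2 - 1  # search the middle third
    return -1

print(trinary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4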
Surface Area (2 Parts)
Find the surface area of that portion of the plane z = 1 + 2x - 3y that lies above the triangular region in the xy-plane with vertices (0, 0), (3, 0), and (0, 2).
I don't understand how to set up the integral or evaluate it...
Using the same plane, z=1+2x-3y, find the surface area that lies above the xy-plane
I am lost... can anyone help?
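For the first part, one standard setup: because the surface is a plane, the integrand in the surface-area formula is constant, so only the area of the base triangle matters:
$$S=\iint_R\sqrt{1+z_x^2+z_y^2}\,dA=\iint_R\sqrt{1+2^2+(-3)^2}\,dA=\sqrt{14}\cdot\operatorname{Area}(R)=\sqrt{14}\cdot\tfrac{1}{2}(3)(2)=3\sqrt{14}.$$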
Computer Adaptive Test Assuming an Infinite Item Pool
Original Code
# In order for us to understand what a Computer Adaptive Test (CAT) is, let's first think about how the CAT works.
# A CAT test starts by having some kind of initial assessment of student ability (test taker's ability).
# This is typically at the population mean.
# The test then selects an item that (in the most straightforward case) has the most information at that initial guess.
# If the student answers that question correctly then the program reassesses student ability and finds the next question which has the most information at the new assessment of student ability.
# The computer continues to select items until the termination conditions are met. These conditions might be anything from a fixed length of time for the test, a fixed number of questions for the test, or more interestingly a sufficient level of precision achieved for student ability. (A flow chart accompanied the original post.)
# In order to assess student ability I will use code from:
# Specify the initial conditions:
true.ability = rnorm(1)
# Load three parameter model ICC
PL3 = function(theta,a, b, c) c+(1-c)*exp(a*(theta-b))/(1+exp(a*(theta-b)))
# Load three parameter item information:
PL3.info = function(theta, a, b, c) a^2 *(PL3(theta,a,b,c)-c)^2/(1-c)^2 * (1-PL3(theta,a,b,c))/PL3(theta,a,b,c)
# Mock Computer Adaptive Test
# First let's specify and initial guess
est.start = 0
# How much do we adjust our estimate of person ability when all answer's are either right or wrong.
est.jump = .7
# Number of items on the test. This will be the end condition.
num.items = 50
# Set the other parameters in the 3PL
a.base = 3
c.base = .1
# Let's generate a vector to hold ability estimates.
ability.est <- est.start
# Let's first generate an empty data frame to hold the set of items taken.
items <- data.frame(a = NA, b = NA, c = NA, response = NA, p = NA, ability.est = NA)
i = 1
# For this first mock test we will not select items from a pool but instead assume the pool is infinite and has an item with an a=a.base, c=c.base, and b equal to whatever the current guess is.
# Let's select our first item - a, b, c, response are scalars that will be reused to simplify coding.
a = a.base
c = c.base
b = ability.est[i]  # the first item is targeted at the initial guess
# Probability of getting the item correct
p=PL3(true.ability, a,b,c)
response = runif(1) < p
# The Item Characteristic Curve (ICC) gives the probability of getting the item correct.
# Thus, a p of .9 means draws of runif(1) below .9 produce a response of TRUE or correct or 1 (as far as R is concerned TRUE and 1 are the same, as are FALSE and 0)
items[i,] = c(a=a, b=b, c=c, response=response, p=p, ability.est=ability.est[i])
# We have now successfully administered our first item.
# Should do our first MLE estimation?
# Not quite, unfortunately MLE requires in the bianary case that the student has answered at least one question right and at least one question wrong.
# Instead we will just adjust the ability estimate by the fixed factor (est.jump);
# this update is applied at the top of each pass through the loop below.
# Now we administer the second item:
# We will continue this until we get some heterogeneity in the responses
response.v = items$response
response.ave = sum(response.v)/length(response.v)
while ((response.ave == ceiling(response.ave)) & (num.items >= i)) {
  # This condition will no longer be true once at least one item is answered
  # correctly and at least one item is answered incorrectly.
  i = i + 1
  ability.est[i] = ability.est[i-1] - (-1)^(response) * est.jump
  b = ability.est[i]  # the next item is targeted at the current ability estimate
  p = PL3(true.ability, a, b, c)
  response = runif(1) < p
  items[i,] = c(a = a, b = b, c = c, response = response, p = p, ability.est = ability.est[i])
  response.v = items$response
  response.ave = sum(response.v) / length(response.v)
}
# Now that we have some heterogeneity of responses we can use the MLE estimator
MLE = function(theta) sum(log((items$response==T)*PL3(theta, items$a, items$b, items$c) +
(items$response==F)*(1-PL3(theta, items$a, items$b, items$c))))
optim(0,MLE, method="Brent", lower=-6, upper=6, control=list(fnscale = -1))
# Okay, it seems to be working properly now we will loop through using the above function.
# The only thing we need change is the ability estimate.
while (num.items >= i) {
  i = i + 1
  ability.est[i] = optim(0, MLE, method = "Brent", lower = -6, upper = 6, control = list(fnscale = -1))$par
  b = ability.est[i]  # target the next item at the updated MLE estimate
  p = PL3(true.ability, a, b, c)
  response = runif(1) < p
  items[i,] = c(a = a, b = b, c = c, response = response, p = p, ability.est = ability.est[i])
  response.v = items$response
  response.ave = sum(response.v) / length(response.v)
}
(ability.est[i] = optim(0,MLE, method="Brent", lower=-6, upper=6, control=list(fnscale = -1)))
# We can see that even in this ideal scenario in which you always have appropriately difficult items with high discriminatory power and low guessing, there is a noticeable amount of error.
plot(0:num.items, ability.est, type="l", main="CAT Estimates Ideally Converge on True Ability",
ylim=c(-3,3), xlab="Number of Items Administered", ylab="Estimated Ability", lwd=2)
abline(h=true.ability, col="red", lwd=2)
$\lim_{x \rightarrow \infty} \left(-3x+\sqrt{9x^2+4x-5}\right)$. The answer is apparently 2/3, but I can't lose the square root term at any place, so I always end up with infinity - infinity = 0. How do I do this limit?
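One standard way around the "infinity minus infinity" form is to multiply and divide by the conjugate, then divide numerator and denominator by $x$:
$$-3x+\sqrt{9x^2+4x-5} \;=\; \frac{(9x^2+4x-5)-9x^2}{3x+\sqrt{9x^2+4x-5}} \;=\; \frac{4-\frac{5}{x}}{3+\sqrt{9+\frac{4}{x}-\frac{5}{x^2}}} \;\longrightarrow\; \frac{4}{3+3}=\frac{2}{3} \quad (x\to\infty).$$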
[70.07] Visualizing the Big Bang: An Introduction to Topology and 3-Manifolds for Undergraduates
AAS Meeting #194 - Chicago, Illinois, May/June 1999
Session 70. Astronomy and Education
Display, Wednesday, June 2, 1999, 10:00am-6:30pm, Southwest Exhibit Hall
R.B. Gardner (East Tennessee State University)
A popular tool used in freshman astronomy classes is the ``balloon analogy'' of the universe. In this analogy, we imagine ourselves as two-dimensional inhabitants of the surface of a swelling sphere.
This model of the universe has the desirable properties that it (1) has no edge, (2) has no center, and (3) satisfies Hubble's Law. Also, this model has spherical geometry and a finite amount of
``space.'' When discussing the other possible geometries of the universe (namely, Euclidean and hyperbolic), the two-dimensional analogies used are usually the Euclidean plane and the hyperbolic
paraboloid (respectively). These surfaces have the desired curvatures and geometries. However, many students get the impression from these examples that a space with zero or negative curvature must
be infinite. This is not the case.
In this presentation, an informal description of 3-manifolds and their topology will be given. A catalogue of topologically distinct manifolds will be presented, including those which have zero and
negative curvature, yet have finite volume. Models of the universe in terms of these manifolds will be introduced. Finally, empirical methods for determining which 3-manifold represents the topology
of our universe will be described.
Differentiating the products and quotients of trig functions tend to create a monster, since the product and quotient rule for differentials tend to produce expressions that are a bit on the long
side. This tends to lead to a trigonometric identity nightmare! What I'm wondering is, is it ok to rearrange the function, before you differentiate?
This is the problem I had.
sin(2t)/cos^2 t
If you first replace sin(2t) with 2sin t cos t you have:
2 sin t cos t / cos^2 t
you can cancel out cos t and get
2 sin t/ cos t
But sin t /cos t = tan t so we can rewrite it as
2 tan t
differentiate and we get:
2 sec^2 t
Now that's easy as pi, and it was the correct answer to the problem. But would rearranging before differentiating ever produce an ambiguous answer?
Of course, the form of the original function must always be taken into account; we cannot use values of t that would result in division by zero or the square root of negative numbers in the original function, even if those values work fine in the derivative of the function.
Anyways, the question is, is rearranging before differentiating ever a bad idea?
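Simplifying first and differentiating directly do agree here; a quick symbolic check (a SymPy sketch):

import sympy as sp

t = sp.symbols('t')
f = sp.sin(2*t) / sp.cos(t)**2          # the original form
g = 2 * sp.tan(t)                       # the rearranged form
print(sp.simplify(f - g))               # 0: same function (where defined)
print(sp.simplify(sp.diff(f, t) - 2/sp.cos(t)**2))  # 0: same derivative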
Math Forum Discussions - Blankenhorn modification of G-K and Euler methods for ellipse circumference
Date: Oct 18, 2012 2:08 AM
Author: thomasinventions@yahoo.com
Subject: Blankenhorn modification of G-K and Euler methods for ellipse circumference
Gauss-Kummer and Euler methods for ellipse circumference calculation can be used for extremely high eccentricities:
Note for example in the Gauss-Kummer series C = pi*(a+b)*(1 + (1/4)h + (1/64)h^2 + ...): the fractional coefficients can be pulled out, and for this case the coefficient sum 1 + 1/4 + 1/64 + ... sums to 4/pi.
You can then work with 4/pi plus the sum of c_n*(h^n - 1),
and when h is VERY near 1 you can obtain a workable number of significant digits in short order. This can be done similarly for Euler's method. Do note, however, the rate of convergence is nothing like that of Cayley.
With only a couple hundred terms of the series, at least 6 significant digits can be produced over all eccentricities, and with exact endpoints using this method. Since I see some people throw their own names about to gain popularity, I suppose could rename this the Blankenhorn ellipse circumference modification to the Gauss-Kummer method, haha.
Use the popular method when the ratio a/b>=0.0005 and the modified method for a/b<0.0005.
-Thomas Blankenhorn
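As a sketch of the idea, under the assumption that the series being rearranged is the standard Gauss-Kummer expansion C = pi*(a+b)*sum_n binom(1/2, n)^2 * h^n with h = ((a-b)/(a+b))^2:

import math

def ellipse_circumference(a, b, terms=300):
    """Rearranged Gauss-Kummer sum: C = pi*(a+b)*(4/pi + sum c_n*(h^n - 1)),
    using sum_{n>=0} c_n = 4/pi, where c_n = binom(1/2, n)**2."""
    h = ((a - b) / (a + b)) ** 2
    c, total = 1.0, 4.0 / math.pi
    for n in range(1, terms):
        c *= ((2.0 * n - 3.0) / (2.0 * n)) ** 2   # c_n from c_{n-1}
        total += c * (h ** n - 1.0)
    return math.pi * (a + b) * total

print(ellipse_circumference(1.0, 1.0))  # ~ 2*pi for a circle
print(ellipse_circumference(1.0, 0.0))  # ~ 4 for the degenerate ellipse

The degenerate endpoint h = 1 is exact because every h^n - 1 term vanishes, which matches the post's point that the rearrangement behaves well at extreme eccentricities.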
Fox Island Geometry Tutor
...This provides a springboard for continued success in life, and a thorough understanding of the basics of math, English, science and the liberal arts is what I focus on for students attempting
to pass the exam. As an instructor, I strive to provide background information and context to my student...
12 Subjects: including geometry, chemistry, algebra 1, algebra 2
...When I got back and went to Washington State University, I minored in German. My professor also recruited me to work as her assistant for the introductory class and to teach the junior-level
conversational German class. I understand the rules of German and know how to get them across to a student.
12 Subjects: including geometry, reading, accounting, ASVAB
...It is also valuable tool for advanced students who seek scholarships and academic recognition. One of my students received a full scholarship based on his PSAT scores. I have assisted students
in the Elementary age group with Math skills from Grades 1-8.
12 Subjects: including geometry, chemistry, GRE, reading
With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a
quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including geometry, calculus, statistics, GRE
...I also tutored a student in Algebra 2 who received A's on every test following my instruction. I enjoy working one on one with students, whether helping them with homework or preparing for an
exam. I am willing to create practice tests for students to ensure their success.
19 Subjects: including geometry, English, statistics, SAT math
300 ton equals how many kg
272,155.422 kilograms (300 US short tons × 907.18474 kg per ton)
Weekly Problem 40 - 2010
The diagram shows nine points in a square array. What is the smallest number of points that need to be removed in order that no three of the remaining points are in a straight line?
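If you want to experiment before looking up a solution, a brute-force sketch in Python (coordinates (row, column) on the 3×3 grid; collinearity checked with a cross product) finds the minimum directly:

from itertools import combinations

points = [(r, c) for r in range(3) for c in range(3)]

def collinear(p, q, r):
    return (q[0]-p[0]) * (r[1]-p[1]) == (q[1]-p[1]) * (r[0]-p[0])

def no_three_in_line(kept):
    return not any(collinear(*t) for t in combinations(kept, 3))

for k in range(1, 9):
    if any(no_three_in_line([p for p in points if p not in set(rm)])
           for rm in combinations(points, k)):
        print("smallest number of removals:", k)
        break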
If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges.
What is the inverse of the statement "If you study for the test, your grade will increase"?
A. If you do not study for the test, then your grade will not increase.
B. If your grade increases, then you studied for the test.
C. If your grade does not increase, then you did not study for the test.
D. If you study for the test, your grade will not increase.
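For reference, a tiny truth-table sketch of the standard definitions: the inverse of "if P then Q" negates both parts ("if not P then not Q"), and it is logically equivalent to the converse:

from itertools import product

for P, Q in product([True, False], repeat=2):
    statement = (not P) or Q        # P -> Q
    inverse = P or (not Q)          # (not P) -> (not Q)
    converse = (not Q) or P         # Q -> P
    print(P, Q, statement, inverse, inverse == converse)  # last column: True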
March 2006
The maths behind the music
The transformation rule which is the basis for Nick's composition uses a complex function (the formula itself appeared as an image in the original article), iterated starting from a given value. Any complex number z = x + iy can also be thought of as the pair (x, y) of real numbers, so the formula does indeed transform one pair into another pair, as described above (see Curious quaternions for an introduction to complex numbers).
Recommend: A Course in Machine Learning
The following content is totally copied from the website of A Course in Machine Learning.
CIML is a set of introductory materials that covers most major aspects of modern machine learning (supervised learning, unsupervised learning, large margin methods, probabilistic modeling,
learning theory, etc.). Its focus is on broad applications with a rigorous backbone. A subset can be used for an undergraduate course; a graduate course could probably cover the entire material
and then some.
This book is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it or re-use it under the terms of the CIML License online at ciml.info/LICENSE.
You may not redistribute it yourself, but are encouraged to provide a link to the CIML web page for others to download for free. You may not charge a fee for printed versions, though you can
print it for your own use.
Individual Chapters:
1. Front Matter
2. Decision Trees
3. Geometry and Nearest Neighbors
4. The Perceptron
5. Machine Learning in Practice
6. Beyond Binary Classification
7. Linear Models
8. Probabilistic Modeling
9. Neural Networks
10. Kernel Methods
11. Learning Theory
12. Ensemble Methods
13. Efficient Learning
14. Unsupervised Learning
15. Expectation Maximization
16. Semi-Supervised Learning
17. Graphical Models
18. Online Learning
19. Structured Learning
20. Bayesian Learning
21. Back Matter
Cardinal number exercise
5.10 Exercise: Give examples of infinitely many infinite sets no two of which have the same cardinal number.
My text defines "countably infinite" to be a set that is countable but not infinite. Also it defines "countable" as the set is either finite or it has the same cardinal number as N, the natural
So if two sets do not have the same cardinal number, then they do not have a one-to-one correspondence; thus there exists no way to map f: A -> B so that the elements of A and B pair up exactly one-to-one.
In previous theorems the text claims that the fields N, Z, and Q are countably infinite, meaning they are not infinite sets. All of these fields certainly have an infinite number of elements. This
doesn't make sense to me. Is the only infinite set R? I don't know if these definitions are standard, but I'm really lost obviously.
As for an example of two infinite sets that don't have the same cardinal number I actually can't think of an example because I don't know how to define an infinite set by these restrictions. It
seems both sets A and B would have to have elements strictly in R, and I can't think of a way that you can't map two subsets of R and not be one-to-one.
Basically I'm really lost and a little clarification would be a lifesaver. Thanks guys.
Are you sure? A countably infinite set is a set that is countable and infinite (as the name suggests).
—which would seem to suggest that N is not finite. In other words, N is infinite (as common sense suggests).
In previous theorems the text claims that fields N, Z, and Q are countably infinite, meaning they are not infinite sets. All of these fields certainly have an infinitely amount of terms. This
doesn't make sense to me. Is the only infinite set R? I don't know if these definitions are standard, but I'm really lost obviously.
These sets are indeed all infinite: N, Z and Q are countably infinite; R is uncountably infinite. So R has a different cardinality from the other three sets.
As for an example of two infinite sets that don't have the same cardinal number I actually can't think of an example because I don't know how to define an infinite set by these restrictions. It
seems both sets A and B would have to have elements strictly in R, and I can't think of a way that you can't map two subsets of R and not be one-to-one.
The big theorem that you really need to know in order to tackle this question is this. Given a nonempty set S, let P(S) denote the set of all subsets of S (sometimes called the power set of S). The theorem says that the cardinal number of P(S) is greater than the cardinal number of S.
If S is finite, with cardinality n, then the cardinality of P(S) is 2^n. If S is infinite, with cardinal number ℵ then P(S) is "even more infinite", and its cardinality is usually written as 2^ℵ.
For the example of infinitely many infinite sets no two of which have the same cardinal number, you can use an inductive construction. Let S_1 = N (or R), and for n≥1 define S_{n+1} = P(S_n).
I'm looking through the bottom of your post right now so I'll post back when I get somewhere. Thank you so much for clarifying that definition! It must be a typo. Everything seemed so
We just finished talking about power sets last class, although I was very lost. I think I understand what the power set theorem says, although being "even more infinite" is something I don't
quite fully understand.
That is a clever method of finding infinitely many infinite sets! I'll definitely use that. Of course the hinge of this method is understanding the proof behind it, so I'll have to look into that.
Here is the hint that my text gave; maybe you could help put it into easier terms for me. This is for proving that for all non-empty sets A, the cardinal number of P(A) is greater than that of A.
Hint: Indirect proof! Suppose to the contrary that f:A->P(A) is a one-to-one correspondence between A and P(A) and consider the set $B = \{ x \in A: x \notin f(x) \}$. Then $B \in P(A)$ so it must correspond to some element $b \in A$. Is $b \in B$?
I was going to try to say as much as I understood, but I actually don't know what the colon means when defining a set. Does set B have x in A and x not in f(x)? That seems impossible.
Here is the hint that my text gave; maybe you could help put it into easier terms for me. This is for proving that for all non-empty sets A, the cardinal number of P(A) is greater than that of A.
Hint: Indirect proof! Suppose to the contrary that f:A->P(A) is a one-to-one correspondence between A and P(A) and consider the set $B = \{ x \in A: x \notin f(x) \}$. Then $B \in P(A)$ so it must correspond to some element $b \in A$. Is $b \in B$?
I was going to try to say as much as I understood, but I actually don't know what the colon means when defining a set. Does set B have x in A and x not in f(x)? That seems impossible.
The colon should be read as "such that". In the definition $B = \{ x \in A: x \notin f(x) \}$, the condition for x to belong to B is that x is not in the subset $f(x)\subseteq A$.
To take a concrete example, suppose A = N (the natural numbers). If f is a function from N to P(N) then for every n in N, f(n) has to be a subset of N. For instance, f(1) might be {1,3,5,7,...}
and f(2) might be {10,11,12}. Then 1∈f(1) but 2∉f(2).
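Continuing that example, here is how the set B from the hint plays out (a sketch using the same made-up f): since 1 is in f(1), it fails the membership condition, while 2 is not in f(2), so it satisfies it:
\[ 1 \in f(1) \;\Rightarrow\; 1 \notin B, \qquad 2 \notin f(2) \;\Rightarrow\; 2 \in B. \]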
The idea of the hint for proving the cardinality theorem is essentially the same as the argument of Russell's paradox. If there is a surjective mapping f from A to P(A) then the set B (consisting
of all x in A such that x∉f(x)) must be equal to f(b) for some b∈A. But then if you ask whether b∈B, you find that this implies that b∉B, and vice versa. That is a contradiction, and proves that
f cannot exist.
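Written out symbolically, the whole argument is short (a sketch of the standard proof of Cantor's theorem): suppose f:A->P(A) were onto, and set
\[ B = \{ x \in A : x \notin f(x) \} \in P(A), \qquad B = f(b) \text{ for some } b \in A. \]
Then
\[ b \in B \iff b \notin f(b) = B, \]
which is a contradiction. So no onto map (and in particular no one-to-one correspondence) from A to P(A) can exist, i.e. the cardinal number of P(A) is strictly greater than that of A.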
Thank you very much for all of this. I'm trying to make sure I completely understand your example. So f is also mapping N to P(N), where the input would be a single element n, and the output would be some sort of subset of N, correct? Hence like you said, for every n in N, f(n) has to be a subset of N. And I understand the last part of that paragraph, so good.
The thing I don't get is this: we are saying that f:A->P(A) is a one-to-one correspondence, meaning for each subset in P(A) there is exactly one element of A that f maps to it. Now, we're supposing there is a set $B = \{ x \in A: x \notin f(x) \}$. So all the x's in B must be in A, but not in the subsets f assigns to them? That seems impossible since f always maps A->P(A).
So you're saying that since B is composed of elements of A, there must exist some subset mapped by f that is equal to B. I think you wrote this as: set B must equal f(b) for some b in A. So now, because there is this subset f(b), there must be some b in A that satisfies this mapping for f, thus b is in A — but is b also in B? I'm stuck on the very last part. I feel I almost have it.
DragonBox Algebra
DragonBox Algebra 12+ is a must-have tool for students so they can earn better grades and gain confidence in algebra and mathematics. It is based on the award winning game DragonBox Algebra 5+ but
covers more advanced topics in mathematics and algebra:
* Parentheses
* Positive and Negative signs
* Addition of Fractions (Common Denominators)
* Collection of Like Terms
* Factorization
* Substitution
DragonBox Algebra 12+ gives players a greater understanding of what mathematics is all about: objects and the relationships between objects.
This educational game targets children from the ages of 12 to 17 but children (or adults) of all ages can enjoy it. Playing doesn’t require supervision, although parents can enjoy playing along with
their children and maybe even freshen up their own math skills.
DragonBox Algebra 12+ introduces all these elements in a playful and colorful world appealing to all ages.
The player learns at his/her own pace by experimenting with rules that are introduced gradually. Progress is illustrated with the birth and growth of a dragon for each new chapter.
Dr. Patrick Marchal, Ph.D. in cognitive science, and Jean-Baptiste Huynh, a high school teacher, created DragonBox Algebra 12+ as an intuitive, interactive and efficient way to learn algebra.
DragonBox Algebra 12+ is based on a novel pedagogical method developed in Norway that focuses on discovery and experimentation. Players receive instant feedback which differs from the traditional
classroom setting where students can wait weeks for feedback. DragonBox Algebra 12+ creates an environment for kids where they can learn, enjoy and appreciate math.
Our previous educational game, DragonBox Algebra 5+, has received many distinctions including the Gold Medal of the 2012 Serious Play Award (USA), the Best Serious Game at Bilbao's Fun and Serious Game Festival and the Best Serious Game at the 2013 International Mobile Gaming Awards. It is also recommended by Common Sense Media, where it won the Learn ON award.
* 20 progressive chapters (10 learning, 10 training)
* 357 puzzles
* Basic algebraic rules with which the child can experiment
* A focus on minimal instruction encourages creativity and experimentation from the player
* Multiple profiles for easy progress control
* Dedicated graphics and music for each chapter
* Multiple supported languages (English, français, norsk, svenska, dansk, español, 한국어, italiano, português, Deutsch, русский, 简体中文, 繁體中文, suomi, Nederlands, Eesti, Euskara, Türkçe...)
Totally bypasses any confusion or misunderstanding of terms. Excellent pace. Plenty of drill. Loved the graphics, feel and sound. Addictively fun.
All my kids, ages 5 to 12, have enjoyed DragonBox. Algebra did not click right away for my daughter, but then she realized algebra was just like DragonBox and it made sense what she was supposed to do. Now it's like a game for her.
Great game! Learn algebra without realizing it.
***** Kids ages 5-12 love it! Initially had a problem installing it. Contacted support and it was resolved within two weeks. Thanks!
This is a 5 star game! I bought both DB 1 and 2 and this is first rate. Not being sure, I started my 11 yr old with the other game and it was way too easy. This is perfect. Challenging yet fun, and
there are lots of concepts that she and her older siblings are seeing the overlap on and really getting a better grasp of! She even tried to take it to bed and play under the covers. Can you figure
out how to do this for Geometry?!?
Great app! Maybe you can do calculus next?
What's New
Bug fixes and translations. New translations: Estonian, Basque and Turkish.
Need more than free videos to learn math? YourTeacher's Algebra app is like having a personal math tutor in your pocket.
“It’s like a private school math classroom, but you are the only student.”
"I just love YourTeacher and the way you explain things. I felt like I was in a classroom instead of just looking at examples."
"My daughter is doing Algebra 1 in 8th Grade. She had been getting really low grades because they are moving through the material so quickly. She had a test 3 days after we bought your program and
she got 94% (the highest score in the class) because we had her work through the modules over and. She really enjoys the program and her motivation is good again."
Need more than videos to learn Algebra…
YourTeacher’s Algebra app replicates the entire math classroom experience with your own personal math teacher.
Our lessons include:
-Multiple video example problems
(similar to how a teacher starts class at the board by explaining the examples from the textbook)
-Interactive practice problems with built-in support
(similar to how a teacher assigns practice and walks around the class providing help)
-A Challenge Problem
(similar to how a teacher assigns a higher level problem which students must work on their own to prove mastery)
-Extra problem worksheets
(similar to how a teacher assigns additional problems for homework)
-Review notes
(similar to how a teacher provides summary handouts or refers you to your textbook)
Scope and Sequence
YourTeacher’s Algebra app covers an entire year of Algebra 1.
Addition and Subtraction
Multiplication and Division
Order of Operations
Least Common Multiple
Addition and Subtraction
Multiplication and Division
Order of Operations
Combining Like Terms
Distributive Property
Distributive / Like Terms
One-Step Equations
Two-Step Equations
Equations with Fractions
Equations Involving Distributive
Variable on Both Sides
Variable on Both Sides / Fractions
Variable on Both Sides / Distributive
Integer Solutions
Decimal Solutions
Fractional Solutions
Beginning Formulas
CHAPTER 3: WORD PROBLEMS
Number Problems
Consecutive Integer Problems
Geometry Problems
Percent Problems
Age Problems
Value Problems
Interest Problems
Introductory Motion Problems
Solving and Graphing Inequalities
Combined Inequalities
The Coordinate System
Domain and Range
Definition of a Function
Function and Arrow Notation
Graphing within a Given Domain
Graphing Lines
The Intercept Method
Graphing Inequalities in Two Variables
Patterns and Table Building
Word Problems and Table Building
Slope as a Rate of Change
Using the Graph of a Line to Find Slope
Using Slope to Graph a Line
Using Coordinates to Find Slope (Graphs and Tables)
Using Coordinates to Find Slope
Using Slope to Find Missing Coordinates
Using Slope-Intercept Form to Graph a Line
Converting to Slope-Intercept Form and Graphing
Linear Parent Graph and Transformations
Using Graphs and Slope-Intercept Form
Using Tables and Slope-Intercept Form
Direct Variation
Applications of Direct Variation and Linear Functions
CHAPTER 7: EXPONENTS & POLYNOMIALS
CHAPTER 10: RADICALS
CHAPTER 11: QUADRATICS
CHAPTER 13: QUADRATIC EQUATIONS & FUNCTIONS
(Wifi or 3G connection required)
algebra, algebra tutoring, algebra tutor, algebra help
Taking algebra? Then you need the Wolfram Algebra Course Assistant. This definitive app for algebra--from the world leader in math software--will help you quickly solve your homework problems, ace
your tests, and learn algebra concepts so you're prepared for your next courses. Forget canned examples! The Wolfram Algebra Course Assistant solves your specific algebra problems on the fly, often
showing you how to work through the problem step by step.
This app covers the following topics applicable to Algebra I, Algebra II, and College Algebra:
- Evaluate any numeric expression or substitute a value for a variable.
- Simplify fractions, square roots, or any other expression.
- Solve a simple equation or a system of equations for specific variables.
- Plot basic, parametric, or polar plots of the function(s) of your choice.
- Expand any polynomial.
- Factor numeric expressions, polynomials, and symbolic expressions.
- Divide any two expressions.
- Find the partial fraction decomposition of rational expressions.
The Wolfram Algebra Course Assistant is powered by the Wolfram|Alpha computational knowledge engine and is created by Wolfram Research, makers of Mathematica—the world's leading software system for
mathematical research and education.
The Wolfram Algebra Course Assistant draws on the computational power of Wolfram|Alpha's supercomputers over a 2G, 3G, 4G, or Wi-Fi connection.
Easter special discount sale -50%! Get it now! Happy Easter!
Math Helper solves math problems and shows step-by-step solution.
Math Helper is a universal assistant app for solving mathematical problems in Algebra I, Algebra II, Calculus and Math for secondary and college students, which allows you not only to see the answer or result of a problem, but also a detailed solution.
[✔] Linear Algebra - Operations with matrices
[✔] Linear algebra - Solving systems of linear equations
[✔] Vector algebra - Vectors
[✔] Vector algebra - Shapes
[✔] Calculus - Derivatives
[✔] Calculus - Indefinite Integrals (integrals solver)
[✔] Calculus - Limits
[✔] The theory of probability
[✔] Numbers and sequences
[✔] Function plotter
Derivatives, limits, geometric shapes, the task of statistics, matrices, systems of equations and vectors – this and more in Math Helper!
✧ 10 topics and 43+ sub-sections.
✧ Localization for Russian, English, Italian, French, German and Portuguese
✧ Intel ® Learning Series Alliance quality mark
✧ EAS ® approved
✧ More than 10'000 customers all over the world have supported the development of Math Helper by purchasing it
✧ The application is equipped with a convenient multi-function calculator and extensive theoretical guide
✪ Thank you all for helping us reach 800'000+ downloads of Math Helper Lite
✪ You could also support us with good feedback at Google Play or by links below
✪ Our Facebook page: https://www.facebook.com/DDdev.MathHelper
✪ Or you could reach us directly by email
We have plans to implement
● Numbers and polynomial division and multiplication
● Implement new design and add 50+ new problems
● New applications, like Formulae reference for college and university and symbolic calculator
MathHelper is a universal assistant for anyone who has to deal with higher mathematics, calculus, or algebra. You may be a college student or a graduate, but if you suddenly need emergency assistance in mathematics, the tool is right at your fingertips! You could also use it to prepare for the SAT, ACT or any other tests.
Don't know how to do algebra? Stuck during calculus practice, need to solve algebra problems, or just need an integral solver or a limits calculator? Math Helper is not only a math calculator but a step-by-step algebra tutor: it will help you solve derivatives, integrals and algebra problems, and it contains built-in reference material — derivative and antiderivative rules (differentiation and integration), basic algebra, algebra 1 and 2, etc.
A good calculus and algebra app for college students! Better than any plain algebra calculator, algebra solving calculator or algebra graphing calculator — this is a calculus solver and an algebra 1 and 2 solver, with step-by-step help, a calculator, and integrated algebra textbooks and calculus formulas.
This is not just a math answers app, but a math problem solver! It can handle problems from basic math up to integrals and derivatives, matrices, vectors, geometry and much more — for everyone into math learning. This is not just a math reference app; it is the ultimate math problem solver.
Discover a new shape of mathematics with MathHelper!
The Easter special discount sale will last till Monday. Get the app at the special discount now!
Want to learn algebra? Algebra is used every day in any profession and chances are, you’ll run into algebra problems throughout your life! Any of the following sound familiar?
- Trouble paying attention in class
- Improper instruction
- General disinterest in math
Don’t worry! Come finals day, you’ll be prepared with our comprehensive algebra app. Our app features everything you’ll need to learn algebra, from an all-inclusive algebra textbook to a number of
procedural problem generators to cement your knowledge.
- Abridged algebra textbook, covering many aspects of the subject
- Procedural problem generators, ensuring you will not encounter the same problem twice
- Problem explanations, demonstrating step-by-step how to do each problem
- Quicknotes, reviewing entire chapters in minutes
- Intuitive graphing interface, teaching proper graphing methods
- Statistics tracking, helping you identify your weaknesses
Plus, new chapters are added all the time!
Subjects covered:
Ch. 1: Basics
1.1 Basics of Algebra
1.2 Solving Equations
1.3 Solving Inequalities
1.4 Ratios and Proportions
1.5 Exponents
1.6 Negative Exponents
1.7 Scientific Notation
Ch. 2: Graphing
2.1 Rectangular Coordinate System
2.2 Graphing by Points
2.3 Graphing by Slope-Intercept Form
2.4 Graphing by Point-Slope Form
2.5 Parallel and Perpendicular Lines
2.6 Introduction to Functions
Ch. 3: Systems
3.1 Systems of Equations by Substitution
3.2 Systems of Equations by Elimination
3.3 Systems of Equations by Graphing
Ch. 4: Polynomials
4.1 Introduction to Polynomials
4.2 Adding and Subtracting Polynomials
4.3 Multiplying Polynomials
4.4 Dividing Polynomials
Ch. 5: Rationals
5.1 Simplifying Rational Expressions
5.2 Multiplying and Dividing Rational Expressions
5.3 Adding and Subtracting Rational Expressions
5.4 Complex Rational Expressions
5.5 Solving Rational Expressions
Ch. 6: Factoring
6.1 Introduction to Factoring
6.2 Factoring Trinomials
6.3 Factoring Binomials
6.4 Solving Equations by Factoring
Ch. 7: Radicals
7.1 Introduction To Radicals
7.2 Simplifying Radical Expressions
7.3 Adding and Subtracting Radical Expressions
7.4 Multiplying and Dividing Radical Expressions
7.5 Rational Exponents
7.6 Solving Radical Expressions
Ch. 8: Quadratics
8.1 Extracting Square Roots
8.2 Completing the Square
8.3 Quadratic Formula
8.4 Graphing Parabolas
Keywords: learn algebra, algebra, math, free, graphing, algebra textbook, teach algebra, algebra tutor, algebra practice, algebra problems, review algebra, study algebra, algebra prep, algebra cheat
sheet, algebra formulas, algebra notes, algebra quicknotes, pre-algebra, algebra 2, common core, high school math, high school algebra, ged, sat, gmat
★★★ Check out our NEW HD version with Tablet Support and save a crazy 80% ★★★
Here ==> https://play.google.com/store/apps/details?id=com.phoneflips.paen
Well-paying careers demand skills like problem solving, reasoning, decision making, applying solid strategies, etc., and algebra provides you with a wonderful grounding in those skills — not to mention that it can prepare you for a wide range of opportunities.
This is a COMPLETE Pre-Algebra guide to well over 325 rules, definitions and examples, including number line, integers, rational numbers, scientific notation, median, like terms, equations,
Pythagorean theorem and much more!
Our guide will take you step-by-step through the basic building blocks of Algebra giving you a solid foundation for further studies in our easy-to-follow and proven format!
Table of Contents
1. Number Line
2. Inequality Symbols
3. Comparing and Ordering
4. Graphs of Real, Integer & Whole Numbers
5. Adding Positive & Negatives
6. Subtracting Numbers & Opposites
7. Multiplying & Dividing Positive & Negatives
8. Properties of Real Numbers
9. Exponents & Properties
10. Order of Operations
11. Divisibility Tests
12. Greatest Common Factor (G.C.F.)
13. Least Common Multiple (L.C.M.)
14. Rational Numbers, Proper, Improper Fractions
15. Reducing Proper & Improper Fractions
16. Adding Fractions
17. Subtracting Fractions
18. Multiplying Fractions
19. Dividing Fractions
20. Adding & Subtracting Decimals
21. Multiplying Decimals
22. Dividing Decimals
23. Fractions to Decimals
24. Decimals to Fractions
25. Rounding Decimals
26. Scientific Notation
27. Percent
28. Percent Problems
29. Averages & Means
30. Medians
31. Mode & Range
32. Variables, Coefficients & Terms, Degrees
33. Like / Unlike Terms
34. Polynomials / Degrees
35. Distributive Property
36. Add/Subtract Polynomials
37. Expression Evaluation
38. Open Sentence / Solutions
39. One-Step Equations
40. Solving ax+b = c Equations
41. Solving ax+b = cx+d Equations
42. Solving a Proportion
43. From Words to Symbols
44. Square Roots / Radical Sign
45. Pythagorean Theorem
Algebra is a unique discipline. It is very abstract. The abstractness of algebra causes the brain to think in totally new patterns. That thinking process causes the brain to work, much like a muscle. The more that muscle works out, the better it performs on OTHER tasks. In simple terms, algebra builds a better brain! Believe it or not, algebra is much easier to learn than many of us think, and this guide helps make it easier!
Like all our 'phoneflips', this lightweight application has NO ads, never needs an internet connection and won't take up much space on your phone!
For hard copy versions of this and other great products, please visit: http://www.flipperguides.com/
**REAL TEACHER TAUGHT LESSONS**
This algebra course teaches basic number operations, variables and their applications. Gain a fundamental sense of equations, inequalities and their solutions. The course offers 11 full chapters, each with 6-8 lessons presented as short, easy-to-follow algebra videos. These 5-to-10-minute videos take students through each lesson slowly and concisely. Algebra is taken by students who have gained skills like operations with numbers, rational numbers, basic equations and the basic coordinate plane.
Chapter 1 Algebra Tools
1.1 Variables and Expressions
1.2 Exponents and Order of Operations
1.3 Exploring Real Numbers
1.4 Adding Real Numbers
1.5 Subtracting Integers
1.6 Multiplying and Dividing Real Numbers
1.7 The Distributive Property
1.8 Properties of Numbers
1.9 Number Systems
1.10 Functions and Graphs
Chapter 2 Solving Equations
2.1 Solving Two Step Equations
2.2 Solving Multi-Step Equations
2.3 Solving Equations with Variables on Both Sides
2.4 Ratios and Proportions
2.5 Equations and Problem Solving
2.6 Mixture Problems
2.7 Percent of Change
2.8 Solving for a Special Value
2.9 Weighted Averages
Chapter 3 Solving Inequalities
3.1 Inequalities and their Graphs
3.2 Solving Inequalities by Adding and Subtracting
3.3 Solving an Inequality by Multiplying and Dividing
3.4 Solving Multi-Step Inequalities
3.5 Solving Compound Inequalities
3.6 Absolute Value and Inequalities
3.7 Graphing Systems of Inequalities
3.8 Graphing Inequalities in Two variables
Chapter 4 Graphs and Functions
4.1 Graphing data on the Coordinate Plane
4.2 Greatest Common Divisor
4.3 Equivalent Fractions
4.4 Equivalent Forms of Rational Numbers
4.5 Comparing and Ordering Rational Numbers
4.6 Direct Variation
4.7 Deductive and Inductive Reasoning
Chapter 5 Linear Equations and Their Graphs
5.1 Rate of Change and Slope
5.2 Slope Intercept Form
5.3 Standard Form
5.4 Point Slope Form
5.5 Parallel and Perpendicular
Chapter 6 System of Equations and Inequalities
6.1 Solve Systems by Graphing
6.2 Solve Systems Using Substitution
6.3 Solve Systems Using Elimination
6.4 Application of Systems of Equations
6.5 Linear Inequalities
6.6 Systems of Inequalities
Chapter 7 Exponents
7.1 Zero and Negative Exponents
7.2 Scientific Notation
7.3 Multiplication Properties of Exponents
7.4 More on Multiplication of Exponents
7.5 Division Properties of Exponents
Chapter 8 Polynomials and Factoring
8.1 Adding and Subtracting Polynomials
8.2 Multiplying and Factoring Polynomials
8.3 Multiply Binomials (FOIL)
8.4 Multiply Special cases
8.5 Factor Trinomials (a=1)
8.6 Factor Trinomials (a>1)
8.7 Special cases of factoring polynomials
8.8 Factoring polynomials using grouping
8.9 Multiplying Monomials
8.10 Dividing Monomials
8.11 Special Products of Binomials
8.12 Factor Difference of Squares
8.13 Perfect Squares
Chapter 9 Quadratic Equations and Functions
9.1 Exploring Graphing Quadratics
9.2 Quadratic Equation
9.3 Finding and Estimating Square Roots
9.4 Solving Quadratic Equations
9.5 Factor Quadratics to Solve
9.6 Complete the Square to Solve Quadratics
9.7 Solve Quadratic Equations using the Quadratic Formula
9.8 Using Discriminant
9.9 Graphing Quadratics
9.10 Exponent Functions
9.11 Growth and Decay
Chapter 10 Radical Expressions and Equations
10.1 Simplify Radicals
10.2 The Pythagorean Theorem
10.3 Operations with radical Expressions
10.4 Solve Radical Equations
10.5 Graphing Square Root Functions
10.6 The Distance Formula
Chapter 11 Rational Expressions and Equations
11.1 Simplify Rational Expressions
11.2 Multiplying and Dividing Rational Expressions
11.3 Divide Polynomials
11.4 Adding and Subtracting Rational Expressions
11.5 Rational Equations
11.6 Inverse Variation
Algebra TestBank! PERFECT-Score Authors.
Johns Hopkins provides full funding so that every student at Dunbar High School (Baltimore, Maryland) receives our TestBank Software.
• Multiple-Choice Questions: Every subject is covered.
• All Subjects: Focus on any subject you choose.
• Adaptive Learning Technology: Re-calibrates based on your performance.
• Seen Least Option: Avoid repeating questions.
• Missed Most Option: Review questions you missed most often.
• Favorite Option: Flag specific questions for review later.
• Rationale: Understand "why" an answer is correct.
• Performance Statistics: Know where you stand for every subject.
• Retake entire test or only missed questions: Maximum retention.
• Test-taking advice, tips, and strategies: You'll be prepared.
• Frequent Updates: Help you stay up on the latest material.
• See our PERFECT SCORE Authors at http://AllenPrep.com
ADAPTIVE LEARNING TECHNOLOGY: Your Algebra TestBank is continually re-calibrated every time you answer a question, based on your personal performance.
• Mock Algebra Questions
100% multiple choice questions along with guideline answers focusing on every subject area. Practicing these questions will greatly reduce your risk of encountering surprises on exam day.
• Rationale given for every single question
Explanations and rationale provided for every question. This makes TestBank a truly stand-alone program. You will not need to refer to another source for rationale while you are using TestBank. Your
learning will be more efficient and productive since everything you need to know is in front of you.
• You choose which subjects to study
Depending on your learning needs or time frame, you can decide to take a specific number of questions randomly selected from ALL of the subject areas, any COMBINATION of the subjects, or an
INDIVIDUAL subject. This gives you control, allowing you to target your focus on certain subjects.
• The order of the QUESTIONS is always scrambled.
TestBank keeps you on your toes. Become a better test taker, creating confidence in the materials learned.
• The order of the ANSWERS is always scrambled.
Forces you to read/understand each answer choice instead of remembering which answer is correct based on its order. This feature forces you to think about each question, preventing you from memorizing the "location" of the answer instead of "knowing" the answer.
• Your performance is tracked.
Your performance is displayed when you open the app, allowing you to track your progress and target your studies. TestBank tracks your cumulative performance, both overall and by subject area.
• Questions Seen Least
TestBank knows which questions you have seen/answered the least often. When you choose this option (it's the default option), the questions that you have seen the least number of times will display
first. We recommend this option for your initial few weeks of using TestBank. This enables you to focus on new questions without wasting time on ones you have already seen.
• Questions Missed Most
TestBank tracks when you get a question incorrect. After using TestBank for a while, you will want to focus on the questions that you have missed most often. When you choose this option, the
questions that you missed (scored incorrect) most often will display first. Focus on weak areas by taking quizzes only with the questions that you have missed most frequently.
If you like Khan Academy, you'll LOVE Algebra TestBank! Use with Ted, Quora, and Star Chart also.
Use for State Achievement Tests:
• California: STaR, CAHSEE
• Florida: FCAT
• Georgia: CRCT, EOCT, GHSGT, GAA
• Illinois: ISAT, PSAE
• Massachusetts: MCAS
• Michigan: MEAP, MME
• New Jersey: NJASK, GEPA, HSPA
• New York: Regents
• North Carolina: EOGs, EOCs
• Ohio: OAT, OGT
• Pennsylvania: PSSA, PASA
• Texas: TAKS
• Virginia: SOL
• Washington: WASL
Since 1993 | Allen Resources, Inc. | 401.965.0340
This is an algebra calculator that solves algebra for you and gives you answers teachers will accept! Unlike other similar apps, this app is built directly for students. The answers that appear will be in the form your teacher will want. No more weird decimal answers! Finally, an app with answers teachers will accept! It does a variety of algebra calculations to help with your math class — like your own personal algebra tutor app. It is great for students in all algebra classes, including Algebra 2: it will act as an Algebra 2 calculator and a calculator for all algebra classes. It will act as a calculator to solve algebra problems like factoring and completing the square. Also included: a quadratic equation solver; systems of linear equations with two or three equations; and the Pythagorean theorem, in two versions, one for simple problems and one for more advanced problems. It will also solve for slope and y-intercept and give the equation in slope-intercept form. It simplifies square roots, and it is a calculator that calculates square roots, cube roots and any other root. We have added the FOIL method and exponents to the list of problems that can be solved. Reduce, add, subtract, multiply and divide mixed-number fractions. Find the LCM and GCD (least common multiple and greatest common divisor). There are many more solvers included in this app, as well as a list of formulas — see the example below! This is great for kids and adults; it can be used to help with math workouts and math games. Free updates are constantly being released to improve the app based on your responses! Download now for this special price!
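As an illustration of the kind of exact, teacher-friendly answer described above (a made-up example, not taken from the app itself): solving x^2 - 2x - 4 = 0 with the quadratic formula gives
\[ x = \frac{2 \pm \sqrt{4 + 16}}{2} = 1 \pm \sqrt{5}, \]
an exact radical answer rather than rounded decimals like 3.236... that a plain calculator would return.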
★★★★★ Next Generation Interactive Common-Core! Learn Algebra while having fun ★★★★★
Whether you are a high-school or college student, this Algebra course will replace a whole year of boring Algebra with fun and engaging interactions.
The most complete and interactive Algebra course, developed by over 100 Algebra teachers and used by millions of Algebra students worldwide. The Algebra Genie gets you learning quickly while having fun: 250 interactive and dynamic lessons, over 200MB of animations and multimedia, covering all the important Algebra topics to get you into college quickly!
Topics covered:
01. Algebraic Expressions
02. Exponents
03. Linear Relations
04. The Pythagorean Theorem
05. Function Basics
06. Functions
07. Quadratic Functions
08. Absolute Function
09. Square Root Function
10. Step Functions
11. Exponentials & Logarithms
12. Factoring
13. Systems of Equations
14. Conics
Stay tuned for our other apps coming to the App Store, such as Geometry and Trigonometry.
This app shows you the most important Algebra Formulas.
- Exponent Laws
- Solution of Quadratic Equation
- Binomial Theorem
- Rules of Zero
Very useful math tool for school and college! If you are a student, it will help you to learn algebra!
The critically acclaimed sequel to the popular Learn Algebra app! Now you can learn the basic concepts of Algebra 2 with the same intuitive interface and effective lesson-problem-quicknotes format of the first app — all for FREE. This application is perfect for those who have:
- Trouble paying attention in class
- Improper instruction
- General disinterest in math
Don’t worry! Come finals day, you’ll be prepared with our comprehensive algebra app. Our app features everything you’ll need to learn algebra, from an all-inclusive algebra textbook to a number of
procedural problem generators to cement your knowledge.
- Abridged algebra textbook, covering many aspects of the subject
- Procedural problem generators, ensuring you will not encounter the same problem twice
- Problem explanations, demonstrating step-by-step how to do each problem
- Quicknotes, reviewing entire chapters in minutes
- Intuitive graphing interface, teaching proper graphing methods
- Statistics tracking, helping you identify your weaknesses
Plus, new chapters are added all the time!
Subjects covered:
Ch. 1: Functions
1.1 Introduction to Functions
1.2 Operations with Functions
1.3 Absolute Value Functions
1.4 Piecewise Functions and Continuity
Ch. 2: Polynomials
2.0 Polynomials Review
2.1 Introduction to Rational Functions
2.2 Multiplying and Dividing Rational Functions
2.3 Adding and Subtracting Rational Functions
2.4 Solving Rational Equations
Ch. 3: Linear Systems
3.0 Linear Systems Review
3.1 Systems of Three Equations
3.2 Determinants and Cramer's Rule
3.3 Systems of Inequalities
Ch. 4: Exponentiation
4.0 Exponential Functions Review
4.1 Introduction to Logarithms
4.2 Solving Exponential and Logarithmic Functions
4.3 Models of Exponential Growth
Keywords: learn algebra 2, algebra 2, math, free, graphing, algebra 2 textbook, teach algebra 2, algebra 2 tutor, algebra 2 practice, algebra 2 problems, review algebra 2, study algebra 2, algebra 2
prep, algebra 2 cheat sheet, algebra 2 formulas, algebra 2 notes, algebra 2 quicknotes, pre-algebra, algebra, common core, high school math, high school algebra 2, ged, sat, gmat
Transfer Student Solver: this app will be deactivated on 30 June 2013.
It has more content, more options ... updates roughly every 3 months.
Algebra Geometry Formulae is an ideal free app for all students above 12th grade, college graduates, engineering graduates and students preparing for various exams. We have compiled all the algebra-, geometry- and statistics-related formulas to cover all the essential math formulas.
The maths topics covered in this free app are:
*Basic Properties and Facts
*Factoring and Solving Formulas
*Factoring and Solving Methods (completing the square, etc.)
*Functions and Graphs
*Common Algebraic Errors
*Points and Lines
*Coordinate Geometry
*Measurement Formulas
*Facts and Properties
*Formulas and Identities
*Unit Circle
*Inverse Trigonometric Functions
*Law of Sines, Cosines and Tangent
Tags: learn algebra, geometry, math, free, graphing, geometry textbook, teach geometry, geometry tutor, geometry practice, geometry problems, review geometry, study geometry, geometry prep, geometry
cheat sheet, geometry formulas, geometry notes, geometry quicknotes, pre-geometry, algebra 2,algebra tutor,SAT,GRE,CAT,CET,ISEET,NEET,Math, Workout, Practice, Addition, Subtraction, Multiplication,
Division, Powers, math games, math game, math workout, maths help, , maths workout, Mathsbrain, maths kids, math tricks, math tutor, math teacher, math test, maths for kids, math drills, math flash
cards, math formulas, math facts, math homework, math magic, math maniac, math reference, math ref, math tricks, math skill, math wizard, brain teaser, math problem solving, math logic,SAT, PSAT,
GRE, GMAT, ACT, MCAT,maths exam.maths games,maths formulaes,trignometry
Algebra Cheat Sheet provides you with the quick reference of formulas in Algebra.
Topics include:
1. Basic Properties and Facts (includes properties of radicals, exponents, logarithms etc.,
2. Factoring and Solving equations
3. Methods of solving linear equations, quadratic equations, equations with square roots, and equations with absolute values
4. Functions and graphs for the parabola, ellipse, hyperbola, circle, etc.
5. Common Algebraic Errors.
You can refer to this cheat sheet as a quick reference of algebraic formulas.
This is for students in high school or college learning algebra. If you are a beginner in algebra you might be thinking X+Y = XY, isn't it? But it's not.
The beauty of algebra is that it deals with variables, expressions & equations. You will come to know various formulas.
For example, if you know (a+b)^3 = a^3 + 3a^2b + 3ab^2 + b^3,
you can calculate any number to the powers 2, 3, 4... in a fraction of a second.
In the above equation a, b are variables. So you can calculate (1.034)^3 using that formula: just feed in a = 1 & b = 0.034.
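Working that example through (a quick check of the claim above, with a = 1 and b = 0.034):
\[ (1.034)^3 = 1^3 + 3(1)^2(0.034) + 3(1)(0.034)^2 + (0.034)^3 = 1 + 0.102 + 0.003468 + 0.000039304 \approx 1.10551. \]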
IMathPractice Algebra's 3-step method of teaching has sections like Tutorial, Practice Skills, Practice Test & Algebra Challenge. Under Tutorial it teaches you:
Types of numbers, like real numbers, integers, negative numbers, complex numbers
Addition, Subtraction, Multiplication & Division of Real Numbers
Addition, Subtraction, Multiplication & Division of Negative Numbers
Addition, Subtraction, Multiplication & Division of Complex Numbers
Properties of Number
Ratio & Proportion
Exponent & Radical
Integer, real and rational exponents
A rational exponent is a radical.
What is a monomial/binomial/polynomial?
Addition, Subtraction, Multiplication, Division of polynomials
Factoring polynomial
What are variables, expressions & equations?
Linear equation, Quadratic equation
Equation with radical, Absolute value of equation
Rational Expressions:
Addition, Subtraction, Multiplication, Division of rational expressions
What's an Inequality?
Linear, Polynomial & Rational Inequalities
Absolute value of Inequality
Under Practice Skills, you will be able to practice all the skills learned above, with help.
There are answers & steps to get the answer for each question.
Under Practice Test, you will be able to practice all the skills in a timed environment. There is a timer; you need to finish within that time.
Under Algebra Challenge you will be prepared to compete with others. It contains 50 questions & the time allotted is 1 hour.
There are around 210 questions to practice in the lite version.
What's new in version 1.3?
The bug reported in the comment section has been removed.
Note: Currently the app is in English only. We are working on translations to other languages. Please don't rate it 1 star only because it's not in your language. You can send your request to the developer.
Test your algebra skills, and train the four basic mathematical operations.
You can also choose whether you want to separately train addition, subtraction, multiplication or division.
This app is useful for both kids learning the basic operations and adults willing to do some mental gymnastics.
There is also a nice feature that will help you learn from your mistakes: if you click the wrong answer, you can be sure that the wrongly answered question will be asked again soon, so you have the chance to answer correctly the next time (and learn!).
More from developer
DragonBox Algebra 5+ - The game that secretly teaches algebra
DragonBox Algebra 5+ is perfect for giving young children a head start in mathematics and algebra. Children as young as five can easily begin to grasp the basic processes involved in solving linear
equations without even realising that they are learning. The game is intuitive, engaging and fun, allowing anyone to learn the basics of algebra at his or her own pace.
DragonBox Algebra 5+ covers the following algebraic concepts:
* Addition
* Subtraction
* Division
* Multiplication
Suitable from age five and up, DragonBox Algebra 5+ gives young learners the opportunity to get familiar with the basics of equation solving.
DragonBox uses a novel pedagogical method based on discovery and experimentation. Players learn how to solve equations in a playful and colorful game environment where they are encouraged to experiment and be creative. By manipulating cards and trying to isolate the DragonBox on one side of the game board, the player gradually learns the operations required to isolate x on one side of an equation — see the sketch below. Little by little, the cards are replaced with numbers and variables, revealing the addition, division and multiplication operators the player has been learning throughout the game.
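The card moves correspond to ordinary equation-solving steps. For example (an illustration of the underlying algebra, not an actual in-game puzzle):
\[ x + 3 = 7 \;\Rightarrow\; x + 3 - 3 = 7 - 3 \;\Rightarrow\; x = 4, \]
with the DragonBox card standing in for x and the "do the same thing to both sides" move applied to both halves of the game board.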
Playing does not require supervision, although parents can assist them in transferring learned skills into pen and paper equation solving. It is a great game for parents to play with their kids and
can even give them an opportunity to freshen up their own math skills.
DragonBox was developed by former math teacher Jean-Baptiste Huynh and has been heralded as a perfect example of game-based learning. As a result, it is currently forming the basis of an extensive
research project by the Center For Game Science at the University of Washington.
* 10 progressive chapters (5 learning, 5 training)
* 200 puzzles
* Learn to solve equations involving addition, subtraction, division and multiplication
* Multiple profiles
* Dedicated graphics and music for each chapter
* supported languages: English, français, norsk, svenska, dansk, español, 한국어, italiano, português, Deutsch, русский, 简体中文, 繁體中文, suomi, Nederlands, Eesti, Euskara, Türkçe, Čeština,
Lietuvių, Magyar, 日本語...
Gold Medal
2012 International Serious Play Awards
Best Educational Game
2012 Fun and Serious Games Festival
Best Serious Mobile Game
2012 Serious Games Showcase & Challenge
App of the Year
GullTasten 2012
Children’s App of the Year
GullTasten 2012
Best Serious Game
9th International Mobile Gaming Awards (2012 IMGA)
2013 ON for Learning Award
Common Sense Media
Best Nordic Innovation Award 2013
2013 Nordic Game Awards
Editors choice award
Children’s Technology Review
DragonBox is making me reconsider all the times I’ve called an educational app "innovative."
GeekDad, Wired
Step aside sudoku, algebra is the primordial puzzle game
Jordan Shapiro, Forbes
Brilliant, kids don't even know that they are doing Math
Jinny Gudmundsen, USA today
These guys are shaping the future of education
Brian Brushwood, TWiT
Awesome integration of algebra and gameplay!
My eight year old son immediately sat down and ran through the first two banks of problems without hesitation. It was amazing.
Christopher Wanko, CoolTools
You will be surprised at how much you can learn in a few hours with this app.
Geeks With Juniors
This work gives a full description of a method for analyzing the admissible complex representations of the general linear group G = Gl(N,F) of a non-Archimedean local field F in terms of the
structure of these representations when they are restricted to certain compact open subgroups of G. The authors define a family of representations of these compact open subgroups, which they call
simple types. The first example of a simple type, the "trivial type," is the trivial character of an Iwahori subgroup of G. The irreducible representations of G containing the trivial simple type are
classified by the simple modules over a classical affine Hecke algebra. Via an isomorphism of Hecke algebras, this classification is transferred to the irreducible representations of G containing a
given simple type. This leads to a complete classification of the irreducible smooth representations of G, including an explicit description of the supercuspidal representations as induced
representations. A special feature of this work is its virtually complete reliance on algebraic methods of a ring-theoretic kind. A full and accessible account of these methods is given here.
separability problem
Hello! At the moment I'm preparing for an exam, and I'm stuck trying to answer the following (any suggestions are welcome):
Let H be a Hilbert space and denote by L(H) the space of all continuous linear maps from H to H (L(H) a Banach space). Suppose that the dimension of our H is infinite. Is L(H) separable? Why?
I'm thinking it's not. However, I've got nothing but intuition to back that up with.
Re: separability problem
Hello! At the moment I'm preparing for an exam, and I'm stuck trying to answer the following (any suggestions are welcome):
Let H be a Hilbert space and denote by L(H) the space of all continuous linear maps from H to H (L(H) a Banach space). Suppose that the dimension of our H is infinite. Is L(H) separable? Why?
I'm thinking it's not. However, I've got nothing but intuition to back that up with.
Think of the elements of L(H) as matrices (with respect to some orthonormal basis). The diagonal matrices correspond to bounded sequences. The Banach space $\ell^\infty$ of bounded sequences is
nonseparable ...
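To flesh that out (a sketch): for each subset $S \subseteq \mathbb{N}$ let $x_S \in \ell^\infty$ be its indicator sequence. If $S \neq T$ then
\[ \|x_S - x_T\|_\infty = 1, \]
so $\{x_S\}$ is an uncountable family of points at mutual distance 1, and no countable set can be dense in $\ell^\infty$. Sending a bounded sequence $(a_n)$ to the diagonal operator $e_n \mapsto a_n e_n$ (for an orthonormal sequence $(e_n)$ in H) embeds $\ell^\infty$ isometrically into L(H); since every subset of a separable metric space is itself separable, L(H) cannot be separable.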
Re: separability problem
Oh! I see. Is that the same as saying that I found something "within" my L(H) that is not separable and L(H) is therefore not separable?