derivatives...5 problems
January 27th 2010, 11:56 PM
derivatives...5 problems
I have a couple functions that I had to take the derivatives of... and I'm not sure if I got the answers right.
Here are the problems:
Find dy/dx.
1. y=(1+cos^2*7x)^3
2. y=(1+cos2x)^2
3. y=x/sqrt(1+2x)
4. y=sin^2(3x-2)
5. y=sqrt(tan5x)
January 27th 2010, 11:59 PM
Dear kdl00,
I think you could check these derivatives easily with webMathematica Explorations: Step-by-Step Derivatives.
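(For what it's worth, a short SymPy script does the same kind of check locally; the sketch below reads problem 1 as cos^2(7x) and problem 4 as sin^2(3x-2), which is one plausible reading of the notation above.)

```python
import sympy as sp

x = sp.symbols('x')

# The five functions from the original post, as interpreted above.
problems = {
    1: (1 + sp.cos(7*x)**2)**3,   # assuming "cos^2*7x" means cos^2(7x)
    2: (1 + sp.cos(2*x))**2,
    3: x / sp.sqrt(1 + 2*x),
    4: sp.sin(3*x - 2)**2,
    5: sp.sqrt(sp.tan(5*x)),
}

# Print dy/dx for each problem so hand-worked answers can be compared.
for n, y in problems.items():
    print(n, sp.simplify(sp.diff(y, x)))
```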
January 28th 2010, 05:09 AM
Hello kdl00
You say that you're not sure whether you got the answers right. Why not show us what you have done, and we'll check them for you?
|
{"url":"http://mathhelpforum.com/calculus/125889-derivatives-5-problems-print.html","timestamp":"2014-04-20T04:37:26Z","content_type":null,"content_length":"4978","record_id":"<urn:uuid:91527bfe-7949-4e3b-a268-7e973f17575b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Set Theory of Thought
In this introduction so far you've already seen at least glimpses of many of the basic punch lines of this book, but they're probably a bit amorphous yet (at least I hope that they are or you won't
keep reading). Either way, we covered a lot of ground so let's summarize what we learned before moving on.
At this point we should be able to see that set theory is all really lovely and seems somehow to be more fundamental than the rules of logic and mathematics or the Laws of Thought. We've also seen
how it appears possible to make a naive, existential set theory that eliminates the possibility of paradox while embodying the Laws of Thought in a way that at least seems less ambiguous than they
did in English.
In the process we deduced some important truths about the necessity for matching the domains of proposed set relationships intended to pick out particular sets from the set of all sets within our
existential set Universe (which are all there whether or not we pick them out). Since we can easily come up with silly relationships, or broken relationships, or paradoxical relationships, or
self-referential set relationships that do not describe a set in the set Universe (including even the empty set), we invented a ``set'' that is not a set, the NaS set. A metaphor for this set (which is only a
metaphor since it isn't a set and doesn't exist in the closed set Universe where set operations are defined) is that it can be thought of as the non-invertible complement of the Universal set we wish
to reason in.
At this point, formal logic is one possible thing that can be built on top of or in parallel with existential set theory - we just add a very few axioms and definitions and stir gently, since the
Laws of Thought (viewed as axioms or not) are built right into its basic operational structure. We do need to discuss and define the notions of ``true'' and ``false'' and how they differ from
``exist'' and ``don't exist'' (are null) in the set theory, discuss the notion of ``provability'' as a possible proxy for ``is true'', and so on (and will do so in the next chapter), as those things
appear to be algebraic constructs that gain existential validity only to the extent that they permit us to make well-formed propositions concerning their associated Universal set. In general we'll
find that they are really useful only in artificial set Universes, not existential ones, and only useful to existential ones to the extent that we construct axiomatically defined mappings between the
two. The real Universe isn't ``true'' - it just is. Abstract propositions in logic about the real Universe can never be proven to be true using logic without this presumed (unprovable) mapping.
Formal mathematics is what we call the human activity of creating the artificial set Universes we wish to either use as a proxy for the real existential Universe or just for the sheer fun of doing
so. It is usually developed from axiomatic set theory from the beginning because we can see almost immediately upon attempting to develop set theoretic mathematics that any such development cannot be
unique or complete. Axioms are needed almost immediately for non-existential mathematical sets to deal with notions of conditionally undefined set operations, paradoxes, domain restrictions and
infinity, and oddnesses that result from viewing any given Universal set - say, the integers - as being embedded in a larger Universal set - say, a quaternionic field expressed as a function of
curvilinear space-time coordinates with specific conditions on smoothness and a metric. Cantor's paradox (really, Cantor's theorem) suggests that pretty much any set Universe can not only be embedded
in a larger set Universe, it can be embedded in a much much larger set Universe, recursively, so we can never talk about a truly Universal Set that contains all possible Sets any more than we can
talk about a largest real number, only various ways that real number sequences can scale to infinity.
Computational mathematics is a particularly lovely blend of logic and arithmetical mathematics on a highly constrained, discretized domain. The ``set Universe'' of objects acted on by the operations
of a computer is finite and discrete, intended to approximate real number arithmetic via integer arithmetic and symbolic mappings on a finite mesh. As a result it is nearly ideal for our purpose of understanding how operations can actually produce a truly undefined result within the limited set of logical transformations available to a computer.
Physics and natural science in general are still another and are truly based on an existential set Universe, even though without axioms even there we find ourselves unable to reason about the set
Universe, only to experience a single instantaneous realization drawn from it. We imagine that there is something we have named ``set theory'' (and all related imagined results such as ``logic'' and
``mathematics'') but to be able to use this to reason about the actual existential set requires axioms galore.
Just as is the case with computing, where NaN results have to be encoded ``by hand'' based on higher order definitions and axioms ``inherited'' from a presumed mapping to a particular space expressed in terms of discrete binary transformations, so we waste one or more of the sets in the set Universe by assigning them the meta-meaning that results mapped there are not ``results'' and do not belong in the set Universe in question. The canonical example is division by zero, which can only be assigned a value through some imagined limiting process; in ordinary arithmetic the result of x/y is well-defined for all nonzero y^3.42.
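As a small concrete illustration of the ``by hand'' encoding described above (an added sketch of the inherited IEEE-754 convention, not drawn from the text itself), a few lines of Python show how an undefined result is shunted into the reserved not-a-number encoding and then propagates:

```python
import math

# IEEE-754 floats reserve a special encoding, NaN, for "results that are not results".
nan = float("nan")
inf = float("inf")

print(inf - inf)              # nan: the operation has no defined value
print(nan == nan)             # False: NaN is unequal even to itself
print(math.isnan(nan * 0.0))  # True: NaN propagates through further arithmetic
```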
In all these cases, in order to reason about the probably infinite number of subsets of the presumed Universal or non-Universal set, the very general framework of an abstract set theory is clothed in
imaginary clothes - unprovable assumptions called (in various contexts) definitions, axioms, rules of inference, laws of nature, microcode of the computer, a language (with a dictionary of symbols, a
semantic mapping of those symbols, a syntax one can use to assemble valid ``statements'' with the language, and more). These are all rules that exist in our minds as we reason (or that appear to
exist by inference in nature as it operates or that are engineered into the computer under the same assumed inference rules as those of nature or mathematics but applied). These abstract, arbitrary
rules both select particular subsets of the existential Universal set via metaphorical relations established between that set and particular abstract mathematical sets and identify them (encode
them within a language) via information compression, with inherited axiomatic functional relations between them. This is how we reason.
Since assumptions - specifically the axioms of a theory - seem to play a pivotal part in where we go from here (and ``Axioms'' is, after all, the title of this book) we'll next explore axioms in the context of
logic and try to better understand just what they are and how they occur throughout every realm of human endeavor as a prior step to being able to do anything ``rational'' within those realms at all.
In the process of looking at axioms and their critical role in formulating any sort of system of reason, we will inevitably be drawn to look at what can only be called a kind of breakdown in the
scope of reason itself - Gödel's Theorem.
This is basically an extension of work done by Russell himself, in particular, the Russell Paradox. Although its importance can be overemphasized - it certainly doesn't mean that reasoning itself
``no longer works'', it doesn't mean that mathematics and physics and all that are in any fundamental sense unsound - its importance should not be minimized, either. It definitely warrants its own
dedicated discussion, as it both maps certain paradoxical logical constructs out of set theory and into the null set and separates the notion of ``truth'' from that of ``provability''.
This is quite disturbing. It turns out that we cannot even completely analyze mathematical or logical propositions for truth within any sufficiently complex axiomatic system. As noted in the somewhat
irreverent beginning of this part of the book, we want answers to the Big Questions - the questions on Life, the Universe, and Everything. We've assigned the task to Philosophers and want them to be
able to explain their answers to us ``beyond any doubt'' using this reason thing we've been paying them to work out for so long. And it isn't just everyday people that expect pure reason to pay its
own way and explain it all - even relatively contemporary and quite competent philosophers-mathematicians like Bertrand Russell would like ``startling'' conclusions to come out of the simple rules of
logic applied to the world^3.43.
Alas, as we will see in great detail, we will be forced to conclude that Russell's fond hope is doomed from the beginning. We cannot prove one single damn thing about the Universe using pure reason.
To use reason at all we must begin with unprovable axioms (unprovable premises leading to unprovable conclusions according to logic itself) and when we do, we will almost certainly still be left with
some things that may be true or false or both or neither but unprovable in any event.
Now let's get started. It is pretty clear that our examination of ``reason'' must continue with a look at the details of how logical systems work. We've laid a good foundation with our discussion of
set theory above, and I've laid out some of the things we hope to learn already so they won't come as a complete surprise or shock, but the details matter. Part of the ``stomping the corpse'' thing.
|
{"url":"http://phy.duke.edu/~rgb/Philosophy/axioms/axioms/Summary.html","timestamp":"2014-04-18T08:32:34Z","content_type":null,"content_length":"15943","record_id":"<urn:uuid:d8c24d36-6519-4a56-97b5-027379e5ee7f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-User] Edge Detection
Dan Yamins dyamins@gmail....
Thu Nov 12 12:23:33 CST 2009
On Thu, Nov 12, 2009 at 10:23 AM, Chris Colbert <sccolbert@gmail.com> wrote:
> All of the OpenCV edge detection routines are also available in
> scikits.image if you have opencv (>= 2.0) installed.
> On Tue, Nov 10, 2009 at 5:48 PM, Zachary Pincus <zachary.pincus@yale.edu> wrote:
> >
> > Code: Look at what's available in scipy.ndimage. There are functions for getting gradient magnitudes, as well as standard filters like Sobel etc. (which you'll learn about from the above), plus morphological operators for modifying binarized image regions (e.g. like erosion etc.; useful for getting rid of stray noise-induced edges), plus some basic functions for image smoothing like median filters, etc.
> >
> > For exploratory analysis, you might want some ability to interactively visualize images; you could use matplotlib or the imaging scikit, which is still pre-release but making fast progress:
> > http://github.com/stefanv/scikits.image
> >
> > I've attached basic code for Canny edge detection, which should demonstrate a bit about how ndimage works, plus it's useful in its own right. There is also some code floating around for anisotropic diffusion and bilateral filtering, which are two noise-reduction methods that can be better than simple median filtering.
> >
Hi Chris and Zachary, thanks very much for your help. I really appreciate it.
My goal was to recognize linear (and circular) strokes in images of text.
After I wrote my question and did some further research, I realized that I
was so ignorant that I didn't know enough to properly ask for what I
wanted. Finding strokes in letters is actually more like "line detection"
(as in "detecting lines as geometric features") than it is like edge
detection (e.g. something that the sobel operator does well). I needed to
localize the lines and describe them in some geometric way, not so much
determine where their boundaries were.
What I ended up doing is using the Radon transform (scipy.misc.radon),
together with the hcluster package. The basic idea is that applying Radon
transform to the image of a letter transforms the strokes into confined
blobs whose position and extent in the resulting point/angle space describes
the location, width, and angle of the original stroke. Then, I make a
binary version of the transformed image by applying a threshold on intensity -- e.g. a 1 at all points in the transformed image whose intensity is above the threshold, and 0 elsewhere. Then, I cluster this
binary image, which ends up identifying clusters whose centroid and diameter
correspond to features of idealized strokes. This algorithm seems to
work pretty well.
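(A rough sketch of that pipeline with currently maintained packages might look like the following; it substitutes skimage.transform.radon and scipy.cluster.hierarchy for the older scipy.misc.radon and hcluster mentioned above, and the threshold and clustering distance are placeholder values.)

```python
import numpy as np
from skimage.transform import radon
from scipy.cluster.hierarchy import fcluster, linkage

def stroke_blobs(letter_image, threshold=0.5, max_dist=5.0):
    """Locate stroke-like blobs of a letter image in Radon (offset, angle) space."""
    sinogram = radon(letter_image, theta=np.arange(180))  # rows: offsets, cols: angles
    # Binarize: 1 where the transform is bright (long straight strokes), 0 elsewhere.
    mask = sinogram > threshold * sinogram.max()
    points = np.column_stack(np.nonzero(mask))             # (offset, angle) coordinates
    # Single-linkage clustering groups each blob; a cluster's centroid and spread
    # approximate the position, angle, and width of the underlying stroke.
    labels = fcluster(linkage(points, method="single"), t=max_dist, criterion="distance")
    return [points[labels == k].mean(axis=0) for k in np.unique(labels)]
```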
Thanks a lot again for your help, the scipy.ndimage package really seems
great. I read somewhere that the edge-detection routines will actually
become part of the next version of the package. Is that still true?
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-November/023266.html","timestamp":"2014-04-19T19:55:39Z","content_type":null,"content_length":"6172","record_id":"<urn:uuid:93c07b2b-d634-43b2-bdf3-3e2114e45df2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Glendale, AZ Math Tutor
Find a Glendale, AZ Math Tutor
...A little about myself: I am attending Grand Canyon University and studying to become a Neuropsychologist. As well as taking many honors and AP classes I am ahead of my peers in the amount of
credits I need to take. I have been tutoring as a job for five years now and I have taught the ages of 4 to 23.
30 Subjects: including differential equations, ASVAB, grammar, prealgebra
...Learning is more than the memorization of facts, it is the training of the brain to think, evaluate and solve. I received a B.S. in Mechanical Engineering from the Milwaukee School of Engineering with a 3.90 GPA. Throughout university I tutored students in math, science, and mechanical engineering.
21 Subjects: including algebra 1, algebra 2, calculus, ACT Math
...My Bachelor's degree is in both biology and geology, from the University of Western Ontario. In addition to my experience teaching both lectures and labs at the community college level, I have
tutoring experience in science, math, test preparation, and English. My educational background has given me excellent language and grammar skills, as well as a strong foundation in math and
28 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2
...The student is urged to ask questions in discussing those problems, and, in turn, I ask peripheral questions to ensure good basic comprehension. I use a modified Socratic method of teaching,
making the student familiar with basic concepts, and learning to solve specific problems the student has ...
30 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...I have extensive experience in soccer, both as an athlete and a coach. I played for more than twenty years, both in Europe and in the States. I coached for St.
30 Subjects: including algebra 1, linear algebra, statistics, reading
Related Glendale, AZ Tutors
Glendale, AZ Accounting Tutors
Glendale, AZ ACT Tutors
Glendale, AZ Algebra Tutors
Glendale, AZ Algebra 2 Tutors
Glendale, AZ Calculus Tutors
Glendale, AZ Geometry Tutors
Glendale, AZ Math Tutors
Glendale, AZ Prealgebra Tutors
Glendale, AZ Precalculus Tutors
Glendale, AZ SAT Tutors
Glendale, AZ SAT Math Tutors
Glendale, AZ Science Tutors
Glendale, AZ Statistics Tutors
Glendale, AZ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/glendale_az_math_tutors.php","timestamp":"2014-04-17T01:29:59Z","content_type":null,"content_length":"23801","record_id":"<urn:uuid:8724bf38-8790-41a4-8076-99a5fdf8205c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistics::MVA::BayesianDiscrimination - Two-Sample Linear Discrimination Analysis with Posterior Probability Calculation.
This document describes Statistics::MVA::BayesianDiscrimination version 0.0.2
Discriminant analysis is a procedure for classifying a set of observations, each with k variables, into predefined classes, so as to allow the determination of the class of new observations based upon the values of the k variables for those new observations. Group membership is predicted from linear combinations of the variables. From the set of observations where group membership is known, the procedure constructs a set of linear functions, termed discriminant functions, such that:
L = B[0] + B[1]*x_1 + B[2]*x_2 + ... + B[n]*x_n
Where B[0] is a constant, the B[i] are discriminant coefficients and the x_i are the input variables. There is one such discriminant function for each group; consequently, as this module only analyses data for two groups at the moment, it generates two discriminant functions.
Before proceeding with the analysis you should: (1) Perform Bartlett's test to see if the covariance matrices of the data are homogeneous for the populations used (see Statistics::MVA::Bartlett). If they are not homogeneous you should use quadratic discriminant analysis instead. (2) Test for equality of the group means using Hotelling's T^2 (see Statistics::MVA::HotellingTwoSample) or MANOVA. If the groups do not differ significantly it is extremely unlikely that discriminant analysis will generate any useful discrimination rules. (3) Specify the prior probabilities. This module allows you to do this in several ways - see "priors".
This class automatically generates the discrimination coefficients as part of object construction. You can then either use the output method to access these values or use the discriminate method to
apply the equations to a new observation. Both of these methods are context dependent - see "METHODS". See http://en.wikipedia.org/wiki/Linear_discriminant_analysis for further details.
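(For orientation only, the same two-group computation can be sketched in a few lines of NumPy. This is an independent illustration of the linear score functions described above, not a translation of the module's internals; it assumes a pooled covariance matrix and defaults to equal priors.)

```python
import numpy as np

def linear_scores(X, Y, x_new, prior_x=0.5, prior_y=0.5):
    """Two-group linear discriminant scores and posterior probabilities."""
    X, Y, x_new = (np.asarray(a, dtype=float) for a in (X, Y, x_new))
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    # Pooled within-group covariance (homogeneity is what Bartlett's test checks).
    Sp = ((len(X) - 1) * np.cov(X, rowvar=False) +
          (len(Y) - 1) * np.cov(Y, rowvar=False)) / (len(X) + len(Y) - 2)
    Sp_inv = np.linalg.inv(Sp)

    def score(mean, prior):
        b = Sp_inv @ mean                                  # coefficients B[1..n]
        b0 = -0.5 * mean @ Sp_inv @ mean + np.log(prior)   # constant B[0]
        return b0 + b @ x_new

    lx, ly = score(mx, prior_x), score(my, prior_y)
    # Posterior probabilities follow from Bayes' rule on the exponentiated scores.
    m = max(lx, ly)
    ex, ey = np.exp(lx - m), np.exp(ly - m)
    return lx, ly, ex / (ex + ey), ey / (ex + ey)
```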
# we have two groups of data each with 3 variables and 10 observations - example data from http://www.stat.psu.edu/online/courses/stat505/data/insect.txt
my $data_X = [
[qw/ 191 131 53/],
[qw/ 185 134 50/],
[qw/ 200 137 52/],
[qw/ 173 127 50/],
[qw/ 171 128 49/],
[qw/ 160 118 47/],
[qw/ 188 134 54/],
[qw/ 186 129 51/],
[qw/ 174 131 52/],
[qw/ 163 115 47/],
];
my $data_Y = [
[qw/ 186 107 49/],
[qw/ 211 122 49/],
[qw/ 201 144 47/],
[qw/ 242 131 54/],
[qw/ 184 108 43/],
[qw/ 211 118 51/],
[qw/ 217 122 49/],
[qw/ 223 127 51/],
[qw/ 208 125 50/],
[qw/ 199 124 46/],
];
use Statistics::MVA::BayesianDiscrimination;
# Pass the data as a list of the two LISTS-of-LISTS above (termed X and Y). The module by default assumes equal prior probabilities.
#my $bld = Statistics::MVA::BayesianDiscrimination->new($data_X,$data_Y);
# Pass the data but telling the module to calculate the prior probabilities as the ratio of observations for the two groups (e.g. P(X) = X_obs_num / Total_obs).
#my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => 1 },$data_X,$data_Y);
# Pass the data but directly specifying the values of prior probability for X and Y to use as an anonymous array.
#my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => [ 0.25, 0.75 ] },$ins_a,$ins_b);
# Print values for coefficients to STDOUT.
$bld->output;
# Pass the values as an ARRAY reference by calling in LIST context - see L</output>.
my ($prior_x, $constant_x, $matrix_x, $prior_y, $constant_y, $matrix_y) = $bld->output;
# Perform discriminantion analyis for a specific observation and print result to STDOUT.
$bld->discriminate([qw/184 114 59/]);
# Call in LIST context to obtain results directly - see L</discriminate>.
my ($val_x, $p_x, $post_p_x, $val_y, $p_y, $post_p_y, $type) = $bld->discriminate([qw/194 124 49/]);
Creates a new Statistics::MVA::BayesianDiscrimination. This accepts two references for List-of-Lists of values corresponding to the two groups of data - termed X and Y. Within each List-of-Lists each
nested array corresponds to a single set of observations. It also accepts an optional HASH reference of options preceding these values. The constructor automatically generates the discrimination
coefficients that are accessed using the output method.
# Pass data as ARRAY references.
my $bld = Statistics::MVA::BayesianDiscrimination->new($data_X,$data_Y);
# Passing optional HASH reference of options.
my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => 1 },$data_X,$data_Y);
Context-dependent method for accessing results of discrimination analysis. In void context it prints the coefficients to STDOUT.
In LIST-context it returns a list of the relevant data accessed as follows:
my ($prior_x, $constant_x, $matrix_x, $prior_y, $constant_y, $matrix_y) = $bld->output;
print qq{\nPrior probability of X = $prior_x and Y = $prior_y.};
print qq{\nConstants for discrimination function for X = $constant_x and Y = $constant_y.};
print qq{\nCoefficients for discrimination function X = @{$matrix_x}.};
print qq{\nCoefficients for discrimination function Y = @{$matrix_y}.};
Method for classification of a new observation. Pass it an ARRAY reference of SCALAR values appropriate for the original data-sets passed to the constructor. In void context it prints a report to STDOUT.
$bld->discriminate([qw/123 34 325/]);
In LIST-context it returns a list of the relevant data as follows:
my ($val_x, $p_x, $post_p_x, $val_y, $p_y, $post_p_y, $type) = $bld->discriminate([qw/123 34 325/]);
print qq{\nLinear score function for X = $val_x and Y = $val_y - the new observation is of type \x27$type\x27.};
print qq{\nThe prior probability that the new observation is of type X = $p_x and the posterior probability = $post_p_x};
print qq{\nThe prior probability that the new observation is of type Y = $p_y and the posterior probability = $post_p_y};
Pass within an anonymous HASH preceding the two data references during object construction:
my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => option_value },$data_X,$data_Y);
Passing '0' causes the module to assume equal prior probabilities for the two groups (prior_x = prior_y = 0.5). Passing '1' causes the module to generate priors depending on the ratios of the two
data-sets e.g. X has 15 observations and Y has 27 observations gives prior_x = 15 / (15 + 27). Alternatively you may specify the values to use by passing an anonymous ARRAY reference of length 2
where the first value is prior_x and the second is prior_y. There are currently no checks on priors directly passed so ensure that prior_x + prior_y = 1 if you supply your own.
# Use prior_x = prior_y = 0.5.
my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => 0 },$data_X,$data_Y);
# Generate priors depending on ratios of observation numbers.
my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => 1 },$data_X,$data_Y);
# Specify your own priors.
my $bld = Statistics::MVA::BayesianDiscrimination->new({priors => [$prior_x, $prior_y] },$data_X,$data_Y);
'Statistics::MVA' => '0.0.1', 'Carp' => '1.08', 'Math::Cephes' => '0.47', 'List::Util' => '1.19', 'Text::SimpleTable' => '2.0',
Let me know.
Daniel S. T. Hughes <dsth@cantab.net>
Copyright (c) 2010, Daniel S. T. Hughes <dsth@cantab.net>. All rights reserved.
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic.
Because this software is licensed free of charge, there is no warranty for the software, to the extent permitted by applicable law. Except when otherwise stated in writing the copyright holders and/
or other parties provide the software "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a
particular purpose. The entire risk as to the quality and performance of the software is with you. Should the software prove defective, you assume the cost of all necessary servicing, repair, or correction.
In no event unless required by applicable law or agreed to in writing will any copyright holder, or any other party who may modify and/or redistribute the software as permitted by the above licence,
be liable to you for damages, including any general, special, incidental, or consequential damages arising out of the use or inability to use the software (including but not limited to loss of data
or data being rendered inaccurate or losses sustained by you or third parties or a failure of the software to operate with any other software), even if such holder or other party has been advised of
the possibility of such damages.
|
{"url":"http://search.cpan.org/~dsth/Statistics-MVA-BayesianDiscrimination-0.0.2/lib/Statistics/MVA/BayesianDiscrimination.pm","timestamp":"2014-04-19T13:30:55Z","content_type":null,"content_length":"24156","record_id":"<urn:uuid:be6bb8e6-1569-473b-9f5b-ceb6304243f0>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lin. Alg.: Prove char. poly. of n-by-n matrix always has.... [Archive] - Free Math Help Forum
04-03-2007, 07:05 PM
I was wondering how one would do this. Any explanations would be GREATLY appreciated:
The characteristic polynomial of a matrix A is the polynomial p_A(x) = |xI_n - A|.
Prove that the characteristic polynomial of an n by n matrix always has degree n, with the coefficient of x^n equal to 1. (Hint: use induction on n)
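One way to organize the inductive step (a sketch only, using the standard cofactor expansion): expand $\det(xI_n - A)$ along the first row,

$$\det(xI_n - A) = (x - a_{11})\det(xI_{n-1} - A_{11}) + \sum_{j=2}^{n} (-1)^{1+j}(-a_{1j})\,M_{1j},$$

where $A_{11}$ is $A$ with its first row and column deleted and $M_{1j}$ is the minor obtained by deleting row 1 and column $j$. For $j \ge 2$ the minor $M_{1j}$ contains only $n-2$ entries involving $x$, so it has degree at most $n-2$; by the induction hypothesis $\det(xI_{n-1} - A_{11})$ is monic of degree $n-1$, so the first term is monic of degree $n$ and no other term can affect the leading coefficient.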
|
{"url":"http://www.freemathhelp.com/forum/archive/index.php/t-50632.html","timestamp":"2014-04-18T03:09:54Z","content_type":null,"content_length":"3103","record_id":"<urn:uuid:f704b9c0-51e2-4af5-8625-dce3b6773630>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Waverley Geometry Tutor
Find a Waverley Geometry Tutor
...I tutor in elementary mathematics and science to several students in high school currently. These subjects have included physics, mathematics and chemistry. My tutoring experience has taught me
what questions are asked on the Regents and what difficulties students have.
47 Subjects: including geometry, chemistry, reading, calculus
...I will review your proficiency, needs, objectives, study habits, test preparation, homework and test-taking skills, put together a plan and work with you to achieve your specific goals. I have
tutored students at all levels for over a decade from Lexington, Newton, Lincoln, Weston, Belmont, Bedford, Wellesley and many other communities. I tutor both during the summer and the school year.
34 Subjects: including geometry, reading, English, writing
...I enjoy helping my students to understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide
details about their own experiences. I have a master's degree in computer engineering and run my own data analysis company.
11 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I enjoy helping others to master vocabulary, whether through games or discussing the different roots, suffixes, and prefixes in a word. I enjoy helping others understand the logic and rules
that govern our writing, interpretation, and speech. I have almost six months' experience tutoring in English half-time, including grammar.
29 Subjects: including geometry, reading, English, literature
...I love math and love helping others to "get" math. I have experience teaching algebra topics in both a classroom setting and tutoring one-on-one. I have had success with helping students with
learning disabilities to understand algebra. Topics may include, but are not limited to: solving equations, proportional reasoning, rules with exponents, factoring equations and quadratic
8 Subjects: including geometry, reading, algebra 1, GED
|
{"url":"http://www.purplemath.com/Waverley_Geometry_tutors.php","timestamp":"2014-04-20T13:58:37Z","content_type":null,"content_length":"24199","record_id":"<urn:uuid:06ec68a0-2cef-4f67-9156-90863de4b65e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cryptology ePrint Archive: Report 2009/285
Efficient Public Key Encryption Based on Ideal Lattices
Damien Stehlé, Ron Steinfeld, Keisuke Tanaka, Keita Xagawa
Abstract: The potential high efficiency of public-key encryption based on structured
lattices was first indicated by the NTRU cryptosystem, which was proposed about 10 years ago. Unfortunately, the security of NTRU is only heuristic. Thus, it remained an important research challenge
to construct an efficient encryption scheme based on structured lattices which admits a proof of security relative to a well established cryptographic assumption. We make progress in addressing the
above challenge. We show how to construct a CPA-secure public-key encryption scheme with security provably based on the worst case hardness of the approximate Shortest Vector Problem in structured
ideal lattices. Under the assumption that the latter is exponentially hard to solve even with a quantum computer, our scheme resists any subexponential attack and offers (quasi-)optimal asymptotic
performance: if $n$ is the security parameter, both keys are of bit-length $\softO(n)$ and the amortized costs of both encryption and decryption are $\softO(1)$ per message bit. Our construction
adapts the trapdoor one-way function of Gentry, Peikert and Vaikuntanathan (STOC 2008), based on the Learning With Errors problem, to structured lattices. Our main technical tools are an adaptation
of Ajtai's trapdoor key generation algorithm (ICALP 1999) to structured ideal lattices, and a re-interpretation of Regev's quantum reduction between the Closest Vector Problem and sampling short
lattice vectors. We think these techniques are very likely to find further applications in the future.
Category / Keywords: public-key cryptography / Public-Key Encryption, Lattices, Provable Security, Post-Quantum Cryptography
Date: received 16 Jun 2009
Contact author: damien stehle at gmail com
Available format(s): PDF | BibTeX Citation
Version: 20090616:201707 (All versions of this report)
|
{"url":"http://eprint.iacr.org/2009/285/20090616:201707","timestamp":"2014-04-18T00:16:38Z","content_type":null,"content_length":"3384","record_id":"<urn:uuid:5d66efc5-6909-4b49-af86-7c7104919118>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EQUATORIAL, Universal, or Portable OBSERVATORY, is an instrument intended to answer a number of useful purposes in practical astronomy, independent of any particular observatory. It may be employed in any
steady room or place, and it performs most of the useful problems in the science of astronomy. The following is the description of one lately invented, and named the Universal Equatorial.
The principal parts of this instrument (sig. 2, plate viii.) are, 1st, The azimuth or horizontal circle A, which represents the horizon of the place, and moves on a long axis B, called the vertical
axis. 2d, The Equatorial or hour-circle C, representing the Equator, placed at right angles to the polar axis D, or the axis of the earth, upon which it moves. 3d, The semicircle of declination E, on
which the telescope is placed, and moving on the axis of declination, or the axis of motion of the line of collimation F. Which circles are measured and divided as in the following Table:
Measures of the several circles and divisions on them.
| Circle | Radius (inches) | Limb divided to | Nonius of 30 gives | Divided on limb into parts of inch | Divided by nonius into parts of inch |
|---|---|---|---|---|---|
| Azimuth or horizontal circle | 5.1 | 15′ | 30″ | 45th | 1350th |
| Equatorial or hour circle | 5.1 | 15′, or 1 m. in time | 30″, or 2″ in time | 45th | 1350th |
| Vertical semicircle for declination or latitude | 5.5 | 15′ | 30″ | 42d | 1260th |
4th, The telescope, which is an achromatic refractor with a triple object-glass, whose focal distance is 17 inches, and its aperture 2.45 inc., and it is furnished with 6 different eye-tubes; so that
its magnifying powers extend from 44 to 168. The telescope in this Equatorial may be brought parallel to the polar axis, as in the figure, so as to point to the pole-star in any part of its diurnal
revolution; and thus it has been observed near noon, when the sun has shone very bright. 5th, The apparatus for correcting the error in altitude occasioned by refraction, which is applied to the
eye-end of the telescope, and consists of a slide G moving in a groove or dove-tail, and carrying the several eye-tubes of the telescope, on which slide there is an index corresponding to five small
divisions engraved on the dove-tail; a very small circle, called the refraction circle H, moveable by a finger-screw at the extremity of the eye-end of the telescope; which circle is divided into
half minutes, one whole revolution of it being equal to 3′ 18″, and by its motion it raises the centre of the cross-hairs on a circle of altitude; and also a quadrant I of 1 1/2 inc. radius, with
divisions on each side, one expressing the degree of altitude of the object viewed, and the other expressing the minutes and seconds of error occasioned by refraction, corresponding to that degree of
altitude. To this quadrant is joined a small round level K, which is adjusted partly by the pinion that turns the whole of this apparatus, and partly by the index of the quadrant; for which purpose
the refraction circle is set to the same minute &c, which the index points to on the limb of the quadrant; and if the minute &c, given by the quadrant, exceed the 3′ 18″ contained in one entire
revolution of the refraction circle, this must be set to the excess above one or more of its entire revolutions; then the centre of the cross-hairs will appear to be raised on a circle of altitude to
the additional height which the error of refraction will occasion at that altitude.
The principal adjustment in this instrument, is that of making the line of collimation to describe a portion of an hour circle in the heavens: in order to which, the azimuth circle must be truly
level; the line of collimation, or some corresponding line represented by the small brass rod M parallel to it, must be perpendicular to the axis of its own proper motion; and this last axis must be
perpendicular to the polar axis. On the brass rod M there is occasionally placed a hanging level N, the use of which will appear in the following adjustments:
The azimuth circle may be made level by turning the instrument till one of the levels be parallel to an imaginary line joining two of the feet screws; then adjust that level with these two feet
screws; turn the circle 180°, or half round; and if the bubble be not then right, correct half the error by the screw belonging to the level, and the other half error by the two foot screws,
repeating this operation till the bubble come right; then turn the circle 90° from the two former positions, and set the bubble right, if it be wrong, by the foot screw at the end of the level; when
this is done, adjust the other level by its own screw, and the azimuth circle will be truly level. The hanging level must then be fixed to the brass rod by two hooks of equal length, and made truly
parallel to it: for this purpose, make the polar axis perpendicular or nearly perpendicular to the horizon; then adjust the level by the pinion of the declination semicircle: reverse the level, and
if it be wrong, correct half the error by a small steel screw that lies under one end of the level, and the other half error by the pinion of the declination-semicircle, repeating the operation till
the bubble be right in both positions. To make the brass rod, on which the level is suspended, at right angles to the axis of motion of the telescope, or line of collimation, make the polar axis
horizontal, or nearly so; set the declination semicircle to 0°, and turn the hour-circle till the bubble come right; then turn the declination-circle to 90°; adjust the bubble by raising or
depressing the polar axis (first by hand till it be nearly right, afterwards tighten with an ivory key the socket which runs on the arch with the polar axis, and then apply the same ivory key to the
adjusting screw at the end of the said arch till the bubble come quite right); then turn the declination-circle to the opposite 90°; if the level be not then right, correct half the error by the
aforesaid adjusting screw at the end of the arch, and the other half error by the two screws that raise or depress the end of the brass rod. The polar axis remaining nearly horizontal as before, and
the declination-semicircle at 0°, adjust the bubble by the hour-circle; then turn the declination-semicircle to 90°, and adjust the bubble by raising or depressing the polar axis; then turn the
hour-circle 12 hours; and if the bubble be wrong, correct half the error by the polar axis, and the other half error by the two pair of capstan screws at the feet of the two supports on one side of
the axis of motion of the telescope; and thus this axis will be at right angles to the polar axis. The next adjustment, is to make the centre of the cross hairs remain on the same object, while the
eye-tube is turned quite round by the pinion of the refraction apparatus: for this adjustment, set the index on the slide to the first division on the dove-tail; and set the division marked 18″ on
the refraction-circle to its index; then look through the telescope, and with the pinion turn the eye-tube quite round; then if the centre of the hairs does not remain on the same spot during that
revolution, it must be corrected by the four small screws, 2 and 2 at a time, which will be found upon unscrewing the nearest end of the eye-tube that contains the first eye-glass; repeating this
correction till the centre of the hairs remain on the spot looked at during a whole revolution. To make the line of collimation parallel to the brass rod on which the level hangs, set the polar axis
horizontal, and the declination-circle to 90°, adjust the level by the polar axis; look through the telescope on some distant horizontal object, covered by the centre of the cross hairs: then invert
the telescope, which is done by turning the hour-circle half round; and if the centre of the cross hairs does not cover the same object as before, correct half the error by the uppermost and
lowermost of the 4 small screws at the eye-end of the large tube of the telescope; this correction will give a second object now covered by the centre of the hairs, which must be adopted instead of
the first object; then invert the telescope as before; and if the second object be not covered by the centre of the hairs, correct half the error by the same two screws as were used before: this
correction will give a third object, now covered by the centre of the hairs, which must be adopted instead of the second object; repeat this operation till no error remain; then set the hour-circle
exactly to 12 hours, the declination-circle remaining a 90° as before; and if the centre of the cross hairs do not cover the last object fixed on, set it to that object by the two remaining small
screws at the eye-end of the large tube, and then the line of collimation will be parallel to the brass rod. For rectifying the nonius of the declination and Equatorial circles, lower the telescope
as many degrees &c below 0° or Æ on the declination-semicircle as are equal to the complement of the latitude; then elevate the polar axis till the bubble be horizontal; and thus the Equatorial
circle will be elevated to the co-latitude of the place: set this circle to 6 hours; adjust the level by the pinion of the declination-circle; then turn the Equatorial circle exactly 12 hours from
the last position; and if the level be not right, correct one half of the error by the Equatorial circle, and the other half by the declination-circle: then turn the Equatorial circle back again
exactly 12 hours from the last position; and if the level be still wrong, repeat the correction as before, till it be right, when turned to either position: that being done, set the nonius of the
Equatorial circle exactly to 6 hours, and the nonius of the declination-circle exactly to 0°.
The chief uses of this Equatorial are,
1st, To find the meridian by one observation only: for this purpose, elevate the Equatorial circle to the co-latitude of the place, and set the declination-semicircle to the sun's declination for the
day and hour of the day required; then move the azimuth and hourcircles both at the same time, either in the same or contrary direction, till you bring the centre of the cross hairs in the telescope
exactly to cover the centre of the sun; when that is done, the index of the hour-circle will give the apparent or solar time at the instant of observation; and thus the time is gained, though the sun
be at a distance from the meridian; then turn the hour-circle till the index points precisely at 12 o'clock, and lower the telescope to the horizon, in order to observe some point there in the centre
of the glass; and that point is the meridian mark, found by one observation only. The best time for this operation is 3 hours before, or 3 hours after 12 at noon.
2d, To point the telescope on a star, though not on the meridian, in full day-light. Having elevated the equatorial circle to the co-latitude of the place, and set the declination-semicircle to the
star's declination, move the index of the hour-circle till it shall point to the precise time at which the star is then distant from the meridian, found in the tables of the right ascension of the
stars, and the star will then appear in the glass.
Besides these uses, peculiar to this instrument, it may also be applied to all the purposes to which the principal astronomical instruments are applied; such as a transit instrument, a quadrant, and
an equal-altitude instrument.
See the description and drawing of an Equatorial telescope, or portable observatory, invented by Mr. Short, in the Philos. Trans. number 493, or vol. 46, p. 242; and another by Mr. Nairne, vol. 61,
p. 107.
EQUIANGULAR Figure, is one that has all its angles equal among themselves; as the square, and all the regular figures.
An equilateral figure inscribed in a circle, is always Equiangular. But an Equiangular figure inscribed in a circle, is not always equilateral, except when it has an odd number of sides: If the sides
be of an even number, then they may either be all equal, or else half of them will always be equal to each other, and the other half to each other, the equals being placed alternately. See the
demonstration in my Mathematical Miscellany, pa. 272.
, is also said of any two figures of the same kind, when each angle of the one is equal to a corresponding angle in the other, whether each figure, separately considered in itself, be an equiangular
figure or not, that is, having all its angles equal to each other. Thus, two triangles are Equiangular to each other, if, ex. gr. one angle in each be of 30°, a second angle in each of 50°, and the
third angle of each equal to 100 degrees.
Equiangular triangles have not their like sides necessarily equal, but only proportional to each other; and such triangles are always similar to each other.
Equicrural Triangle, is one that has two of its sides equal to each other; and is more usually called an Isosceles triangle.
, Equuleus, or Equus Minor, a constellation of the northern hemisphere. See EQUULEUS.
|
{"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/e/equatorial.html","timestamp":"2014-04-21T12:24:58Z","content_type":null,"content_length":"18878","record_id":"<urn:uuid:45e269bd-8805-46f6-8b83-0a8a0f490861>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
|
algorithms for comparing two simplicial complexes
Given a set $A$ of subsets of $\{1, \ldots n\}$ which is closed under taking subsets, let $X(A)$ be the corresponding simplicial complex (i.e. simplices of $X(A)$ are elements of the set $\bar A$, and gluing is induced by containment of subsets).
Consider the following computational problem
Input: a natural number $n$ and two sets $A$ and $B$ of subsets of $\{1,\ldots, n\}$, closed under taking subsets.
Problem: Are $X(A)$ and $X(B)$ isomorphic as simplicial complexes? (i.e. is there a bijection of $\{1,\ldots,n\}$ which bijectively sends faces of $X(A)$ to faces of $X(B)$?)
Question: I'm interested to know what algorithms are known for this problem. I'm specifically interested in worst running times in terms of $n$ alone. Please note that the size of the input can be
exponential in $n$.
In principle $A$ and $B$ might consist of $2^n$ subsets, so this is a lower bound for the problem, because the algorithm needs to read the input.
On the other hand the trivial algorithm of checking each permutation takes at most $\mathcal O(2^n\cdot n!)$ steps.
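(For reference, that trivial permutation check can be sketched as follows; the sketch indexes vertices from 0 and is only meant to pin down the $\mathcal O(2^n\cdot n!)$ upper bound, not to be efficient.)

```python
from itertools import permutations

def complexes_isomorphic(n, A, B):
    """Brute-force test whether X(A) and X(B) are isomorphic.

    A and B are collections of subsets of {0, ..., n-1}, each closed under
    taking subsets.  Worst-case running time is O(2^n * n!).
    """
    A, B = set(map(frozenset, A)), set(map(frozenset, B))
    if len(A) != len(B):
        return False
    for perm in permutations(range(n)):
        # perm induces a map on faces; it is an isomorphism iff every face of A
        # lands in B (equal cardinalities then force a bijection of faces).
        if all(frozenset(perm[i] for i in face) in B for face in A):
            return True
    return False
```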
algorithms co.combinatorics at.algebraic-topology
2 Answers
An exp(O(n)) algorithm is given in "Hypergraph isomorphism and structural equivalence of boolean functions", Eugene M. Luks, STOC 1999.
Aha, great reference! Hm, $e^{O(n)}=2^{cn}$ for a suitable constant $c$, while the naive brute force approach in $O(2^n\cdot n!)$ would amount roughly to $2^{dn\log(n)}$ (for
some other constant $d$), I think? – Max Horn Dec 14 '11 at 16:45
Max: yes, exactly. – Łukasz Grabowski Dec 14 '11 at 17:30
@Colin: do you know anything has been done since about the problem described in the paragraph 7.1? – Łukasz Grabowski Dec 14 '11 at 18:40
@Łukasz: I have no idea, sorry. – Colin McQuillan Dec 14 '11 at 18:53
A special case of this problem is the graph isomorphism problem. Interestingly, for this it is unknown whether it is solvable in polynomial time (relative to the number of vertices, so $n$ in your case), and also unknown whether or not it is $NP$-complete. As far as I know, Luks' algorithm is still state of the art (though I might be wrong), and that has runtime $O(2^{\sqrt{n\log n}})$.
Since this is a special case of your problem, its general worst case runtime will be unknown, too. Of course in this special case, one has only subsets of size 2 / simplices of dimension 1; as you point out, as soon as we allow arbitrary rank simplices, the above runtime cannot be achieved anymore, as the input alone has size $O(2^n)$.
EDIT: Actually, looking at the Wikipedia link I give, I discovered that there is a paper by Babai and Codenotti (2008), "Isomorphism of Hypergraphs of Low Rank in Moderately Exponential Time", where they give an algorithm for hypergraphs (and thus in particular simplicial complexes) of bounded rank with roughly the same general run time as Luks' algorithm for graph isomorphism. Of course that still does not answer the general question.
If the simplicial complexes are relatively sparse one could write them in $O(nk)$, where $k$ is the number of maximal simplices. $k$ is probably bounded by $n$ choose $n/2$, but could be
much lower. – Will Sawin Dec 14 '11 at 17:55
|
{"url":"http://mathoverflow.net/questions/83420/algorithms-for-comparing-two-simplicial-complexes?sort=votes","timestamp":"2014-04-16T19:42:03Z","content_type":null,"content_length":"60567","record_id":"<urn:uuid:ae6d558c-0eea-4c57-89a4-2a44febb8ed8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brevet US6537150 - Gaming devices having reverse-mapped game set
This application claims priority of and incorporates by reference the following U.S. provisional patent applications: No. 60/126,777, filed on Mar. 29, 1999, No. 60/127,663, filed on Apr. 2, 1999, No.
60/140,629, filed on Jun. 23, 1999, No. 60/158,589, filed on Oct. 7, 1999, and No. 60/159,766, filed on Oct. 15, 1999.
This invention relates to the field of gaming, and provides a system and method which enhances the entertainment value of a game. The invention is especially adapted for use with, but not necessarily
limited to, lottery-based games.
Conventional gaming machines employ direct mapping between the symbols displayed to the player, and the award paid out. As used in this specification, the term “direct mapping” means that the system
first determines the displayed symbol, and then maps that symbol to an award level. A simple example of a conventional direct-mapped gaming machine is an ordinary mechanical slot machine. The slot
machine contains a plurality of wheels, each wheel bearing a set of symbols. The configuration of symbols on each wheel determines a probability of obtaining any particular combination of symbols
when playing the machine. Each combination is mapped, or associated with, an award. The machine includes, implicitly or explicitly, a “pay table” which shows the award associated with each
combination. When a player achieves a given combination, the machine maps that combination to the appropriate award (which may be zero), and pays the player accordingly.
The above-described mechanical slot machine can be replaced by an electronic version, but the principle of operation is still the same. Through appropriate random number generators, the machine
derives a combination of symbols, and this combination is mapped directly to an award which is then paid to the player.
The direct-mapped systems of the prior art have several disadvantages. First, the games have limited variety. A conventional mechanical or electronic slot machine can function in essentially one way
only, and the games playable on such machines tend to become boring to the player. Secondly, some direct-mapped systems of the prior art allow little or no opportunity for a player to exercise a
degree of skill. In the example of the slot machine, the player has no role in the determination of the eventual award, other than by inserting money and pushing a button to operate the machine.
A third disadvantage of the direct-mapped systems of the prior art relates to legal requirements. Some jurisdictions permit only gaming devices which function as lotteries, i.e. games in which there
is a finite pool of prizes from which to draw. A pure slot machine, of the type which spins a set of wheels (either mechanical or virtual) to obtain a combination of symbols, is not a true lottery,
as described above, because the number of potential prizes of a particular category is, at least in theory, unlimited. While a direct-mapped system can be used with a true lottery, such systems are
difficult to implement, at least in part because the probability of each possible outcome changes as the pool of awards is depleted, and this changing probability must be appropriately modeled by the system.
Another disadvantage of systems of the prior art results from legal restrictions on “bonus” awards. Some jurisdictions effectively limit the use of bonus or secondary event awards, by requiring that
such awards not be counted in determining the net payout of a gaming device. These rules tend to limit the flexibility available to the designer of a game. The reverse-mapped system of the present
invention provides more flexibility, and can be more easily tailored to comply with local regulations while still providing a varied and entertaining game, through the use of bonus and secondary
event simulations that are reverse-mapped from pre-determined award outcomes.
It has been known to provide a lottery-type game which includes a pool of a fixed number of plays, all having pre-selected winning and losing outcomes. U.S. Pat. No. 5,324,035, the disclosure of
which is incorporated by reference herein, describes such a system. Due to the fact that all of the outcomes and displays are pre-selected, the entertainment value of the game is limited. Such games
also do not lend themselves to the application of skill in determining the outcome.
U.S. Pat. No. 4,494,197, the disclosure of which is incorporated by reference herein, discloses an electronic lottery system. In one embodiment, the latter system simulates a bowling game, and
presents a display to the player corresponding to a winning or losing play, depending on whether the system has electronically selected a win or a loss. The latter system, however, is limited in the
variety of games that can be constructed. Also, the patented system does not provide a convenient means for incorporating an element of player skill into the game, or for providing one or more bonus
awards to players. The present invention comprises a substantial improvement over the above-described patents.
The present invention provides a gaming system and method which solves all of the problems mentioned above. The system of the present invention provides games which are more exciting, and more
varied, as perceived by the player. The present invention makes it more feasible to incorporate aspects of skill into the play. The invention also makes it possible to provide multiple award
sequences, simulated secondary awards, and/or bonus awards to players. The present invention provides games which can be easily modified, by simple software changes only, to change the character of
the games. Finally, the present invention is especially suitable for use with lottery-type games, and is therefore suitable for use in jurisdictions which require finite pools of awards in a gaming device.
The present invention comprises a gaming system and method in which the outcome of a play is determined first, and then the outcome is mapped to a symbol suitable for display to the player. The
method is called “reverse mapping” because the outcome is determined first, and is then associated with, or mapped to, a symbol which corresponds to that outcome. In most cases, the outcome can be
reverse-mapped to any one of a plurality of symbol combinations, so the mapping function is, in general, not one-to-one.
The invention can be practiced with lottery-type games, or with other games. An example of a non-lottery game, which, by definition, uses an infinite “pool” of awards, could be an electronic slot
machine. In the latter case, the system determines an outcome, without depleting any pool of awards, using a predetermined probability distribution. The system then reverse-maps that outcome to a
symbol combination which is displayed to the player. If each of a plurality of symbols corresponds to the same outcome, then the system must choose randomly among them, to determine which symbol is
to be displayed. Because the “pool” is not depleted, the probability of obtaining a particular award does not change from play to play.
When using the present invention in a lottery-type game, having a finite pool of awards, the system chooses a game set element from the finite pool. Each game set element is coded for a particular
award, and/or for a bonus award, so the choice of the game set element determines what award can be won by the player. Then, the system associates (reverse-maps) that award with a symbol to be
displayed to the player, consistent with the value of the award. For each play, the game set element is withdrawn from the pool, so the probability of selecting a particular game set element in the
pool varies from play to play. By contrast, in a game which uses an infinite game set, a game set element is selected randomly for each play, and the probability of selecting a particular game set
element does not vary from play to play.
The reverse-mapped game can be combined with an element of skill to provide an even more varied and entertaining game. For example, in a lottery-type game based on video poker, the system selects a
game set element, from a pool, the game set element representing a “best” hand achievable by the player on that particular play. The system deals cards to the player, and gives the player the chance
to hold or replace each card, according to the rules of poker. If the player chooses an optimal or other pre-determined strategy, the system fulfills the player's choice with cards which correspond
to the maximum award associated with the game set element. If the player chooses a sub-optimal strategy, then the system may fulfill the player's choice with cards corresponding to an amount which is
less than or equal to the maximum award amount. The difference between the maximum possible award and the amount actually awarded to the player may be placed in an electronic “bank” which can be
added to the awards available on subsequent plays. A plurality of player terminals can be linked together, and the un-awarded amounts from one terminal can be added to a common “bank” shared by all
of the terminals. In this way, the entertainment value of the game is still further enhanced.
Games made according to the present invention can be varied still further by drawing two or more game set elements from the same game set, or from different game sets, especially in response to a
multiple wager by the player, adding the values of these elements, and reverse-mapping the result to an appropriate symbol display. The award displayed to the player may be the same as the sum of the
values of the selected game set elements, or it may be less, in which case the system deposits the unawarded portion into one or more funds used to support bonus plays, progressive awards, or
“mystery” awards. In this way, the games created by the present invention include a substantial degree of unpredictability, from the point of view of the player, even though the system pays out, in
the aggregate, the same percentage to players. In one simple example, a two-credit game, played according to the present invention, has a substantially different appearance from a one-credit game,
even where the game set elements are otherwise the same. The above variations can be applied both to the case of lottery games and non-lottery games, and can be used with games of pure chance and
games involving an element of skill.
In another embodiment of the present invention, the system selects a game set element from a finite pool, each game set element being coded either with an amount to be awarded or a symbol indicating
a bonus award. If the game set element contains an amount, the player may win that amount, or some lesser amount (in which case the balance is added to a separate bonus fund), the outcome being
determined randomly or according to a pre-determined probability distribution. If the selected game set element is coded or is otherwise determined to be applied as a bonus award, the player may
receive all or part of the stored bonus fund, either in a single award or in a multi-step display/award sequence. Thus, in this embodiment, the player may receive an amount which is equal to or less
than an amount shown on the game set element. Bonus awards can also be made where the game set element is not specially coded for a bonus, and can be awarded in multiple steps.
The invention has the advantage that it greatly facilitates the construction of varied and entertaining games. Moreover, a given game can be changed without making any hardware changes, and by making
only minor software adjustments, i.e. by changing a game set “template” used to create a pool of game set elements to be stored in a computer memory. The invention requires no probabilistic analysis
of the display symbols, because these symbols are selected only after the outcome is determined. Moreover, because a given outcome can be mapped, in general, to a plurality of different symbols, the
game as perceived by the player is much more varied than comparable games of the prior art.
In another embodiment, the invention includes a video lottery poker game which may comprise one or more draws from finite pools of game set elements. In one version of this embodiment, an initial
hand is presented to the player, the hand being determined by a game set element drawn from a finite pool. Unless the initial hand is a winner, in which case an award is made and the game ends, the
player is given the opportunity to decide which cards to hold or replace, in an attempt to create a winning hand. Based in part on the cards initially dealt, and in part on the strategy selected by
the player, the system selects another pool of game set elements from which to draw. If this second draw produces a winning hand, the player receives an award. In another version of the above game,
the initial hand is determined by random calculation, and not by drawing an element from a finite pool. The game is otherwise the same as in the previous version.
The present invention therefore has the primary object of providing a method and system for playing a game, wherein the outcome of each play of the game is reverse-mapped to a symbol to be displayed
to a player.
The invention has the further object of enhancing the variety and entertainment value of existing games.
The invention has the further object of providing a variety of possible games using the same gaming equipment.
The invention has the further object of providing a gaming system and method which can be used either with lottery-type games or with games which have no fixed pool of awards.
The invention has the further object of providing a reverse-mapped gaming system and method, in which the skill of the player can affect the amount won in the game.
The invention has the further object of providing a reverse-mapped gaming system and method, suitable for use with a plurality of gaming terminals linked together in a network.
The invention has the further object of providing a game which can be easily modified.
The invention has the further object of providing a gaming system and method wherein a single game set can be used to operate a plurality of different games having differing structures, as perceived
by a player.
The invention has the further object of providing games which include bonus awards, or multiple award sequences, and which enhance the unpredictability of the games from the point of view of the player.
The invention has the further object of providing games involving skill, such as video poker.
The invention has the further object of providing a game in which a player has more than one chance to win, on a given play.
The invention has the further object of providing a game in which the skill of a player may affect the amount awarded to the player, and the amount added to a bonus fund for later distribution.
The reader skilled in the art will recognize other objects and advantages of the present invention, from a reading of the following brief description of the drawings, the detailed description of the
invention, and the appended claims.
FIG. 1 provides a flow chart showing the programming of a reverse-mapped game according to the present invention.
FIG. 2 provides a flow chart showing the programming of a lottery-type reverse-mapped game, according to the present invention.
FIG. 3 provides a block diagram showing a network of gaming terminals, used in the reverse-mapped system of the present invention.
FIG. 4 provides a flow chart illustrating part of the programming of the central game servers, according to the present invention.
FIG. 5 provides a block diagram showing essential components of a player terminal according to the present invention.
FIG. 6 provides a flow chart showing the programming of a lottery-type reverse-mapped video poker game, according to the present invention.
FIG. 7 provides a block diagram of a hand-held remote player terminal according to the present invention.
FIG. 8 provides a flow chart showing the programming of a video lottery poker game, in an alternative embodiment of the present invention.
One basic form of the method of the present invention comprises the steps of randomly selecting an outcome, and then associating that outcome with a symbol to be displayed to the player. The process
of associating the outcome with a symbol is called “reverse mapping”, because, in the present invention, a potential or final award outcome is determined first, and the system then finds a symbol
display, or series of displays, which can logically be associated (“mapped”) with the outcome.
In the reverse-mapped games of the present invention, the mapping of a final outcome to the displayed symbol may not be one-to-one. Instead, there may be, in general, many different symbol
combinations that correspond to a selected final outcome. An important step in the method of the present invention is the selection of one of these symbol combinations which corresponds to the
selected outcome. As will be shown below, the fact that the mapping is not one-to-one makes the game more varied and more entertaining.
The above-described process is especially suited for use with lotteries, which include finite pools of awards, but can also be used with random games which are not lotteries, i.e. where there are no
finite pools. The following description, illustrated by Tables 1 and 2, provides a simple example of a game which comprises an electronic slot machine.
TABLE 1
Symbol Award
777 25
BBB 10
PPP 10
000 10
CCC 5
DDD 5
TTT 5
MMM 3
XXX 3
Mixed 0
Table 1 comprises a “pay table”, or payoff matrix, for this first example, showing the award associated with each symbol. The term “mixed” refers to all combinations not explicitly shown. It is assumed that on
each play, the player wagers one unit, and the award is measured in terms of the same units. Of course, a game may be structured such that more than one unit can be wagered at one time, in which case
the awards can be multiplied by the number of units wagered.
In the above example, there are five possible awards, namely 25, 10, 5, 3, and 0. In the operation of the present invention, a computer is programmed to select first one of these awards according to
a predetermined probability distribution. For example, the probability of obtaining each possible award could be determined according to the distribution shown in Table 2:
TABLE 2
Award Probability
25 .01
10 .03
5 .05
3 .06
0 .85
The computer selects one of the available awards according to the indicated probability distribution. In the above example, the mean award would be 0.98, with a standard deviation of about 3.17.
Thus, when a player wagers one unit, the expected payback is 0.98 units.
After selecting the award, the computer then determines a symbol to be displayed to the player. In this example, if the award is 25, there is only one allowable symbol, namely “777”, and the system
displays this symbol to the player. If the award is 10, there are three possible symbols, namely “BBB”, “PPP”, and “000”. The program then selects one of these symbols, such as by randomly selecting
one of them with equal probability. If the award is 5, the program would select one of the symbols “CCC”, “DDD”, or “TTT”, with equal probability. If the award is 3, the program can select one of the
symbols “MMM” and “XXX”, again with equal probability. If the award is zero, the program can select any of the other symbols available (not shown in the table).
The procedure for using a probability distribution to select an outcome, by computer, is well known in the art of Monte Carlo methods. For example, the selection from Table 2 could be obtained by
generating a random number between 1 and 100, and selecting 25 if the random number is 1, selecting 10 if the random number is between 2-4, selecting 5 if the random number is between 5-9, selecting
3 if the random number is between 10-15, and selecting zero if the random number is between 16-100. If the random number generator is properly configured so that it produces a true random number
within the stated range, the above procedure will select the various awards with the desired probabilities. Other techniques could be used instead of the method described above.
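By way of illustration only, the following Python sketch shows one possible implementation of the outcome selection of Table 2 and the reverse-mapping of Table 1. The losing symbol combinations listed for the zero award are assumed for the example and are not taken from the tables.

import random

# Reverse-mapping table patterned on Table 1: each award is associated
# with the symbol combinations that may be displayed for it.
SYMBOLS_BY_AWARD = {
    25: ["777"],
    10: ["BBB", "PPP", "000"],
    5:  ["CCC", "DDD", "TTT"],
    3:  ["MMM", "XXX"],
    0:  ["AB7", "BC7", "CA7"],   # assumed examples of "mixed" (losing) displays
}

def select_award():
    # Outcome selection per Table 2, using a random integer between 1 and 100.
    n = random.randint(1, 100)
    if n == 1:
        return 25
    if n <= 4:
        return 10
    if n <= 9:
        return 5
    if n <= 15:
        return 3
    return 0

def reverse_map(award):
    # Choose, with equal probability, a symbol combination consistent with the award.
    return random.choice(SYMBOLS_BY_AWARD[award])

award = select_award()
print(reverse_map(award), award)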
The above-described game is a simple example of reverse mapping. That is, the system first determines an outcome, i.e. an award to be paid, and then the system reverse-maps that outcome to a symbol
which is to be displayed.
The basic game described above can be summarized, and generalized, according to the flow chart of FIG. 1, which comprises the essential programming of a computer which implements the invention. In
block 1, the player inserts cash (or a cash equivalent, such as a voucher for credits) into a terminal or other gaming machine. In block 2, the player selects the amount to be wagered, and presses a
button which starts the game. In block 3, the system determines the outcome, i.e. the award, of the play, using the technique described above, or its equivalent. In block 4, the system determines a
symbol to be displayed to the player, the symbol being consistent with the outcome previously determined. In block 5, the system displays the symbol to the player, and also displays the associated
award (which may be zero).
In block 6, the system increments a credit account according to the amount won by the player. If the award is zero, the account is incremented by zero. In test 7, the player is given the option of
playing again or terminating the game. If the player elects to terminate, the program proceeds to block 8, which pays the player the net winnings, if any, either in the form of cash or an appropriate
cash substitute. If the player chooses to continue, the program returns to block 2.
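A corresponding sketch of the overall flow of FIG. 1 is given below; it reuses the select_award and reverse_map helpers of the previous sketch, passed in as parameters, and simplifies the cash-handling steps.

def play_session(credits, select_award, reverse_map):
    # Simplified flow of FIG. 1 (blocks 2 through 8).
    while credits > 0:
        credits -= 1                              # block 2: wager one unit
        award = select_award()                    # block 3: determine the outcome
        symbol = reverse_map(award)               # block 4: reverse-map to a symbol
        print(symbol, award)                      # block 5: display symbol and award
        credits += award                          # block 6: increment the credit account
        if input("Play again? (y/n) ") != "y":    # test 7: player's choice
            break
    return credits                                # block 8: pay the net winnings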
The above example relates to a slot machine, in which the available supply of awards is theoretically unlimited. The same principle can be applied to a lottery game, in which there is a finite pool
of awards. The following description, made with respect to Tables 1 and 3, provides a simple example of such a lottery game, again in the context of an electronic slot machine.
TABLE 3
Award    Number of Elements in Game Set    Award Contribution
Totals    228    225
Table 3 represents the finite pool of elements in the game set. In practice, a game set element physically comprises an element of data, stored in a computer memory, which element identifies the
award associated with that element. For example, the computer can store a plurality of game set elements as an array, each element of the array identifying a monetary value. The array defines the
desired probability distribution according to the number of elements having a particular value. For example, if in a particular game, it is desired to award one thousand dollars, with an initial
probability of 0.0001, and if the array contains 10,000 elements, then there should be one element in the array having the value of one thousand dollars.
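The following sketch shows one way such an array of game set elements could be created; the template format is an assumption, but the figures match the example just given (10,000 elements, one of which is worth one thousand dollars).

# Template: (award value, number of elements coded with that value).
GAME_SET_TEMPLATE = [
    (1000, 1),     # one element worth one thousand dollars
    (0, 9999),     # remaining elements (assumed losing, for illustration)
]

def build_pool(template):
    pool = []
    for value, count in template:
        pool.extend([value] * count)
    return pool

pool = build_pool(GAME_SET_TEMPLATE)
assert len(pool) == 10000    # initial probability of the 1000 award is 1/10000 = 0.0001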
In the simplified example of Table 3, there are two elements associated with an award of 25, five elements associated with an award of 10, and so on. The product of the award and the number of
corresponding elements of the game set is shown in the third column, and represents the total amount that can be won from the awards of a given level. The sum of the values in the third column
therefore represents the total of awards available to be paid to the player. The sum of the values in the second column is the number of elements in the pool. In the example of Table 3, the system
will eventually pay out 98.68% (225/228) of the total amount wagered.
In operation, the computer randomly selects a game set element from the finite pool. As explained above, each such element is coded so that it is associated with a particular award. The system must
then translate, or reverse-map, that award into an appropriate symbol display. The latter step is done by using Table 1, as before. For example, if the system has chosen one of the five elements
having an award of 10, the system must then select, from Table 1, one of the symbol displays (either “BBB”, “PPP”, or “000”) corresponding to an award of 10, to be shown to the player. The chosen game
set element is then removed from the pool.
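A minimal sketch of the draw-and-remove step is shown below. The small pool used here reflects only the two elements worth 25 and the five elements worth 10 that are stated for Table 3; the remaining elements are omitted.

import random

def draw_game_set_element(pool):
    # Select a game set element at random and remove it from the pool,
    # so that the probabilities change from play to play.
    index = random.randrange(len(pool))
    return pool.pop(index)

pool = [25, 25, 10, 10, 10, 10, 10]     # partial pool patterned on Table 3
award = draw_game_set_element(pool)
print(award, len(pool))                  # the pool now holds one fewer element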
In the latter example, the probability distribution governing the selection of game set elements is inherent in the finite pool, and need not be separately calculated. The pool can be visualized as a
homogeneous mix of game set elements. The probability of selecting a given element is determined by the number of elements remaining in the pool. Clearly, the game set elements associated with large
awards (such as the award of 25, in Table 3) are present in very small numbers, so the probability of selecting those elements is relatively small. Note also that the probability of selecting any
element changes after each play, because the size of the pool is reduced slightly after each play. The initial probability of obtaining each award can clearly be established by determining the number of elements in the game set associated with each possible award.
The probability of choosing each symbol combination corresponding to the selected award can be varied. In the above example, it was assumed to be a uniform distribution. That is, in the example of
Table 1, where “BBB”, “PPP”, and “000” all correspond to the value of 10, if the system seeks a symbol combination corresponding to a value of 10, it can be programmed to select one of these three
combinations with equal probability. But this rule could be modified, such that a non-uniform distribution is used, and doing so would add further to the variety and unpredictability of the game.
An important variation on the above example is the possibility of varying the award according to the skill exhibited by the player. The player terminal can be programmed to receive input from the
player, and this input can be evaluated to determine whether the player's input represents an optimal or sub-optimal game strategy. In the above example of a slot machine, the player can be given the
opportunity of stopping a set of virtual or fixed wheels, and the extent to which the player is successful in stopping the wheels to yield repeated symbols depends on the skill of the player. The
award indicated in the above tables would then be the maximum allowable award for that particular play. If the player fails to perform in the most optimal manner, the system is programmed to adjust
the award so that the award is less than or equal to the maximum allowable value. The amount of adjustment can be predetermined for each play, or it can be determined randomly. The unused award can
then be transferred to an electronic storage facility, and can be awarded to a player later. The latter concept can be used to create special awards, which can be fixed, or which can be “progressive”
jackpots wherein the available award continues to grow until won by a player. Progressive awards will be discussed in more detail later, in connection with the example of a video poker game.
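One possible way of coding the skill adjustment and the banking of the unused award is sketched below; the rule that a sub-optimal play wins a randomly reduced amount is only one of the alternatives described in the text.

import random

def settle_play(max_award, played_optimally, bank):
    # Optimal play wins the maximum allowable award; sub-optimal play wins
    # a reduced amount, and the difference is added to the award bank.
    paid = max_award if played_optimally else random.randint(0, max_award)
    bank += max_award - paid
    return paid, bank

paid, bank = settle_play(max_award=25, played_optimally=False, bank=0)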
The basic lottery game described above can be summarized according to the flow chart of FIG. 2, which represents the programming of a computer which implements this aspect of the invention. In block
11 of FIG. 2, the player inserts cash, or its equivalent, into the machine. In block 12, the player selects a wager, as before, and presses a button which starts the game. In block 13, the system
selects (either randomly, sequentially, or by any other means of selection) a game set element from a finite pool of such elements, and electronically removes that element from the pool. In block 14,
the system determines a maximum possible game outcome, which may include an addition of bonus funds obtained from a stored bank of unused awards.
Block 15 is optional, and may be included if the game is to include an element of skill. In block 15, the system accepts input from the player. In the example of a slot machine, this input may be a
signal produced by the player's manual attempt to stop a virtual wheel. In the case of video poker (to be described in more detail later), the skill may be in the player's strategy of which cards to
hold and which cards to replace. If the game does not include any element of skill, then block 15 would be omitted.
In block 16, the system reverse-maps the determined maximum outcome, as modified by input from the player (if such input is received), into an appropriate display image, and displays that image to
the player. In block 17, the system increments the player's account with the amount won, if any.
Test 18 determines whether there is any unused award, i.e. if there is a difference between the maximum available award and the award actually received. The latter difference will be nonzero, for
example, if the game involves an element of skill, and if the player has performed in a sub-optimal manner. If there is an unused portion of the award, the unused portion can be stored in an
electronic award bank, in block 19. Test 20 determines whether and how a part of the award bank is to be awarded. If an award is to be made, the system computes the amount of the award bank to be
awarded, and that amount will be added to the maximum outcome determined in block 14.
The determination, in test 20, of how to distribute a bonus or game play award, can be made by entirely random means, or it can be programmed to occur after a predetermined number of plays. Many
other rules can be devised to determine when the bonus will be awarded. Similarly, the amount of the bonus that is distributed at one time can be predetermined, or it can vary according to any
pseudo-random scheme. In the present invention, a bonus can include a monetary award or other prize, or it could comprise one or more free chances to play again.
In test 22, the player is given the opportunity to decide whether to play again. If the player wishes to continue, the program returns to block 12. If the player wishes to stop, the program proceeds
to block 23, where the player is paid whatever amount has been won.
The games made according to the present invention, in both of the examples discussed above, have several important advantages. First, the use of the reverse-mapping procedure makes it possible to
construct a game which is varied and entertaining, but which still has the same overall payout as a conventional game using direct mapping. By using reverse-mapping, one can vary the displays seen by
the player, even though the underlying probabilities which govern the game are unaffected. Thus, it is possible to program the gaming machine with a wide variety of games, or with different
variations of the same type of game, by making only minor software changes, and without modifying the equipment used to play the game.
Second, the use of reverse-mapping eliminates the need to reconfigure the probabilities of obtaining each possible symbol displayed to the user. In the example of the slot machine, one need not
re-create the electronic “wheels” for each change in the game. Instead, one directly controls the outcome, and reverse-maps that outcome to any available symbol display which is consistent with a pay
table shown to the player.
Third, when used in a lottery-type game, the reverse-mapping process of the present invention makes it very easy to create a more varied and entertaining assortment of lottery games, which games
are suitable in jurisdictions which require that the games be true lotteries.
Also, the present invention is especially suited for use in environments where large numbers of gaming machines are interconnected through a network. This aspect of the invention will be described below.
FIG. 3 provides a block diagram showing a simplified network of gaming machines. This figure shows two central game servers 31 and 32, each being connected to a plurality of player terminals. Server
31 may include an optional display 33 which facilitates the monitoring of the status of the terminals under the control of that server. Similarly, server 32 may be associated with display 34. All of
the servers are shown connected in a network, which also includes cashier terminal 35 and player account server 36. The player account server performs the bookkeeping necessary to keep track of
credits and debits for each player. The entire system may be linked, by a modem, to remote-site central game server 37.
Among other things, the central game servers may act as repositories of game set elements to be used by the various terminals, in lottery-type games. That is, each time a player starts a game, the
player terminal obtains a game set element from the finite supply located in the central game server. Also, if the game involves an aspect of skill, and if the player's play is sub-optimal, the
player terminal may be programmed to return, to the central game server, the unused portion of an award, so that the unused portion may be added to the award bank discussed above. An award bank can
also be maintained even if no element of skill is involved. This award bank could be made available only to the terminal from which it came, or, more preferably, it can be made available to other
terminals controlled by the same server. The latter alternative provides a more varied and interesting game, because one player can benefit, indirectly, from mistakes made by other players at
different terminals. Thus, the network further enhances the variety of the games, because the players operating the various networked terminals are, in effect, competing with each other.
FIG. 4 provides a flow chart illustrating the programming of the central game servers. In block 40, a game set template is loaded into the central game server. The template is a set of data from
which the server can create the game set elements to be used in the game. The server creates these elements in block 41, and stores them in its memory. Block 42 represents the operation of the
central game server in receiving requests for game set elements, from the various player terminals, and in randomly selecting a game set element from the finite pool. Information on the game set
element drawn from the pool may be shown on the optional display, indicated in block 43.
In block 44, the central game server sends the selected element to the player terminal, and removes that element from the finite pool. If the pool of game set elements is empty, the system
automatically initiates the process of creating a new game set, in block 46. Otherwise, the program returns to block 42, the system being ready to accept the next request for a game set element.
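The server-side logic of FIG. 4 could be sketched as follows; the template format and the request mechanism are assumptions made only for illustration.

import random

def serve_requests(template, requests):
    # Build a pool from the template (block 41), answer requests for game
    # set elements (blocks 42 and 44), and create a new game set when the
    # pool is empty (block 46).
    pool = [value for value, count in template for _ in range(count)]
    for _ in range(requests):
        element = pool.pop(random.randrange(len(pool)))
        yield element                      # sent to the requesting player terminal
        if not pool:
            pool = [value for value, count in template for _ in range(count)]

for element in serve_requests([(25, 2), (10, 5), (0, 10)], requests=3):
    print(element)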
The present invention is not limited to use with networked terminals, but can be used with stand-alone terminals. Players could also use hand-held terminals, or terminals which are linked by wireless
connections to a central server. FIG. 7 provides a block diagram of a hand-held remote player terminal, showing the main circuit board and the essential peripheral devices connected thereto. These
variations do not affect the basic principle of operation. However, in the case of stand-alone terminals, the game set elements are not taken from a pool serving a plurality of player terminals, but
would instead be provided in a local memory, at each terminal. In the case of hand-held terminals, the hand-held terminals could be stand-alone devices, or they could be continuously or
intermittently linked to a central server by appropriate wireless or hard-wired means.
FIG. 5 provides a block diagram showing the basic components of a typical player terminal. The essential component of the terminal is main processor/controller block 100. The main board receives
input from the player through touch screen sensor 116 and/or player push buttons 112. The choices made by the player, and the results of the game, can be shown on display monitor 114. The program for
the game can be stored on EPROM 102, and the main board preferably has access to non-volatile memory 104. The main board can also be connected to printer 106 and bill acceptor 108. A network
interface 110 allows communication with other machines.
Many other variations of the hardware depicted in FIG. 5 are possible. For example, the printer and/or bill acceptor can be provided at a different location from the player terminal. The network
communication interface can be replaced with a wireless connection, such as an infrared interface. The programming for the games could come from a central game server, rather than an EPROM attached
directly to the terminal. These variations do not significantly affect the basic principles underlying the operation of the present invention.
As mentioned above, an important application of the present invention is in the game of lottery poker. The game to be described is a substantial improvement over a conventional game of video poker.
Before describing the game of the present invention, it is helpful to review the operation of a conventional video poker game.
In a conventional video poker game, using direct mapping, the computer displays a pay table, showing the award to be made for each category, i.e. royal flush, straight flush, full house, etc. A
typical pay table is shown in Table 4. This pay table applies to a five-credit game, since the minimum award (for Jacks or Better) is five units.
TABLE 4
Royal Flush 4000
Straight Flush 250
Four of a Kind 125
Full House 40
Flush 25
Straight 20
Three of a Kind 15
Two Pair 10
Jacks or Better 5
The system randomly “deals” five cards from a simulated deck, and displays these cards on a video monitor. The player must then decide which cards to hold, and which cards to replace with a new card
randomly drawn from the deck. After the player has indicated his or her choices, the system draws the cards requested, and evaluates the new hand according to the pay table. The amount won, if any,
is credited to the player's account, and the player can then begin the next play.
The above-described game clearly uses direct mapping, because the cards to be dealt are determined first, and then the final hand is evaluated according to the pay table.
In the reverse-mapped video poker game of the present invention, the maximum allowable award for a particular play is determined first. Then, subject to input received by the player, the system
determines symbols (i.e. poker hands) which correspond to the amount to be awarded.
The essential programming of a reverse-mapped lottery poker game of the present invention is illustrated by the flow chart of FIG. 6. In block 51, the player inserts cash, or its equivalent, into the
terminal, selects a wager, and starts the play, typically by pressing a button. In block 52, the system selects a game set element from a finite pool. Each game set element comprises a data element
which represents a maximum potential amount to be awarded. Each such amount corresponds to a particular hand category, such as royal flush, straight, full house, etc. The selection can be performed
randomly or sequentially, from among the game set elements. The pool contains a predetermined number of game set elements for each category; the greater the value of the category, the fewer the
elements in the pool corresponding to that category. For example, the number of royal flushes in the pool should be far smaller than the number of straights.
To each amount which the player can win on a particular play, there is associated a table of poker hands which can be dealt. The hands in each table are chosen such that if any such hand is presented
to the player, and if the player plays in an optimal manner, the player will be able to achieve the category corresponding to the maximum possible award amount. In general, a particular hand may be
listed in more than one table, because a given hand may be played differently to yield different final results.
In block 53, the system selects a hand from the table corresponding to the amount determined in block 52, and displays the selected hand to the player. The system also displays a pay table to the
player, so that the player will know the award obtainable for each category.
In block 56, the system accepts inputs from the player. The player must indicate (such as by pressing buttons or clicking a mouse) which cards should be held, and which cards should be replaced by a
new card drawn from the deck. Thus, if five cards are dealt to the player, the player must make five decisions, i.e. the player must decide, for each card, whether to hold that card or draw a replacement.
For each possible hand of poker, and depending on the pay table showing the awards to be won for each category, there is an optimal strategy which can be calculated using elementary principles of
combinatorics. The following example shows how this calculation can be performed.
First, assume that the pay table showing the awards for various categories is as follows:
Royal flush 800
Straight flush 50
Four of a kind 25
Full house 9
Flush 6
Straight 4
Three of a kind 3
Two pairs 2
One pair Jacks or better 1
The pay table shown above means that for a wager of one unit, a royal flush returns 800 units, etc. The pay table is chosen arbitrarily at the outset. As will be apparent from the following
discussion, the optimal strategy depends critically on the pay table.
Now, suppose that the hand dealt to the player is as follows:
King of Clubs
King of Hearts
Queen of Hearts
Jack of Hearts
Ten of Diamonds
There are 32 ways to play any five-card hand, as shown below:
Hold all cards or discard all cards 2 ways
Discard 1 card 5 ways
Discard 2 cards 10 ways
Discard 3 cards 10 ways
Discard 4 cards 5 ways
Each of the above ways comprises a possible strategy. For each such strategy, one can calculate an expected return. The “optimal” strategy is then the strategy having the greatest expected return.
In the example given above, suppose that one wishes to compute the expected return achieved by holding only the King of Hearts, the Queen of Hearts, and the Jack of Hearts, and by drawing two new
cards. The number of ways of drawing two cards from a deck of 47 cards (i.e. 52 cards less the original five cards dealt) is obtained by elementary combinatorics as (47)(46)/2!, which is 1081.
Thus, the probability of obtaining each new hand is 1/1081.
The number of ways of obtaining a royal flush, for the cards held, is one, so the probability of obtaining a royal flush is 1/1081.
The number of ways of obtaining a straight flush, for the cards held, is also one, so the probability of obtaining a straight flush is also 1/1081.
The number of ways of obtaining either four of a kind or a full house is zero, because neither category can be constructed with the three cards held. Thus, the probability of obtaining four of a kind
or a full house is zero.
The number of ways of obtaining a flush, but not a royal flush or a straight flush, is 43 (45 ways of drawing two cards of the same suit, less the royal flush (1), less the straight flush (1)). Thus,
the probability of obtaining a flush is 43/1081.
In similar fashion, one can determine, by combinatoric principles, i.e. by computing the number of ways of drawing cards, that the probabilities of obtaining a straight, three of a kind, two pairs,
and one pair Jacks or better, are 22/1081, 7/1081, 21/1081, and 318/1081, respectively. In computing the number of ways of drawing cards, one must take into consideration the cards that have been
dealt and/or discarded, and thus removed from the deck. In general, the probability of obtaining a particular category is the number of ways of executing the selected strategy in a manner which
yields that category, divided by the total number of ways of executing that strategy.
The expected return obtained from the above-mentioned strategy is computed by multiplying each pay table result by the probability of achieving that result. For the above example, the expected (mean)
return is:
(800)(1/1081) + (50)(1/1081) + (6)(43/1081) + (4)(22/1081) + (3)(7/1081) + (2)(21/1081) + (1)(318/1081) = 1577/1081, or approximately 1.459.
That is, if the player plays the above strategy many times, the expected return for a wager of one unit will be 1.459 units.
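The calculation can be checked with a few lines of Python, using the probabilities derived above and the pay table of this example:

outcomes = [            # (pay table amount, probability for the cards held)
    (800, 1/1081),      # royal flush
    (50, 1/1081),       # straight flush
    (6, 43/1081),       # flush
    (4, 22/1081),       # straight
    (3, 7/1081),        # three of a kind
    (2, 21/1081),       # two pairs
    (1, 318/1081),      # one pair, Jacks or better
]
expected_return = sum(pay * p for pay, p in outcomes)
print(round(expected_return, 3))    # prints 1.459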
In a similar manner, one can compute the expected return for each of the other 31 ways to play the given hand. One therefore derives a set of 32 numbers, comprising the expected returns from each way
of playing the hand. The strategy having the greatest expected return is defined as the optimal strategy for that hand.
Since the expected return is essentially a weighted average of the pay table amounts, the optimal strategy, in the above example, depends critically on the pay table.
It is clear that the above-described computation can be repeated for every possible initial hand. That is, for each possible hand, one can calculate the expected return from each of the 32 ways of
playing that hand, and can determine the optimal strategy by selecting the strategy having the greatest expected return.
In summary, for each possible hand, and given a pay table, there is an optimal strategy, i.e. a strategy which maximizes the expected return to the player.
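A sketch of this computation, for a single dealt hand, is given below. The categorize helper, which classifies a completed five-card hand into a pay-table category, is assumed and not shown; the surrounding enumeration covers exactly the 32 hold/discard strategies.

from itertools import combinations

def optimal_strategy(hand, deck, pay_table, categorize):
    # hand: the five dealt cards; deck: the remaining 47 cards;
    # pay_table: {category: award}; categorize: assumed helper function.
    best_hold, best_return = None, -1.0
    for k in range(6):
        for hold in combinations(range(5), k):        # 32 strategies in total
            kept = [hand[i] for i in hold]
            draws = list(combinations(deck, 5 - k))
            total = sum(pay_table.get(categorize(kept + list(d)), 0) for d in draws)
            expected = total / len(draws)
            if expected > best_return:
                best_hold, best_return = hold, expected
    return best_hold, best_return    # (indices of cards to hold, expected return)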
The invention is not limited to the use of the particular procedure described above. It is also possible to use other criteria for determining the “optimal” strategy. For example, one could define as
“optimal” the strategy which maximizes expected return and minimizes variability of return. One could define optimal strategies in still other ways, and these ways could be partially or completely
independent of the concept of expected or average return. It is even possible to predefine an “optimal” strategy without regard to the pay table, and without regard to what is optimal in the
mathematical sense. It is only necessary that the system be able to compare the performance of a player with a stored table representing an “optimal” strategy. The present invention is intended to
include all of the latter alternatives.
When the optimal strategies have been derived in the manner described above, the results are stored in a computer memory. In the preferred embodiment, the memory of the computer includes a table
which shows the optimal strategy for every possible hand. Thus, by electronically examining the table, and by comparing the optimal strategy with the decisions actually made by the player, the system
can determine whether or not the player's choice was optimal.
The system performs this table lookup in test 58, to determine whether or not the player's choice was optimal. If the choice was optimal, the system continues in block 63. In block 63, the system
must “fill” the player's order, i.e. it must replace those cards which the player wants to replace with cards from the remainder of the deck. It must do so while giving the player a final hand
corresponding to the optimal hand determined earlier. This process is simple, because the system can simply fill the player's order with cards which make the final hand identical to the optimal hand
stored in memory. The modified hand that is displayed in block 63 is also known as the “resulting draw”, because it includes the cards drawn as the result of the player's decisions on which cards to
hold and which cards to draw.
The system then credits the player's account in block 64, and gives the player, in test 65, the opportunity to stop the game (block 66) or play another hand (return to block 51).
Now suppose that the player's strategy was sub-optimal. The system must next determine an amount to be awarded to the player. In block 61, using the choices made by the player, the system determines
the cards that generate a hand belonging to another category, other than the optimal category. In one preferred embodiment, the system generates a hand which comprises the next most valuable category
following the optimal category. For example, if the pay table is as shown in Table 4, and if the system had chosen (in block 52) the category “full house” as the category available to the player,
then if the player performs sub-optimally, the system would then look for cards (in fulfillment of the player's choices) which produce a flush, which is the next highest category following a full
house. However, if the player's choices make it impossible to recreate the next highest category with the cards available, the system would then be programmed to look for cards which correspond to
the next category in the hierarchy (in this case, a straight), and so forth.
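One possible form of the category fallback of block 61 is sketched below; the ordered list of categories and the achievable predicate (which tests whether a category can still be formed, given the player's choices and the cards available) are assumptions.

def fallback_category(categories, optimal, achievable):
    # categories: ordered from most to least valuable; optimal: the category
    # originally selected in block 52; achievable: assumed predicate.
    for category in categories[categories.index(optimal) + 1:]:
        if achievable(category):
            return category
    return None    # no paying category can be formed from the cards available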
In another embodiment, the system need not be limited to evaluating to the next highest category. Instead, the system could choose from among several (or all) categories, according to a predetermined
probability. Thus, if the player performs suboptimally, the system can be programmed to select another category according to any probability distribution, and award an amount to the player
corresponding to that category. In the most general case, it is even possible to select a more valuable category than the one which the player failed to achieve. That is, it is possible to award the
player a greater award than if the player had played optimally. The latter alternative further enhances the entertainment value of the game.
In another variation of the video poker game, suppose that the system awards the maximum amount only if the player performs optimally. Thus, if the player's choices are sub-optimal, the program will
award the player an amount which is less than the maximum amount. In making this award, the system can be structured in at least two different ways. One way is simply to award the player an amount
which is less than the maximum amount, without doing anything further. Another, more interesting way is to award the player less than the maximum amount, and to place the difference into one or more
“banks” to be used to fund future awards. The concept of funding an award bank can also be applied in the case described above, wherein the award is made according to a predetermined probability distribution.
Thus, in test 59, the system determines whether the un-awarded sum is to be added to the bank. If so, the system will add the difference between the maximum achievable amount, for the previous play,
and the amount actually won by the player, to the bank. This step may be accomplished simply by adjusting the pay table, in block 60. One simple way of doing so is to add the un-awarded sum to the
available award for the most valuable category (royal flush). Another way of doing so is to distribute this sum across several categories, preferably two or more of the most valuable categories. When
the pay table has been adjusted, the player will automatically see the revised pay table on the next play.
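A simple sketch of the pay-table adjustment of block 60 follows, using the first of the two distribution rules mentioned (adding the whole un-awarded sum to the most valuable category); the starting figures are taken from Table 4.

def bank_to_pay_table(pay_table, unawarded):
    # Add the un-awarded difference to the award shown for the royal flush,
    # so that the player sees the revised pay table on the next play.
    pay_table["Royal Flush"] += unawarded
    return pay_table

pay_table = {"Royal Flush": 4000, "Straight Flush": 250, "Four of a Kind": 125}
pay_table = bank_to_pay_table(pay_table, unawarded=15)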
Next, the selected sub-optimal hand (or, in the more general case, the non-optimal hand) is displayed to the player, in block 62. The program then continues with block 64, as before.
The game described above is clearly a lottery, because it operates with a fixed pool of awards, selected in block 52. The game can be practiced in the networked environment described above, wherein
the central game servers, or lottery gaming controllers, store the game set elements and distribute those elements to the various player terminals upon request.
The game described above is also an example of reverse-mapping, because the maximum outcome of each play is determined first, and the hand that corresponds to this outcome is determined later.
Moreover, the mapping is not one-to-one, because there are a plurality of hands belonging to each category.
The invention is not limited to use with simulated slot machines or video poker. The same concepts can be applied to many other kinds of games, such as video black jack, roulette and other wheel
games, and others.
The principles of the invention, discussed above, can be extended further to create even more varied and entertaining games. The following paragraphs describe additional ways in which games can be
created using the reverse-mapping of the present invention. All of the following cases can be practiced with either lottery or non-lottery games, and with games of pure chance, as well as with games
involving an element of skill.
Consider the following Table 5, which comprises a pay table for a conventional game:
TABLE 5
Symbol Award
777 25
BBB 10
PPP 5
CCC 1
Mixed 0
This is a typical slot machine game, in which the player wins the award shown in the second column, if the machine generates the corresponding symbol in the first column. As before, the amount of the
award is in arbitrary units. For example, if the wager is one dollar, and if the machine generates “777”, then the player wins 25 dollars.
Consider now what happens when the rules of the game allow a player to wager a double bet. In a conventional game, if the player wagers two dollars, and if the machine generates “777”, the player
will win exactly 50 dollars. That is, the awards are simply multiplied by an integral factor corresponding to the size of the wager. The two-credit game is no more entertaining than the one-credit
version, except that the wagers and awards are doubled.
In the reverse-mapping system of the present invention, by contrast, the above-described two-credit game becomes much more interesting. In a two-credit lottery game, wherein the player wagers double
the basic amount, the system selects two awards from the available pool, and adds these awards together. Then, the system reverse-maps this sum to an appropriate symbol display. Thus, in one
embodiment, the two-credit version of the game represented by Table 5 will not resemble Table 5, but instead has possible awards which are 50, 35, 30, 27, 26, 25, 20, 15, 12, 11, 10, 7, 6, 5, 4, 3,
2, 1, and zero. The latter numbers are the awards which can be obtained by drawing two numbers, sequentially and independently, from the second column of Table 5. (Note that one or both of the awards
drawn could be zero, and the same award could be drawn twice.) The maximum award, 50, occurs when the system draws “25” twice; the next highest award, 35, occurs when the system draws a “25” and a
“10”, etc.
After calculating the award, the system reverse-maps the award to an appropriate symbol which is displayed to the player.
The above-described game need not be limited to a two-credit play. Even with a standard pay table and a single credit play, a “25” could result in two awards of “10”, and one award of “5”, in a
multiple-award display sequence.
However, it will be appreciated that not every combination of two awards will necessarily be represented in a pay table displayed to the player. If the selected combination is not in the pay table
seen by the player, the system can be programmed to award the highest amount available, consistent with the awards drawn, and to use the balance to fund a special bonus, a progressive prize, and/or a
“mystery pay”. For example, if the pay table displayed to the player shows possible awards of 50, 25, 10, and 0, and if the sum of the two awards drawn is 35, the system will award 25 to the player
(because 25 is the highest award in the pay table consistent with the awards drawn), and will place the remaining 10 credits in a special fund. Instead of a special fund, the system could provide the
player with a secondary “free” spin which will provide the remainder of the award amount. Thus, for example, if the award amount is 35, a “777” could be displayed and pay 25, followed by an automatic
or player-initiated secondary spin which displays “BBB” and pays 10, for a total award of 35.
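The two-credit handling described in this paragraph might be sketched as follows; the draw of the two awards is shown here as two independent selections from a finite pool, and the numbers in the usage line reproduce the 35 = 25 + 10 example from the text.

import random

def two_credit_play(pool, displayed_awards):
    # Draw two awards independently, add them, pay the highest amount in
    # the displayed pay table not exceeding the sum, and place the balance
    # in a special fund.
    first = pool.pop(random.randrange(len(pool)))
    second = pool.pop(random.randrange(len(pool)))
    total = first + second
    paid = max(a for a in displayed_awards if a <= total)
    return paid, total - paid     # (amount awarded, amount sent to the special fund)

paid, banked = two_credit_play([25, 10], displayed_awards=[50, 25, 10, 0])
print(paid, banked)               # prints 25 10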
The special fund could be used for predetermined or reverse-mapped bonus or special award play combinations, which may allow the player an additional play without the need to insert more money.
The system could be programmed to distribute the bonus to a player after a predetermined number of plays on the machine. That predetermined number of plays could be fixed (e.g. a bonus may be
distributed after, say, every fifty plays), or it could be variable. Thus, the interval between awards of bonus plays can be made to vary. The length of the interval could be determined by a random
or pseudo-random process, or by some other algorithm.
Alternatively, the special fund could be used to build up a progressive award, which is also awarded to a player after a predetermined number of plays, or after some random or pseudo-random number of plays.
In all of the embodiments of the present invention, it is possible to distribute un-awarded amounts to a plurality of separate funds used to provide progressive or bonus awards. An un-awarded amount
could be divided equally among several such funds, or it could be divided according to any other scheme.
In still another alternative, the special fund could provide a “mystery pay” which distributes money to a player, on a random basis. The mystery pay could be distributed after a predetermined number
of plays, or after a randomly determined number of plays. The amount distributed could be the entire amount of the fund, or some fixed or randomly-determined portion thereof.
In a particular embodiment, the system could be programmed to distribute all or part of the special fund after a predetermined number of consecutive losing plays. For example, the system could be
programmed to distribute an award, from the special fund, if a player has lost ten plays in a row. The number of losing plays required could itself be a variable number, produced by a random number
generator, or by other means. So the interval between special awards could be made to vary in an unpredictable manner.
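The consecutive-loss rule could be sketched as below; the range from which the threshold is drawn is an arbitrary assumption, as is the rule that a randomly determined portion of the fund is distributed.

import random

def maybe_award_special(consecutive_losses, special_fund):
    # After a randomly chosen number of consecutive losing plays, distribute
    # a randomly determined portion of the special fund.
    threshold = random.randint(5, 15)          # assumed range of losing plays required
    if consecutive_losses >= threshold and special_fund > 0:
        portion = random.randint(1, special_fund)
        return portion, special_fund - portion
    return 0, special_fund

award, remaining = maybe_award_special(consecutive_losses=10, special_fund=40)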
It should be appreciated that, while the above-described methods produce a game which is exciting and unpredictable, from the point of view of the player, the amounts ultimately paid out to players
are the same as in a conventional game. Indeed, from the point of view of the operator of the game, the games provide the operator with the same expected return as with a conventional game. Also, the
games of the present invention can be easily set to provide whatever overall payout ratio is desired, simply by adjusting appropriate software parameters.
The above-described two-credit game applies both to lottery and non-lottery games. In the case of a lottery, the system simply draws two awards from the finite pool. In a non-lottery game, having a
theoretically infinite pool of awards, the system calculates two plays, using the same probability distribution, and adds the awards obtained from both plays to yield an aggregate award. In both
cases, if this aggregate award is found in the pay table shown to the player, the award is reverse-mapped to an appropriate symbol to be displayed. If the computed award is not found in the table,
the player receives the highest available award consistent with the computed award, with the balance being directed to a special fund.
The two-credit game described above can comprise a game of pure chance, or it can be a game having an element of skill. If the game has an element of skill, the player may affect the amount awarded
by playing optimally, as described earlier. It is also possible for the system to withhold a portion of an award, even where the player performs optimally, and to deposit the withheld portion into a
special fund.
Clearly, the above discussion can be generalized to allow other multiple-credit games, such as three or four-credit games, using the same principles. From the above discussion, it is clear that these
games will differ from each other, much more so than multiple-credit games of the prior art.
In all of the games which require selection of two or more game set elements from a finite pool, the selections can be performed in a random manner, such as by drawing a game set element at random
from the pool. Alternatively, the selections can be made sequentially, i.e. by selecting two or more game set elements which are “adjacent” to each other in the computer memory. Also, the selections
of game set elements can be independent, i.e. the selection of one game set element is independent of the selection of another game set element.
In the above examples, games involving skill have been described in terms of an optimal strategy. However, the invention can be generalized to include games in which the player wins an award by
selecting some predetermined strategy, which strategy may not necessarily be optimal. The game would otherwise be played in the same manner as described earlier.
In another variation, instead of awarding a maximum amount if the player selects a predetermined (or optimal) strategy, and a lesser amount otherwise, the system may be programmed simply to reduce
the probability of obtaining the maximum amount in the event that the player does not select the correct strategy. In this case, there would be a plurality of possible awards, each one associated
with a probability, and one award would be selected accordingly. In a more general case, the system could even be programmed to award an amount greater than the original maximum amount (such as by
awarding a prize from an accumulated award bank), with a predetermined probability. The latter variation adds considerable excitement and unpredictability to the game.
Another significant aspect of the invention is its ability to generate a plurality of games which, in appearance, are entirely different from each other, but which games are derived from the same
game set. Tables 6, 7, and 8 represent different games that are derived directly from the basic pay table shown in Table 5. The games associated with these tables are described in the following paragraphs.
TABLE 6
Symbol Award
777 Progressive + 25
BBB 9
PPP 4
CCC 1
Mixed 0
Table 6 represents a one-credit game. In Table 6, the highest possible outcome is an award of 25 credits plus the balance in a progressive fund. The latter outcome is mapped to the symbol “777”. The
other possible outcomes are 9, 4, 1, and 0. Note that the outcomes 9, 4, and 1 are obtained by subtracting one from each outcome in the corresponding positions of Table 5. While the game shows, to
the player, the outcomes 9, 4, and 1, in reality the system awards 10, 5, and 2, and deposits the additional credit into the progressive fund. The player does not know that one credit has been
“withheld” and deposited. In this example, there is no contribution made to the progressive fund for the outcome “1” associated with the symbol “CCC”.
Thus, it will be apparent that the game represented by Table 6 is really the same game as that shown in Table 5, in that the system makes the same awards, with the same overall payout ratios, and
with the same probabilities. The difference between these games is in how and when the awards are distributed. In a fundamental sense, one can say that the game of Table 6 is derived from that of
Table 5.
In the example of Table 6, there is only one symbol associated with some of the outcomes. The use of one symbol is for clarity of illustration; in general, there will be more than one such symbol, as
a given outcome may be reverse-mapped to any of several different symbols, as explained above.
Table 7 shows a two-credit game, also derived from Table 5, which comprises a game of Keno, in which players attempt to select winning combinations of numbers arranged on a grid.
TABLE 7
Hits Award
18 of 18 50
17 of 18 35
16 of 18 30
15 of 18 27
14 of 18 26
13 of 18 25
12 of 18 20
11 of 18 15
10 of 18 12
9 of 18 11
8 of 18 10
7 of 18 7
6 of 18 6
5 of 18 5
4 of 18 4
3 of 18 3
2 of 18 2
1 of 18 1
Table 7 shows the awards associated with each possible number of “hits” made by the player. The awards comprise combinations of two awards taken from Table 5, because the game is a two-credit game.
For example, if the system selects 25 and 25, the award is 50. If the system selects 25 and 10, the award is 35, and so on. Thus, the game represented by Table 5 is used to create a Keno game which has a
completely different outward appearance, but which nevertheless uses the same game set. As before, one can construct the game such that each outcome is reverse-mapped into any of a plurality of
display symbols.
Table 8 shows still another example of a game derived from Table 5. In Table 8, the game is again played with two credits, and there is a base game and a bonus wheel play.
TABLE 8
Base Game Play Bonus Wheel Play
Symbols Award Symbols Award
777 25 777 Progressive + 25
BBB 10 BBB 9
PPP 5 PPP 4
CCC 1 CCC 1
The base game play is the same as for the one-credit game represented by Table 5. But since this is a two-credit game, the system draws another award, using the pay table shown at the right-hand side
of Table 8. The pay table on the right-hand side is the same as that of Table 6, and this portion of the game works in the same way. The system displays the result of a bonus play, and deposits
unused credits into a progressive fund. Thus, the resulting game is different from any of the other games previously described, although it is still derived from the same basic pay table shown in
Table 5.
In all of the above examples, the game set does not include any information regarding the symbols displayed to the player, because many symbols and award outcomes are possible, based on the
particular game set element that was selected.
Thus, the player terminal can provide many game variations, which are independent of a fixed and predetermined set of outcomes that comprise a game set.
In still another variation of the present invention, awards from different game sets can be selected and added together. For example, in a two-credit game, the system can select awards from two
different game sets, add them together, and reverse-map the result to a symbol displayed to the player. Part of the resulting award may be withheld and deposited to a fund used to support bonus plays
etc., as described above. The different game sets could be stored in the memory of a particular player terminal, or they could be game sets maintained by a central game server which provides game set
elements to various terminals upon request. Games derived by adding awards from two or more different game sets clearly have even greater potential for variety and entertainment value, than the games
created by selecting two or more elements from the same game set.
The selections of multiple awards from the same or different game sets comprise independent probability events. That is, when two or more game set elements are to be selected, the system selects each
one according to a particular probability distribution (which, in the case of a lottery, may be variable as the number of game set elements changes), each selection being statistically independent of
the previous selection.
In general, and regardless of the type of game, when two or more awards are selected by the system, these awards are added internally to form an intermediate sum, and a predetermined value may be
subtracted from this intermediate sum to produce an award which is displayed and paid to the player. The predetermined value is then added to a separate fund which is used to support other awards, as
described above. The player does not see the intermediate value, and is therefore not aware that the system deducted part of his or her potential award and deposited it into the special fund. Note
also that, in one special case, the predetermined value may be set always to zero, in which case there is no fund for bonus awards.
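A sketch of the general settlement rule just described (names are illustrative; the clamp to zero is an assumption, since the text does not say what happens if the predetermined value exceeds the intermediate sum):

def settle_play(awards, predetermined_value, bonus_fund):
    # Sum the independently selected awards; the intermediate sum is never shown to the player.
    intermediate = sum(awards)
    diverted = min(predetermined_value, intermediate)   # assumption: never pay a negative award
    paid = intermediate - diverted
    return paid, bonus_fund + diverted

paid, fund = settle_play([10, 5], predetermined_value=2, bonus_fund=0)   # pays 13, fund becomes 2

Setting predetermined_value to zero reproduces the special case mentioned above, in which no bonus fund is accumulated.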
The invention can be modified further by including, in the game sets, certain triggering event designators. In this case, the game set elements may include data in addition to, or instead of, an
amount of an award. For example, the game might include bonus plays or progressive plays which are triggered when a selected game set element includes a flag which tells the system to award a special
bonus. When such an element is selected, its value may be determined either by a predetermined fixed bonus, or by a progressive fund created from accumulated contributions of other players, and that
element is reverse-mapped to an appropriate symbol or symbols as before. Also, if the system selects a game set element which identifies a bonus, the player can be awarded all or part of a stored
bonus fund.
In general, a game set element may reverse-map to a single fixed award, or it could reverse-map to an initial award and a series of subsequent bonus awards. Also, one could obtain a bonus award
without necessarily drawing a game set element that is coded with a designator described above.
A simple example of a multiple award sequence is as follows. Suppose that, on a particular play, the system determines that the player will win 10 units. Suppose further that the possible awards are
1, 2, 3, 5, or 10. The system could display a symbol which corresponds to the award of 10. The player would be paid, and the game would be over. Alternatively, the system could display one of the
lesser awards, and could give the player one or more “free” chances to play. On each subsequent “free” play, the player would win another award, such that when the sequence is over, the player would
have won a total of 10 units. This award sequence could be automatic, or it could require the player to provide input, such as by pressing a button to start each new play. In any case, the total
amount awarded is the same as before, but in the latter alternative, the player receives the award in several packages, through the bonus play sequence described.
The concept of multiple award sequences can be incorporated into any of the embodiments of the invention described above. An award can be made in one lump sum, or in a series of steps.
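A sketch of paying a fixed total in several packages, as in the 10-unit example above (the particular split is arbitrary):

def split_award(total, step_awards):
    # Yield a series of bonus-play awards that sum exactly to the total.
    remaining = total
    for a in step_awards:
        a = min(a, remaining)
        if a == 0:
            break
        remaining -= a
        yield a
    if remaining:
        yield remaining          # the final "free" play pays out whatever is left

print(list(split_award(10, [3, 2, 3])))   # e.g. [3, 2, 3, 2] -- still totals 10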
Another important advantage of the reverse-mapped game of the present invention is that the game can be very easily modified. For example, one can change the character of the game by changing the
probabilities of obtaining each possible outcome. This change can be accomplished directly, without the need to make changes in the symbols. In effect, the present invention allows the operator to
build a new slot machine without having to construct new wheels. The “new” machine is created simply by modifying software parameters. The present invention therefore makes it possible to build an
enormous variety of games, using the same hardware, and with minimal effort to make each change.
The number of games that can be played with the present invention is virtually limitless. Whereas the typical lottery “scratcher” game, or pull tab game, is limited to a particular format, and to
fixed award and display outcomes, the present invention can generate games of much greater variety. The present invention can produce multi-level bonus games, such as second-chance bonus wheel games,
games of skill such as video poker or video black jack, games with a variable number of game set elements applied to a single play, such as a variable-bet game which allows the player to wager a
different amount on each play, and progressive games as described above.
The invention is not limited to a particular form of display mechanism. The displays can be spinning reel displays, video monitor displays, wheel displays, dot matrix LED displays, vacuum fluorescent
displays, or combinations of the foregoing, to enhance the entertainment level of the player terminal.
In the embodiment of the invention wherein a plurality of player terminals are connected by a network, the same game set could be used for different games played on different terminals in the
network. For example, one terminal might support a 1-3 credit game, while another terminal might support a 1-5 credit game, while both terminals derive their game set elements from the same pool,
such as a pool located in a central game server. Alternatively, a single game set could be used to operate different games on a single stand-alone terminal.
In general, any terminal programmed according to the present invention may include the capability of splitting an award into a base game outcome and a bonus play outcome, or of withholding a portion
of a basic award and depositing the withheld portion into a fund which provides for progressive awards or mystery pays. A game may have two independently-funded prize structures. For example, the
game may have a one-dollar total wager required to play, with two cents of each wager being used to fund an accumulating fund, the remainder being directed to the basic game. The latter withholding
could be practiced whether or not the game is a lottery, and whether or not the game involves an element of skill which affects the award to the player.
The present invention also includes another form of video lottery poker, described in the following paragraphs. This video poker game includes two phases, effectively giving the player two chances to
win, the first based on a hand of cards initially dealt, and a second chance based on the game strategy chosen by the player.
In the first phase of the video lottery poker game, the system effectively “deals” an initial hand, by selecting a game set element from a pool of game set elements, each game set element
representing a possible hand. If the game set element (i.e. the hand dealt to the player) corresponds to one of a plurality of particular winning categories, the player wins and the game is over. An award is then made to the player according to a predetermined pay table.
An example of a possible pay table for this game is as follows:
Weight Pay
Royal Flush 4 250
Straight Flush 36 50
Four of a Kind 624 25
Full House 3744 8
Flush 5108 6
Straight 10200 4
Decision Hands 2579244
Total 2598960
The first six rows of the pay table shown above define the categories which immediately win an award, i.e. royal flush, straight flush, etc. The weights for each category represent the number of ways
to produce each category from a regular 52-card deck. These weights, when divided by the total number of possible hands, comprise the probabilities of obtaining each category.
The right-hand column represents the payout for each category. Thus, if a player wagers one unit, and obtains a royal flush, the award is 250 units, etc. The amounts in the pay table can be chosen
arbitrarily by the operator of the system.
The weightings indicated for the pay table shown above are implemented by providing a pool of game set elements in which the number of elements representing a particular category is proportional to
the probability of obtaining that category. For example, there are four ways of obtaining a royal flush, 36 ways of obtaining a straight flush, etc. The numbers of game set elements corresponding to
a royal flush, straight flush, etc. are selected to be in proportion to the weights shown, i.e. 4, 36, 624 etc. When the system draws a game set element from an undepleted pool, the probability of
obtaining a particular result will be in proportion to these weights. Note, however, that after a game set element is withdrawn from the pool, the probability of obtaining a particular result on a
subsequent play is slightly altered, due to depletion of the pool. But when the game is played many times, and especially when the pool is periodically replenished, the overall probability will be in
close accord with the theoretical value.
Note also that, instead of assigning weights to each hand based on the actual probability of obtaining such hand, the weights shown in the table could be arbitrarily selected, and could be entirely
unrelated to the theoretical probability. In this case, the weights can be controlled simply by choosing the number of game set elements, present in the pool, for each category. The mechanics of the
game are otherwise the same, though the results will differ from that based on actual probabilities.
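A sketch of building and drawing from a finite, weighted pool of game set elements as described (the category names and weights are taken from the pay table above; materializing millions of elements is done here only to mirror the description -- a real implementation would more likely draw by weight):

import random

WEIGHTS = {"Royal Flush": 4, "Straight Flush": 36, "Four of a Kind": 624,
           "Full House": 3744, "Flush": 5108, "Straight": 10200,
           "Decision Hand": 2579244}

def build_pool(weights):
    pool = [category for category, w in weights.items() for _ in range(w)]
    random.shuffle(pool)
    return pool

pool = build_pool(WEIGHTS)   # finite pool: probabilities drift slightly as it is depleted
first_draw = pool.pop()      # one game set element; the pool is replenished periodically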
If the initial hand that is dealt, i.e. the game set element drawn from the pool, does not correspond to one of the six winning categories shown in the pay table, the hand is said to be a “decision
hand”. A decision hand is one in which the player is given an opportunity to play, i.e. to decide whether to hold or draw cards. In the example given above, there are 2,579,244 such decision hands.
Thus, the sum of the weights shown in the table is 2,598,960, which is the number of five-card hands that can be drawn from a 52-card deck.
If the player receives a decision hand, i.e. if the player “loses” on the initial play, the player is given an opportunity, in effect, to play again, i.e. to replace one or more cards of the hand.
The player indicates to the system, by pushing appropriate buttons, or by other equivalent means, which cards to hold and which ones to discard. After receiving the decisions of the player, the
system selects one of a plurality of “decision pools” which collectively summarize all of the possible situations that can be encountered by a player, and all of the possible strategies that can be
selected by the player. Another game set element is then drawn from the selected decision pool. Before explaining this second card-drawing process, it is important to understand the structure of the
decision pools.
One example of a set of decision pools is illustrated in the following tabulation. The notation used in the table will be explained immediately thereafter.
1. 4C RF, A-J, FLUSH
2. 4C RF, A-J, STRAIGHT
3. 4C RF, A-J, PAIR
4. 4C RF, A-J
5. 4C RF, A-10, FLUSH
6. 4C RF, A-10, STRAIGHT
7. 4C RF, A-10, PAIR
8. 4C RF, A-10
9. 4C RF, K-10, FLUSH
10. 4C RF, K-10, STRAIGHT
11. 4C RF, K-10, PAIR
12. 4C RF, K-10
13. 4C SF, Q, J,10,9
14. 4C SF, J,10,9,8
15. 4C SF, 10 or below
16. 4C FL, 3 high cards
17. 4C FL, 2 high cards
18. 4C FL, 1 high card
19. 4C FL
20. 4C ST, 3 high cards
21. 4C ST, 2 high cards
22. 4C ST, 1 high card
23. 4C ST
24. 3 of a kind
25. 2 pair
26. 1 paying pair
27. 1 small pair
28. 3C RF,AKQ,AKJ,AQJ
29. 3C RF,KQJ
30. 3C RF,AK10,AQ10,AJ10
31. 3C RF,KQ10,KJ10
32. 3C RF,QJ10
33. 2C RF,AK,AQ,AJ
34. 2C RF,KQ,KJ
35. 2C RF,QJ
36. 2C RF,A10
37. 2C RF,K10
38. 2C RF,Q10
39. 2C RF,J10
40. AKQJ
41. KQJ
42. 2 high cards, 1 high card discarded
43. 1 high card,A
44. 1 high card,K
45. 1 high card,Q
46. 1 high card,J
47. Redraw
48. 1 paying pair with 1 card held
49. 1 paying pair with 2 cards held
50. 1 small pair with 1 card held
51. 1 small pair with 2 cards held
52. 3 of a kind with 1 card held
53. 3C ST,0H,2 high cards
54. 3C ST,0H,1 high card
55. 3C ST,0H,0 high cards
56. 3C ST,1H,2 high cards
57. 3C ST,1H,1 high card
58. 3C ST,1H,0 high cards
59. 3C ST,2H,2 high cards
60. 3C ST,2H,1 high card
61. 3C ST,2H,0 high cards
62. 3C ST,3H,2 high cards
63. 3C ST,3H,1 high card
64. 3C ST,3H,0 high cards
65. 2C ST,0H,1 high card
66. 2C ST,0H,0 high cards
67. 2C ST,1H,1 high card
68. 2C ST,1H,0 high cards
69. 2C ST,2H,1 high card
70. 2C ST,2H,0 high cards
71. 2C ST,3H,1 high card
72. 2C ST,3H,0 high cards
73. 2C ST,4H,1 high card
74. 2C ST,4H,0 high cards
75. 3C FL,0H,2 high cards
76. 3C FL,0H,1 high card
77. 3C FL,0H,0 high cards
78. 3C FL,1H,2 high cards
79. 3C FL,1H,1 high card
80. 3C FL,1H,0 high cards
81. 3C FL,2H,2 high cards
82. 3C FL,2H,1 high card
83. 3C FL,2H,0 high cards
84. 3C FL,3H,2 high cards
85. 3C FL,3H,1 high card
86. 3C FL,3H,0 high cards
87. 2C FL,0H,1 high card
88. 2C FL,0H,0 high cards
89. 2C FL,1H,1 high card
90. 2C FL,1H,0 high cards
91. 2C FL,2H,1 high card
92. 2C FL,2H,0 high cards
93. 2C FL,3H,1 high card
94. 2C FL,3H,0 high cards
95. 2C FL,4H,1 high card
96. 2C FL,4H,0 high cards
97. 4C ST,1H,3 high cards
98. 4C ST,1H,2 high cards
99. 4C ST,1H,1 high card
100.4C ST,1H,0 high cards
101.4C,2H,3 high cards
102.4C,2H,2 high cards
103.4C,2H,1 high card
104.4C,2H,0 high cards
105. 1 small card
The meaning of the above notation is as follows:
Pool No. 1 means that the player has received four cards towards a royal flush (Ace through Jack) plus a fifth card which makes a flush, and has elected to discard that fifth card.
Pool No. 2 means that the player has received four cards towards a royal flush (Ace through Jack), plus a fifth card which makes a straight (but not a straight flush, which would have resulted in an
immediate win), and the player has discarded the fifth card. Note that the fifth card must be one of the three 10s.
Pool No. 3 means that the player has received four cards towards a royal flush (Ace through Jack), plus a fifth card which forms a pair (with one of the other cards), and has elected to discard the
fifth card.
Pool No. 4 means that the player has received four cards towards a royal flush, plus another card which does not place the situation within the definition of Pool Nos. 1-3. The player has discarded
this fifth card. Note that, in general, the pools are arranged in a descending hierarchy; a given pool excludes what is covered in a previous pool. The hierarchy also relates to optimal strategies
for the given pay table. Pool No. 1 is the most optimal, No. 2 is the next, and so on, down to Pool No. 105 which is the least optimal.
Pool No. 5 means that the player has received four cards towards a royal flush, including an Ace and a 10 plus two of the other cards needed for a royal flush, together with a fifth card of the same suit (but not
making a combination covered by a previous pool). The player has discarded the fifth card.
Pool No. 6 is similar to No. 5, except that the fifth card forms a straight, and is discarded by the player.
Pool No. 7 means that the player receives four cards towards a royal flush, plus one card forming a pair (Jacks or better), and the player discards the fifth card.
Pool No. 8 is similar to No. 7, except that the fifth card does not form a pair (or any combination covered by the previous pools).
Pool Nos. 9-12 are similar to Pool Nos. 5-8, except that the player receives a King and 10 instead of Ace and 10.
Pool No. 13 means that the player has received four cards towards a straight flush, including a Queen, Jack, 10, and 9, with the fifth card not forming a straight or flush. The player has discarded
the fifth card. In Pool No. 14, the cards held are Jack, 10, 9, 8, and in Pool No. 15, the cards held are 10 or below.
In Pool Nos. 16-19, the player receives four cards towards a flush, with either three, two, one, or zero high cards. A “high card” means a Jack or higher. The player discards the fifth card.
In Pool Nos. 20-23, the player receives four cards towards a straight, with either three, two, one, or zero high cards, and discards the fifth card.
In Pool No. 24, the player receives three of a kind, and discards the remaining two cards.
In Pool No. 25, the player receives two pairs, and discards the remaining card.
In Pool No. 26, the player receives one paying pair (Jacks or better), and discards the remaining cards.
In Pool No. 27, the player receives one small (non-paying) pair, and discards the remaining cards.
In Pool Nos. 28-32, the player receives three cards towards a royal flush, with the various combinations indicated. The player discards the remaining two cards.
In Pool Nos. 33-39, the player receives two cards towards a royal flush, with the various combinations indicated, and discards the remaining three cards.
In Pool No. 40, the player receives an Ace, King, Queen, and Jack (but not a combination covered in a previous pool), and discards the remaining card.
In Pool No. 41, the player receives a King, Queen, and Jack (but not a combination covered in a previous pool), and discards the remaining cards.
In Pool No. 42, the player receives two high cards, and discards one of them, plus the remaining cards.
In Pool Nos. 43-46, the player receives one high card (Ace, King, Queen, or Jack) and discards the remaining cards.
In Pool No. 47, the player has elected to discard the entire hand and draw new cards.
Pool Nos. 48-105 are sub-optimal for the pay table shown above. In Pool Nos. 48 and 49, the player has received one paying pair, and has held either one or two cards beyond that pair, and discarded
the remainder. In Pool Nos. 50 and 51, the player receives one small (non-paying) pair, and has held either one or two cards, and discarded the remainder. In Pool No. 52, the player receives three of
a kind, and holds one additional card, and discards the other card.
Pool Nos. 53-100 indicate combinations of cards which form parts of straights or flushes. The notation "0H" means that there are zero "holes" or gaps. Thus, Pool No. 53 means that the player receives
three cards towards a straight, with no gaps, and with two high cards. An example is Queen, Jack, 10. The player has elected to discard the remaining two cards.
In each of the pools from No. 54 through 100, the player holds the cards that make the partial straight or flush, and discards the others.
In Pool Nos. 101-104, the player receives four cards, with two gaps, and with three, two, one, or zero high cards. The player discards the fifth card.
In Pool No. 105, the player has received one small card, and elects to discard the other four cards.
The pools enumerated above are intended to cover all possible situations and all possible strategies that can be selected by the player. Thus, in practice, the correct pool can be determined by a
table lookup, and without any substantial calculation. For example, if a player receives four cards towards a royal flush, including an Ace and 10, but not a flush, straight, or pair, and if the
player elects to discard the fifth card, then the system associates this occurrence with Pool No. 8. As used in this particular game, the term “occurrence” is defined as a hand presented to the
player, followed by a strategy elected by the player. Thus, for each occurrence, there is a unique pool to which the system will refer.
To each pool, one associates ten weighted “bins” containing game set elements. (In practice, there are no real bins, but instead there is a mix of game set elements, each category of game set element
being present in accordance with its weighting or probability.) The ten “bins” correspond to the ten basic categories of poker, i.e. Royal Flush, Straight Flush, Four of a Kind, Full House, Flush,
Straight, Three of a Kind, Two Pair, One Pair Jacks or Better, and None of the Above (losing hand). For each bin, one must determine the number of ways to construct the category associated with that
bin, by drawing the specified number of cards.
The construction of the bins can be understood with reference to the following example relating to the situation of Pool No. 1. Since Pool No. 1 is a decision pool involving the drawing of one new
card, one must determine the number of ways to form each category by drawing one card:
Royal Flush:
There is one way to make a royal flush, namely by drawing a 10 having the same suit as the four cards held. Since there are 47 ways to draw a card (52 cards in the deck, minus the five cards
initially dealt), the probability of this outcome is 1/47.
Straight Flush:
It is not possible to make a straight flush by drawing one card, given the cards already held. The probability of this outcome is zero.
Four of a Kind:
It is not possible to make four of a kind by drawing one card, given the cards already held. The probability of this outcome is zero.
Full House:
It is not possible to make a full house by drawing one card, given the cards already held. The probability of this outcome is zero.
Flush:
There are seven ways to make a flush (there are eight cards remaining in the suit, but one of those cards would make a royal flush, and is previously accounted for). The probability of this outcome is 7/47.
Straight:
There are three ways to make a straight. The probability of this outcome is 3/47.
Three of a Kind:
It is not possible to make three of a kind by drawing one card, given the cards already held. The probability of this outcome is zero.
Two Pairs:
It is not possible to make two pairs by drawing one card, given the cards already held. The probability of this outcome is zero.
One Pair Jacks or Better:
There are 12 ways to make one pair of Jacks or better by drawing one card. The probability of this outcome is 12/47.
Losing Hands:
There are 24 remaining ways of drawing one card. These are the losing hands. The probability of this outcome is 24/47.
One can similarly construct probabilities for each of the ten bins associated with each of the other decision pools. Note that for all pools which involve two-card draws, the total number of ways of
drawing cards is 1081. For three-card draws, the number of ways is 16,215, etc. These numbers are simply the binomial coefficients which indicate the number of ways of selecting various numbers of
objects from a set.
When one has determined probabilities for each possible outcome, one can proceed to construct the bins, by providing game set elements in the proper proportions. In the example shown above, the game
set elements associated with decision pool No. 1 would be present in the following proportions:
1 game set element for a royal flush,
7 game set elements for a flush,
3 game set elements for a straight,
12 game set elements for one pair Jacks or better, and
24 game set elements for losing hands.
The numbers shown above represent proportions, and do not normally comprise the actual numbers of game set elements. To construct the actual pool of game set elements, one would usually multiply each
of the above numbers by the same large integer, such as 10,000 or 1,000,000, while still preserving the weightings shown above.
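A sketch of the Pool No. 1 bin proportions computed above, scaled by a large integer as the text suggests (names and the scale factor are illustrative):

POOL_1_WAYS = {                  # ways to finish the hand with one of the 47 unseen cards
    "Royal Flush": 1,
    "Flush": 7,
    "Straight": 3,
    "One Pair Jacks or Better": 12,
    "Losing Hand": 24,
}
assert sum(POOL_1_WAYS.values()) == 47

SCALE = 10_000                   # the "same large integer" applied to every category
pool_1 = [cat for cat, n in POOL_1_WAYS.items() for _ in range(n * SCALE)]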
As in the first phase of the game, the probabilities associated with the draws from the decision pools will change slightly as the pools are depleted. In the aggregate, however, as the games are
played repeatedly and the pools are replenished, the actual probabilities will converge to the theoretical values.
As in the first phase of the game, the probabilities associated with the draws from the decision pools could also be determined arbitrarily, and without regard to the mathematical probabilities of
obtaining various hands. The invention is intended to include this alternative.
In summary, in the video poker game described above, except for the relatively infrequent occasions where a player wins an award based on the initial hand received, each play involves two distinct
draws from two different pools of game set elements. The first draw is made from the pool corresponding to initial dealt hands. The second draw, if needed, is from the weighted “bins” associated with
a particular decision pool, the decision pool having been selected according to the hand presented to the player and the player's strategy. Thus, this embodiment essentially comprises two lotteries
played in sequence.
The lottery poker game described above therefore has the advantage that it provides a “bonus” to the player. If the player does not win on the first play, the system gives the player another chance,
by asking the player to select cards to be held or discarded. Then, the system operates another lottery, giving the player another chance to win an award.
In still another embodiment of the video poker game, there is only one lottery. In this alternative, the initial hand is randomly generated, not by selecting a game set element from a pool, but by
generating a hand at random and displaying it to the player. Then, the player makes his or her decisions about which cards to hold, and the system proceeds as previously described. That is, based on
the hand dealt to the player and the player's strategy, the system turns to one of the decision pools, and draws a game set element from that pool, according to a probability determined by the number
of each type of game set element in the pool. This embodiment is therefore the same as the preceding version, except that the initial hand is not determined by drawing a game set element from a
finite pool.
FIG. 8 provides a flow chart summarizing the basic programming of the two alternative lottery poker embodiments described above. The system starts in block 81. In block 82, the system derives an
initial hand and displays that hand to the player. Block 82 is intended to represent both of the two alternative embodiments discussed above. In the first of these alternatives, the initial hand is
obtained by drawing a game set element from a finite pool. In the second of these alternatives, the initial hand is simply determined randomly, without reference to a finite pool.
In test 83, the system determines whether the initial hand is a winning hand, i.e. whether it corresponds to the enumerated categories which are intended to win an immediate award. If so, the system
issues the award, in block 84, and the system returns to block 81 to start a new game.
If the hand is not a winning hand, the system continues in block 85, in which the system accepts inputs from the player. These inputs comprise decisions on whether to hold or replace each card. Based
on the cards initially dealt to the player, and on the player's strategy exercised in block 85, the system maps these parameters to one of a plurality of decision pools, in block 86. The system then
randomly draws a game set element from the selected decision pool, in block 87. If test 88 indicates that the resulting hand is a winner, the system issues an award in block 89. Otherwise, the system
returns to block 81.
Note that, in the case where the initial hand is selected by drawing an element from a pool, the pool from which the element is drawn is not the same as the decision pool used later. Thus, for most
plays, the game in this case comprises two draws from two different pools of game set elements.
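A sketch of the control flow summarized by FIG. 8; every argument is a caller-supplied callable, since the patent does not prescribe how dealing, pool selection, or payment are implemented:

def play_lottery_poker(deal, is_winner, choose_holds, select_pool, draw, pay):
    hand = deal()                                   # block 82: from a pool, or purely at random
    if is_winner(hand):                             # test 83
        return pay(hand)                            # block 84
    holds = choose_holds(hand)                      # block 85: player input
    pool = select_pool(hand, holds)                 # block 86: table lookup, no calculation
    final = draw(pool)                              # block 87: second lottery
    return pay(final) if is_winner(final) else 0    # test 88 / block 89; then back to block 81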
The invention is not limited to the particular embodiments discussed above. The invention can be applied to many other kinds of games of chance and skill. The arrangement of player terminals and
central game servers can also be modified in many ways, within the scope of the invention. These and other modifications, which will be apparent to the reader skilled in the art, should be considered
within the spirit and scope of the following claims.
|
{"url":"http://www.google.fr/patents/US6537150","timestamp":"2014-04-20T00:45:58Z","content_type":null,"content_length":"316585","record_id":"<urn:uuid:5da6e299-97c5-4908-8c8e-ec1728dd2ba0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Enumerating (generalized) de Bruijn tori
Given a cyclic word $w$ of length $N$ over a $q$-ary alphabet and $k \in \mathbb{Z}_+$, consider the directed multigraph $G_k(w) = (V,E)$ with $V \subset$ {$1,\dots,q$}$^k$ given by the $k$-lets
(i.e., subwords of $k$ symbols) that appear in $w$ (without multiplicity) and $E$ given by the $(k+1)$-lets in $w$ that appear with multiplicity. An edge $w_\ell \dots w_{\ell+k}$ connects the
vertices $w_\ell \dots w_{\ell+k-1}$ and $w_{\ell+1} \dots w_{\ell+k}$. If $w$ is a de Bruijn sequence, $G_k(w)$ is a de Bruijn graph, and vice versa. So call $G_k(w)$ the generalized de Bruijn graph
corresponding to $w$ and $k$. It is not hard to compute the number of words $w'$ having $G_k(w)$ as their generalized de Bruijn graph, using the matrix-tree and BEST theorems.
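(Added for concreteness, not part of the original question: a minimal Python sketch of $G_k(w)$ as defined above, with vertices as $k$-tuples and edge multiplicities kept in a Counter.)

from collections import Counter

def generalized_de_bruijn_graph(w, k):
    # w: cyclic word as a sequence of symbols; k: vertex length
    n = len(w)
    ext = [w[i % n] for i in range(n + k)]                      # unwrap the cycle far enough
    vertices = {tuple(ext[i:i + k]) for i in range(n)}          # k-lets, without multiplicity
    edges = Counter(tuple(ext[i:i + k + 1]) for i in range(n))  # (k+1)-lets, with multiplicity
    return vertices, edges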
In two dimensions, the picture is much less clear. De Bruijn tori are basically periodic rectangular arrays of symbols in which all possible subarrays of a certain size occur with multiplicity 1.
There is a structure (a hypergraph?)--the "generalized de Bruijn structure"--corresponding to a generic rectangular array of symbols in a generalization of the sketch above, so by analogy call a
rectangular array of symbols over a finite alphabet a generalized de Bruijn torus in this context.
Given one generalized de Bruijn torus, how many others share its multiplicities of rectangular subarrays (equivalently, its generalized de Bruijn structure)?
(Note that even the existence of de Bruijn tori for nonsquare subarrays is uncertain, which is why I'm working in the "generalized" context.)
co.combinatorics computational-complexity open-problem
I just realized that this is likely to be very hard indeed, as it appears related to Levin's universal one-way "tiling expansion" function. See arxiv.org/abs/cs.CR/0012023 – Steve Huntsman Sep 17
'10 at 18:27
Could you explain what is the exact relation? I tried reading Levin's paper and I could not understands the tiling expansion problem clearly. – J.A Sep 25 '13 at 13:10
|
{"url":"http://mathoverflow.net/questions/10752/enumerating-generalized-de-bruijn-tori?answertab=active","timestamp":"2014-04-17T04:34:44Z","content_type":null,"content_length":"50256","record_id":"<urn:uuid:5b9900b2-51af-4a65-abc4-928c70e55fa0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why do you need to line up the decimal points before comparing and ordering numbers with decimals?
Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations
traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
A numeral system (or system of numeration) is a writing system for expressing numbers, that is, a mathematical notation for representing numbers of a given set, using digits or other symbols in a
consistent manner. It can be seen as the context that allows the symbols "11" to be interpreted as the binary symbol for three, the decimal symbol for eleven, or a symbol for other numbers in
different bases.
Ideally, a numeral system will:
In mathematics, the repeating decimal 0.999... (sometimes written with more or fewer 9s before the final ellipsis, or as $0.\overline{9}$, 0.(9), or $0.\dot{9}$) denotes a real number that can be
shown to be the number one. In other words, the symbols "0.999..." and "1" represent the same number. Proofs of this equality have been formulated with varying degrees of mathematical rigor, taking
into account preferred development of the real numbers, background assumptions, historical context, and target audience.
Every nonzero, terminating decimal has an equal twin representation with infinitely many trailing 9s, such as 8.32 and 8.31999... The terminating decimal representation is almost always preferred,
contributing to the misconception that it is the only representation. The same phenomenon occurs in all other bases or in any similar representation of the real numbers.
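(A short worked example, added because the pasted reference material above does not directly answer the question: to compare 0.8 and 0.75, line up the decimal points and pad with zeros so both numbers have the same number of decimal places, 0.80 versus 0.75. Comparing place by place from the left, 8 tenths is more than 7 tenths, so 0.8 > 0.75, even though "75" looks larger than "8" when the points are not aligned. Lining up the decimal points guarantees you are comparing digits of equal place value.)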
|
{"url":"http://answerparty.com/question/answer/why-do-you-need-to-line-up-the-decimal-points-before-comparing-and-ordering-numbers-with-decimals","timestamp":"2014-04-16T18:56:15Z","content_type":null,"content_length":"22238","record_id":"<urn:uuid:c091435f-299c-4f68-8c6a-de39d33670cd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3^(2p) + 3^(3q) + 3^(5r) = 3^(7s). Find the minimum value of p+q+r+s, where p, q, r, s are all positive integers.
|
{"url":"http://openstudy.com/updates/5076a584e4b02f109be37b08","timestamp":"2014-04-18T14:11:48Z","content_type":null,"content_length":"340548","record_id":"<urn:uuid:3a220674-0910-4879-befd-002641bcce7f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Python encoding/decoding for huffman tree
07-21-2012, 02:52 PM
Python encoding/decoding for huffman tree
Hi guys,
I have an assignment in my Python class on Huffman tree encoding and decoding.
I have problems creating my tree right now.
The print statements are just for me to see what's running in my function, so ignore those. But this is my code so far:
class HuffmanNode(object): #parent inheritence?
def __init__(self, left=None, right=None, root=None):
self.left = left
self.right = right
self.root = root
-----------------------------------------------^ thats my huffman node class within this huffman tree class
p = PriorityQueue()
print frequencies
index = 0
for value in frequencies:
if value != 0:
print value
print index
index = index + 1
tup = ()
for i in range(len(frequencies)):
if frequencies[i] != 0:
tup = (i, frequencies[i])
n = self.HuffmanNode(None, None, tup)
p.insert(n, frequencies[i]) #inserted node, freq value
print '----------------------------------------------------------'
# print node.root,node.left,node.right #root =2
total = 0
while not p.is_empty():
a = p.get_min()
if self._root == None:
a = self.HuffmanNode()
a2 = p.get_min()
total = a.root[1], a2.root[1]
node = self.HuffmanNode(a, a2, total)
print a.root
print a.left
print a.right
I followed the wiki's steps: "The simplest construction algorithm uses a priority queue where the node with lowest probability is given highest priority:
Create a leaf node for each symbol and add it to the priority queue.
While there is more than one node in the queue:
Remove the two nodes of highest priority (lowest probability) from the queue
Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
Add the new node to the queue.
The remaining node is the root node and the tree is complete.
I am supposed to manually remove the first two nodes, and then after that only one node gets removed at a time, but I can't seem to get it to work. How do I remove the first two without having the while loop take two nodes every pass?
I tried using an if statement, but I don't know if my if-statement condition is right.
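Not the assignment code above, but a minimal sketch of the standard construction using Python's heapq as the priority queue (it assumes the HuffmanNode class defined earlier and a frequencies list indexed by symbol):

import heapq
import itertools

def build_huffman_tree(frequencies):
    counter = itertools.count()          # tie-breaker so nodes are never compared directly
    heap = [(freq, next(counter), HuffmanNode(None, None, (sym, freq)))
            for sym, freq in enumerate(frequencies) if freq != 0]
    heapq.heapify(heap)
    while len(heap) > 1:                 # loop while MORE THAN ONE node remains
        f1, _, left = heapq.heappop(heap)      # pop the two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        merged = HuffmanNode(left, right, f1 + f2)
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2] if heap else None

Because every pass pops two nodes and pushes one back, the queue shrinks by exactly one per pass, so there is no need to special-case the first two removals; the while condition (more than one node left) handles termination on its own.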
|
{"url":"http://www.codingforums.com/python/268455-python-encoding-decoding-huffman-tree-print.html","timestamp":"2014-04-19T08:29:26Z","content_type":null,"content_length":"7323","record_id":"<urn:uuid:a1e975a3-8267-4576-a25a-c32f51834369>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roger Halbheer on Security
Fixing Risk Management
For quite a while I have not been satisfied with the way we (in the industry) do risk management. In my early days, before I actually entered the security space, I did project management and, as part of it, risk management. The way we did it was fairly simple (as probably most of you do): we rated impact as high/medium/low and assigned a probability. We were fairly sophisticated in that the probability was a percentage. I often said that we do not know whether a 50% probability really is a probability of 50%, but we were fairly confident that 50% was more than 40%. Basically it seemed to work fairly well, but it was not really satisfactory – I just did not have anything better.
Then I started working in security and saw these models called "Return On Security Investment" – ROSI. They ranged from fairly simple ($ impact * probability = cost) to very sophisticated and complex models. I never liked them, and I was fairly vocal about it. The reason was fairly simple: garbage in, garbage out – or, to take a different equation I read recently:
garbage * garbage = garbage^2
As we do not know the impact (what was the impact of Blaster on Microsoft and our reputation, in dollars?) and we do not really know the probability, the formula mentioned above may look precise, but all you get is garbage calculated to two decimal places.
Finally, when I tell customers that they should do more risk management, they sometimes ask me a simple question: How? And I fall short of a really good answer.
However, I am now closer to an approach. I recently read a book called The Failure of Risk Management: Why It’s Broken and How to Fix It, which changed my way of looking at things. Actually it
changed earlier when I read How to Measure Anything: Finding the Value of Intangibles in Business by the same author (Douglas W. Hubbard). The basic idea is to look at what you measure (e.g. the risks) from the perspective of a statistician. Being an engineer, I hated statistics at university, but I think I should have paid much more attention to it. There are a few fundamental claims he makes, in
my opinion:
• We do not work with exact figures (e.g. 40%) but with ranges, where we estimate a 90% probability that the real figure lies in this range (a confidence interval). If I asked you how likely a virus outbreak in your network is, you would not be able to tell me 30%, but you might be able to tell me that the probability is between 20% and 40%. The same with the impact: you might be 90%
confident that the financial impact of an outbreak is between $x and $y. If you are an expert and did some training on that, this is feasible.
• As soon as we think about finding data to underline our estimate, the goal is not to find an exact number but to reduce the size of the interval (the uncertainty).
• You should be able to focus on the most important ranges and not on what is easiest to manage. He shows a way to actually measure the value of information.
• Focus on the values in your model, with the highest uncertainty – where you have the least data.
Once you build a model and define these ranges, what do you do then? Well, there is a technique in statistics called Monte Carlo simulation. Based on the ranges, this method lets us calculate a distribution of the outcomes. It can even model complex systems in which different events correlate.
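As an illustration only -- the ranges below are invented numbers, not from this post, and uniform draws are used purely for brevity where Hubbard would typically calibrate a (log)normal distribution to the 90% interval -- a minimal Monte Carlo sketch in Python:

import random

def simulate_annual_loss(trials=100_000):
    losses = []
    for _ in range(trials):
        p = random.uniform(0.20, 0.40)              # 90% CI for the probability of an outbreak
        impact = random.uniform(50_000, 250_000)    # 90% CI for the cost of one outbreak, in $
        losses.append(impact if random.random() < p else 0.0)
    losses.sort()
    mean = sum(losses) / trials
    p95 = losses[int(0.95 * trials)]                # 95th percentile of the loss distribution
    return mean, p95

The output is a distribution of outcomes (summarized here by its mean and 95th percentile) rather than a single garbage-in number.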
Using mathematical methods – as we do to model other systems – might (or, I would even say, will) be the right path forward. We have to move from art to science.
|
{"url":"http://www.halbheer.ch/security/2010/11/15/fixing-risk-management/","timestamp":"2014-04-20T18:22:52Z","content_type":null,"content_length":"76817","record_id":"<urn:uuid:2768a342-0e2b-49d5-9940-4b21a818119b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resistors In Parallel Calculator If you are looking for a resistor COLOR CODE calculator, then click
This calculator can determine the resistance of up to 10 resistors in parallel.
Enter resistances into the boxes below and when all values have been input, click on the CALCULATE button and the result will appear in the box below that
As a test, if you input resistances of 3, 9 and 18 ohms, your answer should be 2 ohms.
Clicking the RESET button will clear all the boxes.
This calculator can solve other math problems.
Calculating resistors in parallel is PRECISELY the same as the calculations required for INDUCTORS in PARALLEL or for CAPACITORS in SERIES.
This calculator can be used for work problems. For example, 'A' can paint a room in 5 hours and 'B' can paint a room in 6 hours. If they both work together how long will the job take? Input the 5 and
6 just as if they were resistors and get your answer.
This calculator can be used for 'fill' problems. For example, one pipe can fill a water tank in 5 hours, while another pipe can fill the same tank in 6 hours. If both pipes are working at the same
time........hmmm seems eerily familiar to the other problem doesn't it?
Good luck with your math problems.
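A minimal sketch of the same calculation in Python (the 3/9/18-ohm test case and the two-painter work problem from above are used as examples):

def parallel(*values):
    # 1/R_total = 1/R_1 + 1/R_2 + ... ; the same formula solves the work and fill problems
    return 1.0 / sum(1.0 / v for v in values)

print(parallel(3, 9, 18))   # 2.0 ohms
print(parallel(5, 6))       # about 2.73 hours for the two painters working together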
Numbers are displayed in scientific notation with the amount of significant figures you specify. For easier readability, numbers between .001 and 1,000 will not be in scientific notation but will
still have the same precision.
You may change the number of significant figures displayed by changing the number in the box above.
Most browsers, will display the answers properly but if you are seeing no answers at all, enter a zero in the box above, which will eliminate all formatting but at least you will see the answers.
Copyright © 1999 - 1728 Software Systems
|
{"url":"http://www.1728.org/resistrs.htm","timestamp":"2014-04-19T17:02:00Z","content_type":null,"content_length":"8689","record_id":"<urn:uuid:77a75ce1-a437-46c1-b1b5-a3a17f45235d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
|
free-free boundaries
The eigenvalue problem, its boundary conditions, the form of the required solution, the resulting eigenvalue relation, and the critical Rayleigh number with its corresponding wavenumber were all displayed as equation images in the source page and were not recovered; only the connecting text survives ("In view of (52), using (51) we can show that ...", "From this it follows that the required solution must be ...", "... leads to the eigenvalue relation ...", "The critical Rayleigh number ... and the corresponding ...").
(Lecture notes, homepage of Dr. S Ghorai, 2003-01-16.)
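For reference only -- a sketch of the standard free-free result (as in Chandrasekhar's treatment), assuming the usual nondimensionalization; it is supplied as background and is not recovered from the source page:

$$(D^2-a^2)^3 W = -R\,a^2 W, \qquad W = D^2 W = D^4 W = 0 \ \text{at}\ z=0,1,$$
$$W = \sin(n\pi z), \qquad R = \frac{(n^2\pi^2+a^2)^3}{a^2}, \qquad R_c = \min_a \frac{(\pi^2+a^2)^3}{a^2} = \frac{27\pi^4}{4}\approx 657.5 \ \ \text{at}\ \ a_c = \frac{\pi}{\sqrt{2}}.$$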
|
{"url":"http://home.iitk.ac.in/~sghorai/NOTES/benard/node14.html","timestamp":"2014-04-16T22:13:28Z","content_type":null,"content_length":"8040","record_id":"<urn:uuid:c6353bfe-147c-4183-9c85-e16dd19b54dc>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gaussian Elimination Method - Help
That is the correct least squares fit.
Now from there you have an overdetermined system of linear equations. Since a is a 9 x 6 matrix you cannot invert it.
We start with these 9 equations: [the equations were posted as images and were not recovered]
When we substitute x and y for each point we get: [not recovered]
The formula for the least-squares solution of an overdetermined system $a\,c = b$ is $c = (a^T a)^{-1} a^T b$.
The final bivariate polynomial is [the coefficients were posted as an image and were not recovered], which is the same as above.
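A minimal numerical sketch of the same step (the matrices here are random placeholders, since the poster's actual 9 x 6 matrix and right-hand side were images):

import numpy as np

a = np.random.rand(9, 6)    # placeholder for the 9x6 design matrix built from the points
b = np.random.rand(9)       # placeholder for the 9 observed values

c = np.linalg.solve(a.T @ a, a.T @ b)            # normal equations: c = (a^T a)^{-1} a^T b
c_lstsq, *_ = np.linalg.lstsq(a, b, rcond=None)  # equivalent, and more numerically stable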
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=289550","timestamp":"2014-04-16T10:54:16Z","content_type":null,"content_length":"19287","record_id":"<urn:uuid:bfa9de8f-5024-4e62-8bf2-e4a6b54e3b0c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: R: Problem with Left Truncation
From Antoine Terracol <Antoine.Terracol@univ-paris1.fr>
To statalist@hsphsun2.harvard.edu
Subject Re: st: R: Problem with Left Truncation
Date Fri, 23 Oct 2009 12:06:02 +0200
Carlo Lazzaro wrote:
Dear Elaine,
Please find beneath the following point-to-point comments about your query:
<We are doing survival analysis, but unlike other dataset, our dataset
only includes observations that have failed.>
I would not be concerned about all failure=1; how long a patient takes to fail (failure time (tn) - risk onset (t0)) is the relevant issue.
I haven't done the math, but my intuition is the following.
If Elaine's dataset can be considered as a random sample from the
population, then she can use -stset- and proceed as usual.
I can think of one case where it would not be true: Consider the case
where individuals cannot be at risk before a certain calendar date t0
(say, because she is studying spells in a social program that did not
exist before). If Elaine's observation window is relatively (compared to
the average real duration in the state under study) short, and begins
relatively (again, compared to the real durations) close to t0, then
Elaine's sample will only contain spells short enough to have failed
during the observation window. In this case, the coefficients will be
biased (although I'm not quite sure if there is a way to handle that
with Stata).
On the other hand, if the spells could have started at any given date
before the observation window (or long enough before the start of the
observation window, relative to the real durations), then I think her
sample will be random and can be analysed as usual.
|
{"url":"http://www.stata.com/statalist/archive/2009-10/msg01071.html","timestamp":"2014-04-21T10:42:40Z","content_type":null,"content_length":"7904","record_id":"<urn:uuid:86c5e21d-8093-41e0-a271-b44b6cbd4958>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
algebra fraction
September 21st 2008, 07:06 PM #1
algebra fraction
$pi^2/-s^3-2 = - (pi^2/-s^3-2)$
$= pi^2/ -s^3+2$
am i using the subtraction correctly?
Last edited by realintegerz; September 21st 2008 at 08:41 PM.
I don't know what you mean by /- but I assume you left out some parentheses.
pi^2/(-s^3-2)= $\frac{pi^2}{-s^3-2}=-\frac{pi^2}{s^3+2}$.
pi^2/(-s^3)-2= $\frac{pi^2}{-s^3}-2=-\frac{pi^2}{s^3}-2$.
-(pi^2/(-s^3-2))= $-\frac{pi^2}{-s^3-2}=\frac{pi^2}{s^3+2}$.
-(pi^2/(-s^3)-2))= $-\left(\frac{pi^2}{-s^3}-2\right)=\frac{pi^2}{s^3}+2$.
You wrote pi^2/-s^3-2= pi^2/ -s^3+2 which seems obviously wrong.
I'm not sure how your problem looks like. Is it
$\frac{\pi^2}{-s^3} - 2$
ok i changed it...didnt know it would come out like that on here..
the 2nd line is the right side
im just checking if i distributed the negative right or wrong
$-(A+B)=-A-B, -(A-B)=-A+B,-(-A-B)=A+B$,
I hope it helps. I wrote all the possibilities in the previous post.
is that correct ??
If you saying the left hand side equals the bottom right hand side then nope.
You can see this is not true by plugging in a value for s. Try plugging in 1 and you will notice
$\frac{\pi^2}{-(1)^3 - 2}~\text{does not equal}~ \frac{\pi^2}{-(1)^3+2}$
so i distributed the negative correctly or no?
i just need to know if i did or not...because im checking if functions are even, odd, or neither...
the function is h(s) = [ pi^2 over s^3 - 2 ]
Testing to see if it even
$y = \frac{\pi^2}{-s^3-2}$
Replace s with -s
$y = \frac{\pi^2}{-(-s)^3-2}$
which equals
$y = \frac{\pi^2}{s^3-2}$
so it is not the same as the original equation so it is not even.
To see if it odd replace y with -y and s with -s
$-y = \frac{\pi^2}{-(-s)^3-2}$
$-y = \frac{\pi^2}{s^3-2}$
$y = \frac{\pi^2}{-s^3+2}$
which is not the same as the original function so it is neither
the original function itself h(s) does not have -s^3 - 2...
its s^3 - 2
like i said before
the function h(s) = pi^2/s^3-2
i put -s into it, and it isnt the same, so its not even...
and the picture is the work to see if its odd...
h(s) = pi^2/s^3 - 2
h(-s) = pi^2/(-s)^3 - 2
= pi^2/-s^3 - 2
so h(s) doesnt = h(-s)
then i tried this
trying to see if h(-s) = - h(s)
pi^2/-s^3 - 2 = - ( pi^2/ s^3 - 2)
= pi^2/-s^3 + 2
thats all my work
Ok sorry I misread
$y = \frac{\pi^2}{s^3-2}$
To see if it odd replace y with -y and s with -s
$-y = \frac{\pi^2}{(-s)^3-2}$
$-y = \frac{\pi^2}{-s^3-2}$
$y = \frac{\pi^2}{s^3+2}$
So nope it is not odd either
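Not part of the original thread -- a quick symbolic check of the same conclusion using sympy, with h(s) as given in the thread:

import sympy as sp

s = sp.symbols('s')
h = sp.pi**2 / (s**3 - 2)

is_even = sp.simplify(h.subs(s, -s) - h) == 0   # h(-s) == h(s)?
is_odd  = sp.simplify(h.subs(s, -s) + h) == 0   # h(-s) == -h(s)?
print(is_even, is_odd)                          # False False, so the function is neither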
|
{"url":"http://mathhelpforum.com/algebra/49998-algebra-fraction.html","timestamp":"2014-04-17T11:35:34Z","content_type":null,"content_length":"68618","record_id":"<urn:uuid:46d649f5-0750-43b9-8ccf-ad7f50b89d9c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help using induction and inequalities
February 10th 2010, 07:18 PM #1
Feb 2010
Need help using induction and inequalities
I am taking a discrete math class right now and am having some difficulties with induction when working with inequalities. My teacher is very strict about proofs, rightly so, and knocks points
off brutally for mistakes.
The inequality to prove is $\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots(2n)}\geqslant\frac{1}{2n}$ for every positive integer $n$ (the displayed statement was an image in the original post and is restated here from the reply below).
I can show that it's true for n = 1, where both sides are equal to 1/2. The gap between the two widens though when n = 2. Then I plug in k in place of n and assume it to be an arbitrary element of
the set of natural positive numbers. Now looking at the sequence it looks like a factorial of odds on top and a factorial of evens on the bottom which is greater or equal to 1/2k. I am not sure
what good the factorial would do for this problem and I am not even sure if its relevant.
So it would seem that this would be true already because 1/2k will get smaller as k approaches infinity vs the sequence which gets smaller more slowly. I then increment the k to k+1. I then tried
to do what I would normally do with the type of induction problems we have been doing so far, which is take the closed form of the sequence with k inside and add it to the k+1th iteration and try
to prove that adding those together is equivalent to the k+1th closed form.
I tried doing that in this case and multiplied 1/2k times the k+1th iteration of the sequence and after a little bit of factoring got the other side of the inequality at k+1 and then a small
fraction. I am somewhat sure that what I have done is wrong.
Any assistance would be greatly appreciated.
Notice this is equivalent to proving that $0\leqslant 1\cdot 3\cdots(2n-1)-2\cdots(2n-2)$. So, let us call $\ell_n=1\cdots (2n-1)-2\cdots(2n-2)$. Then, $\ell_1=1-0\geqslant 0$. So, now assume
that $\ell_n\geqslant 0$ then $\ell_{n+1}=1\cdots(2n-1)\cdot(2n+1)-2\cdots(2n-2)\cdots(2n)$. Now clearly $2n+1\geqslant 2n$ and so
$1\cdots(2n-1)\cdot(2n+1)-2\cdots(2n-2)\cdot 2n\geqslant 1\cdots(2n-1)\cdot 2n-2\cdots (2n-2)\cdot 2n=2n\cdot \ell_n\geqslant 2n\cdot 0=0$. The conclusion follows
Thanks. I will have to spend some time working through this proof so that I understand it well enough to write it the way the teacher will want. I appreciate the help. I am somewhat confused by
the first step. I probably need to just stare it at longer.
$\frac{1}{2n}\leqslant\frac{1\cdots(2n-1)}{2\cdots 2n}\implies 1\leqslant 2n\cdot\frac{1\cdots(2n-1)}{2\cdots 2n}=\frac{1\cdots (2n-1)}{2\cdots(2n-2)}\implies 2\cdots (2n-2)\leqslant 1\cdots (2n-1)\implies 0\leqslant 1\cdots (2n-1)-2\cdots(2n-2)$
Wow. Tricky problems with simple elegant solutions. Thank you very much.
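Not part of the thread -- a quick numerical check of the inequality for small n (a sanity check only, not a proof):

from math import prod

for n in range(1, 11):
    odds  = prod(range(1, 2 * n, 2))        # 1*3*...*(2n-1)
    evens = prod(range(2, 2 * n + 1, 2))    # 2*4*...*(2n)
    assert odds / evens >= 1 / (2 * n), n
print("inequality holds for n = 1..10")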
|
{"url":"http://mathhelpforum.com/discrete-math/128290-need-help-using-induction-inequalites.html","timestamp":"2014-04-19T04:02:36Z","content_type":null,"content_length":"48360","record_id":"<urn:uuid:62d4d0de-d57c-4a50-ab9b-d1597b3f4b35>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 8 - Homework Corrections
I believe that students in most classes do what we call "homework". Some of them do this in study halls during school hours, and frequently students do their homework at home. But in my experience,
most students come to school with their homework, find out if they have done it correctly, and that is the end of it.
In my writing-intensive geometry class I expected the students to correct their homework. And by correcting it, I meant that they were to do far more than just crossing out their wrong answer, but
also to write the correct answer, explain why their answer was wrong, and why they made that mistake. They were also expected to write a complete explanation of the correct way to do the problem, as
you will see in the remainder of this page.
One question in a homework assignment was as follows: "A circle is inscribed in a quadrilateral. The lengths of the sides of the quadrilateral are as follows: AB = 23, CD = 12. ABCD is circumscribed
around the circle. Find the perimeter of quadrilateral ABCD." Karen had gotten the problem wrong on the homework when she first tried it, because she had not thought of using two variables, and was
overwhelmed with the problem of trying to label the diagram and work with just one variable. In class the next day, as we were discussing the homework, I suggested that those students who had trouble
should try the problem again, using 2 variables. In her HW corrections, Karen drew the following diagram, and used algebra to find the answer:
Here is her written description of her successful method in solving the problem: "First, I drew the diagram above and labeled all the segments. This time, I used two variables, x and y, which really
helped. Here is my first equation:
x + x + (23-x) + (23-x) + (12-y) + (12-y) + y + y = 70
I was worried because there were two variables, and also because the equation was so long! So, when I'd tried it earlier, in my homework, I guess I just got overwhelmed and gave up too soon. But when
I had a second chance (and some hints!) I realized I shouldn't worry about the long equation - I just need simplify it and see what happens, so I crossed my fingers and hoped for the best. It was
very cool because when you see that long equation you think 'oh no!'; I simplified it a bit and this is what I got:
2x + 46 -2x + 2y + 24 - 2y = 70
...which is very cool, because all the x- and y-terms cancel out and you are left with the perimeter = 70! It was so amazing - just when I was about to give up, it all came out great! Next time, I
hope I can stay calm and get it right on the first try!"
The students were asked to do their homework using a regular pencil, and then make corrections in red pencil so that they can quickly spot areas in which they are having trouble. These corrections
prove extremely useful when studying for tests.
I believe that all of this helps students to learn, and to cultivate habits of mind that will serve them well in later years. The students themselves, though they grew weary of writing homework and
test corrections (see chapter 3), nevertheless saw their value, as you will see in their comments below:
"I chose this piece to put in my portfolio because I feel that homework is something that shows the amount of effort you put into the class on your own time. I am proud to say that I always do my
homework, and if I did not understand a problem, I make sure that it is explained in class by a classmate, or by the teacher. I always show my work, so that I can study my homework papers before a
test, especially the parts where I made a mistake, so I don't make that mistake again!" Serena
Danny had a great learning plan, which helped him to be an A student: "I think homework is a great factor in how well you do in class. Homework also reflects how much work you do as a student. I feel
that I put a lot of effort into my homework and notes. In addition, I have a code for my homework: If I feel the homework problem is challenging and easy to forget for the test, I circle it and so
when I study for the test, I start with the circled problems; I know exactly which problems to study. If I circle and star the problem, this means either I still don't understand it or that I have a
Elise wrote a long essay on this topic; when reading it I was very impressed with her effort and her tenacity. She did very well in my class, so I knew that her effort and tenacity paid off! She
wrote: "I do all of my homework in a notebook, and my test corrections too. This ways I always have my previous work to look back on, especially when studying for a test. I always take time to show
all of my work in each problem, so if I find out that my answer is incorrect I can go back through my work and find out where I went wrong so I don't make that mistake again. I write down all the
steps in the solution when I do each problem too, because that way I can see exactly where I went wrong when I do my corrections."
Janine added some nice drawings, to add color to her comments about her homework, which had actually needed very few corrections!
Some of the assignments we did in my class are not part of the traditional curriculum, and were not in our textbook. One of these was one I called "Diagonals of a Polygon." In this assignment, the
students were asked to consider polygons, such as triangle, quadrilateral, pentagon etc., and discover how many diagonals each polygon had. For example, a square has two diagonals - one from the
upper left corner to the bottom right, and the other from the upper right corner to the bottom left. The students explored triangles (which have no diagonals), quadrilaterals, which have two
diagonals, pentagons, which have 5 diagonals (which form a star), etc. I will leave it up to you to explore this idea and discover for yourself how many diagonals other polygons have, and if there is
a formula for finding that. If you like, you can learn a lot more about this topic at the following web page: http://www.mathopenref.com/polygondiagonal.html. This website gave me the following
formula for finding the number of diagonals of any polygon:
Formula for the number of diagonals
"The number of diagonals from a single vertex is three less than the number of vertices or sides, or (n-3).
There are N vertices, which gives us n(n-3) diagonals
But each diagonal has two ends, so this would count each one twice. So as a final step we divide by 2, for the final formula: where n is the number of sides (or vertices)
And then the formula is: n(n-3) divided by 2
Another way of writing that is:
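For readers who like to check such formulas by experiment, the short Python sketch below counts diagonals by brute force and compares the count with n(n-3)/2; the function name is mine and was not part of the assignment:

# Count diagonals of an n-sided polygon directly and compare with n*(n-3)/2.
from itertools import combinations

def diagonals_by_counting(n):
    # A diagonal joins two vertices that are not adjacent around the polygon.
    return sum(1 for a, b in combinations(range(n), 2)
               if (b - a) % n not in (1, n - 1))

for n in range(3, 11):
    print(n, diagonals_by_counting(n), n * (n - 3) // 2)   # the two counts agree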
In her homework corrections on this assignment, Julie wrote: "This worksheet was fun because it makes you feel like you invented the formula all be yourself! I think this was a great assignment
because it was really hands-on, and also it was cool that it wasn't even in the textbook! I liked figuring it out because nobody helped me and I figured almost all of it out myself. The mistake I
made was a really small one, and writing the correction was easy. For some reason, when I multiplied n times n I wrote 2n! Silly me; of course n times n is n squared! Even though I didn't get it
100 per cent correct, I feel like I learned a lot anyway."
The students constructed and explored a variety of polygons and wrote explanations and essays about what they discovered. In exploring a pentagon, Jared wrote the following comments:
"This project was fun because it made me feel like I 'invented' or figured out a formula all by myself. I thought this was a very good project because it was more of a 'hands-on' thing, not just
'reading it in a book and taking their word for it' thing. I always think you learn more this way because you can see from your own experience the 'how' and the 'why' of it. And if you forget the
formula, you can figure it out again. I really like the challenge of finding patterns in numbers and stuff. It's actually like it's yourself who is the teacher, which is cool."
In his homework corrections, Jared wrote "I chose to put this piece because I feel that homework is something that shows the amount of effort you put into a class on your own time. I am proud of my
corrections because I worked really hard on them."
On "Parent Night", when the students' parents came to my classroom, I showed them many of the writing assignments that their sons and daughters had completed. The parents were amazed at the amount of
work the students had done, and were fascinated by the student papers that I shared with them. I asked the parents to write a comment on how they felt about this class, and the "writing-intensive"
approach. One parent wrote: "I enjoyed reading Nicole's reflections, and seeing some of the assignments that she completed. I was especially interested (and impressed) with the Homework Corrections
that the students are required to write. I don't think we did that when I was in school, but what a great idea! It is so important for the students to figure out what they are doing wrong, so that
they can do it right next time."
Another parent wrote "I think it's wonderful that the students review and correct their work, and even explain where they went wrong. It is so much better than just bemoaning their mistakes and then
moving on without getting to the heart of the problem!"
And even the students agreed; even though they thought the corrections were a "pain", they began to see that the work they did achieved results. Kelli wrote "At first I was very unwilling to do the
HW corrections; what a pain! But after completing a few, I found that it really helped. I realized that although I might get some other problem wrong in the future, I sure wouldn't get that one wrong again!"
John wrote "Sometimes on the test I really don't know what I'm doing on a particular problem. But when I get the test back and have to do that darned problem all over again, then I'm forced to figure
out what I did wrong and how to fix it. And I feel like I have accomplished something. Maybe, if I do this enough, I'll start getting them right the first time around. That would be cool -Wow! No
more homework corrections!"
Perhaps the comment that pleased me the most was Karen's, when she wrote: "In doing my corrections this year, I think that I really got to the root of my mistakes, which helped me learn. This is
something that I really liked about this class. In past math classes, I never had this valuable experience, and I think that it has made a really big difference."
"Geometry enlightens the intellect and sets one's mind right. All of its proofs are very clear and orderly. It is hardly possible for errors to enter into geometrical reasoning, because it is well
arranged and orderly. Thus, the mind that constantly applies itself to geometry is not likely to fall into error. In this convenient way, the person who knows geometry acquires intelligence."
Ibn Khaldun (1332-1406)
|
{"url":"http://mathforum.org/sanders/exploringandwritinggeometry/hwcorrections.htm","timestamp":"2014-04-19T18:30:07Z","content_type":null,"content_length":"15357","record_id":"<urn:uuid:60529663-1dbb-4d43-a471-92671125ff6f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Uniform Probability [Archive] - Statistics Help @ Talk Stats Forum
03-17-2010, 09:55 PM
I'm having trouble with this problem, and help would be really appreciated!
Suppose a random variable x is best described by a uniform probability distribution with range 2 to 4. Find the value of a that makes the following probability statement true: P(x < a) = 0.06
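One way to check a candidate answer, assuming the standard continuous uniform distribution on [2, 4] so that P(x < a) = (a - 2)/(4 - 2):

# For X ~ Uniform(2, 4): P(X < a) = (a - 2)/2 = 0.06  =>  a = 2 + 0.06*2 = 2.12
from scipy.stats import uniform

dist = uniform(loc=2, scale=2)   # uniform on [2, 4]
a = dist.ppf(0.06)               # inverse CDF
print(a, dist.cdf(a))            # 2.12  0.06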
|
{"url":"http://www.talkstats.com/archive/index.php/t-11303.html","timestamp":"2014-04-18T08:37:11Z","content_type":null,"content_length":"3575","record_id":"<urn:uuid:6dc241eb-4389-434b-99e9-d6ab41d07f59>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Similarity Parameters
As a rocket moves through the atmosphere, the gas molecules of the atmosphere near the rocket are disturbed and move around the rocket. Aerodynamic forces are generated between the gas and the
rocket. The magnitude of these forces depend on the shape of the rocket, the speed of the rocket, the mass of the gas going by the rocket and on two other important properties of the gas; the
viscosity, or stickiness, of the gas and the compressibility, or springiness, of the gas. To properly model these effects, aerodynamicists use similarity parameters, which are ratios of these effects
to other forces present in the problem. If two experiments have the same values for the similarity parameters, then the relative importance of the forces are being correctly modeled. Representative
values for the properties of the Earth's atmosphere and the Martian atmosphere are given on other pages, but the actual value of the parameter depends on the state of the gas and on the altitude.
Aerodynamic forces depend in a complex way on the viscosity of the gas. As a rocket moves through a gas, the gas molecules stick to the surface. This creates a layer of air near the surface, called a
boundary layer, which, in effect, changes the shape of the rocket. The flow of gas reacts to the edge of the boundary layer as if it was the physical surface of the rocket. To make things more
confusing, the boundary layer may separate from the body and create an effective shape much different from the physical shape. And to make it even more confusing, the flow conditions in and near the
boundary layer are often unsteady (changing in time). The boundary layer is very important in determining the drag of an object. To determine and predict these conditions, aerodynamicists rely on
wind tunnel testing and very sophisticated computer analysis.
The important similarity parameter for viscosity is the Reynolds number. The Reynolds number expresses the ratio of inertial (resistant to change or motion) forces to viscous (heavy and gluey)
forces. From a detailed analysis of the momentum conservation equation, the inertial forces are characterized by the product of the density r times the velocity V times the gradient of the velocity
dV/dx. The viscous forces are characterized by the viscosity coefficient mu times the second gradient of the velocity d^2V/dx^2. The Reynolds number Re then becomes:
Re = (r * V * dV/dx) / (mu * d^2V/dx^2)
Re = (r * V * L) / mu
where L is some characteristic length of the problem. If the Reynolds number of the experiment and flight are close, then we properly model the effects of the viscous forces relative to the inertial
forces. If they are very different, we do not correctly model the physics of the real problem and predict incorrect levels of the aerodynamic forces.
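As a rough numerical illustration of the formula (the air properties below are representative sea-level values and the speed and length are arbitrary, not values taken from this page):

# Reynolds number Re = (rho * V * L) / mu
rho = 1.225      # air density, kg/m^3 (approximate sea-level value)
mu = 1.81e-5     # dynamic viscosity of air, Pa*s (approximate)
V = 100.0        # vehicle speed, m/s (illustrative)
L = 0.5          # characteristic length, m (illustrative)

Re = rho * V * L / mu
print(f"Re = {Re:.3e}")   # on the order of a few million for these values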
Aerodynamic forces also depend in a complex way on the compressibility of the gas. As a rocket moves through the gas, the gas molecules move around the rocket. If the rocket passes at a low speed
(typically less than 200 mph) the density of the fluid remains constant. But for high speeds, some of the energy of the rocket goes into compressing the fluid and changing the density, which alters
the amount of resulting force on the object. This effect becomes more important as speed increases. Near and beyond the speed of sound (about 330 m/s or 700 mph on earth), shock waves are produced
that affect the drag of the rocket. Again, aerodynamicists rely on wind tunnel testing and sophisticated computer analysis to predict these conditions.
The important similarity parameter for compressibility is the Mach number - M, the ratio of the velocity of the rocket to the speed of sound a.
M = V / a
So it is completely incorrect to measure a drag coefficient at some low speed (say 200 mph) and apply that drag coefficient at twice the speed of sound (approximately 1400 mph, Mach = 2.0). The
compressibility of the air alters the important physics between these two cases.
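A similarly rough illustration, using the approximate sea-level speed of sound quoted above:

# Mach number M = V / a, with a ~ 330 m/s near sea level on Earth
a = 330.0
for V in (100.0, 330.0, 700.0):
    print(f"V = {V:6.1f} m/s  ->  M = {V / a:.2f}")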
The effects of compressibility and viscosity on drag are contained in the drag coefficient. For propulsion systems, compressibility affects the amount of mass that can pass through an engine and the
amount of thrust generated by a rocket nozzle.
|
{"url":"http://microgravity.grc.nasa.gov/education/rocket/airsim.html","timestamp":"2014-04-17T00:51:31Z","content_type":null,"content_length":"13964","record_id":"<urn:uuid:aadbdfe9-ae18-40db-a17d-90a18499a4dd>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Regression I
Math Content:
Data Analysis and Probability
This applet allows you to investigate a regression line, sometimes known as a "line of best fit."
Plot several points in a relatively straight line, and then click Show Line. How well does the line approximate the scatterplot? Then, plot another point that does not lie along the line. How does
the regression line change because of the outlier?
Click anywhere on the grid to plot points. To delete a point, hold down the
key, and click on the point you wish to delete. To move a point, hold down the
key, and drag the point to a new location with the mouse.
To change the scale of the graph, change the values of x‑min, x‑max, y‑min, and y‑max, and hit the Set Scale button.
Click Show Line to display the linear regression line.
In the upper left corner, the following values are displayed:
│ n │ The number of points on the graph. │
│ r │ The correlation coefficient. This measure indicates the association between the x‑variable and the y‑variable. Its absolute value roughly indicates how well the line of best fit approximates the data. │
│ y = │ An equation describing the line of best fit. │
(Note that the r-value and the equation of the line only appear after Show Line is clicked.)
The Clear button will reset the graph.
Plot one point on the graph and then click Show Line. Why do you think a line is not graphed?
Clear the graph and plot two points that have whole-number coordinates.
• On your own, find an equation for the line through these two points.
• Click Show Line. Compare the equation for the line drawn to the equation that you calculated. Explain and resolve any differences.
Clear the graph and plot three points. Think about a line that "fits" these three points as closely as possible.
• Is it possible for a single straight line to contain all three of the points you plotted?
• On a piece of paper, plot these same three points, and sketch a line that you think best fits the three points.
• Click Show Line. Do you think that the line graphed fits the points well? How does it compare to the line you drew?
Clear the graph. Place several points on the graph that lie roughly in a straight line, then hit Show Line. The line that appears is the regression line, which is sometimes known as the "line of best fit."
• What is the r-value for the line?
• Place just one additional point on the graph that lies far away from the line. What effect does this point have on the r‑value? What effect does it have on the line of best fit?
• Move several of the points. How does the r-value and line change as points are moved?
The line that is drawn is called the "least-squares regression line." Basically, the least-squares regression line is the line that minimizes the sum of the squared "errors" between the actual points and the corresponding points on the line; this is what makes the line fit the points as closely as possible. To get a better feel for the regression line, try the following tasks (a short code sketch after the tasks below shows the same computation).
• Plot four points so that the regression line is horizontal. Do this in several different ways. What do you notice about the regression line and the r‑value?
• Plot three points (not all on a straight line) so that the regression line is horizontal. What do you notice about the regression line and the r‑value?
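For readers who want to reproduce the applet's computation in code, here is a minimal sketch using NumPy; the data points are invented:

# Fit a least-squares regression line and compute the correlation coefficient r.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 7.2])   # roughly linear, made-up data

slope, intercept = np.polyfit(x, y, 1)   # degree-1 least-squares fit
r = np.corrcoef(x, y)[0, 1]              # correlation coefficient

print(f"y = {slope:.3f}x + {intercept:.3f},  r = {r:.4f}")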
|
{"url":"http://illuminations.nctm.org/Activity.aspx?id=4187","timestamp":"2014-04-20T03:39:16Z","content_type":null,"content_length":"39190","record_id":"<urn:uuid:290a9b24-7c50-4bf4-80a7-a2ef274a19f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Chapter 2
Basic Set Theory
A set is a Many that allows itself to be thought of as a One.
- Georg Cantor

This chapter introduces set theory, mathematical induction, and formalizes the notion of mathematical functions. The material is mostly elementary. For those of you new to abstract mathematics, elementary does not mean simple (though much of the material is fairly simple). Rather, elementary means that the material requires very little previous education to understand it. Elementary material can be quite challenging, and some of the material in this chapter, if not exactly rocket science, may require that you adjust your point of view to understand it. The single most powerful technique in mathematics is to adjust your point of view until the problem you are trying to solve becomes simple.

Another point at which this material may diverge
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/006/1967686.html","timestamp":"2014-04-16T11:11:46Z","content_type":null,"content_length":"7952","record_id":"<urn:uuid:abeac2d5-1c8a-4860-b309-98175677aa4c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Speed of Light May Not be a Limit After All
Science & Technology
Posted: October 12, 2012 06:10AM
An important part of science is the questioning of accepted rules. Last year researchers at CERN reported that neutrinos had traveled faster than the speed of light, which has never been observed
before and is not allowed according to the Theory of Special Relativity. It was later discovered that the speed measurement was the result of a systematic error (but proving that did give us the most
accurate measure of the speed of a neutrino). At the time of the initial discovery though, many scientists started questioning the validity of the Universe's speed limit, and now researchers at the
University of Adelaide have found that limit may not exist after all.
The reason why even theoretical superluminal travel has been considered impossible is because to travel that fast would break the mathematics of the formulae involved. Basically every measurement in
the Universe should always return a real number, but when you exceed the speed of light, some measurements would take on imaginary values (square roots of negative numbers). What the researchers have done, though, is re-examine the theory and extend it so that the formulae no longer break down.
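To make the 'imaginary values' remark concrete, here is a small sketch using the ordinary Lorentz factor of special relativity (the standard textbook quantity, not the extended theory the researchers propose):

# gamma = 1 / sqrt(1 - v^2/c^2) is real for v < c and becomes imaginary for v > c.
import cmath

c = 299_792_458.0   # speed of light, m/s
for v in (0.5 * c, 0.99 * c, 1.5 * c):
    gamma = 1 / cmath.sqrt(1 - (v / c) ** 2)
    print(f"v = {v / c:.2f}c  ->  gamma = {gamma}")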
Sadly it must be stated that the researchers worked on this purely from a mathematical perspective, which makes sense as they are mathematicians. What that means is while they may have found it is
possible to travel faster than the speed of light, they can offer no solution for achieving such speeds. Looks like science fiction remains fiction for a bit longer.
|
{"url":"http://www.overclockersclub.com/news/32862/","timestamp":"2014-04-18T14:13:59Z","content_type":null,"content_length":"23392","record_id":"<urn:uuid:00622a69-5254-4fc6-a833-0137e2a4435c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Strict tuples
Sebastian Sylvan sebastian.sylvan at gmail.com
Mon Mar 20 09:26:15 EST 2006
On 3/19/06, Manuel M T Chakravarty <chak at cse.unsw.edu.au> wrote:
> Loosely related to Ticket #76 (Bang Patterns) is the question of whether
> we want the language to include strict tuples. It is related to bang
> patterns, because its sole motivation is to simplify enforcing
> strictness for some computations. Its about empowering the programmer
> to choose between laziness and strictness where they deem that necessary
> without forcing them to completely re-arrange sub-expressions (as seq
> does).
> So what are strict tuples? If a lazy pair is defined in pseudo code as
> data (a, b) = (a, b)
> a strict pair would be defined as
> data (!a, b!) = ( !a, !b )
> Ie, a strict tuple is enclosed by bang parenthesis (! ... !). The use
> of the ! on the rhs are just the already standard strict data type
> fields.
Maybe I've missed something here. But is there really any reasonable
usage cases for something like:
f !(a,b) = a + b
in the current bang patterns proposal?
I mean, would anyone really ever want an explicitly strict (i.e. using
extra syntax) tuple with lazy elements?
Couldn't the syntax for strict tuples be just what I wrote above
(instead of adding weird-looking exclamation parenthesis).
I'm pretty sure that most programmers who would write "f !(a,b) = ..."
would expect the tuple's elements to be forced (they wouldn't expect
it to do nothing, at least).. In fact !(x:xs) should mean (intuitively
to me, at least) "force x, and xs", meaning that the element x is
forced, and the list xs is forced (but not the elements of the xs).
Couldn't this be generalised? A pattern match on any constructor with
a bang in front of it will force all the parts of the constructor
(with seq)?
f !xs = b -- gives f xs = xs `seq` b, like the current proposal
f !(x:xs) = b -- gives f (x:xs) = x `seq` xs `seq` b, unlike the
current proposal?
The latter would then be equal to
f (!x:xs) = b
right? Just slightly more convenient in some cases...
f (A !b (C !c !d) !e !f !g) = y
would be equivalent to:
f !(A b !(C c d) e f g) = y
Instead of the latter meaning doing nothing...
Sebastian Sylvan
UIN: 44640862
More information about the Haskell-prime mailing list
|
{"url":"http://www.haskell.org/pipermail/haskell-prime/2006-March/000984.html","timestamp":"2014-04-19T23:40:33Z","content_type":null,"content_length":"4767","record_id":"<urn:uuid:33d48dbf-2149-4f35-bd57-1c40737df3b5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Scientist find possible inhabitable planet
Valhallen wrote:
Ace of Flames wrote:Considering it falls up, wouldn't fire be negative mass?
So to travel outside the solar system, we need to set ourselves on fire.
Relative to the air, yes. however, it seems that building a stable wormhole requires something with negative mass relative to empty space. You can get something sort of like that via the Casimir
effect, but that's not nearly enough from what I've heard.
Theoretically/Speculatively speaking, wouldn't switching the locations(or charges, considering) of the electrons and protons in the atoms of the material work? It wouldn't be negative mass, but the
inverted charge could have an effect of some kind.
Re: Scientist find possible inhabitable planet
Well, not quite. Quantum teleportation works about thusly: Particles 1, 2, and 3 are at Point A. Particles 1 and 2 are entangled, and Particle 3 represents a qubit that is to be transported to Point
B. Particle 2 is moved (not FTL) to Point B. An operation is performed on particles 1 and 3 such that their superposition is collapsed, classical information about Particles 1 and 3 is produced, and
Particle 2 now has information about particle 3 in its quantum state. The classical information is sent (not FTL) to Point B and used to apply a transformation that converts the quantum state of
Particle 2 into an exact replica of the original state of Particle 3, producing the qubit intact at Point B. Hence potentially useful for some applications, but not FTL travel or transmission of
Wormholes, on the other hand, would permit FTL travel according to certain solutions of the laws of physics as currently understood, even if they are completely impractical from an engineering point
of view, and may not, in fact, exist.
nicomon wrote:Theoretically/Speculatively speaking, wouldn't switching the locations(or charges, considering) of the electrons and protons in the atoms of the material work? It wouldn't be
negative mass, but the inverted charge could have an effect of some kind.
Switching the locations would get you plasma that would quickly return to normal atoms, and switching the charges would get you antimatter. Both still have positive mass, and would therefore be
useless for propping open a wormhole unless used as a scaffolding for something with negative mass. Antimatter has some interesting properties, but I don't know of any that would be particularly
Re: Scientist find possible inhabitable planet
Valhallen wrote:@zepherin
Well, not quite. Quantum teleportation works about thusly: Particles 1, 2, and 3 are at Point A. Particles 1 and 2 are entangled, and Particle 3 represents a qubit that is to be transported to
Point B. Particle 2 is moved (not FTL) to Point B. An operation is performed on particles 1 and 3 such that their superposition is collapsed, classical information about Particles 1 and 3 is
produced, and Particle 2 now has information about particle 3 in its quantum state. The classical information is sent (not FTL) to Point B and used to apply a transformation that converts the
quantum state of Particle 2 into an exact replica of the original state of Particle 3, producing the qubit intact at Point B. Hence potentially useful for some applications, but not FTL travel or
transmission of information.
Wormholes, on the other hand, would permit FTL travel according to certain solutions of the laws of physics as currently understood, even if they are completely impractical from an engineering
point of view, and may not, in fact, exist.
nicomon wrote:Theoretically/Speculatively speaking, wouldn't switching the locations(or charges, considering) of the electrons and protons in the atoms of the material work? It wouldn't be
negative mass, but the inverted charge could have an effect of some kind.
Switching the locations would get you plasma that would quickly return to normal atoms, and switching the charges would get you antimatter. Both still have positive mass, and would therefore be
useless for propping open a wormhole unless used as a scaffolding for something with negative mass. Antimatter has some interesting properties, but I don't know of any that would be particularly
What if Particle 2 of the entanglement is antimatter, and Particle 1 is still normal matter?
Re: Scientist find possible inhabitable planet
Divide by zero and you open wormholes to parallel dimensions.
Re: Scientist find possible inhabitable planet
I divided by 0 once.
But that's a story for another day.
|
{"url":"http://www.snafu-comics.com/forum/viewtopic.php?f=1&t=48576&p=2963491","timestamp":"2014-04-21T05:21:45Z","content_type":null,"content_length":"39952","record_id":"<urn:uuid:ee4c713f-c548-4ef5-b655-5b8a80da78fb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using A Casio Fx115es Plus, Exactly How Would I ... | Chegg.com
Using a Casio fx-115ES Plus, exactly how would I calculate compound interest? I need to know the exact keys to press to calculate my answer. Here is the problem: the amount invested is 2650, so P = 2650. The interest rate is 3% compounded daily, so r = 0.03 and n = 360. The investment is for a two-year period, so t = 2. If this is inaccurate, please say so. My problem lies with the keys on my calculator: I don't know which key to press that will allow me to enter the "nt" as an exponent.
Other Math
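Leaving the calculator keystrokes aside, the arithmetic itself can be checked in a few lines, assuming the usual compound-interest formula A = P(1 + r/n)^(nt), which appears to be what the poster intends:

# Compound interest: A = P * (1 + r/n) ** (n * t)
P = 2650    # principal
r = 0.03    # annual rate
n = 360     # compounding periods per year (daily, banker's convention)
t = 2       # years

A = P * (1 + r / n) ** (n * t)
print(round(A, 2))   # roughly 2813.9 for these inputs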
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/using-casio-fx115es-plus-exactly-would-calculate-compounded-interest-need-know-exact-keys--q4238584","timestamp":"2014-04-16T12:03:51Z","content_type":null,"content_length":"20183","record_id":"<urn:uuid:f1aaec03-48dd-456e-bf93-759d511371fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Non-Linear and non separable differential equation
June 8th 2011, 07:51 PM #1
Jun 2011
Non-Linear and non separable differential equation
How can I solve this equation? It looks non-linear and non-separable to me: I can't use the integrating factor because Q(x) doesn't seem to exist, and I can't separate the equation so that x and y are on different sides.
x(dy/dx) = -3y + 6x
Some tips please.
What do you mean its not linear? Note that $x\frac{\,dy}{\,dx}=-3y+6x \implies \frac{\,dy}{\,dx}+\frac{3}{x}y=6$ is of the form $\frac{\,dy}{\,dx}+P(x)y=Q(x)$, which is clearly linear...
Can you take it from here and use the integrating factor?
Thanks, got it now.
I used this before and got the wrong answer.
The integrating factor is e^(3 ln x) = x^3,
and I got the answer y = (3/2)x + C/x^3,
which is correct.
Thank you very much.
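For anyone who wants to double-check the result symbolically, a short SymPy sketch (not part of the original thread) is:

# Verify that x*y' = -3y + 6x has general solution y = (3/2)x + C/x^3.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x * y(x).diff(x), -3 * y(x) + 6 * x)
print(sp.dsolve(ode, y(x)))   # should agree with y = (3/2)x + C/x^3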
June 8th 2011, 08:01 PM #2
June 8th 2011, 08:09 PM #3
Jun 2011
|
{"url":"http://mathhelpforum.com/differential-equations/182668-non-linear-non-separable-differential-equation.html","timestamp":"2014-04-21T15:00:40Z","content_type":null,"content_length":"37058","record_id":"<urn:uuid:69f0aa8e-4df2-4be1-a648-36be5cd8bc0d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CGTalk - SkyShader Azimuth
11-19-2003, 05:07 AM
Here's one:
Can anyone tell me how azimuth and elevation are parametrized in Maya?
The reason I want to know is that I want to connect the U&V position of a light on a sphere surface with the position of the sun on the shader. So here's the network:
SkySphereShape.worldSpace[0] > closestPointOnSurface.inputSurface
Now the closestPointOnSurface node returns a U&V coordinate which denotes where my light is across the surface: 2 floats with a min of 0 and a max of 1 (because they are linked to the surface :p). So I'm almost there, but the numbers for azimuth and elevation are meaningful outside of (0,1)... so I want to know what constant I have to multiply by to make the movement of my scene light link up with the shader sun.
|
{"url":"http://forums.cgsociety.org/archive/index.php/t-103959.html","timestamp":"2014-04-20T19:03:50Z","content_type":null,"content_length":"5785","record_id":"<urn:uuid:ab90c671-ae09-4a05-910e-f7d740ba62c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Primitive roots
March 7th 2011, 10:00 AM #1
May 2008
Primitive roots
If g is a primitive root mod 13 then I can show that g^11 is also a primitive root.
My question is how can I deduce that the product of all the primitive roots mod 13 are congruent to 1 (mod 13) from this result?
Is there a corresponding result for the product of all the primitive roots mod 169?
Thanks in advance.
find the condition for k such that $g^k$ is also a primitive root mod 13.
I'm aware of the condition for k to such that g^k is a primitive root mod 13. In fact, I have shown that g^11 is a primitive root mod 13.
But my question is how can I use this information to deduce that the product of all the primitive roots mod 13 is congruent to 1 mod 13.
I was then curious if a similar result holds for the product of all primitive roots mod 169.
I'm aware of the condition for k to such that g^k is a primitive root mod 13. In fact, I have shown that g^11 is a primitive root mod 13.
But my question is how can I use this information to deduce that the product of all the primitive roots mod 13 is congruent to 1 mod 13.
I was then curious if a similar result holds for the product of all primitive roots mod 169.
Given that $g$ is a primitive root of $13$, all the primitive roots are given by $g^k$, where $(k,12)=1$; so the primitive roots of $13$ are $g^1, g^5, g^7,$ and $g^{11}$.
Then the product of all the primitive roots of $13$ is congruent to $g^{1+5+7+11}=g^{24}$ modulo $13$. By Fermat's Theorem, $g^{24}=(g^{12})^2\equiv1\pmod {13}$.
The general result is the following: Let $m$ be a positive integer which has a primitive root. Then the product of the primitive roots of $m$ is congruent to $1$ modulo $m$; the exceptions are $m
=3, 4,$ and $6$.
By the way, the integers $m$ that have primitive roots are $1, 2, 4, p^k,$ and $2p^k$, where $p$ is an odd prime.
Thank you both for your help with this.
I managed to prove the first result in the end. I never thought to use Fermat's Theorem until it struck me at the last minute.
I would like some clarification on your second point, melese, if that is possible.
If g is a primitive root mod 169, then so is g+13. So how are the rest of the primitive roots obtained and how does their product become congruent to 1 mod 169?
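The thread stops here, but the general result quoted above is easy to test numerically; the following sketch (my own, not from the thread) checks both m = 13 and m = 169:

# Product of all primitive roots mod m should be congruent to 1 (mod m) for m = 13 and m = 169.
from math import gcd

def primitive_roots(m, phi, prime_factors_of_phi):
    roots = []
    for a in range(2, m):
        if gcd(a, m) != 1:
            continue
        # a is a primitive root iff a^(phi/q) != 1 (mod m) for every prime q dividing phi
        if all(pow(a, phi // q, m) != 1 for q in prime_factors_of_phi):
            roots.append(a)
    return roots

for m, phi, qs in ((13, 12, (2, 3)), (169, 156, (2, 3, 13))):
    roots = primitive_roots(m, phi, qs)
    product = 1
    for g in roots:
        product = product * g % m
    print(m, len(roots), product)   # product is 1 in both cases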
March 9th 2011, 01:54 AM #2
March 9th 2011, 10:01 AM #3
May 2008
March 9th 2011, 11:17 AM #4
Jun 2010
March 9th 2011, 10:13 PM #5
May 2008
|
{"url":"http://mathhelpforum.com/number-theory/173753-primitive-roots.html","timestamp":"2014-04-16T19:00:53Z","content_type":null,"content_length":"43117","record_id":"<urn:uuid:6b33f7a5-0ce0-4d7d-8343-5658928f1389>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nate Silver does it again! Will pundits finally accept defeat?
November 6, 2012
By Simply Statistics
My favorite statistician did it again! Just like in 2008, he predicted the presidential election results almost perfectly. For those that don’t know, Nate Silver is the statistician that runs the
fivethirtyeight blog. He combines data from hundreds of polls, uses historical data to weigh them appropriately and then uses a statistical model to run simulations and predict outcomes.
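As a toy illustration of the 'run simulations and predict outcomes' step described above (the win probabilities and electoral votes below are invented, and this is in no way Silver's actual model):

# Toy Monte Carlo: given per-state win probabilities, estimate the chance of winning a majority.
import random

random.seed(0)
states = [(0.95, 55), (0.80, 29), (0.52, 18), (0.48, 20), (0.30, 38), (0.90, 110)]  # (p_win, votes), made up
needed = sum(ev for _, ev in states) // 2 + 1   # majority of the toy electoral votes

trials = 100_000
wins = sum(sum(ev for p, ev in states if random.random() < p) >= needed for _ in range(trials))
print(f"Estimated P(candidate A wins) = {wins / trials:.3f}")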
While the pundits were claiming the race was a “dead heat”, the day before the election Nate gave Obama a 90% chance of winning. Several pundits attacked Nate (some attacks were personal) for his
predictions and demonstrated their ignorance of Statistics. Jeff wrote a nice post on this. The plot below demonstrates how great Nate's prediction was. Note that in each of the 45 states (including DC)
for which he predicted a 90% probability or higher of winning for candidate A, candidate A won. For the other 6 states the range of percentages was 48-52%. If Florida goes for Obama he will have
predicted every single state correctly.
Update: Congratulations also to Sam Wang (Princeton Election Consortium) and Simon Jackman (pollster) that also called the election perfectly. And thanks to the pollsters that provided the unbiased
(on average) data used by all these folks. Data analysts won “experts” lost.
Update 2: New plot with data from here. Old graph here.
{"url":"http://www.r-bloggers.com/nate-silver-does-it-again-will-pundits-finally-accept-defeat/","timestamp":"2014-04-21T09:45:26Z","content_type":null,"content_length":"36782","record_id":"<urn:uuid:dcbb00a7-4127-4abe-aa3f-95186f68cd18>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
help please The local gym charges members a monthly fee of $50 plus an additional $2 every time you want to use the pool. Let p represent the number of times you used the pool in one month and C
represent the total cost for your monthly membership. Write an equation to find the total cost of your monthly membership to the gym.
Best Response
C = 50 + 2p
{"url":"http://openstudy.com/updates/4ef14bd6e4b0dc507db68551","timestamp":"2014-04-18T03:52:08Z","content_type":null,"content_length":"27780","record_id":"<urn:uuid:66ed9946-7f1f-42ae-ba7f-bdbc2c0a21be>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximizing Monochromatic Polarized Light Interference Patterns Using GlobalSearch and MultiStart
This example shows how Global Optimization Toolbox functions, particularly GlobalSearch and MultiStart, can help locate the maximum of an electromagnetic interference pattern. For simplicity of
modeling, the pattern arises from monochromatic polarized light spreading out from point sources.
The electric field due to source $i$, measured in the direction of polarization at point $x$ and time $t$, is

$$E_i(x,t) = \frac{A_i}{d_i(x)}\,\sin\big(\phi_i + \omega\,(t - d_i(x)/c)\big),$$

where $\phi_i$ is the phase at time zero for source $i$, $c$ is the speed of light, $\omega$ is the frequency of the light, $A_i$ is the amplitude of source $i$, and $d_i(x)$ is the distance from source $i$ to $x$.

For a fixed point $x$ the intensity of the light is the time average of the square of the net electric field. The net electric field is the sum of the electric fields due to all sources. The time average depends only on the sizes and relative phases of the electric fields at $x$. To calculate the net electric field, add up the individual contributions using the phasor method. For phasors, each source contributes a vector. The length of the vector is the amplitude divided by the distance from the source, and the angle of the vector, $\phi_i - \omega\, d_i(x)/c$, is the phase at the point.

For this example, we define three point sources with the same frequency ($\omega$) and amplitude ($A = 2.2$), but varied initial phase ($\phi_i$). We arrange these sources on a fixed plane.
% Frequency is proportional to the number of peaks
relFreqConst = 2*pi*2.5;
amp = 2.2;
phase = -[0; 0.54; 2.07];
numSources = 3;
height = 3;
% All point sources are aligned at [x_i,y_i,z]
xcoords = [2.4112
ycoords = [0.3957
zcoords = height*ones(numSources,1);
origins = [xcoords ycoords zcoords];
Visualize the Interference Pattern
Now let's visualize a slice of the interference pattern on the plane z = 0.
As you can see from the plot below, there are many peaks and valleys indicating constructive and destructive interference.
% Pass additional parameters via an anonymous function:
waveIntensity_x = @(x) waveIntensity(x,amp,phase, ...
% Generate the grid
[X,Y] = meshgrid(-4:0.035:4,-4:0.035:4);
% Compute the intensity over the grid
Z = arrayfun(@(x,y) waveIntensity_x([x y]),X,Y);
% Plot the surface and the contours
Posing the Optimization Problem
We are interested in the location where this wave intensity reaches its highest peak.
The wave intensity ($I$) falls off as we move away from the source proportional to $1/d_i(x)^2$. Therefore, let's restrict the space of viable solutions by adding constraints to the problem.
If we limit the exposure of the sources with an aperture, then we can expect the maximum to lie in the intersection of the projection of the apertures onto our observation plane. We model the effect
of an aperture by restricting the search to a circular region centered at each source.
We also restrict the solution space by adding bounds to the problem. Although these bounds may be redundant (given the nonlinear constraints), they are useful since they restrict the range in which
start points are generated (see the documentation for more information).
Now our problem has become:

$$\max_{x,\,y} \; I(x,y)$$

subject to

$$(x - p_i)^2 + (y - q_i)^2 \leq r_i^2, \quad i = 1, 2, 3,$$
$$-0.5 \leq x \leq 3.5, \qquad -2 \leq y \leq 3,$$

where $(p_i, q_i)$ and $r_i$ are the coordinates and aperture radius of the $i$th point source, respectively. Each source is given an aperture with radius 3. The given bounds encompass the feasible region.
The objective (the wave intensity $I(x,y)$) and nonlinear constraint functions are defined in separate MATLAB® files, waveIntensity.m and apertureConstraint.m, respectively, which are listed at the end of this example.
Visualization with Constraints
Now let's visualize the contours of our interference pattern with the nonlinear constraint boundaries superimposed. The feasible region is the interior of the intersection of the three circles
(yellow, green, and blue). The bounds on the variables are indicated by the dashed-line box.
% Visualize the contours of our interference surface
domain = [-3 5.5 -4 5];
ezcontour(@(X,Y) arrayfun(@(x,y) waveIntensity_x([x y]),X,Y),domain,150);
hold on
% Plot constraints
g1 = @(x,y) (x-xcoords(1)).^2 + (y-ycoords(1)).^2 - 9;
g2 = @(x,y) (x-xcoords(2)).^2 + (y-ycoords(2)).^2 - 9;
g3 = @(x,y) (x-xcoords(3)).^2 + (y-ycoords(3)).^2 - 9;
h1 = ezplot(g1,domain);
set(h1,'Color',[0.8 0.7 0.1],'LineWidth',1.5); % yellow
h2 = ezplot(g2,domain);
set(h2,'Color',[0.3 0.7 0.5],'LineWidth',1.5); % green
h3 = ezplot(g3,domain);
set(h3,'Color',[0.4 0.4 0.6],'LineWidth',1.5); % blue
% Plot bounds
lb = [-0.5 -2];
ub = [3.5 3];
line([lb(1) lb(1)],[lb(2) ub(2)],'LineStyle','--')
line([ub(1) ub(1)],[lb(2) ub(2)],'LineStyle','--')
line([lb(1) ub(1)],[lb(2) lb(2)],'LineStyle','--')
line([lb(1) ub(1)],[ub(2) ub(2)],'LineStyle','--')
title('Pattern Contours with Constraint Boundaries')
Setting Up and Solving the Problem with a Local Solver
Given the nonlinear constraints, we need a constrained nonlinear solver, namely, fmincon.
Let's set up a problem structure describing our optimization problem. We want to maximize the intensity function, so we negate the values returned form waveIntensity. Let's choose an arbitrary start
point that happens to be near the feasible region.
For this small problem, we'll use fmincon's SQP algorithm.
% Pass additional parameters via an anonymous function:
apertureConstraint_x = @(x) apertureConstraint(x,xcoords,ycoords);
% Set up fmincon's options
x0 = [3 -1];
opts = optimoptions('fmincon','Algorithm','sqp');
problem = createOptimProblem('fmincon','objective', ...
@(x) -waveIntensity_x(x),'x0',x0,'lb',lb,'ub',ub, ...
% Call fmincon
[xlocal,fvallocal] = fmincon(problem)
xlocal =
-0.5000 0.4945
fvallocal =
Now, let's see how we did by showing the result of fmincon in our contour plot. Notice that fmincon did not reach the global maximum, which is also annotated on the plot. Note that we'll only plot
the bound that was active at the solution.
[~,maxIdx] = max(Z(:));
xmax = [X(maxIdx),Y(maxIdx)]
figure;contour(X,Y,Z);hold on
% Show bounds
line([lb(1) lb(1)],[lb(2) ub(2)],'LineStyle','--')
% Create textarrow showing the location of xlocal
annotation('textarrow',[0.25 0.21],[0.86 0.60],'TextEdgeColor',[0 0 0],...
'TextBackgroundColor',[1 1 1],'FontSize',11,'String',{'Single Run Result'});
% Create textarrow showing the location of xglobal
annotation('textarrow',[0.44 0.50],[0.63 0.58],'TextEdgeColor',[0 0 0],...
'TextBackgroundColor',[1 1 1],'FontSize',12,'String',{'Global Max'});
axis([-1 3.75 -3 3])
xmax =
1.2500 0.4450
Using GlobalSearch and MultiStart
Given an arbitrary initial guess, fmincon gets stuck at a nearby local maximum. Global Optimization Toolbox solvers, particularly GlobalSearch and MultiStart, give us a better chance at finding the
global maximum since they will try fmincon from multiple generated initial points (or our own custom points, if we choose).
Our problem has already been set up in the problem structure, so now we construct our solver objects and run them. The first output from run is the location of the best result found.
% Construct a GlobalSearch object
gs = GlobalSearch;
% Construct a MultiStart object based on our GlobalSearch attributes
ms = MultiStart;
rng(4,'twister') % for reproducibility
% Run GlobalSearch
[xgs,~,~,~,solsgs] = run(gs,problem);
% Run MultiStart with 15 randomly generated points
[xms,~,~,~,solsms] = run(ms,problem,15);
GlobalSearch stopped because it analyzed all the trial points.
All 14 local solver runs converged with a positive local solver exit flag.
Elapsed time is 1.752507 seconds.
xgs =
1.2592 0.4284
MultiStart completed the runs from all start points.
All 15 local solver runs converged with a positive local solver exit flag.
Elapsed time is 0.681476 seconds.
xms =
1.2592 0.4284
Let's examine the results that both solvers have returned. An important thing to note is that the results will vary based on the random start points created for each solver. Another run through this
example may give different results. The coordinates of the best results xgs and xms are printed to the command line. We'll show unique results returned by GlobalSearch and MultiStart and highlight the
best results from each solver, in terms of proximity to the global solution.
The fifth output of each solver is a vector containing distinct minima (or maxima, in this case) found. We'll plot the (x,y) pairs of the results, solsgs and solsms, against our contour plot we used
% Plot GlobalSearch results using the '*' marker
xGS = cell2mat({solsgs(:).X}');
scatter(xGS(:,1),xGS(:,2),'*','MarkerEdgeColor',[0 0 1],'LineWidth',1.25)
% Plot MultiStart results using a circle marker
xMS = cell2mat({solsms(:).X}');
scatter(xMS(:,1),xMS(:,2),'o','MarkerEdgeColor',[0 0 0],'LineWidth',1.25)
title('GlobalSearch and MultiStart Results')
With the tight bounds on the problem, both GlobalSearch and MultiStart were able to locate the global maximum in this run.
Finding tight bounds can be difficult to do in practice, when not much is known about the objective function or constraints. In general though, we may be able to guess a reasonable region in which we
would like to restrict the set of start points. For illustration purposes, let's relax our bounds to define a larger area in which to generate start points and re-try the solvers.
% Relax the bounds to spread out the start points
problem.lb = -5*ones(2,1);
problem.ub = 5*ones(2,1);
% Run GlobalSearch
[xgs,~,~,~,solsgs] = run(gs,problem);
% Run MultiStart with 15 randomly generated points
[xms,~,~,~,solsms] = run(ms,problem,15);
GlobalSearch stopped because it analyzed all the trial points.
All 4 local solver runs converged with a positive local solver exit flag.
Elapsed time is 0.691341 seconds.
xgs =
0.6571 -0.2096
MultiStart completed the runs from all start points.
All 15 local solver runs converged with a positive local solver exit flag.
Elapsed time is 0.606034 seconds.
xms =
2.4947 -0.1439
% Show the contours
figure;contour(X,Y,Z);hold on
% Create textarrow showing the location of xglobal
annotation('textarrow',[0.44 0.50],[0.63 0.58],'TextEdgeColor',[0 0 0],...
'TextBackgroundColor',[1 1 1],'FontSize',12,'String',{'Global Max'});
axis([-1 3.75 -3 3])
% Plot GlobalSearch results using the '*' marker
xGS = cell2mat({solsgs(:).X}');
scatter(xGS(:,1),xGS(:,2),'*','MarkerEdgeColor',[0 0 1],'LineWidth',1.25)
% Plot MultiStart results using a circle marker
xMS = cell2mat({solsms(:).X}');
scatter(xMS(:,1),xMS(:,2),'o','MarkerEdgeColor',[0 0 0],'LineWidth',1.25)
% Highlight the best results from each:
% GlobalSearch result in red, MultiStart result in blue
plot(xgs(1),xgs(2),'sb','MarkerSize',12,'MarkerFaceColor',[1 0 0])
plot(xms(1),xms(2),'sb','MarkerSize',12,'MarkerFaceColor',[0 0 1])
legend('Intensity','GlobalSearch','MultiStart','Best GS','Best MS','Location','best')
title('GlobalSearch and MultiStart Results with Relaxed Bounds')
The best result from GlobalSearch is shown by the red square and the best result from MultiStart is shown by the blue square.
Tuning GlobalSearch Parameters
Notice that in this run, given the larger area defined by the bounds, neither solver was able to identify the point of maximum intensity. We could try to overcome this in a couple of ways. First, we
examine GlobalSearch.
Notice that GlobalSearch only ran fmincon a few times. To increase the chance of finding the global maximum, we would like to run more points. To restrict the start point set to the candidates most
likely to find the global maximum, we'll instruct each solver to ignore start points that do not satisfy constraints by setting the StartPointsToRun property to bounds-ineqs. Additionally, we will
set the MaxWaitCycle and BasinRadiusFactor properties so that GlobalSearch will be able to identify the narrow peaks quickly. Reducing MaxWaitCycle causes GlobalSearch to decrease the basin of
attraction radius by the BasinRadiusFactor more often than with the default setting.
% Increase the total candidate points, but filter out the infeasible ones
gs = GlobalSearch(gs,'StartPointsToRun','bounds-ineqs', ...
% Run GlobalSearch
xgs = run(gs,problem);
GlobalSearch stopped because it analyzed all the trial points.
All 10 local solver runs converged with a positive local solver exit flag.
Elapsed time is 1.004284 seconds.
xgs =
1.2592 0.4284
Utilizing MultiStart's Parallel Capabilities
A brute force way to improve our chances of finding the global maximum is to simply try more start points. Again, this may not be practical in all situations. In our case, we've only tried a small
set so far and the run time was not terribly long. So, it's reasonable to try more start points. To speed the computation we'll run MultiStart in parallel if Parallel Computing Toolbox™ is available.
% Set the UseParallel property of MultiStart
ms = MultiStart(ms,'UseParallel',true);
demoOpenedPool = false;
% Create a parallel pool if one does not already exist
% (requires Parallel Computing Toolbox)
if max(size(gcp)) == 0 % if no pool
demoOpenedPool = true;
catch ME
% Run the solver
xms = run(ms,problem,100);
if demoOpenedPool
% Make sure to delete the pool if one was created in this example
delete(gcp) % delete the pool
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
MultiStart completed the runs from all start points.
All 100 local solver runs converged with a positive local solver exit flag.
Elapsed time is 1.847349 seconds.
xms =
1.2592 0.4284
Objective and Nonlinear Constraints
Here we list the functions that define the optimization problem:
type waveIntensity
type apertureConstraint
type distanceFromSource
function p = waveIntensity(x,amp,phase,relFreqConst,numSources,origins)
% WaveIntensity Intensity function for opticalInterferenceDemo.
% Copyright 2009 The MathWorks, Inc.
% $Revision: 1.1.8.1 $ $Date: 2012/08/11 01:51:25 $
d = distanceFromSource(x,numSources,origins);
ampVec = [sum(amp./d .* cos(phase - d*relFreqConst));
sum(amp./d .* sin(phase - d*relFreqConst))];
% Intensity is ||AmpVec||^2
p = ampVec'*ampVec;
function [c,ceq] = apertureConstraint(x,xcoords,ycoords)
% apertureConstraint Aperture constraint function for opticalInterferenceDemo.
% Copyright 2009 The MathWorks, Inc.
% $Revision: 1.1.8.1 $ $Date: 2012/08/11 01:50:12 $
ceq = [];
c = (x(1) - xcoords).^2 + (x(2) - ycoords).^2 - 9;
function d = distanceFromSource(v,numSources,origins)
% distanceFromSource Distance function for opticalInterferenceDemo.
% Copyright 2009 The MathWorks, Inc.
% $Revision: 1.1.8.1 $ $Date: 2012/08/11 01:50:25 $
d = zeros(numSources,1);
for k = 1:numSources
d(k) = norm(origins(k,:) - [v 0]);
|
{"url":"http://www.mathworks.com/help/gads/examples/maximizing-monochromatic-polarized-light-interference-patterns-using-globalsearch-and-multistart.html?nocookie=true","timestamp":"2014-04-17T13:04:29Z","content_type":null,"content_length":"61028","record_id":"<urn:uuid:4e3515f5-4c85-4439-9216-893b9ae76075>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mike Naylor
Norwegian University of Science and Technology (NTNU)
I enjoy creating mathematical artwork with multiple interpretations and hidden ideas that can be revealed by thoughtful inspection. Much of my artwork focuses on the use of the human body to
represent geometric concepts, but I also enjoy creating abstract works that capture mathematical ideas in ways that are pleasing, surprising and invite further reflection.
This image contains 4 fractal elements each of which contains a representation of the binary numbers from 0 to 127 (0 to 1111111). Each quarter of each of the 4 main images was created by starting
with a 50% gray square on a white or black background. Two half-size squares are placed adjacent to this square and one half-size square within. The squares are given a shade of gray which averages
the two shades around them. This rule is carried out 7 times.
The resulting abacaba fractal contains 128 binary numbers in the sequence from 0 to 127. You are encouraged and challenged to find these binary numbers within the artwork; it is an interesting
challenge with many solutions!
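For readers curious about the name, one standard construction of the ABACABA pattern of depth 7 is sketched below; it has the same 127-element count mentioned above, though no claim is made that it reproduces the artwork's shading rule:

# Build the ABACABA word: each level inserts the next letter between two copies of the previous word.
def abacaba(levels):
    word = ""
    for k in range(levels):
        word = word + chr(ord("a") + k) + word
    return word

w = abacaba(7)
print(len(w))    # 2**7 - 1 = 127
print(w[:15])    # abacabadabacaba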
This spiral of human figures link their arms and legs together to form an infinite spiral based on design elements of a inscribed pentagram. An inscribed pentagram is 5-pointed star inside of a
pentagon and is one of the sacred symbols of the ancient Pythagoreans. Whereas in the traditional form the edges of the pentagon are equal length, in this representation each successive edge is
scaled in proportion to the golden mean, thus creating a spiral instead of a closed figure. The arm and leg linkages create both of the forms of the golden triangles with angles of 36°, 72° and 108°.
This work was inspired by a previous work, Pentamen, shown at Bridges 2011.
Painted SLS nylon sculpture
In this sculpture four human figures join together to create a cube. This work also joins together mathematics and the human experience, reminding us that we are as much a part of mathematics as
mathematics is a part of us.
The four figures are separate and identical except for coloration. They snap together to make a free-standing form. Each figure is hand-painted with acrylic.
|
{"url":"http://gallery.bridgesmathart.org/exhibitions/2012-bridges-conference/abacaba","timestamp":"2014-04-25T04:59:32Z","content_type":null,"content_length":"18871","record_id":"<urn:uuid:ab2c8622-b5ab-4fd6-88ed-6f3b421618d0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question of combinatorics in the lower part of the Borel hierarchy.
Let $S^\omega$ denote either $\omega^\omega$ or $2^\omega$.
Let's call a function $f: S^\omega \rightarrow \{0,1\}$ 'nice' if there exists a function $g_f: S^{\lt \omega} \rightarrow 2$ such that for every $x \in S^\omega$: $\lim_{k \rightarrow \infty} g_f((x_0,\ldots,x_k)) = f(x)$.
(One could think of this as a calculation of $f(x)$ that 'changes its mind' at most finitely often.)
(Note that this does not imply that $f$ is continuous. Rather, the nice functions correspond to $\Delta_2^0$ sets.)
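(As a concrete finite illustration, not part of the original post: below is a small Python sketch of the 'changes its mind at most finitely often' reading. It uses the example function discussed in the comments further down, namely $f(x) = 0$ if $x$ is the all-zero sequence and $f(x) = 1$ otherwise; the helper names are invented for this sketch.)

def g(prefix):
    # guess for f from a finite prefix: 0 while the prefix is all zeros, 1 otherwise
    return 0 if all(b == 0 for b in prefix) else 1

def guesses_and_mind_changes(x, k_max):
    # follow g along the prefixes (x_0), (x_0,x_1), ..., (x_0,...,x_{k_max})
    guesses = [g(x[:k + 1]) for k in range(k_max + 1)]
    changes = sum(1 for a, b in zip(guesses, guesses[1:]) if a != b)
    return guesses, changes

x = [0, 0, 0, 1, 0, 0]                 # a sequence that is not identically zero
print(guesses_and_mind_changes(x, 5))  # ([0, 0, 0, 1, 1, 1], 1): one mind change, limit value 1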
If $\alpha$ is an ordinal, we call $f$ '$\alpha$-nice' if there exists a function $h_f: S^{\lt \omega} \rightarrow \alpha \times\lbrace 0,1\rbrace$ such that, using the notation $(\alpha(k), n(k)) =
h_f( (x_0,..., x_k) )$, we have:
1. $\lim_{k \rightarrow \infty} n(k) = f(x)$ for all $x \in S^\omega$
2. $\alpha(k+1) \leq \alpha(k)$ for all $k \in \omega$
3. whenever $n(k+1) \neq n(k)$, we have $\alpha(k+1) \lt \alpha(k)$
We'll say that $f$ 'has rank' $\alpha$ if $\alpha$ is the minimal ordinal such that $f$ is $\alpha$-nice (if there exists any such $\alpha$).
1. Is every nice function an $\alpha$-nice function for some $\alpha$?
2. Assuming ZFC but not CH, what is the maximum (or l.u.b.) rank that a nice function can have?
lo.logic descriptive-set-theory
I somewhat tried to improve the readability by reducing line breaks, using \lim and markup touches (lists and such) – Asaf Karagila Jan 11 '12 at 10:00
Alternative title for this question: 'How difficult can it be to tell in advance how often one needs to change ones mind?' – Herman Jurjus Jan 11 '12 at 11:55
3 Answers
Consider the case $2^\omega$ (the case of Baire space I believe is only notationally more complicated.) Let $(s_i : i<\omega)$ recursively enumerate $2^{<\omega}$ with $s_0=\varnothing$ and
$s_i \subset s_j $ $\rightarrow$ $i < j$
Let $l_i=length(s_i)$. Given a nice function $f$ as witnessed by $g_f$ call $s_i$ a "switch" if $g_f(s_i\upharpoonright l_i - 1) \neq g_f(s_i)$. For switches $s_i,s_j, j\neq i$ only define
$s_i \prec s_j$ if $s_i \supset s_j$. By the properties of $f,g_f$, $\prec$ has no infinite descending paths, hence it is a finite path tree $T_{g_f}$, with a rank function $rk_T$ say. Let
$r(g_f)$ be the rank of this tree. Then $r(g_f) < \omega_1^{g_f}$ where the latter ordinal is the least not recursive in (the real code of) $g_f$.
Consequently we can now define an $h$ function of the desired kind into $(r(g_f)+1) \times 2$ as long as $h(s) \geq \sup [ rk_T(s_i) : s \subset s_i ] + 1$ for any $s\in 2^{<\omega}$.
This shows any nice function is an $\alpha$-nice function for some $h$, and thus answers 1.
Conversely given any finite path tree one can embed it into $2^{<\omega}$ in an order preserving way, via $G$ say, and define a function $f,g_f$ using the range of $G$ as "switches". As any
countable ordinal is realised as the rank of such a finite path tree we see that the l.u.b for Q2 is $\omega_1$.
Yes, that seems to be it. Thanks! (I will chew on it for a day or so before accepting the answer; who knows there's a catch somewhere.) – Herman Jurjus Jan 12 '12 at 11:42
The first part is ok. Simply put: the switch-points in $g_f$ form a well-founded tree, and this tree can be labelled with countable ordinals; so you can't possibly go beyond $\omega_1$.
Your argument for the reverse, however, needs at least a little repair, I think. For example, if (x0,...xn) is mapped to 0, but (x0,...,xn,0) and (x0,...,xn,1) are both mapped to 1, then
we have two 'switches', but, they're not needed. A simpler g could also compute f. In other words: the fact that g_f correctly 'computes' f doesn't prove that rank(f) = rank(g_f). Or am I
overlooking something? – Herman Jurjus Jan 12 '12 at 14:39
BTW, I do think that this is easily repairable. – Herman Jurjus Jan 12 '12 at 14:41
@Herman: I don't think the reverse needs repair. I did not completely specify a way to do it, or any rk(f); I was just claiming that if you map the finite path tree into $2^{<\omega}$ (and
I was not bothering to say how you do it, anything that works fine!) then you can see that this way gives you a method of building $g$'s corresponding to some $f$. Or, if you like, use an
inductive argument on countable ordinals to build trees of switches within $2^{<\omega}$ corresponding to $g_f$'s of increasing countable ordinal rank. – Philip Welch Jan 12 '12 at 15:09
But there exist f's of finite rank with infinitely complex g_f's. So at least the construction needs /somewhat/ more detail. You are right though: induction settles the matter. Therefore
your answer is accepted. – Herman Jurjus Jan 12 '12 at 15:24
(As Andreas has pointed out, this answer is not correct---it concerns a slightly different class of functions.)
The answer to your first question is yes. For any nice function $f$, consider the tree $T_f$ of finite sequences $(x_0,\ldots,x_k)$ such that there is some proper extension $(x_0,\ldots,x_k,\ldots,x_{k+r})$ with $$g_f((x_0,\ldots,x_k))\neq g_f((x_0,\ldots,x_k,\ldots,x_{k+r})).$$ (In other contexts, this is called the "tree of unsecured sequences".)
Then it is easy to see that $f$ being nice implies that this tree is well-founded. The function $h_f$ can be defined, with ordinal $ht(T_f)$, by setting the ordinal value $\alpha((x_0,\ldots,x_k))$ to be the height of $(x_0,\ldots,x_k)$ in $T_f$ if this sequence is unsecured, and $0$ if the sequence is secured.
In the $2^\omega$ case, this means the supremum of ranks of nice functions is $\omega$: by König's lemma, a well-founded binary tree is finite.
In the $\omega^\omega$ case, I believe the supremum of ranks should be $\omega_1$ (in plain ZFC), though the proof doesn't appear to be entirely obvious. (One could take a tree of
sequences of height $\alpha$, which induces a function $h_f$, and then take the corresponding $f$, but some additional work is needed to ensure that there is no other representation of $f$
giving it a lower rank.)
Functions like your $g_f$, but with range $\omega$ instead of $\{0,1\}$, have been called "asymptotically stable". I believe this terminology was introduced by Tao in a blog post;
Kohlenbach and Gaspar have a paper ("On Tao’s “finitary” infinite pigeonhole principle") discussing an application, and I have a paper with Beiglbock ("Transfinite Approximation of Hindman's
Theorem") which deals with the tree $T_f$.
I'm confused by the "easy to see" statement at the start of the second paragraph. Suppose $f:2^\omega\to2$ sends the constant 0 function to 0 and everything else to 1. This is nice with
$g_f$ sending any finite sequence of zeros to 0 and all other finite sequences to 1. Then if $s$ is a finite sequence of zeros, there is a proper extension with a different $g_f$-value
(just append a 1 to $s$), so $s$ would be unsecured. Then all these finite sequences of 0's form an infinite path through $T_f$. What have I misunderstood here? – Andreas Blass Jan 12 '12
at 1:15
Oh dear. The "easy to see" statement isn't actually true. I jumped to thinking that the nice functions lined up with the asymptotically stable functions, but asymptotically stable
functions have an additional continuity property (in an A.S. function, the limit $f(x)$ is actually determined by an initial segment of $x$). – Henry Towsner Jan 12 '12 at 3:28
If the switch tree of $g_f$ is finite (i.e. not just well founded, but having only finitely many nodes), then no matter how large the number of nodes, the rank of $f$ will still be no more
than 2. (Because an alternative $g'_f$ could give an irrelevant default value 'at first', and 'wait' until all the switchpoints are passed; it can then give its definitive answer as its
second value.)
In my opinion, to complete the proof, you need at least something like the following:
Given any countable ordinal $\alpha$, make a countable well-founded tree T having $\alpha$ as its rank, and with the following additional property: for every node $n$, and any child $c$ of
that node: (countably) infinitely many copies of the subtree rooted by $c$ occur under $n$.
Next, make an embedding of T into the tree $2^{\lt \omega}$ such that, whenever a non-leaf node of T is associated with sequence $s$, then the children of the node are associated with the
sequences $s + [0]$, $s + [1,0]$, $s + [1,1,0]$, etc., where '$+$' denotes concatenation of finite sequences.
(That this can be done can be proved with a non-constructive 'reverse-induction' proof: if there existed a tree for which such an embedding doesn't exist, then there would be a subtree for which such an embedding doesn't exist, this subtree having itself also such a subtree, etc. But since T is well-founded, we can't have an infinite decreasing chain of subtrees. Contradiction.)
Now with /this/ embedding, make $g_f$ such that the switch points are precisely the image points of the embedding, and define $f$ accordingly.
Claim: /then/ $f$ has rank $\alpha$.
A sloppy argument for the latter: Any alternative $g'_f$ which correctly computes $f$ must, at some 'moment' in the sequence $s + [1,1,1,...]$ adopt the correct $f$ value for that infinite
sequence; after which 'we still have any subtree of T available', so to speak. I.e. for every such $g'_f$, we can make a sequence that 'forces' $g'_f$ to actually make a change of mind
corresponding to the node in T mapped to $s$, and after that point we can still proceed with any child-subtree that we wish.
I'm open for suggestions on how to express all of this more formally.
|
{"url":"http://mathoverflow.net/questions/85394/question-of-combinatorics-in-the-lower-part-of-the-borel-hierarchy","timestamp":"2014-04-16T19:39:10Z","content_type":null,"content_length":"74038","record_id":"<urn:uuid:eae58a3c-a92a-48e8-99ce-e0aad8eac2b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Domino Magic Rectangle
An ordinary set of 28 dominoes can be laid out as a 7 by 4 magic rectangle in which the spots in all the columns total 24 while the spots in all the rows total 42. Try it!
You may like to use this domino set or if you would prefer to use a computer interactivity, you might find our Dominoes Environment useful.
Magic squares can be made with the 25 dominoes remaining when you have put aside (0-5), (0-6) and (1-6). The total of each row, each column and each diagonal is 30. This can be done in many different
ways. As a group project you might like to see how many distinct magic domino squares you can find.
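A quick arithmetic check of the totals quoted above (the short Python snippet below is an illustration added here, not part of the original page):

pips = [i + j for i in range(7) for j in range(i, 7)]   # the 28 dominoes (0-0) up to (6-6)
total = sum(pips)
print(total, total // 7, total // 4)    # 168 spots in all, giving 24 per column and 42 per row

removed = (0 + 5) + (0 + 6) + (1 + 6)   # spots on the three dominoes set aside
remaining = total - removed
print(remaining, remaining // 5)        # 150 spots left, consistent with lines totalling 30 (150 = 5*30)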
Here is part of one layout to get you started.
Can you invent a game involving dominoes and magic squares? If you invent a good game do tell NRICH and we'll publish it on this page.
|
{"url":"http://nrich.maths.org/1205/index?nomenu=1","timestamp":"2014-04-18T21:23:36Z","content_type":null,"content_length":"4269","record_id":"<urn:uuid:995bf26a-d026-4362-87bb-1d9eac87e6d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Most Cited High-Energy Physics Articles, 1999 Edition
Based on articles with the most citations from the SPIRES-HEP Literature Database, SLAC Library
Positions are based on the number of citations in the HEP database [SLAC-SPIRES], between January 1, 1999 and December 31, 1999. The situation may change with time. For your convenience, pointers
to the number of citations between January 1, 1999 and today are also added.
See also the All-Time Favorites list, and a review of this years most cited HEP Papers by Michael Peskin, SLAC Theory Group.
1. REVIEW OF PARTICLE PHYSICS. PARTICLE DATA GROUP
By C. Caso, G. Conforto, A. Gurtu, M. Aguilar-Benitez, C. Amsler, R.M. Barnett, P.R. Burchat, C.D. Carone, O. Dahl, M. Doser, S. Eidelman, J.L. Feng, M. Goodman, C. Grab, D.E. Groom, K.
Hagiwara, K.G. Hayes, J.J. Hernandez, K. Hikasa, K. Honscheid, F. James, M.L. Mangano, A.V. Manohar, K. Monig, H. Murayama, K. Nakamura, K.A. Olive, A. Piepke, M. Roos, R.H. Schindler, R.E.
Shrock, M. Tanabashi, N.A. Tornqvist, T.G. Trippe, P. Vogel, C.G. Wohl, R.L. Workman, B. Armstrong, J.L. Casas Serradilla, B.B. Filimonov, P.S. Gee, S.B. Lugovsky, S. Mankov, F. Nicholson,
K.S. Babu, D. Besson, O. Biebel, R.N. Cahn, R.L. Crawford, R.H. Daltiz, T. Damour, K. Desler, R.J. Donahue, D.A. Edwards, J. Erler, V.V. Ezhela, A. Fasso, W. Fetscher, D. Froidevaux, T.K.
Gaisser, L. Garren, S. Geer, H.J. Gerber, F.J. Gilman, H.E. Haber, C. Hagmann, I. Hinchliffe, C.J. Hogan, G. Hohler, J.D. Jackson, K.F. Johnson, D. Karlen, B. Kayser, K. Kleinknecht, I.G.
Knowles, C. Kolda, P. Kreitz, P. Langacker, R. Landua, L. Littenberg, D.M. Manley, J. March-Russell, T. Nakada, H. Quinn, G. Raffelt, B. Renk, M.T. Ronan, L.J. Rosenberg, M. Schmitt, D.N.
Schramm, D. Scott, T. Sjostrand, G.F. Smoot, S. Spanier, M. Srednicki, T. Stanev, M. Suzuki, N.P. Tkachenko, G. Valencia, K. van Bibber, R. Voss, L. Wolfenstein, S. Youssef (CERN & LBL,
Berkeley & Genoa U. & INFN, Genoa & Urbino U. & INFN, Florence & Tata Inst. & Madrid, CIEMAT & Zurich U. & Stanford U., Appl. Mech. Dept. & William-Mary Coll. & Novosibirsk, IYF & UC,
Berkeley, Astronomy Dept. & Argonne & Zurich, ETH & KEK, Tsukuba & Hillsdale Coll. & Valencia U. & Tohoku U. & Ohio State U. & UC, San Diego & Minnesota U. & Cal Tech, Kellogg Lab & Helsinki
U. & SLAC & SUNY, Stony Brook & Virginia Tech & Virginia Tech & Serpukhov, IHEP).
Published in Eur.Phys.J.C3:1-794,1998 [also the 1996 version Phys.Rev.D54,1,1996]
1269 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
2. THE LARGE N LIMIT OF SUPERCONFORMAL FIELD THEORIES AND SUPERGRAVITY
By Juan Maldacena (Harvard U.).
Published in Adv.Theor.Math.Phys.2:231-252,1998
e-Print Archive: hep-th/9711200
625 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
3. ANTI-DE SITTER SPACE AND HOLOGRAPHY
By Edward Witten (Princeton, Inst. Advanced Study).
Published in Adv.Theor.Math.Phys.2:253-291,1998
e-Print Archive: hep-th/9802150
464 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
4. GAUGE THEORY CORRELATORS FROM NONCRITICAL STRING THEORY
By S.S. Gubser, I.R. Klebanov, A.M. Polyakov (Princeton U.).
Published in Phys.Lett.B428:105-114,1998
e-Print Archive: hep-th/9802109
425 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
5. EVIDENCE FOR OSCILLATION OF ATMOSPHERIC NEUTRINOS
By Super-Kamiokande Collaboration (Y. Fukuda et al.).
Published in Phys.Rev.Lett.81:1562-1567,1998
e-Print Archive: hep-ex/9807003
382 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
6. THE HIERARCHY PROBLEM AND NEW DIMENSIONS AT A MILLIMETER
By Nima Arkani-Hamed (SLAC), Savas Dimopoulos (Stanford U., Phys. Dept.), Gia Dvali (ICTP, Trieste).
Published in Phys.Lett.B429:263-272,1998
e-Print Archive: hep-ph/9803315
285 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
7. HIGH-ENERGY PHYSICS EVENT GENERATION WITH PYTHIA 5.7 AND JETSET 7.4
By Torbjorn Sjostrand (CERN).
Published in Comput.Phys.Commun.82:74-90,1994
267 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
8. NEW DIMENSIONS AT A MILLIMETER TO A FERMI AND SUPERSTRINGS AT A TEV
By Ignatios Antoniadis (Ecole Polytechnique), Nima Arkani-Hamed (SLAC), Savas Dimopoulos (Stanford U., Phys. Dept.), Gia Dvali (ICTP, Trieste).
Published in Phys.Lett.B436:257-263,1998
e-Print Archive: hep-ph/9804398
215 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
By Nima Arkani-Hamed (SLAC), Savas Dimopoulos (Stanford U., Phys. Dept.), Gia Dvali (ICTP, Trieste).
Published in Phys.Rev.D59:086004,1999
e-Print Archive: hep-ph/9807344
202 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
10. NEUTRINO OSCILLATIONS IN MATTER
By L. Wolfenstein (Carnegie Mellon U.).
Published in Phys.Rev.D17:2369,1978
185 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
11. SUPERSYMMETRY, SUPERGRAVITY AND PARTICLE PHYSICS
By H.P. Nilles (Geneva U. & CERN).
Published in Phys.Rept.110:1,1984
182 citations between 1 Jan and 31 Dec 1999.
(Number of citations today)
12. HETEROTIC AND TYPE I STRING DYNAMICS FROM ELEVEN-DIMENSIONS
By Petr Horava (Princeton U.), Edward Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B460:506-524,1996
e-Print Archive: hep-th/9510209
179 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
13. THE SEARCH FOR SUPERSYMMETRY: PROBING PHYSICS BEYOND THE STANDARD MODEL
By Howard E. Haber (UC, Santa Cruz & SLAC), G.L. Kane (Michigan U.).
Published in Phys.Rept.117:75,1985
176 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
14. DIRICHLET BRANES AND RAMOND-RAMOND CHARGES
By Joseph Polchinski (Santa Barbara, ITP).
Published in Phys.Rev.Lett.75:4724-4727,1995
e-Print Archive: hep-th/9510017
175 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
15. M THEORY AS A MATRIX MODEL: A CONJECTURE
By T. Banks (Rutgers U., Piscataway), W. Fischler (Texas U.), S.H. Shenker (Rutgers U., Piscataway), L. Susskind (Stanford U., Phys. Dept.).
Published in Phys.Rev.D55:5112-5128,1997
e-Print Archive: hep-th/9610043
170 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
16. WHERE DO WE STAND WITH SOLAR NEUTRINO OSCILLATIONS?
By J.N. Bahcall (Princeton, Inst. Advanced Study), P.I. Krastev (Wisconsin U., Madison), A.Yu. Smirnov (ICTP, Trieste).
Published in Phys.Rev.D58:096016,1998
e-Print Archive: hep-ph/9807216
170 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
17. LARGE N FIELD THEORIES, STRING THEORY AND GRAVITY
By Ofer Aharony (Rutgers U., Piscataway), Steven S. Gubser (Harvard U.), Juan Maldacena (Harvard U. & Princeton, Inst. Advanced Study), Hirosi Ooguri (UC, Berkeley & LBL, Berkeley), Yaron Oz
e-Print Archive: hep-th/9905111
170 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
18. STRONG COUPLING EXPANSION OF CALABI-YAU COMPACTIFICATION
By Edward Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B471:135-158,1996
e-Print Archive: hep-th/9602070
167 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
19. ELECTRIC - MAGNETIC DUALITY, MONOPOLE CONDENSATION, AND CONFINEMENT IN N=2 SUPERSYMMETRIC YANG-MILLS THEORY
By N. Seiberg (Rutgers U., Piscataway & Princeton, Inst. Advanced Study), E. Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B426:19-52,1994 (Erratum-ibid B430:485-486,1994)
e-Print Archive: hep-th/9407087
161 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
By S.P. Mikheev, A.Yu. Smirnov (Moscow, INR).
Published in Sov.J.Nucl.Phys.42:913-917,1985 (Yad.Fiz.42:1441-1448,1985)
156 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
By H.L. Lai, J. Huston (Michigan State U.), S. Kuhlmann (Argonne), F. Olness (Southern Methodist U.), J. Owens (Florida State U.), D. Soper (Oregon U.), W.K. Tung, H. Weerts (Michigan State
Published in Phys.Rev.D55:1280-1296,1997
e-Print Archive: hep-ph/9606399
151 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
22. A POSSIBLE NEW DIMENSION AT A FEW TEV
By I. Antoniadis (Ecole Polytechnique & CERN).
Published in Phys.Lett.B246:377-384,1990
149 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
23. EXTRA SPACE-TIME DIMENSIONS AND UNIFICATION
By Keith R. Dienes (CERN), Emilian Dudas (CERN & ORSAY, LPTHE), Tony Gherghetta (CERN).
Published in Phys.Lett.B436:55-65,1998
e-Print Archive: hep-ph/9803466
148 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
24. TASI LECTURES ON D-BRANES
By Joseph Polchinski (Santa Barbara, ITP).
e-Print Archive: hep-th/9611050
146 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
By CHOOZ Collaboration (M. Apollonio et al.).
Published in Phys.Lett.B420:397-404,1998
e-Print Archive: hep-ex/9711002
146 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
26. HERWIG: A MONTE CARLO EVENT GENERATOR FOR SIMULATING HADRON EMISSION REACTIONS WITH INTERFERING GLUONS. VERSION 5.1 - APRIL 1991
By G. Marchesini (Parma U. & INFN, Parma), B.R. Webber (Cambridge U.), G. Abbiendi (Padua U. & INFN, Padua), I.G. Knowles (Glasgow U.), M.H. Seymour (Cambridge U.), L. Stanco (Padua U. &
INFN, Padua).
Published in Comput.Phys.Commun.67:465-508,1992
144 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
27. CP VIOLATION IN THE RENORMALIZABLE THEORY OF WEAK INTERACTION
By M. Kobayashi, T. Maskawa (Kyoto U.).
Published in Prog.Theor.Phys.49:652-657,1973
144 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
28. MEASUREMENT OF A SMALL ATMOSPHERIC MUON-NEUTRINO / ELECTRON-NEUTRINO RATIO
By Super-Kamiokande Collaboration (Y. Fukuda et al.).
Published in Phys.Lett.B433:9-18,1998
e-Print Archive: hep-ex/9803006
142 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
29. PARTICLE CREATION BY BLACK HOLES
By S.W. Hawking (Cambridge U.).
Published in Commun.Math.Phys.43:199-220,1975
140 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
30. ANTI-DE SITTER SPACE, THERMAL PHASE TRANSITION, AND CONFINEMENT IN GAUGE THEORIES
By Edward Witten (Princeton, Inst. Advanced Study).
Published in Adv.Theor.Math.Phys.2:505-532,1998
e-Print Archive: hep-th/9803131
139 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
31. WEAK SCALE SUPERSTRINGS
By Joseph D. Lykken (Fermilab).
Published in Phys.Rev.D54:3693-3697,1996
e-Print Archive: hep-th/9603133
137 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
32. STUDY OF THE ATMOSPHERIC NEUTRINO FLUX IN THE MULTI-GEV ENERGY RANGE
By Super-Kamiokande Collaboration (Y. Fukuda et al.).
Published in Phys.Lett.B436:33-41,1998
e-Print Archive: hep-ex/9805006
137 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
33. STRING THEORY DYNAMICS IN VARIOUS DIMENSIONS
By Edward Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B443:85-126,1995
e-Print Archive: hep-th/9503124
135 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
34. PARTON DISTRIBUTIONS: A NEW GLOBAL ANALYSIS
By A.D. Martin (Durham U.), R.G. Roberts (Rutherford), W.J. Stirling (Durham U.), R.S. Thorne (Oxford U.).
Published in Eur.Phys.J.C4:463-496,1998
e-Print Archive: hep-ph/9803445
131 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
35. BOUND STATES OF STRINGS AND P-BRANES
By Edward Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B460:335-350,1996
e-Print Archive: hep-th/9510135
130 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
36. CHIRAL PERTURBATION THEORY: EXPANSIONS IN THE MASS OF THE STRANGE QUARK
By J. Gasser (Bern U.), H. Leutwyler (CERN).
Published in Nucl.Phys.B250:465,1985
130 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
37. ELEVEN-DIMENSIONAL SUPERGRAVITY ON A MANIFOLD WITH BOUNDARY
By Petr Horava (Princeton U.), Edward Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B475:94-114,1996
e-Print Archive: hep-th/9603142
126 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
38. EVIDENCE FOR NU(MU) ---> NU(E) NEUTRINO OSCILLATIONS FROM LSND
By LSND Collaboration (C. Athanassopoulos et al.).
Published in Phys.Rev.Lett.81:1774-1777,1998
e-Print Archive: nucl-ex/9709006
124 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
39. DYNAMICAL PARTON DISTRIBUTIONS OF THE PROTON AND SMALL X PHYSICS
By M. Gluck, E. Reya (Dortmund U.), A. Vogt (DESY).
Published in Z.Phys.C67:433-448,1995
122 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
40. EVIDENCE FOR ANTI-MUON-NEUTRINO ---> ANTI-ELECTRON-NEUTRINO OSCILLATIONS FROM THE LSND EXPERIMENT AT LAMPF
By LSND Collaboration (C. Athanassopoulos et al.).
Published in Phys.Rev.Lett.77:3082-3085,1996
e-Print Archive: nucl-ex/9605003
122 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
41. A LARGE MASS HIERARCHY FROM A SMALL EXTRA DIMENSION
By Lisa Randall (Princeton U. & MIT, LNS), Raman Sundrum (Boston U.).
Published in Phys.Rev.Lett.83:3370-3373,1999
e-Print Archive: hep-ph/9905221
120 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
42. ASYMPTOTIC FREEDOM IN PARTON LANGUAGE
By G. Altarelli (Ecole Normale Superieure), G. Parisi (IHES, Bures).
Published in Nucl.Phys.B126:298,1977
120 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
43. MONOPOLES, DUALITY AND CHIRAL SYMMETRY BREAKING IN N=2 SUPERSYMMETRIC QCD
By N. Seiberg (Rutgers U., Piscataway & Princeton, Inst. Advanced Study), E. Witten (Princeton, Inst. Advanced Study).
Published in Nucl.Phys.B431:484-550,1994
e-Print Archive: hep-th/9408099
119 citations between 1 Jan and 31 Dec 1999.
(Number of citations between 1 Jan 1999 and today is here. Total number of citations is here)
SLAC-authored documents are sponsored by the U.S. Department of Energy under Contract DE-AC03-76-SF00515. Accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish
or reproduce these documents, or allow others to do so, for U.S. Government purposes. All documents available from this server may be protected under the U.S. and Foreign Copyright Laws.
Permission to reproduce may be required.
Top Cited HEP Articles, 1999 edition by Heath O'Connell, SLAC. Reviewer is Michael Peskin, SLAC. Original edition by H. Galic. Work performed at Stanford Linear Accelerator Center (SLAC)
Search HEP | SPIRES Databases | Comments
SLAC | SLAC Library | Stanford University
Questions and Comments to library@slac.stanford.edu
Updated: March 23, 2000
URL: http://www.slac.stanford.edu/library/topcites/top40.1999.html
|
{"url":"http://www.slac.stanford.edu/spires/topcites/older/topcites/top40.1999.html","timestamp":"2014-04-18T03:18:15Z","content_type":null,"content_length":"32488","record_id":"<urn:uuid:6a7fd7c5-7c69-4118-9a27-1d8326617bcb>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hanover, MA Math Tutor
Find a Hanover, MA Math Tutor
...I am a retired Spec. Ed teacher. I have an M.Ed. in Special Education from Boston College.
15 Subjects: including geometry, SAT math, algebra 1, prealgebra
...Typically it includes a review of basic algebra topics; various types of functions--including trigonometric and polynomial; series; limits; and an introduction to vectors. Most troubles with
introductory calculus are traceable to an inadequate mastery of algebra and trigonometry. As noted above, trigonometry is usually encountered as a part of a pre-calculus course.
7 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...I will tutor at my house or at the residence of the student, whichever is the most comfortable environment for the student and most conducive for learning. My tutoring style is pretty basic. I
figure out where the student is struggling and try different approaches until I find one that gives him or her a better understanding of what needs to be accomplished.
21 Subjects: including prealgebra, calculus, ACT Math, SPSS
...When working with students with diverse needs, I enjoy finding methods that make use of each student's unique strengths, talents and learning styles. With all students, I am excited to use
student-centered approaches to encourage critical thought and facilitate academic success. In other words, I love to teach!
16 Subjects: including SAT math, algebra 1, elementary (k-6th), grammar
...I am also willing to communicate with students' teachers to ensure we are on the same page and to best reinforce or supplement what is happening at school. I look forward to the opportunity to
work with you and hope to hear from you!In my teaching and tutoring experience I have worked with many s...
19 Subjects: including algebra 1, prealgebra, reading, English
Related Hanover, MA Tutors
Hanover, MA Accounting Tutors
Hanover, MA ACT Tutors
Hanover, MA Algebra Tutors
Hanover, MA Algebra 2 Tutors
Hanover, MA Calculus Tutors
Hanover, MA Geometry Tutors
Hanover, MA Math Tutors
Hanover, MA Prealgebra Tutors
Hanover, MA Precalculus Tutors
Hanover, MA SAT Tutors
Hanover, MA SAT Math Tutors
Hanover, MA Science Tutors
Hanover, MA Statistics Tutors
Hanover, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/hanover_ma_math_tutors.php","timestamp":"2014-04-19T23:24:27Z","content_type":null,"content_length":"23773","record_id":"<urn:uuid:d9b9915e-cdee-4001-b918-683f5fce901d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Differential Equations
You can use the Mathematica function DSolve to find symbolic solutions to ordinary and partial differential equations.
Solving a differential equation consists essentially in finding the form of an unknown function. In Mathematica, unknown functions are represented by expressions like y[x]. The derivatives of such functions are represented by y'[x], y''[x], and so on.
The Mathematica function DSolve returns as its result a list of rules for functions. There is a question of how these functions are represented. If you ask DSolve to solve for y[x], then DSolve will indeed return a rule for y[x]. In some cases, this rule may be all you need. But this rule, on its own, does not give values for y'[x] or even y[0]. In many cases, therefore, it is better to ask DSolve to solve not for y[x], but instead for y itself. In this case, what DSolve will return is a rule which gives y as a pure function, in the sense discussed in "Pure Functions".
Getting solutions to differential equations in different forms.
In standard mathematical notation, one typically represents solutions to differential equations by explicitly introducing "dummy variables" to represent the arguments of the functions that appear. If
all you need is a symbolic form for the solution, then introducing such dummy variables may be convenient. However, if you actually intend to use the solution in a variety of other computations, then
you will usually find it better to get the solution in pure-function form, without dummy variables. Notice that this form, while easy to represent in Mathematica, has no direct analog in standard
mathematical notation.
Solving simultaneous differential equations.
You can add constraints and boundary conditions for differential equations by explicitly giving additional equations such as .
If you ask Mathematica to solve a set of differential equations and you do not give any constraints or boundary conditions, then Mathematica will try to find a general solution to your equations.
This general solution will involve various undetermined constants. One new constant is introduced for each order of derivative in each equation you give.
The default is that these constants are named C[n], where the index n starts at 1 for each invocation of DSolve. You can override this choice by explicitly giving a setting for the option GeneratedParameters. Any function you give is applied to each successive index value n to get the constants to use for each invocation of DSolve.
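For instance (a worked illustration added here, not drawn from the original page), a single second-order equation introduces two such constants:
$y''(x) + y(x) = 0 \quad\Longrightarrow\quad y(x) = C_1 \cos x + C_2 \sin x,$
where $C_1$ and $C_2$ play the role of the generated parameters C[1] and C[2].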
You should realize that finding exact formulas for the solutions to differential equations is a difficult matter. In fact, there are only fairly few kinds of equations for which such formulas can be
found, at least in terms of standard mathematical functions.
The most widely investigated differential equations are linear ones, in which the functions you are solving for, as well as their derivatives, appear only linearly.
If you have only a single linear differential equation, and it involves only a first derivative of the function you are solving for, then it turns out that the solution can always be found just by
doing integrals.
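As an illustration of this point (added here, not part of the original page), the general first-order linear equation is solved with an integrating factor, so the answer involves nothing beyond integrals:
$y'(x) + p(x)\,y(x) = q(x) \quad\Longrightarrow\quad y(x) = e^{-\int p(x)\,dx}\left(\int q(x)\,e^{\int p(x)\,dx}\,dx + C\right).$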
But as soon as you have more than one differential equation, or more than a first-order derivative, this is no longer true. However, some simple second-order linear differential equations can
nevertheless be solved using various special functions from "Special Functions". Indeed, historically many of these special functions were first introduced specifically in order to represent the
solutions to such equations.
Beyond second order, the kinds of functions needed to solve even fairly simple linear differential equations become extremely complicated. At third order, the generalized Meijer G function MeijerG
can sometimes be used, but at fourth order and beyond absolutely no standard mathematical functions are typically adequate, except in very special cases.
For nonlinear differential equations, only rather special cases can usually ever be solved in terms of standard mathematical functions. Nevertheless, DSolve includes fairly general procedures which
allow it to handle almost all nonlinear differential equations whose solutions are found in standard reference books.
In practical applications, it is quite often convenient to set up differential equations that involve piecewise functions. You can use DSolve to find symbolic solutions to such equations.
Beyond ordinary differential equations, one can consider differential-algebraic equations that involve a mixture of differential and algebraic equations.
Solving partial differential equations.
DSolve is set up to handle not only ordinary differential equations in which just a single independent variable appears, but also partial differential equations in which two or more independent
variables appear.
The basic mathematics of partial differential equations is considerably more complicated than that of ordinary differential equations. One feature is that whereas the general solution to an ordinary
differential equation involves only arbitrary constants, the general solution to a partial differential equation, if it can be found at all, must involve arbitrary functions. Indeed, with n independent variables, arbitrary functions of n-1 arguments appear. DSolve by default names these functions C[n].
For an ordinary differential equation, it is guaranteed that a general solution must exist, with the property that adding initial or boundary conditions simply corresponds to forcing specific choices
for arbitrary constants in the solution. But for partial differential equations this is no longer true. Indeed, it is only for linear partial differential and a few other special types that such
general solutions exist.
Other partial differential equations can be solved only when specific initial or boundary values are given, and in the vast majority of cases no solutions can be found as exact formulas in terms of
standard mathematical functions.
|
{"url":"http://reference.wolfram.com/mathematica/tutorial/DifferentialEquations.html","timestamp":"2014-04-18T05:49:19Z","content_type":null,"content_length":"68659","record_id":"<urn:uuid:42216ef2-433a-4c58-9ce9-c28028224b0b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Glendale, WI SAT Math Tutor
Find a Glendale, WI SAT Math Tutor
...I have been tutoring this subject for over 4 years and know the material very well. My undergrad degree was in Microbiology from UW-Milwaukee. I know all the microscopic information on an
intracellular level as well as intercellular.
46 Subjects: including SAT math, chemistry, statistics, geometry
...These informal assessments can be one or more of the following: silent and oral reading comprehension, learning modality, study skills inventory, etc. Then a diagnostic TEAS is taken for each
part of the TEAS (reading, mathematics, science, English and language usage). The diagnostics generall...
36 Subjects: including SAT math, English, reading, biology
...Many times, math looks hard because all of the little tricks and turns that a teacher can take when they know math. I may know that the variable "x" is different from the multiplication sign
"x" in scientific notation but I realize that they look the same to you. Math classes today are more about thinking and applying than just crunching numbers.
14 Subjects: including SAT math, calculus, statistics, geometry
Hello. I am proud to be a statistics geek. I like to show students that numbers aren't as intimidating as they seem.
21 Subjects: including SAT math, English, reading, statistics
...As I near the end of my requirements, I continue to look for opportunities to cultivate my craft and develop meaningful connections with those that need help. I know that I am among good
company. There are many talented educators within this community.
22 Subjects: including SAT math, Spanish, writing, geometry
Related Glendale, WI Tutors
Glendale, WI Accounting Tutors
Glendale, WI ACT Tutors
Glendale, WI Algebra Tutors
Glendale, WI Algebra 2 Tutors
Glendale, WI Calculus Tutors
Glendale, WI Geometry Tutors
Glendale, WI Math Tutors
Glendale, WI Prealgebra Tutors
Glendale, WI Precalculus Tutors
Glendale, WI SAT Tutors
Glendale, WI SAT Math Tutors
Glendale, WI Science Tutors
Glendale, WI Statistics Tutors
Glendale, WI Trigonometry Tutors
Nearby Cities With SAT math Tutor
Bayside, WI SAT math Tutors
Brookfield, WI SAT math Tutors
Brown Deer, WI SAT math Tutors
Fox Point, WI SAT math Tutors
Greenfield, WI SAT math Tutors
Menomonee Falls SAT math Tutors
Mequon SAT math Tutors
Milwaukee, WI SAT math Tutors
New Berlin, WI SAT math Tutors
River Hills, WI SAT math Tutors
Shorewood, WI SAT math Tutors
Wauwatosa, WI SAT math Tutors
West Allis, WI SAT math Tutors
West Milwaukee, WI SAT math Tutors
Whitefish Bay, WI SAT math Tutors
|
{"url":"http://www.purplemath.com/Glendale_WI_SAT_math_tutors.php","timestamp":"2014-04-17T22:03:32Z","content_type":null,"content_length":"23965","record_id":"<urn:uuid:c6a1eb05-9160-4d8d-879d-bbcb2830e76b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Coq proof assistant user's guide. Version 5.8
Results 11 - 20 of 43
- FOURTEENTH ANNUAL IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE , 1999
"... This paper extends the termination proof techniques based on reduction orderings to a higher-order setting, by adapting the recursive path ordering definition to terms of a typed lambda-calculus
generated by a signature of polymorphic higher-order function symbols. The obtained ordering is well-foun ..."
Cited by 44 (10 self)
This paper extends the termination proof techniques based on reduction orderings to a higher-order setting, by adapting the recursive path ordering definition to terms of a typed lambda-calculus
generated by a signature of polymorphic higher-order function symbols. The obtained ordering is well-founded, compatible with fi-reductions and with polymorphic typing, monotonic with respect to the
function symbols, and stable under substitution. It can therefore be used to prove the strong normalizationproperty of higher-order calculi in which constants can be defined by higher-order rewrite
rules. For example, the polymorphic version of Gödel's recursor for the natural numbers is easily oriented. And indeed, our ordering is polymorphic, in the sense that a single comparison allows to
prove the termination property of all monomorphic instances of a polymorphic rewrite rule. Several other non-trivial examples are given which examplify the expressive power of the ordering.
- ACM TRANS. PROGRAM. LANG. SYST , 2002
"... ..."
- Types for Proofs and Programs, volume 996 of Lecture Notes in Computer Science , 1995
"... . We have embedded the meta-theory of I/O automata, a model for describing and reasoning about distributed systems, in Isabelle 's version of higher order logic. On top of that, we have
specified and verified a recent network transmission protocol which achieves reliable communication using sing ..."
Cited by 22 (2 self)
. We have embedded the meta-theory of I/O automata, a model for describing and reasoning about distributed systems, in Isabelle 's version of higher order logic. On top of that, we have specified and
verified a recent network transmission protocol which achieves reliable communication using single-bit-header packets over a medium which may reorder packets arbitrarily. 1 Introduction This paper
describes a formalization of Input/Output automata (IOA), a particular model for concurrent and distributed discrete event systems due to Lynch and Tuttle [9], inside Isabelle/HOL, a theorem prover
for higher-order logic [12]. The motivation for our work is twofold: -- The verification of distributed systems is a challenging application for formal methods because in that area informal arguments
are notoriously unreliable. -- This area is doubly challenging for interactive general purpose theorem provers because model-checking [4] already provides a successful automatic approach to the
, 1994
"... This paper gives an introduction to type theory, focusing on its recent use as a logical framework for proofs and programs. The first two sections give a background to type theory intended for
the reader who is new to the subject. The following presents Martin-Lof's monomorphic type theory and an im ..."
Cited by 21 (2 self)
This paper gives an introduction to type theory, focusing on its recent use as a logical framework for proofs and programs. The first two sections give a background to type theory intended for the
reader who is new to the subject. The following presents Martin-Lof's monomorphic type theory and an implementation, ALF, of this theory. Finally, a few small tutorial examples in ALF are given.
"... This note describes a protocol for the transmission of data packets that are too large to be transferred in their entirety. Therefore, the protocol splits the data packets and broadcasts it in
parts. It is assumed that in case of failure of transmission through data channels, only a limited number o ..."
Cited by 20 (8 self)
This note describes a protocol for the transmission of data packets that are too large to be transferred in their entirety. Therefore, the protocol splits the data packets and broadcasts it in parts.
It is assumed that in case of failure of transmission through data channels, only a limited number of retries are allowed (bounded retransmission). If repeated failure occurs, the protocol stops
trying and the sending and receiving protocol users are informed accordingly. The protocol and its external behaviour are specified in µCRL. The correspondence between these is shown using the axioms of µCRL. The whole proof of this correspondence has been computer checked using the proof checker Coq. This provides an example showing that proof checking of realistic protocols is feasible within
the setting of process algebras.
- Informal Proceedings of First Workshop on Logical Frameworks , 1992
"... A proof checking system may support syntax that is more convenient for users than its `official' language. For example LEGO (a typechecker for several systems related to the Calculus of
Constructions) has algorithms to infer some polymorphic instantiations (e.g. pair 2 true instead of pair nat bo ..."
Cited by 20 (2 self)
A proof checking system may support syntax that is more convenient for users than its `official' language. For example LEGO (a typechecker for several systems related to the Calculus of
Constructions) has algorithms to infer some polymorphic instantiations (e.g. pair 2 true instead of pair nat bool 2 true) and universe levels (e.g. Type instead of Type(4)). Users need to understand
such features, but do not want to know the algorithms for computing them. In this note I explain these two features by non-deterministic operational semantics for "translating" implicit syntax to the
fully explicit underlying formal system. The translations are sound and complete for the underlying type theory, and the algorithms (which I will not talk about) are sound (not necessarily complete)
for the translations. This note is phrased in terms of a general class of type theories. The technique described has more general application. 1 Introduction Consider the usual formal system, !, for
- Proceedings of the 2 nd International Symposium on Theoretical Aspects of Computer Software, TACS '94 , 1994
"... We present an equational verification of Milner's scheduler, which we checked by computer. To our knowledge this is the first time that the scheduler is proof-checked for a general number n of
scheduled processes. 1991 Mathematics Subject Classification: 68Q60, 68T15. 1991 CR Categories: F.3.1. K ..."
Cited by 18 (5 self)
We present an equational verification of Milner's scheduler, which we checked by computer. To our knowledge this is the first time that the scheduler is proof-checked for a general number n of
scheduled processes. 1991 Mathematics Subject Classification: 68Q60, 68T15. 1991 CR Categories: F.3.1. Keywords & Phrases: Coq, micro CRL, Milner's Scheduler, proof checking, type theory. Other
versions: This report is a more detailed version of [16], brought out at the University of Utrecht. An extended abstract will appear in the LNCS Proceedings of TACS'94 (International Symposium on
Theoretical Aspects of Computer Software, Japan, April 1994). Support: The work of the first author took place in the context of EC Basic Research Action 7166 concur 2. The work of the second author
is supported by the Netherlands Computer Science Research Foundation (SION) with financial support of the Netherlands Organisation for Scientific Research (NWO). 1
- Models, Algebras and Logic of Engineering Software , 2003
"... We describe a methodology for proving theorems mechanically about Java methods. The theorem prover used is the ACL2 system, an industrial-strength version of the Boyer-Moore theorem prover. An
operational semantics for a substantial subset of the Java Virtual Machine (JVM) has been defined in ACL2. ..."
Cited by 18 (9 self)
We describe a methodology for proving theorems mechanically about Java methods. The theorem prover used is the ACL2 system, an industrial-strength version of the Boyer-Moore theorem prover. An
operational semantics for a substantial subset of the Java Virtual Machine (JVM) has been defined in ACL2. Theorems are proved about Java methods and classes by compiling them with javac and then
proving the corresponding theorem about the JVM. Certain automatically applied strategies are implemented with rewrite rules (and other proof-guiding pragmas) in ACL2 “books” to control the theorem
prover when operating on problems involving the JVM model. The Java Virtual Machine or JVM [27] is the basic abstraction Java [17] implementors are expected to respect. We speculate that the JVM is
an appropriate level of abstraction at which to model Java programs with the intention of mechanically verifying their properties. The most complex features of the Java subset we handle –
construction and initialization of new objects, synchronization, thread management, and virtual method invocation – are all supported directly and with full abstraction as single atomic instructions
in the JVM. The complexity of verifying JVM bytecode program stems from the complexity of Java’s semantics, not
- UNIVERSITY OF MARIBOR , 1996
"... This paper reports about the formal specification and verification of a Bounded Retransmission Protocol (Brp) used by Philips in one of its products. We started with the descriptions of the Brp
service (i.e., external behaviour) and protocol written in the µCrl language by Groote and van de Pol. Aft ..."
Cited by 16 (2 self)
This paper reports about the formal specification and verification of a Bounded Retransmission Protocol (Brp) used by Philips in one of its products. We started with the descriptions of the Brp
service (i.e., external behaviour) and protocol written in the µCrl language by Groote and van de Pol. After translating them in the Lotos language, we performed verifications by model-checking using
the Cadp (Caesar/Aldébaran) toolbox. The models of the Lotos descriptions were generated using the Caesar compiler (by putting bounds on the data domains) and checked to be branching equivalent using
the Aldébaran tool. Alternately, we formulated in the Actl temporal logic a set of safety and liveness properties for the Brp protocol and checked them on the corresponding model using our Xtl
generic model-checker.
- Proc. of the 4th Inter Symp. of Formal Methods Europe, FME'97: Industrial Applications and Strengthened Foundations of Formal Methods , 1997
"... . Interactive theorem proving gives a general approach for modelling and verification of both hardware and software systems but requires significant human efforts to deal with many tedious
proofs. To be used in practical, we need some automatic tools such as model checkers to deal with those tedious ..."
Cited by 14 (3 self)
. Interactive theorem proving gives a general approach for modelling and verification of both hardware and software systems but requires significant human efforts to deal with many tedious proofs. To
be used in practical, we need some automatic tools such as model checkers to deal with those tedious proofs. In this paper, we formalise a verification system of both CCS and an imperative language
in LEGO which can be used to verify both finite and infinite problems. Then a model checker, LegoMC, is implemented to generate the LEGO proof terms of finite models automatically. Therefore people
can use LEGO to verify a general problem and throw some finite sub-problems to be verified by LegoMC. On the other hand, this integration extends the power of model checking to verify more
complicated and infinite models as well. 1 Introduction Interactive theorem proving gives a general approach for modelling and verification of both hardware and software systems but requires
significant human effor...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=59951&sort=cite&start=10","timestamp":"2014-04-18T16:55:44Z","content_type":null,"content_length":"38797","record_id":"<urn:uuid:62393010-cf57-41d0-b92d-7587da949f12>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Browse by Creator
Number of items: 25.
Hill, A. A. and Carr, M. (2013) The influence of a fluid-porous interface on solar pond stability. Advances in Water Resources, 52. pp. 1-6. ISSN 0309-1708
Hill, A. A. and Carr, M. (2013) Stabilising solar ponds by utilising porous materials. Advances in Water Resources, 60. pp. 1-6. ISSN 0309-1708
Hill, A. A. and Malashetty, M. S. (2012) An operative method to obtain sharp nonlinear stability for systems with spatially dependent coefficients. Proceedings of the Royal Society A, 468. pp.
323-336. ISSN 1471-2946
Capone, F., Gentile, M. and Hill, A. A. (2011) Double-diffusive penetrative convection simulated via internal heating in an anisotropic porous layer with throughflow. International Journal of Heat
and Mass Transfer, 54 (7-8). pp. 1622-1626. ISSN 0017-9310
Capone, F., Gentile, M. and Hill, A. A. (2011) Penetrative convection in anisotropic porous media with variable permeability. Acta Mechanica, 216 (1-4). pp. 49-58. ISSN 0001-5970
Hill, A. A. and Carr, M. (2010) Nonlinear stability of the one-domain approach to modelling convection in superposed fluid and porous layers. Proceedings of the Royal Society A, 466 (2121). pp.
2695-2705. ISSN 1364-5021
Capone, F., Gentile, M. and Hill, A. A. (2010) Penetrative convection via internal heating in anisotropic porous media. Mechanics Research Communications, 37 (5). pp. 441-444. ISSN 0093-6413
Hill, A. A. and Carr, M. (2010) Sharp global nonlinear stability for a fluid overlying a highly porous material. Proceedings of the Royal Society A, 466 (2113). pp. 127-140. ISSN 1364-5021
Capone, F., Gentile, M. and Hill, A. A. (2009) Anisotropy and symmetry in porous media convection. Acta Mechanica, 208 (3-4). pp. 205-214. ISSN 0001-5970
Hill, A. A. and Straughan, B. (2009) Global stability for thermal convection in a fluid overlying a highly porous material. Proceedings of the Royal Society A, 465 (2101). pp. 207-217. ISSN 1364-5021
Hill, A. A. (2009) Instability of Poiseuille flow in a fluid overlying a glass bead packed porous layer. Acta Mechanica, 206 (1-2). pp. 95-103. ISSN 0001-5970
Hill, A. A. and Straughan, B. (2009) Poiseuille flow in a fluid overlying a highly porous material. Advances in Water Resources, 32 (11). pp. 1609-1614. ISSN 0309-1708
Hill, A. A. (2009) A differential constraint approach to obtain global stability for radiation induced double-diffusive convection in a porous medium. Mathematical methods in the applied sciences, 32
(8). pp. 914-921. ISSN 1099-1476
Hill, A. A. (2008) Global stability for penetrative double-diffusive convection in a porous medium. Acta Mechanica, 200 (1-2). pp. 1-10. ISSN 0001-5970
Capone, F., Gentile, M. and Hill, A. A. (2008) Penetrative convection in a fluid layer with throughflow. Ricerche di Matematica, 57 (2). pp. 251-260. ISSN 0035-5038
Hill, A. A. and Straughan, B. (2008) Poiseuille flow in a fluid overlying a porous medium. Journal of Fluid Mechanics, 603. pp. 137-149. ISSN 0022-1120
Hill, A. A., Rionero, S. and Straughan, B. (2007) Global stability for penetrative convection with throughflow in a porous material. IMA Journal of Applied Mathematics, 72 (5). pp. 635-643. ISSN
Hill, A. A. (2007) Unconditional nonlinear stability for convection in a porous medium with vertical throughflow. Acta Mechanica, 193 (3-4). pp. 197-206. ISSN 0001-5970
Hill, A. A. and Straughan, B. (2006) Linear and non-linear stability thresholds for thermal convection in a box. Mathematical Methods in the Applied Sciences, 29 (17). pp. 2123-2132. ISSN 0170-4214
Hill, A. A. and Straughan, B. (2006) A legendre spectral element method for eigenvalues in hydrodynamic stability. Journal of Computational and Applied Mathematics, 193 (1). pp. 363-381. ISSN
Hill, A. A. (2005) Double-diffusive convection in a porous medium with a concentration based internal heat source. Proceedings of the Royal Society A, 461 (2054). pp. 561-574. ISSN 1364-5021
Hill, A. A. (2004) Conditional and unconditional nonlinear stability for convection induced by absorption of radiation in a porous medium. Continuum Mechanics and Thermodynamics, 16 (4). pp. 305-318.
ISSN 0935-1175
Hill, A. A. (2004) Convection induced by the selective absorption of radiation for the Brinkman model. Continuum Mechanics and Thermodynamics, 16 (1-2). pp. 43-52. ISSN 0935-1175
Hill, A. A. (2004) Penetrative convection induced by the absorption of radiation with a nonlinear internal heat source. Dynamics of Atmospheres and Oceans, 38 (1). pp. 57-67. ISSN 0377-0265
Hill, A. A. (2003) Convection due to the selective absorption of radiation in a porous medium. Continuum Mechanics and Thermodynamics, 15 (3). pp. 275-285. ISSN 0935-1175
This list was generated on Thu Apr 17 02:58:17 2014 BST.
|
{"url":"http://eprints.uwe.ac.uk/view/author/Hill=3AAntony_A=2E=3A=3A.date.html","timestamp":"2014-04-17T15:28:21Z","content_type":null,"content_length":"36431","record_id":"<urn:uuid:ee4ad7e2-d4e4-42e4-990b-27cf6956e83b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
standard deviation of two data sets(combined)
January 23rd 2013, 12:55 AM
standard deviation of two data sets(combined)
I have two sets of data and need to find their combined standard deviation.
What I know: 1) for a set of 10 numbers, sum of x = 50 and sum of x(squared) = 310;
2) for a set of 15 numbers, sum of x = 86 and sum of x(squared) = 568.
I know how to find the standard deviation of the individual sets but don't know how to
combine them.
January 23rd 2013, 10:17 PM
Re: standard deviation of two data sets(combined)
I am giving the formula for the Combined Variance. Once you find this, the combined standard deviation is only a single step away...
January 24th 2013, 01:46 AM
Re: standard deviation of two data sets(combined)
Yes thank you I have seen this online but I'm getting a different answer
from it than the one in my book which says 3.25.
Can someone help me out with this?
January 25th 2013, 02:00 AM
Re: standard deviation of two data sets(combined)
I now know why my book says 3.25: they have just added the two standard deviations,
which I can see is wrong.
Can someone please give me the right answer so that I know for myself;
from the formula provided by Sambit I get a combined standard deviation of 5.5876. Is this correct?
Please pop over to the calculator forum and help me out with putting two groups of frequency data in
and getting the combined standard deviation out.
Thank you.
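A note for later readers: because the sums simply pool together, the combined standard deviation can be computed straight from the totals given above. A minimal Python sketch (Python is only used here for illustration; whether the population or the sample version is wanted depends on the textbook's convention):
from math import sqrt
n1, sx1, sxx1 = 10, 50, 310    # first set: count, sum of x, sum of x squared
n2, sx2, sxx2 = 15, 86, 568    # second set
n, sx, sxx = n1 + n2, sx1 + sx2, sxx1 + sxx2   # the sums of the combined data
mean = sx / n
var_pop = sxx / n - mean ** 2                  # population version
var_samp = (sxx - n * mean ** 2) / (n - 1)     # sample (n - 1) version
print(sqrt(var_pop), sqrt(var_samp))           # about 2.35 and 2.40 for these sums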
|
{"url":"http://mathhelpforum.com/statistics/211903-standard-deviation-two-data-sets-combined-print.html","timestamp":"2014-04-18T06:09:30Z","content_type":null,"content_length":"5382","record_id":"<urn:uuid:8d23e9bc-2fac-4201-b638-2fbf9b862184>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bar Graphs and Histograms
Remember the difference between discrete and continuous data: discrete data has clear separation between the different possible values, while continuous data doesn't. We use bar graphs for displaying
discrete data, and histograms for displaying continuous data. A bar graph has nothing to do with pubs, and a histogram is not someone you hire to show up at your friend's front door and sing to them
about the French Revolution.
We'll do bar graphs first.
Sample Problem
The colors of students' backpacks are recorded as follows:
red, green, red, blue, black, blue, blue, blue, red, blue, blue, black.
Draw a bar graph for this data.
The values observed were red, green, blue, and black. If we had bins marked "red,'' "green,'' "blue,'' and "black,'' we could sort the backpacks into the appropriate bins. After doing this, how many
backpacks would be in each bin?
Looks like most of these kids are singin' the blues.
A bar graph is a visual way of displaying what the data looks like after it's been sorted. We start with axes. The graph kind, not the "I want to be a lumberjack" kind. The horizontal axis has the
names of the bins (in this case, the colors), and the vertical axis is labelled "Quantity":
Above each bin (color) we put a bar whose height is the number of things in that bin (backpacks of that color):
Since we're super fancy, we've also colored the bars different colors.
The advantage of doing this is that you won't wear your black crayon down to its nub and be left with a box full of unused colors. Time to let "chartreuse" see the light of day. If you're not using
crayons, please disregard.
The bar graph shows what things would look like if we stacked up the backpacks in their respective bins. You'll need to imagine the zippers and pockets though. We won't be drawing those in the graph.
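If you would rather let a computer do the sorting, the tallying step is tiny. Here is a minimal Python sketch (purely illustrative) using the backpack colors from the sample problem:
from collections import Counter
backpacks = ["red", "green", "red", "blue", "black", "blue",
             "blue", "blue", "red", "blue", "blue", "black"]
counts = Counter(backpacks)   # how many backpacks landed in each color bin
print(counts)                 # Counter({'blue': 6, 'red': 3, 'black': 2, 'green': 1})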
Check this out for a silly—yet oddly entertaining—way to comprehend bar graphs.
Sample Problem
A fruit basket contains four kiwis, eight apples, six bananas, and one dragonfruit.
Draw a bar graph for the kinds of fruit in the basket. If the dragonfruit gets angry, just feed it one of the kiwis. They love New Zealanders.
If we sort the fruit into bins, we'll need bins for kiwis, apples, bananas, and dragonfruit, so the bar graph axes will look like this:
Sorting the fruit into the appropriate bins gives us this bar graph:
Here's a slightly different sort of bar graph, where we compare a small number of observations. We're still making bins and putting things in the bins. Don't bother going to your local Target and
trying to get more bins. We've cleaned them out.
A bar graph makes it easy to see at a glance which bin has the most objects and which bin has the fewest objects. By the way, you'd better have some large bins on hand if you're planning to put
entire companies into them. We're talking industrial size.
Sample Problem
A survey was conducted of 79 kids at a local swimming pool to find their favorite hot-weather refreshment. Use the bar graph below to answer the questions.
1. How many kids said ice cream was their favorite hot weather refreshment?
2. Which refreshment was the favorite of exactly 20 kids?
3. Which was more popular, popsicles or ice water?
4. How many kids said hot coffee was their favorite hot weather refreshment?
1. 40, since that's the height of the bar in the "ice cream" bin.
2. Soda, since there are 20 kids in the "soda" bin.
3. Popsicles, since the bar above "popsicles" is slightly taller than the bar above "ice water."
4. Only one, but we felt like he was probably messing with us, so we threw out that result.
Now for histograms. Histograms are used for displaying data where the separation between "bins" is not so clear, and we need to make decisions about what bins we'd like to use. We hate making
decisions though, which is why we're going to keep our Magic 8-Ball within reach, just in case.
Sample Problem
A bunch of fish were caught in a lake. Don't worry, we'll throw them back in after we're done with this example. The lengths of the fish, in inches, were
6.2, 6.2, 6.55, 7, 7.4, 8.5, 8.6, 9, 9.1, 9.2, 9.25, 9.3, 10.4, 10.5, 10.6.
Make some histograms to display this data.
Let's make a histogram where the first bin contains all fish that are at least 6 but not more than 7 inches, the next contains all fish that are at least 7 but not more than 8 inches, and so on. That
way, we won't have any long fish in with any short fish, and we don't need to worry about any of them getting inch envy. How many fish are in each bin?
To draw the histogram, we draw something that looks very much like a bar graph. The axes are labelled similarly, and the height of each bar corresponds to the number of fish in the bin. The bars are
touching because there is no clear separation between different bins (there's not much difference between a fish 7 inches long and a fish 6.999 inches long, although one of them does get to go on a
few more amusement park rides).
The bins we picked for the histogram were arbitrary. We could just as well draw a histogram where the bins held fish that were 6 to 6.5 inches, 6.5 to 7 inches, and so on. We just don't like to use
half-inches if we can help it, because we don't want to encourage decimal points. They already insert themselves far too often.
We use the word interval to refer to the "size'' of the bins in a histogram. In the first histogram we drew for the fish lengths, the bins were 1 inch (6 to 7 inches, 7 to 8 inches, etc.). In the second histogram the bins were 1/2 inch (6 to 6.5 inches, 6.5 to 7 inches, etc.).
As the interval becomes smaller, the histogram gives a closer, more precise picture of what the data looks like. In the histogram with interval 1 inch, we could see that there were 5 fish between 9 and 10 inches. In the histogram with interval 1/2 inch, we could see that all 5 of those fish were actually between 9 and 9.5 inches.
There's nothing particularly special about intervals of 1 or 1/2 inch; we could use whatever interval gives a clear picture of the data.
Remember: when putting data into the bins, if a number is on the boundary between two bins, we put it into the right one. If the bins are 6 to 7, and 7 to 8, then the number 7 goes in the bin from 7
to 8. It can laugh all it wants at its 6.999 friend from there.
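To see the interval and the boundary rule in action on the fish data, here is a minimal Python sketch (purely illustrative); the half-open bins put a boundary value such as 7 or 8.5 into the higher bin, exactly as described above:
lengths = [6.2, 6.2, 6.55, 7, 7.4, 8.5, 8.6, 9, 9.1, 9.2, 9.25, 9.3, 10.4, 10.5, 10.6]
def bin_counts(data, start, stop, width):
    # half-open bins [start, start + width), ... so a boundary value lands in the higher bin
    counts = [0] * int(round((stop - start) / width))
    for x in data:
        counts[int((x - start) / width)] += 1
    return counts
print(bin_counts(lengths, 6, 11, 1))     # interval 1 inch:   [3, 2, 2, 5, 3]
print(bin_counts(lengths, 6, 11, 0.5))   # interval 1/2 inch: [2, 1, 2, 0, 0, 2, 5, 0, 1, 2]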
Now that we've built both bar graphs and histograms, let's revisit the differences between them. Some differences are more important than others. The fact that bar graphs drink Coke while histograms
prefer Pepsi, for example, is irrelevant.
Bar graphs
• are drawn to display discrete data.
• have "bins'' that can be figured out easily by looking at the data (colors, or genres, or dollars).
• usually have white space between the bars, but the bars can be touching (not super important).
• can be used to compare a small number of values (allowance per person, or income per company).
Histograms
• are drawn to display continuous data.
• can have whatever "bins'' you want, depending on how detailed you want the histogram to be.
• have their bars touching; might be a good idea to use bar sanitizer to prevent the spread of germs.
|
{"url":"http://www.shmoop.com/probability-statistics/bar-graphs-histograms.html","timestamp":"2014-04-19T11:58:09Z","content_type":null,"content_length":"46411","record_id":"<urn:uuid:469c4618-4ef2-4db9-9db1-8fee4dad65a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Double integral find volume
May 10th 2010, 03:47 PM
[SOLVED] Double integral find volume
integral from 0 to 2 (integral from 0 to 1 of (2x+y)^8 dx) dy
My impulse is to use substitution but I am not sure if I can or not
May 10th 2010, 03:55 PM
$\int_0^2\!\int_0^1 (2x+y)^8\,dx\,dy=\frac{1}{18}\int_0^2\left[(2+y)^9-y^9\right]dy=\frac{1}{18}\left[\frac{(2+y)^{10}}{10}-\frac{y^{10}}{10}\right]^2_0=\ldots$ Why substitution if directly is pretty simple?
May 10th 2010, 05:21 PM
I think I see now. I just wasn't in the y is a constant mindset
261632/45 book answer awesome thanks. that problem kind of sucked
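For anyone who wants to double-check the arithmetic later, the exact value can be reproduced with a short SymPy computation (just a quick check, assuming SymPy is available):
from sympy import symbols, integrate
x, y = symbols('x y')
print(integrate((2*x + y)**8, (x, 0, 1), (y, 0, 2)))   # 261632/45, matching the book answer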
|
{"url":"http://mathhelpforum.com/calculus/144062-solved-double-integral-find-volume-print.html","timestamp":"2014-04-17T07:17:20Z","content_type":null,"content_length":"5554","record_id":"<urn:uuid:e989c002-c264-4dfb-894e-bf80f8bdd87c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Views: 2,977
Date Added: July 14, 2009
Lecture Description
This video lecture, part of the series Calculus Videos: Series and Sequences by Prof. , does not currently have a detailed description and video lecture title. If you have watched this lecture and
know what it is about, particularly what Mathematics topics are discussed, please help us by commenting on this video with your suggested description and title. Many thanks from,
- The CosmoLearning Team
Course Index
Course Description
In this course, Calculus Instructor Patrick gives 30 video lessons on Series and Sequences. Some of the Topics covered are: Convergence and Divergence, Geometric Series, Test for Divergence,
Telescoping Series, Integral Test, Limit and Direct Comparison Test, Alternating Series, Alternating Serie... (read more)
|
{"url":"http://www.cosmolearning.com/video-lectures/integral-test-for-series/","timestamp":"2014-04-18T23:16:33Z","content_type":null,"content_length":"42415","record_id":"<urn:uuid:caaebb72-437b-48e8-84bf-040c804280eb>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Defining variable, symbol, indeterminate and parameter
Are there precise definitions for what a variable, a symbol, a name, an indeterminate, a meta-variable, and a parameter are?
In informal mathematics, they are used in a variety of ways, and often in incompatible ways. But one nevertheless gets the feeling (when reading mathematicians who are very precise) that many of
these terms have subtly different semantics.
For example, an 'indeterminate' is almost always a 'dummy' in the sense that the meaning of a sentence in which it occurs is not changed in any way if that indeterminate is replaced by a fresh 'name' ($\
alpha$-equivalence). A parameter is usually meant to represent an arbitrary (but fixed) value of a particular 'domain'; in practice, one frequently does case-analysis over parameters when solving a
parametric problem. And while a parameter is meant to represent a value, an 'indeterminate' usually does not represent anything -- unlike a variable, which is usually a placeholder for a value. But
variables and parameters are nevertheless qualitatively different.
The above 2 paragraphs are meant to make the intent of my question (the first sentence of this post) more precise. I am looking for answers of the form "an X denotes a Y".
big-picture math-philosophy
@Jacques, such philosophical questions are perfect as community wiki. – Wadim Zudilin Jun 25 '10 at 7:24
2 @Wadim: I was rather hoping that this was not really philosophical (anymore). Isn't this so basic that there should be definitions? – Jacques Carette Jun 25 '10 at 11:36
@Jacques: this is the type of thing that is so basic that there are not mathematical definitions. Unless mathematicians were proving things about variables, why would they need to define them
precisely? – Carl Mummert Apr 27 '11 at 10:14
@Carl: This is exactly what foundation studies are all about: take things that were thought basic and define them formally. Both logic and set theory did that, for a part of mathematics. But it
appears that there are still parts of mathematics which are not formal. Why is that? – Jacques Carette Apr 27 '11 at 12:30
6 Answers
In written English (and of course other languages), we have linguistic constructs which tell the reader how to approach the ideas that are about to be presented. For example, if I begin a
sentence with "However, . . .", the reader expects a caution about a previously stated proposition, but if I begin the sentence with "Indeed, . . . ", the reader expects supporting evidence
for a position. Of course we could completely discard such language and the same ideas would be communicated, but at much greater effort. I regard the words "variable", "constant",
"parameter", and so on, in much the same way I regard "however", "indeed", and "of course"; these words are informing me about potential ways to envision the objects I am learning about. For
example, when I read that "$x$ is a variable", I regard $x$ as able to engage in movement; it can float about the set it is defined upon. But if $c$ is an element of the same set, I regard it as nailed down; "for each" is the appropriate quantifier for the letter $c$. And when (say) $\xi$ is a parameter, then I envision an uncountable set of objects generated by $\xi$, but $\xi$ itself cannot engage in movement. Finally, when an object is referred to as a symbol, then I regard its ontological status as in doubt until further proof is given. Such as: "Let the symbol '$Lv$' denote the limit of the sequence $\lbrace L_{n}v \rbrace_{n=1}^{\infty}$ for each $v \in V$. With this definition, we can regard $L$ as a function defined on $V$. . . "
So in short, I regard constructing precise mathematical definitions for these terms as equivalent to getting everyone to have the same mental visions of abstract objects.
Excellent! Your discourse above is very reminiscent (to me) of the same discourse present in Leibniz's work, and later Frege as well as Russell, which served to really clarify the
mathematical vernacular. The informal use of words led to some formalizations (think set theory and logic) which really help put mathematics on a much more solid foundation than before. This
is now established canon. But why do other words in common mathematical usage (with mathematical meaning) escape this treatment? – Jacques Carette Apr 27 '11 at 12:36
I believe that "variable", "constant", and "parameter" have identical set theoretic meaning, as they operate as adjectives describing elements of sets within any given proof, and the
validity of proofs depends only on the properties of the elements of the sets under consideration, not the adjectives used to describe the elements. So though we regard "variable" as a noun,
it arises from the mental abstraction of an adjective. Objects which seem to be amenable to precise mathematical definitions seem to arise as abstractions of nouns. (That's the best answer I
can come up with unfortunately!) – user14717 Apr 28 '11 at 3:02
Regarding the status of variables, you probably want to look at Chung-Kil Hur's PhD thesis "Categorical equational systems: algebraic models and equational reasoning". Roughly speaking, he
extends the notion of formal (as in formal polynomials) to signatures with binding structure and equations. He was a student of Fiore's, and I think they've been interested in giving better
models (inspired by the nominal sets approach) to things like higher-order abstract syntax. I've been meaning to read his thesis for a while, to see if his treatment of variables can suggest
techniques that could be used for writing reflective decision procedures which work over formulas with quantifiers.
For schematic variables or metavariables, there's a formal treatment of them in MJ Gabbay's (excellently-titled) paper "One and a Halfth-Order Logic"
I've been reading other of Gabbay's papers, which I have been greatly enjoying. I have tried several times to erad Fiore's work, but my understanding of CT is just not strong enough to
cope. It appears quite unfortunate, since he does seem to have a lot to say about questions I have been asking myself. – Jacques Carette Jun 24 '10 at 20:09
Intriguing question...
If there are definitions then as far as I know they're pretty much unspoken ones. Maybe someone has actually codified them somewhere, but I'm guessing not- so I'm going to take a few
guesses and stick this answer as community wiki in a bid to get some kind of consensus:
I can (fairly confidently) vouch for
Variable: The argument of a function (sometimes a truth function :))
Indeterminate: Dummy variable used to prove statements with universal quantifiers
Parameter: A numerical variable determining an object
I would take guesses at:
Symbol: A function or functional (ie. more complex than simply an object) that is a variable
Name: The argument of a truth function
And I have no idea about:
Metavariable: ?
1 Feel free to mess about with this ^^^^^^^^ – Tom Boardman Jun 24 '10 at 19:46
This is indeed what I was looking for. Some (like indeterminate), seem quite close to the mark. I am less keen on your characterization of 'variable' though. Don't variables sometimes
occur outside the scope of a function? [Although another might be that they don't and those who use variable in that context are guilty of either sloppiness or misunderstanding of what
a variable is]. – Jacques Carette Jun 24 '10 at 20:06
Of the various types of "placeholder", certainly a couple have definite mathematical meanings. In logic, the meaning of free and bound variables is set out in detail. And I take
"indeterminate" to be a term used with a precise meaning in algebra; in polynomial rings, for example, the indeterminates are not exactly independent variables in the conventional sense
of functional notation.
I know this - I could have put all of that in my question (which was long enough as it is). What I am seeking is exactly some of those precise meanings. I do know the precise meanings
from logic (I did mention $\alpha$-equivalence, right?). But the meaning of a term in logic is not always the meaning of that term in the rest of mathematics... – Jacques Carette Jun 24
'10 at 19:10
It might be clearer to ask first where the dominant idea of "function", as mathematicians now understand it, is not the only useful one. And then ask for the descriptive terms to be
clarified. – Charles Matthews Jun 24 '10 at 21:01
It seems to me that this PhD thesis may well contain answers that I find satisfying. The discussion on p.52 is particularly appealing, but the whole thesis is strewn with similar
passages discussing mathematical terms which are frequently left (formally) undefined in the mathematical literature.
Warning: many people who post here would likely call this thesis part mathematical philosophy and part computer science, and find little modern mathematics in it. But then again, as mathematicians seem to be trying to take type theory back for their own, maybe this kind of work will come back in vogue too.
I used to worry a lot about the “ontological status of variables”, and I was eventually able to achieve a modicum of ontological security, at least, with respect to the simple domains that
interested me, by taking a pattern-theoretic view of variables. In this view, you shift the question from the status of an isolated variable name like “$x$” to the syntactic entity “$S \ldots x \ldots$” in which the variable name occurs. You may now view “$S \ldots x \ldots$” as a name denoting the objects denoted by its various substitution instances.
You're telling me that variables are different when viewed intensionally or extensionally. We agree. So, can you formalize not just variable, but each of the terms I gave, with an
explicit denotation? – Jacques Carette Jun 24 '10 at 19:13
@ Jacques Carette –– That would require an excursion into details that experience tells me are not likely to be tolerated here. Shrift as shortly as possible, it helps to have the
classical notion of general or plural denotation, which classical thinkers deployed to good effect long before they had classes. – Jon Awbrey Jun 24 '10 at 19:28
|
{"url":"http://mathoverflow.net/questions/29413/defining-variable-symbol-indeterminate-and-parameter/29416","timestamp":"2014-04-16T07:53:09Z","content_type":null,"content_length":"87673","record_id":"<urn:uuid:d4b96f3f-afe4-48a1-839a-f9423d20f60a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Measuring Armature Circuit Resistance in a PMDC Motor
Posted by: Edmund Glueck | April 26, 2012
Measuring Armature Circuit Resistance in a PMDC Motor
Application Notes & Gearmotor Tips:
When a permanent magnet DC (PMDC) motor fails to operate properly, or fails to run at all, one sensible troubleshooting step is to measure the winding resistance to determine if there’s an open
circuit or a short circuit, either between turns of the same winding, between two windings in the same motor, or between a winding and ground. This is a fairly simple procedure using an ohmmeter on
an AC motor or a brushless DC motor, but it is not so simple with a brush type (PM) DC motor or gearmotor.
Since an ohmmeter can’t be used to directly measure the armature circuit resistance in a PMDC motor or gearmotor, the resistance has to be calculated.
If you have tried to measure armature circuit resistance with an ohmmeter (via the motor leads), you have found that the result is an unusually high reading. You might have thought that the armature
was in a position where the brushes were bridging two commutator bars on each side. But turning the armature provided no better resistance reading. This is because armature circuit resistance cannot
be measured accurately with an ohmmeter.
Because the connection between the brushes and the commutator is not a solid one, the gap between the two components represents additional resistance to the low voltage and low current of a typical
ohmmeter power source. The additional resistance comes from the oxide film formed on the surface of the commutator and the free particles in the gap, as shown in the image below (the graphitic film
is conductive). The same oxide film and free particles don’t represent a significant resistance to the higher voltage and higher current of a typical DC motor power supply. Since an ohmmeter can’t be
used to directly measure the armature circuit resistance, the resistance has to be calculated.
Test Procedure:
1. Before the armature circuit resistance can be calculated, the motor must be set up in a way such that both the armature and the motor frame are not permitted to turn. In the case of a
gearmotor with a fairly high gear ratio, this could require a rather sturdy restraint. However, the stall torque of a DC motor is proportional to the applied voltage, so reduce the voltage until you
are able to hold the gearmotor in place with the armature locked.
2. Make sure that the voltage is still high enough to drive a current of at least 10% of the motor’s nameplate rating or 0.25 Amp, whichever is higher.
3. Starting with a “cool” motor (meaning the winding temperature is the same as the ambient, which should be at a normal room temperature of
+25° C [+77°F]), measure the applied voltage and the current draw at least five times, with the armature in a different angular position each time.
4. Do each test quickly to keep the motor from heating up, because winding resistance changes with temperature and these changes would affect the accuracy of the tests.
The armature circuit resistance can then be calculated by using the following basic steady state DC motor equation:
E = I x R + (Ke x w), where
E = applied voltage
I = current
R = armature circuit resistance
Ke = voltage constant
w = speed (omega)
Since speed = 0 with the armature locked in place, the back emf voltage (Ke x w) = 0 and the equation can be simplified to:
E = I x R
This can be rewritten to solve for the armature circuit resistance:
R = E/I
Use this equation with the current and voltage measurements from each of the five tests and then average the results for the resistance. Even if you never use this method yourself, you can now
explain to your customers why they have not been able to measure the correct armature circuit resistance with an ohmmeter and then suggest this more reliable method.
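As a quick illustration of the averaging step, here is a small Python sketch. The five voltage/current pairs are made-up placeholder readings, not data from an actual Bodine test, so substitute your own locked-rotor measurements:
# Five hypothetical locked-rotor readings: (applied volts, measured amps) at different armature positions
readings = [(2.10, 10.4), (2.08, 10.6), (2.12, 10.3), (2.09, 10.5), (2.11, 10.4)]
# With the armature locked, speed w = 0, so E = I x R and each test gives R = E / I
resistances = [volts / amps for volts, amps in readings]
print(sum(resistances) / len(resistances))   # average armature circuit resistance, in ohms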
You can download the PDF version of this application note by clicking on this link.
1. Bodine Model 6422 (24 VDC, type 33A7)
B+K Precision multimeter reading: 0.674 Ω
This test method: 0.196 Ω
2. Bodine Model 6035 (130 VDC, type 33A5)
B+K Precision multimeter reading: 11.41 Ω
This test method: 11.13 Ω
Note: The bar-to-bar resistance measured directly between two opposing commutator bars does not provide the “full picture”. The armature circuit resistance provides the sum of resistances in the PMDC
motor – from the commutator to the external motor cord and everything in-between (brushes, brush holders, springs, winding, terminals, etc.)
Copyright Bodine Electric Company © 04/2012. All rights reserved.
Posted in Application Tips, Engineering Talk, Gearmotor Basics | Tags: Bodine Electric Company, Fractional Horsepower, Gearmotor, Permanent Magnet DC Gearmotors, pmdc motors
|
{"url":"https://gearmotorblog.wordpress.com/2012/04/26/armature-resistance-dc-motors/","timestamp":"2014-04-21T00:08:05Z","content_type":null,"content_length":"58615","record_id":"<urn:uuid:d96ffb65-8cc5-4420-8c43-4d65ce6c94be>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Java floating point high precision library
Which libraries for Java are there that have a fast implementation for floating point or fixed point operations with a precision of several thousands of digits? How performant are they?
A requirement for me is that it implements a multiplication algorithm that is better than the naive multiplication algorithm that takes 4 times as much time for a 2 times larger number of digits
(compare Multiplication algorithms).
java floating-point
A general question from a standpoint of interest... what is your application that requires several thousand of digits decimal precision? – Simon Nov 10 '08 at 9:27
This is a hobby, not a job: I'd like to calculate some more digits of the en.wikipedia.org/wiki/Feigenbaum_constant – hstoerr Nov 11 '08 at 8:10
4 Answers
There are three libraries mentioned on the Arbitrary Precision Arithmetic page: java.math (containing the mentioned BigDecimal), Apfloat and JScience. I ran a little speed check on them
which just uses addition and multiplication.
The result is that for a relatively small number of digits BigDecimal is OK (half as fast as the others for 1000 digits), but if you use more digits it is way off - JScience is about 4
times faster. But the clear performance winner is Apfloat. The other libraries seem to use naive multiplication algorithms that take time proportional to the square of the number of
digits, but the time of Apfloat seems to grow almost linearly. On 10000 digits it was 4 times as fast as JScience, but on 40000 digits it is 16 times as fast as JScience.
On the other hand: JScience provides EXCELLENT functionality for mathematical problems: matrices, vectors, symbolic algorithms, solution of equation systems and what not. So I'll
probably go with JScience and later write a wrapper to integrate Apfloat into the algorithms of JScience - due to the good design this seems easily possible.
(UPDATE: I wrote a test suite for the number package of JScience and fixed a number of bugs. This went into release 4.3.1. So I can recommend checking it out.)
2 Do some of the libraries support trigonometric functions? – Cookie Monster Aug 8 '11 at 9:14
Have you checked the performance of BigDecimal? I can't see anything obvious in the JavaDoc, but it would certainly be my first port of call.
1 For very high precision this is much slower. I'd only recommend this if you have only tens of digits or don't care about speed. – hstoerr Nov 8 '11 at 19:01
@hstoerr: It's good that you've checked it - but I think the approach of "test the simplest thing that will work" (where being built in is a significant starting point advantage)
is still a good initial step :) – Jon Skeet Nov 8 '11 at 21:03
BigDecimal is fairly limited, for example it doesn't support sqrt in contrast to JScience – Tomasz Apr 27 '13 at 3:31
You could take a look at the JScience library and their Real number class. I'm not sure how the performance is relative to BigDecimal, but the goal of the library is to provide
highly-tuned classes for scientific applications, which seems like a good sign.
Apfloat offers high precision on the mantissa, but appears to give less-than-usual precision on the exponent (based on the fact that it crashes with "Logarithm of zero" for values that
double can handle). So it is not useful for big numbers.
Also, the documentation says:
"A pitfall exists with the constructors Apfloat(float,long) and Apfloat(double,long). Since floats and doubles are always represented internally in radix 2, the conversion to any other
radix usually causes round-off errors, and the resulting apfloat won't be accurate to the desired number of digits.
For example, 0.3 can't be presented exactly in base 2. When you construct an apfloat like new Apfloat(0.3f, 1000), the resulting number won't be accurate to 1000 digits, but only to
roughly 7 digits (in radix 10). In fact, the resulting number will be something like 0.30000001192092896... "
This appears to make Apfloat minimally useful.
BigDecimal does not have a logarithm function, and the documentation does not say whether it allows you to make bigger numbers than a double; the exponent is 32 bits, sort of.
1 Actually Phil, the reason for that pitfall is because you are constructing the Apfloat with a float or double. The inaccuracy is because you are passing it a number that is
inaccurate. If you took the time to read the next paragraph, you would see that if you construct it with a string you can have infinite precision. – Snickers Mar 4 '11 at 19:28
BigDecimal 0.3 would be accurate. To take that example. Does Apfloat have a constructor from BigDecimal? BigDecimal is kind of a hybrid in representation, the mantissa is base 2
(BigInteger), but the exponent is base 10. – Cookie Monster Aug 8 '11 at 17:36
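A side note for later readers: the pitfall quoted above is not specific to Apfloat or Java; any arbitrary-precision type that is constructed from a binary float inherits that float's rounding error. As a language-agnostic illustration (a sketch using Python's decimal module, not one of the Java libraries discussed here):
from decimal import Decimal
print(Decimal(0.3))     # 0.299999999999999988897769753748... (the exact value of the nearest double)
print(Decimal("0.3"))   # 0.3, because the string constructor keeps the intended decimal value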
|
{"url":"http://stackoverflow.com/questions/277309/java-floating-point-high-precision-library?answertab=votes","timestamp":"2014-04-24T21:20:02Z","content_type":null,"content_length":"89849","record_id":"<urn:uuid:51bfb282-f4a2-4770-8f05-51b73de3fe87>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Portola Valley Algebra 1 Tutor
Find a Portola Valley Algebra 1 Tutor
...But many of my students learn better through analogies, fun stories, and hands-on activities, to name a few. One of my students is very energetic and cannot sit still for her math homework, so
I tell her fun analogies, like the "greater than" sign is a crocodile mouth eating the bigger number. ...
24 Subjects: including algebra 1, reading, writing, ESL/ESOL
...Ledger Entry, Customers, ledger entry record keeping, reconciliation of statements, preparing financial statements etc. I am born and brought up in India. I studied Hindi in high school and was
taught in Hindi throughout my academic career.
17 Subjects: including algebra 1, calculus, geometry, statistics
...If the students didn't have enough work to fill the hour, I supplemented with Spanish materials. I am somewhat of a homework coach. I am more than happy to provide original materials, but I
always start with what the student has to do for homework or what he or she is learning in class and I build off of that.
15 Subjects: including algebra 1, English, Spanish, geometry
...I graduated from high school as top student in my class, especially the math subject. I was the price winner of several national math contests, including second prize of National Mathematics
Olympiad and first prize of several Regional Mathematics Olympiad. I have tutored numerous students to help them develop solid foundation and better understanding in probability.
15 Subjects: including algebra 1, chemistry, calculus, algebra 2
...When I got married and had a child of my own, I transitioned into private industry and worked briefly as the Director of Data and Research at an education technology company. It was more
lucrative financially, but less emotionally satisfying. Now that I have two small children at home, they take up a lot of my time and energy... yet I find that I miss teaching.
22 Subjects: including algebra 1, English, reading, chemistry
|
{"url":"http://www.purplemath.com/Portola_Valley_algebra_1_tutors.php","timestamp":"2014-04-17T19:25:58Z","content_type":null,"content_length":"24346","record_id":"<urn:uuid:32a98394-3109-4628-9f84-990c404f7480>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Irvine, CA Algebra Tutor
Find a Irvine, CA Algebra Tutor
Hey my name is Tim, and I am currently a third year undergraduate student at University of California, Irvine. I am studying biology and have been involved in neurostemcell/Alzheimer's research
for a majority of my time at Irvine. I am very knowledgeable in Biology, Chemistry, and Math.
6 Subjects: including algebra 1, Spanish, chemistry, biology
...I am constantly using my engineering background to bring real life examples to the tutoring sessions. This really helps the kids visualize math better. I enjoy finding ways to help kids be
successful!Having tutored math for 10 years, I have spent a considerable amount of effort in helping my students not only fine tune their math skills, but improve their studying methods.
17 Subjects: including algebra 2, algebra 1, reading, calculus
...I have taken the AP test, as well as the SAT II. Kindergarten, elementary, junior high school, high school, SAT I, GRE, college. I took the GRE Vocabulary test section.
23 Subjects: including algebra 1, algebra 2, chemistry, Spanish
...My experience with A&P is broad as I also have taken classes such as clinical anatomy and histology. As part of my degree in Pre-Med, I was required to take Microbiology. I have a keen
understanding of foundational principles and techniques of culture and identification, life processes, and diversity of micro-organisms.
8 Subjects: including algebra 1, chemistry, biology, grammar
...I can also tutor college math subjects like Linear Algebra, Abstract Algebra, Differential Equations, and more. My teaching strategy is to emphasize the importance of dedication and hard work,
highlight the fundamental building blocks for each course in order to understand the concepts, and boos...
9 Subjects: including algebra 1, algebra 2, calculus, geometry
|
{"url":"http://www.purplemath.com/Irvine_CA_Algebra_tutors.php","timestamp":"2014-04-17T10:57:22Z","content_type":null,"content_length":"23860","record_id":"<urn:uuid:79d28328-8fdf-4825-8f1b-5560a7299013>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- User Profile for: boblpeterse_@_otmail.com
User Profile: boblpeterse_@_otmail.com
User Profile for: boblpeterse_@_otmail.com
UserID: 8726
Name: Bob L Petersen
Email: boblpetersen@hotmail.com
Registered: 12/4/04
Occupation: Other
Location: My lawn show
Biography: True Lies Bob L. Petersen Total Recall A Googleplex or Bob why can you spend so much time working
Total Posts: 525
Show all user messages
|
{"url":"http://mathforum.org/kb/profile.jspa?userID=8726","timestamp":"2014-04-16T23:03:56Z","content_type":null,"content_length":"13685","record_id":"<urn:uuid:09caba5b-2c5d-4d73-b268-a870fb8f1642>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: What are sets? again
Replies: 21 Last Post: Dec 9, 2012 10:12 AM
Messages: [ Previous | Next ]
Re: What are sets? again
Posted: Dec 1, 2012 5:41 AM
On Fri, 30 Nov 2012, Zuhair wrote:
> The following is an account about what sets are,
> Language: FOL + P, Rp
> P stands for "is part of"
Does P represent "subset of" or "member of"?
> Rp stands for "represents"
Give an intuitive example or two how you interpreted "represents".
> Axioms: Identity theory axioms +
> I. Part-hood: P partially orders the universe.
> ll. Supplementation: x P y & ~ y P x -> Exist z. z P y & ~ x P z.
> Def.) atom(x) <-> for all y. y P x -> x P y
> Def.) x atom of y <-> atom(x) & x P y.
> Def.) c is a collection of atoms iff for all y. y P c -> Exist z. z
> atom of y.
> Def.) c is atomless <-> ~ Exist x. x atom of c
> lll. Representation: x Rp c & y Rp d -> (x=y<->c=d)
> lV. Representatives: x Rp c -> atom(x)
> V. Null: Exist! x. (Exist c. x Rp c & c is atomless).
> A Set is an atom that uniquely represents a collection of atoms or
> absence of atoms.
> Def.) Set(x) <-> Exist c. (c is a collection of atoms or c is
> atomless) & x Rp c & atom(x)
> Here in this theory because of lV there is no need to mention atom(x)
> in the above definition.
> Set membership is being an atom of a collection of atoms that is
> uniquely represented by an atom.
> Def.) x e y iff Exist c. c is a collection of atoms & y Rp c & x atom
> of c & atom(y)
> Here in this theory because of lV there is no need to mention atom(y)
> in the above definition.
> Vl. Composition: if phi is a formula in which y is free but x not,
> then
> [Exist y. atom(y) & phi] -> [Exist x. x is a collection of atoms &
> (for all y. y atom of x <-> atom(y) & phi)] is an axiom.
> Vll. Pairing: for all atoms c,d Exist x for all y. y e x <-> y=c or
> y=d
> /
> This theory can interpret second order arithmetic. And I like to think
> of it as a base theory on top of which any stronger set theory can
> have its axioms added to it relativized to sets and with set
> membership defined as above, so for example one can add all ZFC axioms
> in this manner, and the result would be a theory that defines a model
> of ZFC, and thus proves the consistency of ZFC. Anyhow this would only
> be a representation of those theories in terms of different
> primitives, and it is justified if one think of those primitives as a
> more natural than membership, or if one think that it is useful to
> explicate the later. Moreover this method makes one see the Whole
> Ontology involved with set\class theories, thus the bigger picture
> revealed! This is not usually seen with set theories or even class
> theories as usually presented, here one can see the interplay between
> sets and classes (collections of atoms), and also one can easily add
> Ur-elements to this theory and still be able to discriminate it from
> the empty set at the same time, a simple approach is to stipulate the
> existence of atoms that do not represent any object. It is also very
> easy to explicate non well founded scenarios here in almost flawless
> manner. Even gross violation of Extensionality can be easily
> contemplated here. So most of different contexts involved with various
> maneuvering with set\class theories can be easily
> paralleled here and understood in almost naive manner.
> In simple words the above approach speaks about sets as being atomic
> representatives of collections (or absence) of atoms, the advantage is
> clearly of obtaining a hierarchy of objects. Of course an atom here
> refers to indivisible objects with respect to relation P here, and
> this is just a descriptive atom-hood that depends on discourse of this
> theory, it doesn't mean true atoms that physically have no parts, it
> only means that in the discourse of this theory there
> is no description of proper parts of them, so for example one can add
> new primitive to this theory like for example the primitive "physical"
> and stipulate that any physical object is an atom, so a city for
> example would be an atom, it means it is descriptively an atom as far
> as the discourse of this theory is concerned, so atom-hood is a
> descriptive modality here. From this one can understand that a set is
> a way to look at a collection of atoms from atomic perspective, so the
> set is the atomic representative of that collection, i.e. it is what
> one perceives when handling a collection of atoms as one descriptive
> \discursive whole, this one descriptive\discursive whole is actually
> the atom that uniquely represents that collection of atoms, and the
> current methodology is meant to capture this concept.
> Now from all of that it is clear that Set and Set membership are not
> pure mathematical concepts, they are actually reflecting a
> hierarchical interplay of the singular and the plural, which is at a
> more basic level than mathematics, it is down at the level of Logic
> actually, so it can be viewed as a powerful form of logic, even the
> added axioms to the base theory above like those of ZFC are really
> more general than being mathematical and even when mathematical
> concepts are interpreted in it still the interpretation is not
> completely faithful to those concepts. However this powerful logical
> background does provide the necessary Ontology required for
> mathematical objects to be secured and for
> their rules to be checked for consistency.
> But what constitutes mathematics? Which concepts if interpreted in the
> above powerful kind of logic would be considered as mathematical? This
> proves to be a very difficult question. I'm tending to think that
> mathematics is nothing but "Discourse about abstract structure", where
> abstract structure is a kind of free standing structural universal.
> Anyhow I'm not sure of the later. I don't think anybody really
> succeeded with carrying along such concepts.
> Zuhair
Date Subject Author
11/30/12 Zaljohar@gmail.com
12/1/12 William Elliot
12/1/12 Zaljohar@gmail.com
12/2/12 William Elliot
12/2/12 Graham Cooper
12/2/12 Zaljohar@gmail.com
12/2/12 William Elliot
12/2/12 ross.finlayson@gmail.com
12/3/12 William Elliot
12/4/12 fom
12/9/12 Charlie-Boo
12/4/12 fom
12/2/12 Charlie-Boo
12/3/12 Graham Cooper
12/3/12 William Elliot
12/4/12 fom
12/4/12 Zaljohar@gmail.com
12/5/12 fom
12/5/12 Zaljohar@gmail.com
12/4/12 Zaljohar@gmail.com
12/5/12 Zaljohar@gmail.com
12/5/12 What are sets? A correction Zaljohar@gmail.com
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=7930670","timestamp":"2014-04-17T17:06:12Z","content_type":null,"content_length":"47251","record_id":"<urn:uuid:0e27385e-ec00-41d4-833d-9d1e4714afa0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Secrets of Success
Math 24: Elementary Algebra I
Course Description
An introduction to basic algebra topics, MATH 24 is the first course in a two semester sequence of Elementary Algebra courses. Instruction includes units on operations with signed numbers, linear
equations and inequalities in one variable, the coordinate plane, and linear systems in two variables.
Upon successful completion of MATH 24, the student should be able to:
• Translate word phrases into algebraic expressions.
• Use the order of operations to find the value of algebraic expressions.
• Identify whole numbers, integers, rational numbers, irrational numbers, and real numbers.
• Find the absolute value, additive inverse, and multiplicative inverse of a real number.
• Perform the basic operations (add, subtract, multiply, and divide) with signed rational numbers.
• Identify the following properties: commutative, associative, identity, inverse, distributive.
• Identify terms, like terms, and numerical coefficients in a polynomial.
• Solve linear equations and inequalities in one variable.
• Solve a formula for a specified variable.
• Write and solve ratios and proportions including those from word problems.
• Plot an ordered pair and state the quadrant in which it lies.
• Graph linear equations and inequalities by point plotting, the intercept method, and the slope-intercept method.
• Write the equation of a line given two points or the slope and y-intercept or the slope and a point on the line.
• Solve linear systems of equations or inequalities in two variables by algebraic and graphic methods.
• Use linear systems to solve word problems.
MATH 24 resource page
|
{"url":"http://library.kcc.hawaii.edu/SOS/resources/math/courses/math24_objectives.htm","timestamp":"2014-04-21T04:33:23Z","content_type":null,"content_length":"11870","record_id":"<urn:uuid:583334b5-7302-4146-829d-4ed7104951e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tech Briefs
Adaptive Meshing in CFD
In our January 15, 2009 News we focused on the ADINA algorithm that we developed to adapt meshes in the solution of fluid-structure interactions.
As we mentioned in that News, ADINA 8.6 offers the capability to adapt and repair CFD meshes so that appropriate mesh grading is used. The ADINA techniques operate on CFD solution gradients and
involve refining and coarsening the mesh in the various regions of flow for adequate element sizes throughout the fluid region.
The ADINA capability operates on general free-form meshing and hence accommodates meshes from many pre-processors, and in particular also meshes prepared for Nastran (see our News ADINA Pre- and
Post-processing Options and Using NASTRAN Models for 3D CFD and FSI Analyses).
In the present News we give the solution of a high-speed compressible flow problem, in order to illustrate that very strong gradients — shock fronts — can also be accurately resolved with the ADINA
algorithm. We consider a 2D flow passing a circular cylinder with a free stream Mach number = 8, see Figure 1 below. This problem has been used widely to benchmark solution schemes.
The movie above shows the coarse starting mesh of 1,758 elements and 3 meshes, including the final mesh, of the 6 mesh adaptations used to reach the final mesh of 56,471 elements. The pressure
solutions for these meshes are also shown in the movie. The final mesh resolves the shock front very accurately as shown in Figures 2 to 4.
Figure 1 Problem considered
Figure 2 Calculated velocities
Figure 3 Calculated pressure
Figure 4 Mach number along section A-A
This benchmark problem solution illustrates how ADINA can be used for high-speed compressible flows to accurately calculate solution quantities, including shock fronts.
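ADINA's actual refinement criterion is not published in this News, but the general idea of gradient-driven adaptivity can be sketched generically: compute an error indicator for each element from the local solution gradient, then flag the elements with the largest indicators for refinement and the smallest for coarsening. A purely illustrative Python sketch of such an indicator follows (a generic technique, not ADINA's algorithm; the per-element data layout and the fractions are assumptions):
import numpy as np
def flag_elements(grad_p, h, refine_frac=0.10, coarsen_frac=0.30):
    # grad_p: per-element magnitude of the pressure gradient; h: per-element size
    eta = np.asarray(grad_p) * np.asarray(h)      # crude indicator: gradient scaled by element size
    order = np.argsort(eta)
    n = len(eta)
    coarsen = order[: int(coarsen_frac * n)]      # smoothest elements are candidates for coarsening
    refine = order[n - int(refine_frac * n):]     # steepest elements (e.g. near the shock) get refined
    return refine, coarsen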
CFD, compressible flows, high-speed, shock front, Mach number, adaptive mesh, solution gradients
|
{"url":"http://www.adina.com/newsgH45.shtml","timestamp":"2014-04-17T01:15:37Z","content_type":null,"content_length":"15873","record_id":"<urn:uuid:65deed39-b36a-48e7-a613-fb11db9f809e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Progression - Am I right?
May 7th 2007, 12:40 PM
Progression - Am I right?
Hi, could someone tell me if I've done this correctly?
For the progression 1,4,7,10, .... state the
50th term and the sum of the first 50 terms.
I have 50th term as 148
I have sum of first 50 as 1475...
May 7th 2007, 01:00 PM
looks OK but if you told us how you found this it would be easier to check.
I have sum of first 50 as 1475...
Sum of the first n terms of an AP with first term a and common difference d:
S(n) = (1/2) n (2a + (n-1)*d) = 3725
May 7th 2007, 01:09 PM
regarding the sum,
I was shown in a previous thread a while ago that
Sn = n(a1 + an) / 2
so I ended up with 25( 1 + a20)
May 7th 2007, 01:12 PM
sorry....just realised my mistake, should have been a50 not a20....it works out now,
thank you.
May 7th 2007, 01:13 PM
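For completeness, a quick check of both answers (nth term a_n = a_1 + (n-1)d with a_1 = 1 and d = 3), written as a small Python sketch:
a1, d, n = 1, 3, 50
a_n = a1 + (n - 1) * d       # 50th term: 148
s_n = n * (a1 + a_n) // 2    # sum of the first 50 terms, S_n = n(a_1 + a_n)/2: 3725
print(a_n, s_n)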
|
{"url":"http://mathhelpforum.com/algebra/14683-progression-am-i-right-print.html","timestamp":"2014-04-21T03:40:48Z","content_type":null,"content_length":"6495","record_id":"<urn:uuid:7dfc4ad0-9a9f-4c0a-af36-eaf70ffbfb6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Holes with Binding Power
, 2009
"... I'd like to conclude by emphasizing what a wonderful eld this is to work in. Logical reasoning plays such a fundamental role in the spectrum of intellectual activities that advances in
automating logic will inevitably have a profound impact in many intellectual disciplines. Of course, these things t ..."
Cited by 4 (3 self)
I'd like to conclude by emphasizing what a wonderful field this is to work in. Logical reasoning plays such a fundamental role in the spectrum of intellectual activities that advances in automating logic will inevitably have a profound impact in many intellectual disciplines. Of course, these things take time. We tend to be impatient, but we need some historical perspective. The study of logic has a very long history, going back at least as far as Aristotle. During some of this time not very much progress was made. It's gratifying to realize how much has been accomplished in the less than fifty years since serious efforts to mechanize logic began.
"... Abstract. The Curry-Howard correspondence connects Natural Deduction derivation with the lambda-calculus. Predicates are types, derivations are terms. This supports reasoning from assumptions to
conclusions, but we may want to reason ‘backwards ’ from the desired conclusion towards the assumptions. ..."
Abstract. The Curry-Howard correspondence connects Natural Deduction derivation with the lambda-calculus. Predicates are types, derivations are terms. This supports reasoning from assumptions to
conclusions, but we may want to reason ‘backwards ’ from the desired conclusion towards the assumptions. At intermediate stages we may have an ‘incomplete derivation’, with ‘holes’. This is natural
in informal practice; the challenge is to formalise it. To this end we use a one-and-a-halfth order technique based on nominal terms, with two levels of variable. Predicates are types, derivations
are terms — and the two levels of variable are respectively the assumptions and the ‘holes ’ of an incomplete derivation. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4019745","timestamp":"2014-04-24T10:04:49Z","content_type":null,"content_length":"14724","record_id":"<urn:uuid:ea717454-d45e-4ac0-9fd3-d364c7ae99b4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2 more word problems (please help!!)
November 18th 2007, 07:01 PM #1
Nov 2007
2 more word problems (please help!!)
a) Two cement blocks and three bricks weigh 102lb, as do one cement block and ten bricks. What does one brick weigh?
b) A refrigerator repairman charges a fixed amount for a service call in addition to his hourly rate. If a two-hour repair costs $50 and a four-hour repair costs $74, what is the repairman's
hourly rate?
Let $c$ be the weight of a cement block
Let $b$ be the weight of a brick
then we have the system:
$2c + 3b = 102$ ....................(1)
$c + 10b = 102$ ....................(2)
solve this system for $b$ and you will have your answer. can you continue?
b) A refrigerator repairman charges a fixed amount for a service call in addition to his hourly rate. If a two-hour repair costs $50 and a four-hour repair costs $74, what is the repairman's
hourly rate?
Let $m$ be the hourly rate
Let $x$ be the number of hours the repair takes
Let $b$ be the fixed charge.
then the cost of the repair is given by:
$c = mx + b$
since a two hour repair costs $50, we have:
$2m + b = 50$ ..................(1)
since a four-hour repair costs $74, we have:
$4m + b = 74$ ...................(2)
thus we need to solve the following system
$2m + b = 50$ ..................(1)
$4m + b = 74$ ..................(2)
for $m$
can you continue?
Need more explaining, please.
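For readers who want the elimination carried through, here is a small Python sketch (not part of the original thread) that solves both 2-by-2 systems by Cramer's rule; the numbers follow directly from the equations above.
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    # Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return Fraction(c1 * b2 - c2 * b1, det), Fraction(a1 * c2 - a2 * c1, det)

# (a) 2c + 3b = 102 and c + 10b = 102: one brick weighs b pounds.
c_block, b_brick = solve_2x2(2, 3, 102, 1, 10, 102)
print("brick:", b_brick, "lb; cement block:", c_block, "lb")   # 6 lb and 42 lb

# (b) 2m + b = 50 and 4m + b = 74: hourly rate m, fixed charge b.
m_rate, b_fixed = solve_2x2(2, 1, 50, 4, 1, 74)
print("hourly rate:", m_rate, "; fixed charge:", b_fixed)      # 12 and 26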
|
{"url":"http://mathhelpforum.com/math-topics/23081-2-more-world-problems-please-help.html","timestamp":"2014-04-20T16:21:37Z","content_type":null,"content_length":"42889","record_id":"<urn:uuid:bd2cc869-6dc5-4d34-8f0c-92979c4925e6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maple 15 Questions and Posts
Hi all
I don't know why a Z1 appears on the screen and the code does not converge.
Please help me.
Thanks a lot.
H1:= p*(diff(f(tau),tau$3)+((3/5)*f(tau)*diff(f(tau),tau$2))-(1/5)*(diff(f(tau),tau$1))^2+((2/5)*tau*diff(h(tau),tau$1))-((2/5)*h(tau))-BB*diff(f(tau),tau$1))+(1-p)*(diff(f(tau),tau$3)):
H11:= p*((1/pr)*diff(h(tau),tau$3)+(3/5)*f(tau)*diff(h(tau),tau$2))+(1-p)*(diff(h(tau),tau$3)):
convert(series(collect(expand(eq2), p), p, n+1), 'polynom');
convert(series(collect(expand(eq22), p), p, n+1), 'polynom');
for i to n do
s[i] := coeff(eq4, p^i) ;
print (i);
end do:
for i to nn do
ss[i] := coeff(eq44, p^i) ;
print (i);
end do:
s[0]:=diff(f[0](tau), tau$3);
ss[0]:=diff(h[0](tau), tau$3);
ics[0]:=f[0](0)=0, D(f[0])(0)=0, D(f[0])(BINF)=0;
icss[0]:=h[0](BINF)=0, D(h[0])(0)=1, D(h[0])(BINF)=0;
dsolve({s[0], ics[0]});
f[0](tau):= rhs(%);
#dsolve({ss[0], icss[0]});
h[0](tau):= -exp(-tau); #;rhs(%);
for i to n do
f[ii-1](tau):=convert(series(f[ii-1](tau), tau, nn+1), 'polynom');
h[ii-1](tau):=convert(series(h[ii-1](tau), tau, nn+1), 'polynom');
ics[i]:=f[i](0)=0, D(f[i])(0)=0, D(f[i])(BINF)=0;
dsolve({s[i], ics[i]});
icss[i]:=h[i](BINF)=0, D(h[i])(0)=0, D(h[i])(BINF)=0;
dsolve({ss[i], icss[i]});
end do;
plot(pade(diff(f(tau),tau), tau, [7, 7]),tau=0..5,color=blue,style=point,symbol=circle,symbolsize=7,labels=["tau","velocity"]);
|
{"url":"http://www.mapleprimes.com/products/Maple/Maple%2015","timestamp":"2014-04-18T18:46:19Z","content_type":null,"content_length":"225936","record_id":"<urn:uuid:7e54cfec-b39b-4a92-92b6-156b9879d94d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to convert to hyperbolic function? - Mombu the Science Forum
hello; this is mma 5.0
I am having a hard time figuring out how to tell mma to convert the
output to a form I want.
In Maple, I can use the 'convert' command, which is really nice, but
mma has no such command to convert an expression between different
forms. So other than using Simplify, FullSimplify, ExpToTrig, TrigToExp,
I have no idea what to do.
This is an example; I want the result to be expressed as
arcsinh(y/c), not in the way it is generated (which is correct,
but in a different form).
So I guess I am asking one general question: what is the mma command
that is equivalent to Maple's 'convert'? And a specific question: how
to make the output of this example expressed in arcsinh instead of in
terms of ln?
--- mma code -----
In[4]:= sol = Integrate[1/Sqrt[c^2 + y^2], y]
Out[4]= Log[y + Sqrt[c^2 + y^2]]
In[5]:= Simplify[sol, Element[c, Reals]]
Out[5]= Log[y + Sqrt[c^2 + y^2]]
---- maple code -----
In maple, in this example I did not have to use the convert
command, but in other examples I used it.
sol := arcsinh((1/c^2)^(1/2)*y)    (that is, arcsinh(y/c) for c > 0)
Now I can if needed convert the above to ln using convert:
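(The Maple convert call itself is cut off above.) Independent of any particular CAS command, one can check numerically that the log form and the arcsinh form agree up to the additive constant log(c) when c > 0; a minimal Python sketch with arbitrary test values:
import math

# For c > 0:  log(y + sqrt(c**2 + y**2)) = asinh(y/c) + log(c),
# so the two forms returned by the CAS differ only by a constant.
for c, y in [(2.0, 3.0), (0.5, -1.25), (7.0, 0.1)]:
    lhs = math.log(y + math.sqrt(c**2 + y**2))
    rhs = math.asinh(y / c) + math.log(c)
    print(c, y, abs(lhs - rhs) < 1e-12)   # True in every case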
|
{"url":"http://www.mombu.com/science/mathematics/t-how-to-convert-to-hyperpolic-function-11412227.html","timestamp":"2014-04-18T00:22:05Z","content_type":null,"content_length":"30012","record_id":"<urn:uuid:b959c1fc-5ea5-45a0-bf80-11e1480c64c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Moscow Seminar in Mathematical Physics
Advances in the Mathematical Sciences
American Mathematical Society Translations--Series 2, Volume 191
1999; 299 pp; hardcover
ISBN-10: 0-8218-1388-9; ISBN-13: 978-0-8218-1388-1
List Price: US$128; Member Price: US$102.40; Order Code: TRANS2/191
The Theory Department of the Institute of Theoretical and Experimental Physics (ITEP) is internationally recognized for achievements in various branches of theoretical physics. The seminars at ITEP for many years have been among the main centers of scientific life in Moscow.
This volume presents results from the seminar on mathematical physics that has been held at ITEP since 1983. It reflects the style and direction of some of the work done at the Institute.
The majority of the papers in the volume describe the Knizhnik-Zamolodchikov-Bernard connection and its far-reaching generalizations. The remaining papers are related to other aspects of the theory and integrable models. Included are discussions on quantum Lax operators analyzed by methods of algebraic geometry, current algebras associated with complex curves, and the relationship between matrix models and integrable systems.
Readership: graduate students and research mathematicians working in ordinary differential equations on manifolds and dynamical systems.
Contents:
• G. E. Arutyunov, L. O. Chekhov, and S. A. Frolov -- Quantum dynamical \(R\)-matrices
• B. Enriquez and V. Rubtsov -- Some examples of quantum groups in higher genus
• V. V. Fock and A. A. Rosly -- Poisson structure on moduli of flat connections on Riemann surfaces and the \(r\)-matrix
• D. A. Ivanov and A. S. Losev -- KZB equations as a flat connection with spectral parameter
• S. Kharchev -- Kadomtsev-Petviashvili hierarchy and generalized Kontsevich model
• S. Khoroshkin, D. Lebedev, and S. Pakuliak -- Yangian algebras and classical Riemann problems
• I. Krichever and A. Zabrodin -- Vacuum curves of elliptic \(L\)-operators and representations of the Sklyanin algebra
• A. M. Levin and M. A. Olshanetsky -- Hierarchies of isomonodromic deformations and Hitchin systems
• N. Nekrasov -- Infinite-dimensional algebras, many-body systems and gauge theories
|
{"url":"http://ams.org/bookstore?fn=20&arg1=trans2series&ikey=TRANS2-191","timestamp":"2014-04-18T03:33:45Z","content_type":null,"content_length":"16055","record_id":"<urn:uuid:0a7c8bb2-bec2-46b5-93c8-1f4dc2ee61bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Forest Park, IL Math Tutor
Find a Forest Park, IL Math Tutor
...I have over ten years of experience as a statistician with the federal government and studied statistics at the graduate level as part of my M.A. program. I have had good success working with
students who are non-math-oriented. I have worked as a labor economist in federal and state government ...
7 Subjects: including prealgebra, statistics, reading, economics
...I think that I can persuade any student that math is not only interesting and fun, but beautiful as well. I've been tutoring test prep for 15 years, and I have a lot of experience helping
students get the score they need on the GRE. I've helped students push past their goal scores in both the Quant and Verbal.
24 Subjects: including differential equations, discrete math, linear algebra, algebra 1
...I have tutored high school students in public schools as well as privately for three years. I have taught swim lessons for children ranging in ages from 18 months-12 years. I have taught
swimming and diving.
23 Subjects: including ACT Math, reading, English, writing
...My family is originally from Belgium and I have been privileged to live in four different countries including Belgium, the UK, and Australia. Although difficult at times, I have loved this
experience because it has both broadened my interests and allowed me to create a global network. Although ...
21 Subjects: including geometry, ACT Math, algebra 1, algebra 2
...I am a graduate of Northwestern University and currently reside in Logan Square. I have interned with the not-for-profit tutoring organization 826LA in Los Angeles, worked with a private
company to tutor the SAT and SAT II's, and belonged to an organization which goes into elementary schools to ...
36 Subjects: including algebra 2, reading, prealgebra, geometry
{"url":"http://www.purplemath.com/Forest_Park_IL_Math_tutors.php","timestamp":"2014-04-19T12:36:06Z","content_type":null,"content_length":"23956","record_id":"<urn:uuid:1ab701b1-23a2-451d-bae8-6f5c2db30fd5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hillcrest Heights, MD Algebra 1 Tutor
Find a Hillcrest Heights, MD Algebra 1 Tutor
...Being native in Russian, proficient and fluent in English and German, and intermediate in Spanish, I will help my students work towards and achieve their language learning objectives. I am a
professional interpreter and language tutor born and educated in Russia. The Russian school system puts ...
10 Subjects: including algebra 1, calculus, ESL/ESOL, German
...I have taught high school students in biology, human anatomy, general chemistry, and physics as a college student. I would love an opportunity to work with you, your family member or a friend.
I have always enjoyed reading ever since I had a great reading teacher in Kindergarten.
27 Subjects: including algebra 1, Spanish, physics, reading
...Additionally, I have been tutoring elementary school students and high schools students in math.I have experience tutoring elementary students both Math and English. I have been a tutor for
elementary students in Math for the past 3 years. I have also tutored English both in the area and abroad in Spain.
14 Subjects: including algebra 1, English, Spanish, reading
...Most of my expertise comes from real world experiences, which is a big influence on how I choose to tutor my students. I like to use little time teaching methods, and more time challenging my
student to apply techniques to activities of interest. I feel this method promotes real learning improving my students' knowledge retention over memorizing.
13 Subjects: including algebra 1, business, Microsoft Excel, prealgebra
...In addition, I have always challenged myself by taking Advanced Placement courses (AP) throughout high school. Many of those include AP Calculus and AP Physics C as I have mentioned as well as
AP English 11, AP Chemistry, AP Literature with Composition, and AP Biology. I have learned a lot sinc...
71 Subjects: including algebra 1, chemistry, English, physics
{"url":"http://www.purplemath.com/Hillcrest_Heights_MD_algebra_1_tutors.php","timestamp":"2014-04-21T15:28:31Z","content_type":null,"content_length":"24793","record_id":"<urn:uuid:a42aeda9-c0fa-4f79-944e-c7901af1f628>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Progress of the Online Soup Search
Over the last couple of months, Nathaniel Johnston's Online Soup Search for Conway's Life has been hunting for 20x20 random "methuselah" patterns, using a modest-sized distributed network -- a good
fraction of the spare CPU cycles of perhaps a dozen computers. As of the end of August, the conwaylife.com server has tallied the final stabilizations of over 111 million random 20x20 Conway's Life
"soups", totaling over three billion Life objects (still-life, oscillator, or spaceship). This is slowly approaching the scale of Achim Flammenkamp's earlier random-ash census project from a decade
and a half ago -- which represented an impressive amount of dedicated CPU time for 1994.
Version 1.03 of the soup-search script is now available. It's a Python script that will run on the current version of Golly for Windows, Mac, or Linux. Version 1.03 displays much more detail about the progress of the current search.
Methuselah survival times appear to fit a simple inverse exponential sequence. Lifespans between 1000(N-1) and 1000N are about twice as frequent as lifespans between 1000N and 1000(N+1) -- for a wide
range of N. Version 1.03 of the script continuously updates an on-screen table of these frequencies, starting at N=5. It is an open question how far this relationship continues, or whether a larger
sample will yield a more precise approximation of the curve.
The longest-lived methuselah found so far by the Online Soup Search is the pattern at right, which lasts over 25,000 ticks before stabilizing. Previous search efforts have done considerably better --
the record-holder is Andrzej Okrasinski's "Lidka", found by his "Life Screensaver" Windows software in 2005, in a run of some 12 billion 20x20 soups -- apparently somewhere near the 3-billion-soups mark. Unfortunately, however, the email address
given on the website does not appear to be functional, and some compatibility problems have been reported with the screensaver utility in recent versions of Windows.
With current CPU resources, "Lidka" is not likely to be surpassed very quickly. If the exponential drop in methuselah frequency continues at a similar rate through the next several 1000-tick "bins",
then a methuselah lasting 30,000+ ticks might be expected to turn up, very roughly, sometime in the next year or two, after several billion soup patterns have been tallied. This lines up fairly well
with the number of soups examined to discover "Lidka", though of course there are no guarantees that the first 30,000+ methuselah will appear at exactly the "right" time (statistically speaking).
Of course, the time needed to find a new record-breaker will go way down if enough computers join the distributed search effort...!
|
{"url":"http://pentadecathlon.com/lifeNews/2009/08/progress_of_the_online_soup_se.html","timestamp":"2014-04-17T01:05:26Z","content_type":null,"content_length":"12163","record_id":"<urn:uuid:341011ab-6c6f-4aab-80a4-c4e9f071a63c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New notions of reduction and non-semantic proofs of β-strong normalization in typed λ-calculi
Results 1 - 10 of 14
"... We present # CIL , a typed #-calculus which serves as the foundation for a typed intermediate language for optimizing compilers for higher-order polymorphic programming languages. The key
innovation of # CIL is a novel formulation of intersection and union types and flow labels on both terms and ..."
Cited by 28 (11 self)
Add to MetaCart
We present λCIL, a typed λ-calculus which serves as the foundation for a typed intermediate language for optimizing compilers for higher-order polymorphic programming languages. The key innovation of λCIL is a novel formulation of intersection and union types and flow labels on both terms and types. These flow types can encode polyvariant control and data flow information within a polymorphically typed program representation. Flow types can guide a compiler in generating customized data representations in a strongly typed setting. Since λCIL enjoys confluence, standardization, and subject reduction properties, it is a valuable tool for reasoning about programs and program transformations.
- In: ITRS ’04, 2005
"... The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until
recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying ..."
Cited by 17 (7 self)
Add to MetaCart
The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until
recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying out compositional type inference. The fundamental idea of expansion is to be able to calculate
the effect on the final judgement of a typing derivation of inserting a use of the intersection-introduction typing rule at some (possibly deeply nested) position, without actually needing to build
the new derivation.
"... This pearl gives a discount proof of the folklore theorem that every strongly #-normalizing #-term is typable with an intersection type. (We consider typings that do not use the empty
intersection # which can type any term.) The proof uses the perpetual reduction strategy which finds a longest path. ..."
Add to MetaCart
This pearl gives a discount proof of the folklore theorem that every strongly β-normalizing λ-term is typable with an intersection type. (We consider typings that do not use the empty intersection ω, which can type any term.) The proof uses the perpetual reduction strategy which finds a longest path. This is a simplification over existing proofs that consider any longest reduction path. The choice of reduction strategy avoids the need for weakening or strengthening of type derivations. The proof becomes a bargain because it works for more intersection type systems, while being simpler than existing proofs.
, 2008
"... • In 1967, an internationally renowned mathematician called N.G. de Bruijn wanted to do something never done before: use the computer to formally check the correctness of mathematical books. •
Such a task needs a good formalisation of mathematics, a good competence in implementation, and extreme att ..."
Add to MetaCart
• In 1967, an internationally renowned mathematician called N.G. de Bruijn wanted to do something never done before: use the computer to formally check the correctness of mathematical books. • Such a
task needs a good formalisation of mathematics, a good competence in implementation, and extreme attention to all the details so that nothing is left informal. • No earlier formalisation of
mathematics (Frege, Russell, Whitehead, Hilbert, Ramsey, etc.) had ever achieved such attention to details. • Implementing extensive formal systems on the computer was never done before. • De Bruijn,
an extremely original mathematician, did every step his own way, quickly replacing existing ideas with his own original genius way of thinking, shaping the road ahead for everyone else to follow.
, 2005
"... We have seen previously that there are three ways of generating new redexes by reduction. Out of the three forms, one may argue that one of them is not a new redex, but a residual of a redex in
the initial term: (λx.(λy.M))N1N2 We can define a new reduction which reduces this ’hidden redexes’. This ..."
Add to MetaCart
We have seen previously that there are three ways of generating new redexes by reduction. Out of the three forms, one may argue that one of them is not a new redex, but a residual of a redex in the initial term: (λx.(λy.M))N1N2. We can define a new reduction which reduces these ‘hidden redexes’. This reduction is called generalized β-reduction and is denoted by →gβ. Given a term with hidden redexes, one step of →gβ allows the reduction of any of the existing redexes (no matter how deeply hidden they are): (... (λx1....λxn.M)N1)...Np) →g
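To make the 'hidden redex' idea concrete, here is a toy Python sketch (mine, not from the lecture notes) of a single generalized reduction step on a term of the shape (λx.(λy.M)) N1 N2: the inner λy is contracted against N2 even though (λx. ...) N1 sits in between. Terms use naive named representation and the sketch assumes no variable capture, which a real implementation would handle with alpha-renaming.
# Terms: ("var", x), ("lam", x, body), ("app", f, a)

def subst(term, name, value):
    # Naive substitution of value for free occurrences of name (no capture check).
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        _, x, body = term
        return term if x == name else ("lam", x, subst(body, name, value))
    _, f, a = term
    return ("app", subst(f, name, value), subst(a, name, value))

def generalized_beta_step(term):
    # ((lam x. lam y. M) N1) N2  -->g  (lam x. M[y := N2]) N1
    if (term[0] == "app" and term[1][0] == "app"
            and term[1][1][0] == "lam" and term[1][1][2][0] == "lam"):
        n1, n2 = term[1][2], term[2]
        x, (_, y, m) = term[1][1][1], term[1][1][2]
        return ("app", ("lam", x, subst(m, y, n2)), n1)
    return term

# ((lam x. lam y. y x) a) b  -->g  (lam x. b x) a
t = ("app",
     ("app", ("lam", "x", ("lam", "y", ("app", ("var", "y"), ("var", "x")))), ("var", "a")),
     ("var", "b"))
print(generalized_beta_step(t))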
, 2009
"... • I(λx.B) = [x]I(B) and I(AB) = (I(B))I(A) • I((λx.(λy.xy))z) ≡ (z)[x][y](y)x. The items are (z), [x], [y] and (y). • applicator wagon (z) and abstractor wagon [x] occur NEXT to each other. • A
term is a wagon followed by a term. • (β) ..."
Add to MetaCart
• I(λx.B) = [x]I(B) and I(AB) = (I(B))I(A) • I((λx.(λy.xy))z) ≡ (z)[x][y](y)x. The items are (z), [x], [y] and (y). • applicator wagon (z) and abstractor wagon [x] occur NEXT to each other. • A term
is a wagon followed by a term. • (β)
, 2009
"... Functions (see book [Kamareddine et al.]) • General definition of function is key to Frege’s formalisation of logic (1879). • Self-application of functions was at the heart of Russell’s paradox
(1902). • To avoid paradoxes, Russell controlled function application via type theory. • Russell (1908) gi ..."
Add to MetaCart
Functions (see book [Kamareddine et al.]) • General definition of function is key to Frege’s formalisation of logic (1879). • Self-application of functions was at the heart of Russell’s paradox
(1902). • To avoid paradoxes, Russell controlled function application via type theory. • Russell (1908) gives the first type theory: the Ramified Type Theory (rtt). • rtt is used in Russell and
Whitehead’s Principia Mathematica (1910–1912). • Simple theory of types (stt): Ramsey (1926), Hilbert and Ackermann (1928). • Frege’s functions ≠ Principia’s functions ≠ λ-calculus functions (1932).
• Church’s simply typed λ-calculus λ→ = λ-calculus + stt (1940). • Both rtt and stt are unsatisfactory. Hence, birth of different type systems, each with different functional
power. All based on Church’s λ-calculus. • Eight influential typed λ-calculi 1940–1988 unified in Barendregt’s cube. • Not all functions need to be fully abstracted as in the λ-calculus. For some
functions, their values are enough. • Non-first-class functions allow us to stay at a lower order (keeping decidability, typability, etc.) without losing the flexibility of the higher-order aspects.
• We extend the cube of the eight influential type systems with non-first-class functions showing that this allows placing the type systems of ML, LF and Automath more accurately in the hierarchy of
, 2008
"... • In 1967, an internationally renowned mathematician called N.G. de Bruijn wanted to do something never done before: use the computer to formally check the correctness of mathematical books. •
Such a task needs a good formalisation of mathematics, a good competence in implementation, and extreme att ..."
Add to MetaCart
• In 1967, an internationally renowned mathematician called N.G. de Bruijn wanted to do something never done before: use the computer to formally check the correctness of mathematical books. • Such a
task needs a good formalisation of mathematics, a good competence in implementation, and extreme attention to all the details so that nothing is left informal. • No earlier formalisation of
mathematics (Frege, Russell, Whitehead, Hilbert, Ramsey, etc.) had ever achieved such attention to details. • Implementing extensive formal systems on the computer was never done before. • De Bruijn,
an extremely original mathematician, did every step his own way, quickly replacing existing ideas with his own original genius way of thinking, shaping the road ahead for everyone else to follow.
Eindhoven, September 2008. De Bruijn in 1967: • When de Bruijn announced his new project Automath at the start of January 1967, there were mixed reactions:
, 2012
"... Lambda2012 The two previous speakers discussing the origin of λ in Church’s writing Lambda2012 1De Bruijn’s typed λ-calculi started with his Automath • In 1967, an internationally renowned
mathematician called N.G. de Bruijn wanted to do something never done before: use the computer to formally chec ..."
Add to MetaCart
Lambda2012. The two previous speakers discussing the origin of λ in Church’s writing. De Bruijn’s typed λ-calculi started with his Automath. • In 1967, an internationally renowned
mathematician called N.G. de Bruijn wanted to do something never done before: use the computer to formally check the correctness of mathematical books. • Such a task needs a good formalisation of
mathematics, a good competence in implementation, and extreme attention to all the details so that nothing is left informal. • Implementing extensive formal systems on the computer was never done
before. • De Bruijn, an extremely original mathematician, did every step his own way. • He proudly announced at the ceremony of the publications of the collected Automath work: I did it my way. •
Dirk van Dalen said at the ceremony: The Germans have their 3 B’s, but we Dutch too have our 3 B’s: Beth, Brouwer and de Bruijn.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=291484","timestamp":"2014-04-17T14:09:00Z","content_type":null,"content_length":"34780","record_id":"<urn:uuid:4eba974d-5960-4b5d-88d8-54b5b9f1b2f1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
I need some friends....
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50de164fe4b0f2b98c86e516","timestamp":"2014-04-20T14:05:06Z","content_type":null,"content_length":"46368","record_id":"<urn:uuid:1d717800-9b48-4679-ac39-da0c735d4268>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Fwd: GPU Numpy
Christopher Barker Chris.Barker@noaa....
Tue Sep 8 16:21:53 CDT 2009
George Dahl wrote:
> Sturla Molden <sturla <at> molden.no> writes:
>> Teraflops peak performance of modern GPUs is impressive. But NumPy
>> cannot easily benefit from that.
> I know that for my work, I can get around an order of a 50-fold speedup over
> numpy using a python wrapper for a simple GPU matrix class.
I think you're talking across each other here. Sturla is referring to
making a numpy ndarray gpu-aware and then expecting expressions like:
z = a*x**2 + b*x + c
to go faster when a, b, c, and x are ndarrays.
That's not going to happen.
On the other hand, George is talking about moving higher-level
operations (like a matrix product) over to GPU code. This is analogous
to numpy.linalg and numpy.dot() using LAPACK routines, and yes, that
could help those programs that use such operations.
So a GPU LAPACK would be nice.
This is also analogous to using SWIG, or ctypes or cython or weave, or
??? to move a computationally expensive part of the code over to C.
I think anything that makes it easier to write little bits of your code
for the GPU would be pretty cool -- a GPU-aware Cython?
Also, perhaps a GPU-aware numexpr could be helpful which I think is the
kind of thing that Sturla was refering to when she wrote:
"Incidentally, this will also make it easier to leverage on modern GPUs."
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-September/045073.html","timestamp":"2014-04-19T09:43:03Z","content_type":null,"content_length":"4188","record_id":"<urn:uuid:1fb08d40-15cf-4055-b3eb-103ee73113f0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How much heat is needed to melt a sample of lead weighing 0.237 kg at 26.4 C?
Assume the specific heat, the latent heat and the melting point of lead are 128 J/kg*C, 2.45*10^4 J/kg and 327.3 C respectively.
A sample of lead weighing 0.237 kg is initially at 26.4 C. The heat required to melt the sample has to be determined. It is given that the melting point of lead is 327.3 C, the specific heat of lead
is 128 J/kg*C, and the latent heat is 2.45*10^4 J/kg.
First the temperature of the sample has to be raised to 327.3 C. The heat required to do this is equal to 128*0.237*(327.3 - 26.4) = 128*0.237*300.9 = 9128.1 J
Once the sample is at the temperature at which lead changes from solid to liquid, additional heat has to be added to force the change in phase. This is equal to 2.45*10^4*0.237 = 5806.5 J
The total heat required to melt the lead sample is the sum of the heat required to raise the temperature and the heat required to cause the sample to melt. This is given as 9128.1 + 5806.5 = 14934.6 J
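The same arithmetic as a short Python sketch, with the values exactly as given in the problem:
m = 0.237            # kg of lead
c = 128.0            # J/(kg*C), specific heat
L = 2.45e4           # J/kg, latent heat of fusion
T0, Tmelt = 26.4, 327.3

q_warm = m * c * (Tmelt - T0)   # heat to raise the sample to the melting point
q_melt = m * L                  # heat for the change of phase
print(q_warm, q_melt, q_warm + q_melt)   # ~9128.1 J, 5806.5 J, 14934.6 J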
|
{"url":"http://www.enotes.com/homework-help/sample-lead-used-make-lead-sinker-fishing-has-an-310658","timestamp":"2014-04-19T03:27:55Z","content_type":null,"content_length":"26750","record_id":"<urn:uuid:cc153ef3-670c-4640-940a-5c8cdbe61c60>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factoring with cubic integers
- EXPERIMENTAL MATHEMATICS , 1996
"... This article describes an implementation of the NFS, including the choice of two quadratic polynomials, both classical sieving and a special form of lattice sieving (line sieving), the block
Lanczos method and a new square root algorithm. Finally some data on factorizations obtained with this implem ..."
Cited by 13 (0 self)
Add to MetaCart
This article describes an implementation of the NFS, including the choice of two quadratic polynomials, both classical sieving and a special form of lattice sieving (line sieving), the block Lanczos
method and a new square root algorithm. Finally some data on factorizations obtained with this implementation are listed, including the record factorization of 12^151 -1.
"... A novel portable hardware architecture of the Elliptic Curve Method of factoring, designed and optimized for application in the relation collection step of the Number Field Sieve, is described
and analyzed. A comparison with an earlier proof-of-concept design by Pelzl, Simka, et al. has been perform ..."
Cited by 12 (1 self)
Add to MetaCart
A novel portable hardware architecture of the Elliptic Curve Method of factoring, designed and optimized for application in the relation collection step of the Number Field Sieve, is described and
analyzed. A comparison with an earlier proof-of-concept design by Pelzl, Simka, et al. has been performed, and a substantial improvement has been demonstrated in terms of both the execution time and
the area-time product. The ECM architecture has been ported across five different families of FPGA devices in order to select the family with the best performance to cost ratio. A timing comparison
with the highly optimized software implementation, GMP-ECM, has been performed. Our results indicate that low-cost families of FPGAs, such as Spartan-3 and Spartan-3E, offer at least an order of
magnitude improvement over the same generation of microprocessors in terms of the performance to cost ratio. 1.
, 1998
"... The Number Field Sieve (NFS) is the asymptotically fastest factoring algorithm known. It had spectacular successes in factoring numbers of a special form. Then the method was adapted for general
numbers, and recently applied to the RSA-130 number [6], setting a new world record in factorization. Th ..."
Cited by 12 (3 self)
Add to MetaCart
The Number Field Sieve (NFS) is the asymptotically fastest factoring algorithm known. It had spectacular successes in factoring numbers of a special form. Then the method was adapted for general
numbers, and recently applied to the RSA-130 number [6], setting a new world record in factorization. The NFS has undergone several modifications since its appearance. One of these modifications
concerns the last stage: the computation of the square root of a huge algebraic number given as a product of hundreds of thousands of small ones. This problem was not satisfactorily solved until the
appearance of an algorithm by Peter Montgomery. Unfortunately, Montgomery only published a preliminary version of his algorithm [15], while a description of his own implementation can be found in
[7]. In this paper, we present a variant of the algorithm, compare it with the original algorithm, and discuss its complexity.
- Experimental Mathematics , 1995
"... The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This article describes an implementation of the NFS, including the choice of two
quadratic polynomials, both classical and lattice sieving, the block Lanczos method and a new square root algorith ..."
Cited by 1 (0 self)
Add to MetaCart
The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This article describes an implementation of the NFS, including the choice of two quadratic
polynomials, both classical and lattice sieving, the block Lanczos method and a new square root algorithm. Finally some data on factorizations obtained with this implementation are listed, including
the record factorization of 12^151 - 1.
AMS Subject Classification (1991): 11Y05, 11Y40. Keywords & Phrases: number field sieve, factorization.
Note: This report has been submitted for publication elsewhere. Note: This research is funded by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Netherlands Organization for Scientific Research, NWO) through the Stichting Mathematisch Centrum (SMC), under grant number 611-307-022.
1. Introduction. The Number Field Sieve (NFS) --- introduced in 1988 by John Pollard [19] --- is the asymptotically fastest known algorithm for factoring
"... The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard [20] in 1988. Since then several variants have
been implemented with the objective of improving the siever which is the most time consuming part of this ..."
Cited by 1 (0 self)
Add to MetaCart
The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard [20] in 1988. Since then several variants have been
implemented with the objective of improving the siever which is the most time consuming part of this method (but fortunately, also the easiest to parallelise). Pollard's original method allowed one
large prime. After that the two-large-primes variant led to substantial improvements [11]. In this paper we investigate whether the three-large-primes variant may lead to any further improvement. We
present theoretical expectations and experimental results. We assume the reader to be familiar with the NFS.
, 1997
"... 25> f(a=b) and of a=b\Gammam are both smooth, meaning that only small prime factors divide these numerators. These are more likely to be smooth when 1 We assume the reader to be familiar with
this factoring method, although no expert knowledge is required to understand the spirit of this announcem ..."
Add to MetaCart
... f(a/b) and of a/b - m are both smooth, meaning that only small prime factors divide these numerators. These are more likely to be smooth when [Footnote 1: We assume the reader to be familiar with this factoring method, although no expert knowledge is required to understand the spirit of this announcement. Footnote 2: NFSNET is a collaborative effort to factor numbers by the Number Field Sieve. It relies on volunteers from around the world who contribute the "spare time" of a large number of workstations to perform the sieving. In addition to completing work on other numbers, their 75 workstations sieved (3^349 - 1)/2 during the months of December 1996 and January 1997. The organizers and principal researchers of NFSNET are: Marije Elkenbracht-Huizing, Peter Montgomery, Bob Silverman, Richard Wackerbarth, and Sam Wagstaff, Jr.] 1. the polynomial values themselves are
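The notion of smoothness used in these abstracts, in miniature: an integer is B-smooth when every prime factor is at most B. A toy Python check (real NFS sieving never factors candidates one at a time like this):
def is_smooth(n, bound):
    n = abs(n)
    d = 2
    while d * d <= n and d <= bound:
        while n % d == 0:
            n //= d
        d += 1
    return n == 1 or n <= bound

print(is_smooth(2**5 * 3**4 * 7, 10))   # True: largest prime factor is 7
print(is_smooth(2 * 101, 10))           # False: 101 exceeds the bound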
, 2006
"... In this thesis design alternatives for hardware solutions that accelerate flexible elliptic curve cryptography in GF (2 m ) are evaluated. ..."
Add to MetaCart
In this thesis design alternatives for hardware solutions that accelerate flexible elliptic curve cryptography in GF (2 m ) are evaluated.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=634246","timestamp":"2014-04-21T05:37:02Z","content_type":null,"content_length":"27904","record_id":"<urn:uuid:92764774-fdb4-4c59-bf5d-c8e6b3b974c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do we prove triangles are similar?
Use any of the 3 theorems: AA, SAS, SSS.
I. AA Theorem
When 2 angles of one triangle are equal to 2 corresponding angles of the other triangle, the two triangles must be similar.
II. SAS Theorem
If two sides and the included angle of one triangle are in the same ratio as the corresponding two sides and included angle in another triangle, then the triangles must be similar.
III. SSS Theorem
If three pairs of corresponding sides are proportional, then the triangles must be similar.
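To make the SSS criterion concrete, here is a small Python sketch (not part of the original page) that decides similarity from side lengths alone by comparing the sorted side ratios:
def similar_by_sss(sides1, sides2, tol=1e-9):
    # Sort so corresponding sides pair up, then require all three ratios to agree.
    a1, b1, c1 = sorted(sides1)
    a2, b2, c2 = sorted(sides2)
    r = a2 / a1
    return abs(b2 / b1 - r) < tol and abs(c2 / c1 - r) < tol

print(similar_by_sss((3, 4, 5), (6, 8, 10)))   # True: every ratio is 2
print(similar_by_sss((3, 4, 5), (6, 8, 11)))   # False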
Practice Problems
Problem 1) Can you use one of the theorems on this page to prove that the triangles are similar?
Problem 2) Two different versions of triangles HYZ and HIJ. (Assume scale is not consistent.) Use the side-side-side theorem to determine which pair is similar.
Problem 3) Can you use one of the theorems on this page to prove that the triangles are similar?
Problem 4) Can you use one of the theorems on this page to prove that the triangles are similar?
|
{"url":"http://www.mathwarehouse.com/geometry/similar/triangles/similar-triangle-theorems.php","timestamp":"2014-04-18T18:16:23Z","content_type":null,"content_length":"29136","record_id":"<urn:uuid:aa11c8fb-44e5-45e5-bcba-e404d63a0975>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Point Charges in a Square: Could the Force on Each Be Zero?
I saw the equation (1/(4*pi*E))((Qq)/(a^2)) as the force on one of the sides of the square between Q and q, and I know there's another force pointing perpendicular to it along the adjacent side of the square. The addition of those two vectors would give me the hypotenuse of a 45-45-90 triangle; therefore, the magnitude of the hypotenuse is that force times sqrt(2).
would that be right?
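A quick numeric check of that vector addition; the Coulomb constant, charges and side length below are placeholder values, not from the thread:
import math

k, Q, q, a = 8.99e9, 1e-6, 1e-6, 0.05
F = k * Q * q / a**2               # magnitude of each of the two side forces
resultant = math.hypot(F, F)       # vector sum of two equal perpendicular forces
print(F, resultant, resultant / F) # the ratio is sqrt(2) ~ 1.414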
|
{"url":"http://www.physicsforums.com/showpost.php?p=1855082&postcount=5","timestamp":"2014-04-16T16:18:19Z","content_type":null,"content_length":"7097","record_id":"<urn:uuid:c5bbfb38-2429-4e19-8f8a-aa56e9657c6c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gregorius Saint-Vincent
Born: 8 September 1584 in Bruges, Spanish Netherlands (now Belgium)
Died: 27 January 1667 in Ghent, Spanish Netherlands (now Belgium)
Gregory of Saint-Vincent was known as Grégoire and also by the Latin version of his name, Gregorius a Sancto Vincentio. We know nothing of his parents and his early life. He began his studies at the
Jesuit College of Bruges in 1595. After six years, he went to Douai in northern France in 1601, studying mathematics and philosophy there. Saint-Vincent became a Jesuit novice in 1605 in Rome and
entered the Jesuit Order in 1607. He was a student of Christopher Clavius at the Collegio Romano in Rome and his talents were quickly spotted by Clavius who persuaded him to remain at the College to
study mathematics, philosophy and further advanced topics in theology.
This was a period when science and theology struggled to come to terms with arguments that the sun rather than the earth was at the centre of the universe. Galileo's results obtained by turning the
newly invented telescope on the moon and planets had not contributed to the heliocentric argument except to show that, if the earth was the centre of the universe, then not all heavenly bodies
revolved round the centre since he had seen Jupiter's moons orbiting Jupiter. Leaders of the Jesuit Order asked Jesuit mathematicians on the faculty of the Collegio Romano, for their opinion on
Galileo's discoveries. Saint-Vincent and others in the Collegio Romano were fascinated by Galileo's work and expressed views supporting a heliocentric universe which certainly did not please the
leader of the Jesuit Order who insisted that they support Aristotle's world-view. At the beginning of February 1612 Saint-Vincent's teacher Clavius died and later that year he went to Louvain to
complete his theology degree. He was ordained a priest in 1613.
From 1613 he began his career as a teacher, first at Brussels where he taught Greek. Then he continued to teach Greek in a number of Jesuit Colleges - in Bois-le-Duc (now 's-Hertogenbosch in the
Netherlands) in 1614, and Coutrai (now Kortrijk in Belgium) in 1615. The next year he was appointed chaplain to the Spanish troops stationed in Belgium which must have been a difficult job since this
was a period of Dutch revolt against Spain. The revolt was, however, a long-term struggle which was on hold at this time, the war with Spain only being actively resumed in 1621. Saint-Vincent next
spent three years teaching mathematics at the Jesuit College in Antwerp, first as François de Aguilon's assistant becoming his successor after his death in 1617. Ad Meskens writes [6]:-
De Aguilon (1567-1617) had published 'Opticorum Libri Sex' (1613), an influential book on optics and binocular vision. St Vincent's first mathematical investigations deal with problems related to
refraction and reflection.
During this time Saint-Vincent published Theses cometis (1619) and Theses mechanicae (1620). His [1]:-
'Theses de cometis' (Louvain, 1619) and 'Theses mechanicae' (Antwerp, 1620) were defended by his student Jean Charles de la Faille, who later made them the basis of his highly regarded
'Theoremata de centro gravitatis' (1632).
In 1621 the College in Antwerp moved to Louvain where Saint-Vincent spent four years teaching mathematics. During his years in Louvain, Saint-Vincent worked on mathematics and developed methods which
were important in setting the scene for the invention of the differential and integral calculus. He [1]:-
... elaborated the theory of conic sections on the basis of Commandino's editions of Archimedes (1558), Apollonius (1566), and Pappus (1588). He also developed a fruitful method of
infinitesimals. His students Gualterus van Aelst and Johann Ciermans defended his 'Theoremata mathematica scientiae staticae' (Louvain, 1624); and two other students, Guillaume Boelmans and Ignaz
Derkennis, aided him in preparing the 'Problema Austriacum', a quadrature of the circle, which Gregorius regarded as his most important result. He requested permission from Rome to print his
manuscript, but the general of the order, Mutio Vitelleschi, hesitated to grant it. Vitelleschi's doubts were strengthened by the opinion that Christoph Grienberger (Clavius' successor) rendered
on the basis of preliminary material sent from Louvain.
Among Saint-Vincent's students mentioned in this quote Johann Ciermans (1602-1648) and Guillaume Boelmans (born 7 October 1603 in Maastricht, died 20 October 1638 at Louvain) are perhaps the most
important. Saint-Vincent made a request to Mutio Vitelleschi, the Sixth Superior General of the Society of Jesus, to publish his manuscript. This led Vitelleschi to ask Saint-Vincent to prepare a
submission for Christoph Grienberger, professor of mathematics at the Collegio Romano and censor of all mathematical works written by Jesuit authors, asking him to give an opinion on the value of
Saint-Vincent's new methods. The Bibliotheque Royale de Belgique still contains the manuscript prepared by Guillaume Boelmans under Saint-Vincent's directions which was sent to Grienberger. It is
interesting to note that this document contains the first recorded use of polar coordinates. As he did for all such submissions, Grienberger returned corrections and changes which he recommended that
Saint-Vincent incorporate in his work before it could be published. In an attempt to make his manuscript acceptable for publication, Saint-Vincent went to Rome in 1625 but two years later, having
failed to get his material into a form Grienberger deemed to be satisfactory for publication, he returned to Louvain. He then spent six years in Prague as chaplain to the Holy Roman Emperor Ferdinand
II from 1626 until 1632. These were years of great difficulty since his health was poor but there was also severe tensions between the fervent Catholicism of Ferdinand and the Protestant nobles. Not
long after taking up the post, Saint-Vincent suffered a stroke but slowly recovered and his request to have his former student Theodor Moret appointed as his assistant was granted. Moret (1602-1667),
from Antwerp, was a fine mathematician and had joined the Jesuit Order when he was twenty years old. After spending time as Saint-Vincent's assistant, he taught at the Academy in Olomouc. By this
time Saint-Vincent's reputation was high and the Madrid Academy made him a tempting offer of a position in 1630. Sadly his health was still not robust enough to allow him to accept such an offer and
he was forced to decline.
In September 1631 Gustav II of Sweden led his Protestant army to attack the Catholic forces in Saxony. Victories saw German Protestant princes join forces with him as he moved south. Gustav took
Munich in May 1632 and his ally, the Protestant elector of Saxony, attacked Prague. As the Protestant forces entered Prague, Saint-Vincent fled to Vienna leaving in such haste that he left behind
many of his important mathematical papers. He moved to the Jesuit College in Ghent where he taught from 1632 for the rest of his life. Ten years after he abandoned his papers in Prague they were
returned to him by Father de Amagia and they were published as Opus geometricum quadraturae circuli sectionum coni in Antwerp in 1647. Despite his poor health, Saint-Vincent turned to another of the
classical problems of mathematics, namely the duplication of the cube. He suffered a second stroke in 1659 and his book was still incomplete when he died from a third stroke in 1667. The book Opus
geometricum ad mesolabium was published in 1668, over a year after his death.
Let us turn to Saint Vincent's mathematical contributions. Ad Meskens writes [6]:-
St Vincent's first investigations had to do with reflection and refraction. One of the problems which arose was the trisection of an angle. In searching for ways to obtain a trisection, St
Vincent came across the series
1/1 - 1/2 + 1/4 - 1/8 + ...
This series according to St Vincent equals 2/3, which he called the terminus. In contrast with classical Greek mathematics, St Vincent thus accepts, for the first time in the history of
mathematics, the existence of a limit. While Euclid writes that "one will obtain at last something smaller than the smallest quantity", St Vincent goes further and boldly writes: "the quantity
will be exhausted"
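Numerically the point is easy to see: the series is geometric with ratio -1/2, so its partial sums settle on 1/(1 + 1/2) = 2/3, Saint-Vincent's terminus. A tiny Python check:
s, term = 0.0, 1.0
for n in range(30):
    s += term       # add 1, -1/2, 1/4, -1/8, ...
    term *= -0.5
print(s, 2/3)       # both ~0.6666666667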
Saint-Vincent's main work, the Opus geometricum quadraturae circuli sectionum coni (1647), is a book over 1250 pages long. The title indicates that Saint-Vincent's main aim was to square the circle
and this, as Jean Dhombres writes in [5], proved a major stumbling block to its important methods being influential:-
[Saint-Vincent] is essentially a man of one book: 'Opus geometricum'. But what a book! It contains more than 1200 pages (in folio), and thousands of figures. It was printed in Antwerp in 1647,
but was never republished. One thing about the book immediately stirred some uneasiness: the addition to the title, namely: 'quadraturae circuli'. The engraved frontispiece shows sunrays
inscribed in a square frame being arranged by graceful angels to produce a circle on the ground: 'mutat quadrata rotundis'. There was uneasiness in the learned world because no one in that world
still believed that under the specific Greek rules the quadrature of a circle could possibly be effected, and few relished the thought of trying to locate an error, or errors, in 1200 pages of
text. Four years later, in 1651, Christiaan Huygens found a serious defect in the last book of 'Opus geometricum', namely in Proposition 39 of Book X, on page 1121. This gave the book a bad ...
There are many topics covered in the book including a study of circles, triangles, geometric series, ellipses, parabolas and hyperbolas. His book also contains his quadrature method which is related
to that of Cavalieri but which he discovered independently. He gives a method of squaring the circle which we can now see is essentially integration. Let us give a few more details of this remarkable
work. Book I is, naturally, an introductory one in which Saint-Vincent sets the scene with material on circles and triangles. Also in this Book is an introduction to transformations treated in a
geometrical way. Book II looks at geometric series which Saint-Vincent is able to sum using the transformations he introduced in the first Book. He applies his results to a number of interesting
problems such as the trisection of an angle which he achieves through an infinite series of bisections. He also applies his summation of series to the classical Greek problem of Zeno, namely Achilles
and the tortoise.
In Books III, IV, V and VI Saint-Vincent treats conic sections, the circle, ellipse, parabola, and hyperbola, using classical Greek geometric methods. He treats the hyperbola summing the area under
the curve using a sequence of ordinates in geometric progression [2]:-
Basically these procedures are closely related to the development of the integral calculus and the numerical methods which were subsequently developed for the calculation of logarithms form a
fascinating study.
Basically, Saint-Vincent showed that the area under a rectangular hyperbola xy = k over the interval [a, b] is the same as that over the interval [c, d] if a/b = c/d. He therefore integrated x^(-1) in a geometric form that, although he does not make the connection, is easily recognised as the logarithmic function. This connection was in fact made by Saint-Vincent's pupil Alphonse Antonio de
Sarasa (1618-1667).
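The stated property is easy to verify numerically: the area under y = k/x over [a, b] equals the area over [c, d] whenever a/b = c/d (equivalently b/a = d/c), and both equal k*log(b/a). A small Python check with arbitrary values, using a midpoint-rule approximation of the areas:
import math

def area_under_hyperbola(k, lo, hi, steps=200000):
    # Midpoint-rule approximation of the area under y = k/x on [lo, hi].
    h = (hi - lo) / steps
    return sum(k / (lo + (i + 0.5) * h) for i in range(steps)) * h

k = 1.0
print(area_under_hyperbola(k, 2.0, 6.0),    # b/a = 3
      area_under_hyperbola(k, 5.0, 15.0),   # d/c = 3
      k * math.log(3))                      # all ~1.0986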
In Book VII Saint-Vincent gives full details of his geometric method of integration, giving a large collection of interesting examples. He considers solid figures generated by two plane parallel
surfaces on a common base where the solid is bounded by equidistant parallels. In Book IX he extends his methods to give volumes of cylindrical bodies. Of course, as far as Saint-Vincent was
concerned, his aim was to square the circle and he reaches that in Book X. It was here that he made the error which led him to believe that he had squared the circle using ruler and compass methods.
We have already noted in an above quote that the error was detected by Huygens in 1651. Margaret Baron writes [2]:-
Unfortunately the delayed publication of the 'Opus geometricum' prevented it from receiving the attention it would certainly have merited had it appeared twenty years earlier. In 1647, ten years
after the publication of Descartes' 'La Géométrie', algebraic methods were rapidly gaining ground and the form and manner of presentation of Grégoire's work was not such as to make it easy
reading. Those who obtained the book did so mainly to study the faulty circle quadrature. Many who read it, however, became fascinated by the geometric integration methods and went on to make a
deeper study of the entire work. Amongst those who gained much from the 'Opus geometricum' should be counted Blaise Pascal whose 'Traité des trilignes rectangles et leurs onglets' is based
essentially on the 'ungula' of Grégoire. Huygens recommended the section on geometric series to Leibniz who later came to make a thorough study of the entire work. Tschirnhaus, friend and
associate of Leibniz during his Paris years, found in the 'ductus in planum' a valuable foundation for the development of his own algebraic integration methods.
Article by: J J O'Connor and E F Robertson
|
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Saint-Vincent.html","timestamp":"2014-04-19T09:50:55Z","content_type":null,"content_length":"27015","record_id":"<urn:uuid:13a90331-13ac-4e7f-a546-4c15ad9fca38>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
David Eisenbud
Born: 8 April 1947 in New York City, New York, USA
David Eisenbud's parents were Leonard Eisenbud and Ruth-Jean Rubinstein. Leonard Eisenbud's parents, Boris and Katherine Eisenbud, were Russian but had emigrated to the United States in 1902. Leonard
Eisenbud (1913-2004) was a student of Eugene Wigner and together they co-authored the important book Nuclear Structure (1958). During World War II, Leonard had undertaken research on radar, and with
Wigner, had lectured on quantum mechanics at the Clinton Laboratories at Oak Ridge. In 1947, shortly after their son David was born, the Eisenbuds moved to a newly purchased house in Patchogue after
accepting the offer to join the new Brookhaven National Laboratory's Physics Department situated on Long Island, Upton, New York. Francis Bonner describes what happened next [2]:-
I hadn't been there very long when I was shocked to learn that Leonard Eisenbud's "Q" clearance was in jeopardy: he had received an "interrogatory" asking him to respond to a broad range of
questions about his and his wife's political affiliations. It was no secret that Ruth-Jean's mother and sister were members of the Communist Party, and the charges against Leonard began and
wandered off from there. Indignant and dispirited, and believing the probability of his receiving clearance to be vanishingly small, Leonard chose not to pursue it to the stage of formal hearings
and left to accept an appointment at the Bartol Research Foundation in Swarthmore, PA.
David Eisenbud was brought up in Swarthmore where his father worked until 1958 when the State University of New York set up the State University College on Long Island at Oyster Bay, and Leonard
Eisenbud was appointed Professor of Physics and Acting Chairman of the Department of Physics. So far we have not mentioned the career of David's mother Ruth-Jean Eisenbud (1915-2004) - we should note
that she was a childhood polio victim but went on to become a psychotherapist and a professor at New York University and Adelphi University.
Eisenbud entered the University of Chicago where he went on to undertake both undergraduate and graduate studies. He was only nineteen years old when he graduated with a B.S. in 1966 and continued to
graduate studies, being awarded his Master's degree in 1967. He then undertook research with his thesis advisor Saunders Mac Lane but he was given considerable help by J C Robson who he considered as
an unofficial advisor. He wrote of Mac Lane:-
He was a great figure, and very important for me personally.
But others at Chicago were also strong influences on the young Eisenbud [1]:-
While I was a graduate student at the University of Chicago (1967-1970), I listened at every chance I got to the beautiful lectures of Irving Kaplansky. He was then just finishing his book
'Commutative Rings', and lectured from it. I admired him and it a great deal, but - in the style of a rebellious adolescent - I was quite ready to proclaim that a lot was left out.
On 3 June 1970, Eisenbud married Monika Margarete Schwabe; they had two children David and Alina. Also in 1970, Eisenbud received his Ph.D. from Chicago for his thesis on non-commutative ring theory
Torsion Modules over Dedekind Prime Rings. Eisenbud's first paper was not on ring theory, however, but rather on group theory with Groups of order automorphisms of certain homogeneous ordered sets
(1969). In 1970 he published a number of papers on non-commutative ring theory: Subrings of Artinian and Noetherian rings; (with J C Robson) Modules over Dedekind prime rings; and (with J C Robson)
Hereditary Noetherian prime rings. He was appointed as a lecturer at Brandeis University in 1970 and taught there for twenty-seven years. J C Robson, an English ring theorist, was at the University
of Leeds in England and Eisenbud visited him in 1971 [1]:-
In the fall of 1971, visiting at the University of Leeds in England, I had a chance to lecture on ... Noether normalization (in a version borrowed from Nagata's book, 'Local Rings').
He was promoted at Brandeis University to Assistant Professor in 1972. In the following year, he was awarded a Sloan Foundation Fellowship which allowed him to spend the two academic years 1973-75 on
research visits. He was a visiting scholar at Harvard University during 1973-74 and then was an invited speaker at the International Congress of Mathematicians in Vancouver in August 1974. Continuing
his research visits, he was a Fellow at the Institut des Hautes Études Scientifiques (Bures-sur-Yvette) during 1974-75:-
... thoroughly enjoying the French for their mathematics, culture, and good life. ... About half way through that year I was invited by Wolfgang Vogel, whom I had never met, to come to Halle, in
East Germany, to visit, give a lecture, and discuss mathematics. ... A refugee from neighbouring Leipzig when she was five, my wife hadn't been back since then and came along. The trip seemed
exotic and slightly risky. We had plenty of adventures that first visit! On the train from Leipzig to Halle my wallet - with my passport - mysteriously disappeared from my coat, a loss that we
discovered as soon as I tried to check in at the University guest house.
Returning to Brandeis University, Eisenbud was promoted to Associate Professor in 1976. He was a Visiting Professor at the University of Bonn during the academic year 1979-80, then promoted to full
professor at Brandeis in 1980. Barry Mazur describes his research achievements in [9]. We give some extracts:-
David Eisenbud's research accomplishments extend broadly through algebra and its applications. His publications (over a hundred of them!) have made significant contributions to fundamental issues
in the subject. David also has a marvelous gift for mathematical collaboration. The sweep of his interests and the intensity of his mathematical interactions have brought him into fruitful
co-authorship with many mathematicians of different backgrounds and different viewpoints.
Shortly after his graduate days, David began a joint project with Buchsbaum. Among other things, they established an elegant geometric criterion for exactness of a finite free complex that has
many applications in the homological study of commutative rings. ... In the middle 1970s David worked with Harold Levine on the topology of finite C^∞ map germs ... V I Arnold once referred to [
the] celebrated formula of Eisenbud-Levine, which links calculus, algebra and geometry, as a "paradigm" more than a theorem that provides a local manifestation of interesting global invariants
and that "would please Poincaré and Hilbert (also Euler, Cauchy and Kronecker, to name just those classical mathematicians, whose works went in the same direction)." Given this early work, it was
natural for David's attention to turn to the study of singularities and their topology. In this period, David wrote a book with the topologist Walter Neumann [a son of Bernhard Neumann and Hanna
Neumann] on the topology of the complements of the sort of knots that appear in the theory of plane-curve singularities. David next became interested in algebraic geometry, beginning a long and
important collaboration with Joe Harris. Together, they developed the theory of Limit Linear Series and used it to solve a number of classical problems about the moduli spaces of complex
algebraic curves.
During 1982-84, Eisenbud was Chairman of the Department of Mathematics at Brandeis University. Following this he again took research leave spending the academic year 1986-87 as a Visiting Professor
at the Mathematical Sciences Research Institute at Berkeley and the following academic year as a Visiting Professor at Harvard University. Back at Brandeis, he was again Chairman of the Department of
Mathematics during 1992-94. He made a research visit after this period as Chairman, spending the autumn term of 1994 at Harvard University and the spring term of 1995 at l'Institut Henri Poincaré in
Paris. In 1997 he was appointed as Director of the Mathematical Sciences Research Institute (MSRI) at Berkeley and a Professor at University of California, Berkeley. Although in some sense these two
posts were complementary, we should note that MSRI is not part of the University of California. This has advantages in that it gives it independence, but it also means that it is far less secure
financially as it stands alone. There were many challenges and opportunities for Eisenbud taking on this new post [4]:-
MSRI is a large operation, with about 1,300 visitors coming through each year and about 85 in residence at any one time. It is also large in terms of its coverage of mathematics. Over the years
it has hosted programs in mathematical economics, mathematical biology, string theory, and statistics, as well as in a wide variety of areas in pure mathematics. Indeed, Eisenbud notes that a
distinctive feature of MSRI in the world of mathematics institutes is its combination of pure and applied areas. As he puts it, "We have continued to have a fundamental emphasis, and we mix it
with applied areas."
However, he took up the post wanting to be far more than an administrator [7]:-
Eisenbud is looking forward not only to being the director of MSRI but also to being a mathematician there. "My own work has involved a number of different fields, and I like learning mathematics
a lot," he remarks. "So I feel that I'll profit personally by the flow of mathematics through there, as well as helping the Institute."
He held this position as director for ten years and, during that time, he was also president of the American Mathematical Society from 2003 to 2005. In fact the excellent work he was doing at MSRI
was a factor in his election as president. Margaret Wright writes [9]:-
David has served the mathematical community as chair of the mathematics department at Brandeis, on advisory and evaluation committees for the National Science Foundation, as a member of the Board
on Mathematical Sciences, and as vice president of the AMS. But his service that is most visible nationally and internationally has been as director of MSRI, where he moved in 1997 after
twenty-seven years at Brandeis. A fundamental strength of mathematicians is their ability to generalize, and I believe that David's performance as AMS president can be predicted with high
accuracy by generalizing from his success at MSRI. In fact, his leadership at MSRI exemplifies the qualities needed by the AMS president. With David as its director, MSRI has continued its
tradition of superlative programs in fundamental mathematics while simultaneously expanding into a broader and more diverse selection of fields. David has furthered a deliberate policy of
outreach into new areas, and MSRI's influence and reputation increasingly extend beyond core mathematics into areas on the boundaries between mathematics and science as well as into applications
ranging from imaging to cryptography to finance.
Eisenbud has published a number of important books. In 1982, in collaboration with Corrado De Concini and Claudio Procesi, he published the monograph Hodge algebras. Three years later, this time in
collaboration with Walter Neumann, he published Three-dimensional link theory and invariants of plane curve singularities. In 1992, in collaboration with Joe Harris, he produced Schemes. The language
of modern algebraic geometry. Alexey Rudakov writes in a review:-
This book is intended to introduce basic notions of modern algebraic geometry. These are schemes in general, affine schemes, projective schemes and the functor of points. The authors discuss the
motivation behind most of their definitions and give many examples. The reader who works through them will become comfortable with schemes as far as flatness and characterization of a space by
its functor of points are concerned, as well as with the notion itself. These topics are important in the developing field of noncommutative geometry, where scheme-theoretic thinking is involved,
and this book provides a good exposition for students.
His most significant book, however, was Commutative algebra: With a view toward algebraic geometry which was published in 1995. Matthew Miller begins a detailed review with the following paragraph:-
With so many texts on commutative algebra available, one's reaction to this one might be "Why yet another one?", and "Why is it so fat?" The answer to the second question answers the first as
well - this text has a distinctively different flavour than existing texts, both in coverage and style. Motivation and intuitive explanations appear throughout, there are many worked examples,
and both text and problem sets lead up to contemporary research.
This important book earned Eisenbud the 2010 Steele prize from the American Mathematical Society. The citation indicates why the book is so significant:-
Eisenbud's book was designed with several purposes in mind. One was to provide an updated text on basic commutative algebra reflecting the intense activity in the field during the author's life.
Another was to provide algebraic geometers, commutative algebraists, computational geometers, and other users of commutative algebra with a book where they could find results needed in their
fields, especially those pertaining to algebraic geometry. But even more, Eisenbud felt that there was a great need for a book which did not present pure commutative algebra leaving the
underlying geometry behind. In his introduction he writes, "It has seemed to me for a long time that commutative algebra is best practiced with knowledge of the geometric ideas that played a
great role in its formation: in short, with a view toward algebraic geometry."
Two further books by Eisenbud are important. The geometry of schemes (2000), again written with Joe Harris:-
... is a wonderful introduction to the way of looking at algebraic geometry introduced by Alexandre Grothendieck and his school. The style of this book, however, differs greatly from that of
Bourbaki; it is not formal and systematic, but friendly and inviting .... Thus this book introduces big ideas with seemingly simple, concrete examples, generalizes from them to an appropriate
abstract formulation, and then applies the concept to interesting classical problems in a meaningful way. It is a pleasure to read.
Finally we mention Eisenbud's 2005 book The geometry of syzygies. A second course in commutative algebra and algebraic geometry and look forward to further fascinating books.
We have noted several honours given to Eisenbud, including the 2010 Steele Prize, but among those we have not yet mentioned is his election to the American Academy of Arts and Sciences in 2006 and
the creation of the Eisenbud Professorship at Berkeley's Mathematical Sciences Research Institute made possible by a US$10 million gift from the Simons Foundation in May 2007. We end this biography
with his own descriptions of his non-mathematical interests:-
My interests outside mathematics include hiking, juggling, and, above all, music. Originally a flutist, I now spend most of my musical time singing art-songs (Schubert, Schumann, Brahms, Debussy, ...).
Article by: J J O'Connor and E F Robertson
[SciPy-user] sparse matrices
Nathan Bell wnbell@gmail....
Tue Mar 17 08:20:38 CDT 2009
On Tue, Mar 17, 2009 at 7:11 AM, Eric Friedman <ejf27@cornell.edu> wrote:
> 1)How do I find the detailed descriptions of functions like coo?
> Is there a good
> document (which I haven't been able to find) or do I need
> to look inside the code?
> For example, I don't see how to access individual elements in a coo.
> Also, I'd like to be able to pull out a submatrix from a
> subset of the rows and
> find the set of rows which are nonzero.
The docstrings in scipy.sparse are the best source of information. I
generally use IPython to read docstrings:
In [1]: from scipy.sparse import *
In [2]: csr_matrix? <press enter>
The same information is available here:
To extract submatrices you'll need to use either the CSR or CSC
formats. These formats support most of the fancy indexing tricks that
numpy provides for arrays. The find() function in scipy.sparse may
also be helpful.
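For example, something like the following (a tiny made-up matrix, just to illustrate both operations):

>>> import numpy as np
>>> from scipy.sparse import csr_matrix, find
>>> A = csr_matrix(np.array([[1., 0., 2.],
...                          [0., 0., 0.],
...                          [0., 3., 4.]]))
>>> sub = A[[0, 2], :]          # submatrix built from a subset of the rows (CSR supports this)
>>> i, j, v = find(A)           # row indices, column indices and values of the nonzeros
>>> rows_with_nonzeros = np.unique(i)   # here array([0, 2]); row 1 is entirely zero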
The sparse documentation doesn't currently inform users about "best
practices" for efficiency, so feel free to ask on this list for help
with specific operations.
> 2) I tried linsolve.spsolve and it says I should use
> sparse.linalg.dsolve but
> when I try that directly I can't get it to work.
> Also, is there any
> documentation on dsolve or spsolve? My matrix is
> singular, but the matrix
> equation is still solvable -- can it deal with that?
Yes, scipy.sparse.spsolve() can solve some consistent but singular systems.
>>> from scipy import rand
>>> from scipy.sparse.linalg import spsolve
>>> from scipy.sparse import *
>>> A = identity(10, format='csc')
>>> A[9,9] = 0
>>> b = A*rand(10)
>>> spsolve(A,b)
However spsolve() is known to fail in some cases:
spsolve() uses an outdated version of SuperLU. We should update this
by SciPy 0.8.
If you find spsolve() inadequate, then consider the UMFPACK scikit instead:
FWIW, UMFPACK is the same solver that MATLAB uses for solving sparse
linear systems. Also, I believe it is generally faster than SuperLU.
Nathan Bell wnbell@gmail.com
Implicit and Explicit Iterations with Meir-Keeler-Type Contraction for a Finite Family of Nonexpansive Semigroups in Banach Spaces
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 720192, 14 pages
Research Article
^1School of Control and Computer Engineering, North China Electric Power University, Baoding 071003, China
^2School of Mathematics and Physics, North China Electric Power University, Baoding 071003, China
^3Department of Mathematics Education and the RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea
Received 31 December 2011; Accepted 28 January 2012
Academic Editor: Rudong Chen
Copyright © 2012 Jiancai Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We introduce implicit and explicit iterative schemes for a finite family of nonexpansive semigroups with a Meir-Keeler-type contraction in a Banach space. Then we prove the strong convergence
of the implicit and explicit iterative schemes. Our results extend and improve some recent ones in the literature.
1. Introduction
Let $C$ be a nonempty subset of a Banach space $E$ and $T:C\to C$ be a mapping. We call $T$ nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. The set of all fixed points of $T$ is denoted by $F(T)$, that is, $F(T)=\{x\in C: Tx=x\}$.
A one-parameter family $\mathcal{S}=\{T(t): t\ge 0\}$ is said to be a semigroup of nonexpansive mappings, or nonexpansive semigroup, on $C$ if the following conditions are satisfied: (1) $T(0)x=x$ for all $x\in C$; (2) $T(s+t)=T(s)T(t)$ for all $s,t\ge 0$; (3) for each $t\ge 0$, $\|T(t)x-T(t)y\|\le\|x-y\|$ for all
$x,y\in C$; (4) for each $x\in C$, the mapping $t\mapsto T(t)x$ from $\mathbb{R}^{+}$, where $\mathbb{R}^{+}$ denotes the set of all nonnegative reals, into $C$ is continuous.
We denote by $F(\mathcal{S})$ the set of all common fixed points of the semigroup $\mathcal{S}$, that is, $F(\mathcal{S})=\bigcap_{t\ge 0}F(T(t))$, and by $\mathbb{N}$ the set of natural numbers.
Now, we recall some recent work on nonexpansive semigroups in the literature. In [1], Shioji and Takahashi introduced the following implicit iteration for a nonexpansive semigroup in a Hilbert space:
where and . Under the certain conditions on and , they proved that the sequence defined by (1.1) converges strongly to an element in .
In [2], Suzuki introduced the following implicit iteration for a nonexpansive semigroup in a Hilbert space: where and . Under the conditions that , he proved that defined by (1.2) converges strongly
to an element of . Later on, Xu [3] extended the iteration (1.2) to a uniformly convex Banach space that admits a weakly sequentially continuous duality mapping. Song and Xu [4] also extended the
iteration (1.2) to a reflexive and strictly convex Banach space.
In 2007, Chen and He [5] studied the following implicit and explicit viscosity approximation processes for a nonexpansive semigroup in a reflexive Banach space admitting a weakly sequentially
continuous duality mapping: where is a contraction, and . They proved the strong convergence for the above iterations under some certain conditions on the control sequences.
Recently, Chen et al. [6] introduced the following implicit and explicit iterations for nonexpansive semigroups in a reflexive Banach space admitting a weakly sequentially continuous duality mapping:
where is a contraction, and . They proved that defined by (1.4) and (1.5) converges strongly to an element of , which is the unique solution of the following variation inequality problem:
For more convergence theorems on implicit and explicit iterations for nonexpansive semigroups, refer to [7–13].
In this paper, we introduce an implicit and explicit iterative process by a generalized contraction for a finite family of nonexpansive semigroups in a Banach space. Then we prove the strong
convergence for the iterations and our results extend the corresponding ones of Suzuki [2], Xu [3], Chen and He [5], and Chen et al. [6].
2. Preliminaries
Let be a Banach space and the duality space of . We denote the normalized mapping from to by defined by where denotes the generalized duality pairing. For any with and , it is well known that the
following inequality holds:
The dual mapping is called weakly sequentially continuous if is single valued, and , where denotes the weak convergence, then weakly star converges to [14–16]. A Banach space is called to satisfy
Opial’s condition [17] if for any sequence in , , It is known that if admits a weakly sequentially continuous duality mapping , then is smooth and satisfies Opial’s condition [14].
A function is said to be an -function if , for any , and for every and , there exists such that , for all . This implies that for all .
Let be a mapping. is said to be a -contraction if there exists a -function such that for all with . Obviously, if for all , where , then is a contraction. is called a Meir-Keeler-type mapping if for
each , there exists such that for all , if , then .
In this paper, we always assume that is continuous, strictly increasing and , where , is strictly increasing and onto.
The following lemmas will be used in next section.
Lemma 2.1 (see [18]). Let be a metric space and be a mapping. The following assertions are equivalent: (i) is a Meir-Keeler-type mapping;(ii)there exists an -function such that is a -contraction.
Lemma 2.2 (see [19]). Let be a Banach space and be a convex subset of . Let be a nonexpansive mapping and be a -contraction. Then the following assertions hold: (i) is a -contraction on and has a
unique fixed point in ;(ii)for each , the mapping is of Meir-Keeler-type and it has a unique fixed point in .
Lemma 2.3 (see [20]). Let be a Banach space and be a convex subset of . Let be a Meir-Keeler-type contraction. Then for each there exists such that, for each with .
Lemma 2.4 (see [21]). Let C be a closed convex subset of a strictly convex Banach space E. Let be a nonexpansive mapping for each , where is some integer. Suppose that is nonempty. Let be a sequence
of positive numbers with . Then the mapping defined by is well defined, nonexpansive and holds.
Lemma 2.5 (see [22]). Assume that is a sequence of nonnegative real numbers such that where is a sequence in and is a sequence in such that(i);(ii);(iii) or .Then .
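Remark (an illustration, not part of the paper's argument). Lemma 2.2 guarantees that a generalized contraction on a closed convex set has a unique fixed point, and for an ordinary contraction Picard iteration converges to it. A minimal numerical sketch in Python, using the illustrative choice f(x) = cos x on [0, 1] (a contraction with constant sin 1 < 1, hence in particular a Meir-Keeler-type mapping):

import math

def f(x):
    # f(x) = cos x maps [0, 1] into itself and |f(x) - f(y)| <= sin(1)|x - y|,
    # so it is a contraction (in particular a Meir-Keeler-type mapping) on [0, 1].
    return math.cos(x)

x = 0.0
for n in range(100):      # Picard iteration x_{n+1} = f(x_n)
    x = f(x)

print(x)                  # about 0.7390851, the unique solution of cos(x) = x in [0, 1]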
3. Main Results
In this section, by a generalized contraction mapping we mean a Meir-Keeler-type mapping or - contraction. In the rest of the paper we suppose that from the definition of the -contraction is
continuous, strictly increasing and is strictly increasing and onto, where , for all . As a consequence, we have the is a bijection on .
Theorem 3.1. Let be a nonempty closed convex subset of a reflexive Banach space which admits a weakly sequentially continuous duality mapping from into . For every , let be a semigroup of
nonexpansive mappings on such that and be a generalized contraction on . Let and be the sequences satisfying and . Let be a sequence generated by Then converges strongly to a point , which is the
unique solution to the following variational inequality:
Proof. First, we show that the sequence generated by (3.1) is well defined. For every and , let and define by where . Since is nonexpansive, is nonexpansive. By Lemma 2.2 we see that is a
Meir-Keeler-type contraction for each . Hence, each has a unique fixed point, denoted as , which uniquely solves the fixed point equation (3.3). Hence generated by (3.1) is well defined.
Now we prove that generated by (3.1) is bounded. For any , we have Using (3.4), we get and hence which implies that Hence This shows that is bounded, and so are , and .
Since is reflexivity and is bounded, there exists a subsequence such that for some as . Now we prove that . For any fixed , we have By hypothesis on , , we have Further, from (3.9) we get Since
admits a weakly sequentially duality mapping, we see that satisfies Opial’s condition. Thus if , we have This contradicts (3.11). So .
In (3.5), replacing with and with , we see that which implies that Now we prove that is relatively sequentially compact. Since is weakly sequentially continuous, we have which implies that If , then
is relatively sequentially compact. If , we have . Since is continuous, . By the definition of , we conclude that , which implies that is relatively sequentially compact.
Next, we prove that is the solution to (3.2). Indeed, for any , we have Therefore, Since and is weakly sequentially continuous, we have This shows that is the solution of the variational inequality (
Finally, we prove that is the unique solution of the variational inequality (3.2). Assume that with is another solution of (3.2). Then there exists such that . By Lemma 2.3 there exists such that .
Since both and are the solution of (3.2), we have Adding the above inequalities, we get which is a contradiction. Therefore, we must have , which implies that is the unique solution of (3.2).
In a similar way it can be shown that each cluster point of sequence is equal to . Therefore, the entire sequence converges strongly to . This completes the proof.
If letting for all in Theorem 3.1, then we get the following.
Corollary 3.2. Let be a nonempty closed convex subset of a reflexive Banach space which admits a weakly sequentially continuous duality mapping from into . For every ( ), let be a semigroup of
nonexpansive mappings on such that and be a generalized contraction on . Let and be sequences satisfying . Let be a sequence generated by Then converges strongly to a point , which is the unique
solution to the following variational inequality:
Theorem 3.3. Let be a nonempty closed convex subset of a reflexive and strictly convex Banach space which admits a weakly sequentially continuous duality mapping from into . For every , let be a
semigroup of nonexpansive mappings on such that and be a generalized contraction on . Let and be the sequences satisfying . Let be a sequence generated Then converges strongly to a point , which is
the unique solution of variational inequality (3.2).
Proof. Let and . Now we show by induction that It is obvious that (3.25) holds for . Suppose that (3.25) holds for some , where . Observe that Now, by using (3.24) and (3.26), we have By induction we
conclude that (3.25) holds for all . Therefore, is bounded and so are , , .
For each and , define the mapping , where . Then we rewrite the sequence (3.24) to Obviously, each is nonexpansive. Since is bounded and is reflexive, we may assume that some subsequence of converges
weakly to . Next we show that . Put , , and for each . Fix . By (3.28) we have So, for all , we have
Since has a weakly sequentially continuous duality mapping satisfying Opial's condition, this implies . By Lemma 2.4, we have for each . Therefore, . In view of the variational inequality (3.2) and
the assumption that duality mapping is weakly sequentially continuous, we conclude that
Finally, we prove that as . Suppose that . Then there exists and subsequence of such that for all . Put , and . By Lemma 2.3 one has for all . Now, from (2.2) and (3.28) we have It follows that where
is a constant.
Let and . It follows from (3.33) that It is easy to see that , and (noting (3.28)) Using Lemma 2.5, we conclude that as . It is a contradiction. Therefore, as . This completes the proof.
If letting for all in Theorem 3.3, then we get the following.
Corollary 3.4. Let be a nonempty closed convex subset of a reflexive and strictly convex Banach space which admits a weakly sequentially continuous duality mapping from into . For every , let be a
semigroup of nonexpansive mappings on such that and be a generalized contraction on . Let and be sequences satisfying . Let be a sequence generated Then converges strongly to a point , which is the
unique solution of variational inequality (3.2).
Remark 3.5. Theorem 3.1 and Corollary 3.2 extend the corresponding ones of Suzuki [2], Xu [3], and Chen and He [5] from one nonexpansive semigroup to a finite family of nonexpansive semigroups. But
Theorem 3.3 and Corollary 3.4 are not the extension of Theorem 3.2 of Chen and He [5] since Banach space in Theorem 3.3 and Corollary 3.4 is required to be strictly convex. But if letting in Theorem
3.3 and Corollary 3.4, we can remove the restriction on strict convexity and hence they extend Theorem 3.2 of Chen and He [5] from a contraction to a generalized contraction.
Remark 3.6. Our Theorem 3.1 extends and improves Theorems 3.2 and 4.2 of Song and Xu [4] from a nonexpansive semigroup to a finite family of nonexpansive semigroups and a contraction to a generalized
contraction. Our conditions on the control sequences are different from those of Song and Xu [4].
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (Grant Number:
References
1. N. Shioji and W. Takahashi, “Strong convergence theorems for asymptotically nonexpansive semigroups in Hilbert spaces,” Nonlinear Analysis, vol. 34, no. 1, pp. 87–99, 1998.
2. T. Suzuki, “On strong convergence to common fixed points of nonexpansive semigroups in Hilbert spaces,” Proceedings of the American Mathematical Society, vol. 131, pp. 2133–2136, 2002.
3. H. K. Xu, “A strong convergence theorem for contraction semigroups in Banach spaces,” Bulletin of the Australian Mathematical Society, vol. 72, no. 3, pp. 371–379, 2005.
4. Y. Song and S. Xu, “Strong convergence theorems for nonexpansive semigroup in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 338, no. 1, pp. 152–161, 2008.
5. R. D. Chen and H. M. He, “Viscosity approximation of common fixed points of nonexpansive semigroups in Banach space,” Applied Mathematics Letters, vol. 20, no. 7, pp. 751–757, 2007.
6. R. D. Chen, H. M. He, and M. A. Noor, “Modified Mann iterations for nonexpansive semigroups in Banach space,” Acta Mathematica Sinica, English Series, vol. 26, no. 1, pp. 193–202, 2010.
7. I. K. Argyros, Y. J. Cho, and X. Qin, “On the implicit iterative process for strictly pseudo-contractive mappings in Banach spaces,” Journal of Computational and Applied Mathematics, vol. 233, no. 2, pp. 208–216, 2009.
8. S. S. Chang, Y. J. Cho, H. W. Joseph Lee, and C. K. Chan, “Strong convergence theorems for Lipschitzian demi-contraction semigroups in Banach spaces,” Fixed Point Theory and Applications, vol. 2011, Article ID 583423, 10 pages, 2011.
9. Y. J. Cho, L. B. Ćirić, and S. H. Wang, “Convergence theorems for nonexpansive semigroups in CAT(0) spaces,” Nonlinear Analysis, vol. 74, no. 17, pp. 6050–6059, 2011.
10. Y. J. Cho, S. M. Kang, and X. Qin, “Strong convergence of an implicit iterative process for an infinite family of strict pseudocontractions,” Bulletin of the Korean Mathematical Society, vol. 47, no. 6, pp. 1259–1268, 2010.
11. W. Guo and Y. J. Cho, “On the strong convergence of the implicit iterative processes with errors for a finite family of asymptotically nonexpansive mappings,” Applied Mathematics Letters, vol. 21, no. 10, pp. 1046–1052, 2008.
12. H. He, S. Liu, and Y. J. Cho, “An explicit method for systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings,” Journal of Computational and Applied Mathematics, vol. 235, no. 14, pp. 4128–4139, 2011.
13. X. Qin, Y. J. Cho, and M. Shang, “Convergence analysis of implicit iterative algorithms for asymptotically nonexpansive mappings,” Applied Mathematics and Computation, vol. 210, no. 2, pp. 542–550, 2009.
14. J. P. Gossez and E. L. Dozo, “Some geometric properties related to the fixed point theory for nonexpansive mappings,” Pacific Journal of Mathematics, vol. 40, pp. 565–573, 1972.
15. J. S. Jung, “Iterative approaches to common fixed points of nonexpansive mappings in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 302, no. 2, pp. 509–520, 2005.
16. J. Schu, “Approximation of fixed points of asymptotically nonexpansive mappings,” Proceedings of the American Mathematical Society, vol. 112, pp. 143–151, 1991.
17. Z. Opial, “Weak convergence of the sequence of successive approximations for nonexpansive mappings,” Bulletin of the American Mathematical Society, vol. 73, pp. 591–597, 1967.
18. T. C. Lim, “On characterizations of Meir-Keeler contractive maps,” Nonlinear Analysis, vol. 46, no. 1, pp. 113–120, 2001.
19. A. Petrusel and J. C. Yao, “Viscosity approximation to common fixed points of families of nonexpansive mappings with generalized contractions mappings,” Nonlinear Analysis, vol. 69, no. 4, pp. 1100–1111, 2008.
20. T. Suzuki, “Moudafi's viscosity approximations with Meir-Keeler contractions,” Journal of Mathematical Analysis and Applications, vol. 325, no. 1, pp. 342–352, 2007.
21. R. E. Bruck, “Properties of fixed-point sets of nonexpansive mappings in Banach spaces,” Transactions of the American Mathematical Society, vol. 179, pp. 251–262, 1973.
22. H. K. Xu, “An iterative approach to quadratic optimization,” Journal of Optimization Theory and Applications, vol. 116, no. 3, pp. 659–678, 2003.
Northglenn, CO Calculus Tutor
Find a Northglenn, CO Calculus Tutor
...I've worked with high school and college students, priding myself on being able to explain any concept to anyone. I worked as a tutor and instructional assistant at American River College in
Sacramento, CA for 15 years. I also attended UC Berkeley as an Engineering major.I took this class at American River College in Sacramento, CA.
11 Subjects: including calculus, geometry, statistics, algebra 1
...If you do not see a certain course above please feel free to contact me as I may have taken that course.While attending college I got a bachelors of science degree in Electrical Engineering.
During the 4 1/2 years of pursuing my degree I had emphasis in Power and Controls focusing more so on Pow...
20 Subjects: including calculus, chemistry, physics, statistics
...I love math, and I enjoy helping students understand it as well. When working with a student, I usually try to show the student how I understand the material first. Then upon understanding the
student's learning style, I will incorporate a variety of methods to help the student.
17 Subjects: including calculus, chemistry, psychology, discrete math
...Too often I hear the phrase, "I'm not good at math". I strongly believe that ANYONE can be "good" at math. Through tutoring, I seek to remove the intimidation of math and physics and help
students build up confidence and understanding.
13 Subjects: including calculus, physics, geometry, algebra 1
Hey! I grew up in the Bay Area and graduated from College Park High School in 2012. This fall, I will be in my sophomore year at the University of Denver, working on degrees in Biology,
International Studies, and Mathematics.
6 Subjects: including calculus, algebra 2, SAT math, precalculus
Nowhere Dense
Hello,
I have the following question and I hope you can answer it:
"What is the sigma algebra generated by the nowhere dense subsets of R"
Glossary of terms for the maximal determinant problem
Kronecker product: If A is a p×q matrix and B is an r×s matrix, then the Kronecker product, A⊗B, is the pr×qs matrix
    [ a(1,1)B   a(1,2)B   ...   a(1,q)B ]
    [   ...       ...     ...     ...   ]
    [ a(p,1)B   a(p,2)B   ...   a(p,q)B ]
where a(i,j) is the element of A in row i and column j. (Also known as the tensor product.)
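For example, NumPy's kron function implements exactly this block construction; a small illustrative check:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])      # p x q = 2 x 2
B = np.array([[0, 5],
              [6, 7]])      # r x s = 2 x 2

K = np.kron(A, B)           # pr x qs = 4 x 4
# The (i, j) block of K equals a(i,j)*B, e.g. K[0:2, 2:4] == 2*B and K[2:4, 0:2] == 3*B.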
The different does not have a canonical square root
Just wanted to draw attention to this very nice exchange on Math Overflow. Matt Emerton remarks that the different of a number field is always a square in the ideal class group, and asks: is there
a canonical square root of the ideal class of the different?
What grabs me about this question is that the word “canonical” is a very hard one to define precisely. Joe Harris used to give a lecture called, “The only canonical divisor is the canonical
divisor.” The difficulty around the word “canonical” is what gives the title its piquancy.
Usually we tell students that something is “canonical” if it is “defined without making any arbitrary choices.” But this seems to place a lot of weight on the non-mathematical word “arbitrary.”
Here’s one way to go: you can say a construction is canonical if it is invariant under automorphisms. For instance, the abelianization of a group is a canonical construction; if f: G_1 -> G_2 is an
isomorphism, then f induces an isomorphism between the abelianizations.
It is in this sense that MathOverflow user “Frictionless Jellyfish” gives a nice proof that there is no canonical square root of the different; the slick cnidarian exhibits a Galois extension K/Q,
with Galois group G = Z/4Z, such that the ideal class of the different of K has a square root (as it must) but none of its square roots are fixed by the action of G (as they would have to be, in
order to be called “canonical.”) The different itself is canonical and as such is fixed by G.
But this doesn’t seem to me to capture the whole sense of the word. After all, in many contexts there are no automorphisms! (E.G. in the Joe Harris lecture, “canonical” means something a bit different.)
Here’s a sample question that bothers me. Ever since Gauss we’ve known that there’s a bijection between the set of proper isomorphism classes of primitive positive definite binary quadratic forms of
discriminant d and the ideal class group of a quadratic imaginary field.
Do you think this bijection is “canonical” or not? Why?
5 thoughts on “The different does not have a canonical square root”
1. I think the use of the word canonical in mathematics is a blend of the usage in standard English together with “preserved by isomorphisms”. To take a lower level example than those discussed in
your post, I would be happy to talk of the “canonical” isomorphism Z–>Z (Z=integers) of abelian groups. Implicitly, here, I am thinking of Z as the free cyclic group with privileged generator 1;
as such it has no isomorphisms, and the map sending 1->1 is “canonical”. On the other hand, the map sending 1 to -1 is also preserved by automorphisms. Of course, what is happening in this case
is that we secretly recognize Z as a ring, and so have a psychological preference for the first map. So we call it canonical not only because it is preserved by automorphisms, but also because it
is, colloquially, the “map we obviously would think of”. Trying to pin down a “definition” of canonical is the fundamental error of analytic philosophy — the conceit that words have a Platonic
meaning — which obscures the more complicated reality of semantics.
With that in mind, I take your last question to be purely psychological, in which case, when I think of the map from the class group to binary forms, I imagine taking I = (a,b) (generators of the
Z-module I) and sending I to the quadratic form N(ax+by)/N(I). I would say it is canonical in the sense that I would guess if I pick up a random book I would guess that this is the map they would
use. But I could well imagine there is another “natural” definition — or perhaps a “canonical” map in the other direction whose composite is -1. To think that is “wrong” would be to fall into the
trap of imagining canonical to be a purely mathematical concept. Of course, when writing a paper, one should make sure when talking about the “canonical” map from X->Y that the reader has the
same map in mind as you do (e.g. when the map is canonically canonical).
2. I’ve been thinking of asking this on mathoverflow. This is probably because of Matt’s question, but maybe also because I recently got a referee report that told me half my canonicals should be
naturals and half my naturals should be canonicals.
1. Canonical=functorial: There is a canonical map from a vector space to its double dual because it prolongs to a map of functors. I’d probably say this is what natural means too. A weaker
version than functorial would be invariant under isomorphisms.
2. Canonical=definable: Something is canonical if it’s possible to write down a definition for it. So the group Z has two canonical generators, but a general infinite cyclic group has none. And R
has a canonical algebraic closure (many, in fact), but a general field presumably doesn’t. Not too interestingly, Z/pZ also has a canonical algebraic closure. (Fix your favorite ordering on the
polynomials and keep adjoining formal roots, in the usual way, of the first polynomial that doesn’t yet have a root.) I’m not really one for holding onto quotes from my old teachers, but Givental
had a good one: “What means canonical? It means I tell you how to do it.”
3. Canonical=standard: This is probably closer to some original meaning. So then R^n has a canonical basis, the one everyone always takes, and i is the canonical complex square root of -1. Also
canonical forms.
4. Canonical=something someone already dubbed canonical: This is pretty close to the previous one. For instance, probably some book somewhere states explicitly that the canonical map G–>G/N to a
quotient group is to be called the canonical map.
5. Canonical=unnamed part of some structure: A vector bundle over X is defined to be a map E–>X such that… Then if you’re talking about a vector bundle E over X, you might later refer to the
canonical map E–>X. Similarly, in a tensor category you could talk about the canonical map (A*B)*C –> A*(B*C). Such a map is part of the definition of a tensor category.
6. Canonical=induced by the evident universal property. Given sets X and Y, you have the canonical inclusion from X to the disjoint union.
That’s all I can think of for now. I think my current usage is moving more towards 4 and 5, probably because the other three have precise names I wrote above. Note that some of these work better
with the definite article, and some with the indefinite.
3. I think that the bijection between ideal classes and quadratic forms is canonical. An ideal class in a quadratic number field is a lattice in Z^2. When the quadratic field is imaginary, this
lattice gives rise to a torus with a unique conformal structure. Conformal structures of tori are defined by points in moduli space, H^2/PSL(2,Z). On the other hand, a quadratic form with
negative discriminant corresponds to a lattice point (a,b,c) in Z^3 such that the discriminant b^2-4ac is negative. The automorphisms of this quadratic form are PSL(2,Z) again, with its
3-dimensional representation (corresponding to linear change of bases of the quadratic form). Projectivizing, the point gives rise to a unique point in the hyperbolic plane, up to the action of
PSL(2,Z) (although this is the Klein model, instead of the Poincare model, of hyperbolic space). So the correspondence is natural in this geometric sense, and arising from a special isomorphism
between Lie groups (sl(2,R) and so(2,1)).
4. Melanie Matchett Wood submitted an article on Gauss composition over schemes recently that gives an interpretation of both sides as quotients of affine three-space by equivalent actions of GL(2)
x GL(1). Her construction employs a choice of equivalence, but I couldn’t tell if the discriminant adds enough rigidity to the moduli problem as stated to make the correspondence canonical (in
the “unique up to unique isomorphism” sense).
5. You’re way ahead of me, Scott, I was going to blog about Wood’s paper for this very reason. Will get to it in the next few days. Unless you want to!
Tagged algebraic number theory, canonical, class group, cnidarian, different, emerton, functoriality, joe harris, math overflow, number theory, quadratic forms
Answer - Shillings
The point of this puzzle turns on the fact that if the magic square were to be composed of whole numbers adding up 15 in all ways, the two must be placed in one of the corners.
Otherwise fractions must be used, and these are supplied in the puzzle by the employment of sixpences and half-crowns.
I give the arrangement requiring the fewest possible current English coins—fifteen.
It will be seen that the amount in each corner is a fractional one, the sum required in the total being a whole number of shillings.
Soluble Groups....
Hi guys,
How can you prove that S3, S4 & A4 are soluble, & that A5 & S5 are not?
I know that every group of order less than 60 is soluble, but how would you proove it without using that fact in these cases? :-)
I have absolutely no idea where to start! :-s (our lecturer did not explain this bit very well AT ALL! :-( )
Any of the proofs, or even some explanation about what Soluble Groups actually mean & their significance, would be VERY MUCH appreciated! :-)
Many, many thanks. x
$A_3$ and $S_3/A_3$ are both solvable because they are abelian. therefore $S_3$ is solvable. $V,$ the klein-4 group, and $A_4/V$ are both abelian and hence solvable. therefore $A_4$
is solvable. also obviously $S_4/A_4$ is abelian and hence solvable. thus $S_4$ is solvable. $A_5',$ the commutator subgroup of $A_5,$ is a normal subgroup of $A_5.$ so either $A_5'=\{1\}$
or $A_5'=A_5$ because $A_5$ is simple. but $A_5'\neq \{1 \}$ because $A_5$ is not abelian. thus $A_5'=A_5$ and hence $A_5^{(k)}=A_5\neq \{1 \},$ for all $k.$ so $A_5$ is not solvable which
implies that $S_5$ is not solvable because every subgroup of a solvable group is solvable.
Last edited by NonCommAlg; April 5th 2009 at 01:19 PM. Reason: typo!
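(Side note: a quick computational cross-check of these solvability claims, sketched with SymPy, assuming its permutation groups expose is_solvable and derived_series as I believe recent versions do; it is of course not a substitute for the argument.)

from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

for G in (SymmetricGroup(3), SymmetricGroup(4), AlternatingGroup(4),
          AlternatingGroup(5), SymmetricGroup(5)):
    print(G.order(), G.is_solvable)   # 6 True, 24 True, 12 True, 60 False, 120 False

series = AlternatingGroup(5).derived_series()
print(series[-1].order())             # 60, not 1: the derived series gets stuck at A_5 since A_5' = A_5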
As NonCommAlg said you need to know that $A_5$ is a simple group - which means it has no proper non-trivial normal subgroup (so if $N\triangleleft A_5 \implies N = \{ e \}\text{ or }N=A_5$). If
$A_5$ is simple then it certainly cannot be solvable. This is easy to see since you cannot write a subnormal series because you will not be able to form normal subgroups of $A_5$ other than the
trivial and itself, hence $\{ e \}\triangleleft A_5$ is the only subnormal series for $A_5$. Since the factor group $A_5/\{e\}\simeq A_5$ is not abelian it means the group cannot be solvable. This
means $S_5$ (in fact $S_n$ for all $n\geq 5$) cannot be solvable because $A_5$ is a subgroup, and a subgroup of a solvable group is solvable. Thus, all we need to show is that $A_5$ is simple.
Here is one method with orbits and stablizers. We will derive the class equation for $A_5$. First, we will work with $S_5$ to help figure out the conjugacy classes for $A_5$. For $S_5$ there are
7 conjugacy classes. These consist of: 1-cycle, 2-cycles, 3-cycles, 4-cycles, 5-cycles, (3-cycles)(2-cycles),(2-cycles)(2-cycles). The number of 1-cycles is just 1. The number of 2-cycles is ${5\
choose 2} = 10$. The number of 3-cycles is $2!\cdot {5\choose 3} = 20$. The number of 4-cycles is $3!\cdot {5\choose 4} = 30$ and the number of 5-cycles is $4! = 24$. The number of (3-cycles)
(2-cycles) is $2!\cdot{5\choose 3}=20$. The number of (2-cycles)(2-cycles) is $3{5\choose 4} = 15$. These are the conjugacy classes for $S_5$ and the number of elements in each class. What we will be
interested in is the class equation for $A_5$.
Two elements that are conjugate in $A_5$ must be conjugate in $S_5$, obviously. However, elements that are conjugate in $S_5$ are not necessarily conjugate in $A_5$. Remember a result from group
theory that the number of elements conjugate to $x\in A_5$ would be equal to the index of $C(x)$ (centralizer of $x$ in $A_5$) in $A_5$. The conjugacy classes for $A_5$ consist of: 5-cycles,
3-cycles, 1-cycle, (2-cycles)(2-cycles). We need to count the number of elements in each conjugacy class, we will use our work above to help answer this question. Let us start easy, 1-cycle, is
just the identity, so there is just one element in that conjugacy class. Let $x=(12)(34)$, the number of conjugates to this in $S_5$ is $15$, but necessary in $A_5$. We will prove that it is $15$
for $A_5$ as well. Let $[(12)(34)]_{S_5}$ be the order of the conjugacy class of $(12)(34)$ in $S_5$, therefore $15=[(12)(34)]_{S_5} = 120/|C_{S_5}(x)| \implies |C_{S_5}(x)|=8$. We notice that
(if you do the computation) $\left< (1324),(13)(24)\right> \subseteq C_{S_5}(x)$, but since $|\left<(1324),(13)(24)\right>| = 8$ it means $C_{S_5}(x) = \left<(1324),(13)(24)\right>$. Thus, $C_
{A_5}(x) = A_5\cap C_{S_5}(x) = \{ e, (12)(34),(13)(24),(14)(23)\}$. Thus, $|C_{A_5}(x)| = 4$ which means $[(x)]_{A_5} = [A_5:C_{A_5}(x)] = 15$ which means the conjugacy class of $x=(12)(34)$ in
$A_5$ is the entire conjugacy class for that element in $S_5$. Let $y=(123)$ then (all these arguments are similar) $|(123)|_{S_5} = 20$ so $|C_{S_5}(y)| = 6$, it can now be shown $C_{S_5}(y) = \
left< (123),(45)\right>$ thus $C_{A_5}(y) = A_5\cap C_{S_5}(y) = \left< (123)\right>$. This means $[y]_{A_5} = 20$ and so the conjugacy class of $(123)$ is also all of conjugates in $S_5$. Now we
get to $z=(12345)$ (the last remaining case to consider). Again (similar argument, steps excluded), $C_{S_5}(z) = \left< (12345)\right>$ and so $C_{A_5}(z) = \left< (12345)\right>$. However, this
means $[(12345)]_{A_5} = 12$. Thus, conjugacy class for $(12345)$ in $A_5$ is not all of the ones in $S_5$ (which have $24$ of them). Repeating the same argument with the remaining 12 5-cycles we
see (by the same argument as just now) that they consist of $12$ conjugate elements too. Thus, the conjugacy class equation for $A_5$ is $60 = 1 + 12 + 12 + 15 + 20$. It now follows that $A_5$ is
simple since a normal subgroup must be a union of conjugacy classes containing the identity, and no proper non-trivial divisor of $60$ can be written as $1$ plus a sum of some of the numbers $12, 12, 15, 20$.
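(The class equation can also be checked by machine, assuming a SymPy version that provides PermutationGroup.conjugacy_classes(); again just a cross-check, not part of the proof.)

from sympy.combinatorics.named_groups import AlternatingGroup

A5 = AlternatingGroup(5)
sizes = sorted(len(c) for c in A5.conjugacy_classes())
print(sizes)   # [1, 12, 12, 15, 20], so 60 = 1 + 12 + 12 + 15 + 20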
Hopewell, NJ Precalculus Tutor
Find a Hopewell, NJ Precalculus Tutor
...It is also my favorite subject to tutor. My qualifications for tutoring precalculus are based on long experience and a perfect 800 score on the SAT subject test, level 2, without a graphing
calculator. It is such a shame that English is not rigorously taught at most schools today.
23 Subjects: including precalculus, English, calculus, geometry
...I take the time to read along with my students when they are assigned books for school, so that I can talk to them about their work. I also read along with students on the books they choose for
themselves. Writing!
43 Subjects: including precalculus, English, writing, reading
...References available upon request. I have been a private and group calculus tutor for the past 4 years, with great success. I have worked in a Chemistry research lab at TCNJ, which required a
lot of application of the concepts of Chemistry. I have tutored 3 students in the past for the SAT, all of whom were happy with the results.
26 Subjects: including precalculus, chemistry, physics, calculus
...I am also familiar with the sciences and can tutor elementary school and middle school science. I tutored students in elementary through high school science, Algebra 1, Algebra 2, Trigonometry,
Geometry and Pre-calculus throughout college and have an excellent success rate. I easily build rapport with my students and make them feel comfortable with the subject matter.
16 Subjects: including precalculus, calculus, physics, geometry
...As such, I definitively have the knowledge base with which to engage students of all different ability levels. As part of my High School's National Honors Society, I have experience tutoring
High School freshman, sophomores, and juniors in biology, chemistry, and economics, and I regularly expla...
19 Subjects: including precalculus, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/Hopewell_NJ_Precalculus_tutors.php","timestamp":"2014-04-18T14:10:35Z","content_type":null,"content_length":"24343","record_id":"<urn:uuid:504ad898-51b5-4dde-abba-32ef1e15df77>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex Variables
Date: 03/25/2003 at 05:35:21
From: Kenny Wong
Subject: Complex root for equations of trigonometry
Is there any complex root for an equation like sin(x)=3/2?
What does a^i= ? where a is a real constant.
Date: 03/25/2003 at 11:02:30
From: Doctor Jerry
Subject: Re: Complex root for equations of trigonometry
Hi Kenny,
If I enter 1.5 on my calculator (HP48GX) and press the ASIN button
(arcsin), I find
1.57079632679 - i*0.962423650119.
I'll try to explain this.
We must start with something - maybe e^{i*t} = cos(t) + i*sin(t),
where t is real.
From this we see that
e^{i*t} = cos(t) + i*sin(t)
e^{-i*t} = cos(-t) + i*sin(-t) = cos(t) - i*sin(t)
e^{i*t} - e^{-i*t} = 2i*sin(t)
sin(t) = [e^{i*t} - e^{-i*t}]/(2i)
Let's set
3/2=1.5=sin(t) = [e^{i*t} - e^{-i*t}]/(2i)
Multiply both sides by 2i*e^{i*t}:
3ie^{i*t} = e^{2i*t} - 1
Now let w=e^{i*t}. So,
3i*w = w^2 - 1 or
w^2 -(3i)w-1=0.
Solving this by the quadratic formula (and just looking at one root)
w = (3/2)i+i*sqrt(5)/2.
e^{i*t} = (3/2+sqrt(5)/2)i = (3/2+sqrt(5)/2)*e^{pi*i/2}
Take natural logs of both sides:
i*t = ln(3/2+sqrt(5)/2)+pi*i/2
t=-i*ln(3/2+sqrt(5)/2) + pi/2 = 1.57079632679 - i*0.962423650119.
Most of the unfamiliar ideas above are in a course called "complex
variables," sometimes taught as a junior or senior undergraduate
course in the U.S. and usually as a first year graduate course.
- Doctor Jerry, The Math Forum
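(Added note, not part of the original exchange: the result is easy to confirm numerically with Python's cmath module, whose asin function returns the same principal value.)
import cmath

t = cmath.asin(1.5)
print(t)               # roughly (1.5707963267948966-0.9624236501192069j)
print(cmath.sin(t))    # roughly (1.5+0j), so sin(t) = 3/2 as required

# closed form from the derivation above: t = pi/2 - i*ln(3/2 + sqrt(5)/2)
closed = cmath.pi / 2 - 1j * cmath.log(1.5 + 5 ** 0.5 / 2)
print(closed)          # matches t to rounding error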
|
{"url":"http://mathforum.org/library/drmath/view/62584.html","timestamp":"2014-04-19T15:24:33Z","content_type":null,"content_length":"6516","record_id":"<urn:uuid:d7d5936d-38cc-429b-97b9-b322a272a943>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Consider The DC Motor System Below: We Can Say ... | Chegg.com
DC Motor System, Speed, Torque, Voltage flux etc.
Image text transcribed for accessibility: Consider the DC motor system below. We can say that the current is given by i = (Vapp - Eag)/(R1 + Rm). The torque is proportional to the current and the magnetic flux Phi: T = k1 Phi i. The motor counter-electromotive voltage is proportional to the flux and the rotational mechanical speed Omega: Eag = k2 Phi Omega. Thereby the speed is given by Omega = (Vapp - (R1 + Rm) T/(k1 Phi))/(k2 Phi). When is the speed approximately proportional to the voltage? Can we reduce the applied voltage Vapp indefinitely to slow down the motor? What will happen when the voltage is too low? What happens when the motor starts (initially it was not moving)?
Electrical Engineering
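(Added illustration only: the question gives no numbers, so the parameter values in this Python sketch are made-up assumptions; it is not a posted answer.)
# Omega = (Vapp - (R1 + Rm) * T / (k1 * phi)) / (k2 * phi)
R1, Rm = 0.5, 1.0      # resistances in ohms (assumed)
k1, k2 = 0.8, 0.8      # motor constants (assumed)
phi = 1.0              # flux, held constant (assumed)
T = 2.0                # load torque in N*m (assumed)

def speed(Vapp):
    return (Vapp - (R1 + Rm) * T / (k1 * phi)) / (k2 * phi)

for V in (24, 12, 6, 3.75, 2):
    print(V, round(speed(V), 2))
# For large Vapp the Vapp term dominates, so speed is roughly proportional to voltage.
# Below Vapp = (R1 + Rm)*T/(k1*phi) = 3.75 V the computed speed reaches zero and then
# goes negative, i.e. the motor can no longer turn the assumed load.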
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/consider-dc-motor-system-say-current-given-vapp-eag-r1-rm-torque-proportional-current-magn-q4097969","timestamp":"2014-04-18T04:41:12Z","content_type":null,"content_length":"21383","record_id":"<urn:uuid:b6b2d225-b9c9-4180-9bd1-22800a0f05d7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kids.Net.Au - Encyclopedia > Julian date
The term Julian date has different meanings. It is sometimes confused with Julian day, which also has more than one meaning. Just as the Gregorian date is a date in the Gregorian calendar, a Julian date is a date in the Julian calendar. Some people use the term Julian date as a synonym of Julian Day or Julian Date Number. Such use makes it ambiguous, for which reason it is better to reserve the term Julian date to refer to a day in the Julian calendar.
The Julian Day (JD) or Julian Day Number is the time that has elapsed since noon January 1, 4713 BC (according to the proleptic Julian calendar; or November 24, 4714 BC according to the proleptic
Gregorian calendar), expressed in days and fractions of a day. The Julian day is based on the Julian period proposed by Joseph Scaliger in 1583. Note: although many references say that the "Julian"
in "Julian day" refers to Scaliger's father, Julius Scaliger, in the introduction to Book V of his "Opus de Emendatione Tempore" ("Work on the Emendation of Time") he states: "Iulianum vocauimus:
quia ad annum Iulianum dumtaxat accomodata est" which translates more or less as "We call this Julian merely because it is accommodated to the Julian year". This "Julian" in "proleptic Julian
calendar" and "Julian year" refers to Julius Caesar, who introduced the Julian calendar in 46 BC.
Given that the Julian Day Number (and modifications of it) has been widely used by astronomers, it is also called "Astronomical Julian Day (AJD)". Because the starting point is so long ago, numbers
in the Julian day can be quite large and cumbersome. For this reason, adaptations have been made by subtracting a number of days from the Julian day to obtain the Modified Julian Day (MJD) or
Truncated Julian Day (TJD) (See below.)
The most well known version of the Julian Day is perhaps the Chronological Julian Day (CJD), a modification of the Astronomical Julian Day, in which the starting point is set at midnight January 1,
4713 BC (Julian calendar) rather than noon. Chronographers found the Julian Day concept useful, but they didn't like noon as starting time. So CJD = AJD + 0.5. Note that AJD uses Coordinated
Universal Time (UTC), and so it is the same for all time zones and is independent of Daylight-Saving Times (DST). On the other hand, CJD is not, so it changes with different time zones and takes into
account the different local DSTs.
To make numbers more convenient, a more recent starting point is sometimes used. For example, the Lilian Day number (LD) counts from October 14, 1582 C.E. in the Gregorian Calendar, which is the date
before the day on which the Gregorian calendar was adopted. Where CJD is the Chronological Julian day number: LD = CJD - 2,299,160 = AJD - 2,299,159.5
The Julian date system was intended to provide a single system of dates that could be used to unify different historical chronologies. Its epoch was fixed by Scaliger to a time that he believed
pre-dated all known historical dates.
In his book Outlines of Astronomy, published in 1849, the astronomer John Herschel recommended that a version of Scaliger's scheme should be used to make a standard system of time for astronomy. This
has now become the standard system of Julian dates. Julian dates are typically used by astronomers to calculate astronomical events, and eliminate the complications resulting from using standard
calendar periods. There are two particular advantages: first, starting so far back in time allows historical observations to be handled easily (when studying ancient records of, eg, eclipses);
second, because Julian days begin at noon a single night of astronomical observation will fall within the same Julian day.
The Modified Julian Date, invented by space scientists in the 1950s, is defined in terms of the Julian Date as follows:
MJD = AJD - 2400000.5
The offset of 0.5 means that MJDs start midnight of November 17th, 1858 CE.
Modified Julian Days are always based on the Universal Time system, not local time.
The Truncated Julian Day (TJD) is obtained by subtracting 2,440,000.5 from the AJD.
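(Added illustration, not part of the original article: the offsets above are easy to check in Python via the proleptic Gregorian ordinal; the helper names are just for this sketch.)
from datetime import date

MJD_EPOCH = date(1858, 11, 17).toordinal()   # MJD 0 is 1858-11-17 00:00 UT

def mjd(d):
    # Modified Julian Day number at 00:00 UT on the given Gregorian date
    return d.toordinal() - MJD_EPOCH

def ajd_at_midnight(d):
    # Astronomical Julian Date at 00:00 UT (noon that day is 0.5 higher)
    return mjd(d) + 2400000.5

print(mjd(date(2000, 1, 1)))              # 51544
print(ajd_at_midnight(date(2000, 1, 1)))  # 2451544.5, so noon is JD 2451545.0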
See also: time, time scales, era, epoch, epoch (astronomy)
• Gordon Moyer, "The Origin of the Julian Day System," Sky and Telescope, vol. 61, pp. 311-313 (April 1981).
• Explanatory Supplement to the Astronomical Almanac, edited by P. Kenneth Seidelmann, published by University Science Books (August 1992), ISBN 0935702687
|
{"url":"http://encyclopedia.kids.net.au/page/ju/Julian_date?title=4713_BC","timestamp":"2014-04-19T09:45:12Z","content_type":null,"content_length":"20090","record_id":"<urn:uuid:329cbb0a-72d4-46e4-aa37-b23f052f1508>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which Function To Use - Part 1
Excel sports a large number of worksheet functions that can be used to slice and dice data. One of the qualities that makes Excel so fascinating is the endless variety that these functions can be
combined into formulas producing powerful, surprising, even elegant solutions. There is much to learn here and the user is easily bewildered, wondering which function or combination to work with to
produce the desired results.
Just knowing where to start in a given situation requires experience. There's a science to it, but truly inspired solutions are as much art as science. Think of this series of articles as a treasure
map, as I will attempt to lead you to the most productive paths.
Before we delve in, I need to backup. We must understand what a range is and how it is specified. A range is a group of one or more cells. The range is NOT the values in those cells. This distinction
is important. Some functions (and hence formulas) return values. Some return ranges. Still others can return either in varying circumstances. The value is just one property of a cell. Other
properties include color, type of border, font, etc. Typically just the value property is directly accessed from a formula, but many of these other properties can be manipulated via formulas when
using Conditional Formatting. All range properties can be accessed when using VBA.
Ranges can be specified by either R1C1 notation or A1 notation. R1C1 is no longer common and requires an option setting. This article will discuss A1 notation exclusively.
Excel recognizes three reference operators. These three operators combine input ranges and produce an output range.
The first is the Range Operator, which produces a minimum rectangle around the inputs. The Range Operator is the COLON:
You've seen this in action a million times, I'm sure, but you may have never realized what it means. The colon specifies the range defined by the minimum rectangle circumscribing the input ranges.
The input ranges can be specified by any method that describes a range, including defined names, functions, and formulas:
In each case the Range Operator returns the range that circumscribes the input ranges with the tightest rectangle. At first glance, the first example seems redundant, and in fact it can be, but by substituting formulas for the input ranges, novel ways of controlling other formulas begin to present themselves.
There are two other reference operators and you may not even know about them. These are the Union and Intersect operators. The Union Operator is the COMMA. The Intersect Operator is the SPACE
character (space bar).
While the Range Operator always returns a rectangular range, these two other operators can return ranges of any shape, contiguous or not, as long as all of the input ranges are on the same worksheet:
(A3:E3 C1:C5)
(CurRow CurCol)
The first of these five examples returns a range in the shape of a diagonal. The second returns a range in the shape of a plus sign. The third returns a range composed solely of C3. The fourth
returns a range of four disconnected cells in the shape of the vertices of a square. The last returns the range where the named formulas intersect. This last example is probably the most overlooked
lookup method in Excel and can be used to conveniently return information from a list or a table of data. Again, the caveat is that all of the input ranges must be on one worksheet.
Another note is in order for the Union and Intersect Range Operators. In math, a union of two or more sets will only include an element once, but the Excel Union Range Operator does not work this way
=SUM( (A2:B2,B2:C2) )
The above formula uses the Union Range Operator. It should be the equivalent of:
=SUM(A2:C2)
But it is not. Whatever is in B2, gets summed twice in this example. Also, notice the parentheses around the compound range. This is good practice since functions that require multiple input
parameters use commas to separate those inputs. The parentheses ensure that Excel is not confused by ambiguous input.
So the Union Range Operator joins input ranges but is not a true Set Union Operator. On the other hand, the Intersect Range Operator is a true Set Intersect Operator, returning only elements (cells)
in common.
There's at least one more twist to range referencing that deserves mention. Ranges can also be 3-D, for example:
Jan:Dec!A14:Z14
Notice that the Range Operator is used twice in this 3-D range example. This returns a range composed of A14:Z14 for twelve worksheets. This could be useful for adding monthly sheets, for example.
The other two reference operators do not work on 3-D ranges. Here is an
article from Microsoft explaining 3-D ranges
in greater detail and specifying which functions work with them. There's actually more functions than listed there, but that is a topic for another day.
Now that we know what a range is, we need to realize that the roster of functions that we can use to slice and dice our data, use these ranges as inputs. This is most commonly done with rectangular
ranges, but the others can produce interesting results.
There is a lot to go over here and so I am going to break it up into a series of articles. To begin with, I thought I would share a video presentation I made a couple of years ago on the VLOOKUP
function, long before I began this blog. I had the idea for the blog at the time, but really did not know the direction I would take with it. Here is the VLOOKUP video:
It's a long video; 27 minutes on a simple function! Probably too long. But I'd like to hear your feedback. I've always wanted to host a masters class in Excel, and after seeing the popularity of
Chandoo's foundational
Excel School
, I started thinking of this once again. The production is not as polished as I would like, but hey it was a couple of years ago when I was just learning how to make videos.
The idea behind the masters series is to explore each function in detail and then see how they can be combined into elegant formulas. Of course it would have to have advanced charting, excelhero
style. So what do you think? Is this something I should invest my time in?
17 Comments
I would love this kind of in-depth exploration of Excel, especially when it comes to combining functions! I know I have barely scratched the surface of so much that is available in Excel.
I cringe when looking at training opportunities at how basic even the so-called "advanced" Excel courses are.
How can I watch the video....?
Definitely Yes, You should invest some time as you have a great way of explaining things in excel. I am interested in joining such class and learn excel
Thank you very much
Hi Daniel,
Great video tutorial very well explained and presented can't wait until the next one.
Thank you for sharing your expertise with us
I come from China,27 years old.
Could you introduce yourself in detail?such as age,educationg,experience?
I want to know How could you do such beautiful work!
Daniel...The video is cut at approx 19 min....It does not download/play completely....
Neat article Daniel.
Although I know of and use the Intersect Operator (Space), I generally don't like it for general use as the intersect operator.
It is hard to spot and read in general use and people who aren't aware of it miss it.
I enjoyed watching the video and would love to see more. It would be great to see you in action in Excel as well.
About the problem you mentioned at the end with using a compound-index column; if you use an Excel 2007/2010 table the formulas you add will automatically fill down as the list grows.
Wow, Awesome post - Which Function To Use - Part 1 - Excel Hero Blog was a wonderful read. I'm such a newbie when it comes to all this, Thanks for this!
Daniel.. Great post looking forward to the next part.
Of interest the VLOOKUP function can return more than one result if used with as an Array formula.
'={VLOOKUP(A2,'Mar 10 Data'!A:G,{3,5,7},0)}
But I guess it is important to know whom the video is targeted at.
Enjoy you blogs very much, thanks for your inspiring work
The master series sounds great to me.
Union operator wow.
The match function can be used for multi criteria lookups
Hi Daniel,
I really like the way you presented ranges topic here. The video could have a quicker pace though and considering your audience I think you could move over the basics quickly and focus on those
little tricks that we all love and that make the difference:)
As for your last question - personally, I'd love to see more of your lessons. You're the creative guy that we all want to steal a little bit from:)
@ Alan,
The & operator is slow
Use this Array entered
or non array entered
Condition1 = Typically A1:A10 = "Something"
Great work, once again. From my perspective, and apparently from most others who comment on your site, I would love to see more videos. Keep up the stellar work!
Daniel...The video is cut at approx 9 min....It does not download/play completely....
@Lubo -
Sorry that you had problems with the video. Not sure what it could be because it is working for me. I uploaded it again, so maybe you will have better luck now.
Daniel Ferry
3D references aren't ranges, strictly speaking. For example,
returns #VALUE!. OTOH, try entering the following as array formulas in a 4-row by 1-column range.
However, the following returns a #REF! error.
|
{"url":"http://www.excelhero.com/blog/2010/06/which-function-to-use---part-1.html","timestamp":"2014-04-18T03:06:52Z","content_type":null,"content_length":"57965","record_id":"<urn:uuid:3dab3154-055f-4f9d-a370-8beee86c95c5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
About This Tool
The online Scientific Notation Calculator is used to perform addition, subtraction, multiplication and division calculations in scientific notation.
Scientific Notation
Scientific notation (also called standard form or exponential notation) is a way of writing numbers that accommodates values too large or small to be conveniently written in standard decimal notation.
In scientific notation all numbers are written like this:
a × 10^b
where the exponent b is an integer, and the coefficient a is any real number (called the significand or mantissa).
E Notation
E notation is a type of scientific notation in which the phrase "times 10 to the power of" is replaced by the letter E or e; for example, 1.2 × 10^5 is written as 1.2E+5 or 1.2e5 and 7.4 × 10^-8 is
written as 7.4E-8.
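(Added example, not part of the tool description: Python's float literals and format specifiers use the same E notation, which makes the convention easy to try out.)
x = 1.2e5                # 1.2 x 10^5
y = 7.4e-8               # 7.4 x 10^-8
print(x * y)             # roughly 0.00888
print(f"{x * y:.2e}")    # formatted back into E notation: 8.88e-03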
|
{"url":"http://www.miniwebtool.com/scientific-notation-calculator/","timestamp":"2014-04-18T08:30:32Z","content_type":null,"content_length":"5878","record_id":"<urn:uuid:07d91a28-0eba-4ef6-89ab-4ef5c543f8cc>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Constructivism (1997)
Communicating Mathematics Using Heterogeneous Language and Reasoning
John Pais
1. Modeling the Learner's Learning
Mathematics is the science of patterns. Richard Feynman described his own compelling scientific curiosity as simply his desire to "find out about the world," to discover and understand the patterns
that exist in nature. In a profound insight describing how humans accomplish this task, Jean Piaget observed that "intelligence organizes the world by organizing itself." Precisely how humans
accomplish this self-organization is still essentially unknown, but nevertheless has important consequences for structuring teaching-learning environments in which successful learning occurs.
Recently, some leading mathematics education researchers (see e.g. [3] and [6]) have focused their efforts on formulating, using, and testing models of cognitive processes that humans might use to
successfully construct, in Piaget's sense of self-organization, their own mathematical concepts. These researchers emphasize a framework in which the learner's internal accommodation process has the
primary role in the construction of knowledge. In [3] p. 7, they describe "the essential ingredients" of their perspective as follows:
An individual's mathematical knowledge is her or his tendency to respond to perceived mathematical problem situations by reflecting on problems and their solutions in a social context and by
constructing or reconstructing mathematical actions, processes and objects and organizing these in schemas to use in dealing with the situations.
In contrast, other leading researchers (see e.g. [1] and [2]) employ an information-processing perspective in which both assimilation and accommodation processes play key roles in the learner's
construction of knowledge. These researchers model the learner as an information-processing system that creates a knowlege representation by using both external instruction to assimilate information,
and internal accomodation to structure and organize information. In [2] p. 14, they summarize essential aspects of their theoretical framework regarding how knowledge may be represented symbolically
by the learner, and assert that the evidence (provided by current research in cognitive psychology) indicates the following:
Symbols are much more than formal expressions.
Any kind of pattern that can be stored and can refer to some other pattern, say, one in the external world, is a symbol, capable of being processed by an information-processing system.
Cognitive competence (in this case mathematical competence) depends on the availability of symbolic structures (e.g., mental patterns or mental images) that are created in response to experience.
Cognitive theories postulate (and provide evidence for) complex processes for transforming (assimilating and accommodating) these external representations to produce internal structures that are
quite different from the external representations.
Today, instruction is based in large part on 'folk psychology.' To go beyond these traditional techniques, we must continue to build a theory of the ways in which knowledge is represented internally,
and the ways in which such internal representations are acquired.
2. Communicating Mathematics
The constructivist framework for mathematics education makes prominent the notion that each learner must actively construct her/his own mathematical concepts and that, ultimately, mathematical
knowledge consists in the learner's individual ability to do mathematics in a given context, by purposefully re-constructing useful mathematical concepts and tools appropriate to the given context.
This view of learning entails that successful instructional strategies will de-emphasize the teacher-as-lecturer component of instruction and, instead, create an approach in which the primary focus
is on each learner's construction of her/his own mathematical concepts. In fact, (radical) constructivism, as espoused by Ernst von Glasersfeld, seriously questions whether there can be any shared
concepts and meanings at all between teachers and learners. Consider the following from [7], pp. 141-143:
...A piece of language directs the receiver to build up a conceptual structure, but there is no direct transmission of the meaning the speaker or writer intended. The only building blocks available
to the interpreter are his or her own subjective conceptualizations and re-presentations.
...In communication, the result of interpretation survives and is taken as the meaning, if it makes sense in the conceptual environment which the interpreter derives from the given words and the
situational context in which they are now encountered.
...To put it simply, to 'understand' what someone has said or written implies no less but also no more than to have built up a conceptual structure from an exchange of language, and, in the given
context, this structure is deemed to be compatible with what the speaker appears to have had in mind.
...If, however, the participants adopt a constructivist view and begin by assuming that a speaker's meanings cannot be anything but subjective constructs, a productive accommodation and adaptation
can mostly be reached.
Then, according to von Glasersfeld, communicating mathematics effectively is a difficult task due not only to its conceptual complexity and abstract nature, but also due to the intrinsic limitations
of the mechanism of human communication. This state of affairs probably is no great shock to the experienced mathematics instructor, however it does provide explanation for any generated
re-presentations (or memories) of situations in which there may have been a perplexing discrepancy between the constructions intended by the instructor and those actually created by the learner.
In a thoughtful and thought-provoking article [11], William Thurston identifies this problem of communicating mathematics as crucial, and challenges mathematicians to address it by first asking (p.
How do mathematicians advance human understanding of mathematics?
and then answering partially that (p. 163):
We are not trying to meet some abstract production quota of definitions, theorems and proofs. The measure of our success is whether what we do enables people to understand and think more clearly and
effectively about mathematics.
Thurston specifically identifies the ineffectiveness of standard instruction in mathematics classrooms as a communication problem, and suggests, since humans are accustomed to using all their mental
facilities to understand and communicate, that a greater emphasis be placed on fostering wider, one-on-one channels of communication that operate efficiently in both directions (pp. 166-167, with a
slightly different order below):
...in classrooms, ...we go through the motions of saying for the record what we think the students 'ought' to learn, while the students are trying to grapple with the more fundamental issues of
learning our language and guessing at our mental models. ...We assume the problem is with the students rather than with communication...
Mathematics in some sense has a common language: a language of symbols, technical definitions, computations, and logic. This language efficiently conveys some, but not all, modes of mathematical
...One-on-one, people use wide channels of communication that go far beyond formal mathematical language. They use gestures, they draw pictures and diagrams, they make sound effects and use body
language. Communication is more likely to be two-way, so that people can concentrate on what needs the most attention. With these channels of communication, they are in a much better position to
convey what's going on, not just in their logical and linguistic facilities, but in their other mental facilities as well.
The difficulty of achieving teacher-learner communication, which, as we have seen, follows from constructivist epistemology, is certainly supported and reinforced by Thurston's observations above. In
addition, Thurston further underscores the importance of mathematicians seriously addressing the problem of effectively communicating their language and mental infrastructure, both to students and
other mathematicians (p. 168):
We mathematicians need to put far greater effort into communicating mathematical ideas. To accomplish this, we need to pay more attention to communicating not just our definitions, theorems, and
proofs, but also our ways of thinking. We need to appreciate the value of different ways of thinking about the same mathematical structure.
We need to focus far more energy on understanding and explaining the basic mental infrastructure of mathematics... This entails developing mathematical language that is effective for the radical
purpose of conveying ideas to people who don't already know them.
3. Heterogeneous Language and Reasoning
Thurston's notion of "developing mathematical language for the radical purpose of conveying ideas to people who don't already know them" can be seen in the constructivist approach that is implemented
in [3], in which the primary medium of communication that is chosen is the computer language ISETL (pp. 15, 21, respectively):
Our main strategy for getting students to make mental constructions proposed by our theoretical analysis is to assign tasks that will require them to write and/or revise computer code using a
mathematical programming language.
It is important to note that the reason we are using ISETL here is because there are essential understandings we are trying to get students to construct as specified by our genetic decomposition [of
a concept], and we cannot do this with many other systems. For example, writing a program that constructs a function for performing a specific action in various contexts tends to get students to
interiorize that action [conception of function] to a process [conception of function].
To be continued ...
[1] John R. Anderson, Albert T. Corbett, Kenneth R. Koedinger, and Ray Pelletier. Cognitive Tutors: Lessons Learned. The Journal of Learning Sciences, 4, 167-207, 1995.
[2] John R. Anderson, Lynne M. Reder, and Herbert A. Simon. Applications and Misapplications of Cognitive Psychology to Mathematics Education, 1996 (to appear). http://sands.psy.cmu.edu/personal/ja/
[3] Mark Asiala, Anne Brown, David J. Devries, Ed Dubinsky, David Mathews, and Karen Thomas. A Framework for Research and Curriculum Development in Undergraduate Mathematics Education. In Jim Kaput
et al. (eds.), Research in Collegiate Mathematics Education II, CBMS Issues in Mathematics Education 6, 1-32, The American Mathematical Society, Providence, Rhode Island, 1996.
[4] Jon Barwise and John Etchemendy. Visual Information and Valid Reasoning. In Walter Zimmermann and Steve Cunningham (eds.), Visualization in Teaching and Learning Mathematics, 9-24, The
Mathematical Association of America, Washington, DC, 1991.
[5] Jon Barwise and John Etchemendy. Computers,Visualization, and the Nature of Reasoning, 1996 (to appear). http://csli-www.stanford.edu/hp/index.html
[6] Ed Dubinsky. Reflective Abstraction in Advanced Mathematical Thinking. In D. Tall (ed.), Advanced Mathematical Thinking, 95-123, Kluwer, Dordrecht, The Netherlands, 1991.
[7] Ernst von Glasersfeld. Radical Constructivism: A Way of Knowing and Learning. Falmer Press, New York, 1995.
[8] Saunders Mac Lane. Mathematics Form and Function. Springer-Verlag, New York, 1986.
[9] John Pais. Calculus for Kinetic Modeling. Interactive MathVision, St. Louis, MO, 1996-1998.
[10] Anna Sfard. Operational Origins of Mathematical Objects and the Quandary of Reification-- The Case of Function. In Guershon Harel and Ed Dubinsky (eds.), The Concept of Function: Aspects of
Epistemology and Pedagogy, MAA Notes 25, 59-71, The Mathematical Association of America, Washington, DC, 1992.
[11] William P. Thurston. On Proof and Progress in Mathematics. Bulletin of the American Mathematical Society 30 (2), 161-177, 1994.
|
{"url":"http://interactive-mathvision.com/PaisPortfolio/CKMPerspective/Constructivism(1998).html","timestamp":"2014-04-20T23:27:12Z","content_type":null,"content_length":"15849","record_id":"<urn:uuid:60dc0c31-fa01-4c39-93e2-3c7452d1b78e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• An algebra-based method for inferring gene regulatory networks
• Graphical models
• Algebraic and combinatorial aspects of sandpile monoids on directed graphs
• The secant conjecture in the real Schubert calculus
• Toric degenerations of Bezier patches
• Identifying causal effects with computer algebra
• Experimentation at the Frontiers of Reality in Schubert Calculus
• Parameter estimation for Boolean models of biological networks
• Some geometrical aspects of control points for toric patches
• Linear precision for parametric patches
• Computing the additive structure of indecomposable modules over Dedekind-like rings using Gröbner bases
• Sequential dynamical systems over words
• Catalog of small phylogenetic trees
• Algebraic geometry for Bayesian networks
• Algebraic statistics in model selection
• Algebraic geometry of Bayesian networks
• Bases de Gröbner asociadas a modulos finitos (Spanish)
|
{"url":"http://www.shsu.edu/~ldg005/papers.html","timestamp":"2014-04-17T01:07:17Z","content_type":null,"content_length":"21916","record_id":"<urn:uuid:54c5afe4-9b99-4994-8de1-aab253497aa2>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Baytown SAT Math Tutor
Find a Baytown SAT Math Tutor
...I can help prepare for testing and speech/debates. I am prepared to help any student who wants to succeed.I am a Texas certified teacher in the grades of 4th-8th. I am classified as a
Generalist, which means I am qualified to teach any of the core subjects of 4th or 5th grade, as well as 6th - 8th grade.
36 Subjects: including SAT math, reading, writing, ESL/ESOL
I have taught math and science as a tutor since 1989. I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact me
via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible.
35 Subjects: including SAT math, chemistry, calculus, physics
...There is nothing you learn early that will be contradicted by later lessons. My approach in working with you on algebra 1 and algebra 2 is first to assess your familiarity and comfort with
basic concepts, and explain and clarify the ones you need some improvement on; and then to work on the spec...
20 Subjects: including SAT math, writing, algebra 1, algebra 2
...Tests are boring, but working with me, you'll never be bored. I studied English and Political Science as an undergraduate at University of San Diego and English as a graduate student at Rice
University, making me well qualified to tutor English, reading, writing, AP Literature, AP Language, AP G...
22 Subjects: including SAT math, English, college counseling, ADD/ADHD
...I have worked with students of various academic capabilities to help them achieve the scores that they want. I know many SAT math strategies and various techniques to help students improve on
this test that they won't necessarily learn in the classroom. I have over two years of experience teaching ACT English in both Texas and abroad as an ACT instructor in the Middle East.
22 Subjects: including SAT math, English, reading, biology
|
{"url":"http://www.purplemath.com/Baytown_SAT_Math_tutors.php","timestamp":"2014-04-20T02:20:21Z","content_type":null,"content_length":"23931","record_id":"<urn:uuid:5a84ebbb-6b57-4a5e-8c41-0ca49a6083d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Close pairs of relative equilibria for identical point vortices
Publication: Research - peer-review › Journal article – Annual report year: 2011
title = "Close pairs of relative equilibria for identical point vortices",
publisher = "American Institute of Physics",
author = "Tobias Dirksen and Hassan Aref",
note = "© 2011 American Institute of Physics",
year = "2011",
doi = "10.1063/1.3590740",
volume = "23",
number = "5",
pages = "051706",
journal = "Physics of Fluids",
issn = "1070-6631",
TY - JOUR
T1 - Close pairs of relative equilibria for identical point vortices
A1 - Dirksen,Tobias
A1 - Aref,Hassan
AU - Dirksen,Tobias
AU - Aref,Hassan
PB - American Institute of Physics
PY - 2011
Y1 - 2011
N2 - Numerical solution of the classical problem of relative equilibria for identical point vortices on the unbounded plane reveals configurations that are very close to the analytically known,
centered, symmetrically arranged, nested equilateral triangles. New numerical solutions of this kind are found for 3n + 1 vortices, where n = 2, 3, ..., 30. A sufficient, although apparently not
necessary, condition for this phenomenon of close solutions is that the "core" of the configuration is marginally stable, as occurs for a central vortex surrounded by an equilateral triangle. The
open, regular heptagon also has this property, and new relative equilibria close to the nested, symmetrically arranged, regular heptagons have been found. The centered regular nonagon is also
marginally stable. Again, a new family of close relative equilibria has been found. The closest relative equilibrium pairs occur, however, for symmetrically nested equilateral triangles. © 2011
American Institute of Physics.
AB - Numerical solution of the classical problem of relative equilibria for identical point vortices on the unbounded plane reveals configurations that are very close to the analytically known,
centered, symmetrically arranged, nested equilateral triangles. New numerical solutions of this kind are found for 3n + 1 vortices, where n = 2, 3, ..., 30. A sufficient, although apparently not
necessary, condition for this phenomenon of close solutions is that the "core" of the configuration is marginally stable, as occurs for a central vortex surrounded by an equilateral triangle. The
open, regular heptagon also has this property, and new relative equilibria close to the nested, symmetrically arranged, regular heptagons have been found. The centered regular nonagon is also
marginally stable. Again, a new family of close relative equilibria has been found. The closest relative equilibrium pairs occur, however, for symmetrically nested equilateral triangles. © 2011
American Institute of Physics.
UR - http://www.aip.org/
U2 - 10.1063/1.3590740
DO - 10.1063/1.3590740
JO - Physics of Fluids
JF - Physics of Fluids
SN - 1070-6631
IS - 5
VL - 23
SP - 051706
ER -
|
{"url":"http://orbit.dtu.dk/en/publications/close-pairs-of-relative-equilibria-for-identical-point-vortices(72a2290f-10fc-4b74-a94f-684c95aa0dac)/export.html","timestamp":"2014-04-21T09:57:43Z","content_type":null,"content_length":"20021","record_id":"<urn:uuid:cb329e76-4ef2-4e7d-9e3f-69342a0423f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recursive types in polymorphic lambda calculus
There is a fairly standard encoding of recursive types
into polymorphic lambda calculus, given by
rec X.F[X] = all X.(F[X] -> X) -> X
where F[X] is a type in which the type variable X appears only
covariantly. Recall that every covariant type corresponds to a
covariant functor, so for every h:X->Y we have F[h]:F[X]->F[Y].
If we abbreviate T = rec X.F[X], then the key functions on this
type are given by the polymorphic lambda terms:
fold : all X.(F[X] -> X) -> T -> X
fold = Lam X.lam k:F[X]->X.lam t:T.t{X}(k)
in : F[T] -> T
in = lam u:F[T].Lam X.lam k:F[X]->X.k(F[fold{X}(k)](u))
out : T -> F[T]
out = fold{F[T]}(F[in])
Questions: Who first had this insight? Where is a good place to find
this spelled out in the literature? Please send results to me, and I
will summarize them for the list. Cheers, -- P
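(Added untyped analogue, not part of the original message: for F[X] = 1 + A*X, i.e. lists over A, the encoding identifies an element of T = rec X.F[X] with its own fold. A quick Python sketch of that idea:)
# a list is represented by the function that folds it
nil = lambda k_nil, k_cons: k_nil
def cons(head, tail):
    return lambda k_nil, k_cons: k_cons(head, tail(k_nil, k_cons))

xs = cons(1, cons(2, cons(3, nil)))      # the list [1, 2, 3] as its fold
print(xs(0, lambda h, t: h + t))         # 6: summing is just applying the fold
print(xs([], lambda h, t: [h] + t))      # [1, 2, 3]: rebuilding an ordinary Python list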
|
{"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00138.html","timestamp":"2014-04-18T18:14:26Z","content_type":null,"content_length":"2952","record_id":"<urn:uuid:c17b4fc6-7fd9-4359-9b67-3e83606e5b3b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Measurement and Data-Processing Approach for Estimating the Spatial Statistics of Turbulence-Induced Index of Refraction Fluctuations in the Upper Atmosphere
We present a method of data reduction and analysis that has been developed for a novel experiment to measure the spatial statistics of atmospheric turbulence in the tropopause. We took measurements
of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data we estimate the index
of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters
of isotropic models for the spatial power spectrum of the turbulence. In addition to examining the parameters of the von Kármán spectrum, we have allowed the so-called power law to be a parameter in
the estimation algorithm. A maximum-likelihood-based approach is used to estimate the turbulence parameters from the measurements. Simulation results presented here show that, in the presence of the
anticipated levels of measurement noise, this approach allows turbulence parameters to be estimated with good accuracy, with the exception of the inner scale.
© 2001 Optical Society of America
OCIS Codes
(010.0010) Atmospheric and oceanic optics : Atmospheric and oceanic optics
Wade W. Brown, Michael C. Roggemann, Timothy J. Schulz, Timothy C. Havens, Jeff T. Beyer, and L. John Otten, "Measurement and Data-Processing Approach for Estimating the Spatial Statistics of
Turbulence-Induced Index of Refraction Fluctuations in the Upper Atmosphere," Appl. Opt. 40, 1863-1871 (2001)
|
{"url":"http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-40-12-1863","timestamp":"2014-04-16T16:32:17Z","content_type":null,"content_length":"122400","record_id":"<urn:uuid:69d86761-a346-48c3-8d00-2494381a0d81>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex number- Locus and finding the greatest, lowest of argument
August 21st 2006, 03:10 AM #1
I have no idea what a locus means so
The complex number u is given by u=(7+4i)/(3-2i). In the form x+yi, x=i, y=2
Sketch an Argand diagram showing the point representing the complex number u. Show on the same diagram the locus of the complex number z such that [z-u]=2. Give a thorough description would be good.
Find the greatest value of argument. Urgent.
I have no idea what a locus means so
The complex number u is given by u=(7+4i)/(3-2i). In the form x+yi, x=i, y=2
Should have $x=1,\ y=2$ for the above.
Sketch an Argand diagram showing the point representing the complex number u.
You should be able to do the above now.
Show on the same diagram the locus of the complex number z such that [z-u]=2. Give a thorough description would be good.
I presume that this last should be $|z-u|=2$, which is a circle
of radius $2$ centred at $u$.
Find the greatest value of argument.
This is now just a bit of geometry: the greatest value of the argument is $\arg u + \arcsin \frac{2}{|u|} = \arctan 2 + \arcsin \frac{2}{\sqrt{5}} = 2\arctan 2 \approx 126.87^{\circ}$ (about $2.21$ radians), attained where the ray from the origin touches the circle.
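(Added numerical check, not part of the original thread: sampling the locus in Python confirms the value.)
import cmath, math

u = (7 + 4j) / (3 - 2j)          # = 1 + 2j
samples = [cmath.phase(u + 2 * cmath.exp(1j * 2 * math.pi * k / 100000))
           for k in range(100000)]
print(max(samples))              # about 2.2143 radians, i.e. about 126.87 degrees
print(2 * math.atan(2))          # the exact value 2*arctan(2), about 2.2143 radians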
|
{"url":"http://mathhelpforum.com/algebra/5047-complex-number-locus-finding-greatest-lowest-argument.html","timestamp":"2014-04-16T09:06:02Z","content_type":null,"content_length":"35057","record_id":"<urn:uuid:855834a1-184a-4fbe-857b-3c6fa51a5f77>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Friday, March 15, 2013
So funny, I had no idea yesterday was Pi day but after reading feed posts from several sparkers, I had to look it up!
WHO KNEW!!!
Could they be referring to the new movie?
Surely not!
or perhaps the food we all love to eat!!
But, of course it was not spelled pie but pi, so it had to be
they have a whole day for a mathematical symbol?? NO! Say it ain't so! but yes, it is apparently true, at least it was listed on Wikipedia as so.
Was this just another hallmark holiday? who knows? but it did remind me of my brother and so I am dedicating Pi day to him.
Why? because one day I went to his house, there he sat in his bean bag chair with a pencil and long piece of paper in his hand. I could see it was a mathematical problem. I was not surprised since he
had always been a math whiz.
"watcha doin?" I ask
his response, "trying to find out if there really are no repeating sequences in pi!"
there was no response to such a mundane mind numbing task, I went back home! No way I could make this up!!
Here's Pi for you Hod! Enjoy!
and so I am turning green for St paddy's day!
WATCHMEGO! 3/18/2013 6:02AM DD wanted pie on pi day so we got a tiny apple pie from a grocery store so she and DH could eat pie on pi day!
BALLOUZOO 3/16/2013 11:21PM I vote for a pie day!
SHANSHE 3/16/2013 9:35AM you mean I missed it??? LOL
FITNHEALTHYKAL 3/16/2013 7:25AM It was Pi day which is always celebrated in the schools so I knew about it cuz of the Mom aspect AND drum roll cuz Karen from last round told me (she's a math teacher and a computer wiz)....a palindrome 3.14.13 (same front to back as back to front).
BONNYSPARKGIRL 3/16/2013 3:01AM Didn't miss it my side. Our local Breakfast show dedicated their whole show to Pi....and that included baking Pies
2BEHEALTHYAGAIN 3/15/2013 10:59PM And to think I missed it
Hope you have a wonderful weekend!
JERSEYGIRL24 3/15/2013 10:49PM I knew yesterday was Pi day but I didn't get too excited about it. I am definitely a geek. BTW love your background!!!
MONETRUBY 3/15/2013 10:32PM Ah yes, a day for all the math geeks, of which I am not one. I much prefer the pi with an e on the end. More calories, I know, but no math necessary!
4-1HEALTHYCYNDI 3/15/2013 9:13PM What will they think of celebrating next?! Thanks for the chuckle.
MERRYMARY42 3/15/2013 5:39PM Yes I heard it on radio yesterday, and they were celebrating the day with pot pi, and pi neapple pi, a fun thing I guess
OKBACK2ME 3/15/2013 5:38PM Yup, I missed it too!
SPECIFICITY 3/15/2013 4:41PM Hey in two years it will be even more of a Pi day 3.14.15 (3.1415)
DAWNWATERWOMAN 3/15/2013 4:37PM Wow! Who knew! Happy St Patty's day weekend to you my pi loving friend. At least it's calorie free.
|
{"url":"http://www.sparkpeople.com/mypage_public_journal_individual.asp?blog_id=5287964","timestamp":"2014-04-19T21:24:42Z","content_type":null,"content_length":"87401","record_id":"<urn:uuid:6d2eac90-779f-4234-ac31-2574bdc9c286>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Principle of Mathematical Induction
From ProofWiki
Let $P \left({n}\right)$ be a propositional function depending on $n \in \N$.
If the following statements hold:
$(1): \quad P \left({n_0}\right)$ is true for some $n_0 \in \N$ (where $n_0$ is often, but not always, zero or one)
$(2): \quad \forall k \in \N: k \ge n_0 : P \left({k}\right) \implies P \left({k+1}\right)$
then $P \left({n}\right)$ is true for all $n \ge n_0$.
This process is called proof by (mathematical) induction.
Basis for the Induction
The step that shows that the proposition is true for the first value $n_0$ is called the basis for the induction (also sometimes informally called the base case).
Induction Hypothesis
The assumption made that $P \left({k}\right)$ is true for some $k \in \N$ is the induction hypothesis (also sometimes called the inductive hypothesis).
Induction Step
The step which shows that $P \left({k}\right) \implies P \left({k+1}\right)$ is called the induction step (also sometimes called the inductive step).
Suppose that the two given conditions hold:
$(1): \quad P \left({n_0}\right)$ is true for some $n_0 \in \N$
$(2): \quad \forall k \in \N: k \ge n_0 : P \left({k}\right) \implies P \left({k+1}\right)$
Let $S = \left\{{x \in \N: P \left({x}\right)}\right\}$.
That is, the set of all $x \in \N$ for which $P \left({x}\right)$ holds.
From Subset of Set with Propositional Function, $S \subseteq \N$.
We have that $n_0 \in S$ from $(1)$.
Now suppose $k \in S$.
That is, $P \left({k}\right)$ holds.
From $(2)$ it follows that $P \left({k + 1}\right)$ holds, and so $k + 1 \in S$.
Thus we have established:
$S \subseteq \N$
$n_0 \in S$
$k \in S \implies k + 1 \in S$
From the Principle of Finite Induction it follows that $\N \setminus \N_{n_0} \subseteq S$.
That is, for every element $k$ of $\N \setminus \N_{n_0}$, it follows that $P \left({k}\right)$ holds.
But $\N \setminus \N_{n_0}$ is precisely the set of all $n \in \N$ such that $n \ge n_0$.
Hence the result.
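(Added illustration, not part of the ProofWiki page: a standard first use of the principle.)
Take $P \left({n}\right)$ to be the statement $1 + 2 + \cdots + n = \dfrac {n \left({n + 1}\right)} 2$, with $n_0 = 1$.
Basis for the induction: $P \left({1}\right)$ holds, since $1 = \dfrac {1 \cdot 2} 2$.
Induction step: assuming $P \left({k}\right)$, we have $1 + 2 + \cdots + k + \left({k + 1}\right) = \dfrac {k \left({k + 1}\right)} 2 + \left({k + 1}\right) = \dfrac {\left({k + 1}\right) \left({k + 2}\right)} 2$, which is $P \left({k + 1}\right)$.
By the Principle of Mathematical Induction, $P \left({n}\right)$ holds for all $n \ge 1$.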
Minimal Infinite Successor Set
In the language of the minimal infinite successor set, this principle can be written as:
$(1): \quad P \left({n_0}\right)$ is true for some $n_0 \in \omega$ (where $n_0$ is often, but not always, zero or one)
$(2): \quad \forall k \in \omega: k \ge n_0 : P \left({k}\right) \implies P \left({k^+}\right)$
then $P \left({n}\right)$ is true for all $n \ge n_0$.
Here, $k^+$ denotes the successor of $k$, and $\omega$ denotes the minimal infinite successor set.
Natural Numbers in Real Numbers
If the natural numbers $\N$ are considered part of the real numbers $\R$, this principle can be written as:
If $A \subseteq \N$ is an inductive set, then $A = \N$.
Also known as
This principle is sometimes referred to as the Principle of Weak Induction.
Other sources use the term First Principle of Mathematical Induction.
Also see
Note the difference between this and the Second Principle of Mathematical Induction, which can often be used when it is not possible to prove $P \left({k+1}\right)$ directly from the truth of $P \
left({k}\right)$, but when it is possible to prove $P \left({k+1}\right)$ from the assumption of the truth of $P \left({n}\right)$ for all values of $n_0$ such that $n_0 \le n \le k$.
In Equivalence of Well-Ordering Principle and Induction it is proved that this, the Second Principle of Mathematical Induction and the Well-Ordering Principle are all logically equivalent to each
|
{"url":"http://www.proofwiki.org/wiki/Principle_of_Mathematical_Induction","timestamp":"2014-04-21T10:23:05Z","content_type":null,"content_length":"39138","record_id":"<urn:uuid:6865e731-57c4-45f1-93f2-9e9c7ec8f140>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|