Next Number in a Sequence

Date: 03/13/2002 at 07:15:40
From: KJS
Subject: Next number in a sequence -- infinite answers?

Question submitted via WWW: This site rocks - I'm a computer scientist and math enthusiast. My question: I recall reading not too long ago that there is an infinite number of mathematical answers to the question "What is the next number in this sequence: [x, y, z ...]?" (Where [x, y, z ...] is any sequence of integers.) The reasoning (which I only vaguely recall) is that given any sequence, one can construct an infinite number of n-degree polynomials that satisfy the sequence, hence discern an infinite number of answers. I further recall that because this was proven, all questions of this form have been removed from the SAT. What is the proof for this?

Date: 03/13/2002 at 09:00:48
From: Doctor Rick
Subject: Re: Next number in a sequence -- infinite answers?

Hi, KJS. I don't know about the SAT, but it is true that you can pick any answer you please to such a question and justify it using a polynomial. Of course, the intended answer is generally much "simpler" or more "satisfying," but these are subjective and not well-defined mathematical principles. Here is how it works. Let's say the numbers given are a[1], a[2], ..., a[n-1], and you pick any number you wish for a[n]. We want to find a polynomial f(x) such that

  f(1) = a[1], f(2) = a[2], ..., f(n) = a[n]

We can find a polynomial of degree n-1 that will fit these conditions. For instance, a line (1st-degree polynomial) can be found that will pass through any two given points; a quadratic (2nd-degree polynomial) can be found that will pass through any three given points, etc. Here is one way to find the polynomial that fits the conditions. First fit the points (1, a[1]) and (2, a[2]) with a line:

  y = (x-1)(a[2]-a[1]) + a[1]

We'll call this linear function f_1(x):

  f_1(x) = (x-1)(a[2]-a[1]) + a[1]

Now consider the quadratic function

  f_2(x) = b[1](x-1)(x-2) + f_1(x)

The first term is zero at both points x=1 and x=2; therefore f_2(1) = a[1] and f_2(2) = a[2], regardless of the value of b[1]. Evaluate f_2(3):

  f_2(3) = 2b[1] + f_1(3)

We want this to equal a[3]; we can solve for b[1] so that it does:

  b[1] = (a[3] - f_1(3))/2

Do you see that the same process can be repeated indefinitely? Now that we have a polynomial that fits the first three points, we can add a cubic term:

  f_3(x) = b[2](x-1)(x-2)(x-3) + f_2(x)

Then we can solve for b[2] such that f_3(4) = a[4], and so on. See also this item in our Dr. Math Archives: Predicting the Next Number in a Sequence

- Doctor Rick, The Math Forum

Date: 03/13/2002 at 09:16:59
From: Doctor Peterson
Subject: Re: Next number in a sequence -- infinite answers?

Hi, KJS. This page shows the method, called finite differences, that can be used to find the polynomial of degree n that fits n+1 points in a sequence: Method of Finite Differences. You can extend the method to find polynomials of any degree greater than n as well; just choose the next term arbitrarily and find the polynomial that fits the extended sequence, and so on. This is not a new discovery, but it may have taken test-writers some time to realize its implications. Here is a sample answer I've given to a sequence puzzle of the sort you are talking about: Terms and Rules. Really, there's a lot more to this than just that you can find polynomial solutions.
There are also infinitely many other sequences that follow more complicated rules than that - after all, a sequence doesn't even have to follow a rule you can state mathematically, but can be entirely random and still be called a sequence! So it's entirely meaningless to ask, "What is THE next number in THE sequence that starts with ...?" It could be absolutely anything. What they really mean is, "What is the next number in the sequence a test writer or puzzle maker is most likely to intend by the following?" In a way it's more psychology than math: what kind of sequence would a reasonably normal person think was "nice" enough to ask about? Some of the puzzles we get of this form are very simple (linear, for example); others are extremely tricky, but once you see the trick there's no question it's the "right" answer. You're looking for a sequence that is as easy as possible to define. And if the question were stated that way, it might be a reasonable test question, particularly if you were told what category of sequence it is. But usually they work much better as challenging puzzles than as test questions.

- Doctor Peterson, The Math Forum
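For illustration, here is a short Python sketch of the construction Doctor Rick describes: pick any value at all for the next term, fit a polynomial through the given terms plus that choice (numpy's polyfit is used here as one convenient way to get an exact fit), and the polynomial "justifies" the arbitrary answer.

```python
import numpy as np

def justify_next_term(prefix, chosen_next):
    """Fit a polynomial exactly through (1, prefix[0]), ..., (n, prefix[-1])
    and (n+1, chosen_next), so any chosen continuation is 'correct'."""
    ys = list(prefix) + [chosen_next]
    xs = np.arange(1, len(ys) + 1)
    coeffs = np.polyfit(xs, ys, deg=len(ys) - 1)   # degree n fits n+1 points exactly
    return np.poly1d(coeffs)

# The sequence 1, 2, 4, 8 can be continued with any value whatsoever:
f = justify_next_term([1, 2, 4, 8], 42)
print([round(f(x)) for x in range(1, 6)])          # [1, 2, 4, 8, 42]
```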
{"url":"http://mathforum.org/library/drmath/view/56864.html","timestamp":"2014-04-20T16:54:31Z","content_type":null,"content_length":"10136","record_id":"<urn:uuid:d8539641-fa78-429f-b9b2-14518dc5574d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Sciences Seminars

Mathematical Sciences conducts seminars in special topics each semester. Contact the faculty member in charge of each section for details and to enroll in a seminar. For more information about seminars, and for dates and times, visit our Department calendar.

Spring 2014 Seminars
- Topology Seminar (Math 799-001): MW 2:00pm-3:15pm, EMS E208, Contact Boris Okun
- Student Topology Seminar (Math 799-002): MW 3:30pm-4:45pm, EMS E408, Contact Craig Guilbault
- Algebra Seminar (Math 799-003): T 2:00pm-2:50pm, EMS E408, Contact Yi Zou
- Numerical Analysis Seminar (Math 799-004): F 11:00am-11:50am, EMS E408, Contact Dexuan Xie
- Probability Seminar (Math 799-005): MW 3:30pm-4:45pm, EMS E424A, Contact Richard Stockbridge
- Applied & Computational Math Seminar (Math 799-006): W 12:30pm-1:45pm, EMS E424A, Contact Peter Hinow
- Low Dimensional Dynamics Seminar (Math 799-007): F 11:00am-11:50am, EMS W109, Contact Karen Brucks
- Statistics Seminar (Math 799-008): T 2:00pm-2:50pm, EMS E495, Contact Jugal Ghorai
{"url":"http://www4.uwm.edu/letsci/math/courses/seminars.cfm","timestamp":"2014-04-20T21:18:27Z","content_type":null,"content_length":"26276","record_id":"<urn:uuid:37ba18aa-4056-4ce9-8d96-4a7f182175d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Can you identify this line graph?

I recently took a test with a question like the one I have attached above. There were four line graphs (I've only included one) with four solutions (I've only included two). I do not remember the question. I am hoping someone can identify the problem and point me in the right direction. Things I remember:
- There were no inequalities (< >).
- Every ray had a different starting point (like 0 or 2) and extended in either direction (that is, either an arrow in the negative direction or the positive direction).
- Every starting point was filled in completely (what you would expect with an inequality that was greater than or EQUAL TO).

I did not know what to do. I assume you just plug in values for x and check to see if the graph was true or not.
{"url":"http://www.physicsforums.com/showthread.php?s=9ff9cef1016c1863e2f5b5547cf4b380&p=4658424","timestamp":"2014-04-18T15:40:30Z","content_type":null,"content_length":"24412","record_id":"<urn:uuid:e8962852-151d-44db-9f1f-e1c6cd36e2d8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Most of my publicly available code is hosted on GitHub. Early code with no version history is just hosted here. Most of these projects are basically hacks, following the YAGNI principle. All my software is under the GPLv3 license, unless otherwise noted.

- Small Python module to read and write .binvox files. The voxel data is represented as dense 3-dimensional Numpy arrays in Python (a direct if somewhat wasteful representation for sparse models) or as an array of 3D coordinates (more memory-efficient for large and sparse models).
- PyPCD (Python): Python module to read and write point clouds stored in the PCD file format, used by the Point Cloud Library (PCL).
- Cowbot software (Spin): Sensing and control for a small mobile robot. More.
- 2D P-Median solver (C++): An efficient implementation of the classical vertex substitution heuristic for the P-Median optimization problem. More.
- hdf5opencv (C++): A simple library to save OpenCV matrices as Hdf5 datasets and load Hdf5 datasets as OpenCV matrices. Hdf5 is a standard binary format for scientific data.
- Evoimage (Python / Cython): Inspired by similar projects on the web, Evoimage attempts to resynthesize arbitrary input images out of overlapping translucent rectangles. This is done heuristically with genetic algorithms or particle swarms.
- SubspaceExtract (Python): A simple Python script to extract PCA, LDA, NLDA and semi-randomized LDA bases for a set of images (probably faces, so you get the famous eigenfaces and fisherfaces). Requires Numpy and PIL (see README for more info).
- moaftme (R): MOAFTME (Mixture of Accelerated Failure Time Model Experts) is an R package for the estimation of parameters of a survival analysis model consisting in a mixture of experts, each of which is an accelerated failure time (AFT) model. Estimation is performed with an MCMC algorithm.
- KamikazeMazeEdit (C#): A C# GUI application to create and edit mazes in the format for PUC's Mobile Robotics course.
- rpbar (C++): A minimalistic taskbar for the Ratpoison window manager.
- Dratmenu (Python): A window menu for ratpoison using dmenu.
- PyHOG (Python): Python wrapper for Felzenszwalb's HOG features.
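As a rough illustration of the dense-array representation described for the .binvox module above, a usage sketch follows. The module and function names (binvox_rw, read_as_3d_array) and the attribute layout are assumptions based on the module's description, not verified against the code.

```python
import numpy as np
import binvox_rw  # assumed module name for the .binvox reader/writer described above

# Assumed API: read_as_3d_array returns an object holding a dense boolean
# Numpy array of voxel occupancy in .data and the grid size in .dims.
with open('model.binvox', 'rb') as f:
    vox = binvox_rw.read_as_3d_array(f)

occupied = np.argwhere(vox.data)   # N x 3 array of occupied voxel coordinates
print(vox.dims, occupied.shape[0], "occupied voxels")
```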
{"url":"http://dimatura.net/code.html","timestamp":"2014-04-19T17:49:28Z","content_type":null,"content_length":"7199","record_id":"<urn:uuid:495d5c73-c8cc-4450-bc27-79f25f859f84>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Centrifugal Force and Angular Velocity

I've never really understood this: what is so special about radians that you can ignore them when converting units?

You don't just ignore them; a radian is a unit of measure. A radian is defined as the arc length of a circle subtended by the central angle between 2 radii of a circle, divided by the radius of the circle, that is, rad = s/r, where s is the arc length subtended by the central angle, and r is the radius of the circle. As you should see, the radian has units of length/length, which is dimensionless. That's how you end up with Newtons as the centripetal force unit, as Cashmoney has noted.
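A small worked example of why the radian drops out: with the angular velocity in rad/s, the centripetal force F = m*omega^2*r comes out in kg*m/s^2, i.e. newtons, precisely because rad = length/length contributes no dimension. A sketch with illustrative numbers:

```python
# Centripetal force F = m * omega**2 * r
m = 2.0       # kg
omega = 3.0   # rad/s (rad = m/m, so effectively 1/s for dimensional purposes)
r = 0.5       # m

F = m * omega**2 * r   # kg * (1/s)**2 * m = kg*m/s**2 = N
print(F, "N")          # 9.0 N
```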
{"url":"http://www.physicsforums.com/showthread.php?t=278678","timestamp":"2014-04-20T11:22:02Z","content_type":null,"content_length":"29555","record_id":"<urn:uuid:32d1e9b3-8ea7-473c-89cb-011532a52ffd>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
thermal conductivity If there are no collisions, e.g. at very low temperatures, you have ballistic transport which is very rapid. I think that only so-called Umklapp scattering processes actually can reduce the heat transport and this requires the sum of the crystal momenta of the two phonons to be larger than a reciprocal lattice vector. So it is only important at relatively high energies ~ Debye energy. Whether this includes room temperature depends on the material.
{"url":"http://www.physicsforums.com/showthread.php?s=22bf0df1b084e2b4bb85a4c89b83ed8e&p=4565689","timestamp":"2014-04-18T13:55:28Z","content_type":null,"content_length":"27581","record_id":"<urn:uuid:491f72ac-5cf5-4575-8cf3-84bfb4c31d59>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of brunt
In atmospheric dynamics and oceanography, the Brunt-Väisälä frequency, or buoyancy frequency, is the frequency at which a vertically displaced parcel will oscillate within a statically stable environment. In the atmosphere, $N \equiv \sqrt{\frac{g}{\theta}\frac{d\theta}{dz}}$, where $\theta$ is potential temperature, $g$ is the local acceleration of gravity, and $z$ is geometric height. In the ocean, where salinity is important, or in fresh water lakes near freezing, where density is not a linear function of temperature, $N \equiv \sqrt{-\frac{g}{\rho}\frac{d\rho}{dz}}$, where $\rho$, the potential density, depends on both temperature and salinity. The concept derives from Newton's Second Law when applied to a fluid parcel in the presence of a background stratification (in which the density changes in the vertical). The parcel, perturbed vertically from its starting position, experiences a vertical acceleration. If the acceleration is back towards the initial position, the stratification is said to be stable and the parcel oscillates vertically. In this case, $N^2 > 0$ and the frequency of oscillation is given by N. If the acceleration is away from the initial position ($N^2 < 0$), the stratification is unstable. In this case, overturning or convection generally ensues. The Brunt-Väisälä frequency relates to internal gravity waves and provides a useful description of atmospheric and oceanic stability.
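As a rough numerical illustration of the atmospheric formula above, N can be estimated from a discrete potential temperature profile with finite differences. A sketch with made-up profile values:

```python
import numpy as np

g = 9.81                                            # m/s^2
z = np.array([0.0, 500.0, 1000.0, 1500.0])          # geometric height (m), illustrative
theta = np.array([300.0, 301.5, 303.0, 304.5])      # potential temperature (K), illustrative

dtheta_dz = np.gradient(theta, z)                   # d(theta)/dz at each level
N = np.sqrt((g / theta) * dtheta_dz)                # Brunt-Vaisala frequency (s^-1)
print(N)                                            # about 0.01 s^-1 for this stable profile
```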
{"url":"http://www.reference.com/browse/brunt","timestamp":"2014-04-16T13:27:50Z","content_type":null,"content_length":"77816","record_id":"<urn:uuid:ded8162f-1434-4fd3-b507-d44017ee1565>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: RECEIVER AND METHOD OF RECEIVING Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A receiver is arranged to detect and recover data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, which are transmitted as first and second versions to form a multiple input multiple output (MISO) system and which carry pairs of data symbols encoded to form Alamouti cells. The receiver includes a channel estimator and corrector, which is arranged to form an estimate of the data symbols, by generating an estimate of first and second channels via which the first and second versions of the OFDM symbols are received. The receiver includes a channel estimator and corrector comprising a pilot data extractor which is arranged to extract the pilot data from the OFDM symbols, a frequency dimension interpolator which is arranged to interpolate between the pilot data received from each of the OFDM symbols in the frequency domain to form sum pilot data and difference pilot data, a sum and difference decoder arranged to combine the sum and difference pilot data to form for each data symbol of the Alamouti pairs an estimate of a sample of the first channel and an estimate of the sample of the second channel, and an Alamouti decoder. The Alamouti decoder is arranged to receive the data bearing sub-carriers from the OFDM symbols and to estimate the data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channels. As such a receiver for a MISO communication system is arranged to recover an estimate of the data symbols, which have been Alamouti-type encoded using frequency dimension interpolation A receiver for detecting and recovering data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, the OFDM symbols including a plurality of data bearing sub-carriers on which data is transmitted and a plurality of pilot bearing sub-carriers on which pilot data is transmitted, each of the OFDM symbols having been transmitted as a first version of the OFDM symbols via a first channel and a second version of the OFDM symbols via a second channel in accordance with a multiple input-single output system and symbols of the data carried by the sub-carriers of the OFDM symbols are paired to form Alamouti cells, the pairs of data symbols having been encoded differently for the first and second versions of the OFDM symbols in accordance with an Alamouti-type encoding and at least some of the pilot data from the second version being formed as inverted with respect to the corresponding pilot data of the first version of the OFDM symbols, the receiver comprisinga demodulator arranged in operation to detect a signal representing the OFDM symbols, and to generate a sampled version of the OFDM symbols in the time domain,a Fourier transform processor arranged in operation to form a frequency domain version of the OFDM symbols, anda channel estimator and corrector comprisinga pilot data extractor which is arranged to extract the pilot data from the OFDM symbols,a frequency dimension interpolator which is arranged in operation to interpolate between the pilot data received from each the OFDM symbols in the frequency dimension to form sum pilot data and difference pilot data,a sum and difference decoder which is arranged in operation to combine the sum and difference pilot data to form for each pair of data symbols of the Alamouti cells an estimate of a sample of the first channel transfer function 
and an estimate of the sample of the second channel transfer function, andan Alamouti decoder which is arranged in operation to receive the data bearing sub-carriers from the OFDM symbols and to estimate the transmitted data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channel transfer functions. A receiver as claimed in claim 1, wherein the OFDM symbols are include odd OFDM symbols and even OFDM symbols and for one of the odd and even symbols the second transmitted version is arranged to include inverted pilot data providing pilot data which is inverted with respect to the pilot data in the first version, whereby one of the odd and even OFDM symbols provides sum pilot data and the other of the odd and even OFDM symbols provides difference pilot data, and the channel estimator and corrector includesa delay element which is arranged to receive the OFDM symbols and to store the pilot data from the odd or even OFDM symbols for at least one symbol, and wherein the interpolator is operable to interpolate the pilot data in the frequency dimension for a current OFDM symbol and to interpolate the pilot data in the frequency dimension for the OFDM symbol stored by the delay element to form the sum pilot data and the difference pilot data from the odd and even OFDM symbols. A receiver as claimed in claim 1, wherein the Alamouti decoder is arranged in operation to form, for each estimated data symbol of each pair forming an Alamouti pair, a combination of the sample of the first channel and the sample of the second channel for the first data symbol of the pair and the sample of the first channel and the sample of the second channel for the second data symbol of the Alamouti pair, the combination (ρ) forming a weighting factor for each symbol estimate which is representative of a state of the channel when the symbol was estimated, the weighting factor (ρ) being used as a channel state indicator for the estimate of the symbol. A receiver as claimed in claim 1, wherein the channel estimator and corrector includes a de-noiser circuit, coupled between the sum and difference decoder and the Alamouti decoder and arranged to filter the estimates of the samples of the first channel and the estimates of the samples of the second channel to form an average value of the estimates for a predetermined number of samples. A receiver as claimed in claim 4, wherein the channel estimator and corrector includes a noise filter which is arranged in operation to generate an estimate of a noise power which is present when each of the estimated data symbols are received by comparing a difference between the estimates of the samples of one of the first and second channels before the de-noiser circuit and the estimates of the samples of the first and second channel after the de-noiser circuit, and filtering the difference, the estimate of the noise power being used to determine a reliability of the accuracy of the estimated data symbol. 
A receiver as claimed in claim 1, wherein the channel estimator and corrector includes a phase offset generator and first and second multipliers, which are arranged to multiply the sum pilot symbols and the difference pilot symbols respectively by a phase offset value to centre the responses from the first and second channel estimates within a frequency bandwidth of the frequency dimension A method of detecting and recovering data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, the OFDM symbols including a plurality of data bearing sub-carriers on which data is transmitted and a plurality of pilot bearing sub-carriers on which pilot data is transmitted, each of the OFDM symbols having been transmitted as a first version of the OFDM symbol via a first channel and a second version of the OFDM symbol via a second channel to form a multiple input-single output system and symbols of the data carried by the sub-carriers of the OFDM symbols are paired to form Alamouti pairs, the pairs of data symbols having been encoded differently for the first and second versions of the OFDM symbols in accordance with a modified Alamouti encoding and at least some of the pilot data from the second version being formed as inverted with respect to the corresponding pilot data of the first version of the OFDM symbols, the method comprisingdetecting a signal representing the OFDM symbols, and to generate a sampled version of the OFDM symbols in the time domain,forming a frequency domain version of the OFDM symbols by performing a Fourier transform,extracting the pilot data from the OFDM symbols,interpolating between the pilot data received from each of the OFDM symbols in the frequency dimension to form sum pilot data and difference pilot data,combining the sum and difference pilot data to form for each of the data symbols of the Alamouti pairs an estimate of a sample of the first channel and an estimate of the sample of the second channel,receiving the data bearing sub-carriers from the OFDM symbols, andestimating the data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channels. A method as claimed in claim 7, wherein the OFDM symbols are include odd OFDM symbols and even OFDM symbols and for one of the odd and even symbols the second transmitted version is arranged to include inverted pilot data providing pilot data which is inverted with respect to the pilot data in the first version, whereby one of the odd and even OFDM symbols provides sum pilot data and the other of the odd and even OFDM symbols provides even pilot data, and the method includesreceiving the OFDM symbols at a delay element,storing the pilot data from the odd or even OFDM symbols for at least one symbol, and the interpolating includesinterpolating the pilot data in the frequency dimension for a current OFDM symbol, andinterpolating the pilot data in the frequency dimension for the OFDM symbol stored by the delay element to form the sum pilot data and the difference pilot data from the odd and even OFDM symbols. 
A method as claimed in claim 7, wherein the Alamouti decoding includesforming, for each estimated data symbol of each Alamouti pair, a combination of the sample of the first channel and the sample of the second channel for the first data symbol of the pair and the sample of the first channel and the sample of the second channel for the second data symbol of the Alamouti pair, the combination (ρ) forming a weighting factor for each symbol estimate which is representative of a state of the channel when the symbol was estimated, the weighting factor (ρ) being using as a channel state indicator for the estimate of the symbol. A method as claimed in claim 7, includingfiltering the estimates of the samples of the first channel and the estimates of the samples from the second channel, using a de-noiser circuit to form an average value of the estimates for a predetermined number of samples. A method as claimed in claim 10, the method includinggenerating an estimate of a noise power which is present when each of the estimated data symbols are received by comparing a difference between the estimates of the samples of one of the first and second channels before the de-noiser circuit and the estimates of the samples of the first and second channel after the de-noiser circuit, andfiltering the difference with a noise filter, the estimate of the noise power being used to determine a reliability of the accuracy of the estimated data symbol. A method as claimed in claim 7, includingmultiplying the sum pilot symbols and the difference pilot symbols respectively by a phase offset value to centre the response of the first and second channel estimates within a frequency bandwidth of the frequency dimension interpolator p before the interpolating. A channel estimator and corrector for an OFDM receiver, the channel estimator and corrector, comprisinga pilot data extractor which is arranged to extract the pilot data from OFDM symbols,a frequency dimension interpolator which is arranged in operation to interpolate between the pilot data received from each of the OFDM symbols in the frequency domain to form sum pilot data and difference pilot data,a sum and difference decoder which is arranged in operation to combine the sum and difference pilot data to form for each data symbol of the Alamouti pair an estimate of a sample of a first channel and an estimate of the sample of a second channel, andan Alamouti decoder which is arranged in operation to receive the data bearing sub-carriers from the OFDM symbols and to estimate the data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channels. 
A receiver for receiving data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, the receiver comprisinga demodulator arranged in operation to detect a signal representing the OFDM symbols, and to generate a sampled version of the OFDM symbols in the time domain, the OFDM symbols including a plurality of data bearing sub-carriers on which data is transmitted and a plurality of pilot bearing sub-carriers on which pilot data is transmitted, each of the OFDM symbols having been transmitted as a first version of the OFDM symbols via a first channel and a second version of the OFDM symbols via a second channel in accordance with a multiple input-single output system and symbols of the data carried by the sub-carriers of the OFDM symbols are paired to form Alamouti pairs, the pairs of data symbols having been encoded differently for the first and second versions of the OFDM symbols in accordance with an Alamouti-type encoding and at least some of the pilot data from the second version being formed as inverted with respect to the corresponding pilot data of the first version of the OFDM symbols,a Fourier transform processor arranged in operation to form a frequency domain version of the OFDM symbols, anda channel estimator and corrector comprisinga pilot data extractor which is arranged to extract the pilot data from the OFDM symbols,a frequency domain interpolator which is arranged in operation to interpolate between the pilot data received from each the OFDM symbols in the frequency dimension to form sum pilot data and difference pilot data,a sum and difference decoder which is arranged in operation to combine the sum and difference pilot data to form for each data symbol of the Alamouti pairs an estimate of a sample of the first channel and an estimate of the sample of the second channel, andan Alamouti decoder which is arranged in operation to receive the data bearing sub-carriers from the OFDM symbols and to estimate the data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channels. 
An apparatus for detecting and recovering data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, the OFDM symbols including a plurality of data bearing sub-carriers on which data is transmitted and a plurality of pilot bearing sub-carriers on which pilot data is transmitted, each of the OFDM symbols having been transmitted as a first version of the OFDM symbol via a first channel and a second version of the OFDM symbol via a second channel to form a multiple input-single output system and symbols of the data carried by the sub-carriers of the OFDM symbols are paired to form Alamouti pairs, the pairs of data symbols having been encoded differently for the first and second versions of the OFDM symbols in accordance with an Alamouti-type encoding and at least some of the pilot data from the second version being formed as inverted with respect to the corresponding pilot data of the first version of the OFDM symbols, the apparatus comprisingmeans for detecting a signal representing the OFDM symbols, and to generate a sampled version of the OFDM symbols in the time domain,means for forming a frequency domain version of the OFDM symbols by performing a Fourier transform,means for extracting the pilot data from the OFDM symbols,means for interpolating between the pilot data received from each the OFDM symbols in the frequency dimension to form sum pilot data and difference pilot data,means for combining the sum and difference pilot data to form for each data symbol of the Alamouti pairs an estimate of a sample of the first channel and an estimate of the sample of the second channel,means for receiving the data bearing sub-carriers from the OFDM symbols, andmeans for estimating the data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channels. FIELD OF INVENTION [0001] The present invention relates to receivers and methods for detecting and recovering data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, the OFDM symbols including a plurality of data bearing sub-carriers on which data is transmitted and a plurality of pilot bearing sub-carriers on which pilot data is transmitted. Moreover, each of the OFDM symbols is received after being transmitted as a first version of the OFDM symbol via a first channel and a second version of the OFDM symbol via a second channel to form a multiple input-single output (MISO) system. Furthermore symbols of the data carried by the sub-carriers of the OFDM symbols are paired to form Alamouti cells. BACKGROUND OF THE INVENTION [0002] There are many examples of radio communications systems in which data is communicated using Orthogonal Frequency Division Multiplexing (OFDM). Systems which have been arranged to operate in accordance with Digital Video Broadcasting (DVB) standards for example, utilise OFDM. OFDM can be generally described as providing K narrow band sub-carriers (where K is an integer) which are modulated in parallel, each sub-carrier communicating a modulated data symbol such as Quadrature Amplitude Modulated (QAM) symbol or Quadrature Phase-shift Keying (QPSK) symbol. The modulation of the sub-carriers is formed in the frequency domain and transformed into the time domain for transmission. Since the data symbols are communicated in parallel on the sub-carriers, the same modulated symbols may be communicated on each sub-carrier for an extended period, which can be longer than a coherence time of the radio channel. 
The sub-carriers are modulated in parallel contemporaneously, so that in combination the modulated carriers form an OFDM symbol. The OFDM symbol therefore comprises a plurality of sub-carriers each of which has been modulated contemporaneously with different modulation symbols. Multiple-Input Single-Output (MISO) is a technique which can be used in combination with Alamouti encoding to improve the robustness with which data can be received. Alamouti encoding forms pairs of data symbols, which are then differently encoded for communication via each of the channels. MISO/Alamouti encoding can therefore be used in combination to form robust and effective communication of data, and in particular when used with OFDM. However, in order to decode Alamouti encoded data cells it is necessary to form an estimate of each channel through which each version of the OFDM symbols of the MISO system have passed. To this end, it is known to include pilot data which is conveyed by sub-carriers of the OFDM symbols assigned for this purpose. For example, the DVB-T2 system has been designed to include pilot patterns to allow the system to be deployed in a MISO mode and to utilise Alamouti encoding, as disclosed in "ETSI EN 300 755, "Digital Video Broadcasting (DVB): Frame structure, channel coding and modulation for a second generation digital terrestrial television broadcasting system (DVB-T2)", May 2008. Detecting and recovering data from OFDM symbols which have been transmitted in accordance with a MISO/Alamouti encoding system represents a technical problem, in particular when the receiver is required to generate an estimate of each of the first and second channels via which the OFDM symbols are received. SUMMARY OF INVENTION [0005] According to an aspect of the present invention there is provided a receiver for detecting and recovering data from Orthogonal Frequency Division Multiplexed (OFDM) symbols. The OFDM symbols include a plurality of data bearing sub-carriers on which data is transmitted and a plurality of pilot bearing sub-carriers on which pilot data is transmitted. Each of the OFDM symbols is received after being transmitted as a first version of the OFDM symbols via a first channel and a second version of the OFDM symbols via a second channel, which thereby form a multiple input-single output (MISO) system. Furthermore the symbols of the data carried by the sub-carriers of the OFDM symbols are paired to form Alamouti cells, the pairs of data symbols having been encoded differently for the first and second versions of the OFDM symbols in accordance with an Alamouti-type encoding and at least some of the pilot data from the second version is inverted with respect to the corresponding pilot data of the first version of the OFDM symbols. The receiver comprises a demodulator arranged in operation to detect a signal representing the OFDM symbols, and to generate a sampled version of the OFDM symbols in the time domain, a Fourier transform processor arranged in operation to form a frequency domain version of the OFDM symbols, and a channel estimator and corrector. 
The channel estimator and corrector comprises a pilot data extractor which is arranged to extract the pilot data from the OFDM symbols, a frequency domain interpolator which is arranged in operation to interpolate between the pilot data received from each the OFDM symbols in the frequency domain to form sum pilot data and difference pilot data, a sum and difference decoder which is arranged in operation to combine the sum and difference pilot data to form for each pair of data symbols of the Alamouti cells an estimate of a sample of the first channel and an estimate of the sample of the second channel, and an Alamouti decoder. The Alamouti decoder is arranged to receive the data bearing sub-carriers from the OFDM symbols and to estimate the data symbols by performing Alamouti decoding using the estimates of the samples for the first and second channels. Embodiments of the present invention can find application in a communications system in which data symbols are carried by OFDM symbols in a MISO system. In a MISO system the OFDM symbols are formed into first and second versions and each version is transmitted via first and second antenna to a receiver. At the single antenna receiver the OFDM symbols are received contemporaneously and therefore appear as if combined during transmission via the different channels. To provide a facility which will allow the data symbols to be recovered, the data symbols are paired and encoded as Alamouti cells, each cell being formed differently on the first and second version of the OFDM symbols. Alamouti-type encoding is used to refer all forms of Alamouti encoding including the classical Alamouti encoding, and a modified Alamouti encoding as used for example in DVB-T2, wherein the symbol order of a pair of symbols is not reversed in the second of the two transmissions as would be required for normal Alamouti encoding. On the decoding side, classical Alamouti decoding assumes that the channel does not change between the received pairs of symbols of the Alamouti cell, whereas non-classical allows a possibility that the change has changed. In order to estimate the first and second channels through which the first and second versions of the OFDM symbols have passed, each version of the OFDM symbols include pilot data carried on pilot sub-carriers. In some of the second versions of the OFDM symbols the pilot data cells are inverted with respect to the first version. As such the pilot data is received, when combined at the receiver antenna as sum pilot data and difference pilot data depending on whether the pilot data of the second version of the OFDM symbol is non-inverted or inverted respectively. Embodiments of the present invention provide a receiver for detecting and recovering data from OFDM symbols, which have been encoded and transmitted in accordance with a MISO/Alamouti encoded system, using frequency only interpolation. As such, data can be detected more rapidly and the receiver implementation can be simplified. In one example, the OFDM symbols include odd OFDM symbols and even OFDM symbols and for one of the odd and even symbols the second transmitted version is arranged to include inverted pilot data providing pilot data which is inverted with respect to the pilot data in the first version. That is to say, the OFDM symbols may be thought of as pairs of successive OFDM symbols so that successive OFDM symbols are treated differently by the receiver. 
The first OFDM symbol of the pair maybe referred to as an even OFDM symbol and may carry the sum pilot data, because the second version of the OFDM symbol carries non-inverted pilots, so that when the even OFDM symbols are formed at the receiver the pilots sum. The second OFDM symbol of the pair of successive OFDM symbols maybe referred to as an odd OFDM symbol and may carry difference pilot data, because the second version of the OFDM symbol carries inverted pilots, so that when the odd OFDM symbols are formed at the receiver the pilots subtract. Embodiments of the present invention can provide a receiver with a channel estimator and corrector which includes a delay element which is arranged to receive the OFDM symbols and to store the pilot data from the odd or even OFDM symbol. A frequency only interpolator is arranged to interpolate the pilot data in the frequency domain for a current one of the odd and even OFDM symbols and to interpolate the pilot data in the frequency domain for the other of the odd and even OFDM symbols stored by the delay element to form the sum pilot data and the difference pilot data from the odd and even OFDM symbols. As such, frequency only interpolation is arranged to provide sum and difference pilot data for each data bearing sub-carrier of the odd and even symbols, by assuming that the channel does not change substantially between successive symbols. Alamouti decoding can then be successfully applied to recover an estimate of the data for each of the Alamouti cells conveyed by the odd and even OFDM symbols. In some examples the estimates of the channel samples for the first and second channels for each of the data symbols of the Alamouti cell are combined to generate an estimate of the channel state information (CSI) within the Alamouti decoder. This channel state information (ρ) can be used to provide a simple and yet accurate assessment of the state of the channel for use with decoding the symbols communicated in the OFDM symbol, such as for example where the data symbols have been encoded with an error correction code, in which the CSI is used as a metric of received signal quality and for turbo decoding for example. In other examples the presence of inverted and non-inverted pilots within the same OFDM symbol can be sufficient to provide an accurate estimate of the sum and difference components of the pilot data, and so no delay between successive symbols is required. For example when detecting and recovering data from a P2 symbol or a frame closing symbol according to the DVB-T2 standard, because these OFDM symbols have a sufficient density of pilot data to allow enough sum and difference pilot data to be recovered from one OFDM symbol, there is no requirement to delay one OFDM symbol with respect to a current OFDM symbol, because frequency interpolation can be performed within the OFDM symbol. Various further aspects and features of the present invention are defined in the appended claims and include a method of detecting and recovering data from OFDM symbols. BRIEF DESCRIPTION OF DRAWINGS [0015] Embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings in which like parts are identified by the same numerical designations and in which: [0016]FIG. 1 is a schematic block diagram of a DVB T2 transmitter chain which includes elements which are repeated in order to transmit OFDM symbols in accordance with a MISO/Alamouti encoding scheme; FIG. 
2 is a schematic representation of a simplified version of the transmitter shown in FIG. 1 to illustrate Alamouti encoding; [0018]FIG. 3 is a schematic block diagram illustrating a frame of OFDM symbols in accordance with the DVB T2 standard; [0019]FIG. 4 is a schematic block diagram of a receiver which is arranged to detect and recover data from OFDM symbols received from the transmitter shown in FIG. 1 which have been Alamouti encoded; [0020]FIG. 5a is a schematic representation of the encoding of the pilot data symbols for respective versions of the OFDM symbols transmitted by the transmitter shown in FIG. 1 ; whereas FIG. 5b illustrates a process of performing temporal interpolation on the pilot data symbols shown in FIG. 5a; [0021]FIG. 6 is a schematic block diagram of a channel estimator and corrector which is shown in FIG. 4 and includes an Alamouti decoder; [0022]FIG. 7 is a schematic block diagram of an Alamouti decoder which is shown in FIG. 6 FIG. 8 is a graphical illustration of four plots of signal value with respect to frequency for example signals generated when performing Alamouti decoding; [0024]FIG. 9a is a schematic representation of an example OFDM symbol pattern with sum and difference pilots; and FIG. 9b is an illustrative representation of an effect of interpolating between the pilot shown in FIG. 9a [0025]FIG. 10 is a schematic block diagram illustrating one example of the frequency only channel estimation and correction processor; [0026]FIG. 11 is a schematic block diagram of a temporal interpolation based MISO channel estimation and correction processor; FIG. 12 is a schematic block diagram of a frequency only MISO channel estimation and correction processor; FIG. 13 is a schematic block diagram of a channel estimator de-nosier circuit which appears in FIG. 12; [0029]FIG. 14 is a schematic block diagram of a noise filter for the frequency only channel estimation and correction processor shown in FIG. 12; and [0030]FIG. 15 is a frequency only MISO channel estimation and correction processor for use with P2 and frame closing symbols according to the DVB T standard. DESCRIPTION OF PREFERRED EMBODIMENTS [0031] Example embodiments of the present invention are described in the following paragraphs with reference to a receiver operating in accordance with the DVB-T2 standard, although it will be appreciated that embodiments of the present find application with other DVB standards and radio communications systems which utilise OFDM and/or MISO. DVB-T2 [2] is a second generation standard for digital video broadcasting of terrestrial TV. The main requirement addressed by this new standard is to provide higher capacity than the first generation DVB-T standard [1]. It is envisaged that this increased capacity can be used for HDTV broadcasting. DVB-T2 is a guard interval OFDM system with some differences when compared with DVB-T. Space-frequency coding is used to provide transmitter diversity (so called MISO--multiple input, single output) in T2. The space-frequency code is a modified Alamouti code applied to adjacent sub-carriers in each OFDM symbol. [0034]FIG. 1 provides an example block diagram of an OFDM transmitter which may be used for example to transmit video images and audio signals in accordance with the DVB-T, DVB-H, DVB-T2 or DVB-C2 standard. In FIG. 1 a program source generates data to be transmitted by the OFDM transmitter. 
A video coder 2, an audio coder 4 and a data coder 6 generate video, audio and other data to be transmitted which are fed to a program multiplexer 10. The output of the program multiplexer 10 forms a multiplexed stream with other information required to communicate the video, audio and other data. The multiplexer 10 provides a stream on a connecting channel 12. There may be many such multiplexed streams which are fed into different branches A, B etc. For simplicity, only branch A will be described. As shown in FIG. 1 an OFDM transmitter 20 receives the stream at a multiplexer adaptation and energy dispersal block 22. The multiplexer adaptation and energy dispersal block 22 randomises the data and feeds the appropriate data to a forward error correction encoder 24 which performs error correction encoding of the stream. A bit interleaver 26 is provided to interleave the encoded data bits, which for the example of DVB-T2 is the LDPC/BCH encoder output. The output from the bit interleaver 26 is fed to a bit to constellation mapper 28, which maps groups of bits onto a constellation point of a modulation scheme, which is to be used for conveying the encoded data bits. The outputs from the bit to constellation mapper 28 are constellation point labels that represent real and imaginary components. The constellation point labels represent data symbols formed from two or more bits depending on the modulation scheme used. These can be referred to as data cells. These data cells are passed through a time-interleaver 30 whose effect is to interleave data cells resulting from multiple LDPC code words. The data cells are received by a frame builder 32, with data cells produced by branch B etc in FIG. 1, via other channels 31. The frame builder 32 then forms many data cells into sequences to be conveyed on OFDM symbols, where an OFDM symbol comprises a number of data cells, each data cell being mapped onto one of the sub-carriers. The number of sub-carriers will depend on the mode of operation of the system, which may include one of 1k, 2k, 4k, 8k, 16k or 32k, each of which provides a different number of sub-carriers according, for example, to the following table:

Maximum number of data sub-carriers per mode
Mode    Data sub-carriers
1K      853
2K      1705
4K      3409
8K      6913
16K     13921
32K     27841

If MISO transmission is to be utilised, then the sequence of data cells to be carried in each OFDM symbol is communicated in parallel to two duplicated versions of a remaining part of the transmitter. The transmitter shown in FIG. 1 has been adapted to provide MISO mode transmission. In the MISO mode, two versions of the OFDM symbols are generated. As a result in the transmitter, after the symbol interleaver 33, certain elements are repeated. For example, as shown in FIG. 1 an OFDM symbol builder 37.1, a pilot and signalling data embedder 36.1, an OFDM modulator 38.1, a guard interval inserter 40.1, a digital to analogue converter 42.1 and a front end RF unit 44.1, as well as a transmitter antenna 46.1, are provided in a first branch, and an OFDM symbol builder 37.2, a pilot and signalling embedder 36.2 as well as an OFDM modulator 38.2, a guard interval inserter 40.2, a digital to analogue converter 42.2, an RF front end unit 44.2, and a second transmitter antenna 46.2 are provided in a second branch. The duplicated elements of the transmitter shown in FIG. 1 are arranged to form an Alamouti encoding system in accordance with a multiple input multiple output or multiple input single output mode with which the present technique finds application.
FIG. 2 provides a simplified representation of the elements shown in FIG. 1 which is used to explain Alamouti encoding in accordance with the DVB-T2 MISO mode. As shown in FIG. 2 a simplified version of the transmitter chain includes a source coding unit 1, which forms OFDM symbols using a frame builder 32 and a symbol interleaver 33. The OFDM symbol builders 37.1, 37.2 form first and second symbols S_a, S_b which form a pair within each of the respective versions of the OFDM symbol in accordance with the Alamouti encoding. Each of the symbol pairs S_a, S_b forms an Alamouti cell, and the cells are transmitted in a different order via each of the respective transmitter antennas 46.1, 46.2. Thus the MISO head end unit 50 enables a formation of the respective pairs of data symbols into a form in which they can be Alamouti encoded. Each of the pairs of data symbols is then transmitted via different paths 52, 54 before being received by a single receive antenna 56. Respective versions of the received symbol pair r_a, r_b are then processed by a receiver 60. Accordingly, the Alamouti coding of pairs of data symbols does not change the value or order of transmission of the data symbols transmitted on the first transmitter antenna 46.1 but does change the transmitted symbol values and their order for the cells transmitted on the second transmitter antenna 46.2. Thus from the first transmitter antenna 46.1 S_a is transmitted first followed by S_b, i.e. the cells maintain their value and their order, while for the second transmitter antenna 46.2 the order and value of S_a and S_b are modified. Specifically, for the second transmitter antenna 46.2 the first value to be transmitted is (-S_b*) followed by (S_a*). Therefore in the first bin of the associated pair the first transmitter antenna 46.1 transmits S_a while the second transmitter antenna 46.2 transmits (-S_b*). Then in the second bin of the pair, the first transmitter antenna 46.1 transmits S_b while the second transmitter antenna 46.2 transmits (S_a*); where * denotes the complex conjugation operation.
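A minimal illustrative sketch (in Python) of the cell pairing just described: for each Alamouti cell the first antenna transmits (S_a, S_b) unchanged while the second transmits (-S_b*, S_a*) on the same pair of sub-carriers.

```python
import numpy as np

def miso_encode_pairs(cells):
    """Map pairs of data cells onto the two transmit antennas using the
    space-frequency coding described above."""
    tx1, tx2 = [], []
    for s_a, s_b in zip(cells[0::2], cells[1::2]):
        tx1 += [s_a, s_b]                       # antenna 46.1: cells unchanged
        tx2 += [-np.conj(s_b), np.conj(s_a)]    # antenna 46.2: (-Sb*, Sa*)
    return np.array(tx1), np.array(tx2)

cells = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])   # example QAM cells
tx1, tx2 = miso_encode_pairs(cells)
```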
Frame Format [0042] For the DVB-T2 system, the number of sub-carriers per OFDM symbol can vary depending upon the number of pilot and other reserved carriers. An example illustration of a "super frame" according to the DVB-T2 standard is shown in FIG. 3. Thus, in DVB-T2, unlike in DVB-T, the number of sub-carriers for carrying data is not fixed. Broadcasters can select one of the operating modes from 1k, 2k, 4k, 8k, 16k, 32k, each providing a range of sub-carriers for data per OFDM symbol, the maximum available for each of these modes being 1024, 2048, 4096, 8192, 16384, 32768 respectively. In DVB-T2 a physical layer frame is composed of many OFDM symbols. Typically the frame starts with a preamble or P1 symbol as shown in FIG. 3, which provides signalling information relating to the configuration of the DVB-T2 deployment, including an indication of the mode. The P1 symbol is followed by one or more P2 OFDM symbols 64, which are then followed by a number of payload carrying OFDM symbols 66. The end of the physical layer frame is marked by a frame closing symbol (FCS) 68. For each operating mode, the number of sub-carriers may be different for each type of symbol. Furthermore, the number of sub-carriers may vary for each according to whether bandwidth extension is selected, whether tone reservation is enabled and according to which pilot sub-carrier pattern has been selected. As such a generalisation to a specific number of sub-carriers per OFDM symbol is difficult.

Receiver [0044] FIG. 4 provides a more detailed example illustration of the receiver 60 which may be used with the present technique. As shown in FIG. 4, an OFDM signal is received by an antenna 100 and detected by a tuner 102 and converted into digital form by an analogue-to-digital converter 104. A guard interval removal processor 106 removes the guard interval from a received OFDM symbol, before the data is recovered from the OFDM symbol using a Fast Fourier Transform (FFT) processor 108 in combination with a channel estimator and corrector 110 and an embedded-signalling decoding unit 111. The demodulated data is recovered from a de-mapper 112 and fed to a symbol de-interleaver 114, which operates to effect a reverse mapping of the received data symbols to re-generate an output data stream with the data de-interleaved. Similarly, the bit de-interleaver 116 reverses the bit interleaving performed by the bit interleaver 26. The remaining parts of the OFDM receiver shown in FIG. 4 are provided to effect error correction decoding 118 to correct errors and recover an estimate of the source data.

Alamouti Decoding Theory [0045] As explained above, MISO is a kind of transmitter diversity included in T2 by which each QAM data-carrying cell is transmitted from two transmit antennas for reception through one antenna. Prior to transmission, a relationship (the so-called space-frequency coding) is created between pairs of QAM cells using a modified Alamouti encoding [3]. In DVB-T2, the paired cells to be transmitted from the two antennas--hence the space (differentially located transmit antennas) coding--are from the same OFDM symbol but assigned to two adjacent sub-carriers--hence the frequency (different OFDM sub-carriers) coding. As shown in FIG. 2, at the output of the receiver front end 601, the received cells as illustrated occur in the order r_a and r_b and can be designated as:

$r_a = H_{1a} S_a - H_{2a} S_b^* + n_a$   (1)
$r_b = H_{1b} S_b + H_{2b} S_a^* + n_b$   (2)

where $H_{1a,b}$ and $H_{2a,b}$ are the channel transfer function (CTF) coefficients representing the first and second propagation paths 52, 54 shown in FIG. 2. The subscripts a and b relate to the CTF coefficients during reception of r_a and r_b respectively. Thus $H_{2a}$ for example is the CTF coefficient from the second transmitter antenna 46.2 to the receiver antenna 56 during reception of r_a. The terms n_a and n_b characterise the receiver noise terms during reception of r_a and r_b respectively. Define:

$H = \begin{pmatrix} H_{1a} & -H_{2a} \\ H_{2b}^* & H_{1b}^* \end{pmatrix}, \quad S = \begin{pmatrix} S_a \\ S_b^* \end{pmatrix}, \quad n = \begin{pmatrix} n_a \\ n_b^* \end{pmatrix}, \quad r = \begin{pmatrix} r_a \\ r_b^* \end{pmatrix}$   (3)

Then in matrix notation,

$r = HS + n$   (4)

With knowledge of $H_{1a,b}$ and $H_{2a,b}$, estimates of the transmitted cells $S = (S_a, S_b^*)^T$ can be derived from:

$\hat{S} = H^{-1} r = \frac{1}{\rho} \begin{pmatrix} H_{1b}^* & H_{2a} \\ -H_{2b}^* & H_{1a} \end{pmatrix} \begin{pmatrix} r_a \\ r_b^* \end{pmatrix}$   (5)

where $\rho = H_{1a} H_{1b}^* + H_{2a} H_{2b}^*$. Expanding Equation (5) and substituting for r_a and r_b from Equations (1) and (2):

$\hat{S}_a = S_a + \rho^{-1} (H_{1b}^* n_a + H_{2a} n_b^*)$   (6)
$\hat{S}_b^* = S_b^* + \rho^{-1} (H_{1a} n_b^* - H_{2b}^* n_a)$   (7)

Note that the scaling ρ is complex. From Equations (6) and (7) the demodulated values can be derived by use of the classic Alamouti approach wherein it is assumed that from each transmitter antenna, the channel coefficient stays the same during reception of the paired cells--i.e. $H_{1a} = H_{1b} = H_1$ and similarly, $H_{2a} = H_{2b} = H_2$. This makes $\rho = |H_1|^2 + |H_2|^2$ real--thereby avoiding complex division. This approach however suffers additional loss if the channel from a given transmitter actually varies even slightly in phase and/or amplitude between reception of r_a and r_b. However, it has the advantage of lower complexity in the calculation and use of ρ.
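A short illustrative sketch of the classic decoding of Equations (5) to (7), under the assumption H_1a = H_1b = H_1 and H_2a = H_2b = H_2 so that rho = |H_1|^2 + |H_2|^2 is real:

```python
import numpy as np

def alamouti_decode_pair(r_a, r_b, H1, H2):
    """Recover (S_a, S_b) from one received Alamouti pair, assuming the
    channel from each antenna is constant over the two cells."""
    rho = abs(H1)**2 + abs(H2)**2                                   # real scaling
    s_a = (np.conj(H1) * r_a + H2 * np.conj(r_b)) / rho             # from Eq. (6), noise-free part
    s_b = np.conj((-np.conj(H2) * r_a + H1 * np.conj(r_b)) / rho)   # from Eq. (7)
    return s_a, s_b

# Round trip with the pairing of FIG. 2: r_a = H1*Sa - H2*Sb*, r_b = H1*Sb + H2*Sa*
H1, H2, S_a, S_b = 0.9 + 0.1j, 0.4 - 0.3j, 1 + 1j, 1 - 1j
r_a = H1 * S_a - H2 * np.conj(S_b)
r_b = H1 * S_b + H2 * np.conj(S_a)
print(alamouti_decode_pair(r_a, r_b, H1, H2))   # approximately (1+1j, 1-1j)
```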
MISO Mode Channel Estimation [0051] As will be appreciated, in order to perform Alamouti decoding an estimate of the channel for both diversity branches is required. In the receiver shown in FIG. 4, these operations are performed by the channel estimator and corrector 110. In channel estimation the CTF coefficients $H_{1a,b}$ and $H_{2a,b}$ have to be estimated using scattered pilots (SPs) embedded in each OFDM symbol. When MISO is in use in DVB-T2 transmission, the SPs for one of the two transmitter chains are modified. This modification is done as follows: for the first transmitter antenna 46.1, SPs are transmitted as normal whilst for the second transmitter antenna 46.2 every other SP is inverted. At the receiver antenna 56 pilot cells from the two transmitter antennas combine, forming either `sum pilots` (when un-inverted pilots are transmitted from both antennas 46.1, 46.2) or `difference pilots` (when the first transmitter antenna 46.1 transmits an un-inverted pilot whilst the second transmitter 46.2 transmits an inverted pilot). This is illustrated for the PP1 pilot pattern in FIG. 5a. [0053] FIG. 5a provides a schematic representation illustrating the formation of the pilot symbols for an example pilot pattern. The dark pilot cells 150 represent `sum pilots` whilst the light cells 152 represent `difference pilots`. As explained above, the pilot data symbols on some of the scattered pilots of the DVB-T2 standard for the second version of the OFDM symbol transmitted by the second, lower branch are inverted with respect to the corresponding pilot symbol on the first, upper branch. Accordingly, when these symbols are received contemporaneously at the receive antenna 56, they will form difference scattered pilots. On the other hand, some of the scattered pilots in the second version of the OFDM symbol will not be inverted, that is to say they will be in phase with the version of the pilots on the first version of the OFDM symbol transmitted by the upper transmit antenna. Thus, as shown in FIG. 5a, so called sum pilots are shown as the dark pilot cells 150, difference pilots are shown as the light shaded cells 152, and data cells are shown as white blocks. In some receivers, channel estimates for symbols transmitted in the MISO mode can be derived as follows. Firstly, the received value at each pilot cell is divided by the relevant known value from the pilot modulating PN sequence to get an estimate of the CTF. Let scattered pilot cell 2n be a sum pilot and let SP cell 2n+1 be a difference pilot. Then, at the receiver, if it is assumed that the channel coefficients H_1 and H_2 from the first transmitter 46.1 and the second transmitter 46.2 respectively to the receiver are constant for the neighbouring SP cells 2n and 2n+1 (this assumption is only a convenience of derivation--in practice, after frequency interpolation, the CTF at the neighbouring cells will be different as necessary), and that p is the SP scaling, then the received pilot cells r are:

$r_{2n} = (H_1 + H_2) p_{2n} + e_{2n}$   (8a)
$r_{2n+1} = (H_1 - H_2) p_{2n+1} + e_{2n+1}$   (9a)

where $e_{2n}$ and $e_{2n+1}$ are noise terms during reception of the pilot cells.
Demodulate the pilot cells by dividing r_2n and r_2n+1 by p_2n and p_2n+1 respectively to get g_s and g_d--the sum and difference CTF respectively:

  g_s = H_1 + H_2 + e_2n / p_2n            (8b)
  g_d = H_1 - H_2 + e_2n+1 / p_2n+1        (9b)

Then, ignoring the noise terms, the two resulting equations in two unknowns can be solved to get H_1 and H_2. More generally, the solution can be found by executing:

  | H_1 |   =   | 1    1 |^-1  | g_s |      (10)
  | H_2 |       | 1   -1 |     | g_d |

We need to invert the Hadamard matrix in Equation (10). Note that for any Hadamard matrix D, D Dᵀ = mI, where m is the order of the matrix (= 2 in this case). Thus for Hadamard matrices D^-1 = (1/m) Dᵀ and, since the matrix in Equation (10) is symmetric, inversion is rather trivial, giving the solutions:

  H_1 = (g_s + g_d) / 2 ,    H_2 = (g_s - g_d) / 2      (11)

The execution of Equation (11) is known as sum-difference decoding.

Temporal and Frequency Interpolation MISO Mode Channel Estimation

[0060] The CTF coefficients g_s and g_d, or indeed H_1 and H_2, only exist in a grid defined by the occurrence of the scattered pilots (SPs) that are embedded into the signal for use in channel estimation. There are eight different scattered pilot (SP) patterns in DVB-T2. The choice of SP pattern is closely linked to the FFT mode and guard interval being used. This is because small guard intervals, for example, are chosen by operators whose network topology presents very short delay spreads. For such delay spreads, optimum channel estimation can still be achieved by widely spaced scattered pilots in frequency. Table 1 shows the spacing of the pilot patterns, where Dx is the frequency spacing between two SP sub-carriers and Dy is the repetition interval of SP cells on a given sub-carrier.

TABLE 1 - Scattered pilot patterns

  Pilot pattern    Dx    Dy    Cascade up-sample factors
  PP1               3     4    {2, 3}
  PP2               6     2    {2, 2, 3}
  PP3               6     4    {2, 2, 3}
  PP4              12     2    {2, 2, 2, 3}
  PP5              12     4    {2, 2, 2, 3}
  PP6              24     2    {2, 2, 2, 2, 3}
  PP7              24     4    {2, 2, 2, 2, 3}
  PP8               6    16    {2, 2, 3}

To equalise the whole OFDM symbol, CTF coefficients must be determined for all data-bearing sub-carriers in the symbol. This can be done by first interpolating the CTF coefficients in the time direction (temporal interpolation) followed by interpolation in the frequency direction (frequency interpolation). FIG. 5b shows the outcome of temporal interpolation on g_s and g_d on the PP1 pattern of FIG. 5a. Notice that a CTF coefficient now occurs every Dx = 3 sub-carriers. As shown in FIG. 5b, the sum and difference CTF coefficients resulting from the temporal interpolation (TITP) must in turn undergo frequency interpolation (FITP) separately to provide two versions of the grid of FIG. 5b--one in which all cells are filled completely with difference coefficients g_d and another in which all cells are filled with sum coefficients g_s. The two grids provide the CTF coefficients for use in Alamouti decoding. The consequence of assuming that the channel coefficient is the same for each transmitter over SP cells 2n and 2n+1 is that the minimum coherence bandwidth of any channel that can be accurately estimated using the SP pattern is now wider than it would otherwise be if the same SP pattern were used in the SISO case. Indeed, for this particular case the minimum coherence bandwidth is doubled. The effect of this is to halve the effective pilot density in the frequency dimension and consequently halve the maximum channel delay spread that a MISO receiver (with the given SP pattern) can cope with.

Implementation of Temporal and Frequency Interpolation MISO Mode Channel Estimation

[0063] FIG.
6 provides a schematic diagram of an example channel estimator and interpolator 110.1 arranged to implement the Temporal and Frequency Interpolation MISO Mode Channel Estimation discussed above. In FIG. 6 , the OFDM symbols are received on an input channel 200 and received at a pilot data extracting unit 202. The pilots' are then fed to a temporal interpolation unit 204. As explained above when performing temporal interpolation if a number of OFDM symbols can be stored then up-sampling can be performed by interpolating between the stored pilot symbols to form a sum pilot carrier value g and a difference pilot carrier value g at each location where those scattered pilots are positioned within the OFDM symbols such as that shown in FIG. 5b . The pilot carrier values are then fed to a serial to parallel converter 206, which separates the sum pilot carrier values g and the difference pilot carrier values g 206 into two parallel streams 208 to 210. The sum enable signal 212 provides an indication as to whether pilot carrier values are sum or difference. Each of the parallel sum and difference pilot values are then fed to a multiplier 214, 216 which receives a phase offset from a phase offset generator 218, which has an effect of centring the respective carrier values within a bandwidth of the frequency interpolation filter (FITP). Each of the sum and difference pilot carrier values are then received by a frequency interpolator 220 which perform frequency interpolation to generate a value of the sum and difference pilot carrier values at each location of a data cell throughout the OFDM symbol. The sum pilot carrier values g are then fed to an adder 222 and the difference pilot of carrier values are then fed to a second adder 224 but which are subtracted from the sum pilot carrier values received on a opposite branch to form after the scaling unit 226 and data carrier enabler 228 and a down sampler 230, a sample of the channel impulse response for that data symbol of a pair of data symbols encoded in accordance with the Alamouti encoding. Thus the value of the CTF response is fed via a connection channel 232 to an input of an Alamouti decoder 234. Correspondingly, on the upper branch providing sum carriers g , the first adder 222 receives from an opposite branch a value of a difference pilot g which is added by the summer 222 and after scaling via unit 236 passing through the data enable circuit 228 and the down-converting unit 230 forms a corresponding sample for the channel impulse response for the first symbol of the pair on an output channel 238. The pair of samples of the channel transfer function coefficient (CTF) are received by the Alamouti decoder which performs an Alamouti decoding operation as explained in the following section to form on an output channel 240 an estimate of the symbols S as were transmitted by the transmitter in each of the different conversions corresponding to the Alamouti encoding. As can be seen from FIG. 6 , after the time-dimension interpolation by the temporal interpolation unit 204, the sum-difference channel estimates are designated as gDx since they occur with Dx sub-carrier spacing. Furthermore, frequency interpolation by the frequency interpolator 220 is done separately on the sum sequence gs and the difference sequence g . Whilst gDx is sampled every Dx cells, the sum and difference sequences g and g are each sampled every 2Dx cells. 
The frequency interpolator 220 (undertaken after centring the channel impulse response under the interpolation filter window) therefore has to interpolate by a factor of 2Dx. This provides values of g and g every cell of the OFDM symbol. Corresponding pairs of elements of g and g then go through the sum-difference decoding of Equation (11) to provide H1 and H2 for every cell of the OFDM. The last decimation by 2 by the down-converting unit 230 would be used if classic Alamouti decoding is used. The outputs are the sequences H and H formed from H and H by taking only the coefficients from data-bearing cells. These outputs would be used in the Alamouti decoding. As will be appreciated from the above explanation each of g and g includes the CTF from both transmitters--Equations (8b) and (9b). A combined centring delay spread can therefore be used. The actual combined delay spread depends on the form of MISO used. In co-located MISO, since both transmit antennas are on the same transmitter tower, the difference in time of arrival of the signals at the receiver from the two antennas is therefore minimal. In this scenario, the first paths from both transmitters arrive approximately simultaneously. Therefore the delay spread of the combined channel is the longer of the two delay spreads. In distributed MISO, the transmit antennas may be located on different transmit towers--for example on separate SFN transmit towers. In this case, it is likely that there is a large difference between the time of arrival of the signals from the two transmitters. In the worst case, the combined delay spread is the sum of the two delay spreads. This uncertainty as to whether it is co-located or distributed MISO should not cause a problem if the filter centring, which generates the phase offset e jc uses a search algorithm for the optimum delay spread setting. Alamouti Decoder [0067] As explained above, the pair of samples of the channel impulse response are received by the Alamouti decoder 234 which performs an Alamouti decoding operation. As well as executing traditional channel correction it also reverses the association between the paired cells. [0068]FIG. 7 provides a schematic block diagram of the Alamouti decoder 234 in accordance with the present technique. The input symbols are received via an input channel 200 and fed to a data enable unit 300. Each of the samples of the channel transfer function (CTF) for the respective pair of symbols are received from the channels 238, 232 at the de-multiplexer 302, 304 respectively. The de-multiplexers 232, 234 serve to separate the pairs of channel samples corresponding to the pair of symbols of the paired data cells into respectively H , H , H , H which are first fed to a first and second multipliers 306, 308 which are multiplied together with a conjugate being applied to one of the two channel samples H , H , and each of the respective first and second multipliers 306, 308 form at the output a multiplication of the two channel samples for the pair H , H * and H , H * which are added by a first adder 310 to form on an output channel 312 and intermediate representation of the channel state ρ. The value of ρ on the output channel 312 is received at the first input of a divider circuit 314. 
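The sum-difference decoding of Equation (11) referred to above is just an element-wise add, subtract and halving across the two interpolated grids. As a rough sketch only (the array-based formulation and names are assumptions made for the illustration, not the hardware of FIG. 6):

import numpy as np

def sum_difference_decode(g_sum, g_diff):
    # Equation (11) applied per sub-carrier: recover the per-transmitter CTFs
    # from the frequency-interpolated sum and difference estimates.
    g_sum = np.asarray(g_sum)
    g_diff = np.asarray(g_diff)
    h1 = 0.5 * (g_sum + g_diff)
    h2 = 0.5 * (g_sum - g_diff)
    return h1, h2

In FIG. 6 the equivalent arithmetic is carried out by the adders 222, 224 and the associated scaling units.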
After the received symbols r(n) are passed through the data enabling block 300 they are received by the serial to parallel converter 320 which separates the pairs of symbols into upper and lower branches for processing the first symbol of the pair r and the second symbol of the pair r . The first symbol r is fed to a multiplier 322 on a first input which received on a second input a conjugate of the opposite channel sample H * which are multiplied together and fed to a first input of an adder 324. On the opposite branch the channel sample for the opposite symbol channel H is fed to a first input of a multiplier 326 which receives at a second input a conjugate of the lower symbol of the pair and forms at an output H * which is fed to a first input of an adder 228. A multiplier 330 receives on a first input a conjugate value from the channel impulse response sample for that symbol for the second version of the transmitted OFDM symbol which is conjugated and multiplied by the multiplier 330 by the received symbol from the upper branch r to form a product H * which is subtracted from the output of the multiplier 326 by the adder circuit 328, using a negative applied to the second input, to form at an output a value ρS * which is an estimate of the second symbol of the pair multiplied by a scaling factor ρ which is dependent on the channel. A further multiplier 332 receives a conjugated value of the second received symbol r * and is multiplied by a value of the channel impulse response for the first symbol position for the second transmitted OFDM symbol to form at an output which is fed to the adder 324 H *. Thus at the output of the upper adder 324 a value of ρS is formed which is an estimate of the first symbol (S ) of the pair multiplied by the factor ρ. A parallel to serial convertor 334 is used to feed the estimates of the symbols multiplied by the factor ρ to the divider unit 314 which divides the value by the value of ρ fed from the adder 310 to form at an output an estimate of the received symbols S , S The operation of the Alamouti decoder shown in FIG. 7 is explained as follows: The output sequences from H and H first need to be de-multiplexed into the sequences H ,b and H ,b respectively explained as follows in order to execute the decoding Equations (6) and (7). In T2 MISO pairing is between neighbouring data cells, so recalling that H and H are sequences that hold only the data cell CTF coefficients, this de-multiplexing can be described as follows: [0,2,4, . . . ]; H [1,3,5, . . . ]; [0,2,4, . . . ]; H [1,3,5, . . . ]; In other words all even CTF coefficients form H.sub.(12)a while all odd CTF coefficients form H.sub.(12)b. If the classic Alamouti combining approach were used (i.e. include the last decimation by 2, then take either H or H above for H and either H or H for H . If however, the variable channel combining approach is used, then the modified Alamouti decoding can be implemented as illustrated in FIG. 7 . Notice the de-multiplexors of H into H and H and of H into H and H . The rest of FIG. 7 illustrates the implementation of Equations (6) and (7). It is also assumed here that either the received data cells r(n) have been phase shifted by the filter centring rotator or that the CTF coefficients H and H have been de-rotated to remove the filter centring rotation that they encountered at the input of the frequency interpolation. A possible way to avoid these multiple rotations is to filter-centre the full symbol immediately after the FFT. 
This not only takes care of the filter centring rotation of the CTF but also the subsequent de-rotation of H_1 and H_2 or the filter centring rotation of the data cells. The computation of ρ = H_1a H_1b* + H_2a H_2b* is also illustrated. The value of |ρ| is proportional to the power of the received cell in much the same way as the CTF coefficient for a SISO OFDM cell is, and so could be used to compute the channel state information (CSI). This proportionality is illustrated in FIG. 8. The top plot 340 shows the CTF magnitude |H_1| from Tx1, whilst the next plot 342 is the CTF magnitude |H_2| from Tx2. The third plot 344 shows the vector summation |H_1 + H_2| of the two channels. In the last plot 346, this vector summation is repeated, whilst the plot for |ρ| is superposed. Apart from a slight difference in level, which can be resolved by use of appropriate scaling, there is very close correlation between the two. It is therefore possible to compute the CSI by dividing |ρ| by the noise power per data cell. This noise power can be derived as explained later below.

Noise Power Modification by Alamouti Decoder

[0074] The Alamouti decoder 234 can additionally be used to provide a noise power estimate. Equations (6) and (7) are re-arranged and reproduced below as Equations (12) and (13) respectively, where ρ = H_1a H_1b* + H_2a H_2b*. In the second part of each equation, the term in brackets on the right represents the noise. Given that n_a and n_b are the noise components at the input of the Alamouti decoder 234, it can be said that at the output of the Alamouti decoder these noise components are modified by the channel coefficients:

  ρ Ŝ_a  = ρ S_a  + (H_1b* n_a + H_2a n_b*)      (12)
  ρ Ŝ_b* = ρ S_b* + (H_1a n_b* - H_2b* n_a)      (13)

As will be appreciated, the signal power is |ρ|^2. What is the (modified) noise power? Taking Equation (12), the noise power N_a can be computed as follows:

  N_a = E[ (H_1b* n_a + H_2a n_b*) (H_1b* n_a + H_2a n_b*)* ]
      = E[ H_1b* H_1b n_a* n_a + H_1b* H_2a* n_a n_b + H_1b H_2a n_a* n_b* + H_2a* H_2a n_b* n_b ]
      = E[ |H_1b|^2 |n_a|^2 + |H_2a|^2 |n_b|^2 ]

If we make the assumption that E[|n_a|^2] = E[|n_b|^2] = σ^2, then

  N_a = σ^2 (|H_1b|^2 + |H_2a|^2) ≈ σ^2 |ρ|      (14)

if we consider that H_1a ≈ H_1b and H_2a ≈ H_2b, which they are in most cases. The CSI, which is computed as the ratio of |ρ|^2 and N_a, is thus given by:

  CSI = |ρ|^2 / (σ^2 |ρ|) = |ρ| / σ^2      (15)

where σ^2 represents the noise power estimated through other means described in [7] and in section 5.

Frequency Only Interpolation MISO Mode Channel Estimation

[0080] The MISO channel estimator (MCE) described above and shown in FIG. 6 requires that temporal interpolation is used prior to the channel estimates being computed. At the output of the temporal interpolation, CTF coefficients g are known for one sub-carrier in every Dx. However, some pilot patterns such as PP7 have a large pilot spacing, which potentially makes for large memory requirements during temporal interpolation. Even with SISO, it is envisaged that when such pilot patterns are used, temporal interpolation would not be used. For some of the pilot patterns it has been determined that only frequency-based channel estimation will be used, even in SISO. However, some pilot patterns typically require temporal interpolation to be performed. In the MISO mode this can be avoided in some cases if we devise a more efficient MISO channel estimation mode in which only frequency interpolation is used. FIG. 9a provides a schematic diagram illustrating a PP5 MISO pilot pattern for which (Dx, Dy) = (12, 4).
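Returning briefly to Equations (12) to (15), the per-cell-pair CSI computation is a one-liner once ρ and the noise variance are available. The following is only a numerical illustration of that algebra (the noise variance is assumed to come from elsewhere, as the text notes), not part of the patented receiver:

import numpy as np

def csi_per_pair(h1a, h1b, h2a, h2b, noise_var):
    # Signal power after decoding is |rho|^2; the decoder-modified noise power
    # (Equation (14)) is noise_var * (|H1b|^2 + |H2a|^2) ~= noise_var * |rho|.
    rho = h1a * np.conj(h1b) + h2a * np.conj(h2b)
    n_a = noise_var * (abs(h1b) ** 2 + abs(h2a) ** 2)
    return abs(rho) ** 2 / n_a            # Equation (15): approximately |rho| / noise_var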
As explained above, alternate OFDM symbols in the MISO mode carry only either sum or difference pilots and a complete MISO channel estimation process must use both the sum and difference pilots. However, for frequency-only channel estimation on a given symbol, we have access either to sum only or difference only pilots but not both. However, assuming that the coherence time of the channel is substantially longer than two symbol periods (this is a reasonable assumption for example roof-top reception), then it follows that the actual channel is relatively constant over more than two symbol periods. If this is the case, then the channel estimates from adjacent symbols can be interchanged without significant distortion. It would therefore be reasonable under these circumstances to combine the sum channel estimates from one symbol with the difference channel estimates from its immediate adjacent neighbour and expect very little degradation. Accordingly, FIG. 9b provides a schematic diagram of an inventive aspect of embodiments of the present invention which illustrates interpolation being performed in the frequency domain across the difference pilots and sum pilots In these embodiments it is assumed, without generalisation, that even numbered OFDM symbols always carry sum pilots while odd numbered symbols carry difference pilots. Then we can derive our sum channel estimates g from symbol 2n and derive our difference channel estimates g from symbol 2n+1. Thus after deriving g and g from Equations (8) and (9) we can apply frequency interpolation by a factor of DxDy to each in turn. This would provide up-sampled versions of each of g and g with samples for each cell of the OFDM symbol. This is illustrated in FIG. 10 , which is similar to FIG. 6 except that g and g come from adjacent OFDM symbols--not just one symbol as is the case for g FIG. 6 . The (DxDy) subscript on the g and g inputs is used to indicate the sampling rate of the inputs. The frequency interpolator here uses the same filter cascades that have been described in our co-pending UK patent application number GB0909590.2 for the given SP pattern in frequency-only SISO mode, the contents of which are herein incorporated by reference. Implementation of Frequency Only Interpolation MISO Mode Channel [0085]FIG. 10 provides an example frequency only MISO channel estimation and correction processor in accordance with the present technique. Essentially, the channel estimation and correction processor corresponds to the example shown in FIG. 6 except that no temporal interpolation is performed and odd or even OFDM symbols are stored in a delay element 406, so that by interpolating across the frequency domain sum and difference components required for the Alamouti decoding can be made available for each of the data cells within the OFDM symbols. As shown in FIG. 10 the MISO channel estimator and corrector includes a delay element 300, 406 which receives the pilot data extracted from a pilot data extractor 202 and stores the pilot data samples for a first OFDM symbol of the pair, which is then combined with a second OFDM symbol of the pair which is not delayed. 
The pilot data of the second OFDM symbol of the pair is fed via a channel 408 without going through the delay element 406, which then can be combined with the pilot carriers which have been delayed from the first OFDM symbol in order to generate the samples of the channel for the respective pairs of symbols for respective transmission paths in accordance with the Alamouti encoding decoding. For brevity only the differences between FIG. 6 FIG. 10 will be explained. In general the first data symbol can be regarded as even numbered OFDM symbol and the second data symbol can be regarded as odd numbered OFDM symbols carrying difference pilots. As shown in FIG. 10 because the delay element 406 stores the sum pilot values from the first OFDM symbol, there is no requirement for a corresponding serial to parallel converter to form the first and second branches of the channel estimation processor. Again phase offset value e jc is formed by a phase generator 208 and fed to respective multipliers 214, 216. Frequency interpolation is then performed across the entire OFDM symbol by the frequency interpolator 220 to form for the upper and lower branches for each data cell of the respective OFDM symbols a sum pilot value g and a different pilot value g . For the upper and lower branches the output from the frequency interpolator 220 is fed to a sum and difference decoder 225 which is formed by adders 222, 224 and scaling processes 236, 238. The first adder 222 of the sum and difference decoder 410 adds the sum pilot value g to the difference pilot value g from the opposite branch and after scaling by the scaling unit 236 forms samples for the first channel H . Correspondingly, the lower adder 224 subtracts the difference pilot value g from the sum pilot value g to form after the scaling unit 238 samples for the second channel H . After a data carrier enable unit 228 which selects only the CTF coefficients that coincide with data cells, the respective first and second channel samples are down sampled by a down sampling unit 330 to form for each of the respective transmission samples for the transmission channel H which are fed via respective outputs 238, 232 to Alamouti decoder 234 which generates the estimates of the transmitted symbols by performing Alamouti decoding such as for example the Alamouti decode shown in FIG. 7 . The down sampling unit 330 would be by passed if classic Alamouti decoding is not going to be used. Positioning of the Interpolator [0087] As will be appreciated with reference to FIGS. 6 and 10, the implementation of the MISO mode channel estimation (MCHC) can be considered as three separate functional blocks: the sum-difference decoding (SDD), the frequency interpolation (FITP) and the Alamouti decoding (ADEC). As discussed above, an advantage is provided by positioning the FITP before the SDD so that only one filter centring operation is needed to cover both channels. This is illustrated in FIG. 11. [0088]FIG. 11 provides a temporal interpolation based MISO channel estimation and correction processor which utilises the Alamouti decoder shown in FIG. 7 in combination with channel estimation processor shown partly in FIG. 6 . As shown in FIG. 11 temporally interpolated pilot data samples are fed to a serial to parallel convertor 206 which converts the pilot data samples respectively using a sum enable flag 212 into respective upper and lower branches providing sum pilot values gs and different pilot values g . 
The sum pilot values g_s and difference pilot values g_d have already been temporally interpolated, as shown in FIG. 5b, so that by performing frequency interpolation using the frequency interpolator 220 the sum and difference pilot values can be formed for each of the respective cells of the OFDM symbol. The sum and difference decoder 225 then generates, as before, the samples of the first and second channels via which the different versions of the OFDM symbols have passed and, using a data enabling switch 228, the respective samples of the channel for each of the data symbols are generated. If classic Alamouti decoding is to be used, then down-sampling using the down-sampling processor 230 is applied. Otherwise, the down-sampler is by-passed. The CTF samples for the data cells of the OFDM symbols for the first and second channels are then formed on output channels 238, 232. These are again fed to the Alamouti decoder 234, which also receives on a second input 420 the received data-bearing cells r(n); these are processed by the Alamouti decoder as explained above to form an estimate of the paired symbols S(n) and an estimate of the value ρ, which is used to form channel state information for the estimated symbols.

De-Noising Frequency Only MISO Channel Estimates

FIG. 12 provides a further example enhancement of a frequency only MISO channel estimator and corrector 110.4. Parts shown in FIG. 12 correspond to those shown in FIG. 10 and so only the differences will be explained. The channel estimator and corrector 110.4 shown in FIG. 12 corresponds to that shown in FIG. 10, except that further elements are included in order to recover an estimate of the noise which has affected the reception of the data symbols. As shown in FIG. 12, a channel estimate de-noiser 500 receives the respective CTF samples for the first and second channels for the upper and lower branches from the sum and difference decoder 225 before these are received by the data enabling switch 228 to form channel estimates for data symbols. Furthermore, a further adder 502 is arranged to subtract a sample of the de-noiser 500 output for the second channel H_2 from the estimate of the second channel H_2 at the input of the de-noiser 500; the result is fed to a noise filter 504 to form, on an output channel 506, an estimate of the noise which has affected the received signals. The noise estimate can be used later to assist in decoding the data symbols. According to the example provided in FIG. 12, to provide a means for producing a noise power estimate with frequency-only MISO channel estimation and correction (FO-MCHC), a channel estimate de-noising filter 500 is applied after the sum-difference decoding (SDD) to reduce the long-term noise on the channel estimates. The de-noised channel estimates are then subtracted from the original estimates to get an estimate of the noise. The noise filter 504 processes the noise estimate to provide a noise power estimate for each sub-carrier, from which the noise power at each data cell can be determined. Meanwhile the channel estimates H_1 and H_2 pass into the Alamouti decoding (ADEC) block, which performs the equalisation of the data cells r(n) to give the estimated cells s'(n) and the power of the estimated cells ρ(n), which will be divided later by the estimated noise N(n) to give the channel state information (CSI) for each data cell. There are additional requirements for implementing FO-MCHC. These arise because the described FO-MCHC equalises OFDM symbols in pairs.
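Before turning to those implementation requirements: the de-noising filter 500 and the noise power filter 504 described above are both simple first-order "leaky bucket" averagers (their scaling factors α and β are described with FIGS. 13 and 14 below). The sketch here is illustrative only; the class structure and the use of the squared magnitude for the power estimate are assumptions of the example, not the actual circuit.

import numpy as np

class LeakyAverager:
    # First-order "leaky bucket" filter: y[n] = a * x[n] + (1 - a) * y[n-1].
    def __init__(self, a):
        self.a = a
        self.state = 0.0

    def update(self, x):
        self.state = self.a * x + (1.0 - self.a) * self.state
        return self.state

def noise_power_track(h_estimates, alpha=0.05, beta=0.05):
    # Smooth the raw channel estimates, subtract to get a noise sample,
    # then average its (squared) magnitude to track the noise power.
    denoiser, power_filter = LeakyAverager(alpha), LeakyAverager(beta)
    out = []
    for h in h_estimates:
        noise_sample = h - denoiser.update(h)
        out.append(power_filter.update(abs(noise_sample) ** 2))
    return np.array(out)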
The buffer shown at the input in FIG. 12 must be large enough to hold all the CTF coefficients from the even symbol. This size is max(K)/min(DxDy), where K is the number of useful sub-carriers in each OFDM symbol and max(K) = 27841 from Table 60 of [2], whilst min(DxDy) = 12 from Table 1 above. Thus max(K/DxDy) = 2321. This number can be reduced by considering min(DxDy) only for the MISO SP patterns for which FO-MCHC will be used. Functionally, this buffer can be moved to the input of the SDD, because it is only in the SDD that g_s and g_d are processed together. However, at this location a larger buffer (approximately DxDy times larger) would be needed--the actual size would be max(K). Another buffer--not shown--is needed to hold the data cells r(n) from the even symbol. The size of this buffer would be at least max(C) = 27152 from Table 42 of [2]. In FIG. 12, the noise estimation uses H_2. However, in multipath channels where the delay spread from transmitter antenna 46.1 is substantially different to that from transmitter antenna 46.2, the noise may be computed on both H_1 and H_2 and their powers averaged. Except for the effects of the filtering, both in the de-noising and noise filter blocks, both symbols 2n and 2n+1 should have a common ρ(n) and N(n). An example illustration of the de-noising processor 500 is shown in FIG. 13. In FIG. 13 both estimates of the channel for the upper and lower branches are received correspondingly by a scaling unit 520, which scales each of the samples by the value α. An adder 522 adds to the scaled channel samples a delayed version of a previous output from the adder, multiplied by a further scaling factor 1-α by multipliers 524. The delayed output is produced using the delay circuits 526. Effectively the de-noising unit 500 forms a leaky bucket filter using a scaling factor α to average out the effects of the noise over the respective symbols. Usually α is set to be very much less than 1. An example of a noise power filter is shown in FIG. 14. In FIG. 14 the difference value between the respective samples of the second estimated channel is fed on an input 530 to a magnitude calculator 532. The magnitude of the estimated noise term is then multiplied by an IC- (or user-) controlled scaling factor by a multiplier 534 before being fed to a scaling multiplier 536. The output of the scaling multiplier 536 is fed to an adder 538, which combines the multiplied sample with a delayed multiplied output from the adder 538, as affected by a delay unit 540 and a multiplier 542. The first and second multipliers 536, 542 scale the received samples by factors β and 1-β respectively, again forming a leaky-bucket-style filter which uses a value of β very much less than 1. As such, the noise magnitude values are filtered to generate an estimate based essentially on the difference between successive estimates of the channel samples.

Use of FO-MCHC for P2 and FC Symbols

A further example of a frequency only MISO channel estimation and correction unit 110.5 is shown in FIG. 15, where again only the differences from the previous examples will be explained. Essentially, the channel estimator and corrector 110.5 shown in FIG. 15 corresponds to that shown in FIG. 12, with all elements corresponding except for the input element. The channel estimator and corrector unit 110.5 as shown in FIG.
15 would be suitable for performing estimation of the channel and Alamouti decoding for OFDM symbols which include sufficient pilots throughout the OFDM symbol to provide an accurate estimate of the channel from the difference pilots and the sum pilots without requiring samples from other OFDM symbols. Accordingly, there is no delay element 406 as shown for the example in FIG. 12. Instead, there is a serial to parallel converter 206 and a sum-enable control input 228 which respectively divert the received pilot symbol estimates to the upper and lower branches in accordance with the samples which are available from the received OFDM symbols. In MISO transmission of DVB-T2, the P2 symbol and the frame closing symbol (FCS) both carry sum and difference pilots. These symbols can be equalised using the FO-MCHC, but the blocks up to the SDD in FIG. 12 will be replaced by those in FIG. 11, as illustrated in FIG. 15.

Various further aspects and features of the present inventions are defined in the appended claims.

REFERENCES

[0100]
[1] ETSI, "Digital Video Broadcasting (DVB); Framing structure, channel coding and modulation for digital terrestrial television", EN 300 744 v1.1.2, August 1997.
[2] DVB, "Digital Video Broadcasting (DVB); Frame structure, channel coding and modulation for a second generation digital terrestrial television broadcasting system (DVB-T2)", Draft EN 302 755 v1.1.1, May 2008.
[3] S. M. Alamouti, "A Simple Transmit Diversity Technique for Wireless Communications", IEEE Journal on Selected Areas in Communications, vol. 16, no. 8, October 1998.
[4] A. Vielmon et al., "Performance of Alamouti transmit diversity over time-varying Rayleigh-fading channels", IEEE Transactions on Wireless Communications, vol. 3, no. 5, September 2004.
[5] DVB, "Digital Video Broadcasting (DVB-T2); Implementation Guidelines for DVB-T2", draft, July 2008.
[6] Ollie Haffenden, "Alamouti in varying channels", DVB-T2 document.
Axioms Proving.

March 14th 2009, 07:45 PM

Let A, B and C be any three events. Show that
(i) $P(A) = P(B)$ if and only if $P(A \cap B^{c}) = P(A^{c} \cap B)$
(ii) Given $P(A) = 0.5$ and $P(A \cup (B^{c} \cap C^{c})^{c}) = 0.8$, determine $P(A^{c} \cap (B \cup C))$

Show that for any two events:
(i) If $A \subset B$, then $P(A^{c} \cap B) = P(B) - P(A)$
(ii) If $A \subset B$, then $P(A) \leq P(B)$
(iii) Why is it incorrect to assume that for some events A and B, $P(A) = 0.4$, $P(B) = 0.3$ and $P(A \cap B) = 0.35$?

March 15th 2009, 12:41 AM

I'll give you most of the solution, but you will have to fill in some steps ;)

Things that may come in handy:
$P(M \cup N) = P(M) + P(N) - P(M \cap N) \quad (1)$
$\Rightarrow P(M \cap N) = P(M) + P(N) - P(M \cup N) \quad (2)$
$P(M^c) = 1 - P(M) \quad (3)$
$(M \cup N)^c = M^c \cap N^c \quad (4)$ (de Morgan's law)
$(M \cap N)^c = M^c \cup N^c \quad (5)$ (de Morgan's law)

(i)
$\begin{aligned} P(A \cap B^c) &= P(A) + P(B^c) - P(A \cup B^c) \quad \text{by (2)} \\ &= P(A) + 1 - P(B) - (1 - P(A^c \cap B)) \quad \text{by (3), (4)} \end{aligned}$

$P(A \cap B^c) = P(A) - P(B) + P(A^c \cap B)$
${\color{red}P(A \cap B^c)} - {\color{red}P(A^c \cap B)} = P(A) - P(B)$

From here, it should be very easy to conclude :)

(ii) Given $P(A) = 0.5$ and $P(A \cup (B^{c} \cap C^{c})^{c}) = 0.8$, determine $P(A^{c} \cap (B \cup C))$

I'm thinking on this one...

Show that for any two events:
(i) If $A \subset B$, then $P(A^{c} \cap B) = P(B) - P(A)$

$B = A \cup (B \cap A^c)$. Why? Consider an element of B: either it is contained in A, or it is contained in B but not in A. The latter possibility gives the set $B \cap A^c$. You can also see that since A and $A^c$ are disjoint, A and $B \cap A^c$ are disjoint. Hence
$P(B) = P(A \cup (B \cap A^c)) = P(A) + P(B \cap A^c)$
and the conclusion follows.

(ii) If $A \subset B$, then $P(A) \leq P(B)$

Use (i): $P(B) = P(A) + P(B \cap A^c)$, but since $P(B \cap A^c)$ is a probability it is $\geq 0$, thus $P(B) \geq P(A) + 0$.

(iii) Why is it incorrect to assume that for some events A and B, $P(A) = 0.4$, $P(B) = 0.3$ and $P(A \cap B) = 0.35$?

Because $A \cap B$ is included in $B$. By (ii), we should have $P(A \cap B) \leq P(B) = 0.3$, which is not the case here ;)
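For the unanswered part (ii) of the first problem, one possible completion (not from the original thread) using the same identities is:

By de Morgan's law (4), $(B^c \cap C^c)^c = B \cup C$, so we are given $P(A \cup (B \cup C)) = 0.8$.
Writing $A \cup (B \cup C) = A \cup \big((B \cup C) \cap A^c\big)$ as a disjoint union (the same decomposition as in the second problem's part (i)) gives
$P(A^c \cap (B \cup C)) = P(A \cup (B \cup C)) - P(A) = 0.8 - 0.5 = 0.3$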
A simple pair trading strategy

The examples presented on these pages are for educational purposes only and may not be seen as investment advice! You, and only you, are responsible for what you are doing with your money. Please click here to read the entire disclaimer and disclosure.

The goal of this strategy is to exploit deviations from the mean of a time series consisting of the two German utilities RWE and E.ON, while being neutral to overall market changes in the utility sector. To find out whether this pair is appropriate for this type of trading, I made a linear regression analysis, as shown in the graph below. The equation shown in the graph is the equation for the theoretical price of 1 piece of RWE: RWE = 2,2298 * E.ON + 1,3152 EUR. This equation explains 88,41% of all the prices of RWE in the period examined, as shown by the R squared. 1,3152 EUR is what an investor on average has to invest to own a portfolio that is either long 1 RWE and short 2,2298 E.ON, or the other way round.

The next step is to calculate all the theoretical prices of RWE and to divide these theoretical prices by the observed prices. This generates the ratio, where a ratio of 1 means that the prices are in equilibrium, a ratio smaller than 1 means that E.ON is cheap relative to RWE, and a ratio greater than 1 means that RWE is cheap relative to E.ON. Plotting the ratio, as shown below, one can easily see that the ratio is mean-reverting.

So, according to the above, a simple strategy could be to buy 2,2298 pieces of E.ON and to sell short 1 piece of RWE when the ratio is below 1, and the other way round when it is greater than 1. To find suitable entry and exit prices I calculated the standard deviation of the ratio and backtested the strategy with 1, 2, 3 and 4 standard deviations from 1 as the entry point. The trades will be closed when the ratio moves back to 1. A stop-loss is not applied.

As the graph above shows, 1 standard deviation from the mean seems to be the best entry point. Admittedly, this is before trading costs and slippage.

One might wonder why I only examined the data until the end of 2006 in my regression. The answer is easy: to have enough data to backtest "out of sample", meaning that the model is not fitted to this period, making it a kind of "live test". The graph below shows the performance if one had built this model on New Year's Eve 2006 and had started trading it at the beginning of 2007:

The beta of this strategy is fairly close to zero (0,004), meaning that the strategy is close to market-neutral. The Sharpe Ratio is 0,48 (assuming a risk-free interest rate of 2%).
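A minimal sketch of the backtest described above, assuming daily closing prices with a DatetimeIndex in a pandas DataFrame with columns "RWE" and "EON"; the column names, the helper name and the loop structure are illustrative, and trading costs and slippage are ignored, exactly as in the results shown:

import numpy as np
import pandas as pd

def backtest_pair(prices: pd.DataFrame, train_end="2006-12-31", entry_sd=1.0):
    # Fit RWE = a * EON + b on the training window only (in-sample),
    # then trade the ratio of theoretical to observed RWE price afterwards.
    train = prices.loc[:train_end]
    a, b = np.polyfit(train["EON"], train["RWE"], 1)

    theo = a * prices["EON"] + b
    ratio = theo / prices["RWE"]          # > 1: RWE relatively cheap, < 1: E.ON cheap
    sd = ratio.loc[:train_end].std()

    pos = pd.Series(0.0, index=prices.index)   # +1 = long RWE / short a*EON, -1 = reverse
    state = 0.0
    for t, x in ratio.items():
        if state == 0.0:
            if x > 1 + entry_sd * sd:
                state = 1.0
            elif x < 1 - entry_sd * sd:
                state = -1.0
        elif (state == 1.0 and x <= 1) or (state == -1.0 and x >= 1):
            state = 0.0                        # ratio back at 1: close the trade
        pos.loc[t] = state

    # daily P&L of the hedged pair, evaluated out of sample only
    pnl = pos.shift() * (prices["RWE"].diff() - a * prices["EON"].diff())
    return pnl.loc[train_end:].cumsum()

Running it with entry_sd set to 1, 2, 3 and 4 reproduces the kind of comparison of entry points shown above, although the exact numbers of course depend on the price data used.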
Microcontroller fuzzy logic processing module using selectable membership function shapes

U.S. Patent 5,491,775

Inventor: Madau, et al.
Date Issued: February 13, 1996
Application: 08/310,894
Filed: September 22, 1994
Inventors: Davis; Leighton I. (Ann Arbor, MI), Feldkamp; Lee A. (Plymouth, MI), Madau; Dinu P. (Dearborn, MI), Seippel; Steven H. (Canton, MI), Yuan; Fumin (Canton, MI)
Assignee: Ford Motor Company (Dearborn, MI)
Primary Examiner: MacDonald; Allen R.
Assistant Examiner: Dorvil; Richemond
Attorney or Agent: Coppiellei; Raymond L.; May; Roger L.
U.S. Class: 706/4; 706/52; 706/900
Field of Search: 395/3; 395/61; 395/54; 395/900
U.S. Patent Documents: 4809175; 4837725; 4864490; 5111232; 5184131; 5243687; 5249258; 5280624
Other References: A Fuzzy Logic Programming Environment for Real-Time Control, Dec. 31, 1988, Stephen Chiu, Masaki Togai.

Abstract: A standardized microcontroller-based fuzzy logic processing module for generating control signal values in response to variable input signal values in accordance with constraints imposed by propositions or "rules" stored in memory in a standardized format. Each rule consists of one or more input conditions and an output directive. Each input signal and the output signal values are characterizable by their degree of membership in a predetermined number of fuzzy sets, each fuzzy set being defined by a membership function. Each input condition is composed of a reference to a particular input variable, which has been preprocessed into integer (whole number) normalized form, and a reference to a membership function. Each output directive includes a reference to a further membership function. Each membership function is implemented by one of three possible forms of standard membership data structures: a triangular function represented by three data points, a table lookup function represented by two end data points and the points between these two end points, and a standard shape function composed of a set of data points and a set of reference points used to form a similar shape function by interpolation. Fuzzy logic processing is accomplished by determining the extent to which the input conditions are satisfied by the current values of the input signals in order to develop a rule strength value, and then performing a "center-of-gravity" determination based on the output membership function values of each satisfied rule integrated over the range of possible output signal values.

Claims:

What is claimed is:

1.
A digital fuzzy logic system comprising: a source of a plurality of input signal values; a processor; a memory connected to said processor for storing data and for further storing instructions which are executable by said processor for manipulating said data, and a utilization device connected to said processor and responsive to a digital output signal value produced by said processor; wherein said processor and said memory operate cooperatively to form a fuzzy logic controller comprising: means for processing said input signal values into a corresponding set of input integer values; means for storing plural sets of function definition values, means for storing instructions for performing a plurality of different standard membership functions, each of said functions when executed producing a degree-of-membership value in accordance with the combination of a supplied integer value and aspecified one of said sets of function definition values; means for storing data representative of a plurality of rules, each of said rules specifying at least: (a) a first one of said input integer values, (b) a first one of said sets of function definition values, (c) a first one of said standard membership functions, (d) a second one of said set of function definition values, and (e) a second one of said standard membership functions; means responsive to each given one of said rules for forming a strength value for said rule, said means comprising: means for executing said first standard membership function specified in said given rule to produce an intermediate integer value in accordance with the first input integer value specified by said given rule and the first set of functiondefinition values specified by said given rule, and means for executing said second standard membership function specified by said given rule to produce said strength value for said given rule in accordance with said intermediate integer value and the second set of function definition valuesspecified by said given rule; and means for producing said digital output signal value by forming the weighted combination of the strength value formed in response to each of said plurality of rules. 2. Apparatus as set forth in claim 1 wherein said plurality of different standard membership functions are selected from a group comprising: a) a triangular function which when executed produces a degree-of-membership value in accordance with function definition values representative of left and fight signal value limits associated with a minimum degree-of-membership values and acenter signal value associated with a maximum degree-of-membership value, b) a table-lookup function which when executed produces a degree-of-membership value in accordance with function definition values represented by an array of degree-of-membership values each corresponding to a possible signal value, and c) an interpolated function which when executed produces a degree-of-membership value by interpolating between adjacent ones of said function definition values. 3. Apparatus as set forth in claim 2 wherein said second one of said standard membership functions specified for each of said rules is a table-lookup function. 4. Apparatus as set forth in claim 3 wherein said first one of said standard membership functions specified for each of said rules is a triangular function. Description: AUTHORIZATION A portion of the disclosure of this patent document contains material which is su copyright protection. 
The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appearsin the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. 1. Field of the Invention This invention relates generally to electronic control systems and, more particularly, to a system for generating process control signals which vary in response to changing input signals in accordance with constraints imposed by rules which statesystem objectives. 2. Background of the Invention Control systems typically use feedback mechanisms to vary the value of an actuating signal in response to variations in sensed system conditions. By manipulating signals which are directly proportional to sensed system conditions, as well assignals related to integrals or derivatives of sensed signals, accurate control of many dynamic systems may be successfully achieved. Such feedback control systems work well because they are able to model desired system performance, sense the state ofthe physical system and compare it with the desired model, and generate error-correcting feedback signals so that the physical system behaves in accordance with the desired model. Frequently, however, even an elaborate proportional-integral-differential (PID) controller cannot accurately model more complex physical systems. Many control functions which humans easily perform cannot be automated by conventional feedbackcontrollers. For example, the functions which an experienced automobile driver performs (turning the steering wheel, pressing the brake and accelerator pedal, and shifting gears) can't be handled by conventional controllers, because the driver'smovements are based on a complex inputs which the driver processes based on past experience. A technique called "fuzzy logic," developed by Dr. Lotfi Zadeh and described in "Fuzzy Sets," Information and Control, Vol.8. No. 3, June 1965, pp. 338-53, has proven highly effective in controlling complex systems. Using fuzzy logic, inputvariables are processed against a collection of rules, each of which expresses a system objective in propositional form; for example: "if velocity is low and rpm is low then shift to 1st-gear." Unlike conventional logic, in which conditions are eithersatisfied or not satisfied, the conditions "velocity is low" and "rpm is low" may be only partially satisfied so that the rule is only partially satisfied. As a consequence, when the input conditions are only partially satisfied, the rule and its outputdirective to "shift to low gear" is assigned less "rule strength." In the fuzzy logic based controller, many such rules are evaluated concurrently. The actual gear selection is performed by combining all directives from all of the satisfied rules inaccordance with the degree to which each rule was satisfied, thereby arriving at a consensus Fuzzy logic control systems allow the possible state or signal values assumable by the system to be classified into "fuzzy sets," each defined by a membership function. A membership function associated with a given signal thus provides anindication of the degree-of-membership the current value of that signal has with respect to the fuzzy set. Rules express both their conditions and their directives in terms of fuzzy sets. For example, a condition that "velocity is low" might berepresented by a triangular membership function which yields a maximum degree-of-membership value at 4 mph, and which goes to zero at -3 mph and +11 mph. 
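As a purely illustrative number, a measured speed of, say, 7 mph would fall between the peak of that membership function at 4 mph and its right-hand limit at 11 mph, giving a degree of membership of roughly (11 - 7)/(11 - 4) ≈ 0.57 in "velocity is low" (about 18 on the 0-to-32 integer scale used later in this description), so the condition "velocity is low" would be only partially satisfied.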
Given a current speed value, the membership function specified in the first rule condition yields adegree-of-membership value which is then combined with the degree-of-membership value from the second condition, "rpm is low," to determine the current rule strength. The rule's directive, "shift to low gear," is also expressed as a fuzzy set, with "lowgear" being represented by a membership function which yields a degree-of-membership value for each possible gear ratio. The determination of the "consensus" among all of the applicable rules is typically accomplished by calculating the "center of gravity" of all of the rules, taking into account both their rule strength, as determined by the degree to which eachrule's condition(s) were satisfied, and the shape of the membership function for the rule's output directive. In complex fuzzy logic systems employing a large number of rules, the calculations required to reach the desired consensus output value areextensive. When control systems must operate in real-time, the speed at which the desired output control values must be calculated to keep pace with rapidly changing input signal can place a substantial burden on the system's computational resources. In contrast, other control functions in the same system may use fuzzy logic rules which need to be evaluated relatively infrequently. In many environments, such as a vehicle control system, it is desirable to perform fuzzy logic manipulations in combination with conventional control functions. In this way, a group of both fuzzy and non-fuzzy logic controllers may manipulate orrespond to the same input or output signals, or share resources, such as sensors, actuators or other instrumentalities. Such related control functions are preferably implemented using a single microcontroller, suitably programmed to perform associatedfunctions in a coordinated way. SUMMARY OF THE INVENTION It is a general object of the invention to perform fuzzy logic control functions by means of an efficient, standardized, general-purpose, shareable processing module which performs integer arithmetic fuzzy logic operations on input signal valuesby reference to conditional propositions, or "rules," having predetermined formats to produce de-fuzzified output signal values. In accordance with a feature of the invention, fuzzy logic processing to produce one or more output control signals is performed by a digital controller consisting of a microprocessor and a memory which stores, for each of the input signals to beprocessed and for each output signal to be produced, a predetermined number of associated membership functions, each function specifying a predetermined degree-of-membership integer value for each integer value assumable by the associated signal. In the preferred embodiment to be described, seven membership functions are established for each signal and represent a group of seven fuzzy sets. Although these sets are typically designated with the semantic names "negative large," "negativemedium," "negative small," "zero," "positive small," "positive medium," and "positive large," the signal values which are within any given set may be any collection of values which are defined by that set's membership function. 
In accordance with the invention, a membership function may assume one of three standard forms: (a) a triangular function specified by left and right signal value limits, both of which have a zero degree-of-membership value, and a "center" signalvalue having the maximum degree-of-membership value; (b) a table-lookup function specified by an array of degree-of-membership values in memory indexed by the current integer signal value; and (c) a shape function specified by left and right signal rangelimits having zero degree-of-membership values, and the specification of a table-lookup function which defines a template shape (such as a bell-shape) from which the actual degree-of-membership values of the shape function are interpolated. In accordance with the invention, the module performs fuzzy logic operations specified by a plurality of rules, each rule being expressed in a standard format which identifies one or more input condition(s) by specifying input membershipfunction(s) and further specifying an output membership function. Using high speed integer arithmetic, the processing module derives the rule strengths for each rule, then performs an integer arithmetic center-of-gravity calculation to determine thedesired, de-fuzzified output signal value. For efficiency of storage, the input membership functions are preferably implemented as triangular functions whose membership values are calculated from the current input values by determining the membership function and input value lineintersections. To speed execution, the output membership functions take the form of table lookup functions which may be dynamically precalculated by interpolation from predetermined shape function limit values. These and other objects, features and advantages of the present invention may be better understood by considering the following detailed description. BRIEF DESCRIPTION OF THE DRAWINGS In the course of the description of the preferred embodiment which follows, reference will frequently be made to the attached drawings, in which: FIG. 1 is a block diagram illustrating the functional organization of the preferred embodiment of the invention; FIG. 2 is a graph of a triangular membership function used to convert signal values into degree-of-membership values for a fuzzy set; FIG. 3 is a memory map of the values needed to specify the values needed to define seven triangular membership functions for a given signal; FIG. 4 is a graph showing the relative locations of values stored in memory for a representative table lookup membership function; FIG. 5 is a memory map of the values which define lookup membership functions for each of seven sets associated with a given signal; FIG. 6 is a graph showing the locations of values stored in memory for a shape function; FIG. 7 is a memory map of the values needed to specify seven shape membership functions for a given signal; FIG. 8 is a memory map illustrating the addressing mechanism used to specify the set of membership functions during the processing of a collection of rules to determine a defuzzified output value from a designated group of input values; FIG. 9 is a flow chart illustrating the processing steps used to evaluate the applicability of a collection of fuzzy logic rules to develop an output control signal magnitude; FIG. 
10, parts (a) through (g), are illustrative graphs showing the triangular input and output membership functions defined by two rules and the manner in which those rules are processed with respect to two input signals to yield an outputcontrol value; FIG. 11 is a flowchart illustrating the processing steps used in performing the center-of-gravity determination to form the desired output signal magnitude in accordance with the invention; and FIG. 12 is an enlargement of FIG. 9(g) which illustrates the mechanism for determining a center-of-gravity value. DESCRIPTION OF THE PREFERRED EMBODIMENT The embodiment of the invention to be described utilizes a programmed microcontroller to perform general-purpose fuzzy logic control functions as generally shown in FIG. 1 of the drawings. The fuzzy logic module 101 accepts input signals from aseveral sources 103, 105 and 107, each signal being pre-processed at 109 to form normalized digital signals which take the form of integer values supplied to the module 101 by the fuzzy logic inputs 110. The module 101 repeatedly processes the currentvalue of the digital input signals 110 to produce a sequence of digital integer output values at 112. This sequence of output values is then translated by suitable post-processing at 113 into a control signal for driving a utilization device, such asthe actuator 115 seen in FIG. 1. The logic module 101 creates the integer output signal values at 112 by comparing the input signals at 110 with conditions defined in a group of rules stored in memory at 117. The conditions expressed in the rules identify input membershipfunctions which are stored at 119. The identified input membership functions from 119 are then used at 120 to determine a "rule strength" value for each rule indicating the extent to which the rule's input conditions are satisfied by one or more of theinput signal value(s) at 110. The resulting rule strength values are stored temporarily at 122. Using these stored rule strength values and the output membership functions 118, the module then determines a "center-of-gravity" value at 124 yielding the output integer value at 112. The center-of-gravity determination is performed in part byintegrating degree-of-membership values produced by the output membership functions stored at 128 over the range of possible output integer values, each degree-of-membership value being limited by the rule strength values stored at 122. The results fromthe integration are then processed to form a center-of-gravity value delivered as the "de-fuzzified" output integer value at 112. This output integer represents the consensus of all of the rules whose conditions are satisfied, to varying degrees, by theinput signal values at 110, and is converted by post-processing into a signal magnitude appropriate for controlling a utilization device, such as the actuator 115 shown in FIG. 1. Standard Membership Functions In accordance with the invention, the fuzzy logic module employs "fuzzy sets" defined by membership functions having predetermined formats which are stored in memory and which characterize the values assumable by both the input signals and thedesired output signal. To permit the use of standard processing methods, each input signal is preprocessed at 109, before being supplied to the fuzzy logic module so that a normalized signal level is supplied at 110 in the form of a digital integerhaving a predetermined range of possible values, e.g., 0 to 40. 
In addition, a predetermined number of fuzzy sets is used to characterize the possible values assumable by each input signal level. In the preferred embodiment, each signal value is a member of one or more of seven sets which may be indicated by the semantic designations "negative large," "negative medium," "negative small," "zero," "positive small," "positive medium," and "positive large." Signal values which are outside a given set are assigned a zero degree-of-membership value, whereas each possible signal value within a given set is associated with a degree-of-membership value expressed as an integer having a predetermined range. In the preferred embodiment described here, the maximum degree-of-membership in a set is represented by the integer 32. Membership functions are stored in memory to translate each integer signal value into a degree-of-membership value for any designated fuzzy set associated with that signal.

Triangular Membership Functions

In order to efficiently store the potentially large number of data points necessary to define the membership functions needed for several input signals, the preferred embodiment stores each input membership function by defining the three corner points of a triangle. Triangular membership functions are bounded at the right and left by designated signal values representing the outer limits of signal values which are within the fuzzy set being defined. The third designated signal value indicates the signal value at which the maximum degree of set membership occurs. A representative standard triangular membership function defined by the three integer values X.sub.L, X.sub.C and X.sub.R is shown in FIG. 2 of the drawings. FIG. 3 is a chart showing the array of stored integers which define the seven triangular fuzzy sets which are preferably used to evaluate each input signal.

Degree-of-membership values for a triangular fuzzy set membership function are determined by locating the intersection of the vertical line, representing a signal value, with the triangle. The processing routine first determines whether the given input value lies within the boundaries of the triangle. If the input is outside the triangle, then the output degree-of-membership value is known to be zero since the input does not belong to that set. Once the input is determined to intersect the triangle, the routine determines if the input is to the right or left of the maximum or "center" value, X.sub.C. The routine then determines the Y (degree-of-membership) coordinate at the intersection point given the X coordinate by using the slope-line equations:

For inputs right of center (that is, inputs greater than X.sub.C): Y = 32 * (X.sub.R - X) / (X.sub.R - X.sub.C)

For inputs left of center (that is, inputs less than X.sub.C): Y = 32 * (X - X.sub.L) / (X.sub.C - X.sub.L)

The scaling factor 32 is chosen to give rule strengths of appropriate resolution within the dynamic range. The maximum degree-of-membership at the "center" value X.sub.C of a triangular membership function need not be midway between the two outer limits X.sub.R and X.sub.L. For example, the fuzzy set "negative large" might be represented by a right triangle where both X.sub.C and X.sub.L are equal to 0, the lowest value assumable by an integer variable.

The corner-point designations for a group of seven triangular membership functions are preferably associated with each input function. The array of integers defining the seven triangular sets is illustrated in FIG. 3 of the drawings. In accordance with the invention, each membership function is further identified in memory with a type indicator.
For example, the value 1 indicates a triangular membership function, while 2 indicates a table lookup membership function and 3 a bell-shaped function.

Table Lookup Membership Functions

The frequently accessed output membership functions stored at 118 are preferably represented in memory by table lookup values in order to speed computation. Each table lookup membership function is stored in memory as a sequence of numbers, beginning with the left and right outer range limits of the set, X.sub.L and X.sub.R, both of which yield zero degree-of-membership values, followed by N=(X.sub.R - X.sub.L - 1) degree-of-membership values respectively associated with the output signal values (X.sub.L + 1) through (X.sub.R - 1).

The shape of a typical lookup function (which may take any suitable form) is shown in FIG. 4. The data values for the lookup function are stored in memory as shown in the memory map of FIG. 5. The first two numbers, X.sub.L and X.sub.R, of each function specify the outer limits of the fuzzy set, and the remaining values M1 through MN of each set represent the degree-of-membership values for the intermediate points. The lookup values graphically depicted in FIG. 4 might be appropriate for a "negative large" membership function defined by the stored data points: 0,12; 29,27,25,21,19,16,13,11,8,5, and 3. With table look-up storage, the shape of the membership function is unlimited. It is, however, normally desirable to have a membership function that is continuous so that changes in the inputs don't cause abrupt changes in the output.

Bell-Shaped Membership Functions

Membership functions having predefined shapes may also be dynamically constructed at run time from stored template information. For example, a standard bell-shaped membership function having the shape depicted in FIG. 6 may be stored in a standard "high resolution" lookup table structure as seen in FIG. 7(a). As seen in FIG. 7(b), only the upper and lower limits X.sub.L and X.sub.R are stored for each of the seven fuzzy sets associated with the semantic designations of the output memberships. The actual lookup function is then derived by interpolation from the high-resolution function, scaling that function to the proper limits for each of the seven sets. If high speed processing is not required, this interpolation may be accomplished as part of the processing of each input signal value. Alternatively, the interpolation may be performed during process initialization to create a complete lookup data point array, organized as shown in FIG. 5 of the drawings, for each of the seven fuzzy sets associated with a given signal variable.

Rule Storage

In accordance with the invention, the behavior of the fuzzy logic module is determined by a set of rules stored in memory, as seen at 117 in FIG. 1. The data structure used to define the rules consists of the following predetermined memory locations:

1. a memory location containing the value NoOfRules, the number of rules used to generate a given defuzzified output signal;
2. an array of rule addresses indexed by rule number; and
3. a variable length record structure for each rule defining one or more input signal conditions and an output directive.

Each input signal condition defined by a rule consists of an input identifier, which designates a particular input signal by number, and an input membership function identifier (e.g., nl, nm, ns, ze, ps, pm or pl) which specifies the particular fuzzy set to which the current value of the identified input signal must belong in order to satisfy the condition.
Thus, an input condition might be expressed in a rule as "3,PS", a condition which would be satisfied whenever the current value of input3 is a member of the set PS (positive small). Similarly, the output directive consists of the combination of an output identifier value and a membership function. For example, the output directive "output,pm" indicates that the output signal should be a member of the set pm (positive medium) when the rule's input conditions are satisfied. A rule may contain more than one input condition. Thus, the rule "1,nl,2,ze,output,ns" can be stated as the linguistic proposition "if input1 is negative large and input2 is zero then the output should be negative small."

Rule Processing

FIG. 8 of the drawings illustrates the manner in which, given a stored collection of rules, the fuzzy logic module locates the information structures required to calculate "rule strength" values. The rules themselves are stored in memory at addresses which are stored at 201 in an array of rule addresses. The rules may accordingly be accessed, by rule number, from 1 up to and including the value NoOfRules, which is stored in memory to specify the last rule stored. In the example shown in FIG. 8, the array 201 is indexed by rule number 2 to yield the address of the second rule 202. The second rule at 202 contains a pair of input conditions and an output directive. The first condition identifies the particular input signal to be tested, Input1=4, to specify row 4 in an array 203 which contains membership function identification information for signal number 4 as defined in 110. The column of array 203 is specified by the first membership function identifier Memb1 in the rule 202. In FIG. 8, the value of Memb1 is "ps" (positive small), thus specifying the cell 204. The cell 204 contains two values: the first value "1" specifies the type of membership function (triangular) which has been selected to define the "ps" (positive small) set for the fourth input signal. The second portion of cell 204 contains the address of the data defining the triangular function which, in this case, is a triangular function defined by the three corner values X.sub.L, X.sub.C and X.sub.R stored at 205. In the same way, the fuzzy logic module is able to identify the type and location of every membership function and its associated input signal in order to evaluate all of the rules governing the fuzzy logic translation of the sensed input signals into a desired output signal.

Rule Strength Determination

As indicated in the block diagram of FIG. 1, the fuzzy logic module contemplated by the present invention processes the input signals at 110 by first determining the strengths of the rules, given the current input values, and stores the rule strengths at 122. After the rule strengths have been determined, the process then determines a center-of-gravity value at 124. This processing is shown in more detail in the flowchart, FIG. 9.

Rule processing begins at 220 as seen in FIG. 9. Before each rule is evaluated, its rule strength variable is initialized to 1 at 221. Each rule condition is then identified at 222, its membership function is located at 224, and the current value of the input signal associated with that condition is obtained at 226. The membership value specified by the located membership function given the associated input signal value is then determined at 228.
In the specific embodiment of the invention shown in FIG. 9, those rules which contain more than one condition combine those conditions using a logical AND operator. Rules which combine conditions using the logical OR operator are not used; instead, equivalent relationships are established using separate rules to express the separate conditions. By limiting the relationship between conditions within a rule in this fashion, processing is simplified and less storage space is consumed by the rules, which need not include fields indicating the logical relationship between conditions.

The fuzzy logic equivalent of the AND operation is performed by selecting the minimum (MIN) condition membership value among the conditions within a rule. Thus, if any condition is totally unsatisfied (that is, evaluates to a zero value), the result is zero. For single condition rules, the "rule strength" of each rule is simply the output of the single identified membership function given the current input value. If the rule has more than one condition, the rule strength to be saved for later processing is the minimum membership value, located by the test at 236 and saved at 237. If the conditions within a given rule were joined by a logical OR operator, then the appropriate result of the logical combination would be formed by the maximum (MAX) of the condition membership values.

After each rule strength is stored, the current rule number is compared with NoOfRules at 238 to determine if more rules are to be processed. If not, the process continues with the next step, the determination of the center-of-gravity at 240.

The processing steps described above are illustrated graphically in FIG. 10, parts (a) through (f), which depicts a control system for generating an output signal indicating a desired velocity V from two input signals, a position signal P and a deviation signal D, based on two rules. FIGS. 10(a)-(c) illustrate the first rule, which might be written "P,ps,D,ps,output,ps"; that is, "if the position P is positive small and the deviation D is positive small then the velocity V should be positive small." FIGS. 10(d)-(f) illustrate a second rule which could be written as "P,ze,D,ps,output,ze" or "if P is zero and D is positive small then V is zero." Example input signal values P and D are shown by the dotted vertical lines P and D in FIG. 10.

The first condition seen in FIG. 10(a) is satisfied if the position signal P is positive small. P intersects the triangular membership function, yielding the degree-of-membership value indicated by the horizontal line 301. For the second condition of the first rule, the smaller amplitude input signal value D (deviation) also satisfies the triangular membership function seen in FIG. 10(b), yielding the degree-of-membership value 303. Since value 303 is smaller than the value 301, it becomes the stored "rule strength" value for the first rule.

P and D also satisfy both input conditions of the second rule, as illustrated at FIGS. 10(d) and 10(e). The degree-of-membership value 305 from the first condition at FIG. 10(d) is smaller than the value 307 produced by the membership function at FIG. 10(e), so the value indicated by 305 is stored as the rule strength for the second rule. These two stored rule strength values are then used to determine the center of gravity, as next discussed.

Center of Gravity Determination
As indicated in FIG. 1 at 122 and 124, after the strength of each rule has been determined for the current input signal values, the module next determines a "center of gravity" value representing the "de-fuzzified" consensus of all of the rules which have been satisfied, as indicated by non-zero rule strength values.

The de-fuzzified output value is determined by a numerical integration process which is illustrated by FIG. 10(g), which is enlarged in FIG. 12. The output value is determined by integrating the area under a curve formed by the point-to-point maximum output degree-of-membership value of each satisfied rule, as limited by that rule's strength. This curve is seen in FIGS. 10(g) and 12, and represents the maximum of two trapezoid shapes 330 and 340. Trapezoid 330 is the curve formed by the triangular output membership function of the first rule seen in FIG. 10(c) as limited by that rule's strength value 303. The smaller trapezoid 340 is formed by the triangular output membership function of the second rule seen in FIG. 10(f), as limited by the second rule's strength value 305. The first rule, which was more strongly satisfied than the second rule, indicated that the output value V should be within the set "positive small," while the second rule, which was less strongly satisfied, indicated that the output value V should be within the set "zero." The defuzzification process provides a defuzzified result signal which represents a consensus value, visually seen in FIGS. 10(g) and 12 as the center-of-gravity 350 of the area under the curve formed by the maximum of the two trapezoidal shapes 330 and 340.

The preferred method for determining the center-of-gravity value is illustrated in more detail by the flowchart seen in FIG. 11 of the drawings. The procedure produces, by numeric integration, two values: a total moment arm value MArm and a total area value Area, both of which are initialized to zero at 401 as soon as the routine starts at the entry point indicated at 400 in FIG. 11. The integration is repeated by varying the value of the output variable X from 0 through MaxX. The values for the variables RuleNo and MaxStrength are initialized to 1 and zero respectively at 403 (and are re-initialized to those values at 403 for each new value of X).

The rule strength previously determined (as discussed above in connection with FIG. 9) for the rule identified by RuleNo is then checked at 405. If the rule strength is zero, that rule will not make a contribution to the output value and an immediate jump is accordingly made to 408, where the current value of RuleNo is checked to determine whether it is equal to NoOfRules and hence identifies the last rule to be processed. If RuleNo is less than NoOfRules, RuleNo is incremented to the next rule number at 410 and a return is made to 405 to continue the processing of that rule.

If RuleStrength is non-zero, control is passed to 412 where the output membership function for the rule specified by RuleNo is used to determine a degree-of-membership value LocalStrength, given the current value of X. LocalStrength is then compared at 414 to the RuleStrength value previously determined for the rule specified by RuleNo. If the rule's rule strength is less than LocalStrength, LocalStrength is replaced by that smaller value at 416.
At this time, LocalStrength is an integer representative of the rule's directive value, taking into account the current input signal values, the degree to which those input signal values satisfy the rule's input conditions, and the value specified by the rule's output membership function for the current value of X.

If the resulting rule directive value LocalStrength after the test at 414 is greater than MaxStrength, it replaces the former value of MaxStrength at 422, so that MaxStrength holds the maximum directive value produced by one of the rules after all of the rules have been processed for a given value of X (as indicated when RuleNo is no longer less than NoOfRules when tested at 408).

Thus, as seen in the graph of FIG. 12, the routine described above determines the value MaxStrength for each value of X, and MaxStrength corresponds to the height of a subarea of integration indicated by the shaded rectangle 360 in FIG. 12. The total area under the curve defined by a succession of MaxStrength values as X changes is determined by accumulating the values of MaxStrength in the variable Area as indicated at 440 in FIG. 11. The value MArm is then incremented at 450 by the moment arm of the subarea, which is equal to the product (X * MaxStrength). The value of X is then tested at 460 to determine if the end of the integration interval MaxX has been reached. If not, X is incremented by the value Stepsize at 470 and control is returned to 403 to re-initialize RuleNo and MaxStrength for the next X value to be processed.

At the conclusion of the integration interval, the center-of-gravity value COG is determined by dividing MArm by Area at 480. The module then passes this value, which represents the desired defuzzified output signal, to an appropriate utilization routine for postprocessing at 490.

Software Implementation

As previously indicated, the present invention advantageously takes the form of a sharable, multipurpose fuzzy logic mechanism implemented by means of a microprocessor and memory containing suitable programs and data structures for providing the desired fuzzy logic functions. The following assembly language routine is executable on an Intel 196KR microcontroller (as converted into an executable module using the corresponding Intel Assembler/Linker/Compiler, both of which are available from Intel Corporation, Santa Clara, Calif. 95052). ##SPC1##

* * * * *
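The appended assembly listing (##SPC1##) is not reproduced here. As a rough illustration of the processing described above (triangular membership evaluation, MIN-combination of rule conditions, and the area/moment accumulation behind the center-of-gravity), here is a minimal C sketch. It is not the patent's 196KR routine: the struct layout, the function names, the two-condition limit, the 16-rule cap, and the use of triangular rather than table-lookup output membership functions are all simplifying assumptions made for the example; only the 0-40 signal range and 0-32 membership scale follow the preferred embodiment described above.

```c
/* Illustrative sketch only -- not the patent's assembly routine (##SPC1##). */
#define MAX_X    40   /* normalized output range 0..40 (preferred embodiment) */
#define MAX_MEMB 32   /* maximum degree-of-membership                         */
#define MAX_RULES 16  /* assumed cap for this sketch                          */

typedef struct { int xl, xc, xr; } Triangle;   /* corner points X.sub.L, X.sub.C, X.sub.R */

/* Degree of membership of input x in a triangular fuzzy set (slope-line equations). */
static int tri_membership(const Triangle *t, int x)
{
    if (x < t->xl || x > t->xr) return 0;                 /* outside the set      */
    if (x == t->xc) return MAX_MEMB;                      /* peak of the triangle */
    if (x > t->xc)                                        /* right of center      */
        return MAX_MEMB * (t->xr - x) / (t->xr - t->xc);
    return MAX_MEMB * (x - t->xl) / (t->xc - t->xl);      /* left of center       */
}

/* A rule: up to two input conditions (AND = MIN) and one output membership function. */
typedef struct {
    int      n_cond;
    int      input_idx[2];     /* which input signal each condition tests */
    Triangle cond_set[2];      /* input membership function per condition */
    Triangle out_set;          /* output membership function              */
} Rule;

static int rule_strength(const Rule *r, const int inputs[])
{
    int i, m, strength = MAX_MEMB;
    for (i = 0; i < r->n_cond; i++) {
        m = tri_membership(&r->cond_set[i], inputs[r->input_idx[i]]);
        if (m < strength) strength = m;                   /* fuzzy AND = minimum */
    }
    return strength;
}

/* Center-of-gravity defuzzification over the output range 0..MAX_X. */
static int defuzzify(const Rule rules[], int no_of_rules, const int inputs[])
{
    long marm = 0, area = 0;
    int  strengths[MAX_RULES];
    int  x, i, local, maxstrength;

    for (i = 0; i < no_of_rules; i++)                     /* FIG. 9: rule strengths   */
        strengths[i] = rule_strength(&rules[i], inputs);

    for (x = 0; x <= MAX_X; x++) {                        /* FIG. 11: integration     */
        maxstrength = 0;
        for (i = 0; i < no_of_rules; i++) {
            if (strengths[i] == 0) continue;              /* skip unsatisfied rules   */
            local = tri_membership(&rules[i].out_set, x);
            if (local > strengths[i]) local = strengths[i]; /* clip by rule strength  */
            if (local > maxstrength) maxstrength = local;   /* point-to-point maximum */
        }
        area += maxstrength;                              /* accumulate Area          */
        marm += (long)x * maxstrength;                    /* accumulate MArm          */
    }
    return area ? (int)(marm / area) : 0;                 /* COG = MArm / Area        */
}
```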
{"url":"http://www.patentgenius.com/patent/5491775.html","timestamp":"2014-04-17T01:10:38Z","content_type":null,"content_length":"59294","record_id":"<urn:uuid:b297f570-0a0a-4d92-a3a0-b266904bd9ac>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: A Note on Competitive Diffusion Through Social Networks
Noga Alon, Michal Feldman, Ariel D. Procaccia, Moshe Tennenholtz

We introduce a game-theoretic model of diffusion of technologies, advertisements, or influence through a social network. The novelty in our model is that the players are interested parties outside the network. We study the relation between the diameter of the network and the existence of pure Nash equilibria in the game. In particular, we show that if the diameter is at most two then an equilibrium exists and can be found in polynomial time, whereas if the diameter is greater than two then an equilibrium is not guaranteed to exist.

1 Introduction

Social networks such as Facebook and Twitter are modern focal points of human interaction. The pursuit of insights into the nature of this interaction calls for a game-theoretic analysis. Indeed, a number of papers (see, e.g., [5]) investigate variations on the following setting. The social network is represented by an undirected graph, where the vertices are users and edges connect users who are in a social relationship. Suppose, for example, that there are several competing applications, e.g., voice over IP systems, that are not interoperable. The users play a coordination game, where if two neighbors adopt the same system they get some reward that is based on the inherent quality
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/025/2138732.html","timestamp":"2014-04-20T16:14:40Z","content_type":null,"content_length":"8469","record_id":"<urn:uuid:fd45fa80-4a5b-4315-a2fd-db260cea8aac>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Speed of an average sailboat - Boat Design Forums

As has been mentioned, sailboats can be restricted to a speed governed by their LWL and estimated speed/length ratio (S/L). For most modern, displacement sailboats this S/L is accepted as 1.35 and is the qualifier against the LWL's square root. Sailboats are not limited to this 1.35 S/L and most modern performance cruising sailboats can move past this generic qualifier, some by a great amount (yep, better than 20 knots). On a modern 30' production cruiser, you may have a maximized LWL and possibly as much as 28' in LWL, making a theoretical maximum hull speed of 7.14 knots (about 8 1/4 MPH). In beam or freer winds, you may have the ability to climb over the bow wave and begin to plane, which would permit much higher speeds. There are also other things that can increase speed, but these are the type of things you can't just buy at the local West Marine store and bolt onto your yacht, suddenly going 30% faster. Sailboats are all about the big picture, in the balancing act that is required to motivate her and keep the yacht upright.
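The 7.14-knot figure quoted above follows directly from the rule of thumb the poster describes (the speed/length ratio of 1.35 multiplied by the square root of the waterline length in feet); a quick check of the arithmetic:

\[
v_{\text{hull}} \approx 1.35 \times \sqrt{\text{LWL}} = 1.35 \times \sqrt{28\ \text{ft}} \approx 1.35 \times 5.29 \approx 7.1\ \text{knots}
\]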
{"url":"http://www.boatdesign.net/forums/sailboats/speed-average-sailboat-18365.html","timestamp":"2014-04-17T18:23:21Z","content_type":null,"content_length":"75881","record_id":"<urn:uuid:cdec3994-8007-49a0-9c00-80ebe4bf02cf>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Related rates 2E-4: Sand is pouring on a conical pile at 12 m^3/min. The diameter of the base is always 3/2 the height. Find the rate at which the height is increasing when the pile is 2 m tall.

I get an answer that is different from the solution provided in the answer key. I'm going to show my work, and perhaps someone can tell me where I went wrong, or let me know that there is an error in the solutions.

\[r = \frac{3h}{4} \\ V = \frac{\pi r^2h}{3}\\ V = \frac{3\pi h^3}{16} \\ V' = \frac{9\pi h^2h'}{16}\\ V' = 12 ,h=2\\12 = \frac{9\pi 2^2h'}{16} \\h' = \frac{16}{3\pi}\]

Since this is a related rates question, I realized Leibniz notation would be more apt.

\[V = \frac{3\pi h^3}{16} \\ \frac{dV}{dh} = \frac{9\pi h^2}{16} \\\frac{dV}{dt} = \frac{dh}{dt} \cdot \frac{dV}{dh} \\ 12 = \frac{dh}{dt} \cdot \frac{9\pi h^2}{16} \\ \frac{dh}{dt} = \frac{12 \cdot 16}{9\pi h^2} \\ \frac{dh}{dt} = \frac{16}{3\pi}\]

Still not sure why the solution gave a different answer, but I'm moving on. I can't see how my answer could be wrong.

Reply: I am not sure either, but I think that there is some mistake in the solution. They have set dr/dt = 12, whereas the problem states that dV/dt = 12 m^3/min. Also, the solution solves for dV/dt, but the problem asks you to find dh/dt.

Reply: Thank you for taking the time to look at this problem even though it was a closed question. I think you are right. Are you working through the OCW course as well?

Reply: Not really, I was trying to work through the ODE course but haven't for a while. I like to look through peoples' questions about Calculus because it is a good review and there are many parts of Calculus I didn't really understand when I took it. Plus, it is fun! Also, what is a closed question? I saw this label, but didn't know where to find an explanation for it.

Reply: I guess this means the question is old and/or solved?

Reply: Well, on OpenStudy you are only able to have one open question at a time. The closed questions are really an archive of all questions asked. Some of the closed questions have been solved, some haven't, but it is assumed when the question is closed that the original poster has moved on to another question. However, I find it fun to look through the closed questions sometimes too, and solve one that never was solved.

Reply: Thank you! I also had stumbled on this error while trying to do all the exercises! I also found a mistake with problem 2B-1b. So yeah, it's reassuring to know our answers are correct! :)
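For reference, the fraction both derivations arrive at evaluates numerically to roughly 1.7 meters per minute:

\[
\frac{dh}{dt} = \frac{16}{3\pi} \approx 1.70\ \text{m/min}
\]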
{"url":"http://openstudy.com/updates/5057e2afe4b0a91cdf454702","timestamp":"2014-04-16T10:38:53Z","content_type":null,"content_length":"88731","record_id":"<urn:uuid:c0ab751e-b7d3-47ad-b0d1-a75d9c594b10>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Edgewater, NJ Calculus Tutor

Find an Edgewater, NJ Calculus Tutor

...I also received credit for VEE Economics and VEE Corporate Finance (part of the SOA Actuary track). I am highly proficient in probability and statistics, which are tested in great detail in actuarial science. I also have tutored in material covered on exam P/1 and exam FM/2.
Samuel
I am highly proficient in linear algebra.
21 Subjects: including calculus, geometry, statistics, algebra 1

...In pre-algebra, the student is introduced to the abstract concepts. These concepts have to be related to real life. This is my goal for this subject.
15 Subjects: including calculus, geometry, GRE, algebra 1

...Thank you and I look forward to hearing from you! SAT I: Math: 800, Verbal: 800. SAT II: Physics: 800, Chemistry: 800, Math IIC: 800, World History: 780, Writing: 770, Biology: 730. AP subject tests: English Literature and Composition: 5, BC Calculus: 5, Microeconomics: 5, Government: 4, Physics B: 5, European History: 5. I...
36 Subjects: including calculus, English, chemistry, reading

Over the past year, I have tutored more than a dozen students at all academic levels (K-12 & undergrad). I enjoy watching my students learn and grow over time, as they begin to grasp new material and develop an affinity for learning. I strive to imbed a deeper understanding of most subjects than is...
38 Subjects: including calculus, Spanish, algebra 1, GRE

...I spent most of my time working in engineering and chemistry. I'm currently working on my masters in epidemiology working with biostatistics and health policy. I also have 3 years of clinical research experience having been published in journals such as The Oncologist and Cancer.
12 Subjects: including calculus, chemistry, statistics, probability
{"url":"http://www.purplemath.com/Edgewater_NJ_calculus_tutors.php","timestamp":"2014-04-19T17:02:53Z","content_type":null,"content_length":"24073","record_id":"<urn:uuid:d0cebb62-01c0-4320-9dfc-b0865b704712>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Equilibrium Constants
Author: John Hutchinson

It was noted above that the equilibrium partial pressures of the gases in a reaction vary depending upon a variety of conditions. These include changes in the initial numbers of moles of reactants and products, changes in the volume of the reaction flask, and changes in the temperature. We now study these variations quantitatively.

Consider first the decomposition reaction N[2]O[4](g) ⇌ 2 NO[2](g). Following on our previous study of this reaction, we inject an initial amount of N[2]O[4](g) into a 100 L reaction flask at 298 K. Now, however, we vary the initial number of moles of N[2]O[4](g) in the flask and measure the equilibrium pressures of both the reactant and product gases. The results of a number of such studies are given in table 1.

Table 1: Equilibrium Partial Pressures in Decomposition Reaction

Initial n(N[2]O[4])   p(N[2]O[4]) [atm]   p(NO[2]) [atm]
0.1                   0.00764             0.033627
0.5                   0.071011            0.102517
1                     0.166136            0.156806
1.5                   0.26735             0.198917
2                     0.371791            0.234574
2.5                   0.478315            0.266065
3                     0.586327            0.294578
3.5                   0.695472            0.320827
4                     0.805517            0.345277
4.5                   0.916297            0.368255
5                     1.027695            0.389998

We might have expected that the amount of NO[2] produced at equilibrium would increase in direct proportion to increases in the amount of N[2]O[4] we begin with. table 1 shows that this is not the case. Note that when we increase the initial amount of N[2]O[4] by a factor of 10 from 0.5 moles to 5.0 moles, the pressure of NO[2] at equilibrium increases by a factor of less than 4.

The relationship between the pressures at equilibrium and the initial amount of N[2]O[4] is perhaps more easily seen in a graph of the data in table 1, as shown in figure 1. There are some interesting features here. Note that, when the initial amount of N[2]O[4] is less than 1 mol, the equilibrium pressure of NO[2] is greater than that of N[2]O[4]. These relative pressures reverse as the initial amount increases, as the N[2]O[4] equilibrium pressure keeps track with the initial amount but the NO[2] pressure falls short. Clearly, the equilibrium pressure of NO[2] does not increase proportionally with the initial amount of N[2]O[4]. In fact, the increase is slower than proportionality, suggesting perhaps a square root relationship between the pressure of NO[2] and the initial amount of N[2]O[4].

Figure 1: Equilibrium Partial Pressures in Decomposition Reaction

We test this in figure 2 by plotting P[NO[2]] at equilibrium versus the square root of the initial number of moles of N[2]O[4]. figure 2 makes it clear that this is not a simple proportional relationship, but it is closer. Note in figure 1 that the equilibrium pressure P[N[2]O[4]] increases close to proportionally with the initial amount of N[2]O[4]. This suggests plotting P[NO[2]] versus the square root of P[N[2]O[4]]. This is done in figure 3, where we discover that there is a very simple proportional relationship between the variables plotted in this way. We have thus observed that

p(NO[2]) = c · (p(N[2]O[4]))^(1/2)   (equation 6)

where c is the slope of the graph. equation 6 can be rewritten in a standard form (with K[p] = c^2):

K[p] = (p(NO[2]))^2 / p(N[2]O[4])   (equation 7)

To test the accuracy of this equation and to find the value of K[p], we return to table 1 and add another column in which we calculate the value of K[p] for each of the data points. Table 2 makes it clear that the "constant" in equation 7 truly is independent of both the initial conditions and the equilibrium partial pressure of either one of the reactant or product. We thus refer to the constant K[p] in equation 7 as the reaction equilibrium constant.
Figure 2: Relationship of Pressure of Product to Initial Amount of Reactant.
Figure 3: Equilibrium Partial Pressures.

Table 2: Equilibrium Partial Pressures in Decomposition Reaction

Initial n(N[2]O[4])   p(N[2]O[4]) [atm]   p(NO[2]) [atm]   K[p]
0.1                   0.00764             0.0336           0.148
0.5                   0.0710              0.102            0.148
1                     0.166               0.156            0.148
1.5                   0.267               0.198            0.148
2                     0.371               0.234            0.148
2.5                   0.478               0.266            0.148
3                     0.586               0.294            0.148
3.5                   0.695               0.320            0.148
4                     0.805               0.345            0.148
4.5                   0.916               0.368            0.148
5                     1.027               0.389            0.148

It is very interesting to note the functional form of the equilibrium constant. The product NO[2] pressure appears in the numerator, and the exponent 2 on the pressure is the stoichiometric coefficient on NO[2] in the balanced chemical equation. The reactant N[2]O[4] pressure appears in the denominator, and the exponent 1 on the pressure is the stoichiometric coefficient on N[2]O[4] in the chemical equation.

We now investigate whether other reactions have equilibrium constants and whether the form of this equilibrium constant is a happy coincidence or a general observation. We return to the reaction for the synthesis of ammonia, N[2](g) + 3 H[2](g) ⇌ 2 NH[3](g). In a previous section, we considered only the equilibrium produced when 1 mole of N[2] is reacted with 3 moles of H[2]. We now consider a range of possible initial values of these amounts, with the resultant equilibrium partial pressures given in table 3. In addition, anticipating the possibility of an equilibrium constant, we have calculated the ratio of partial pressures given by:

K[p] = (p(NH[3]))^2 / (p(N[2]) · (p(H[2]))^3)   (equation 8)

In table 3, the equilibrium partial pressures of the gases are in a very wide variety, including whether the final pressures are greater for reactants or products. However, from the data in table 3, it is clear that, despite these variations, K[p] in equation 8 is essentially a constant for all of the initial conditions examined and is thus the reaction equilibrium constant for this reaction.

Table 3: Equilibrium Partial Pressures of the Synthesis of Ammonia

V [L]   n(N[2])   n(H[2])   p(N[2])   p(H[2])   p(NH[3])   K[p]
10      1         3         0.0342    0.1027    4.82       6.2×10^5
10      0.1       0.3       0.0107    0.0322    0.467      6.0×10^5
100     0.1       0.3       0.00323   0.00968   0.0425     6.1×10^5
100     3         3         0.492     0.00880   0.483      6.1×10^5
100     1         3         0.0107    0.0322    0.467      6.0×10^5
1000    1.5       1.5       0.0255    0.00315   0.0223     6.2×10^5

Studies of many chemical reactions of gases result in the same observations. Each reaction equilibrium can be described by an equilibrium constant in which the partial pressures of the products, each raised to their corresponding stoichiometric coefficient, are multiplied together in the numerator, and the partial pressures of the reactants, each raised to their corresponding stoichiometric coefficient, are multiplied together in the denominator. For historical reasons, this general observation is sometimes referred to as the Law of Mass Action.
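As a quick check of equation 7 against the first row of table 2:

\[
K_p = \frac{(0.0336)^2}{0.00764} \approx 0.148
\]

which matches the tabulated value.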
{"url":"http://www.vias.org/genchem/react_equilib_gasphase_12597_split003.html","timestamp":"2014-04-19T19:34:45Z","content_type":null,"content_length":"16462","record_id":"<urn:uuid:09428182-e167-43b4-8807-2dfc2dc49b46>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
So much smarter than you

Ogawd, not more Berlinski! Just this once more, okay? Then I promise to stop. I'm sure I can. It's just that—well, how should I say it?—this is so damned weird! I'm scratching my head and need to get this off my chest. (Nice mixed metaphor there, if you'll excuse my patting myself on my own back. Ow!) I promise you will find this amazing.

Let's begin with a little math lesson. That's always a good way to gather a crowd. Come a little closer, kids, and Dr. Z will astonish you with his great wisdom! I take for my text a reading from the book of Tom the Apostol, Chapter 3, Section 2:

The equation lim[x→p] f(x) = A is read: “The limit of f(x) as x approaches p is equal to A,” or “f(x) approaches A as x approaches p.” It is also written without the limit symbol, as follows: f(x) → A as x → p. This symbolism is intended to convey the idea that we can make f(x) as close to A as we please, provided we choose x sufficiently close to p.

That was a perfectly straightforward description of the notion of a mathematical limit. Here's a concrete example to show you that you understand the basic idea: f(x) = x^2, which is often called the squaring function (the reason for this is left as an exercise for the student). Evaluating function notation is no big deal in a simple case like this. If you're told that x = 3, you just plug in 3 wherever you see x (the number you plug in is called the argument of the function), like so: f(3) = 3^2 = 9. Okay? If we revisit the limit idea with the squaring function in mind, replacing p with 3 and A with 9, we find ourselves looking at the statement that x^2 → 9 as x → 3. Does that seem reasonable? As x gets closer and closer to 3, x^2 gets closer and closer to 9. If you look at the accompanying graph of f, you may be able to persuade yourself that this is true. You would be correct to think there are subtleties which we have avoided here, but feel free nevertheless to congratulate yourself on having grasped a key fact about the squaring function: when two numbers are close together, their squares are also close together. The closer the numbers, the closer their squares. It's not exactly rocket science at this point.

Once upon a time in Prague

I finally got around this year to reading David Berlinski's A Tour of the Calculus, the 1995 book that tried to popularize calculus and helped him make his mark as an author. Since I usually assign some outside reading to my calculus students in order to keep them from thinking that everything is in the pages of their textbook, I welcomed the appearance of the book and its nontechnical approach. Before I found the time to read it on my own, however, some of my colleagues shared their disdain of its prolix prose and metaphor mania. My ardor cooled and the book got shoved to the back of my stack of reading material, where it's languished the past decade.

In the intervening years, Berlinski has been busily pimping for the Discovery Institute, where he is a senior fellow with its Center for Science and Culture. Despite his yeoman service on behalf of the exponents of intelligent design, Berlinski claims a general agnosticism about the topic and does not profess to be a creationist. He prefers merely to lob his mathematically laden bombs at the theory of evolution. Such nonsense made me even less inclined to dig out his book on calculus, because he struck me more and more as a player of perverse intellectual games, pretending a kind of neutrality while focusing all of his attacks on the scientific side of the evolution/creation argument. Berlinski has not escaped unscathed.
I wrote about his wrongheadedness myself in David Berlinski vs. Goliath, but for a real nuts-and-bolts evisceration of his arguments, see what Mark Chu-Carroll has to say about Berlinski's bad math at Good Math, Bad Math:

In my never-ending quest for bad math to mock, I was taking a look at the Discovery Institute's website, where I found an essay, On the Origin of Life, by David Berlinksi. Bad math? Oh, yeah. Bad, sloppy, crappy math.

This surfeit of criticisms of Berlinski, although I fully agreed with them, finally tweaked my conscience and reminded me of the unread calculus book. Perhaps seduction by the dark side of creationism had drawn him away from more constructive efforts. I had read enough of his prose and seen him on enough video to remain wary of his tendency to talk down to people, but A Tour of the Calculus was supposed to be an attempt to speak to the general public. I needed to read it to see for myself.

It didn't take too many pages for me to see that all of Berlinski's pomposity is at full play in A Tour of the Calculus. He lards the text with digressions and obscure allusions (perhaps I am not educated enough to appreciate him). The mathematical concepts are recast in odd ways that confuse rather than shed light. The writing is self-indulgent. These are disappointments rather than surprises. Nevertheless, I promised you a surprise, and it's time for me to deliver.

I began with a short math lesson on the squaring function because it is at the center of a bizarre tale recounted by Berlinski in Chapter 15, A Prague Interlude. Berlinski goes to Prague University to lecture on Tychonoff's theorem, a sophisticated result from topology. One of his hosts is a professor there who, Berlinski tells us, is extraordinarily intelligent. Please remember these things: Berlinski is giving a math talk attended by an extraordinarily intelligent mathematician. Here is an extended quote from the middle of his narration:

I am supposed to talk about Tychonoff's theorem, but to my surprise I find myself explaining the elementary calculus to a roomful of mathematicians, re-creating in my own mind the steps that Bolzano took in order to define continuity. For some reason I feel it absolutely crucial to explain how the concept of a limit is applied to functions. No one seems to mind or even notice.

“A function indicates a relationship in progress, arguments going to values. Given any real number, the function f(x) = x^2 returns its square, tak? ... In go arguments 1, 2, 3, out come the values, 1, 4, 9.... As the arguments of f get larger and larger, its values get larger and larger in turn.* ... Now imagine,” I say, “arguments coming closer and closer to the number 3, tak?” I walk back to the blackboard and show the men in my audience what I mean, writing, 2, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, ..., before the function. “What then happens to the function? How does it behave?” I ask, realizing with a sense of wonder somewhat at odds with the hard-boiled pose I usually affect, that a function is among the things in this world that behaves—it has a life of its own and so in its own way participates in the drama of things that are animate. “I mean,” I say, “what happens to the values of f as its arguments approach 3?” I look out toward my audience. Swoboda and Schweik are looking at me intently, their faces serene, without irony. It is plain to me that they do not know the answer yet.
“They approach, those values, the number 9, so that the function is now seen as running up against a limit, a boundary beyond which it does not go.” Swoboda leans back and sighs audibly, as if for the first time he had grasped a difficult principle. The room, with its wooden pews and narrow blackboard, is getting close. I say, “The concept of a limit, as it is applied to functions, is forged in the fire of these remarks.”

There's more. Much more. But enough. Can you explain this passage to me? Berlinski begins by admitting he is talking about elementary calculus, but then has his roomful of mathematicians rapt in awe as he reveals that the values of the squaring function approach 9 as its arguments approach 3. College students would not be surprised by this result, let alone a roomful of mathematicians. I presume Berlinski intends this as an extended metaphor, because otherwise he is cruelly mocking his Czech colleagues. The point of the metaphor, however, is completely and entirely lost on me. His account continues with yet another elementary limit. A moderately competent Calculus I student would dispatch it promptly and math teachers could do it in their heads, but Berlinski presents it to his audience with drama and mystery. The mathematicians leave the seminar at its conclusion and trudge down the street, exhausted. “Their tread is heavy and tired.” I know how they feel.

Berlinski has complained of his rough treatment at the hands of the proprietor of Good Math, Bad Math. While trying to defend his mathematical arguments against Mark Chu-Carroll, Berlinski mentions my comment about his visit to Prague University and objects:

If you check the index to my A TOUR OF THE CALCULUS under 'limits' you will find page references to a complete and precise definition. The section in Prague was, of course, intended to be dream-like. It is not the mathematicians who are diminishing in sophistication, but the narrator.

Was that supposed to clear things up? If you have the stomach for more, Berlinski has returned to Good Math, Bad Math for Round 2. If, like me, you're ready for a rest cure, here's an opportunity to avert your gaze. (But don't forget to bookmark Good Math, Bad Math for future reference. Good stuff is added every week.)

*Berlinski is assuming that the values are positive. If negative values are permitted, then larger values do not always result in larger squares. For example, −1 is greater than −2, but (−1)^2 is smaller than (−2)^2.

13 comments:

Perhaps Hungarians are not familiar with the expression: "Duh". Already the setup of the story is strange. Why would he lecture on Tychonoff's Theorem for such an audience? It is a beautiful theorem, but it is also textbook material.

Matthew at UPenn has clued me in to the fact that Berlinski's Princeton Ph.D. is in philosophy, not math. That could explain a couple of things. (Berlinski's bio at anova.org says the doctorate is in math, but the Discovery Institute agrees it's philosophy.)

It doesn't explain why Berlinski seems to be a bad philosopher too. In the paper "On the Origins of Life" at http://www.discovery.org he ends "But let us suppose that questions about the origins of the mind and the origins of life do lie beyond the grasp of “the model for what science should be.” In that case, we must either content ourselves with its limitations or revise the model." He is erroneously assuming that we can't answer questions by "We don't know (yet)", or that such an answer would necessarily depend on a limitation of science.
Goliath" there is an observation that Berlinski thinks a scientific theory must be highly mathematized. In the above piece Berlinski uses the phrase “the model for what science should be”, and attributes it to "the mathematicians J.H. Hubbard and B.H. West, in “On the Origins of the Mind” (Commentary, November 2004). The idea that science must conform to a certain model of inquiry is familiar. Hubbard and West identify that model with differential equations, the canonical instruments throughout physics and chemistry." It's very philosophical to try to confine the method of science to a specific model. It's also unclear if it's doable. After all, we know from Gödel that even rather simple formal systems needs to be indefinitely axiomatised as they are explored. If the result of science is unbounded, the boundedness of the models of it's methods isn't immediately obvious. Experience tells us otherwise, different fields use more or less different variants. So far, the method of science is more art than science. :-) BTW, it hit me that what you are describing Berlinski doing (or he himself describes) is rather the same as what Behe did after the Kitsmiller-Dover case, officially seemingly totally misinterpret the outcome of a social interaction. In Behe's case it was attributed to him being too incompetent to understand his incompetence. It looks to me the same may be the case for Berlinski. I'm trying to choose an interpretation of the actions of the Hungarian mathematicians. I can think of several: The one I like best, but is least likely, is that limits are elementary, but deep. It's possible to come back to limits over and over and come away with a new aspect of understanding. More likely (from what I have seen of math conference and seminar talks) is that his audience was sitting politely and quietly thinking to themselves: "is he ever going to get to something interesting?" "I wish I had brought a notepad so I could be working on my research instead of listening to this." 'Surely I can find something kind and complementary to say at the end of this, but I'm not attending his conference session next time." " I wonder if he's actually going to spend any time at all on the interesting stuff or if he's just going to wave his hand over it at the end?" "Surely I can stay awake and pretend to be interested for another x minutes." I'm afraid most mathematicians are pretty much resigned to boring talks in which 1/3 of the material is too easy, 1/3 is interesting (to us), and 1/3 is over our heads... I'd expect a talk on Tychonoff's theorem would be 1/2 too easy, and 1/2 just right, but I've attended too may talks to be surprised at one where nothing actually qualified as interesting. “They approach, those values, the number 9, so that the function is now seen as running up against a limit, a boundary beyond which it does not go.” Swoboda leans back and sighs audibly, as if for the first time he had grasped a difficult principle. It takes a lot of self-confidence to interpret loud sighs from your audience in quite that way. Incidentally, Berlinski seems to be claiming that if a function has a limiting value at some point, its value cannot "go beyond" that limit as it approaches that point. But that's trivially false; a limit is not a bound. For instance, x * cos(1/x) has the limit 0 as x approaches 0, but it oscillates and "goes past" 0 an infinite number of times on the way. I read that, years ago, and thought it was awful. 
Now that I know the author is a Creationist, I understand the whole segment about how numbers approach limits "as Man approaches God", without ever reaching them. What a kumquat.

You know, it's a common complaint of math teachers that students think you can't ever reach a limit (because students only understand the getting-closer-and-closer description, and not the epsilon-delta definition). So it's interesting that Berlinski would think that mathematicians would be so fascinated by an explanation of exactly the thing that we keep trying to get our students to understand is false. I think I'd feel pretty tired after an hour's worth of listening to drivel like that.

"I look out toward my audience. Swoboda and Schweik are looking at me intently, their faces serene, without irony. It is plain to me that they do not know the answer yet."

Umm, OK. Maybe, MAYBE, the mathematicians were thinking "There's going to be an insight somewhere in here..." Frequently, the biggest ideas come from reexamining the obvious. Of course, it sounds like Berlinski never reached any insight which hadn't been reached 300 years ago, so I'm guessing that everyone walked away thinking "That talk didn't go anywhere..."

It seems plausible to me that the mathematicians in the audience were simply stunned at the inanity of his example. They were probably like, "Oh my god, are all American mathematicians this stupid? How is this possible?" My thought was they may have been listening intently thinking "Surely we invited this guy for a reason? We've digressed into...limits?"

I'm just a quasi-linguist with minor mathematics experience--but I lived in Prague and studied Czech. "Tak," or "Tak?" does mean something like "Right?" "Yes?" or, "You know what I mean?", but in Polish, not Czech. Usually you don't hear nonnative speakers use words like that as a visitor in another country, but if they do, I think they would try to use the words of that country, which in Prague would be "Ano?" "No?" ("'no" means yes in Czech) or even "Yo?" There must be something I missed; it could easily have happened, because I only glanced quickly inside "A Tour..." and gagged immediately on the writing. I can't tolerate it. No matter what it is that the guy is trying to say. I don't care because I can't take my medicine that way. It reminds me of movies. Sometimes you see a movie where some guy is using pieces of his native language, when otherwise he speaks perfect English, like "!Gracias, senor!" while speaking probably with an American from Hollywood. He must be doing it to make sure we all know that we're in Guatemala. I guess when it came to the strange and difficult words "thank you," his mind went blank. If Berlinski's listeners could understand "A function indicates a relationship in progress, arguments going to values", why can't they understand, "Right? You know what I mean? Do you understand?"
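For the record, the epsilon-delta argument the commenters allude to really is short for this particular limit. Given any ε > 0, take δ = min(1, ε/7); then

\[
|x - 3| < \delta \le 1 \;\Rightarrow\; |x + 3| < 7 \;\Rightarrow\; |x^2 - 9| = |x - 3|\,|x + 3| < 7\delta \le \varepsilon,
\]

which is all it takes to establish that x^2 → 9 as x → 3.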
{"url":"http://zenoferox.blogspot.com/2006/04/so-much-smarter-than-you.html","timestamp":"2014-04-21T02:01:38Z","content_type":null,"content_length":"134699","record_id":"<urn:uuid:d03288ed-0b33-480e-a4e6-441c08ef5fd8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
What is DCDT+?

DCDT+™ is a Windows® program for analysis of sedimentation velocity (SV) data that implements the dc/dt method originally developed by Walter Stafford, and also implements fitting of the derived g(s*) distributions as a mixture containing up to 5 discrete (non-interacting) species. The program is designed to be very easy to use, even for novices, yet it also offers many options for the 'power user' when they are needed. It uses multi-page analysis 'documents' and a familiar user interface with toolbars and buttons, making it similar to other Windows programs to reduce the learning curve. It also incorporates a number of intelligent 'wizards' that automatically accomplish tasks such as locating the meniscus position, removal of systematic noise ("jitter") and fringe jumps from interference scans, and even selection of which scans should be used for the analysis.

This is not a package that tries to do everything---its purpose is to implement one particular approach, do that very well and very easily, and do it in a highly reproducible and well-documented way. The success of this approach is demonstrated by its use in 200+ publications. More significantly, many of these are from intermittent or casual users of AUC, proving you don't have to devote years to learning data analysis to successfully answer important research questions with SV.

The program is now distributed as "freeware" and requires no registration. The only restriction is that it cannot be sold or packaged with other commercial software. The author asks only that you cite it properly when publishing your analyzed results.

Unique features:

- Modern multi-document, Explorer-style user interface
- Multiple analyses can be open simultaneously, and their g(s*) distributions are automatically incorporated into an overlay graph [example below]
- Explorer-like navigation tree to move to specific analysis pages (analysis steps) and between analysis documents
- Click-and-drag setting of meniscus marker, data limits, peak limits, etc. [see image below]
- New algorithm for fitting of g(s*) distributions or dc/dt curves as mixtures gives s, D, and M values for each species with theoretical accuracy of better than 0.1%, independent of the number of scans used in the analysis
- Load the entire run and then use scroll bars to select the subset of scans to be analyzed (or just let the program wizard do it for you) [see image below]
- Optionally normalizes g(s*) distributions to allow easy comparisons of samples at different concentrations (a great diagnostic to distinguish interacting systems from mixtures, as in example shown 3 pictures above)
- 'Standard' program mode alters user interface to hide options you probably don't need; 'Advanced' mode provides more options to power users
- Program wizards automatically accomplish many tasks
- Once scans are loaded, you can calculate the g(s*) distribution and go all the way to a completed fit as a single species simply by repeatedly hitting the Enter key to accept the wizards' default choices
- New! Generates a log file that records every step of the data analysis, every option selected, every intermediate fitting result (including a table and graph), when it was done, and by whom (for computers with authenticated log-in servers)
- Evaluates fitted parameter confidence intervals using any of three orthogonal approaches: F statistics, the bootstrap method, or a Monte Carlo algorithm that puts the random noise in the theoretical scans [to eliminate any concerns about effects of the transformation to g(s*)]
- True local Help file to give context-sensitive Help for each dialog box or control, with extensive tutorials and a searchable index. (A PDF file to print a user manual is also available to registered users)
- Generates nicely formatted printed reports with tables, which document every factor that influences the results. These reports (with graphs) can be pasted into your word processor or electronic lab notebook
- Saved analysis documents record every parameter, including the raw data, in a single file so you can re-load your analysis back to exactly the same state
- Analysis documents appear in your 'My Recent Documents' folder, just like your other documents
- Creates 9 types of publication-quality graphs of fit results

DCDT+ is used by more than 100 laboratories around the world (partial list here).

How does DCDT+ differ from the ORIGIN® version of DCDT supplied by Beckman Coulter?

- DCDT+ provides correct values of D and M from fitting the g(s*) distributions; the Beckman implementations (even the newest Origin 6 versions) contain an error in converting the width of the fitted Gaussians to diffusion coefficients, so the returned D and M values are never correct
- Just load the entire run in one operation and then select which scans to analyze using slider controls; no more struggling to figure out which scans to load and analyze
- DCDT+ provides a quick test for whether you are broadening the curves by including too many scans; get the best signal/noise possible without hoping and guessing about when you are averaging too many scans or struggling with cumbersome manual calculations
- Standard errors or confidence intervals are calculated for all fitted parameters; what good are fitting results if you don't know the uncertainty in the values returned?
- DCDT+ comes with a comprehensive, local context-sensitive Help file and optional printed manual, including step-by-step tutorials, tips, "how-to" guides, frequently-asked questions, a searchable index, and images of all forms
- Full reports of analyses are provided for documented, reproducible results
- Sedimentation coefficients can be internally converted to s[20,w] values
- DCDT+ provides a unique option of fitting to dc/dt curves, giving improved accuracy and improved resolution of multiple species in some cases
- DCDT+ shows you the dc/dt curves for every individual scan pair so you can identify abnormal interference scans (outliers) before you unknowingly include them in your analyses; bad scans can be removed from the analysis without starting over and reloading all the scans
- Reproducible setting of the meniscus position; no more struggling with the Origin® data marker and a tiny graph
- Results from fits give true molecular masses for any value of v-bar and density; no more cumbersome manual calculations
- When the zero of the dc/dt curves is manually adjusted, that fact and the amount of the adjustment is documented in fit reports; what good are results that can't be documented or reproduced?
- During multi-species fits the masses and/or sedimentation coefficients can be constrained to ratios appropriate for small oligomers
- Includes a Claverie finite-element velocity experiment simulator for "what-if" testing and self-teaching
- Unique option of using an alternative algorithm for calculating g(s*) distributions that works better than the standard Stafford algorithm when the time span of the scans grows long
- Computes the number-, weight-, z-, and z+1-average sedimentation coefficients and total concentration from both the g(s*) and ĝ(s*) distributions, with error bars for all these values
- Handles an unlimited number of scans (up to available memory), not just 40
- Computes and displays ĝ(s*) (g-hat), the distribution that should be used when studying interacting systems, as well as g(s*)

How does DCDT+ differ from the ls-g(s) or c(s) methods in Peter Schuck's SEDFIT?
- The ls-g(s) distributions derived by SEDFIT are equivalent to the g(s*) distributions derived via the DCDT method only when the time span of the scans used in either method is small (a small number of scans). As the time span grows larger they become non-equivalent, and for ls-g(s) there is no longer any direct theoretical relationship between the width of a peak and the diffusion coefficient (or mass) of that species as there is for g(s*).
- For g(s*) derived via the DCDT method the removal of baseline noise (time-independent noise) and interference 'jitter' (radially-independent noise) is done via simple arithmetic and is model-independent. For ls-g(s) and c(s) the removal of this systematic noise is model-dependent.
- For c(s) distributions the diffusion information is essentially removed, which enhances the resolution for mixtures. The peak widths in c(s) are a function of signal/noise ratio and regularization parameters; they have no physical meaning.
- False peaks are often generated if the ls-g(s) method is applied to scans encompassing a large fraction of the run, especially for samples where diffusion is significant (conditions where it does not provide a good fit of the raw data). The c(s) method also generates false peaks if not used very carefully or whenever it cannot generate a good fit of the raw data. The dc/dt method never generates false peaks.
- Fitting of ls-g(s) distributions to one or more peaks is not implemented in SEDFIT; peak fitting makes no sense for c(s) distributions
- Neither ls-g(s) nor c(s) provides error bars for the distributions or properties of individual peaks
- SEDFIT does not provide reports, and does not store the analysis in a file so it can be re-opened
- SEDFIT cannot have multiple analyses open at the same time, and cannot overlay ls-g(s) or c(s) distributions from different samples
- SEDFIT does not have a local context-sensitive Help file with searchable index, or offer a printed manual

System Requirements

DCDT+ version 2 will run under Windows 98, NT, 2000, XP, Vista, or Windows 7. Use via Windows dual-boot configurations on Macintosh systems is neither supported nor guaranteed. This program also requires the Microsoft .NET framework to be present on the computer. This is often already true for newer computers. If it is not already present, this is detected during installation and a link is given so it can be downloaded from Microsoft and installed. Although DCDT+ will run correctly at a 640x480 (VGA) video resolution, a resolution of 1024 x 768 or higher is highly recommended.
The program requires approximately 8 MBytes of disk space (most of which is for the comprehensive Help file).

How should I cite this program?

1. The primary citation for this program is Philo, J. S. (2006). Improved methods for fitting sedimentation coefficient distributions derived by time-derivative techniques. Anal. Biochem. 354, 238-246.
2. Because the original development of the dc/dt method was by Walter Stafford, please always also cite Walter Stafford's original paper: Stafford, W.F., III. 1992. Boundary analysis in sedimentation transport experiments: A procedure for obtaining sedimentation coefficient distributions using the time derivative of the concentration profile. Analytical Biochemistry 203:295-301.
3. The program itself should be mentioned in your Methods section as DCDT+ by John Philo, with the version number.

Specialized method citations

The reference for the special multi-segment dc/dt calculations is Philo, J. S. (2011). Limiting the sedimentation coefficient range for sedimentation velocity data analysis: Partial boundary modeling and g(s*) approaches revisited. Anal. Biochem. 412, 189-202.

The reference for fitting to dc/dt data rather than g(s*), and for the Use true mean time button (the 'broad algorithm'), is: Philo, J.S. (2000) A method for directly fitting the time derivative of sedimentation velocity data and an alternative algorithm for calculating sedimentation coefficient distribution functions. Analytical Biochemistry, 279, 151-163.
{"url":"http://www.jphilo.mailway.com/dcdt+.htm","timestamp":"2014-04-19T11:56:39Z","content_type":null,"content_length":"42832","record_id":"<urn:uuid:6a333bad-fc14-4474-be31-078b0121c36a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Items tagged with discrete

Hello, I am trying to do a Fourier transform using the package DiscreteTransforms. The function is a Gaussian function for now. Here is the code I tried:

> X := Vector(1000, proc (k) options operator, arrow; (1/200)*k-5/2 end proc);
> Y := Vector(1000, proc (k) options operator, arrow; evalf(exp(-10*((1/100)*k-5)^2)) end proc);
> X2, Y2 := FourierTransform(X, Y);
Vector[column](%id = 18446744080244879358), Vector[column](%id = 18446744080244879478)
> plot(X2, Re(Y2));

The program returns two vectors, X2 and Y2, which are supposed to be the Fourier transform of a Gaussian, so... a Gaussian. But when I plot the result, X2 on the horizontal and Y2 on the vertical, the graph doesn't resemble a Gaussian function, or any function at all. Please help!!
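A likely explanation, sketched here in Python/NumPy purely for illustration (the question is about Maple, and none of the names below come from the original post): the Gaussian is centred in the middle of the sampling window rather than at the first sample, so every coefficient of the discrete transform carries a rapidly alternating phase factor. The real part on its own therefore oscillates in sign and looks nothing like a Gaussian, while the magnitude of the transform is still Gaussian-shaped once the zero-frequency term is shifted to the centre.

import numpy as np

# Rebuild the data from the post: a Gaussian centred at sample k = 500 of 1000.
k = np.arange(1, 1001)
x = k / 200.0 - 2.5                              # the X vector from the post
y = np.exp(-10.0 * (k / 100.0 - 5.0) ** 2)       # = exp(-40*x^2), centred at x = 0

Y = np.fft.fft(y)

# Because the peak sits mid-array rather than at index 0, Y[m] is multiplied by
# a phase close to (-1)^m, so Re(Y) flips sign from one frequency to the next.
print(np.real(Y[:4]))

# The magnitude is still a Gaussian; fftshift moves frequency 0 to the middle.
mag = np.fft.fftshift(np.abs(Y))
print(int(np.argmax(mag)))                        # peak lands at the centre of the array

Plotting the shifted magnitude (or using the corresponding shift in Maple) should recover the expected Gaussian shape.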
{"url":"http://www.mapleprimes.com/tags/discrete","timestamp":"2014-04-18T03:40:57Z","content_type":null,"content_length":"55015","record_id":"<urn:uuid:28100d08-d736-4768-82a2-efc6179d0856>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
The Easiest Hard Problem The Easiest Hard Problem One of the cherished customs of childhood is choosing up sides for a ball game. Where I grew up, we did it this way: The two chief bullies of the neighborhood would appoint themselves captains of the opposing teams, and then they would take turns picking other players. On each round, a captain would choose the most capable (or, toward the end, the least inept) player from the pool of remaining candidates, until everyone present had been assigned to one side or the other. The aim of this ritual was to produce two evenly matched teams and, along the way, to remind each of us of our precise ranking in the neighborhood pecking order. It usually worked. None of us in those days—not the hopefuls waiting for our name to be called, and certainly not the two thick-necked team leaders—recognized that our scheme for choosing sides implements a greedy heuristic for the balanced number partitioning problem. And we had no idea that this problem is NP-complete—that finding the optimum team rosters is certifiably hard. We just wanted to get on with the game. And therein lies a paradox: If computer scientists find the partitioning problem so intractable, how come children the world over solve it every day? Are the kids that much smarter? Quite possibly they are. On the other hand, the success of playground algorithms for partitioning might be a clue that the task is not always as hard as that forbidding term "NP-complete" tends to suggest. As a matter of fact, finding a hard instance of this famously hard problem can be a hard problem—unless you know where to look. Some recent results, which make use of tools borrowed from physics and mathematics as well as computer science, have now shown exactly where the hard problems hide. The organizers of sandlot ball games are not the only ones with an interest in efficient partitioning. Closely related problems arise in many other resource-allocation tasks. For example, scheduling a series of jobs on a dual-processor computer is a partitioning problem: Sorting the jobs into two sets with equal running time will balance the load on the processors. Another example is apportioning the miscellaneous assets of an estate between two heirs. So What's the Problem? Here is a slightly more formal statement of the partitioning problem. You are given a set of n positive integers, and you are asked to separate them into two subsets; you may put as many or as few numbers as you please in each of the subsets, but you must make the sums of the subsets as nearly equal as possible. Ideally, the two sums would be exactly the same, but this is feasible only if the sum of the entire set is even; in the event of an odd total, the best you can possibly do is to choose two subsets that differ by 1. Accordingly, a perfect partition is defined as any arrangement for which the "discrepancy"—the absolute value of the subset difference—is no greater than 1. Try a small example. Here are 10 numbers—enough for two basketball teams—selected at random from the range between 1 and 10: Can you find a perfect partition? In this instance it so happens there are 23 ways to divvy up the numbers into two groups with exactly equal sums (or 46 ways if you count mirror images as distinct partitions). Almost any reasonable method will converge on one of these perfect solutions. This is the answer I stumbled onto first: (2 5 3 10 7) (2 5 3 9 8) Both subsets sum to 27. This example is in no way unusual. 
As a matter of fact, among all sets of 10 integers between 1 and 10, more than 99 percent have at least one perfect partition. (To be precise, of the 10 billion such sets, 9,989,770,790 can be perfectly partitioned. I know because I counted them—and it wasn't easy.) Maybe larger sets are more challenging? With a list of 1,000 numbers between 1 and 10, working the problem by pencil-and-paper methods gets tedious, but a simple computer program makes quick work of it. A variation on the two-bullies algorithm does just fine. First sort the list of numbers according to magnitude, then go through them in descending order, assigning each number to whichever subset currently has the smaller sum. This is called a greedy algorithm, because it takes the largest numbers first. The greedy algorithm almost always finds a perfect partition for a list of a thousand random numbers no greater than 10. Indeed, the procedure works equally well on a set of 10,000 or 100,000 or a million numbers in the same range. The explanation of this success is not that the algorithm is a marvel of ingenuity. Lots of other methods do as well or better. A particularly clever algorithm was described in 1982 by Narendra Karmarkar and Richard M. Karp, who were then both at the University of California, Berkeley. It is a "differencing" method: At each stage you choose two numbers from the set to be partitioned and replace them by the absolute value of their difference. This operation is equivalent to deciding that the two selected integers will go into different subsets, without making an immediate commitment about which numbers go where. The process continues until only one number remains in the list; this final value is the discrepancy of the partition. You can reconstruct the partition itself by working backward through the series of decisions. In the search for perfect partitions, the Karmarkar-Karp procedure succeeds even more often than the greedy algorithm. At this point you may be ready to dismiss partitioning as just a wimpy problem, unworthy of the designation NP-complete. But try one more example. Here is another list of 10 random numbers, chosen not from the range 1 to 10 but rather from the range between 1 and 2^10, or 1,024: This set does have a perfect partition, but there is just one, and finding it takes a little persistence. The greedy algorithm does not succeed; it gets stuck on a partition with a discrepancy of 32. Karmarkar-Karp does slightly better, reducing the discrepancy to 26. But the only sure way to find the one perfect partition is to check all possible partitions, and there are 1,024 of them. If this challenge is still not daunting enough, try 100 numbers ranging up to 2^100, or 1,000 numbers up to 2^1000. Unless you get very lucky, deciding whether such a set has a perfect partition will keep you busy for quite a few lifetimes. Where the Hard Problems Are To make sense of what's going on here, it's necessary first to be clear about what it means for a problem to be hard. Computer science has reached a rough consensus on this issue: Easy problems can be solved in "polynomial time," whereas hard problems require "exponential time." If you have a problem of size x, and you know an algorithm that can solve it in x steps or x^2 steps or even x^50 steps, then the problem is officially easy; all of these expressions are polynomials in x. But if your best algorithm needs 2^x steps or x^x steps, you're in trouble. Such exponential functions grow faster than any polynomial for large enough values of x. 
The easy, polynomial-time problems are said to lie in class P; hard problems are all those not in P. The notorious class NP consists of some rather special hard problems. As far as anyone knows, solving these problems requires exponential time. On the other hand, if you are given a proposed solution, you can check its correctness in polynomial time. (NP stands for "nondeterministic polynomial"—although admittedly that's not much help in understanding the concept.) Where does number partitioning fit into this taxonomy? Both the greedy algorithm and Karmarkar-Karp have polynomial running time; they can partition a set of n numbers in less than n^2 steps. For purposes of classification, however, these algorithms simply don't count, because they're not guaranteed to find the right answer. The hierarchy of problem difficulty is based on a worst-case analysis, which disqualifies an algorithm if it fails on even one problem instance. The only known method that does pass the worst-case test is the brute-force approach of examining every possible partition. But this is an exponential-time algorithm: Each integer in the set can be assigned to either of the two subsets, so that there are 2^n partitions to be considered. Partitioning is a classic NP problem. If someone hands you a list of n numbers and asks, "Does this set have a perfect partition?" you can always find the answer by exhaustive search, but this can take an exponential amount of time. If you are given a proposed perfect partition, however, you can easily verify its correctness in polynomial time. All you need to do is add up the two subsets and compare the sums, which takes time proportional to n. Indeed, partitioning is not just an ordinary member of the class NP; it is one of the elite NP problems designated NP-complete. What this means is that if someone discovered a polynomial-time algorithm for partitioning, it could be adapted to solve all NP problems in polynomial time. Each NP-complete problem is a skeleton key to the entire class NP. Given these sterling credentials as a hard problem, it's all the more perplexing that partitioning often yields so readily to simple and unsophisticated methods such as the greedy algorithm. Does this problem have teeth, or is it just a sheep in wolf's clothing? Boiling and Freezing Numbers An answer to this question has emerged in the past few years from a campaign of research that spans at least three disciplines—physics, mathematics and computer science. It turns out that the spectrum of partitioning problems has both hard and easy regions, with a sharp boundary between them. On crossing that frontier, the problem undergoes a phase transition, analogous to the boiling or freezing of water. The standard classification of computing problems as P or NP and so on assumes that difficulty increases as a function of problem size. In the case of number partitioning, two factors determine the size of a problem instance: how many numbers are in the set, and how big they are. Specifically, the size of an instance is the number of bits needed to represent it, and this depends both on the number of integers, n, and on the number of bits, m, in a typical integer. Thus a set of 100 integers in a range near 2^100 has n=100 and m=100 and a problem size of 10,000 bits. Given this measure of size, one might expect that partitioning problems would get harder as the product of n and m increases. This conclusion is not entirely wrong; algorithms do have to labor longer over bigger problems, if only to read the input. 
Yet, surprisingly, the product nm is not the best predictor of difficulty in number partitioning. Instead it's the ratio m/n. Some simple reasoning about extreme cases takes the mystery out of this assertion. Suppose the ratio of m to n is very small, say with m=1 and n=1,000. The task, then, is to partition a set of 1,000 numbers, each of which can be represented by a single bit. This is trivially easy: A one-bit positive integer must be equal to 1, and so the input to the problem is a list of a thousand 1s. Finding a perfect partition is just a matter of counting. At the opposite extreme, consider the case of m=1,000 and n=2 (the smallest n that makes sense in a partitioning problem). Here the separation into subsets is easy—how many ways can you partition a set of two items?—but the likelihood of a perfect partition is extremely low. It's just the probability that two randomly selected 1,000-bit numbers will be equal. Now it becomes clear why in my baby-boom neighborhood we so easily formed ourselves into well-matched teams. Among the dozen or more kids who would gather for a game, athletic abilities may have differed by a factor of 10, but surely not by a factor of 1,000. The parameter m is the base-2 logarithm of this range of talents, and so it was no greater than 3 or 4. Thus m/n was rather small—less than 1—and we had many acceptable solutions to choose from. Perhaps if we had had a young Michael Jordan or Mia Hamm in the neighborhood, I would take a different view of the number-partitioning problem today.

Antimagnetic Numbers

The ratio m/n divides the space of partitioning problems into two regions. Somewhere between them—between the fertile valley where solutions bloom everywhere and the stark desert where even one perfect partition is too much to expect—there must be a crossover region. There lies the phase transition. The concept of a phase transition comes from physics, but it also has a long history of applications to mathematical objects. Forty years ago Paul Erdős and Alfred Rényi described phase transitions in the growth of random graphs (collections of vertices and connecting edges). By the 1980s, phase transitions had been observed in many combinatorial processes. The most thoroughly explored example is an NP-complete problem called satisfiability. A 1991 article by Peter Cheeseman, Bob Kanefsky and William M. Taylor, titled "Where the Really Hard Problems Are," conjectured that all NP problems have a phase transition and suggested that this is what distinguishes them from problems in P. Meanwhile, in another paper with a provocative title ("The Use and Abuse of Statistical Mechanics in Computational Complexity"), Yaotian Fu of Washington University in St. Louis argued that number partitioning is an example of an NP problem without a phase transition. This assertion was disputed by Ian P. Gent of the University of Strathclyde and Toby Walsh of the University of York, who presented strong computational evidence for the existence of a phase transition. Their measurements suggested that the critical value of the m/n ratio, where easy problems give way to hard ones, is about 0.96. Stephan Mertens, of Otto von Guericke Universität in Magdeburg, Germany, has now given a thoroughgoing analysis of number partitioning from a physicist's point of view. His survey paper (which has been my primary source in writing this article) appears in a special issue of Theoretical Computer Science devoted to phase transitions in combinatorial problems.
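Though it is not part of the original article, the easy/hard contrast is simple to see numerically. The sketch below (Python; the instance size, bit counts and trial count are arbitrary choices made only for this illustration) estimates how often a random instance of n = 15 numbers has a perfect partition as the number of bits m grows; the estimated fraction drops from essentially 1 toward 0 as m grows past n.

import random

def has_perfect_partition(nums):
    # Dynamic program over achievable subset sums; fine for small instances.
    total = sum(nums)
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    target = total // 2    # a perfect partition leaves a discrepancy of 0 or 1
    return target in reachable or (target + total % 2) in reachable

n, trials = 15, 100
for m in (4, 8, 15, 30):
    hits = sum(
        has_perfect_partition([random.randint(1, 2 ** m) for _ in range(n)])
        for _ in range(trials)
    )
    print(f"m={m:2d}  fraction with a perfect partition ~ {hits / trials:.2f}")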
As a means of understanding the phase transition, Mertens sets up a correspondence between the number-partitioning problem and a model of a physical system. To see how this works, it helps to think of the partitioning process in a new context. Instead of unzipping a list of numbers into two separate lists, keep all the numbers in one place and multiply some of them by –1. The idea is to negate just the right selection of numbers so that the entire set sums to 0. Now comes the leap from mathematics to physics: The collection of positive and negative numbers is analogous to an array of atoms in a magnetic material, with the plus and minus signs representing up and down spins. Specifically, the system resembles an infinite-range antiferromagnet, where every atom can feel the influence of every other atom, and where the favored configuration has spins pointing in opposite directions. This strategy for studying partitioning may seem slightly perverse. It takes a simply stated problem in combinatorics and turns it into a messy and rather obscure system in statistical mechanics. Why bother? The reason is that physics offers some powerful tools for predicting the behavior of such a system. In particular, the interplay of energy and entropy governs how the collection of spins can be expected to evolve toward a state of stable equilibrium. The energy in question comes from the interaction between atomic spins (or between positive and negative numbers); because the system is an antiferromagnet, the energy is minimized when the spin vectors are oppositely oriented (or when the subsets sum to zero). The entropy measures the number of ways of achieving the minimum-energy state; a system with a unique ground state (or just one perfect partition) has zero entropy. When there are many equivalent ways of minimizing the energy (or partitioning a set perfectly), the entropy is high. The ratio m/n controls the state of this system. When m is much greater than n, the spins almost always have just one configuration of lowest energy. At the other pole, when m is much smaller than n, there are a multitude of zero-energy states, and the system can land in any one of them. Mertens showed that the transition between the two phases comes at m/n=1, at least in the limit of very large n. And he derived corrections for finite n that may explain why Gent and Walsh measured a slightly different transition point. Finally, Mertens showed just how hard the hard phase is. Searching for the best partition in this region is equivalent to searching a randomly ordered list of random numbers for the smallest element of the list. Only an exhaustive traverse of the entire list can guarantee an exact result. What's worse, there are no really good heuristic methods; no shortcuts are inherently superior to blind, random sampling. Yet this is not to say that heuristics for partitioning are totally worthless. On the contrary, the phase-transition model helps explain how the Karmarkar-Karp algorithm works. The differencing operation reduces the range of the numbers and so diminishes m/n, sliding the problem toward the easy phase. The algorithm can't be counted on to find the very best partition, but it's an effective way of avoiding the worst ones. Back to Mathematics The work of Mertens has answered most of the major questions about number partitioning, and yet it's not quite the end of the story. 
The methods of statistical mechanics were developed as tools for describing systems made up of vast numbers of component parts, such as the atoms of a macroscopic specimen of matter. When applied to number partitioning, the methods are strictly valid only for very large sets, where m and n both go to infinity (while maintaining a fixed ratio). Those who need to solve practical partitioning problems are generally interested in somewhat smaller values of m and n. Mertens's results have another limitation as well. In calculating the distribution of optimal solutions, he had to adopt a simplifying approximation at one crucial step, assuming that certain energies in the spin system are random. The true distribution of those energies is harder to deduce, but the task has been undertaken by Christian Borgs and Jennifer T. Chayes of Microsoft Research and Boris Pittel of Ohio State University. They have reclaimed the problem from the realm of physics and brought it back to mathematics. Their paper giving detailed proofs runs to nearly 40 pages. To their surprise, Borgs, Chayes and Pittel discovered that Mertens's random-energy approximation actually yields exact results in the limiting case of infinite m and n. Under these conditions the phase transition is perfectly sharp. Anywhere below the critical ratio m/n=1, the probability of a perfect partition is 1; above this threshold the probability is 0. Borgs, Chayes and Pittel also give a precise account of how the phase transition softens and broadens for smaller problems. When n is finite, the probability of a perfect partition varies smoothly between 0 and 1, with a "window" of finite width surrounding the critical ratio. If my friends and I back on the ball field had known all this, would we have played better games? Probably not, but in retrospect I take satisfaction in the thought that our ritual for choosing teams was algorithmically well-founded. © Brian Hayes
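The heuristics and the exhaustive search that the article describes are short enough to spell out. The following sketch is an illustration in Python (it is not from Hayes's article, and the ten-number list at the end is made up): the greedy rule, the Karmarkar-Karp differencing scheme, and the brute-force search whose 2^n cost is what makes the problem formally hard.

import heapq

def greedy_discrepancy(nums):
    # Largest number first, always to the side with the smaller running sum.
    a = b = 0
    for x in sorted(nums, reverse=True):
        if a <= b:
            a += x
        else:
            b += x
    return abs(a - b)

def karmarkar_karp_discrepancy(nums):
    # Repeatedly replace the two largest numbers by their difference
    # (i.e. commit them to opposite subsets without saying which is which).
    heap = [-x for x in nums]
    heapq.heapify(heap)
    while len(heap) > 1:
        largest = -heapq.heappop(heap)
        second = -heapq.heappop(heap)
        heapq.heappush(heap, -(largest - second))
    return -heap[0]

def optimal_discrepancy(nums):
    # Exhaustive search over all 2**n subsets: exponential time.
    total = sum(nums)
    best = total
    for mask in range(1 << len(nums)):
        s = sum(x for i, x in enumerate(nums) if mask >> i & 1)
        best = min(best, abs(total - 2 * s))
    return best

nums = [771, 121, 281, 854, 885, 734, 486, 1003, 83, 62]   # arbitrary example
print(greedy_discrepancy(nums), karmarkar_karp_discrepancy(nums), optimal_discrepancy(nums))

On small lists like the one above the brute-force answer is available for comparison; for lists of a thousand numbers only the two heuristics remain practical.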
{"url":"http://www.americanscientist.org/issues/id.3278,y.0,no.,content.true,page.5,css.print/issue.aspx","timestamp":"2014-04-17T19:37:12Z","content_type":null,"content_length":"114649","record_id":"<urn:uuid:eec18d23-cbb0-498d-a833-65dc0d6c34d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 167

1. (2003; cited by 1000, 29 self) A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and communicate. In this memorandum we develop their static and dynamic theory. In part I, we ...

2. (1992; cited by 567, 16 self) Term Rewriting Systems play an important role in various areas, such as abstract data type specifications, implementations of functional programming languages and automated deduction. In this chapter we introduce several of the basic concepts and facts for TRS's. Specifically, we discuss Abstract Reduction Systems ...

3. (Inform. and Control, 1984; cited by 364, 51 self) Within the context of an algebraic theory of processes, an equational specification of process cooperation is provided. Four cases are considered: free merge or interleaving, merging with communication, merging with mutual exclusion of tight regions, and synchronous process cooperation. The rewrite system behind the communication algebra is shown to be confluent and terminating (modulo its permutative reductions). Further, some relationships are shown to hold between the four concepts of merging. © 1984 Academic Press, Inc.

4. (Journal of the ACM, 1996; cited by 249, 14 self) In comparative concurrency semantics, one usually distinguishes between linear time and branching time semantic equivalences. Milner's notion of observation equivalence is often mentioned as the standard example of a branching time equivalence. In this paper we investigate whether observation equivalence really does respect the branching structure of processes, and find that in the presence of the unobservable action τ of CCS this is not the case. Therefore, the notion of branching bisimulation equivalence is introduced which strongly preserves the branching structure of processes, in the sense that it preserves computations together with the potentials in all intermediate states that are passed through, even if silent moves are involved. On closed CCS-terms branching bisimulation congruence can be completely axiomatized by the single axiom scheme: a.(τ.(y + z) + y) = a.(y + z) (where a ranges over all actions) and the usual laws for strong congruence. We also establish that for sequential processes observation equivalence is not preserved under refinement of actions, whereas branching bisimulation is. For a large class of processes, it turns out that branching bisimulation and observation equivalence are the same. As far as we know, all protocols that have been verified in the setting of observation equivalence happen to fit in this class, and hence are also valid in the stronger setting of branching bisimulation equivalence.

5. (Information and Computation, I and II, 1989; cited by 189, 3 self) We present the π-calculus, a calculus of communicating systems in which one can naturally express processes which have changing structure. Not only may the component agents of a system be arbitrarily linked, but a communication between neighbours may carry information which changes that linkage. The calculus is an extension of the process algebra CCS, following work by Engberg and Nielsen who added mobility to CCS while preserving its algebraic properties. The π-calculus gains simplicity by removing all distinction between variables and constants; communication links are identified by names, and computation is represented purely as the communication of names across links. After an illustrated description of how the π-calculus generalises conventional process algebras in treating mobility, several examples exploiting mobility are given in some detail. The important examples are the encoding into the π-calculus of higher-order functions (the λ-calculus and combinatory algebra), the tr...

6. (1993; cited by 109, 5 self) We proposed a syntactical format, the path format, for structured operational semantics in which predicates may occur. We proved that strong bisimulation is a congruence for all the operators that can be defined within the path format. To show that this format is useful we provided many examples that we took from the literature about CCS, CSP, and ACP; they do satisfy the path format but no formats proposed by others. The examples include concepts like termination, convergence, divergence, weak bisimulation, a zero object, side conditions, functions, real time, discrete time, sequencing, negative premises, negative conclusions, and priorities (or a combination of these notions). Key Words & Phrases: structured operational semantics, term deduction system, transition system specification, structured state system, labelled transition system, strong bisimulation, congruence theorem, predicate. 1980 Mathematics Subject Classification (1985 Revision): 68Q05, 68Q55. CR Categories: D.3.1...

7. (2001; cited by 90, 16 self) In recent years, many formalizations of security properties have been proposed, most of which are based on different underlying models and are consequently difficult to compare. A classification of security properties is thus of interest for understanding the relationships among different definitions and for evaluating the relative merits. In this paper, many non-interference-like properties proposed for computer security are classified and compared in a unifying framework. The resulting taxonomy is evaluated through some case studies of access control in computer systems. The approach has been mechanized, resulting in the tool CoSeC. Various extensions (e.g., the application to cryptographic protocol analysis) and open problems are discussed. This paper ...

8. (Fundamenta Informaticae, 1995; cited by 86, 14 self) A new kind of transformation of term rewriting systems (TRS) is proposed, depending on a choice for a model for the TRS. The labelled TRS is obtained from the original one by labelling operation symbols, possibly creating extra copies of some rules. This construction has the remarkable property that the labelled TRS is terminating if and only if the original TRS is terminating. Although the labelled version has more operation symbols and may have more rules (sometimes infinitely many), termination is often easier to prove for the labelled TRS than for the original one. This provides a new technique for proving termination, making classical techniques like path orders and polynomial interpretations applicable even for non-simplifying TRS's. The requirement of having a model can slightly be weakened, yielding a remarkably simple termination proof of the system SUBST of [11] describing explicit substitution in λ-calculus.

9. (IEEE Transactions on Software Engineering, 1996) (no abstract shown)

10. (Theoretical Computer Science, 1995; cited by 80, 11 self) The π-calculus is a process algebra which originates from CCS and permits a natural modelling of mobility (i.e., dynamic reconfigurations of the process linkage) using communication of names. Previous research has shown that the π-calculus has much greater expressiveness than CCS, but also a much more complex mathematical theory. The primary goal of this work is to understand the reasons of this gap. Another goal is to compare the expressiveness of name-passing calculi, i.e., calculi like the π-calculus where mobility is achieved via exchange of names, and that of agent-passing calculi, i.e., calculi where mobility is achieved via exchange of agents. We separate the mobility mechanisms of the π-calculus into two, respectively called internal mobility and external mobility. The study of the subcalculus which only uses internal mobility, called πI, suggests that internal mobility is responsible for much of the expressiveness of the π-calculus, whereas external mobility is responsible for many of ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=445341","timestamp":"2014-04-19T01:23:20Z","content_type":null,"content_length":"37203","record_id":"<urn:uuid:635400bd-fdf0-429b-a72a-4dc38eab2323>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Laguna Niguel Prealgebra Tutor

Find a Laguna Niguel Prealgebra Tutor

I have a doctorate in Nuclear Physics and many years of engineering experience designing electronics for industry. For the last two years, I have been teaching at the Community College Level as an Adjunct Professor. I have volunteer tutored math at a local high school and tutored privately.
8 Subjects: including prealgebra, physics, calculus, geometry

I have a bachelor's degree in Mathematics from La Sierra University. I am currently working on obtaining my California Teaching Credentials at National University. I am a third generation math teacher who has a passion for teaching younger generations.
3 Subjects: including prealgebra, algebra 1, trigonometry

...Many overseas students are sometimes first and second generation, and one burden is that their family knows hardly any English. So all the contracts and job descriptions fall on their shoulders; this is where I come in. Having gone through exactly this, I can help grind those hundred dollar words to seem like a mere two cents, wherever you may see them.
39 Subjects: including prealgebra, reading, chemistry, English

...Just this past year I tutored groups of students in SAT preparation, both in math and verbal, and a few years ago I tutored English for an organization in San Diego. Altogether, I've been tutoring for standardized tests for 30 years and know the patterns and concepts that are tested quite thoroughly. I am confident that I can get the material across in simplified, understandable terms.
29 Subjects: including prealgebra, reading, English, Spanish

...I also have experience as a co-teacher of a marketing class in high school. By experience in the field and experience teaching marketing, I am a tutor confident teaching marketing to high school or college students. As a Special Education teacher over the past 30 years, I have taught social studies at both elementary and secondary levels.
45 Subjects: including prealgebra, reading, English, biology
{"url":"http://www.purplemath.com/Laguna_Niguel_prealgebra_tutors.php","timestamp":"2014-04-18T13:55:48Z","content_type":null,"content_length":"24461","record_id":"<urn:uuid:876d5d9b-9684-483d-99f6-1d74b8381b31>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Writing expressions involving percent increase and decrease

Recall that whenever you see the percent symbol, $\,\%\,$, you can trade it in for a multiplier of $\frac{1}{100}$. (Indeed, per-cent means per-one-hundred.) For example, $\,20\%\,$ goes by all these names: $$\,20\% = 20\cdot\frac{1}{100}= \frac{20}{100} = \frac{2}{10} = \frac{1}{5} = 0.2$$ In particular, note that $\,100\% = 100\cdot\frac{1}{100} = 1\,$, so $\,100\%\,$ is just another name for the number $\,1\,$.

Also recall that it's easy to go from percents to decimals: just move the decimal point two places to the left. For example: $\,20\% = 20.\% = 0.20$ It's good style to put a zero in the ones place (i.e., write $\ 0.20\ $, not $\ .20\ $). To change from decimals to percents, just move the decimal point two places to the right. For example: $0.2 = 0.20 = 20.\% = 20\%$

The ‘Puddle Dipper’ memory device may be useful to you:
PuDdLe: to change from Percents to Decimals, move the decimal point two places to the Left.
DiPpeR: to change from Decimals to Percents, move the decimal point two places to the Right.

Here, you will practice writing expressions involving percent increase and decrease, and related concepts.

Another name for the expression ‘$\,20\%\text{ of } x\,$’ is: $0.2x$
Why? The mathematical word ‘of’ indicates multiplication, so: $\,(20\%\text{ of } x) = (20\%)(x) = (0.2)(x) = 0.2x\,$.
Another name for the expression ‘$\,100\%\text{ of } x\,$’ is: $x$
Another name for the expression ‘$\,300\%\text{ of } x\,$’ is: $3x$
If $\,x\,$ increases by $\,20\%\,$, then the new amount is: $x + 0.2x = 1x + 0.2x = 1.2x$
If $\,x\,$ has a $\,20\%\,$ increase, then the new amount is: $1.2x$
If $\,x\,$ increases by $\,47\%\,$, then the new amount is: $x + 0.47x = 1.47x$
If $\,x\,$ decreases by $\,30\%\,$, then the new amount is: $x - 0.3x = 1x - 0.3x = 0.7x$
If $\,x\,$ has a $\,30\%\,$ decrease, then the new amount is: $0.7x$
If $\,x\,$ increases by $\,100\%\,$, then the new amount is: $x + x = 1x + 1x = 2x$
If $\,x\,$ increases by $\,182\%\,$, then the new amount is: $x + 1.82x = 2.82x$
If $\,x\,$ increases by $\,200\%\,$, then the new amount is: $x + 2x = 3x$
If $\,x\,$ doubles, then the new amount is: $2x$
If $\,x\,$ triples, then the new amount is: $3x$
If $\,x\,$ quadruples, then the new amount is: $4x$
If $\,x\,$ is halved, then the new amount is: $\displaystyle\frac{1}{2}x = 0.5x$

Answers must be input in decimal form to be recognized as correct. Also, you must exhibit good style by putting a zero in the ones place, as needed. For example, input 0.5x , not (say) .5x or 1/2x .
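As a side note for readers who like to check such rules with a computer, the conversions above amount to nothing more than multiplying by a single factor. The snippet below is a small illustration in Python; the function names are made up for this example and are not part of the lesson.

# Trading the percent sign for a factor of 1/100, and turning
# "increase/decrease by p%" into a single multiplier.
def percent_to_decimal(p):
    return p / 100.0                 # 20% -> 0.20

def increase_by_percent(x, p):
    return x * (1 + p / 100.0)       # increase by 20% -> multiply by 1.2

def decrease_by_percent(x, p):
    return x * (1 - p / 100.0)       # decrease by 30% -> multiply by 0.7

print(percent_to_decimal(20))        # 0.2
print(increase_by_percent(50, 20))   # 60.0, i.e. 1.2 * 50
print(decrease_by_percent(50, 30))   # 35.0, i.e. 0.7 * 50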
{"url":"http://www.onemathematicalcat.org/algebra_book/online_problems/percent_inc_dec.htm","timestamp":"2014-04-17T03:51:42Z","content_type":null,"content_length":"31661","record_id":"<urn:uuid:94256f59-c849-4c8c-abfd-b1d854e6697c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
pop puzzle

Hello Lispers! It is usually said that (pop stack) is equivalent to (prog1 (first stack) (setf stack (rest stack))) or (let ((elt (first stack))) (setf stack (rest stack)) elt). I don't understand why the following function doesn't work as an equivalent of the built-in pop:

Code:
(defun my-pop (lst)
  (let ((x (first lst)))
    (setf lst (rest lst))
    x))

Code:
? (setf lstA '(a b c))
? (my-pop lstA)
? lstA
(A B C)
? (setf lstB '(a b c))
? (pop lstB)
? lstB
(B C)
? (setf lstC '(a b c))
? (let ((x (car lstC))) (setf lstC (cdr lstC)) x)
? lstC
(B C)

Re: pop puzzle
Your setf is modifying a local variable, not the original structure. There are ways of destructively modifying a list in place (edit the car and cdr of the cons cells), but a lot of Lisp code assumes cons cells are immutable (not enforceable in CL). Read about defsetf and define-setf-expander. If you want to modify something in place, macros are the way to go.

Re: pop puzzle
Thank you for your answer and pointers to defsetf and define-setf-expander. I will study it.
{"url":"http://www.lispforum.com/viewtopic.php?f=2&t=1365","timestamp":"2014-04-21T00:27:54Z","content_type":null,"content_length":"16980","record_id":"<urn:uuid:8dfec06f-605b-43ee-9eca-f1d546bb608a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Jasymca can handle polynomials with symbolic variables. In this chapter, however, we work with the Matlab/Octave/Scilab approach of using vectors as lists of polynomial coefficients: a polynomial of degree n is represented by a vector having n+1 elements, the element with index 1 being the coefficient of the highest exponent in the polynomial. With poly(x) a normal polynomial is created whose zeros are the elements of x, polyval(a,x) returns function values of the polynomial with coefficients a at the point x, roots(a) calculates the zeros, and polyfit(x,y,n) calculates the coefficients of the polynomial of degree $n$ whose graph passes through the points x and y. If their number is larger than n+1, a least-squares estimate is performed. The regression analysis in exercise 8 was performed using this method.

Example 1: The roots of a 4th-degree polynomial are -4,-2,2,4 and it intersects the y-axis at (y=-64). Calculate its coefficients:

>> a=poly([-4,-2,2,4])
a = [ 1 0 -20 0 64 ]
>> a = -64/polyval(a,0) * a
a = -1 0 20 0 -64
>> a = polyfit([-4,-2,2,4,0],[0,0,0,0,-64],4)
a = -1 0 20 0 -64

Example 2: On a grid with $x,y$ coordinates we have the following greyvalues of a digital image:

y\x   1    2    3    4
1    98  110  122  136
2    91  112  131  141
3    73  118  145  190
4    43  129  170  230

Which greyvalue do you expect at position $x=2.35$, $y=2.74$? Calculate the bicubic interpolation.

>> Z=[98,110,122,136;
> 91,112,131,141;
> 73,118,145,190;
> 43,129,170,230];
>> i=1; p=polyfit(1:4,Z(i,:),3); z(i)=polyval(p,2.35);
>> i=2; p=polyfit(1:4,Z(i,:),3); z(i)=polyval(p,2.35);
>> i=3; p=polyfit(1:4,Z(i,:),3); z(i)=polyval(p,2.35);
>> i=4; p=polyfit(1:4,Z(i,:),3); z(i)=polyval(p,2.35);
>> p=polyfit(1:4,z,3); zp=polyval(p,2.74)
zp = 124.82

We have learnt that polynomials may be represented by the vector of their coefficients. Using a symbolic variable x we will now create a symbolic polynomial p. Conversely, we can extract the coefficients from a symbolic polynomial using the function coeff(p, x, exponent). The command allroots(p) returns the zeros.

>> a=[3 2 5 7 4]; % coefficients
>> syms x
>> y=polyval(a,x) % symbolic polynomial
y = 3*x^4+2*x^3+5*x^2+7*x+4
>> coeff(y,x,3) % get one coefficient
ans = 2
>> b=coeff(y,x,4:-1:0) % or all at once
b = [ 3 2 5 7 4 ]
>> allroots(y) % same as roots(a)
ans = [ 0.363-1.374i 0.363+1.374i -0.697-0.418i -0.697+0.418i ]

Up to this point there is little advantage to using symbolic calculations; it is just another way of specifying a problem. The main benefit of symbolic calculations emerges when we are dealing with more than one symbolic variable, or, meaning essentially the same, when our polynomial has nonconstant coefficients. This case can be treated efficiently only with symbolic variables. Notice in the example below how the polynomial y is automatically multiplied through and brought into a canonical form. In this form the symbolic variables are sorted alphabetically, i.e. z is the main variable compared to x. The coefficients can be calculated for each variable separately.

>> syms x,z
>> y=(x-3)*(x-1)*(z-2)*(z+1)
y = (x^2-4*x+3)*z^2+(-x^2+4*x-3)*z+(-2*x^2+8*x-6)
>> coeff(y,x,2)
ans = z^2-z-2
>> coeff(y,z,2)
ans = x^2-4*x+3

The command allroots also works with variable coefficients, but only if the polynomial's degree in the main variable is smaller than 3, or it is biquadratic.
If roots with respect to another variable, e.g. x, are sought, one should use the more general solve(p,x), which will be discussed in more detail later.

>> syms x,z
>> y = x*z^2-3*x*z+(2*x+1);
>> allroots(y)
ans = [ sqrt((1/4*x-1)/x)+3/2 -sqrt((1/4*x-1)/x)+3/2 ]
>> solve(y,x)
ans = -1/(z^2-3*z+2)

Squarefree Decomposition

The decomposition of p into linear, quadratic, cubic, etc. factors is accomplished by sqfr(p). Returned is a vector of factors sorted in ascending order of the exponents.

>> syms x
>> y=(x-1)^3*(x-2)^2*(x-3)*(x-4)
y = x^7-14*x^6+80*x^5-242*x^4+419*x^3-416*x^2+220*x-48
>> z=sqfr(y)
z = [ x^2-7*x+12 x-2 x-1 ]

Division and Common Denominator

The division of two polynomials p and q into one polynomial and a remainder is calculated using divide(p,q). If the polynomials have more than one variable, an optional variable can be specified, which will be used for division. gcd(p,q) returns the greatest common divisor of two expressions. Both functions also work with numbers as arguments.

>> divide(122344,7623)
ans = [ 16 376 ]
>> divide(2+i,3+2*i)
ans = [ 1/2 1/2 ]
>> syms x,z
>> divide(x^3*z-1,x*z-x,x)
ans = [ x^2*z/(z-1) -1 ]
>> divide(x^3*z-1,x*z-x,z)
ans = [ x^2 x^3-1 ]
>> gcd(32897397,24552502)
ans = 377
>> gcd(z*x^5-z,x^2-2*x+1)
ans = x-1

Real- and Imaginary Part

realpart(expression) and imagpart(expression) are used to decompose complex expressions. Symbolic variables in these expressions are assumed to be real-valued.

>> syms x
>> y=(3+i*x)/(2-i*x)
y = (-x+3i)/(x+2i)
>> realpart(y)
ans = (-x^2+6)/(x^2+4)
>> imagpart(y)
ans = 5*x/(x^2+4)
{"url":"http://jwork.org/scavis/wikidoc/doku.php?id=man:jmathlab:polynomials","timestamp":"2014-04-19T17:02:28Z","content_type":null,"content_length":"52535","record_id":"<urn:uuid:dd9d5759-1052-4b13-bdee-0c0fdefb14db>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
On some Dynamical Conditions applicable to Le Sage's Theory of Gravitation (Part I)
Preston, Samuel Tolver (1877), "On some Dynamical Conditions applicable to Le Sage's Theory of Gravitation", Philosophical Magazine 4: 206-213
1. THE tendency of modern science is undoubtedly to look to the existence of physical conditions or processes in those natural phenomena to which the theory of "action at a distance" has been applied. The gravitation theory of Le Sage has therefore of late naturally received a considerable share of attention. Le Sage finds it necessary, as a basis to his theory, to lay down certain conditions, some of which cannot but be regarded as arbitrary. Thus (as given in the paper by Sir William Thomson, ' Philosophical Magazine,' May 1873) Le Sage assumes among other conditions:—
1) That the direction of the streams of particles producing gravity is such that an equal number of particles are moving in all directions.
2) That the streams are all equally dense; or the total assemblage of matter forming the streams is of the same density in all parts.
3) That the mean velocity of the streams is everywhere the same.
2. These conditions cannot but be considered arbitrary. My object is to call attention to the fact (which, if it has been observed, would certainly appear to be deserving of more attention than it has received) that all these conditions which Le Sage, with the limited knowledge of his day, assumed to be arbitrary, are in reality inevitable deductions following [207] from the dynamical principles connected with the kinetic theory of gases, or that Le Sage unconsciously enunciated the inevitable principles of the kinetic theory — that, in short, all the conditions laid down in Le Sage's theory are perfectly satisfied by a gas whose particles are very minute, and consequently the mean length of path of whose particles is very great. In other words, it may be stated as a general proposition, that when two bodies are immersed in a gas at a less distance apart than the mean length of path of the particles of the gas, the two bodies will tend to be urged together. Thus all the arbitrary conditions of Le Sage's theory (and all the facts of gravity) would follow as inevitable deductions from the simple fundamental admission of the existence of matter in space, whose normal state is a state of motion. 3. The part of Le Sage's theory which most calls for explanation, and which he makes no attempt to explain, is (even if we allow as a purely arbitrary fact that the motion of his particles took place at one time uniformly or equally towards all directions) how this uniformity of motion of the particles could be kept up under the continual changes of direction resulting from the collisions of the particles against themselves and mundane matter. Now it has been proved mathematically by Professor Maxwell, in connexion with the kinetic theory of gases, that a self-acting adjustment goes on among a system of bodies or particles in free collision, such that the particles are caused to move equally towards all directions, this being the condition requisite to produce equilibrium of pressure. The method of calculating the rate of the above self-acting adjustment for any case is given in the Philosophical Transactions for 1866. This adjustment is of such a rigid character that, if by any artificial means the motions of the particles were interfered with and made to take place irregularly (i. e.
unequally in different directions), the particles when left to themselves would in a very short time automatically return back to the above regular form of motion, i. e. so that an equal number of particles are moving in any two opposite directions. Thus it follows that when a system of particles are left in space with nothing to guide them, they will, by the rigid principles of dynamics adjust their motions in such a way as to be competent to produce the effects of gravity. In other words, the movement of streams of particles with perfect uniformity at all angles (which Le Sage assumed as a mere arbitrary postulate) is found to be the necessary consequence of dynamical principles; or the particles themselves adjust their motions so as to move in uniform streams in all directions ; and, further, when any disturbance of the uniformity [208] of the motion of the particles takes place due to their collisions with mundane matter, the particles themselves readjust the uniformity of motion. 4. Le Sage imagined that the collisions of the particles disturbed permanently the uniformity of their motions, and therefore supposed these collisions to take place only at intervals of time very remote from each other. Thus he assumes ".... that not more than one out of every hundred of the particles meets another during several thousands of years ; so that the uniformity of their motions is scarcely ever disturbed sensibly." We now know that, so far from the collisions of the particles among themselves disturbing the uniformity of their motions, this is the very cause which corrects and maintains the uniformity of motion, or preserves the uniformity of motion in opposition to external disturbing causes. The assumption, therefore, of the above enormous interval of time between the collisions of the particles, though admissible, is by no means necessary. The only necessary condition is that the path of the particle should be a certain length, not that a certain time should be occupied in traversing it. The time taken by the particle in traversing its path depends on its velocity; and this time might therefore be small, provided, under the conditions of the case, the velocity of the particle were high. Le Sage imagined that the collisions were detrimental, not only in destroying the uniformity of the motion of the particles, but also in destroying vis viva ; and he therefore supposed the collisions to take place as seldom as possible. This belief in the destruction of vis viva at collision was universal at the time of Le Sage ; and he therefore assumed that the gravific particles would finally come to rest, and gravity cease to exist. We now know that this is an error, and that motion is as naturally maintained among a system of particles as rest. Thus the one thing requiring to be admitted to account for all the effects of gravity is, that the universe is immersed in a gas the mean length of path of whose particles is great. 5. The other assumptions or postulates of Le Sage in connexion with his theory, viz. equal density in all parts of the streams of moving particles, equal mean velocity in all parts, follow no less as automatic consequences from the recent dynamical investigations connected with the kinetic theory of gases. Thus the conditions of Le Sage's theory become converted from a series of arbitrary assumptions or postulates, to a series of deductions following from the rigid principles of dynamics. 6. 
It forms a truly wonderful fact to consider, that a system of bodies or particles left to themselves, with nothing to guide them but their own collisions (which might well be regarded as fulfilling all the essentials of a chaos), produces and maintains the most rigid system of order, such that the number of particles contained in unit volume of the system (taken anywhere) is equal, the mean velocity equal in all parts, the mean distance of the particles the same in all parts, and the particles are moving uniformly towards all directions in all parts. 8uch is the result produced by pure dynamics. In fact it may be said that leaving the bodies to themselves constitutes the most perfect system of control, for any interference whatever would disturb the regularity of the motions. This regularity of movement is not only naturally continued, but forcibly and automatically maintained against any disturbance, — such that if it were imagined that a system of bodies were purposely put in motion in the most chaotic manner possible, the motion would of itself in a short time become regular, or the whole would become a system of order and uniformity. 7. Clausius, as is known, has investigated a relation between the mean length of path of the particles of a gas and the diameter of the particles. From this investigation it follows that the mean length of path of the particle of a gas (i. e. the average distance which the particle moves before encountering another particle) increases in proportion as the square of the diameter of the particle diminishes. Thus by making the particle small enough, its mean length of path may be increased to any extent. No objection, evidently, can be made to this, for à priori one size of particle is just as likely as another. This minute size would render it possible for the particle to possess a high velocity without producing thereby disturbance or displacement among the molecules of ordinary matter ; and this high velocity is necessary to accord with the observed facts of gravity. One velocity cannot be said à priori to be more likely than another. We must just be guided by the teaching of facts as to what the velocity is.^[1] 8. It is an interesting fact pointed out by Sir William Thomson (Phil. Mag. May 1873) that the distance through which gravity is effective would depend on the distance through which the gravific particles move before being intercepted by collision with each other (which is equivalent to the mean length of path of the particles). By assuming the distance of the stars to be a multiple of the mean length of path of the particles, it would therefore follow that the stars do not gravitate towards each other — this satisfying the condition for the stability of the universe. The assumption of all the bodies of the universe gravitating towards each other is evidently quite inconsistent with stability (as already pointed out by Professor Challis). All that we require to admit is that the effects of gravity hold through as great distances as we have observed them. 9. The distance through which gravity has been observed to act is well known to be but an infinitesimal fraction of the distance of the stars. It may therefore well be that the mean length of path of the particles of the medium producing gravity may be but an infinitesimal fraction of this distance. The column of the gravific medium intercepted between two stars would therefore on the whole be at rest, just as a column of gas is at rest between two bodies a visible distance apart (i. e. 
a distance which is a large multiple of the mean length of path of the particles of gas). Le Sage appears to have assumed that the mean length of path of the gravific particles swept through the universe ; or he assumed that streams of matter came from the depths of space and passed entirely through the visible universe into space beyond.^[2] This [211] assumption cannot but be regarded as fantastic, and, as we observe, is by no means necessary. The mean length of path of the particles, so far from being comparable to the dimensions of the visible universe, may be but an infinitesimal fraction of the distance of two of its primary components. All we require to admit is that the mean length of path of the particles of the medium is at least as great as the very limited range through which gravity has been observed to act ; or, in order to explain all the observed facts, it is sufficient to admit that the universe is immersed in a gas (or medium constituted according to the kinetic theory) the mean length of path of whose particles is so adjusted as to cause the minor or secondary portions of the universe to gravitate towards each other.^[3] Under the simple conception of a variation in the diameter of the particles of a medium, the mean length of path of the particles (and with it the range of gravity) is capable of adjustment with precision to any range. It would probably be difficult to imagine any more simple condition as a mechanical means to an end than this. 10. It is a necessary condition to Le Sage's theory (in order [212] that gravity may be sensibly proportional to mass) that the total volume of free space in a substance, in the form of interstices between the molecules or in their structure, must be great compared with the total volume of matter contained in the molecules themselves. Le Sage assumed the molecules of substances to have a sort of open structure in the form of cages with wide interstices. This condition of free interstices would be equally satisfied by assuming the molecules to be small relative to their mean distance, or on the condition of the vortex-ring atom theory, without any necessity for making the above somewhat fantastic assumption of cage-structure. 11. It is necessary to assume that the particles producing gravity are in very close proximity compared with molecules, otherwise the particles would be unable by their motion to produce a perfectly equable pressure upon the molecules of matter. It might be thought that, because the particles of the gravific medium are so close, and the molecules of ordinary matter relatively far apart, therefore the quantity of matter in the form of gravific particles enclosed in a given volume of space must be very great compared with the quantity of ordinary matter that that same volume of space would contain — or, in other words, that there must be a relatively enormous quantity of matter in the form of gravific particles. This by no means follows; for although the gravific particles may be very close, I he relative quantity of matter in them may be very small, provided the particles themselves are small. Indeed by simply conceiving an extreme degree of subdivision, the particles pervading a given volume of space may by continued subdivision bo conceived to be brought into as close proximity as we please ; and though the space itself is large, the total quantity of matter thus used may be conceived as small as we please. 
It is of no consequence how minute the size (or mass) of a particle may be; the effect produced by its motion remains as great, provided its velocity be adequately augmented. The minute size is the very condition adapted to a high velocity ; and this minute size is at the same time the necessary condition for a long mean path. Thus we may observe that the mechanical conditions of the problem fit into each other. The matter of the gravific medium is in such a finely subdivided state, and its motion so rapid, that its presence necessarily eludes detection. The pressure (termed "gravity") due to the motion of the particles of the gravific medium is no more difficult of realization than the pressure due to the motion of the molecules of air. If the motion of the molecules of air be unrecognized by the senses, how much more must this be the fact with the minute gravific particles ; indeed it is difficult to see what mechanical objection can be urged against this realization of the problem, which is extremely simple.

12. The theory of "action at a distance" being rejected, which is necessary in order to explain the facts at all, the effects of gravity can in principle be referred to only two conceivable causes. The tendency of two molecules of matter to approach each other can be referred (1) to a motion possessed by the molecules themselves disturbing the equilibrium of pressure of the medium between them ; (2) to a motion possessed by the medium itself (in the form of streams or currents) acting upon the molecules. The first of these two conditions appears to be inadmissible, from the fact that we cannot interfere with or modify gravity at will, whereas we can very readily interfere with or modify the motion of the molecules of matter (as by adding or subtracting heat, for example). It therefore would appear that gravity must be due to some motion that we cannot interfere with, i. e. to a motion in the external medium which we cannot handle or which is beyond our control. Only one conclusion appears therefore to be possible here ; and therefore it would seem that the theory of Le Sage can scarcely be regarded as a mere hypothesis, but rather as an irresistible deduction which is forced upon us in the absence of any other conceivable inference. Certainly, if simplicity be a recommendation, the theory needs no recommendation on that ground.

London, July 1877.

1. ↑ I have shown (Phil. Mag. June 1877) that a physical relation exists between the velocity of the particles of a medium constituted according to the kinetic theory and the velocity of propagation of a wave in the medium. Professor Maxwell has calculated (as given in postscript to the paper) the numerical value of this relation at $\frac{\sqrt{5}}{3}$. Thus it appears that if the velocity of propagation of a wave in any medium constituted according to the kinetic theory can be measured, then the velocity of the particles of the medium is given by dividing this velocity by $\frac{\sqrt{5}}{3}$. So, for example, the velocity of the molecules of air is given by dividing the velocity of sound in air by $\frac{\sqrt{5}}{3}$; so of other gases. Thus it appears that the velocity of a wave in any medium constituted according to the kinetic theory (such as the velocity of a wave of sound in air) is solely dependent on and proportional to the velocity of the particles of the medium ; and this velocity of the wave is independent of the density or pressure of the medium, or of any thing else excepting the velocity of its particles. 2.
↑ Le Sage assumed that a continual supply of matter from without was necessary for the maintenance of gravity in the visible universe, and that all but a very small fraction of this supply passed through ineffectively and was dissipated again in ultramundane space — a means apparently quite disproportionate to the end in view. We observe that under the principles of the kinetic theory no such supply is necessary, but that all the conditions requisite for gravity may be fulfilled by a medium pervading the visible universe, and which is at rest as a whole ; or the gravific medium within the bounds of the visible universe may be compared to the air within a receptacle, which is, as a whole, at rest; and therefore there is no more waste of the matter producing gravity than if mundane matter did not exist. If we imagine, in analogy, a being extremely small compared with the mean length of path of an air molecule, to be stationed in the centre of a receptacle containing air ; then this minute observer would notice the air molecules passing continually in streams equally in all directions ; and the observer, not being able to trace the beginning and end of the path of the molecules, would naturally imagine that the molecules were being supplied in streams from outside the receptacle, and, observing the continued regularity of the streams equally in all directions, would suppose that there must be some external mechanism supplying the streams of molecules equally and symmetrically. This is what would be supposed in the absence of the application of the principles of the kinetic theory to the case. On applying this theory, it is seen that the same result can take place without a supply of air ; and that the adjustment of the streams of air molecules uniformly in all directions is automatic, or the inevitable result of dynamical principles. 3. ↑ If two particles of a gas be conceived gradually to increase in size and mass, they will gradually lose their translatory motion and finally come practically to rest. If now the two enlarged particles be at a distance apart less than the mean length of path of the remaining particles of the gas, these two particles will gravitate towards each other under the dynamic action of the other particles of the gas which pass them in streams (the opposed faces of the two enlarged particles being sheltered from the streams). This condition of things probably occurs when a vapour condenses, when the gradually enlarging water vesicles which happen to be at a distance apart less than the mean length of path of the other molecules of the vapour will be driven together by the dynamic action of these molecules, thus accelerating condensation. Thus possibly effects of "gravity" may be imitated by gaseous matter on molecular scale, as is observed in the large scale of the universe. It may be observed that these are simply applications on a smaller scale of the dynamical principles of Le Sage's theory, and therefore the deductions hold if the premises are admitted to be possible. This work is in the public domain in the United States because it was published before January 1, 1923. The author died in 1917, so this work is also in the public domain in countries and areas where the copyright term is the author's life plus 80 years or less. This work may also be in the public domain in countries and areas with longer native copyright terms that apply the rule of the shorter term to foreign works.
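Footnote 1 above states a numerical relation: the velocity of the particles of a kinetic-theory medium is the wave velocity in that medium divided by the square root of five over three. As a rough check of the arithmetic only (the 340 m/s figure for the speed of sound in air is a modern round number assumed here, not taken from the paper), a small Haskell sketch:

  -- Footnote 1: particle velocity = wave velocity / (sqrt 5 / 3).
  -- The 340 m/s speed of sound below is an assumed modern value.
  meanMolecularSpeed :: Double -> Double
  meanMolecularSpeed waveSpeed = waveSpeed / (sqrt 5 / 3)

  main :: IO ()
  main = print (meanMolecularSpeed 340)   -- roughly 456 m/s

That figure is of the same order as modern estimates of the mean molecular speed of air, which is the consistency the footnote is appealing to.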
{"url":"http://en.wikisource.org/wiki/On_some_Dynamical_Conditions/I","timestamp":"2014-04-16T10:33:01Z","content_type":null,"content_length":"46353","record_id":"<urn:uuid:8abad1bd-692b-4472-8732-361047dee90b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Java Math and Physics Book Recommendations
Hey everyone, I am new to Java programming. Actually, I didn't major in CS when I was at university, but that's beside the point. I don't have a strong mathematical background, so please help me find some good books on Math & Physics in Java. Thank you
{"url":"http://www.dreamincode.net/forums/topic/181247-java-math-and-physics-book-reccomendations/","timestamp":"2014-04-17T00:52:10Z","content_type":null,"content_length":"157063","record_id":"<urn:uuid:743ac171-993b-4e7b-b093-0733d9b020da>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
-Adic Hardy Operators and Their Commutators on Journal of Function Spaces and Applications Volume 2013 (2013), Article ID 359193, 10 pages Research Article Boundedness of -Adic Hardy Operators and Their Commutators on -Adic Central Morrey and BMO Spaces Department of Mathematics, Linyi University, Linyi, Shandong 276005, China Received 22 May 2013; Accepted 10 August 2013 Academic Editor: Dachun Yang Copyright © 2013 Qing Yan Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We obtain the sharp bounds of p-adic Hardy operators on p-adic central Morrey spaces and p-adic -central BMO spaces, respectively. We also establish the -central BMO estimates for commutators of p -adic Hardy operators on p-adic central Morrey spaces. 1. Introduction In the past decades, the field of -adic numbers has been intensively used in theoretical and mathematical physics (see [1–9] and references therein). As a consequence, new mathematical problems have emerged, among which we refer to [10, 11] for Riesz potentials [12–16], for -adic pseudodifferential equations, and so forth. In the past few years, there is an increasing interest in the study of harmonic analysis on -adic field and their various generalizations and the related theory of operators and spaces; see, for example [17–27]. For a prime number , let be the field of -adic numbers. It is defined as the completion of the field of rational numbers with respect to the non-Archimedean -adic norm . This norm is defined as follows: ; if any nonzero rational number is represented as , where is an integer and integers , are indivisible by , then . It is easy to see that the norm satisfies the following properties: Moreover, if , then . It is well known that is a typical model of non-Archimedean local fields. From the standard -adic analysis [7], we see that any nonzero -adic number can be uniquely represented in the canonical series as follows: where are integers, , and . The series (2) converges in the -adic norm since . The space consists of points , where , . The -adic norm on is Denote by the ball with center at and radius and by the sphere with center at and radius , . It is clear that , and We set and . Since is a locally compact commutative group under addition, it follows from the standard analysis that there exists a Haar measure on , which is unique up to positive constant multiple and is translation invariant. We normalize the measure by the equality where denotes the Haar measure of a measurable subset of . By simple calculation, we can obtain that for any . For a more complete introduction to the -adic field, see [27] or [7]. The well-known Hardy’s integral inequality [28] tells us that, for , where the classical Hardy operator is defined by for nonnegative integral function on , and the constant is the best possible. Thus the norm of Hardy operator on is Faris [29] introduced the following -dimensional Hardy operator, for nonnegative function on , where is the volume of the unit ball in . Christ and Grafakos [30] obtained that the norm of on is which is the same as that of the 1-dimensional Hardy operator. In [31], Fu et al. obtained the precise norm of -linear Hardy operators on weighted Lebesgue spaces and central Morrey spaces. Fu et al. [32] introduced -adic Hardy operators and got the sharp estimates of -adic Hardy operators on -adic weighted Lebesgue spaces. 
Moreover, they proved that the commutators generated by the -adic Hardy operators and the central BMO functions are bounded on -adic weighted Lebesgue spaces and -adic Herz spaces see; [33] for more information about Herz spaces. Ren and Tao [34] Yu and Lu [35] studied the boundedness of commutators of Hardy type on some Inspired by these results, in this paper we will establish the sharp estimates of -adic Hardy operators on -adic central Morrey and -central BMO spaces. Furthermore, we will discuss the boundedness for commutators of -adic Hardy operators and -central BMO functions on -adic central Morrey spaces. Definition 1. For a function on , we define the -adic Hardy operator as follows: where is a ball in with center at and radius . Morrey [36] introduced the spaces to study the local behavior of solutions to second-order elliptic partial differential equations. The -adic Morrey space is defined as follows. Definition 2. Let and let . The -adic Morrey space is defined by where Remark 3. It is clear that , . For some recent developments of Morrey spaces and their related function spaces on , we refer the reader to [37]. In 2000, Alvarez et al. [38] studied the relationship between central BMO spaces and Morrey spaces. Furthermore, they introduced -central bounded mean oscillation spaces and central Morrey spaces, respectively. Next, we introduce their -adic versions. Definition 4. Let and let . The -adic central Morrey space is defined by where . Remark 5. It is clear that When , the space reduces to ; therefore, we can only consider the case . If , by Hölder’s inequality, for . Definition 6. Let and let . The space is defined by the condition where . Remark 7. When , the space is just , which is defined in [32]. If , by Hölder’s inequality, for . By the standard proof as that in , we can see that Remark 8. The formulas (18) and (15) yield that is a Banach space continuously included in . In Section 2, we obtain the sharp estimates of -adic Hardy operators on -adic central Morrey spaces and -adic -central BMO spaces. Analogous result is also established for -adic Morrey spaces. In Section 3, we discuss the boundedness of commutators generated by -adic Hardy operators and -adic -central BMO functions on -adic central Morrey spaces. We should note that in Euclidean space, when estimating the Hardy operator, one usually discusses its restriction on radical functions. However, on -adic field, we will consider its restriction on the functions with instead. Throughout this paper the letter will be used to denote various constants, and the various uses of the letter do not, however, denote the same constant. 2. Sharp Estimates of -Adic Hardy Operator We get the following precise norms of -adic Hardy operators on -adic central Morrey spaces and -adic -central BMO spaces. Theorem 9. Let and let . Then Theorem 10. Let and let . Then Corollary 11. Let . Then Let denote the subspace of consisting of all functions with . We obtain the sharp estimate of -adic Hardy operator from to . Theorem 12. Suppose that , . Then maps to with norm Proof of Theorem 9. When , , by Corollary 2.2 in [32], When , we first claim that the operator and its restriction to the subset of , which consist of functions satisfying , have the same operator norm on . In fact, for , set It is easy to see that satisfies that and . By Hölder’s inequality, for , we have Therefore, Consequently, which implies the claim. In the following, without loss of generality, we may assume that satisfies . 
Then by Minkowski’s inequality, we have Thus, On the other hand, take . Then where the series converges due to . Thus, since Therefore, Then (31) and (34) imply that Proof of Theorem 10. As in the proof of Theorem 9, we first show that the operator and its restriction to the subset of consisting of functions with have the same operator norm on . In fact, set Then and . By change of variable, we get Using Minkowski’s inequality and (37), we have Therefore, We conclude that In the following, without loss of generality, we may assume that with . By Fubini theorem, we have Then by Minkowski's inequality, we get Namely, On the other hand, take . By Remark 8 and (32), for , we have . Then by (33), we get Therefore, We arrive at As a result, Then Theorem 10 follows from (43) and (47). Proof of Theorem 12. Let . Then . Using Minkowski’s inequality, we have On the other hand, as in the proof of Theorem 9, we take , and we only need to show that . Consider the following. (I) If and , then . Since , we have (II) If and , then ; therefore, . Recall that two balls in are either disjoint or one is contained in the other (cf. page 21 in [39]). So we have ; thus, From the previous discussion, we can see that . Then by (33), This completes the proof. 3. Boundedness for Commutators of -Adic Hardy Operators on -Adic Central Morrey Spaces The boundedness of commutators is an active topic in harmonic analysis due to its important applications. For example, it can be applied to characterizing some function spaces [40]. In this section, we consider the boundedness for commutators generated by and -central BMO functions on -adic central Morrey spaces. Definition 13. Let . The commutator of is defined by for some suitable functions . Theorem 14. Let , , , , , and . If ; then is bounded from to and satisfies Before the proof of this theorem, we need the following calculations. Lemma 15. Suppose that and , . Then, Proof. Without loss of generality, we may assume that . Recall that . By Hölder's inequality, we have Proof of Theorem 14. Assume that . Fix , and by Minkowski’s inequality, we have In the following, we will estimate and , respectively. For , since is bounded from to , [32], then, by Hölder's inequality (), we get Next, let us estimate as follows: For , by Hölder’s inequality () and the fact that , , we have For , by Lemma 15 and Hölder’s inequality, we obtain Notice that since , then and . The above estimates imply that Consequently, Theorem 14 is proved. This work was partially supported by NSF of China (Grant nos. 11271175, 11301248, 11171345, and 10901076), NSF of Shandong Province (Grant no. ZR2010AL006) and AMEP of Linyi University. 1. S. Albeverio and W. Karwowski, “A random walk on p-adics: the generator and its spectrum,” Stochastic Processes and Their Applications, vol. 53, no. 1, pp. 1–22, 1994. View at Publisher · View at Google Scholar · View at MathSciNet 2. V. A. Avetisov, A. H. Bikulov, S. V. Kozyrev, and V. A. Osipov, “p-adic models of ultrametric diffusion constrained by hierarchical energy landscapes,” Journal of Physics A, vol. 35, no. 2, pp. 177–189, 2002. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 3. A. Khrennikov, P-Adic Valued Distributions in Mathematical Physics, vol. 309, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1994. View at MathSciNet 4. A. Khrennikov, Non-Archimedean Analysis: Quantum Paradoxes, Dynamical Systems and Biological Models, vol. 427, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997. 
View at Publisher · View at Google Scholar · View at MathSciNet 5. V. S. Varadarajan, “Path integrals for a class of p-adic Schrödinger equations,” Letters in Mathematical Physics, vol. 39, no. 2, pp. 97–106, 1997. View at Publisher · View at Google Scholar · View at MathSciNet 6. V. S. Vladimirov and I. V. Volovich, “p-adic quantum mechanics,” Communications in Mathematical Physics, vol. 123, no. 4, pp. 659–676, 1989. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 7. V. S. Vladimirov, I. V. Volovich, and E. I. Zelenov, P-Adic Analysis and Mathematical Physics, vol. 1 of Series on Soviet and East European Mathematics, World Scientific Publishing Co. Inc., River Edge, NJ, USA, 1994. View at MathSciNet 8. I. V. Volovich, “p-adic space-time and string theory,” Teoreticheskaya i Matematicheskaya Fizika, vol. 71, no. 3, pp. 337–340, 1987. View at Zentralblatt MATH · View at MathSciNet 9. I. V. Volovich, “p-adic string,” Classical and Quantum Gravity, vol. 4, no. 4, pp. L83–L87, 1987. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 10. S. Haran, “Riesz potentials and explicit sums in arithmetic,” Inventiones Mathematicae, vol. 101, no. 3, pp. 697–703, 1990. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 11. S. Haran, “Analytic potential theory over the p-adics,” Annales de l'Institut Fourier, vol. 43, no. 4, pp. 905–944, 1993. View at Publisher · View at Google Scholar · View at MathSciNet 12. S. Albeverio, A. Y. Khrennikov, and V. M. Shelkovich, “Harmonic analysis in the p-adic Lizorkin spaces: fractional operators, pseudo-differential equations, p-adic wavelets, Tauberian theorems,” Journal of Fourier Analysis and Applications, vol. 12, no. 4, pp. 393–425, 2006. View at Publisher · View at Google Scholar · View at MathSciNet 13. N. M. Chuong and N. V. Co, “The Cauchy problem for a class of pseudodifferential equations over p-adic field,” Journal of Mathematical Analysis and Applications, vol. 340, no. 1, pp. 629–645, 2008. View at Publisher · View at Google Scholar · View at MathSciNet 14. N. M. Chuong, V. Y. Egorov, A. Khrennikov, Y. Meyer, and D. Mumford, Harmonic, Wavelet and P-Adic Analysis, World Scientific Publishing, Singapore, 2007. View at Publisher · View at Google Scholar · View at MathSciNet 15. A. N. Kochubei, “A non-Archimedean wave equation,” Pacific Journal of Mathematics, vol. 235, no. 2, pp. 245–261, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 16. W. A. Zuniga-Galindo, “Pseudo-differential equations connected with p-adic forms and local zeta functions,” Bulletin of the Australian Mathematical Society, vol. 70, no. 1, pp. 73–86, 2004. View at Publisher · View at Google Scholar · View at MathSciNet 17. N. M. Chuong and H. D. Hung, “Maximal functions and weighted norm inequalities on local fields,” Applied and Computational Harmonic Analysis, vol. 29, no. 3, pp. 272–286, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 18. Y.-C. Kim, “Carleson measures and the BMO space on the p-adic vector space,” Mathematische Nachrichten, vol. 282, no. 9, pp. 1278–1304, 2009. View at Publisher · View at Google Scholar · View at 19. Y.-C. Kim, “Weak type estimates of square functions associated with quasiradial Bochner-Riesz means on certain Hardy spaces,” Journal of Mathematical Analysis and Applications, vol. 339, no. 1, pp. 266–280, 2008. 
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 20. S. Z. Lu and D. C. Yang, “The decomposition of Herz spaces on local fields and its applications,” Journal of Mathematical Analysis and Applications, vol. 196, no. 1, pp. 296–313, 1995. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 21. K. S. Rim and J. Lee, “Estimates of weighted Hardy-Littlewood averages on the p-adic vector space,” Journal of Mathematical Analysis and Applications, vol. 324, no. 2, pp. 1470–1477, 2006. View at Publisher · View at Google Scholar · View at MathSciNet 22. K. M. Rogers, “A van der Corput lemma for the p-adic numbers,” Proceedings of the American Mathematical Society, vol. 133, no. 12, pp. 3525–3534, 2005. View at Publisher · View at Google Scholar · View at MathSciNet 23. K. M. Rogers, “Maximal averages along curves over the p-adic numbers,” Bulletin of the Australian Mathematical Society, vol. 70, no. 3, pp. 357–375, 2004. View at Publisher · View at Google Scholar · View at MathSciNet 24. M. Taibleson, “Harmonic analysis on n-dimensional vector spaces over local fields—I. Basic results on fractional integration,” Mathematische Annalen, vol. 176, pp. 191–207, 1968. View at Publisher · View at Google Scholar · View at MathSciNet 25. M. H. Taibleson, “Harmonic analysis on n-dimensional vector spaces over local fields—II. Generalized Gauss kernels and the Littlewood-Paley function,” Mathematische Annalen, vol. 186, pp. 1–19, 1970. View at Publisher · View at Google Scholar · View at MathSciNet 26. M. H. Taibleson, “Harmonic analysis on n-dimensional vector spaces over local fields—III. Multipliers,” Mathematische Annalen, vol. 187, pp. 259–271, 1970. View at Publisher · View at Google Scholar · View at MathSciNet 27. M. H. Taibleson, Fourier Analysis on Local Fields, Princeton University Press, Princeton, NJ, USA, 1975. View at MathSciNet 28. G. H. Hardy, “Note on a theorem of Hilbert,” Mathematische Zeitschrift, vol. 6, no. 3-4, pp. 314–317, 1920. View at Publisher · View at Google Scholar · View at MathSciNet 29. W. G. Faris, “Weak Lebesgue spaces and quantum mechanical binding,” Duke Mathematical Journal, vol. 43, no. 2, pp. 365–373, 1976. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 30. M. Christ and L. Grafakos, “Best constants for two nonconvolution inequalities,” Proceedings of the American Mathematical Society, vol. 123, no. 6, pp. 1687–1693, 1995. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 31. Z. Fu, L. Grafakos, S. Lu, and F. Zhao, “Sharp bounds for m-linear Hardy and Hilbert operators,” Houston Journal of Mathematics, vol. 38, no. 1, pp. 225–244, 2012. View at MathSciNet 32. Z. W. Fu, Q. Y. Wu, and S. Z. Lu, “Sharp estimates of p-adic Hardy and Hardy-Littlewood-Pólya operators,” Acta Mathematica Sinica, vol. 29, no. 1, pp. 137–150, 2013. View at Publisher · View at Google Scholar · View at MathSciNet 33. S. Z. Lu, D. C. Yang, and G. E. Hu, Herz Type Spaces and Their Applications, Science Press, Beijing, China, 2008. 34. Z. Ren and S. Tao, “Weighted estimates for commutators of n-dimensional rough Hardy operators,” Journal of Function Spaces and Applications, vol. 2013, Article ID 568202, 13 pages, 2013. View at Publisher · View at Google Scholar · View at MathSciNet 35. X. Yu and S. Z. 
Lu, “Endpoint estimates for generalized commutators of Hardy operators on H^1 spaces,” Journal of Function Spaces and Applications, vol. 2013, Article ID 410305, 11 pages, 2013. View at Publisher · View at Google Scholar 36. C. B. Morrey, Jr., “On the solutions of quasi-linear elliptic partial differential equations,” Transactions of the American Mathematical Society, vol. 43, no. 1, pp. 126–166, 1938. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 37. W. Yuan, W. Sickel, and D. Yang, Morrey and Campanato meet Besov, Lizorkin and Triebel, vol. 2005 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2010. View at Publisher · View at Google Scholar · View at MathSciNet 38. J. Alvarez, J. Lakey, and M. Guzmán-Partida, “Spaces of bounded λ-central mean oscillation, Morrey spaces, and λ-central Carleson measures,” Collectanea Mathematica, vol. 51, no. 1, pp. 1–47, 2000. View at MathSciNet 39. S. Albeverio, A. Y. Khrennikov, and V. M. Shelkovich, Theory of P-Adic Distributions: Linear and Nonlinear Models, vol. 370 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge, UK, 2010. View at Publisher · View at Google Scholar · View at MathSciNet 40. R. R. Coifman, R. Rochberg, and G. Weiss, “Factorization theorems for Hardy spaces in several variables,” Annals of Mathematics, vol. 103, no. 3, pp. 611–635, 1976. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
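As a small, self-contained illustration of the p-adic norm defined in Section 1 (write a nonzero rational as a power of p times a fraction whose numerator and denominator are indivisible by p; the norm is p raised to minus that exponent), here is a hedged Haskell sketch restricted to integer arguments. The function names and the restriction to integers are choices made for this sketch, not part of the paper.

  -- p-adic norm of an integer x: factor out the largest power p^gamma dividing x
  -- and return p^(-gamma). By convention the norm of 0 is 0.
  pAdicNorm :: Integer -> Integer -> Double
  pAdicNorm _ 0 = 0
  pAdicNorm p x = fromIntegral p ^^ negate (valuation 0 (abs x))
    where
      valuation :: Integer -> Integer -> Integer
      valuation gamma m
        | m `mod` p == 0 = valuation (gamma + 1) (m `div` p)
        | otherwise      = gamma

  -- Examples: |12|_2 = 1/4, |12|_3 = 1/3, |5|_2 = 1.
  main :: IO ()
  main = mapM_ print [pAdicNorm 2 12, pAdicNorm 3 12, pAdicNorm 2 5]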
{"url":"http://www.hindawi.com/journals/jfs/2013/359193/","timestamp":"2014-04-20T18:35:52Z","content_type":null,"content_length":"877750","record_id":"<urn:uuid:5e5ae578-07c1-489a-811f-15b65e902f78>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
trc :: DynGraph gr => gr a b -> gr a ()
Finds the transitive closure of a directed graph. Given a graph G=(V,E), its transitive closure is the graph G* = (V,E*), where E* = {(i,j): i,j in V and there is a path from i to j in G}.
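A minimal usage sketch against the fgl package this page documents; the particular graph, labels, and module choices below are illustrative assumptions, not part of the documentation.

  import Data.Graph.Inductive.Graph (mkGraph, edges)
  import Data.Graph.Inductive.Tree (Gr)
  import Data.Graph.Inductive.Query.TransClos (trc)

  -- A small chain 1 -> 2 -> 3; its transitive closure should also contain 1 -> 3.
  g :: Gr () ()
  g = mkGraph [(1, ()), (2, ()), (3, ())] [(1, 2, ()), (2, 3, ())]

  main :: IO ()
  main = print (edges (trc g))
  -- Expect (1,3) to appear alongside (1,2) and (2,3); some fgl versions also
  -- include the reflexive edges (i,i), so treat the exact output as indicative.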
{"url":"http://lambda.haskell.org/hp-tmp/docs/2011.2.0.0/packages/fgl-5.4.2.3/doc/html/Data-Graph-Inductive-Query-TransClos.html","timestamp":"2014-04-19T17:03:51Z","content_type":null,"content_length":"2738","record_id":"<urn:uuid:994da92d-8447-49a1-825d-6563003119f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Representing unions of Cartesian products

Newsgroups: comp.compilers,comp.compression
From: markjan@cs.kun.nl (Mark-Jan Nederhof)
Organization: University of Nijmegen, The Netherlands
Date: Wed, 26 Aug 1992 15:29:02 GMT
Keywords: theory, design, question

I am looking for an efficient way to store and manipulate unions of Cartesian products. The application domain is abstract evaluation of certain programs (e.g. datalog programs) and analysis of formal grammars (context-free grammars extended with features/parameters over a finite domain). Our goal is to determine the set of parameters with which a nonterminal can generate a terminal string. If this is done by means of bottom-up analysis then we need the following operations:

- For two or more sets of tuples of the form A_1 * A_2 * ... * A_n, for fixed n, we have to compute the union in an efficient form. (The operator * denotes the Cartesian product.) Here the sets A_i consist of atomic values.

- For two unions of Cartesian products in such an efficient form, call them C_1 and C_2, we have to compute a third set C_3 in the same form which represents C_1 - C_2. (The operator - denotes the subtraction of sets of tuples.)

- Let us consider two unions of Cartesian products in such an efficient form, call them again C_1 and C_2. We now have to compute the set C_3 which represents the set of tuples {(a_1, a_2, ..., a_n) | (b_1, ..., b_m) <- C_1 AND (c_1, ..., c_k) <- C_2 AND R (a_1, ..., a_n, b_1, ..., b_m, c_1, ..., c_k)}, where R is some relation which for some fixed pairs of its arguments says that they should have the same value. (The operator <- denotes inclusion.) This operation would be needed in case of a grammar rule with two members (with m and k parameters respectively; R is determined by ``consistent substitution''). You can generalise this to arbitrary grammar rules.

This query is inspired by two things:

- The sets of atomic values are typically too big to represent sets of tuples in a naive way by one element for every tuple.

- When the union of sets of the form A_1 * A_2 * ... * A_n is computed then very often these sets overlap, and are even ``almost the same but not

At this moment I am using a representation where a union of Cartesian products is transformed into another such that the Cartesian products do no longer overlap (i.e. they have no tuples in common). If possible I merge two Cartesian products into one. These transformations are however rather expensive. Does anyone have a better suggestion? references to
I suggest follow-up at comp.compression.

Mark-Jan Nederhof
University of Nijmegen
The Netherlands
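To make the question concrete, here is one naive way the representation and the product-by-product difference could be written down. This is only a sketch of the straightforward approach (the one the post is trying to improve on), and all type and function names are invented for illustration.

  import qualified Data.Set as Set
  import           Data.Set (Set)

  -- One Cartesian product A_1 * A_2 * ... * A_n, stored as its n factor sets.
  type Product a = [Set a]

  -- A union of (possibly overlapping) Cartesian products.
  type UnionOfProducts a = [Product a]

  -- Union is simply concatenation; it makes no attempt to merge overlaps.
  unionUP :: UnionOfProducts a -> UnionOfProducts a -> UnionOfProducts a
  unionUP = (++)

  -- A tuple belongs to the union if some product contains it componentwise.
  member :: Ord a => [a] -> UnionOfProducts a -> Bool
  member tuple = any inProduct
    where
      inProduct p = length p == length tuple && and (zipWith Set.member tuple p)

  -- Difference of two products of the same arity, expressed as a union of
  -- pairwise-disjoint products: tuples whose first component falls outside B_1
  -- survive whole; tuples whose first component falls in the intersection of
  -- A_1 and B_1 survive exactly when their tail survives the difference of the
  -- remaining factors.
  minusProduct :: Ord a => Product a -> Product a -> UnionOfProducts a
  minusProduct []       _        = []
  minusProduct (a : as) (b : bs) =
       [ Set.difference a b : as | not (Set.null (Set.difference a b)) ]
    ++ [ Set.intersection a b : rest
       | not (Set.null (Set.intersection a b))
       , rest <- minusProduct as bs ]
  minusProduct p        []       = [p]

The recursion in minusProduct is what makes the difference of two unions blow up in the worst case, which is exactly the cost the post is asking how to avoid.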
{"url":"http://compilers.iecc.com/comparch/article/92-08-159","timestamp":"2014-04-21T00:15:09Z","content_type":null,"content_length":"7005","record_id":"<urn:uuid:7180df25-1dd5-42e0-827d-2c850731b77a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Noninteractive Statistical Zero-Knowledge Proofs for Lattice Problems
Aristeidis Tentes
Chris Peikert and Vinod Vaikuntanathan (Proc. of Crypto 2008)
We construct noninteractive statistical zero-knowledge (NISZK) proof systems for a variety of standard approximation problems on lattices, such as the shortest independent vectors problem and the complement of the shortest vector problem. Prior proof systems for lattice problems were either interactive or leaked knowledge (or both). Our systems are the first known NISZK proofs for any cryptographically useful problem not related to integer factorization. In addition, they are proofs of knowledge, have reasonable complexity, and generally admit efficient prover algorithms (given appropriate auxiliary input). In some cases, they even imply the first known interactive statistical zero-knowledge proofs for certain lattice problems. As an additional contribution, we construct an NISZK proof for a special language representing a disjunction (OR) of two or more variables. This may be useful for potential constructions of noninteractive (computational) zero knowledge proofs for NP based on lattice assumptions.
{"url":"http://cs.nyu.edu/crg/newAbstracts/abstract_05_11_09.html","timestamp":"2014-04-19T22:41:51Z","content_type":null,"content_length":"1604","record_id":"<urn:uuid:058d48c2-1787-4e04-bf9f-6f509f9a59f0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
If Institutions Matter a Little, They Matter a Lot

A strong correlation between cooperation and membership in international institutions is not enough to establish that international institutions cause cooperation. If we're to claim that institutions matter, we need to at least identify mechanisms by which institutions might promote cooperation among actors who would otherwise be disinclined to cooperate with one another. The mere fact that such mechanisms can be articulated does not itself tell us whether the correlation is causal, but it lends a certain measure of plausibility to causal interpretations that would otherwise be lacking. Indeed, scholars have identified a variety of such mechanisms, from raising reputation costs to solving coordination problems to monitoring compliance and thereby overcoming information problems. But even committed neo-liberals will generally grant that these arguments merely identify ways in which institutions provide a little push that can make the difference when (and only when) states meet the conditions under which cooperation would occur in an anarchic world. And if that's all that institutions do, then they can't really matter all that much, can they?

Actually, yes. If you grant that international institutions matter at the margins, you've already conceded that they make a big difference to the overall level of cooperation we can expect to observe in the international system. Look below the fold for an explanation.

As so much of this literature does, let's focus on the iterated prisoner's dilemma. The standard story is as follows. In each of an infinite number of stages, \(A\) and \(B\) must simultaneously decide whether to cooperate with one another or to defect. If they both cooperate, they both receive \(3\). If they both defect, they both receive \(2\). If one cooperates while the other defects, the cooperator receives \(1\) while the defector receives \(4\).

In one-shot play, we expect mutual defection. Neither \(A\) nor \(B\) has any incentive to cooperate if the other is expected to defect (since \(2>1\)) nor even if the other is expected to cooperate (since \(4>2\)). But, as has long been recognized, mutual cooperation can be sustained in iterated play.

Suppose, for example, that each side adopts a Grim Trigger strategy, whereby they initially cooperate and in each subsequent round cooperate if and only if their opponent has never defected. Applying a common discount factor of \(\delta \in (0,1)\) to payoffs one period in the future, the expected payoff of adopting a Grim Trigger strategy, conditional upon the other side likewise adopting a Grim Trigger strategy, is \(3 + 3\delta + 3\delta^2 + \ldots\), which is equivalent to \(\displaystyle \frac{3}{1-\delta}\).

Compare this to a single deviation. If either player is going to cheat, they want to do so immediately. There's nothing to be gained, in this setup, by waiting a while before defecting. If they defect, they receive \(4\) in the current stage, but starting in the next period, they will forever after receive \(2\). The net payoff then for this deviation from Grim Trigger is \(4 + \delta\displaystyle\frac{2}{1-\delta}\).

When is \(\displaystyle \frac{3}{1-\delta}\) better than \(4 + \delta\displaystyle\frac{2}{1-\delta}\)? Whenever \(\delta \geq 0.5\). That's the standard take. A long shadow of the future can induce cooperation. Nothing new here.
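Here is a small sketch of that comparison, written in terms of generic prisoner's dilemma payoffs (temptation t, mutual-cooperation reward r, mutual-defection punishment p); the specific numbers are the ones used above, and the second call anticipates the 0.05 adjustment discussed in the next paragraph. The function name and the use of Haskell are choices made for the sketch.

  -- Grim Trigger sustains mutual cooperation against itself whenever
  --   r / (1 - d)  >=  t + d * p / (1 - d),
  -- i.e. whenever the discount factor d is at least (t - r) / (t - p).
  grimTriggerThreshold :: Double -> Double -> Double -> Double
  grimTriggerThreshold t r p = (t - r) / (t - p)

  main :: IO ()
  main = do
    print (grimTriggerThreshold 4    3    2)     -- 0.5, the baseline case
    print (grimTriggerThreshold 3.95 3.05 1.95)  -- 0.45, with the 0.05 adjustment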
Now suppose that \(A\) and \(B\) belong to some international institutions, and that membership in this institution somehow raises the benefit of cooperating while creating a cost for defecting. But let's assume that these effects are very modest -- say an additional \(0.05\) for cooperation and a loss of \(0.05\) for defecting. The standard PD payoffs have no inherent meaning. They are not measured on any scale. In fact, we often treat them as if they are cardinal, but they're basically ordinal. Therefore, to assume that an actor receives a benefit of \(0.05\) whenever they cooperate, and incurs a cost of \(0.05\) whenever they defect, is best interpreted as an assumption of reputation costs that are 5% as large in magnitude as the difference between mutual cooperation and mutual defection, or between mutual cooperation and the temptation payoff, etc. Adopting a Grim Trigger strategy against an opponent who is likewise adopting a Grim Trigger strategy now yields a payoff of \(\displaystyle \frac{3.05}{1-\delta}\). A single-period deviation from Grim Trigger now brings a payoff of \(3.95 + \delta\displaystyle\frac{1.95}{1-\delta}\). The former is greater than the latter so long as \(\delta\geq 0.45\). Thus, when \(0.45 \leq \delta < 0.5\), Grim Trigger can sustain mutual cooperation between states who would have been disinclined to cooperate in the absence of the institution. How important is that difference? Well, it really depends on what you assume about the distribution of \(\delta\). Less technically, it depends on how many states are moderately patient. Among states that care a great deal about the future, cooperation is likely to prevail even in the absence of institutions and norms. Among states that are exceptionally impatient, institutions and norms aren't going to be enough to overcome the short-run temptation to defect. But I don't think many of us think we live in a world of extremes. Suppose we assume that \(\delta\) is normally distributed, say with a mean of \(0.5\) and a standard deviation of \(0.1\), implying that roughly \(68\%\) percent of states will value future payoffs between \(40\%\) and \(60\%\) as much as current ones, and roughly \(95\%\) of states will value future payoffs between \(30\%\) and \(70\%\) as much as current ones. Those numbers are, of course, arbitrary. But they are useful for comparison. In this case, cooperation that would otherwise not occur will be observed in about \(19\%\) of cases. All of this is very stylized, of course. We have no strong theoretical reason for setting the payoffs in the PD to \(4\), \(3\), \(2\), and \(1\). Similarly, one might argue that even \(0.05\) is too big an adjustment. And we really don't have any way of knowing exactly how much states tend to value the future, or how much variation there is in that regard. But, even with those caveats, there's an important takeaway point here. Even relatively small differences in short-term payoffs can have a fairly significant impact on behavior, because small differences add up over time. If you're willing to grant that international institutions can make cooperation just a little bit more attractive, and defection just a little bit less attractive, then you've implicitly granted that international institutions can make the difference between whether states cooperate or not in a fairly large number of cases, at least provided that you think that most states are moderately concerned about the future. And the same exact argument can be made with respect to norms. 2 comments: 1. 
Thanks, Phil. I like this - simple and elegant. Bookmarked for future undergrad course reading assignment. As homework, could make them work out what happens if players use a TFT strategy. About the payoffs, I like to tell students they could use emoticons if they so choose, e.g. "most preferred = :), least preferred = :(, etc" to drive home the point that these are representations; numbers are just more convenient. 1. Glad you got something out of it, TT. Good point about emoticons, at least for ordinal payoffs. :)
{"url":"http://fparena.blogspot.com/2012/11/if-institutions-matter-little-they_12.html","timestamp":"2014-04-16T04:11:19Z","content_type":null,"content_length":"104515","record_id":"<urn:uuid:c71fbd13-5131-4ad0-86ef-9d319e799886>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Mental arithmetic making you look bad? John E. Franklin's "Subtraction for kids" thread brough this to my mind. But I thought its a good topic in its own right. Has anyone else had the annoying problem that just because your an A math student, people think you can do lightening fast arithmetic in your head? And when you can't they think your a fraud? I once worked in a grocery store where the owner couldn't find the zeros of y = -3x^2 + 2x + 7, but his arithmetic was razor sharp as he did it all the time, every day. Me on the otherhand, who could tell you the area under y = -3x^2 + 2x + 7, was not so quick with arithmetic as I rarely practice it anymore. He thought I was mathematically illiterate and often laughed at my slowness. My other skills were of no use as all we ever did was add subtract and multiply. You don't need much else to run a grocery store. People seem to think slow or poor mental arithmetic proves you are not good at math and your just making it up if you say you are. Another time I was adding prices on a reciept where we had to multiply the price times the quantity bought. There was a box with the quantity number, and beside it a box with the price. In some spots, there was a price written in for an item (in pen) but no quantity listed. Now naturally I figured the absense of a coefficient is a coefficient of 1. x = 1*x so I marked each of those items once. But appearently not. No quanity meant 0. My boss laughed at me for being an idiot! Another time in english class my teacher asked me to calculate a percentage for him, and I said "umm..about 30,000"Some people in the room murmered and I realized it was incorrect. It drives you crazy! Moved to Dark Discussions folder - Ricky (You're wish is my command, Mikau) Last edited by mikau (2006-12-14 05:46:05) A logarithm is just a misspelled algorithm. Re: Mental arithmetic making you look bad? Oops.... I meant to post this in dark discussion, silly me.. A logarithm is just a misspelled algorithm. Real Member Re: Mental arithmetic making you look bad? Well, I wouldn't say I'm bad at arithmics. However, I totally get what you mean. Somehow, a lot of people can't understand you can be good at math, even if you're not very sharp at arithmetics. Support MathsIsFun.com by clicking on the banners.What music do I listen to? Clicky click Re: Mental arithmetic making you look bad? Me personally I have a very bad short term memory, I need a number of sentance down on paper, otherwise I forget it quickly. However, short term memory is not required to understand the analytical side of math. My arithmetic is fine with pencil and paper, its neither slow nor inaccurate. But when I do things in my head, I find I loose track of things quickly. A logarithm is just a misspelled algorithm. Re: Mental arithmetic making you look bad? I understand it well .... and you see those people on TV who can do huge sums in seconds. Or people who can calculate poker odds on the fly. Amazing skills, but won't get you a degree. However, you could train yourself to be fast ... that was why I made "Speed Math" (with the dice), and "Money Master" ... but I think I could make something better for the person who really wants to be good at mental math. Anyone interested? "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: Mental arithmetic making you look bad? I'm interested. But I think it demends on whether or not it is possible to "train your memory". I'd like to see what you've got in mind. 
A logarithm is just a misspelled algorithm. Re: Mental arithmetic making you look bad? Dr Kawashima's Brain Training is quite good for training yourself in speed maths. You get given a simple addition, subtraction or multiplication question and you have to write down the answer as fast as you can. There's also the option to add division questions in there, but they're all really simple, mostly single digit numbers. You get either 20 or 100 questions depending on what mode you're on, and you get given the overall time that it took you to answer them all. And there's also an option so that you can say the answers instead of writing them, but I don't usually get as good times on that one because the voice recognition takes a while to verify that I got the right answer. Still, quite good and quite fun. Especially for everyone else if you're doing the voice mode in a crowded place. Why did the vector cross the road? It wanted to be normal. Re: Mental arithmetic making you look bad? Well ... Speed Math must be pretty close to an arithmetic trainer, how should it be changed? Maybe the "answer" boxes could stay fixed (they would have to be smaller and lots more of them) ... maybe in a 10x10 grid for ten times table. And instead of dice just "6x8=" And for harder problems like "1.25 x 1.2 =" there could be multiple choice. What would you like to see? "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: Mental arithmetic making you look bad? I'd definitly say the answer box should stay fixed. I also think dice may not be the best option. That game seems fine for kids but for good practice maybe numeric digits and typable answers. A logarithm is just a misspelled algorithm. Re: Mental arithmetic making you look bad? Mmm, the answer boxes being fixed would definitely be better, as it would be more about speed maths and less about speed searching. But then it would weight it more towards speed mouse-precision... So maybe another possible option would be a box to type the numbers in. I don't know about other people, but I think that I'd be a lot faster if I could type answer rather than clicking on them. If both options were left in, then there'd be a choice and people could just pick the one that they can do faster. I think removing the dice is a good idea as well. They're very good and could be used very well somewhere else, but I think just having the numbers would be better here. That way, you can get into the 'Speed Zone' without having to keep waiting for the dice to stop rolling. Why did the vector cross the road? It wanted to be normal. Re: Mental arithmetic making you look bad? Pretty much exactly what I just said, Mathsy! :-) So thats two votes! A logarithm is just a misspelled algorithm. Real Member Re: Mental arithmetic making you look bad? As a fun little game, the dices are fine. However, if you really want to compete, against yourself or others, numbers and an input-box would be preferable though. Support MathsIsFun.com by clicking on the banners.What music do I listen to? Clicky click Real Member Re: Mental arithmetic making you look bad? Yeah, perhaps a scoreboard or something of the like could be implemented or an entire server could be created (I expect it shouldn't be too hard) so people can race against each other and stuff. It could have lots of people if it's good enough, it would be fantastic. I also support an input box so you can type your answers. 
Oh, and I was wondering: For multiplication with numbers of 2 digits or more, do you actually run through the whole algorithmic working in your head, do you do multiplication by parts (e.g 12*15=10*15+2*15), or do you use your own top-secret formula Real Member Re: Mental arithmetic making you look bad? mikau wrote: Pretty much exactly what I just said, Mathsy! :-) So thats two votes! Three votes. ;D It would be better to enter the numbers on your keyboard, takes too long to find the number in my opinion. Real Member Re: Mental arithmetic making you look bad? Devanté wrote: mikau wrote: Pretty much exactly what I just said, Mathsy! :-) So thats two votes! Three votes. ;D It would be better to enter the numbers on your keyboard, takes too long to find the number in my opinion. I Agree as well so thats 4 votes Dreams don't come true, you gotta make them come true. Re: Mental arithmetic making you look bad? w00t! Can I vote again? :-P A logarithm is just a misspelled algorithm. Re: Mental arithmetic making you look bad? So ... problems like "12x9=" and answers in a box. Got it. Give me a few days. Call it "Speed Math II" ?? "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: Mental arithmetic making you look bad? Seeing as this is for slighlty more advanced peoples, who don't need colorfull dice to entertain them, I'd say we should call it "The Integral of Acceleration Math" A logarithm is just a misspelled algorithm. Real Member Re: Mental arithmetic making you look bad? rida wrote: Devanté wrote: mikau wrote: Pretty much exactly what I just said, Mathsy! :-) So thats two votes! Three votes. ;D It would be better to enter the numbers on your keyboard, takes too long to find the number in my opinion. I Agree as well so thats 4 votes Hey, i voted yes too! So that's 5 then! Super Member Re: Mental arithmetic making you look bad? Mmmm, I share the same problem with Mikau. My long term memory is not good, either. Actually the partial reason why I love maths is that it is the subject that requires the least memorizing!! Literature is always the last subject I want... Last edited by George,Y (2006-12-16 19:21:26) Super Member Re: Mental arithmetic making you look bad? George,Y wrote: Actually the partial reason why I love maths is that it is the subject that requires the least memorizing!! apart from the thousands of identities you have to learn :p The Beginning Of All Things To End. The End Of All Things To Come. Re: Mental arithmetic making you look bad? Toast wrote: Oh, and I was wondering: For multiplication with numbers of 2 digits or more, do you actually run through the whole algorithmic working in your head, do you do multiplication by parts (e.g 12*15=10*15+2*15), or do you use your own top-secret formula The speed math thing doesn't ever ask you to do that, does it? In the most complicated mode, it asks you to add two dice together, then multiply that by what you get when you add another two dice together. So you'd very rarely be asked to multiplt two 2-digit numbers together, and when you were, the numbers would never be above 12. Anyway, to answer you question, I usually split it up like you have. So, 12*15 would be 10*15 + 2*15. For more complicated stuff, I sometimes split both numbers up to get, for example, However, for that particular question, I'd divide 12 by 2 and multiply that to the 15. 
So now I have 12*15 = 6*30, which is a lot easier because it's just two single-digit numbers multiplied together, with an extra 0 on the end. Another way that is sometimes helpful is to use the difference of two squares: (a+b)(a-b) = a² - b². If, for example, I had the question 15*25, then I could recognise that that was (20+5)(20-5) and so use the difference of two squares to change that into 20² - 5² = 375, which would be slightly quicker than working out the original sum. In general, I think the best way to do stuff like that quickly is to know as many methods as possible, and be able to choose the best one for the situation. mikau wrote: Seeing as this is for slighlty more advanced peoples, who don't need colorfull dice to entertain them, I'd say we should call it "The Integral of Acceleration Math" But that's "Velocity Math Plus c". If anything, "The Derivative of Distance Math" would work better. Or, you know, perhaps just "Speed Math II" Why did the vector cross the road? It wanted to be normal. Re: Mental arithmetic making you look bad? Velocity Math plus C! HAHAHA!! :-P But I think Velocity math sounds pretty cool! A logarithm is just a misspelled algorithm. Re: Mental arithmetic making you look bad? "Velocity Math" is cool ... or maybe "Reaction Math"? "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: Mental arithmetic making you look bad? How about mathletics? :-) A logarithm is just a misspelled algorithm.
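The two multiplication shortcuts discussed earlier in the thread can be sanity-checked mechanically; a throwaway snippet (mine, not from the forum):

main :: IO ()
main = do
  print (12 * 15 == 6 * 30)          -- halve one factor, double the other: 180 == 180
  print (15 * 25 == 20 ^ 2 - 5 ^ 2)  -- difference of two squares: 375 == 400 - 25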
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=52634","timestamp":"2014-04-21T12:27:35Z","content_type":null,"content_length":"41803","record_id":"<urn:uuid:f37b9fa2-d77d-4b2e-b1d8-f06d4cd38203>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Functions in Haskell I wasn’t really sure of quite how to start this off. I finally decided to just dive right in with a simple function definition, and then give you a bit of a tour of how Haskell works by showing the different ways of implementing it. So let’s dive right in a take a look at a very simple Haskell definition of the factorial function: fact n = if n == 0 then 1 else n * fact (n – 1) This is the classic implementation of the factorial. Some things to notice: 1. In Haskell, a function definition uses no keywords. Just “`name params = impl`”. 2. Function application is written by putting things side by side. So to apply the factorial function to `x`, we just write `fact x`. Parens are only used for managing precedence. The parens in “`fact (n – 1)`” aren’t there because the function parameters need to be wrapped in parens, but because otherwise, the default precedence rules would parse it as “`(fact n) – 1`”. 3. Haskell uses infix syntax for most mathematical operations. That doesn’t mean that they’re special: they’re just an alternative way of writing a binary function invocation. You can define your own mathematical operators using normal function 4. There are no explicit grouping constructs in Haskell: no “{/}”, no “begin/end”. Haskell’s formal syntax uses braces, but the language defines how to translate indentation changes into braces; in practice, just indenting the code takes care of grouping. That implementation of factorial, while correct, is not how most Haskell programmers would implement it. Haskell includes a feature called pattern matching, and most programmers would use pattern matching rather than the `if/then` statement for a case like that. Pattern matching separates things and often makes them easier to read. The basic idea of pattern matching in function definitions is that you can write what *look like* multiple versions of the function, each of which uses a different set of patterns for its parameters (we’ll talk more about what patterns look like in detail in a later post); the different cases are separated not just by something like a conditional statement, but they’re completely separated at the definition level. The case that matches the values at the point of call is the one that is actually invoked. So here’s a more stylistically correct version of the factorial: fact 0 = 1 fact n = n * fact (n – 1) In the semantics of Haskell, those two are completely equivalent. In fact, the deep semantics of Haskell are based on pattern matching – so it’s more correct to say that the if/then/else version is translated into the pattern matching version than vice-versa! In fact, both will basically expand to the Haskell pattern-matching primitive called the “`case`” statement, which selects from among a list of patterns: *(Note: I originally screwed up the following, by putting a single “=” instead of a “==” inside the comparison. Stupid error, and since this was only written to explain things, I didn’t actually run it. Thanks to pseudonym for pointing out the error.)* fact n = case (n==0) of True -> 1 False -> n * fact (n – 1) Another way of writing the same thing is to use Haskell’s list type. Lists are a very fundamental type in Haskell. They’re similar to lists in Lisp, except that all of the elements of a list must have the same type. 
A list is written between square brackets, with the values in the list written inside, separated by commas, like: [1, 2, 3, 4, 5] As in lisp, the list is actually formed from pairs, where the first element of the pair is a value in the list, and the second value is the rest of the list. Pairs are *constructed* in Haskell using “`:`”, so the list could also be written in the following ways: 1 : [2, 3, 4, 5] 1 : 2 : [3, 4, 5] 1 : 2 : 3 : 4 : 5 : [] If you want a list of integers in a specific range, there’s a shorthand for it using “`..`”. To generate the list of values from `x` to `y`, you can write: [ x .. y ] Getting back to our factorial function, the factorial of a number “n” is the product of all of the integers from 1 to n. So another way of saying that is that the factorial is the result of taking the list of all integers from 1 to n, and multiplying them together: listfact n = listProduct [1 .. n] But that doesn’t work, because we haven’t defined `listProduct` yet. Fortunately, Haskell provides a ton of useful list functions. One of them, “`foldl`”, takes a function *f*, an initial value *i*, and a list *[l[1], l[2],...,l[n]]*, and basically does *f(l[n](…f(l[2], f(i,l[1]))))*. So we can use `foldl` with the multiply function to multiply the elements of the list together. The only problem is that multiplication is written as an infix operator, not a function. In Haskell, the solution to *that* is simple: an infix operator is just fancy syntax for a function call; to get the function, you just put the operator into parens. So the multiplication *function* is written “`(*)`”. So let’s add a definition of `listProduct` using that; we’ll do it using a *`where`* clause, which allows us to define variables or functions that are local to the scope of the enclosing function definition: listfact n = listProduct [ 1 .. n ] where listProduct lst = foldl (*) 1 lst I need to explain one more list function before moving on. There’s a function called “`zipWith`” for performing an operation pairwise on two lists. For example, given the lists “`[1,2,3,4,5]`” and “ `[2,4,6,8,10]`”, “`zipWith (+) [1,2,3,4,5] [2,4,6,8,10]`” would result in “`[3,6,9,12,15]`”. Now we’re going to jump into something that’s going to seem really strange. One of the fundamental properties of how Haskell runs a program is that Haskell is a *lazy* language. What that means is that no expression is actually evaluated until its value is *needed*. So you can do things like create *infinite* lists – since no part of the list is computed until it’s needed. So we can do things like define the fibonacci series – the *complete* fibonacci series, using: fiblist = 0 : 1 : (zipWith (+) fiblist (tail fiblist)) This looks incredibly strange. But if we tease it apart a bit, it’s really pretty simple: 1. The first element of the fibonacci series is “0″ 2. The second element of the fibonacci series is “1″ 3. For the rest of the series, take the full fibonacci list, and line up the two copies of it, offset by one (the full list, and the list without the first element), and add the values pairwise. That third step is the tricky one. It relies on the *laziness* of Haskell: Haskell won’t compute the *nth* element of the list until you *explicitly* reference it; until then, it just keeps around the *unevaluated* code for computing the part of the list you haven’t looked at. So when you try to look at the *n*th value, it will compute the list *only up to* the *n*th value. 
So the actual computed part of the list is always finite – but you can act as if it wasn’t. You can treat that list *as if* it really were infinite – and retrieve any value from it that you want. Once it’s been referenced, then the list up to where you looked is concrete – the computations *won’t* be repeated. But the last tail of the list will always be an unevaluated expression that generates the *next* pair of the list – and *that* pair will always be the next element of the list, and an evaluated expression for the pair after it. Just to make sure that the way that “`zipWith`” is working in “`fiblist`” is clear, let’s look at a prefix of the parameters to `zipWith`, and the result. (Remember that those three are all actually the same list! The diagonals from bottom left moving up and right are the same list elements.) fiblist = [0 1 1 2 3 5 8 ...] tail fiblist = [1 1 2 3 5 8 ... ] zipWith (+) = [1 2 3 5 8 13 ... ] Given that list, we can find the *n*th element of the list very easily; the *n*th element of a list *l* can be retrieved with “`l !! n`”, so, the fibonacci function to get the *n*th fibonacci number would be: fib n = fiblist !! n And using a very similar trick, we can do factorials the same way: factlist = 1 : (zipWith (*) [1..] factlist) factl n = factlist !! n The nice thing about doing factorial this way is that the values of all of the factorials *less than* *n* are also computed and remember – so the next time you take a factorial, you don’t need to repeat those multiplications. 1. #1 JY November 28, 2006 For various (geek-humorous) haskell factorial implementations: 2. #2 Mark C. Chu-Carroll November 28, 2006 I love that link, it’s absolutely hysterical. And I have to admit to having passed through a couple of those phases (although I’ll *never* admit which ones!). 3. #3 Justin November 28, 2006 Nice start. I have been learning Haskell myself over the last 10 weeks, and I’m interested to see where you go. You may have introduced a little too much syntax at once (for example, ‘where’) and I know I didn’t get ‘foldl’ for a long time, so I’m not sure beginners will grok it here. The description of the arguments to foldl are great, but maybe you should make the accumulator aspect more obvious. One thing you may also consider is making these posts literate Haskell, so they could be pasted directly into an editor and run in an interpreter. Bird notation (that is, preceding each code line with “>”) is the easiest to use on a blog, IMHO. Thanks for this series, I’m excited to read the rest! 4. #4 Coin November 28, 2006 And using a very similar trick, we can do factorials the same way: factlist = 1 : (zipWith (*) [1..] factlist) factl n = factlist !! n The nice thing about doing factorial this way is that the values of all of the factorials less than n are also computed and remember – so the next time you take a factorial, you don’t need to repeat those multiplications. How fast is the “!! n” access? That is to say, is the !! operator smart enough to somehow just jump to the nth entry of the list, or will it have to iterate over the first n entries of a stored linked list (an operation that offhand seems like it could be much, much slower than just calculating the factorial manually again, due to n memory accesses being probably slower than n Also, wouldn’t this eventually create memory problems? Like, let’s say that I call the factorial once in the entire program, ever, at the beginning. And let’s say I foolishly ask for element 2,403,324. 
Do I understand correctly that haskell will then calculate and preserve a list of all the factorials 1..2,403,324, and there will be no way to reclaim that memory ever? It’s fascinating the language lets you do that, but this sounds like a dangerous feature. 5. #5 Mark C. Chu-Carroll November 28, 2006 There’s no single answer to your question; it’s up to the compiler to decide. If it’s a frequently used function, there’s a good chance that the compiler is memo-izing the results, which will make it very fast. If not, then it needs to traverse the list, which is potentially slow. Also remember that the compiler doesn’t need t promise that the results will *always* be available without computation. It can generate code that allows the list to be discarded on GC and replaced by the thunk that computes it, so that the next time you reference it, it’ll be recomputed, so it doesn’t necessarily use the memory forever. (And of course, if the compiler can show that the list won’t be referenced again after some point, it can be GCed as normal.) For the memory issue: it’s one of those cases where you, as the developer, need to make an intelligent choice about what’s appropriate for your system. If the function is going to be used rarely, and the way that it gets used will cause the infinite-list version to use huge amounts of space, then it’s not an appropriate implementation technique. Frankly, I would never use a factorial function that was written like that in real code. But there *are* definitely places where the infinite-list/indefinite-list approach is the correct approach in real code. And it’s a good example of what the laziness property of Haskell execution means. 6. #6 Koray November 28, 2006 Coin, the indexed access is as slow as it for lists in other languages. There are other datastructures better suited For that kind of access. This is just a simple example. Haskell implementations have proper GCs, so the generated list will be reclaimed at some point. 7. #7 Daniel Martin November 28, 2006 Note that the standard binary representation of the integer (2,403,324)! will itself require approximately 5.66 MB of memory to store. The base 10 representation of that number is over 14 million digits long. Each additional multiplication near the end of that computation will require over a million 32-bit integer multiplications. I know that GHC links in some impressive bignum libraries, and I know that some wicked fast stuff can be done with vector operations, but we may have other problems here aside from running out of 8. #8 Mark C. Chu-Carroll November 28, 2006 Just for fun, I decided to see how far Hugs can go with the infinite-list factorial function… On my MacBook, it can compute the factorials up to somewhere a little above 15000; trying to call it for numbers larger than that results in a control stack overflow; calling it for numbers somewhere over 45000 turns into an illegal instruction that crashes hugs. GHCI can compute the factorial of 100,000 (!!!), but at a cost of several hundred megabytes. But it can’t get much beyond that without dying. So I don’t think we need to worry much about the factorial of 2 million. 9. #9 Mark C. Chu-Carroll November 28, 2006 Actually, using the *recursive* version, ghci can compute the factorial of 500,000! (At a cost of approximately 180 megabytes!) But beyond that, it hits a stack overflow. And it manages to run both cores of my CPU at close to 100% for several minutes doing it! 10. 
#10 Cale Gibbard November 28, 2006 Colin: While it is true as others have pointed out that the performance of (!!) is up to the compiler, in practice, xs !! n takes O(n) time, because lists really are linked lists. Haskell has arrays with O(1) access time if you want them, but of course, they can’t be infinite. Also, because factlist never goes out of scope, it really will be memoised — it’s easy to prevent that by making it a part of the definition of factl, which would result in it being thrown away after each use. Of course, which of these options you want in any given case is dependent on your problem. Personally, I’d tend to write factorial n = product [1..n] Another thing I’d like to mention is that foldl is not the fold that you most commonly use in Haskell. This is counterintuitive to those who are coming from a strict functional language background, because they know that foldl is tail-recursive, and hence can be optimised into a loop by the compiler. While this is true, due to the nature of lazy evaluation, this loop does little more than building a large expression to evaluate. foldl f z [] = z foldl f z (x:xs) = foldl f (f z x) xs Lazy evaluation is outermost-first. (Together with the provision that values which come from the duplication of a single variable will only be evaluated once and the results shared.) As an example of how lazy evaluation proceeds with foldl: foldl (+) 0 [1,2,3] = foldl (+) (0 + 1) [2,3] = foldl (+) ((0 + 1) + 2) [3] = foldl (+) (((0 + 1) + 2) + 3) [] = ((0 + 1) + 2) + 3 = (1 + 2) + 3 = 3 + 3 = 6 So you can see that most of the time saved in implementing the foldl as a loop will be lost in the reduction of the expression it builds. You can also see the O(n) space usage. Looking at the code for foldl above, since it does nothing but call itself until it reaches the end of the list, foldl also won’t work well with infinite lists. So most of the time, what you really want in Haskell is foldr: foldr f z [] = z foldr f z (x:xs) = f x (foldr f z xs) You can think of foldr f z as replacing each (:) in the list with f, and the [] at the end with z. Note that in the second case, foldr calls f immediately. If for any reason, f doesn’t end up needing its second parameter, the rest of the foldr is never computed — this often occurs when not all of the result is demanded by the rest of the code. So foldr interacts nicely with laziness. There are still a few cases where that’s not really what you want — those cases when the result can’t be partially evaluated (like in taking the sum or product of a list of numbers). These remaining cases (I’d say ~5%, but at worst ~25%) are mostly covered by a variant of foldl called foldl’. The difference between foldl and foldl’ is that foldl’ forces the evaluation of the result being accumulated before recursing (it’s strict). So in that case, you get back the efficiency that foldl has in strict languages, foldl’ can still be implemented as a tight loop, and will operate in constant space. As an aid to figuring out what the various fold and scan functions in Haskell are doing, I’ve produced some diagrams here: http://cale.yi.org/index.php/Fold_Diagrams 11. #11 Cale Gibbard November 28, 2006 Mark: In the case where you got a stack overflow, try foldl’ (*) 1 [1..n]. The stack overflow is being caused because the intermediate calculations of actually multiplying the numbers are not being done until the very end, and you’re running out of stack by then. 
I would expect foldl’ (*) 1 [1..n] to work for all cases until the numbers involved start getting too large to hold in memory. It also runs in log(n) space. 12. #12 Don Stewart November 28, 2006 The illegal instruction after stack overflow in hugs is fixed in the newest hugs. Also, did you look at compiling the code with GHC, and using the compiled version in the GHCi interpreted environment? GHCi doesn’t optimise the code, in order to have fast turn around. Interpreting A.hs as bytecode: $ ghci A.hs *M> :t facts facts :: [Integer] Compiling A.hs to native code, with optimisations: $ ghc -c -O A.hs $ ls A.o $ ghci A.hs Prelude M> :t facts facts :: [Integer] 13. #13 Cale Gibbard November 28, 2006 Oops, I meant O(log(n!)) space of course. 14. #14 Pseudonym November 28, 2006 Mark, there’s a bug in your case code. It should read: case n == 0 of case (n=0) of It's similar to the difference between = and == in C, except that in both cases, they really mean equality, not assignment. The former is "strong equality", the latter is "semantic equality". 15. #15 Pseudonym November 28, 2006 Oh, one more thing. You can compute factorials beyond 500,000 using only slightly more sophisticated code. There are two problems with the recursive factorial function as presented. 1. It uses O(n) stack space to compute fact n. 2. It performs O(n^2) operations. The reason for this is not obvious, but it’s to do with the complexity of large integer multiplication. It’s less expensive if you try to arrange to multiply only numbers of similar magnitude, because then you can use faster multiplication algorithms, like Katsuba or FFT. Try this on for size: factBinary n = recursiveProduct 1 n recursiveProduct m n | m - n < 5 = product [m..n] | otherwise = recursiveProduct m mid * recursiveProduct (mid+1) n where mid = (m + n) `div` 2 O(log n) stack space, and something like O(n * log n * log log n) operations. 16. #16 Pseudonym November 28, 2006 Killed by HTML markup again. That second function should read: recursiveProduct m n | m - n < 5 = product [m..n] | otherwise = recursiveProduct m mid * recursiveProduct (mid+1) n where mid = (m+n) `div` 2 17. #17 Mark C. Chu-Carroll November 28, 2006 Ack. Yes, I did blow the case statement; the *one* thing that I didn’t bother to test, because it was just for explanatory purposes, and that’s the one where I make a stupid mistake! 18. #18 Kobold November 28, 2006 Pseudonym: As it stands, GHC uses GMP for integers. GMP is pretty kickass, and uses good multiplication algorithms including FFT multiplication for really large numbers. If you really want to take it to the next step speed-wise, implement a prime-swing algorithm: http://www.luschny.de/math/factorial/FastFactorialFunctions.htm 19. #19 Thomas Sutton November 28, 2006 Are you sure that the definition of fact using pattern matching is sugar for that particular case expression? It seems strange that the compiler would translate a pattern into something that requires that the type being matched be a member of Eq and would require nested expressions with more than two patterns (rather than just more branches on the one case expression). I’d expect it to transform to something like: fact n = case (n) of 0 -> 1 _ -> n * fact (n - 1) 20. #20 Pseudonym November 28, 2006 Kobold: GMP won’t use the fast multiplication algorithms effectively if you give it a couple of bad numbers to multiply. Incidentally, I should plug my own little library of Haskell code, which implements a couple of the fast factorial algorithms. 21. 
#21 Pseudonym November 28, 2006 Thomas Sutton, the Haskell language report states that this is an identity: case v of { k -> e; _ -> e') = if (v==k) then e else e' where k is a numeric, character or string literal. This works because the built-in Eq instances for numeric, character and string types all do the right thing. So while you’re right that it probably gets translated into a case expression, Mark’s translation is completely equivalent. 22. #22 Don Stewart November 28, 2006 To find out what GHC actually does to your code, add -ddump-simpl -O to the command line, and we see: fact 0 = 1 fact n = n * fact (n – 1) compiles (yay for compilation via transformation!), to: fact n = case n of 0 -> 1 _ -> case fact (n – 1) of m -> m * n (modulo a worker/wrapper split) 23. #23 ittay November 29, 2006 > factlist = 1 : (zipWith (*) [1..] factlist) won’t that produce a list [1 1 2 6 ...] ? then factl 2 gives you 1 24. #24 Masklinn November 29, 2006 there is an other way to express the factorial in Haskell by the way: let fact = product.enumFromTo 1 25. #25 Masklinn November 29, 2006 > won’t that produce a list [1 1 2 6 ...] ? then factl 2 gives you 1 That will produce a list [1,1,2,6,24,..] indeed, but Haskell lists are 0-indexed, so (factlist !! 2) will yield 2, and (factlist !! 20) will yield 2432902008176640000 which should be correct. Prelude> take 20 factlist 26. #26 Daniel Martin November 29, 2006 Masklinn, for people familiar with other languages and not with Haskell, you might point out that the expression fact = product.enumFromTo 1 means this: fact = (product) . (\x -> enumFromTo 1 x) And that . is the Haskell way of saying “function composition”. It’s just that currying (especially the automatic currying that Haskell does) is a very uncommon language feature, and many languages use . as a syntax feature to access properties that are part of a larger structure, and not as an operator with relatively low binding. This makes your rephrasing of factorial particularly cryptic. (I must say, I’ve never understood the “point free” school of Haskell programming that says “avoid formal arguments whenever possible”) 27. #27 r.g. May 3, 2007 I did the following in python 2.3.4: def f(x): return reduce(lambda a, b: a*b, xrange(1, x+1)) Calculating factorial of 100.000 took 3.5 cpu minutes on Intel(R) Pentium(R) D CPU 3.20GHz
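To make the comment thread's two suggestions concrete, here is one way to put them into a single runnable file. The module layout and names are mine, and the guard in comment #15 reads "m - n < 5", which I have written in the direction that actually shrinks the range:

import Data.List (foldl')

-- Strict left fold, as suggested in comment #11: constant stack space.
factStrict :: Integer -> Integer
factStrict n = foldl' (*) 1 [1 .. n]

-- Divide-and-conquer product from comments #15/#16: keeps the factors of
-- each multiplication at comparable sizes, which is friendlier to the
-- fast big-integer multiplication algorithms mentioned in the thread.
factBinary :: Integer -> Integer
factBinary n = recursiveProduct 1 n
  where
    recursiveProduct m k
      | k - m < 5 = product [m .. k]
      | otherwise = recursiveProduct m mid * recursiveProduct (mid + 1) k
      where
        mid = (m + k) `div` 2

main :: IO ()
main = print (factStrict 20 == factBinary 20)  -- True; both give 2432902008176640000

The strict fold keeps the accumulator evaluated as it goes, so no tower of unevaluated multiplications builds up; the recursive split is what lets the numbers stay balanced in size near the end of a very large factorial.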
{"url":"http://scienceblogs.com/goodmath/2006/11/28/simple-functions-in-haskell-1/","timestamp":"2014-04-20T14:18:11Z","content_type":null,"content_length":"73061","record_id":"<urn:uuid:70f666df-b8df-476a-b578-170ec6af0188>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerically solve convex lagrange multiplier problems with conjugate gradient descent. Here is an example from the Wikipedia page on Lagrange multipliers. Maximize f(x, y) = x + y, subject to the constraint x^2 + y^2 = 1 >>> maximize 0.00001 (\[x, y] -> x + y) [(\[x, y] -> x^2 + y^2) <=> 1] 2 Right ([0.707,0.707], [-0.707]) The first elements of the result pair are the arguments for the objective function at the minimum. The second elements are the lagrange multipliers. Helper types newtype FU a Source A newtype wrapper for working with the rank 2 types constraint functions. :: Double -> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r The function to maximize -> [Constraint Double] The constraints as pairs g <=> c which represent equations of the form g(x, y, ...) = c -> Int The arity of the objective function which should equal the arity of the constraints. -> Either (Result, Statistics) (Vector Double, Vector Double) Either an explanation of why the gradient descent failed or a pair containing the arguments at the minimum and the lagrange Finding the maximum is the same as the minimum with the objective function inverted :: Double -> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r The function to minimize -> [Constraint Double] The constraints as pairs g <=> c which represent equations of the form g(x, y, ...) = c -> Int The arity of the objective function which should equal the arity of the constraints. -> Either (Result, Statistics) (Vector Double, Vector Double) Either an explanation of why the gradient descent failed or a pair containing the arguments at the minimum and the lagrange This is the lagrangian multiplier solver. It is assumed that the objective function and all of the constraints take in the same amount of arguments. Experimental features feasible :: (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) -> [Constraint Double] -> [Double] -> BoolSource WARNING. Experimental. This is not a true feasibility test for the function. I am not sure exactly how to implement that. This just checks the feasiblility at a point. If this ever returns false, solve can fail.
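Since minimize is documented with the same shape as maximize, the corresponding call for the same constraint would look like the following sketch (not taken from the package docs; the argument vector should come out near (-0.707, -0.707), but the exact multiplier value depends on the library's sign convention, so it is left unstated):

>>> minimize 0.00001 (\[x, y] -> x + y) [(\[x, y] -> x^2 + y^2) <=> 1] 2
Right ([-0.707,-0.707], ...)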
{"url":"http://hackage.haskell.org/package/lagrangian-0.5.0.0/docs/Numeric-AD-Lagrangian.html","timestamp":"2014-04-19T16:09:57Z","content_type":null,"content_length":"17746","record_id":"<urn:uuid:87428a78-f053-4176-b1db-96269060c1a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
White Plains, NY Precalculus Tutor Find a White Plains, NY Precalculus Tutor ...In the past, for example, I've drawn pictures, sung songs, broken down information into bullet points, acted it out, made flash cards, and so many more! Whatever works best for the student works for me. I travel to student's homes (or any other location you prefer) to make the experience as convenient as possible for you. 37 Subjects: including precalculus, chemistry, physics, calculus ...B. Saul Agricultural and Murrell Dobbins High Schools in Philadelphia. My previous experiences with education have been through The Johns Hopkins University CTY Summer Program as a Resident Assistant and Teaching Assistant. 26 Subjects: including precalculus, calculus, writing, statistics ...Based on teaching in the classroom and one-on-one tutoring session, I think that anyone can learn math and science. Many people approach math and science with some fear and an idea that it's hard work. Math and science are about thinking logically, which everyone can do. 10 Subjects: including precalculus, physics, writing, algebra 2 Hi, I'm a Ph.D. Chemist with fifteen years of experience teaching and tutoring undergraduate Organic Chemistry. A multitude of such students have approached me over the years feeling frustrated about Organic Chemistry. 10 Subjects: including precalculus, chemistry, calculus, algebra 1 I am currently an adult Ramapo College student getting a second degree in mathematics and will be certified to teach K-12 mathematics in NJ. My professional background has been centered around finance and information technology. Recently, I have been at home raising two children. 12 Subjects: including precalculus, calculus, geometry, statistics Related White Plains, NY Tutors White Plains, NY Accounting Tutors White Plains, NY ACT Tutors White Plains, NY Algebra Tutors White Plains, NY Algebra 2 Tutors White Plains, NY Calculus Tutors White Plains, NY Geometry Tutors White Plains, NY Math Tutors White Plains, NY Prealgebra Tutors White Plains, NY Precalculus Tutors White Plains, NY SAT Tutors White Plains, NY SAT Math Tutors White Plains, NY Science Tutors White Plains, NY Statistics Tutors White Plains, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/white_plains_ny_precalculus_tutors.php","timestamp":"2014-04-16T04:26:32Z","content_type":null,"content_length":"24264","record_id":"<urn:uuid:89f63d3e-6e80-4ec1-ad3b-34dc439c9a45>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
substitution of equals for equals Major Section: INTRODUCTION-TO-THE-THEOREM-PROVER Anytime you have an equality hypothesis relating two terms, e.g., (equal lhs rhs) it is legal to substitute one for the other anyplace else in the formula. Doing so does not change the truthvalue of the formula. You can use a negated equality this way provided it appears in the conclusion. For example, it is ok to transform (implies (true-listp x) (not (equal x 23))) (implies (true-listp 23) (not (equal x 23))) by substitutions of equals for equals. That is because, by propositional calculus, we could rearrange the formulas into their contrapositive forms: (implies (equal x 23) (not (true-listp x))) (implies (equal x 23) (not (true-listp 23))) and see the equality as a hypothesis and the substitution of 23 for x as sensible. Sometimes people speak loosely and say ``substitution of equals for equals'' when they really mean ``substitutions of equivalents for equivalents.'' Equality, as tested by EQUAL, is only one example of an equivalence relation. The next most common is propositional equivalence, as tested by IFF. You can use propositional equivalence hypotheses to substitute one side for the other provided the target term occurs in a propositional place, as discussed at the bottom of propositional calculus. Now use your browser's Back Button to return to the example proof in logic-knowledge-taken-for-granted.
{"url":"http://www.cs.utexas.edu/users/moore/acl2/v6-3/LOGIC-KNOWLEDGE-TAKEN-FOR-GRANTED-EQUALS-FOR-EQUALS.html","timestamp":"2014-04-16T05:27:39Z","content_type":null,"content_length":"2547","record_id":"<urn:uuid:3a7ae92c-abdf-4e0a-8e7f-158c42be54a4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Minnesota Supercomputing Institute Research Abstracts Online January 2008 - March 2009 University of Minnesota Twin Cities Institute of Technology Department of Mechanical Engineering PI: Paul J. Strykowski Co-PI: Terrence W. Simon, Associate Fellow Computational Modeling of Free Convection in Fluid Porous Layers Heated From Below These researchers are carrying out a numerical investigation to model the buoyancy-induced flow and heat transfer in a partially saturated porous medium. This type of convective motion may occur around a nuclear waste repository in crystalline rock. The system will be modeled as a cylinder that is partially filled with metal foam and is heated from below with a uniform heat flux. The continuum conservation equations and the local volume averaged equations will be applied to determine the velocity and temperature fields in the fluid layer and porous layer, respectively. The temperature solutions are utilized to determine the Nusselt number as a function of Raleigh number. The coupled momentum and energy equations will be discritized using the finite volume method. The second order upwind will be applied to handle the convection-diffusion flux at the control volume faces, and the SIMPLER algorithm will be utilized to handle the velocities and pressure coupling. The discritized equations are then solved iteratively using the Guess-seidel scheme and the convergence rate will be enhanced using a multigrid technique. Group Members Aiman Alshare, Graduate Student Guus B. Jacobs, Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, Illinois Deborah Kacmarynski, Graduate Student Farzad Mashayek, Department of Mechanical and Industrial Engineering, University of Illinois, Chicago, Illinois
{"url":"https://www.msi.umn.edu/about/publications/annualreport/AR2008-09/08-09abstracts/Strykowski-Simon.html","timestamp":"2014-04-19T19:47:48Z","content_type":null,"content_length":"10280","record_id":"<urn:uuid:2d340da6-0774-4855-b717-164f904ded9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 5 Part 1: Home and School Investigation Send the Letter to Family (PDF file) home with each child. Once all of the children have brought in their flower pictures, put the children in pairs. Ask them to write a number sentence that tells how many flowers they have in all. You might want to display the flower pictures and the number sentences that go with each pair in the classroom. Part 2: Be an Investigator A good time to do this investigation is after Lesson 5 on writing a number sentence. Introducing the Investigation Introduce the investigation by telling children that they are going to write a story about ladybugs flying onto a flower. Show them the Investigator Worksheet and say, You will decide how many ladybugs are on the flower to start with. It needs to be fewer than six ladybugs. Draw a picture of those ladybugs on the flower. Then decide how many ladybugs will be flying onto the flower. Again there needs to be fewer than six ladybugs. Draw a picture of those ladybugs flying. Finally, write a number sentence below that tells how many ladybugs there are in Put the children in pairs to work on the investigation. Doing the Investigation You may want to use the following questions to help children get started. • How many ladybugs are on the flower? • How many ladybugs are flying? • How many ladybugs are there in all? • How can you show that with a number sentence? Have the children share their ladybug and flower stories. Extending the Investigation Have children tell more stories, drawing the pictures and writing the number sentences to go with the stories. They may choose to draw insects other than ladybugs.
{"url":"http://www.eduplace.com/math/mw/minv/1/minv_1c5.html","timestamp":"2014-04-19T09:37:46Z","content_type":null,"content_length":"4806","record_id":"<urn:uuid:390e52ab-1d06-4338-9c25-b781c322c96b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Relation between acceleration and velocity Hi all, so my question is can i carryout normal algebraic operations on derivatives, for example: v=ds/dt and a=dv/dt then eliminating dt a=(dv/ds) *v then, a *ds= v*dv is that how you derive the relationship between acceleration, velocity and displacement? Derivatives are not ordinary fractions, they are limits. But it is quite common in derivations to treat differentials as if they were small numbers, and manipulate derivatives as if they were ratios of small numbers. But it's a heuristic--it's a way to get a hint as to the form of the answer, but it's not rigorous. Usually, differentials (quantities like ds) are just used as intermediate steps in a nonrigorous derivation. There are three ways that a differential can appear in the final result: 1. As a part of a derivative. 2. As an integrand to an integral. 3. As a finite difference equation. Here are those three ways of interpreting your result [itex]a \cdot ds= v \cdot dv[/itex]: 1. As a part of a derivative: [itex]a \cdot \dfrac{ds}{dt} = v \cdot \dfrac{dv}{dt}[/itex] 2. As an integrand to an integral: [itex]\int a \cdot ds = \int v \cdot dv[/itex]. If we multiply the left side by the mass [itex]m[/itex], then it gives the work done by the force acting on the object: [itex]W = \int F \cdot ds = \int m a \cdot ds[/itex]. On the right side, if we multiply by [itex]m[/itex], then it gives the change in kinetic energy: [itex] m \int v \cdot dv = \int d(\frac{1}{2} m v^2)[/itex]. So this says that the work done by the forces acting on an object is equal to the change in the kinetic energy of the object. 3. As a finite difference equation. It's approximately true that [itex]a \delta s = v \delta v[/itex], where [itex]\delta s[/itex] is the change in position during a small time period [itex]\delta t [/itex], and [itex]\delta v[/itex] is the change in velocity during that time period.
{"url":"http://www.physicsforums.com/showthread.php?s=443f0b2b5b60627d5498a5bfd0e6d26b&p=4307440","timestamp":"2014-04-16T04:33:29Z","content_type":null,"content_length":"42672","record_id":"<urn:uuid:fe6b39dd-9cca-429c-99b6-4684249ab873>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
area of ellipse March 29th 2009, 10:37 PM #1 Mar 2009 area of ellipse I would like to know how to calculate the area of ellipse and its total length from the given data.. Rx-x radius Ry-y radius start angle end angle Center of Ellipse (x,y) can anyone help me pleezzz......... The area of the ellipse is simply pi times the multiplication of the two axes: Area = pi*(Rx-x)*(Ry-y). I don't know what you mean by start or end angle and total length of ellipse, but I am assuming you are referring to the angle t formed the by the line from the center to the circumference. Then the perimeter of the ellipse is: 4*(Rx-x)*INTEGRAL_{0 to pi/2}[sqrt(1-((Rx-x)^2-(Ry-y)^2)/(Rx-x)^2 * sin(t)) * dt]. Area and perimeter of elliptical arc I had get this reading from an autoCAD image (civil Plan) Center of Ellipse: X , Y Start Angle: sa End Angle: ea Rx: rx Ry: ry these are the parameter which i have. Could any one help me to find the area and perimeter.This to apply in software program March 29th 2009, 10:52 PM #2 Junior Member Mar 2009 April 1st 2009, 09:22 PM #3 Mar 2009
{"url":"http://mathhelpforum.com/geometry/81399-area-ellipse.html","timestamp":"2014-04-17T03:06:52Z","content_type":null,"content_length":"34255","record_id":"<urn:uuid:d2a2d82e-aced-403d-8d73-15f9a15986dc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Ti83plus versus ti89 in math 2C Click here to go to the NEW College Discussion Forum Discus: SAT/ACT Tests and Test Preparation: May 2003 Archive: Ti83plus versus ti89 in math 2C i know the ti89 is more versatile and it seems that many ppl are using it for the sat math 2c, but i was wondering if using an 89 gives me any advantage over an 83 plus, b/c i was encouraged at my school to get an 83plus because that's the standard one everyone uses, some have 89's, but not many, so can i still write the math 2c and expect to get an 800 with an 83plus (considering i'll study my ass off of course, but i mean from an electronic function standpoint) the 89 did around 75 percent of the work for me on the math IIc, I mean I could probably do most of the stuff with the 83 and by hand, but why would i want to, I would also waste alot of time doing that, get an 89, when you take calculus you'll need one then anyways Ndhawk, can you please elaborate on what exactly the 89 helped you with. I have one and I would like to learn to use it very well for my SAT II- can you tell me what features to concentrate on. Also, do you know any programs useful for the SAT II? the solver and the matrix for me. the ti89 interface is a lot more user-friendly than 83 i think... and the solver's more accurate and takes a lot less time. like ti83 you need to have the guess and sometimes the root that comes up isn't the one you want, whereas w/ ti89 it gives you all answers. oh, and ti89 gives you exact answers if you want (like 4/7 instead of .57142...) which is VERY useful if you're dealing w/ fractions etc. interesting, i know with 83plus you can get a matrix solver as a program, but this other solver thing, is that for equations, like solve for one variable or what?? The solve function is excellent and very important for calculus. You can do hand work to get an equation. (like 2x=6.) With TI-89 (im not sure about 83-plus) you press algebra, solve, then type in the equation, write: (equation), dx press enter and it solves for x! know how to do everything trig-related, if i remember correctly there are alot of inverse sine and inverse cosine equations you have to solve, lots of times you'll need to be able to manipulate the equation so it will work in the solver, i dont know how to do matrices, I got an 800 on math IIc, also graphing lots of stuff helps, and if it asks about polars or something, just switch to polar mode and graph it, and go from there, know how to do logarithms, and know the change of base theorem, you might also want to store some of the formulas for geometric and arithmetic infinite and finite series and sequences, oh yea learn how to do summations in your calculator, i dont know if its possible to do limits on the 89? but if possible that would be helpful to you as well, save stuff like the law of sines and law of cosines in your calc as well if you dont know them, use the calc to factor if needed, and if it gives you any expression with a bunch of x's and y's, they do this alot, then just type it all in and hit enter, and the calc will simplify it to one of the answer choices for you what on earth is a summation?? a summation is when you add up several areas using: height times change in x. When change of x approaches zero, the area gets even more exact and you have a definite integral (calculus) Report an offensive message on this page E-mail this page to a friend
{"url":"http://www.collegeconfidential.com/discus/messages/69/11652.html","timestamp":"2014-04-16T07:24:55Z","content_type":null,"content_length":"16575","record_id":"<urn:uuid:75b9e567-0cc6-4a2b-b8a2-ecb856c5b88a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: a math riddle. what is better - 1 million dollars tomorrow or 1 penny each day that doubles for a whole year. ex 1st day = 1 penny, 2nd day is 2 , 3rd= 4 and so on for a year Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4df177d40b8b370c28bb29c1","timestamp":"2014-04-19T02:03:49Z","content_type":null,"content_length":"58627","record_id":"<urn:uuid:187f57a1-f2c8-40d9-9c06-ca2697eabadc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellefonte, DE Algebra 2 Tutor Find a Bellefonte, DE Algebra 2 Tutor ...One quick note about my cancellation policy, as it's different than most tutors: Cancel one or all sessions at any time, and there is NO CHARGE. Thank you for considering my services, and the best of luck in all your endeavors! Warm regards, Dr. 14 Subjects: including algebra 2, physics, calculus, ASVAB ...If you are interested in having me as your tutor, I look forward to hearing from you! On the other hand, if you have chosen one of the other talented tutors, keep on learning! "An expert problem solver must be endowed with two incomparable qualities: a restless imagination and a patient pertina... 9 Subjects: including algebra 2, chemistry, geometry, algebra 1 ...I have a great interest in European history and have traveled to Europe several times specifically to see things I have learned in European history. I enjoy reading books on the World wars and how European relations affected the initiation and outcome. While I have no formal degree in Government and politics, I have a great passion for it. 14 Subjects: including algebra 2, chemistry, physics, geometry I have taught middle school and high school mathematics in northern Virginia for 8 years. I have tutored privately most of that time as well. I know that everyone learns in a different way and I try to use real world objects, models and examples to help students understand abstract concepts with which they may be struggling. 28 Subjects: including algebra 2, calculus, geometry, ASVAB ...I've also taught 5th-12th grade English and history. In the tutoring area, I have taught most all academic subjects and I have had some really rewarding results with children with ADD/ADHD. I also have two master's degrees and certification in elementary and upper school. 21 Subjects: including algebra 2, English, reading, writing Related Bellefonte, DE Tutors Bellefonte, DE Accounting Tutors Bellefonte, DE ACT Tutors Bellefonte, DE Algebra Tutors Bellefonte, DE Algebra 2 Tutors Bellefonte, DE Calculus Tutors Bellefonte, DE Geometry Tutors Bellefonte, DE Math Tutors Bellefonte, DE Prealgebra Tutors Bellefonte, DE Precalculus Tutors Bellefonte, DE SAT Tutors Bellefonte, DE SAT Math Tutors Bellefonte, DE Science Tutors Bellefonte, DE Statistics Tutors Bellefonte, DE Trigonometry Tutors
{"url":"http://www.purplemath.com/Bellefonte_DE_algebra_2_tutors.php","timestamp":"2014-04-20T06:48:04Z","content_type":null,"content_length":"24202","record_id":"<urn:uuid:4963d1aa-ef6c-459a-b560-ee108e378d8d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability GHC, Hugs (MPTC and FD) Stability stable Maintainer robdockins AT fastmail DOT fm Safe Haskell None Skew heaps. • Daniel Sleator and Robert Tarjan. "Self-Adjusting Heaps". SIAM Journal on Computing, 15(1):52-69, February 1986. Type of skew heaps data Heap a Source Ord a => Eq (Heap a) (Eq (Heap a), Ord a) => Ord (Heap a) (Ord a, Read a) => Read (Heap a) (Ord a, Show a) => Show (Heap a) (Ord a, Arbitrary a) => Arbitrary (Heap a) (Ord a, CoArbitrary a) => CoArbitrary (Heap a) Ord a => Monoid (Heap a) (Eq a, Monoid (Heap a), Ord a) => CollX (Heap a) a (CollX (Heap a) a, Ord a) => OrdCollX (Heap a) a (CollX (Heap a) a, Ord a) => Coll (Heap a) a (Coll (Heap a) a, OrdCollX (Heap a) a, Ord a) => OrdColl (Heap a) a CollX operations Coll operations OrdCollX operations OrdColl operations
{"url":"http://hackage.haskell.org/package/EdisonCore-1.2.2/docs/Data-Edison-Coll-SkewHeap.html","timestamp":"2014-04-18T13:48:11Z","content_type":null,"content_length":"46009","record_id":"<urn:uuid:3bda81ad-b799-4602-9b71-945b720a0b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
solving semilinear PDE November 24th 2008, 11:37 AM #1 solving semilinear PDE Using the method of characteristics, show that the semilinear PDE $(x+y)u_x + (x-y)u_y = 2x$ has the general solution $H(u-x-y, y^2 + 2xy -x^2) = 0<br />$ Find the solution for the case $u(x,0) = 2x$ Last edited by Jason Bourne; November 27th 2008 at 05:03 AM. I think I need to solve the differential equation $\frac{dy}{dx} = \frac{x-y}{x+y} <br />$ Does anyone know how to solve the above equation to get y as a function of x? It's homogeneous: $(x+y)dy=(x-y)dx$ so let $y=vx$. Gets a little messy but I get $y^2+2xy-x^2=h$. Still though I think we could let $w=y^2+2xy-x^2$ and $z=y$ to solve the PDE. I'm probably not doing it the way you want but I get for one solution: $u(x,y)=2\left(y+1/2\sqrt{y^2-2xy+x^2}\right)+f(y^2+2xy-y^2)$ and supplying your side condition, then $f(x,y)=0$ November 27th 2008, 05:02 AM #2 November 27th 2008, 07:53 AM #3 Super Member Aug 2008
{"url":"http://mathhelpforum.com/differential-equations/61398-solving-semilinear-pde.html","timestamp":"2014-04-17T08:58:14Z","content_type":null,"content_length":"37528","record_id":"<urn:uuid:f174e546-4f9d-42c3-9c8e-b2b2b2027bba>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
age-mass-magnitude relation Yes, it does. At least most of it. The first relation you quote (L~M^3) is wrong, it holds for stars, not for star clusters. For a star cluster (of sufficiently high mass, higher than about 1000 Msun) the mass and luminosity are linearly proportional to each other (makes sense right? N times more stars, N times more light). We can get the age from the HR diagram? I presume you mean you can estimate the age from the HR diagram main-sequence turn-off point. That is true. But if you have all stars in an HR diagram too, you might also have its V magnitude (unless it's in bolometric luminosities). To get the plot you are referring you need to convolve all spectra of all stars (or the composite spectrum of all stars) with the V-band filter profile and compare that to a reference spectrum that belongs to the magnitude system (flat for AB magnitudes, Vega's spectrum for vegamags and so on). Let me know if you need more info!
{"url":"http://www.physicsforums.com/showthread.php?p=3265544","timestamp":"2014-04-18T13:42:53Z","content_type":null,"content_length":"29673","record_id":"<urn:uuid:41164dd5-4a7e-4cfc-acb8-2e56eb2fb474>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Concepts: Fractions, Greater Than, Less Than, Equivalence, Inequality What You Can Do: Fractions are incredibly useful in describing situations mathematically. However, fractions are a difficult concept for most students, possibly because kids don’t receive enough exposure to them in environments beyond the classroom. Look for opportunities to use fraction concepts in daily life. As a very simple example, having a child split a treat with a friend is a natural occurrence of 1/2. When a child is picking out clothes to wear in the morning, mention that 4/5 of the shirts being considered are blue. It’s this continual exposure to fractions in daily life that helps students become familiar with the part-whole relationship between the numerator and denominator. For comparing fractions, a fraction chart or fraction strips can be very useful. Math in the Game: Players create fractions that are larger or smaller than their opponent, which makes the math in the game immediately evident. However, students can learn more about fractions by using the Fraction Bar Chart. Using this tool, students can see that the fractions 1/4 and 2/8 are equivalent, or that 2/3 is greater than 4/7. Related Resources: Investigating Fractions with Pattern Blocks Students investigate the relationship between parts and wholes by comparing pattern blocks of various sizes. For instance, 6 green triangles have the same area as 1 yellow hexagon, so the area of the green triangle is equal to 1/6 the area of the yellow hexagon. Making and Investigating Fraction Strips Students make and use a set of fraction strips to discover fraction relationships and work with equivalent fractions.
{"url":"http://calculationnation.nctm.org/Games/GameDirections.aspx?GameId=791a122f-bfcb-4b10-9dec-217d5aafb6af","timestamp":"2014-04-20T10:47:42Z","content_type":null,"content_length":"22087","record_id":"<urn:uuid:5b4ce462-31e8-417f-a8e2-c511aac93a6b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
06. Fourier Analysis Fourier analysis is a fascinating activity. It deals with the essential properties of periodic waveforms of all kinds, and it can be used to find signals lost in apparently overwhelming noise. As just one example of its usefulness, if SETI (the Search for Extraterrestrial Intelligence) should ever detect an alien signal, that discovery will be made using Fourier analysis. It's important to say that, even though this article deals with simple waveforms, Fourier analysis is by no means limited to these classic examples — it can analyze and process images, it can efficiently compress images and video streams, and it can assist in visual pattern recognition, where a complex pattern may be efficiently and concisely described using a set of Fourier terms. One of the principles of Fourier analysis is that any imaginable waveform can be constructed out of a carefully chosen set of sinewave components, assembled in a particular way (the frequency -> time task). And conversely, any complex periodic signal can be broken down into a series of sinewave components for analysis (the time -> frequency task). And most important, the described tasks are reciprocal operations — in the same way that integration and differentiation are reciprocal operations in Calculus, encoding and decoding signals are reciprocal operations in Fourier analysis. Using that notion as a guide, this page has a section in which we will construct complex signals using sinewave components, and another in which we will decode complex signals into their components. The Frequency -> Time Task This section deals with the task of creating time-domain signals out of individual sinewave components in the frequency domain. First, download this Fourier example file and load it into Maxima. It contains a set of example functions that show aspects of Fourier analysis. In this section we'll be using Maxima to build complex waveforms out of simple components. To reiterate, any signal can be built out of individual sinewave components, arranged in particular ways. Remember this the next time you're listening to your favorite music — in principle, it can be created out of a mathematical description consisting only of sinewave frequencies, amplitudes and phases. Let's begin with a table of classic wave equations and corresponding graphs: Name Equation Graph Rectified Sine Obviously these graphs weren't created by summing terms from 1 to ∞ as indicated by the equations in the left column. In practice one chooses a summation number that balances display quality with a reasonable running time. But by choosing different upper bounds for these summations, we can learn something important about signal generation. Make sure you have loaded this Fourier examples file into Maxima, then enter this at the keyboard: This will accumulate one element from the square-wave equation, which, if you think about it, must be a sine wave (because all these generators use individual sine waves as their building blocks). Now change your entry to this: Notice the difference. Now change the number, go as high as you like, considering the issue of running time, and see how the plotted wave changes. You will begin to see a pronounced "ringing" transient at the switching points in the waveform (like the graph on this page), and the ringing will become more severe as the number of accumulations increases. 
This ringing is the well-known Gibbs phenomenon; it is not an artifact of an imperfect generation method but a genuine property of truncated Fourier series, and the overshoot near each switching point approaches a fixed fraction (roughly 9%) of the jump no matter how many terms are summed. In case it isn't obvious from the equation listed above, the square wave is created by accumulating the odd multiples of a specified frequency — in this way (where x = 2 π f t):

y = (4/π) (sin(x) + 1/3 sin(3x) + 1/5 sin(5x) + 1/7 sin(7x) ...)

These sine terms are combined to create the time-varying waveform in the display. And if the opposite operation were to be performed on the result (something called a Fourier transform), these individual harmonic elements would reappear. It is important to reemphasize that waveform generation, and the Fourier transform, are reciprocal operations. You can use frequency components to generate a waveform in the time domain, then transform the result back to the frequency domain and recover what you started with. This reciprocal relationship is to Fourier analysis what the Fundamental Theorem of Calculus (the idea that integration and differentiation are reciprocal operations) is to Calculus.

The Fourier examples file contains a set of waveform generating functions, one for each kind of waveform in the above table. Here are their names:

• fpsq(n); (square)
• fptri(n); (triangle)
• fpsaw(n); (sawtooth)
• fprec(n); (rectified sine)

Play with these functions — each will create a graph that results from summations of the generating function, and you choose the value for n.

The Fourier examples file also has functions to create multiple plots for each choice; each plot overlays the partial sums as separate traces (figures not reproduced here). Here is a list of the multiple-plot functions:

• fsqmult(n); (square)
• ftrimult(n); (triangle)
• fsawmult(n); (sawtooth)
• frecmult(n); (rectified sine)

These functions take much more time to generate their plots (because they must perform a summation for each individual trace in the plot), so don't enter large numbers unless you are a very patient person.

The Time -> Frequency Task

Converting from the time domain to the frequency domain is a bit more labor-intensive than the task of generating complex waveforms, the topic covered above. The general name for this conversion is "Fourier Transform", and because of its usefulness, much thought and ingenuity have been expended on this task. In the 1960s, an apparently new and much more efficient method for this conversion, now called the "Fast Fourier Transform" (FFT), was devised by J. W. Cooley and John Tukey. I say "apparently" because it turns out the basic idea behind the FFT was developed by Carl Friedrich Gauss in 1805 and was rediscovered by others in various limited forms in the intervening years. Maxima has an FFT package available, made accessible by loading a library named "fft". Here is an outline of the steps needed to perform an FFT in Maxima:

load(fft);
  Load the required library.
n:2048;
  Establish an array size, which must be:
  • A power of 2 (512, 1024, 2048, etc.).
  • Twice the size of the highest frequency of interest in Hertz, e.g. to resolve frequencies between 0 and N Hertz, the array size must be the next higher power of 2 above N * 2.
array(ra,float,n-1);
  Declare a real-value array.
array(ia,float,n-1);
  Declare an imaginary-value array.
for i:0 thru n-1 do (
  Fill the array with the desired data.
ra[i]:my_function(i),
  This is a user-defined function that is able to create suitable data.
ia[i]:0.0
  Must set the imaginary array to zero if there are no imaginary data.
);
  Close the loop.
fft(ra,ia);
  Perform the FFT conversion in-place (the original array data are overwritten).
recttopolar(ra,ia);
  Optional: convert complex a+bi form to polar magnitude and angle form.
lira:makelist(ra[i],i,0,(n/2));
  For plotting, create a list out of the first half of the data array.
xIndexList:makelist(i,i,0,length(lira));
  Create an index list required by the plotting routine.
wxplot2d([discrete, xIndexList,lira], [gnuplot_preamble, "unset key;set xrange [0:1024];set xlabel 'Frequency';set ylabel 'Amplitude';set title 'Amplitude Modulation — Frequency Domain';"]);
  Create an inline plot (one that is integrated into wxMaxima's display).

There are example functions in the Fourier examples file that show a typical conversion:

fpt();
  This time-domain plot shows a waveform that was once very familiar to electrical engineers: an amplitude-modulated (AM) radio signal. In the AM scheme, information is added to a radio carrier wave by changing its amplitude. Speaking mathematically, this modulation method creates sidebands whose distance from the carrier frequency is equal to the modulation frequency. Typically the modulation of a real-world AM radio signal is complex and broad. In this example, we're modulating the carrier with a simple sinewave. This plot is the input data for the FFT. If you have loaded the Fourier examples file into Maxima, and if you enter this function name, you will get this plot.

fpf();
  Here is the frequency-domain equivalent of the above time-domain plot (with some random noise added for a touch of realism). The central peak is the carrier frequency, and there are two sidebands, one above and one below the carrier frequency. The sideband frequencies are located at c+m and c-m, where c = carrier frequency and m = modulation frequency. This plot is the output data from the FFT. If you have loaded the Fourier examples file into Maxima, and if you enter this function name, you will get this plot.

There is one more experiment in the Fourier examples file. It is meant to clearly show the equivalence of time-domain and frequency-domain representations of spectral data. In this experiment, we

• Create a square wave signal in the time domain using the methods established above,
• Convert the time-domain data back to the frequency domain using the FFT,
• Compare the resulting harmonic components to the individual terms in the square wave generator equation.

Actually, this process is automated for you by two functions that are part of the Fourier examples file. They are:

fph();
  Predicts (and prints a list of) frequencies and amplitudes for square-wave harmonics based on the mathematical identity: y = (4/π) (sin(x) + 1/3 sin(3x) + 1/5 sin(5x) + 1/7 sin(7x) ...)
fch();
  Generates a square-wave data table in the time domain, converts it to the frequency domain using the FFT, then prints out the frequencies and amplitudes of the resulting harmonic components.

Make sure you have loaded the Fourier examples, then enter the two function names listed above. One ("fph()") will predict the expected harmonic frequencies and amplitudes based on the mathematical identity of a square wave, the other ("fch()") will use the FFT to convert a time-domain square-wave data list into its harmonic components, then search the results and print out the harmonics it finds. If everything is in order on your system and there is nothing wrong with your Maxima installation, the predicted and result data lists should be nearly identical.
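For readers who want to try the same harmonic check without Maxima, here is a small sketch in Python/NumPy. It is illustrative only: it mirrors what fph() and fch() are described as doing above, but it is not the code from the examples file.

import numpy as np

n = 2048                                   # power-of-two array size, as in the recipe above
t = np.arange(n) / n                       # one full period, sampled n times
square = np.sign(np.sin(2 * np.pi * t))    # unit square wave, fundamental in bin 1

spectrum = np.fft.rfft(square) / (n / 2)   # scale so a unit sinusoid has amplitude 1.0
for k in (1, 3, 5, 7, 9):
    measured = abs(spectrum[k])
    predicted = 4.0 / (np.pi * k)          # series says 4/(pi*k) for odd k, 0 for even k
    print(f"harmonic {k}: FFT gives {measured:.4f}, series predicts {predicted:.4f}")

# The two columns agree to within a small fraction of a percent; the tiny
# differences come from sampling the jumps at a finite number of points.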
Because the FFT is a numerical procedure performed on machines with finite floating-point resolution and a finite array size, all FFT results are approximate. I want to emphasize that the numerical, approximate nature of the FFT doesn't imply that all Fourier transforms are approximate. The FFT is a specific, very fast, embodiment of a well-understood mathematical transformation that can be carried out analytically in some circumstances. What I am saying here is ... don't confuse the FFT with the field itself. Apart from being rather intriguing in an aesthetic sense, Fourier analysis is a useful thing to know in the modern world. There are applications of this field in virtually all scientific and technological disciplines. I want to reemphasize one point about Fourier analysis, and it is the single fact essential to know about this field. If my readers have forgotten all the content of this page a week from now, I hope they remember this: Waveform creation in the time domain, and harmonic analysis in the frequency domain, are reciprocal operations. One can take a list of harmonic components and use them to create a time-domain waveform, then one can carry out a Fourier transform on the time-domain waveform to recapture the original harmonic components. It turns out that these two representations are equivalent and interchangeable: each contains all the information needed to reconstruct the other.
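That reciprocity is easy to verify numerically. A minimal round-trip sketch (again in Python/NumPy rather than Maxima, purely as an illustration):

import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
# An arbitrary "complex" signal: two harmonics plus a little noise.
signal = (np.sin(2 * np.pi * 3 * t)
          + 0.5 * np.sin(2 * np.pi * 17 * t + 0.7)
          + 0.1 * rng.standard_normal(n))

spectrum = np.fft.fft(signal)              # time domain -> frequency domain
recovered = np.fft.ifft(spectrum).real     # frequency domain -> time domain

print("largest round-trip error:", np.max(np.abs(recovered - signal)))
# Prints a number on the order of 1e-15: apart from floating-point rounding,
# the time-domain and frequency-domain representations carry the same information.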
{"url":"http://arachnoid.com/maxima/fourier_analysis.html","timestamp":"2014-04-17T09:35:21Z","content_type":null,"content_length":"29534","record_id":"<urn:uuid:17c3cdfa-b346-4615-8040-34538ae91ab2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
U Welberry - p05

Principal Investigator: T Richard Welberry (Research School of Chemistry)
Project: p05
Machine: VP
Co-Investigators: Sheridan C Mayo, Andrew G Christy and Aidan P Heerdegen (Research School of Chemistry)

Computation of X-ray Diffraction Patterns for 3D Model Systems

The aim of our project is modelling the disorder that occurs in crystals of some organic molecules, inorganic materials and mineral systems, which we observe in our diffuse X-ray diffraction experiments. Conventional crystal structure analysis of disordered materials using the sharp Bragg diffraction data reveals only average one-body structural information, such as atomic positions, thermal ellipsoids and site occupancies. Diffuse scattering, on the other hand, gives two-body information and is thus potentially a rich source of information on how atoms and molecules interact with each other. Traditionally there have been two major mathematical approaches employed to describe and understand this diffuse part of the diffraction pattern--a reciprocal space (modulation-wave) approach and a real-space (correlation) approach. Either of these approaches may be employed to obtain sets of physical parameters which describe quite accurately the observed diffuse intensities. The ultimate goal of such studies, however, is not simply to find a set of mathematical correlation parameters that can describe the diffraction pattern but to obtain a realistic model of how the material is organized that is consistent with the observed scattering. Although such models can be derived a posteriori from the mathematical fit to the data, this approach can all too often lead to misinterpretation and unrealistic models for the disorder. This possibility can often be reduced if the normal order of analysis is reversed such that the investigation begins not with a mathematical description but with the models themselves. The idea is to develop a model which first satisfies any known physical/chemical constraints and then iteratively adjust the simulation until the diffraction pattern from the computer model matches the measured intensities. By turning the usual analysis procedure around the physical organization of the material is given the greatest emphasis and many possible, but unlikely or non-physical, configurations that are consistent with the diffraction data will thus be eliminated from consideration.

What are the basic questions addressed?

Can we, by using a detailed potential model of the systems under investigation, describe the short-range order properties of the materials sufficiently well that we may obtain computed diffuse diffraction patterns which are in substantive agreement with observed X-ray diffraction patterns?

What computational techniques are used and why is a supercomputer required?

We use mainly Monte Carlo simulation for the modelling. This part of the work is not generally performed on the VP since it does not vectorise well. The main part of the work carried out on the VP is obtaining the three dimensional diffraction patterns from the simulation results. This involves direct Fourier transformation for which we have developed a highly vectorised algorithm. For disordered crystals, large sample sizes are necessary in order to obtain statistically useful spatial information. In addition the diffraction patterns need to be computed on a fine grid of points in three dimensions.
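To make the kind of calculation concrete, here is a deliberately tiny two-dimensional sketch in Python/NumPy. It is illustrative only: it is not the vectorised code used on the VP, and the real calculations are three-dimensional, far larger, and use physically motivated Monte Carlo models rather than uncorrelated random vacancies.

import numpy as np

rng = np.random.default_rng(1)

# Toy "crystal": a 256 x 256 lattice of sites, each occupied (1) or vacant (0).
# A perfectly ordered lattice gives only sharp Bragg peaks; disorder in the
# occupancies spreads some intensity into a diffuse background between them.
n = 256
occupancy = (rng.random((n, n)) < 0.9).astype(float)   # about 10% random vacancies

# Kinematic diffraction: the scattered intensity is |F|^2, the squared modulus
# of the Fourier transform of the model's scattering density.
F = np.fft.fftshift(np.fft.fft2(occupancy))
intensity = np.abs(F) ** 2

print("Bragg (zero-frequency) peak:", intensity[n // 2, n // 2])
print("typical diffuse level      :", np.median(intensity))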
Because of the large amounts of CPU time which would be required, lesser classes of computer allow only the crudest models to be explored.

What are the results to date and future of the work?

The method has been used to study disorder in a number of quite diverse systems. One major on-going project is involved with trying to understand the disorder in cubic stabilized zirconias (CSZ's) which have commercial importance as "cubic zirconia" gems. A second system is that of Mullite which is a major component of nearly all aluminosilicate ceramics. For both of these systems, three dimensional models of the way in which oxygen vacancies order have been established, and the way in which the rest of the structure relaxes around the vacancies. A new study that has commenced recently is on the material wüstite, Fe1-xO, which is thought to be a major constituent of the Earth's lower mantle. The methods have also been applied to a number of organic molecular crystal systems which exhibit disorder. This area possibly presents the most promise of realising a quantitative interpretation of observed diffuse scattering, thereby yielding valuable detailed information about intermolecular interactions. A new study that has recently commenced in this field is on the urea/hexadecane inclusion compound. Our methods are now established as a viable means of interpreting and studying disorder in a whole range of different materials. The diffraction calculation algorithm will continue to be used as a routine tool in the process.

Lead Article: Interpretation of Diffuse X-ray Scattering via Models of Disorder, T R Welberry and B D Butler, J. Appl. Cryst., 27, 205-231 (1994).
Analysis of Diffuse Scattering from the Mineral Mullite, B D Butler and T R Welberry, J. Appl. Cryst., 27, 741-754 (1994).
Diffuse X-ray Scattering in KLiSO4, T R Welberry and A M Glazer, J. Appl. Cryst., 27, 733-741 (1994).
A paracrystalline description of defect distributions in wüstite, Fe1-xO, T R Welberry and A G Christy, Journal of Solid State Chemistry (1995), in press.
A Modulation Wave Approach to Understanding the Disordered Structure of Cubic Stabilised Zirconias (CSZ's), T R Welberry, R L Withers and S C Mayo, J. Solid State Chem., (1995), in press.
{"url":"http://anusf.anu.edu.au/annual_reports/annual_report94/UWelberry-p05.html","timestamp":"2014-04-17T09:36:02Z","content_type":null,"content_length":"6379","record_id":"<urn:uuid:63474b96-d977-45a2-9877-5264edf5f9d0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
mathematical and logical thinking The Mathematics/Logical Thinking Fundamental Skill category of the new General Education scheme focuses on courses that emphasize a substantial amount of quantitative analysis or computation as an educational experience. One would suppose most any college course involves some degree of logical thinking, but the mere use of logical thought and even occasional computations or quantitative considerations alone does not necessarily qualify it for a General Education course in this category. To be a course dedicated to Math/Logical Thinking and thus be awarded two General Education units, the course must be dedicated (at an appropriate level) to the study of mathematics or symbolic logic as its content material. Examples would include mathematics courses (exclusive of Math 101 or 102 which are deemed too remedial), but also courses such as Psychology 221 or QBA 201, which have a statistics content, and Philosophy 120, which is an introductory symbolic logic course. A course could be considered enriched for Math/Logical Thinking without being dedicated to the study of mathematics or logic per se if it incorporates a significant amount of computation or quantitative analysis during its study. Such courses would earn one General Education unit in this category. For example, certain science courses require students to consider computations or quantitative analysis on a regular basis on homework, exams, or projects. They often carry a mathematics or statistics prerequisite. Many Chemistry, Physics, and Engineering courses fit this description. Other courses that could be considered in this category may not necessarily have such specific mathematics prerequisites, but still involve substantial amounts of accurate quantitative or logical analysis.
{"url":"http://www.ohio.edu/gened/help/lmt.cfm","timestamp":"2014-04-16T23:29:20Z","content_type":null,"content_length":"36109","record_id":"<urn:uuid:88eda8e1-f0da-4c4c-9490-8b1c4096ecb2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
AISNE New Teacher Workshop at Phillips Academy, MA

Presentation at AISNE Math Teacher Workshop on October 4, 2012, "Thoughts on teaching math."

Let's start by considering a scene from a neighborhood bar on a Thursday night…

"Oh, what do you do?" says a polished and sophisticated-looking patron.

"I am a math teacher," you say.

"I hated math," s/he says.

You think to yourself, "what do I say to that?"

Some time ago in our culture, it became okay to say that "I suck at math." But no one who is educated would ever admit to being incapable of reading or writing – you would be labeled illiterate and people would feel compassion for you. Yet, there is a cult around math-phobia or should I say around people who are math-centric. It's a love/hate relationship and your job is to be somewhere in the middle.

That's right, I don't want you to get your students involved in cults. That's a good thing… but seriously, don't mislead your students – very few of them will directly use what you are teaching them. But most of them won't go on to perform Macbeth or shape foreign policy (so out with the Monroe Doctrine, because history never repeats itself). We have to give students a compelling reason for why they need to learn math. That's your first goal. Hopefully, you will only have a few students who really question why they need to learn math.

I see the syllabus as your opportunity to convey to your students who you are and who they can be. Here is my teaching philosophy, which I discuss with my students on the first day of classes (I hand this to them, briefly go over it, and then suggest they read it along with the syllabus and assignment sheet for homework – I am certain that most do not read it).

Dear Students,

I want to share with you how I approach your learning and my teaching, and what follows is my teaching philosophy – it lays out what you should expect this year. In this mathematics class, you will learn to think, reason, and write – because these are necessary skills for life! This means that the exercises and problems that I select for homework and for quiz/test review may be vastly different from the actual questions on my assessments. I know I'm asking much of you, but you are some of the best and brightest; so if you put in the effort at the beginning of your Andover career you will be on the path toward success.

I believe that students must be trained to take calculated intellectual leaps, to listen intently for valid opinions, to reflect on their contribution to the larger community, and to realize their potential to become model citizens. In the math classroom, I foster this by encouraging my students to see math in the real world − the logic in arguments, the structure of languages, the systematic way of annotating a page of math literature, using examples to generalize arguments to reasonable conjectures, and writing proofs of theorems.

I want my students to learn to grapple with questions. I fell in love with mathematics because I enjoyed the search for the answer. Sometimes the search involves asking someone for help, looking up a reference, or sitting down with paper and pencil and patiently waiting for the answer to "come to you."

Lastly, if I want my students to become life-long learners, tinkerers, experimenters, hypothesizers, and doers, it has to happen in the context of a structured learning environment. Homework, tests, and the final exam have to be consistent in the breadth and the scope of what is asked and set reasonable expectations for literacy.
Classes should have a definite rhythm while allowing for the occasional improvisation, sharing a newspaper article, an alternative calculation/proof, or giving students the opportunity to go to the board to show their work. Students learn best when they are sitting next to each other, seeing each other's work, and encouraged to talk about how they got from Step 4 to Step 5.

Notice that in my teaching philosophy, there is very little about solving equations. I don't mean to be intentionally misleading; rather, I want students to see that in learning theoretical mathematics, they are learning to think deep thoughts.

So how do you set the bar high? Tests, quizzes, group projects, and alternative assessments. Try to mix it up…

Another way to sum things up is by what our veteran math teacher, Doug Kuhlmann, calls "The Rule of Four": every topic/idea in mathematics should be taught (and thereby assessed) symbolically, graphically, numerically, and verbally.

Symbolically – students need to be comfortable with the syntax of mathematics – these are the words and phrases that make up math. Proof – why do we teach it? It needs to be a necessary part of each math curriculum. In my PreCalculus class, I am currently struggling to find time to prove ideas. I have taught my students a ton of complex numbers, but they have yet to really see a proof. It's a sad reality. There are many easy proofs one could do in a course (Euclid's Lemma, proof of the Pythagorean Theorem).

Graphically – a picture is worth a thousand words. If a student cannot graph the function, find the circumcenter, or see the big idea, what's the point?

Numerically – some students are motivated by theory. Others see things once they can get an opportunity to put values into the machine and see what comes out. Statistics, or to put it more simply, data analysis, is a valuable tool.

Verbally – fundamental theorem of arithmetic, describing how to get the quadratic formula through completing the square, and explaining to their aunt or uncle (who does not know Calculus) the concept of the derivative.

Each of these ideas could easily take up a page.

Technology Ideas – can help students to reinforce or develop ideas, new ways of thinking, and make learning enjoyable and fun.

• WebWork
• Geogebra
• Geometer's SketchPad
• Wolfram Alpha
• TI-84 or TI-Nspire
• iOS apps
• FluidMath
• Coursesmart.com

Assessment Ideas

• Collect homework at least a few times a term – even if you decide not to grade it, make sure you have an idea of how students are doing in the class and the attempts they are making on homework.
• Student Journals
• Group Review for Extra Credit – math relays, similar to GUTS rounds at some math team contests
• 3 week minicourses on applied topics – see financial planning

Writing math tests –

1. Cull from different sources. Have a number of tests from other teachers. Don't be afraid to use a test bank, but don't limit yourself to those problems.
2. Work together on writing exams/tests. Don't let the Course Coordinator write the exam – ask if you can contribute problems so it can be a collaborative effort.
3. Those in high school – try to grade the AP exam somewhere in the next 5-10 years.
4. The more problems you do, the better the problem writer you will become.
5. Aim for 70% routine homework problems, 20% somewhat challenging, and 10% left field problems (that you haven't prepared them for, but that are not Greek to them either).

Managing class time (say a 50 minute class)

• 2 minutes get to know you
• 13 minutes homework review
• 25-30 minute lesson (includes demonstrations, proofs, group work)
• 5-10 minute wrap up, extension, or challenge

Problem based learning – see the Exeter Math Problems – http://www.exeter.edu → Academics → Departments → Mathematics → (Mathematics) Teaching Materials

Diversity in the Classroom

Be mindful of students who have learning accommodations and study plans, and of women and students of color, for whom math/science may require you to think about stereotype threats. See this blog post by PA's Head of School, John Palfrey, on why "Steele's book should be required reading for anyone who works in a school."

Professional Development

• Masters Degree in Mathematics or Mathematics Education
• The Anja S. Greer Conference on Mathematics, Science and Technology, June 23 – June 28, 2013, http://www.exeter.edu/summer_programs/7325.aspx
• Independent School Focused
• Critical Friends Groups – using the Tuning Protocol as developed by NSRF (National School Reform Faculty)
{"url":"http://andoverstats.wordpress.com/2012/10/04/aisne-new-teacher-workshop-at-phillips-academy-ma/","timestamp":"2014-04-18T10:35:01Z","content_type":null,"content_length":"58727","record_id":"<urn:uuid:3740768f-7122-43e2-8362-7ec89a9eab90>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Meeting Details

For more information about this meeting, contact Ae Ja Yee, Karl Schwede, Robert Vaughan, Mihran Papikian.

Title: Period functions for quantum modular forms
Seminar: Algebra and Number Theory Seminar
Speaker: Trung Hieu Ngo, University of Michigan

A quantum modular form is a function on the rational numbers which is not necessarily modular but whose failure to be modular gives rise to an interesting analytic function. There are two important places where one finds a quantum modular form. First, it arises from a certain transform of the period function of a Maass form (thanks to Ramanujan, Andrews, Dyson, Hickerson, Cohen, Bruggeman, Lewis, Zagier,...). Second, it arises from the radial limit of both a mock theta function and a partial theta function (thanks to Rademacher, Ramanujan, Rhoades, Zagier, Zwegers,...). It is a conjecture of Dyson that the two instances belong to a unified framework. With Yingkun Li and Robert C. Rhoades, we discover and study a q-hypergeometric series construction, called renormalization, which ties together a duality in each of the two instances. We will describe a new application of renormalization results and quantum modular forms to counting integer partitions with prescribed properties. This is achieved via a certain circle method technique, executed by Hardy-Ramanujan and Wright amongst others. The talk includes a motivated introduction to quantum modular forms (with more emphasis on the perspective of Bruggeman-Lewis-Zagier). No background knowledge is assumed and graduate students are welcome.

Room Reservation Information
Room Number: MB106
Date: 02/13/2014
Time: 11:15am - 12:05pm
{"url":"http://www.math.psu.edu/calendars/meeting.php?id=20125","timestamp":"2014-04-18T10:40:57Z","content_type":null,"content_length":"4516","record_id":"<urn:uuid:121e5d5b-95a8-4d79-bc7a-3382419692ef>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic quantities

Temperature

A map of global long term monthly average surface air temperatures in Mollweide projection.

A temperature is a numerical measure of hot or cold. Its measurement is by detection of heat radiation or particle velocity or kinetic energy, or by the bulk behavior of a thermometric material. It may be calibrated in any of various temperature scales, Celsius, Fahrenheit, Kelvin, etc.

Entropy

In thermodynamics, entropy (usual symbol S) is a measure of the number of specific ways in which a thermodynamic system may be arranged, often taken to be a measure of disorder, or a measure of progressing towards thermodynamic equilibrium. The entropy of an isolated system never decreases, because isolated systems spontaneously evolve towards thermodynamic equilibrium, which is the state of maximum entropy. In classical thermodynamics the change in entropy is dS = dQ/T, which is found from the uniform thermodynamic temperature (T) of a closed system dividing an incremental reversible transfer of heat into that system (dQ). The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic picture of the contents of a system. In thermodynamics, entropy has been found to be more generally useful and it has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state, as a consequence of the second law of thermodynamics.

Acceleration

For example, an object such as a car that starts from standstill, then travels in a straight line at increasing speed, is accelerating in the direction of travel. If the car changes direction at constant speedometer reading, there is strictly speaking an acceleration although it is often not so described; passengers in the car will experience a force pushing them back into their seats in linear acceleration, and a sideways force on changing direction. If the speed of the car decreases, it is usual and meaningful to speak of deceleration; mathematically it is acceleration in the opposite direction to that of motion.

Definition and properties

Acceleration is the rate of change of velocity. If there is a change in speed, direction, or both, then the object has a changing velocity and is said to be undergoing an acceleration.

Constant velocity vs acceleration

To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path (the object's path does not curve).

Momentum

Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude: p = mv, the product of an object's mass and velocity. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of linear momentum is implied by Newton's laws; but it also holds in special relativity (with a modified formula) and, with appropriate definitions, a (generalized) linear momentum conservation law holds in electrodynamics, quantum mechanics, quantum field theory, and general relativity.

Newtonian mechanics

The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.
As a formula, this is expressed as F = ma, where F is the net force, m the mass and a the acceleration.

Force

Potential

In linguistics, "potential" refers to the potential mood. The mathematical study of potentials is known as potential theory; it is the study of harmonic functions on manifolds. This mathematical formulation arises from the fact that, in physics, the scalar potential is irrotational, and thus has a vanishing Laplacian — the very definition of a harmonic function. In physics, a potential may refer to the scalar potential or to the vector potential. In either case, it is a field defined in space, from which many important physical properties may be derived.

Matter

Matter is a poorly defined term in science. The term often refers to a substance (often a particle) that has rest mass. Matter is also used loosely as a general term for the substance that makes up all observable physical objects.[1][2] All objects we see with the naked eye are composed of atoms.

Potential energy

Potential energy is energy stored by virtue of the position of an object in a force field, such as a gravitational, electric or magnetic field. For example, lifting an object against gravity performs work on the object and stores gravitational potential energy; if it falls, gravity does work on the object which transforms the potential energy to kinetic energy associated with its speed. Some specific forms of energy include elastic energy due to the stretching or deformation of solid objects, chemical energy such as is released when a fuel burns, and thermal energy, the microscopic kinetic and potential energies of the disordered motions of the particles making up matter.

Electric charge

Electric charge is a physical property of matter that causes it to experience a force when near other electrically charged matter. There exist two types of electric charges, called positive and negative. Positively charged substances are repelled from other positively charged substances, but attracted to negatively charged substances; negatively charged substances are repelled from negative and attracted to positive. An object will be negatively charged if it has an excess of electrons, and will otherwise be positively charged or uncharged. The SI unit of electric charge is the coulomb (C), although in electrical engineering it is also common to use the ampere-hour (Ah), and in chemistry it is common to use the elementary charge (e) as a unit.

Mass

In physics, mass (from Greek μᾶζα "barley cake, lump [of dough]") is a property of a physical body which determines the body's resistance to being accelerated by a force and the strength of its mutual gravitational attraction with other bodies. The SI unit of mass is the kilogram (kg). As mass is difficult to measure directly, usually balances or scales are used to measure the weight of an object, and the weight is used to calculate the object's mass. For everyday objects and energies well-described by Newtonian physics, mass describes the amount of matter in an object. However, at very high speeds or for subatomic particles, general relativity shows that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy, and all forms of energy resist acceleration by a force and have gravitational attraction.
Time is a dimension in which events can be ordered from the past through the present into the future,[1][2][3][4][5][6] and also the measure of durations of events and the intervals between them.[3][7][8] Time has long been a major subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.[3][7][8][9][10][11] Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.[12][13][14] Some simple, relatively uncontroversial definitions of time include "time is what clocks measure"[7][15] and "time is what keeps everything from happening at once".[16][17][18][19] In geometric measurements, length is the longest dimension of an object.[1] In other contexts "length" is the measured dimension of an object. For example it is possible to cut a length of a wire which is shorter than wire thickness. Length may be distinguished from height, which is vertical extent, and width or breadth, which are the distance from side to side, measuring across the object at right angles to the length. Length In the 19th and 20th centuries mathematicians began to examine non-Euclidean geometries, in which space can be said to be curved, rather than flat. According to Albert Einstein's theory of general relativity, space around gravitational fields deviates from Euclidean space.[4] Experimental tests of general relativity have confirmed that non-Euclidean space provides a better model for the shape of space. Philosophy of space Leibniz and Newton
{"url":"http://www.pearltrees.com/cleverchris/basic-quantities/id2067526","timestamp":"2014-04-18T16:31:09Z","content_type":null,"content_length":"27914","record_id":"<urn:uuid:5f457701-71e9-49d2-af41-05b196142a87>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
(Homotopy) Y ENR and contractible subset implies Y is a retract

I'm trying to solve the following question: Suppose $Y \subset R^n$ is a Euclidean neighborhood retract. I want to prove that if $Y$ is contractible, then it is a retract of $R^n$.

Let me solve the case of compact $Y$. Thus $Y$ is a contractible ANR. Then $Y$ is an AR. Indeed, the equivalence "contractible ANR $\Leftrightarrow$ AR" is a well known theorem, certainly known in the past to the founder of the theory of ANRs, Karol Borsuk. To prove this equivalence, consider an arbitrary compact metric space $X$, its closed subset $A$, and an arbitrary continuous map $f : A \rightarrow Y$, where $Y$ is a contractible ANR. Let $H:Y\times [0;1]\rightarrow Y$ be a contraction to a point $p\in Y$, meaning that $H(y,0)=p$ and $H(y,1)=y$ for every $y\in Y$. Now we get the constant map $g_0:X\rightarrow Y$, such that $\forall_{x\in X}\ g_0(x)=p$, and a homotopy $\Phi : A\times [0;1]\rightarrow Y$ defined by:

$$\forall_{a\in A}\forall_{t\in [0;1]}\quad \Phi(a,t) := H(f(a),t)$$

Observe that $\forall_{a\in A}\ \ g_0(a)=\Phi(a,0)$. Thus by the Borsuk homotopy extension theorem there exists a homotopy $F:X\times [0;1]\rightarrow Y$ such that

• $\forall_{x\in X}\quad F(x,0)=g_0(x)$ (the value is $p$, of course)
• $\forall_{a\in A}\forall_{t\in [0;1]}\quad F(a,t) = \Phi(a,t)$

Now define $g_1:X\rightarrow Y$ by $\forall_{x\in X}\ \ g_1(x) := F(x,1)$. Then $g_1$ is a continuous extension of $f:A\rightarrow Y$ onto $X$. Thus we have proven that $Y$ is an AR. It follows that $Y$ is a retract of the whole $\mathbf R^n$. Indeed, space $Y$--being an AR--is a retract of a cube $Q := [-\alpha;\alpha]^n$ which contains $Y$, while cube $Q$ is a retract of $\mathbf R^n$.

REMARK The last argument was simple, correct and adequate but ad hoc. Here is a more basic (general) argument: let compact $Y\subseteq \mathbf R^n$ be an AR (for metric compact spaces). The Euclidean space $\mathbf R^n$ is a subspace of a metric compact space $C$ (e.g. of $\mathbf R^n\cup\{\infty\} = \mathbf S^n$), and $Y$ is a retract of $C$, hence $Y$ is a retract of $\mathbf R^n$.
Observe that any retract of $\newcommand{\RR}{\mathbb{R}} \RR^n$ is necessarily a closed subspace of $\RR^n$. Assuming this necessary condition, the answer to the question is affirmative. More precisely, if $Y$ is a contractible ANR (absolute neighbourhood retract) and a closed subspace of $\RR^n$, then $Y$ is a retract of $\RR^n$. This is an instance of the following result.

Claim: Let $X$ be a metrizable ANR, and let $Y$ be an ANR and a closed subspace of $X$. If the inclusion of $Y$ into $X$ is a homotopy equivalence, then $Y$ is a retract of $X$.

This claim is an immediate consequence of the next two propositions.

Proposition 1: Let $X$ be a metrizable ANR, and let $Y$ be a closed subspace of $X$. If $Y$ is an ANR, then the inclusion of $Y$ into $X$ is a cofibration.

This is stated as proposition A.6.7 in the appendix to Fritsch and Piccinini's book "Cellular structures in topology", page 282. It is also stated and proved in Sze-Tsen Hu's 1965 book "Theory of retracts": see theorem 3.2 and corollary 3.3 on pages 120 and 121. Alternatively, you may look at the single lemma and its proof in this other answer, which gives proposition 1 for the case of ENRs.

Proposition 2: Assume $Y$ is a subspace of $X$ such that the inclusion of $Y$ into $X$ is a cofibration and a homotopy equivalence. Then $Y$ is a strong deformation retract of $X$; in particular, $Y$ is a retract of $X$.

This is a standard result in basic homotopy theory. For example, it is stated and proved as corollary 0.20 in chapter 0 of Allen Hatcher's book "Algebraic topology".
{"url":"http://mathoverflow.net/questions/119385/homotopy-y-enr-and-contractible-subset-implies-y-is-a-retract?sort=oldest","timestamp":"2014-04-19T17:21:07Z","content_type":null,"content_length":"62240","record_id":"<urn:uuid:66da63fc-3659-452c-8a29-f196526c9a62>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
On Engel-anticommutative algebras

Let $\mu:\mathbb{R}^n \times \mathbb{R}^n \longrightarrow \mathbb{R}^n$ be an alternating bilinear map, i.e. $\mu(X,Y)=-\mu(Y,X)$ (anticommutativity) and so, let $\mathfrak{a}=(\mathbb{R}^n,\mu)$ be a skew-symmetric algebra (this one is not necessarily a Lie algebra).

1. Is there a "famous" example of a skew-symmetric algebra that is not a Lie algebra?

2. We assume that for any $X \in \mathfrak{a}$, $\mu(X,\cdot):\mathbb{R}^n \longrightarrow \mathbb{R}^n$ is a nilpotent linear transformation. Is it true that $\mathfrak{a}$ is a nilpotent algebra? (recall that $\mathfrak{a}$ is not necessarily a Lie algebra)

Thank you for your help.

If you don't assume associativity, then the multiplication of imaginary octonions gives you a famous example of 1. – José Figueroa-O'Farrill Jun 3 '11 at 12:04

1 Answer

1. To rephrase José Figueroa-O'Farrill's comment above - a 7-dimensional simple exceptional Malcev algebra (which is a quotient of octonions under the commutator by 1-dimensional center). But, really, I find the question is formulated badly: just skew-commutativity is a very mild restriction to say something meaningful about an algebra in general, so it's almost the same as to ask for "famous" examples of a (nonassociative) algebra.

2. No, this is not true. I was able to construct counterexamples on computer, as finite-dimensional quotients of free algebras in respective "Engel varieties", but all of them are large and cumbersome. Shorter examples can be found in the following paper by Koreshkov and Kharitonov, which, apparently, was published twice:

About nilpotency of Engel algebras, Formal Power Series and Algebraic Combinatorics (ed. D. Krob et al.), Springer, 2000, 461-467; ZBL: 0983.17003 [available on Google Books]

Nilpotency of the Engel algebras, Russ. Math. (Izv. VUZ) 45 (2001), No. 11, 15-18; ZBL: 1103.17300

They prove also that this is true for algebras of dimension $\le 4$. It is also easy to see (and is recorded, for example, in: V.T. Filippov, Binary Lie algebras satisfying the third Engel condition, Siber. Math. J. 49 (2008), N4, 744-748; DOI: 10.1007/s11202-008-0071-3) that the second Engel condition implies nilpotency of degree $4$. On the other hand, Engel(-like) theorems were established for many particular ("famous"?) classes of anticommutative algebras considered in the literature: binary-Lie, Malcev, and some other generalizations of Lie algebras, and it is an open question, as far as I know, for Sagle algebras.

For some reason I like your "which, apparently, was published twice" remark a lot. – Vladimir Dotsenko Jun 11 '11 at 10:56

@Vladimir Dotsenko: No offense or sarcasm meant, really. Just a mere (bibliographical) fact. I like this paper and have used it on some other occasion. – Pasha Zusmanovich Jun 11 '11

Oh I did understand that. It's the option of a tongue-in-cheek interpretation that makes it lovely. – Vladimir Dotsenko Jun 11 '11 at 22:26
{"url":"http://mathoverflow.net/questions/66745/on-engel-anticommutative-algebras/67430","timestamp":"2014-04-16T16:00:25Z","content_type":null,"content_length":"54647","record_id":"<urn:uuid:43721406-4b08-4222-816d-6079a37f12ab>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
A fast-moving introduction to advanced probability theory

Here is a link to a PDF doc I wrote a few years back: My fast-moving introduction to Advanced Probability Theory.

I was taught undergraduate probability theory by one of the best. Williams's book Probability with Martingales is a popular introduction to advanced probability theory, and was the text I used to learn about the formal theoretical framework that holds probability together (I highly recommend going through all the exercises at the back of the book). But what is the best way to teach probability theory? I remember having a discussion with a PhD colleague on whether it is really necessary to go through all the sigma-algebra part before getting into the interesting measure theory story. My view has been that advanced probability theory can be taught in a different way, and I always thought that we should put more emphasis on the intuition behind the measure-theory aspects and on how clever the notation is.

As an effort to back up my claims, a few years ago I started to write my own Introduction to Probability. I got about 20 pages down. Uncompleted later chapters were given titles and show the direction I was heading. If I get interesting feedback on what’s there so far I’ll put in the effort to finish this off. Please add a comment if you like the article.

anon Says: January 20, 2012 at 8:44 pm | Reply
Like the doc, I had never gotten past the sigma algebra part of measure theory and this finally gave me the intuition for it.

• Robert Says: January 20, 2012 at 9:32 pm | Reply
That’s great to hear! Thanks for the comment.

erman Says: January 27, 2012 at 9:38 am | Reply
Robert hi, I think you do a good job with this blog, pdf very straightforward. I hope you will continue to blog regularly, its a pleasure to follow you. Kind regards

• Robert Says: January 27, 2012 at 6:33 pm | Reply
I fully intend to keep blogging regularly, glad you enjoy reading!

w-g Says: June 20, 2012 at 4:26 pm | Reply
Hi Robert! “If I get interesting feedback on what’s there so far I’ll put in the effort to finish this off.” — please do so! What you have done so far is really nice (thank you!), and it would be great to have a condensed/fast-moving intro to probability!

Miguel Says: May 22, 2013 at 6:12 pm | Reply
Thank you very much Robert! I join the cause and I would gently ask you to add more chapters!!!
{"url":"http://fermatslastspreadsheet.com/2011/11/30/introduction-to-probability-theory/","timestamp":"2014-04-18T20:43:37Z","content_type":null,"content_length":"52297","record_id":"<urn:uuid:a9c3bfb1-c214-46b5-bc3c-240ee29adf32>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
NLVM Ideas

Subtraction With Virtual Manipulatives

I would use Base Blocks Subtraction found on the National Library of Virtual Manipulatives at http://nlvm.usu.edu/ to work with my class on subtraction.

M.O.2.1.10 model 2- and 3-digit addition and subtraction with regrouping using multiple strategies.
M.O.2.1.11 add and subtract 2- and 3-digit numbers without regrouping.

I will introduce this on the white board to the whole class. For example, I will model the problem 135 - 89. We will build 135 with a flat, 3 rods and 5 units in blue. Then I will show 8 rods and 9 units in red. Students will use base ten blocks at their seats. We will take a rod from the 3 rods and break it into 10 units. Then, on the whiteboard the students visually match the rods and units as they subtract. The renaming is shown along with the answer as the problem is completed. This site also allows the teacher to create new problems. It could be used with second graders through third grade with subtracting in hundreds or across zeros. Students in fourth through sixth grades experiencing problems could also benefit. As younger children subtract with this manipulative, they can see the ten rod break apart into 10 units or the flat break apart into ten rods. They can't physically do this with the real manipulatives. It adds a strong visual effect for the students.

Submitted by June Angle

3 Digit Subtraction Using Base Blocks

I suggest using this to help teach multi-digit subtraction to 4th grade. Go to the National Library of Virtual Manipulatives http://nlvm.usu.edu, select Number and Operations, grades 3-5. Scroll down to Base Blocks Subtraction. I would begin by solving the problems already provided. I plan to use this to teach 3 digit subtraction, but would begin with 2 digit for practice and to familiarize students with the program before introducing a more difficult portion of subtraction. I would take students through the steps of borrowing from the 10s when there are not enough in the 1s place on the 2 digit problems. Once students seem to understand how the borrowing takes place, as far as breaking the 10s apart into 10 ones, I would move to 3 digit subtraction. I would then repeat the borrowing process as it pertains to borrowing from the hundreds and giving to the tens. I would be sure to make sure students understand that when they borrow they are merely breaking the 100s down into individual parts. Once students have developed some mastery over the process of subtracting three digit numbers, I would let them create their own problems in partners. They will take turns creating and solving each other's problems.

Submitted by Ashlie Bailey

Geoboard

• Using the National Library of Virtual Manipulatives, use the geoboards to illustrate area and perimeter.
• I would introduce this as a whole group at first. After a few I would have the students create some of their own.

1. Go to http://nlvm.usu.edu/en/nav/vlibrary.html
2. Go to Geometry grades 3-5 and to Geoboards.
3. With this site have the students use the "rubber bands" to create rectangles.
4. Once created, have them try to determine the perimeter and area of their rectangle.
5. Then have them click "Measure" and see if they are correct.

Submitted by Tracy Berry

Using the Factor Tree

Using a Factor Tree (Firework Tree), Determine the Prime Factors. Using a Factor Tree (Firework Tree), Write an Equation Using Factors from a Given Product.

1. Click on Number Operations
2. Scroll down to Factor Tree
3. Type a factor below the Product and press Enter
4. A factor is one of two or more numbers that, when multiplied together, produce a given product.
5. A Product is an answer to a multiplication problem
6. Continue until all factors are found and the Factor Tree is completed.
7. A prime number is any number with no divisors other than itself and 1, such as 2 and 11.
8. Your Equation will be located below the Venn Diagram.
9. Equations are all the factors multiplied together to equal the Product (2*3*2*2=24)
10. Prime Factors are numbers which, when multiplied together, will give us the product.

This lesson is created for a Fourth Grade classroom. Key vocabulary is in larger print.

Donna D. Dunn, Geneva Kent Elementary, Grade 4

Number and Operation - Naming Fractions

Go to the National Library of Virtual Manipulatives http://nlvm.usu.edu/. Select Number and Operations, grades 3-5.

Day 1: Fraction - Visualizing
This is a great introduction to fractions after you have discussed math vocabulary (fraction, numerator, denominator). Students get to illustrate a fraction by dividing a shape and highlighting the appropriate parts. Once a student is finished with a problem, he/she receives automatic feedback about his/her answer. If the answer is incorrect, it directs the student to the part of the answer that needs to be corrected.

Day 2: Fractions - Naming
This is the next step in helping students understand fractions. Students will write the fraction that corresponds to the highlighted portion of a shape. Students type the numerator and denominator. Once again, students are provided immediate feedback concerning their answers. If incorrect, it will state the correct part (using the terms denominator/numerator) and also the area that needs further attention.

Carla Ferrell, 4th grade

GEOBOARD 3-5 - Classifying Triangles

I would use the interactive geoboard from the National Library of Virtual Manipulatives to introduce our next mini unit on triangles and parallelograms in the 4th grade.

Day 1: I will begin by showing how to use the rubber bands to make shapes. I will invite students to come up to make the different triangles that you classify by angles: acute, right, and obtuse. Next, we will measure the angles on the intelliboard with a protractor.

Day 2: We will make the triangles that are classified by side measurement: scalene, isosceles, and equilateral. I will again invite students to come up, construct their triangle, and then measure the sides with a large ruler.

Extension--If the students seem ready, I will point out the "measures" button on the Geoboard. When you click "measures" it shows the area and perimeter of the constructed shape. Because we will be eventually finding the area of the triangles, we might discuss and find the area using the formula 1/2 bh, then click "measures" to check. I would revisit the geoboard whole class as we continue to work through our unit. I would also allow the students to manipulate the geoboard LATER in the unit >>>> we could come back to the geoboard to make parallelograms with triangles. The ability to change colors of the shapes on the geoboard helps to see what makes a parallelogram.

ASSESSMENT--This would be a great addition to their formal test assessment on the unit; I could have the students construct each shape we worked on as a performance assessment.

Amanda Flora, 4th grade

Factor Tree: Finding the GCF and the LCM Using a Venn Diagram

First you will go to the bottom of your screen and click on two trees.
You will see two numbers that you will use to make individual factor trees. Factor both problems. In the Venn diagram, you will first find the common factors between the two problems by dragging the numbers. After you find the common factors, you will drag the individual factors for each number. To find the GCF, multiply the common factors together. To find the LCM, multiply the uncommon factors by the GCF. Type your answers in the boxes and check your problems by clicking on Check. This is a great way to teach 4th grade factoring and finding the GCF and the LCM. I would use this before Benchmark testing and the West Test. Submitted by Donna Goodman, 4th grade, Geneva Kent Elementary

Perimeter and Area
This lesson is for use on a virtual whiteboard to review perimeter and area. I would use the Rectangle Multiplication tool to have students create rectangles with a specified area. For example, they might have to create a rectangle with an area of 24. They would need to slide the base and height to numbers so that when multiplied they equal 24. I would probably have them give me the perimeter of that rectangle as well. This would be a review/bellringer exercise allowing individual students to solve unique problems. I found this tool in Numbers and Operations, Grades 3-5. Ellen Jung - 3rd Grade

Base Block Subtraction
I am going to use Base Block Subtraction for reviewing subtraction. I am also using this for students who need modification. This will give them extra help to see how they can subtract and regroup by moving colored blocks. You can create problems or use problems already set. Once the students have moved the blocks from the smaller number, the blocks disappear and the answer is given. It shows the student what the problem would look like on paper with regrouping. If you are having difficulty, the instruction button is on the top right. You can find this through NLVM 3-5 - Numbers and Operations. Ami Mandt - 3rd Grade

Lesson: Introducing Fractions, Grade 5
On the National Library of Virtual Manipulatives site, choose Number and Operations and click on grades 3-5. Scroll down to Naming Fractions. I plan to use this lesson as a whole group activity using the smart board. The students will write the fraction corresponding to the highlighted portion of a shape. Introduce numerator and denominator. Students should begin by counting each section of the shape; this will be the denominator (fill this part in first). Explain that the shaded or colored part of the shape will be the numerator, or top part of the fraction. Repeat the process several times. Denise Meadows, grade 5

Lesson: Circle 21, Grade: 3-5 (Enrichment for tutoring students, grade 3)
On the National Library of Virtual Manipulatives site, choose Numbers and Operations for grades 3-5. Scroll down to Circle 21. I will use this lesson as direct instruction for enrichment purposes. This lesson involves adding positive whole numbers to a sum of twenty-one. This lesson encourages logical analysis while working with addition of numbers up to 21. This lesson will be a prerequisite for the Circle 99 lesson, which targets two-digit numbers up to 99. Things you should know: 1. The blue numbers cannot be moved. 2. The combination of numbers in each circle must add up to 21. 3. Each circle has 3 sections; a number must be in each section. The combination of numbers must total 21. 4. When the combination of numbers totals 21, the circle changes color.
5. When you get to the last two circles and the numbers won't add up to 21, you must go back and change the combination of numbers until each circle totals 21. 6. You have been successful when all circles turn colors and total 21.

Money - Making a Dollar for 1st Grade
Each student will take a turn making a dollar using Math Freeware. The First Grade students are beginning to learn what each coin is and its value. They need to learn how to add pieces of money to learn the value of money. They need to know that a nickel is worth 5 cents or five pennies, a dime is worth 10 pennies or 10 cents, and a quarter is worth 25 pennies or 25 cents. When adding money they need to know the ones place, tens place, and hundreds place. Place value must be taught first. Our goal in first grade is to teach them the value of each coin and how and what they can purchase with certain coins or $1.00. Deanna Pottorff, First Grade

Add and Subtract with Base Ten Blocks
On the National Library of Virtual Manipulatives I plan on using the Base Ten Blocks for Addition and Subtraction that is under the Numbers and Operations tab in my room. We have just finished grouping and ungrouping larger numbers and some students still do not see why we have to group at all. Some students begin crossing off numbers that don't need to be ungrouped just because we have been practicing. I would use this as a whole group activity. I may have students grouped and work as a team to answer the problems on the Intelliboard. This could be a good introduction to ungrouping and grouping. 1. Go to www.nlvm.usu.edu/en/nav/vlibrary.html 2. Go to the 3-5 grade area with Number and Operations. 3. Go to the Base Block Addition tab. 4. Pull up the instructions tab and go over it with the students so they will know how to lasso the groups of ten. 5. Allow a "playtime" to practice moving the blocks. 6. Have students work as a team to solve the problems. You can give points if you like to make it competitive, especially if this is a review versus a preview. By using the virtual manipulatives on the Intelliboard all the students can see the actual blocks ungroup themselves "magically," so to speak, and hopefully gain a better understanding of why we do this. I would limit the problems at first to the hundreds place value, as the thousands get crowded very fast and are harder to count. My only concern for using the subtraction blocks is the whole idea of cancelling out. Most of the time we are just taking away from the top number, and cancelling out has not been taught yet. Some students may be a little familiar with it, so I will have to see how that part goes. Careful attention needs to be paid during instruction time so the blocks do not get too mixed up. Submitted by Crystal Smith for 4th Grade

Counting Money
From the National Library of Virtual Manipulatives, click the index Numbers and Operations for grades 3-5. Scroll to the topic Money. The teacher should demonstrate this lesson to the whole class to introduce a lesson about money. This lesson will give the students practice in counting money and making change.
The students can practice in small groups working together at the white board. Then they can work on the lessons individually. Day 1: How Much Money. Day 2: Pay Exact Amount. Day 3: Make a Dollar. Extension: Have the children create a word problem to go with the problem created. Sharon Stenson, Third Grade, Spring Hill Elementary

Naming Fractions
Using the website NLVM, under Numbers and Operations, 3-5, I would use the Fraction Naming tool to help introduce fractions. I would start out as a whole group to show the concept on the virtual whiteboard. Students will be allowed to come up to the board and manipulate the fractions themselves. Once the students are comfortable enough with the program and concept to work on their own, it can be used as a review. I would use it in small groups or as bell-ringers. It could also be used as a reteach activity for those still struggling with naming fractions. Lisa Sydnor, 3rd Grade

Time... What Time Will It Be
Objective: to learn how to calculate elapsed time. Activity: This lesson would be appropriate for a small group of children. They would create a story about the two clocks and share how much time elapsed from the beginning of the activity to the end of the activity. They would use the clock manipulatives to solve their story problem. It would be a wonderful review before the West Test. Submitted by: Gayle Wolfe, Title One Math Teacher

Add and Subtract Integers
To review adding and subtracting positive and negative numbers we will use the resource http://moodle.cabe.k12.wv.us and the virtual manipulatives called "color chips addition" and "color chips subtraction". Using the smartboard we will practice together with the resource provided. Our next step will be to use the red and yellow chips at each student's desk to practice individually. Our final step will be to practice problems with manipulatives, with the method previously taught, called "hills and holes", or to work problems no longer using the manipulatives. M.O.6.1.4: Analyze and solve problems using integers. We will develop, test, and justify hypotheses to derive the rules of operations with integers. Troubleshooting: Follow the steps provided at the top of each lesson exactly as indicated to practice the activity, or it will not cancel the integers. The positive and negative color chip rules are very different and may cause a problem if worked on the same day. Grade 6 - Leanne Woods

Probability with a Virtual Spinner - Lesson plan for 5th grade created by Anita Rowe
{"url":"http://cabellcountyteachers.wikispaces.com/NLVM+Ideas?responseToken=00f7f23753c1fe880d5134aead2e8f0f8","timestamp":"2014-04-23T22:42:11Z","content_type":null,"content_length":"70371","record_id":"<urn:uuid:eaf933d9-1404-4f72-b2cb-7438310d48f7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Isomorphism between D4 and Z2 x Z4

September 11th 2011, 11:31 PM #1
Isomorphism between D4 and Z2 x Z4
Can anyone please help with the following problem: Show that $D_4$ and $\mathbb{Z}_2$ x $\mathbb{Z}_4$ are not isomorphic. Be grateful for help with the approach to this problem. For those who need an update or reacquaintance with $D_4$ I have attached the relevant pages of Fraleigh: A First Course in Abstract Algebra.

Re: Isomorphism between D4 and Z2 x Z4
Can anyone please help with the following problem: Show that $D_4$ and $\mathbb{Z}_2$ x $\mathbb{Z}_4$ are not isomorphic. Be grateful for help with the approach to this problem. For those who need an update or reacquaintance with $D_4$ I have attached the relevant pages of Fraleigh: A First Course in Abstract Algebra.
It is known - or easy to prove - that the homomorphic image of an abelian group is an abelian group, too. Notice that $D_4$ is not abelian while $\mathbb{Z}_2 \times\mathbb{Z}_4$ is abelian.

Re: Isomorphism between D4 and Z2 x Z4
Another approach would be to note that isomorphisms are bijections. Therefore, they preserve order. So, $\mathbb{Z}_2\times \mathbb{Z}_4$ has order 8, while the Klein 4-group has order 4.

Re: Isomorphism between D4 and Z2 x Z4
Thanks for the post. But Swlabr, this post concerns $D_4$, which has order 8 ... not the Klein 4-group (order 4) ...

Re: Isomorphism between D4 and Z2 x Z4
Ah, sorry, there are two conventions - one can use either $D_{n}$ or $D_{2n}$ for the dihedral group of order $2n$. I should have realised from context which one you were using...

Re: Isomorphism between D4 and Z2 x Z4
oh ... understand!
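(Not part of the thread.) Both arguments above can be checked mechanically. The sketch below, in Python, builds $D_4$ as the group of 2x2 integer matrices generated by a 90-degree rotation and a reflection (one standard realization), builds $\mathbb{Z}_2 \times \mathbb{Z}_4$ with componentwise addition, and confirms that the first is non-abelian while the second is abelian. It also lists the element orders; the two groups have different numbers of elements of order 2 (five versus three), which is another standard way to see they are not isomorphic.

    def mat_mul(a, b):
        # 2x2 matrix product over tuples of tuples (hashable, so they can live in a set)
        return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                     for i in range(2))

    e = ((1, 0), (0, 1))
    r = ((0, -1), (1, 0))    # rotation by 90 degrees
    s = ((1, 0), (0, -1))    # reflection across the x-axis

    # Close {e, r, s} under multiplication to obtain all eight elements of D4.
    d4 = {e}
    frontier = {r, s}
    while frontier:
        d4 |= frontier
        frontier = {mat_mul(a, b) for a in d4 for b in d4} - d4

    # Z2 x Z4 with componentwise addition.
    z2xz4 = [(a, b) for a in range(2) for b in range(4)]
    add = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 4)

    def order(x, op, identity):
        n, y = 1, x
        while y != identity:
            y, n = op(y, x), n + 1
        return n

    print(len(d4))                                                      # 8
    print(all(mat_mul(a, b) == mat_mul(b, a) for a in d4 for b in d4))  # False: D4 is not abelian
    print(all(add(a, b) == add(b, a) for a in z2xz4 for b in z2xz4))    # True:  Z2 x Z4 is abelian
    print(sorted(order(x, mat_mul, e) for x in d4))                     # [1, 2, 2, 2, 2, 2, 4, 4]
    print(sorted(order(x, add, (0, 0)) for x in z2xz4))                 # [1, 2, 2, 2, 4, 4, 4, 4]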
{"url":"http://mathhelpforum.com/advanced-algebra/187813-isomorphism-between-d4-z2-x-z4.html","timestamp":"2014-04-21T08:31:09Z","content_type":null,"content_length":"48885","record_id":"<urn:uuid:0a16f699-bbd2-4747-a89b-25bf24fbc4bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Re: Type error oleg at pobox.com oleg at pobox.com Sun Apr 8 19:54:52 EDT 2007 Alfonso Acosta wrote: I have a type problem in my code which I dont know how to solve > data HDSignal a = HDSignal > class HDPrimType a where > class PortIndex a where > class SourcePort s where > -- Plug an external signal to the port > plugSig :: (HDPrimType a, PortIndex ix) =>ix -> s ->(HDSignal a -> b) -> b > class DestPort d where > -- Supply a signal to the port > supplySig :: (PortIndex ix, HDPrimType a) => HDSignal a -> ix -> d -> d > -- Connect providing indexes > connectIx :: (SourcePort s, PortIndex six, DestPort d, PortIndex dix) => > six -> s -> dix -> d -> d > connectIx six s dix d = plugSig six s $ (push2 supplySig) dix d I'm afraid the example may be a bit too simplified. The first question: what is the role of the |HDPrimType a| constraint in plugSig and supplySig? Do the implementations of those functions invoke some methods of the class HDPrimType? Or the |HDPrimType a| constraint is just to declare that the parameter |a| of HDSignal is a member of HDPrimType? If it is the latter, that intention is better declared > data HDPrimType a => HDSignal a = HDSignal Any function constructing the value HDSignal has to prove to the typechecker that the type |a| satisfied the HDPrimType constraint. Once we ensured that only properly constrained HDSignal can be constructed, the HDPrimType constraint can be removed from the signature of plugSig and supplySig, and the example compiles. It compiles because plugSig requires a supplicant that can process any HDSignal, and supplySig promises to be exactly this supplicant, which is able to process HDSignal without looking at it. With the original HDPrimType constraint, the meaning is the same. Although neither plugSig or supplySig care about which particular instance of HDPrimType is chosen, the typechecker must chose some instance. Alas, there is no information in the signature of connectIx to help the typechecker chose the instance. There is no defaulting. If plugSig and supplySig do use the methods of HDPrimType, one could use existentials: > data HDSignal' = forall a. HDPrimType a => HDSignal' (HDSignal a) > class SourcePort s where > plugSig :: (PortIndex ix) => ix -> s -> (HDSignal' -> b) -> b > class DestPort d where > supplySig :: (PortIndex ix) => HDSignal' -> ix -> d -> d That works too. More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-April/024227.html","timestamp":"2014-04-21T03:37:33Z","content_type":null,"content_length":"5095","record_id":"<urn:uuid:0d6c2b88-048c-490f-8a49-3dfa4cf05f3d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: ON THE LOCAL QUOTIENT STRUCTURE OF ARTIN STACKS
ABSTRACT. We show that near closed points with linearly reductive stabilizer, Artin stacks are formally locally quotient stacks by the stabilizer. We conjecture that the statement holds étale locally and we provide some evidence for this conjecture. In particular, we prove that if the stabilizer of a point is linearly reductive, the stabilizer acts algebraically on a miniversal deformation space, generalizing the results of Pinkham and Rim. We provide a generalization and stack-theoretic proof of Luna's étale slice theorem which shows that GIT quotient stacks are étale locally quotient stacks by the stabilizer.
This paper is motivated by the question of whether an Artin stack is "locally" near a point a quotient stack by the stabilizer at that point. While this question may appear quite technical in nature, we hope that a positive answer would lead to intrinsic constructions of moduli schemes parameterizing objects with infinite automorphisms (e.g. vector bundles on a curve) without the use of classical geometric invariant theory. We restrict ourselves to studying Artin stacks X over a base S near closed points x ∈ |X| with linearly reductive stabilizer. We conjecture that this question has an affirmative answer in the étale topology. Precisely,
Conjecture 1. If X is an Artin stack finitely presented over an algebraic space S and x ∈ |X| is a closed point with linearly reductive stabilizer with image s ∈ S, then there exists an étale neighborhood ...
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/954/4300249.html","timestamp":"2014-04-20T01:35:31Z","content_type":null,"content_length":"8619","record_id":"<urn:uuid:daa0fd86-8146-46a7-8079-4a6e26e17d6e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Volume

January 6th 2010, 03:20 PM #1 Sep 2009
Finding the Volume
Can someone please help me figure out this problem? I've attached the picture on the bottom. Problem: A box with an open top is to be constructed from a regular piece of cardboard with dimensions 12 in. by 20 in. by cutting out equal squares of side x at each corner and then folding up the sides as in the figure. Express the volume V of the box as a function of x.

Can someone please help me figure out this problem? I've attached the picture on the bottom. Problem: A box with an open top is to be constructed from a regular piece of cardboard with dimensions 12 in. by 20 in. by cutting out equal squares of side x at each corner and then folding up the sides as in the figure. Express the volume V of the box as a function of x.
So, what exactly are you having trouble with here? Note that $w=12-2x$ and $l=20-2x$, so $V = x(12-2x)(20-2x)$.

Great, Thanks! I was having a hard time starting the problem.
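(Not part of the thread.) Spelling out where the hint leads: with squares of side $x$ removed, the folded box has base $(12-2x)$ by $(20-2x)$ and height $x$, so

$V(x) = x(12-2x)(20-2x) = 4x^{3} - 64x^{2} + 240x, \qquad 0 < x < 6.$

The restriction $0 < x < 6$ simply keeps both base dimensions positive.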
{"url":"http://mathhelpforum.com/geometry/122686-finding-volume.html","timestamp":"2014-04-18T06:43:26Z","content_type":null,"content_length":"35881","record_id":"<urn:uuid:5a3d5668-7263-41b4-a8d7-d91c95691614>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Advogato: Blog for johnw Journey into Haskell, part 6 Another thing to be learned down the Haskell rabbit-hole: Thinking in infinites. Today someone posed a puzzle which I tried to solve in a straight-forward, recursive manner: Building a list of primes. The requested algorithm was plain enough: > Create a list of primes "as you go", considering a number prime if it can't be divided by any number already considered prime. However, although my straightforward solution worked on discrete ranges, it couldn't yield a single prime when called on an infinite range -- something I'm completely unused to from other languages, except for some experience with the SERIES library in Common Lisp. ## An incomplete solution Looking similar to something I might have written in Lisp, I came up with this answer: primes = reverse . foldl fn [] where fn acc n | n `dividesBy` acc = acc | otherwise = (n:acc) dividesBy x (y:ys) | y == 1 = False | x `mod` y == 0 = True | otherwise = dividesBy x ys dividesBy x [] = False But when I suggested this on [#haskell](irc://irc.freenode.net/haskell), someone pointed out that you can't reverse an infinite list. That's when a light-bulb turned on: I hadn't learned to think in infinites yet. Although my function worked fine for discrete ranges, like `[1..100]`, it crashed on `[1..]`. So back to the drawing board, later to come up with this infinite-friendly version: primes :: [Int] -> [Int] primes = fn [] where fn _ [] = [] fn acc (y:ys) | y `dividesBy` acc = fn acc ys | otherwise = y : fn (y:acc) ys dividesBy _ [] = False dividesBy x (y:ys) | y == 1 = False | x `mod` y == 0 = True | otherwise = dividesBy x ys Here the accumulator grows for each prime found, but returns results in order whose value can be calculated as needed. This time when I put `primes [1..]` into GHCi it printed out prime numbers immediately, but visibly slowed as the accumulator grew larger. Syndicated 2009-03-26 13:00:00 (Updated 2009-03-23 05:28:45) from Lost in Technopolis
{"url":"http://advogato.org/person/johnw/diary/64.html","timestamp":"2014-04-20T05:50:38Z","content_type":null,"content_length":"5916","record_id":"<urn:uuid:74defd8f-7fd6-45ef-ad0e-fb4afd78d007>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus - Anti-Derivatives
Posted by Sean on Friday, March 14, 2008 at 5:55pm. How would you find the Integral of (cos(x\8))^3, defined from -(4 x pi)/3) to (4 x pi)/3)

• Calculus - Anti-Derivatives - Anonymous, Friday, March 14, 2008 at 6:22pm
cos^3(x)dx = cos^2(x)cos(x)dx = [1-sin^2(x)]cos(x)dx =

• Calculus - Anti-Derivatives - drwls, Friday, March 14, 2008 at 6:27pm
Is cos(x)8 supposed to be cos 8x, (cosx)^8 or 8 cosx?

• Calculus - Anti-Derivatives - Sean, Friday, March 14, 2008 at 6:29pm
It's actually supposed to be (cos(x/8))^3. With what Anonymous answered, I see he left out the x/8, instead only using x. Would you, Drwls, mind using x/8?

• Calculus - Anti-Derivatives - drwls, Friday, March 14, 2008 at 6:52pm
Sorry, I missed the \ mark. It is always better to use / for fractions when typing. Without messing with trig identities for cos(x/8), let's just substitute u for x/8. Then your integral becomes Integral of (cos u)^3 = (1 - sin^2 u) cos u du, from u = -(pi/6) to (pi/6). Now let sin u = v, dv = cos u du, and your integral becomes Integral of (1 - v^2) dv, from v = -1/2 to 1/2, since that is what v is when u = +/- pi/6. Now it should be easy! The indefinite integral is v - v^3/3. Evaluate it at v = 1/2 and -1/2. My answer: 11/24 - (-11/24) = 11/12. Don't trust my algebra. Check it yourself.

• Calculus - Anti-Derivatives - Sean, Friday, March 14, 2008 at 8:09pm
I can follow everything except when you change the (4 x pi)/3 to pi/6, and change pi/6 into 1/2. Could you clear that up for me?

• Calculus - Anti-Derivatives - Damon, Friday, March 14, 2008 at 8:19pm
The upper limit, for example, was x = 4 pi/3. Since u = x/8, the upper limit using u instead of x is u = (4 pi/3)/8 = pi/6. Then it changes again to go from u to v: v = sin u, so at that same upper limit where x = 4 pi/3 and u = pi/6, we need to find v. v at this upper limit is v = sin(pi/6) = sin 30 degrees = 1/2.

• Calculus - Anti-Derivatives - Sean, Friday, March 14, 2008 at 8:38pm
Oh. Thank you, Damon, and Drwls. That really helped me out.
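(An editorial check, not part of the original thread.) The substitution $u = x/8$ also rescales the differential, $dx = 8\,du$, a factor the replies above drop. Carrying it through:

$\int_{-4\pi/3}^{4\pi/3} \cos^{3}(x/8)\,dx = 8\int_{-\pi/6}^{\pi/6} \cos^{3}u\,du = 8\left[\sin u - \frac{\sin^{3}u}{3}\right]_{-\pi/6}^{\pi/6} = 8\cdot\frac{11}{12} = \frac{22}{3}.$

The inner evaluation in the thread (the value $11/12$) is otherwise correct.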
{"url":"http://www.jiskha.com/display.cgi?id=1205531737","timestamp":"2014-04-18T22:31:09Z","content_type":null,"content_length":"10937","record_id":"<urn:uuid:92181f37-bd31-4100-9386-644b213dadc5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Governing equations B. External flow and temperature fields far from the droplet C. Nondimensional parameters D. Droplet shape E. Thermocapillary migration speed A. Temperature and velocity fields B. Force on the droplet C. Numerical solution D. Code validation A. Temperature field B. Thermocapillary migration velocity C. Interior flow field 1. Comparison with the model 2. Comparison with experiment
{"url":"http://scitation.aip.org/content/aip/journal/pof2/21/4/10.1063/1.3112777","timestamp":"2014-04-18T11:16:57Z","content_type":null,"content_length":"135131","record_id":"<urn:uuid:95ffcee6-6c94-4e38-b087-9da8c1aeefe1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Identify a number that is a rational number, an integer, a whole number, and a natural number.
Which of the following is a natural number?
Which of the following is a rational number and an integer?
The temperatures recorded for the past three days in three different locations are -12 °C, -9 °C and 2 °C. Plot these values on the number line.
Plot the number -1.8 on the number line.
Arrange the real numbers -2, -4, 5, 6 in ascending order.
There are three different types of flowers from which a florist has to select one flower from each type to make a bouquet. The probabilities of selecting the first flower from each type are 0.78, 0.44 and 0.71. Order the probabilities in ascending order.
On which of the number lines are the numbers -3, 0, and 4 plotted correctly?
Which of the number lines represents the integers 10, 7 and 5?
The probability of selecting a boy for the basketball team of a school is 0.56 and that of a girl is 3/4. Who is more likely to be selected?
On which of the number lines are the numbers -2, 0, and 2 plotted correctly?
The total profit of a company is 0.45% of the total sales. Graph the percentage of the profit on a number line.
Which of the number lines represents the integers -3, 3 and -2?
On which of the number lines are the integers -1, -4 and -6 plotted correctly?
On which of the number lines are the numbers 3.2, 3.4, and 3.6 plotted correctly?
On which of the number lines is the fraction 8/3 plotted correctly?
Identify the number lines in which the fraction -15/4 is plotted correctly.
Which of the number lines represents the fractions 1/2, 3/4 and 6/7?
On which of the number lines are the real numbers -1/2, -3/4, -1/4 and -7/5 plotted correctly?
Order the real numbers 5/7, - 171D=2 and 4/5 using a number line.
Order the real numbers 1/2, 3/4, 3/7 using a number line.
Compare the real numbers 3/4 and 5/4 using a number line.
Compare the real numbers 0.45 and 2/3 using a number line.
The heights reached by a bouncing ball when it was dropped from the top of a building are 0.9, 3/4 and 1/2. Order them in ascending order.
Which of the number lines represents the numbers -0.5, 0 and 0.5?
Compare the real numbers -1/2 and -7/2 using a number line.
Compare the integers -5 and 2 using a number line.
Which of the following inequalities are true for the number line?
Compare the numbers represented on the number line.
Two rockets were launched from a space center, one with a speed of 0.45 m/sec and the other with a speed of 11/29 m/sec. Compare the speeds of the two rockets.
Using the number line, select the correct inequality.
Order the real numbers 4, 7, -1 and -8 using a number line.
Order the real numbers 1/2, -1, 0, and -4/5 using a number line.
Which of the following is not a rational number?
Which of the following statements is false?
Identify the number line that shows the approximate location of -√32.
Identify the number line that shows the location of √144.
Choose the point that represents √11 on the number line.
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgdjaxkjdba&.html","timestamp":"2014-04-16T22:02:50Z","content_type":null,"content_length":"87208","record_id":"<urn:uuid:a3e5d00c-0db5-475d-a291-6ec4869fecf8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
WyzAnt Resources A farmer is planning to put in a garden next to his barn. He is planning to fence in the garden on three sides (the barn will make the fourth side). He has 100 feet of fencing to use. Find an what is the length of the pasture? the width of a pasture is 40 yards more than twice its length. if the area is 2250 square yards what is the length of the pasture? Optimization Problems Two parabolas y=16-3x2 and y=x2-9 are drawn. A rectangle is contructed with sides parallel to the coordinate axes with the two upper vertices on the graph of y=16-3x2 and the two lower vertices... Optimization Problems A piece of wire 200 cm is cut into two pieces: one a square and another a circle. Where should the cut be made if the sum of the two areas (square and circle) is to be a minimum. Optimization Problems A piece of plexiglass is in the shape of a semicircle with radius 2 m. Determine the dimensions of the rectangle with the greatest area that can be cut from the piece of plexiglass. Find the ratio of the area of triangle XBY to the area of triangle ABC for the given measurements. Image: https://media.glynlyon.com/g_geo_2013/8/23183_5.gif Find the ratio of the area of triangle XBY to the area of triangle ABC for the given measurements, if XY = 2, AC... Find the ratio of the area of triangle XBY to the area of triangle ABC for the given measurements, if BY = 2, BC = 4 1/3 1/4 1/2 The image: https://media.glynlyon.com/g_geo_2013/8/23183_5.gif Find the ratio of the area of triangle XBY to the area of triangle ABC for the given measurements, if BY =... The sides of a quadrilateral are 3, 4, 5, and 6. Find the length of the shortest side of a similar quadrilateral whose area is 9 times as great. The sides of a quadrilateral are 3, 4, 5, and 6. Find the length of the shortest side of a similar quadrilateral whose area is 9 times as great. Answers to choose from: A... Square feet less a fraction The area of a room is 99-3/7 square feet. The area of the kitchen is 1-3/4 the area of room. If the length of kitchen is 14-1/2 feet how wide is the kitchen? I have no idea where... Suppose you have an isosceles triangle Suppose you have an isosceles triangle, and each of the equal sides has a length of foot. Suppose the angle formed by those two sides is 45 degrees . The area of the triangle is 0.3536 square... A triangular parcel of land has sides of lengths 110 feet, 820 feet and 827 feet. A triangular parcel of land has sides of lengths 110 feet, 820 feet and 827 feet. 1)What is the area of the parcel of land? 2) If land is valued at 2300 per acre (1 acre is 43,560 feet... what is the size of the moon in milimeters? (area) I have a school project and I need to know the answer. Find the area of the parallelogram? Find the area of the parallelogram spanned by the vectors <1, 0, -1> and <-2, 2, 0>. I have troubles with dot products. Find the area of the triangle? Find the area of the triangle with vertices A(2, 1), B(5, 3), and C(6, 4). If perimeter is 16 inches what is area Find the area Find the area of the parallelogram amdc. Round to the nearest tenth. Segment AB is perpendicular to segment CD. MD = 12mm AB = 6mm BD = 20mm Find the area of the parallelogram amdc. Round to the nearest tenth. Segment AB is perpendicular to segment CD. MD = 12mm AB = 6mm BD = 20mm Can anyone help me with this Measuring Area & Volume study guide? A cube has a volume of 6 cubic centimeters. If all side lengths are doubled,what is the volume of the new cube? A rectangular prism has a volume of 32 cubic feet. 
What is the new volume if... What is the Length and Width of a rectangle when the area equals 19460 square feet where Length = L and Width = (5/8)L ? To find the area of a rectangle the equation is A=L*W; where A = area, L=Length, and W=Width. If I know that A=19,460 Ft squared and I also know that the width is 5/8 (five eighths)... write a simplified expression for the area of the rectangle. State all restrictions on x I need help on finding the Circumference and Area of Circles. I need help on this, my math teacher is not a very good teacher when it comes to teaching. Please explain this to me. This > π < is the pi symbol. Thanks! Solve using: ...
{"url":"http://www.wyzant.com/resources/answers/area?f=votes","timestamp":"2014-04-19T08:17:51Z","content_type":null,"content_length":"56969","record_id":"<urn:uuid:45955e19-8897-454b-b7cc-8f26e61256a9>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Vandermonde matrix linear algebra , a Vandermonde matrix , named after Alexandre-Théophile Vandermonde , is a with the monomial terms of a geometric progression in each row, i.e., an 1 & alpha_1 & alpha_1^2 & dots & alpha_1^{n-1} 1 & alpha_2 & alpha_2^2 & dots & alpha_2^{n-1} 1 & alpha_3 & alpha_3^2 & dots & alpha_3^{n-1} vdots & vdots & vdots & ddots &vdots 1 & alpha_m & alpha_m ^2 & dots & alpha_m^{n-1} end{bmatrix} or $V_\left\{i,j\right\} = alpha_i^\left\{j-1\right\}$ for all indices . (Some authors use the of the above matrix.) The determinant of a square Vandermonde matrix (so m=n) can be expressed as: The above determinant is sometimes called the discriminant, although many sources, including this article, refer to the discriminant as the square of this determinant. Using the Leibniz formula $det\left(V\right) = sum_\left\{sigma in S_n\right\} sgn\left(sigma\right) , alpha_1^\left\{sigma\left(1\right)-1\right\} cdots alpha_n^\left\{sigma\left(n\right)-1\right\},$ for the determinant, we can rewrite this formula as where S[n] denotes the set of permutations of {1, 2, ..., n}, and sgn(σ) denotes the signature of the permutation σ. If m≤n, then the matrix V has maximum rank (m) if and only if all α[i] are distinct. When two or more α[i] are equal, the corresponding polynomial interpolation problem (see below) is underdetermined. In that case one may use a generalization called confluent Vandermonde matrices, which makes the matrix non-singular while retaining most properties. If α[i] = α[i + 1] = ... = α[i+k] and α[i] ≠ α[i − 1], then the (i + k)th row is given by $V_\left\{i+k,j\right\} = begin\left\{cases\right\} 0, & mbox\left\{if \right\} j le k; frac\left\{\left(j-1\right)!\right\}\left\{\left(j-k-1\right)!\right\} alpha_i^\left\{j-k-1\right\}, & mbox \left\{if \right\} j > k. end\left\{cases\right\}$ The above formula for confluent Vandermonde matrices can be readily derived by letting two parameters go arbitrarily close to each other. The difference vector between the rows corresponding to scaled to a constant yields the above equation (for = 1). Similarly, the cases > 1 are obtained by higher order differences. Consequently, the confluent rows are derivatives of the original Vandermonde row. These matrices are useful in polynomial interpolation , since solving the system of linear equations Vu Vandermonde matrix is equivalent to finding the coefficients u[j] of the polynomial $P\left(x\right)=sum_\left\{j=0\right\}^\left\{n-1\right\} u_j x^j$ of degree ≤ −1 which has the values at α . The Vandermonde determinant plays a central role in the Frobenius formula , which gives the conjugacy classes of the symmetric group When the values $alpha_k$ range over powers of a finite field, then the determinant has a number of interesting properties: for example, in proving the properties of a BCH code. Confluent Vandermonde matrices are used in Hermite interpolation. A commonly known special Vandermonde matrix is the discrete Fourier transform matrix, where the numbers α[i] are chosen equal to the m different mth roots of unity. The Vandermonde matrix diagonalizes a companion matrix. See also • Roger A. Horn and Charles R. Johnson, Topics in matrix analysis, (1991) Cambridge University Press. See Section 6.1. • Lecture 4 reviews the representation theory of symmetric groups, including the role of the Vandermonde determinant.
{"url":"http://www.reference.com/browse/Vandermonde+matrix","timestamp":"2014-04-16T07:57:03Z","content_type":null,"content_length":"84822","record_id":"<urn:uuid:a17b69b8-c78b-4a02-ae61-4250f50088ba>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Syllabus Entrance EC 315 Quantitative Research Methods Torres, David Mission Statement: The mission of Park University, an entrepreneurial institution of learning, is to provide access to academic excellence, which will prepare learners to think critically, communicate effectively and engage in lifelong learning while serving a global community. Vision Statement: Park University will be a renowned international leader in providing innovative educational opportunities for learners within the global society. Course EC 315 Quantitative Research Methods Semester S2B 2007 BL Faculty Torres, David Title Economics Instructor Degrees/Certificates BBA in Economics BBA in Finance Concentration in Banking Masters of Science in Economics Daytime Phone 915-594-5654 E-Mail p431276@pirate.park.edu Class Days TTR Class Time 7:40 - 10:10 PM Prerequisites MA120 & CS140 Credit Hours 3 STATISTICAL TECHNIQUES in Business and Economics, Lind, Marachal and Wathen, McGraw-Hill, Irwin 13th edition "Guide to Quantitative Analysis" by Pete Soule Additional Resources: Not required but a Statistical / Scientific calculator will be helpful! Students are not allowed to use any multi-functional electronic devices (cell phones, ect) on the final. McAfee Memorial Library - Online information, links, electronic databases and the Online catalog. Contact the library for further assistance via email or at 800-270-4347. Career Counseling - The Career Development Center (CDC) provides services for all stages of career development. The mission of the CDC is to provide the career planning tools to ensure a lifetime of career success. Park Helpdesk - If you have forgotten your OPEN ID or Password, or need assistance with your PirateMail account, please email helpdesk@park.edu or call 800-927-3024 Resources for Current Students - A great place to look for all kinds of information http://www.park.edu/Current/. Course Description: This intermediate level statistics course covers the fundamentals of conducting quantitative research for the social and administrative sciences. The course is organized around a research project on quantitative analysis of data. The research paper reflects the requirements of the core assessment for the course. PREREQUISITES: MA 120 and CS 140. 3:0:3 Learning Outcomes: Core Learning Outcomes 1. Write a proposal for a research project on an original multiple regression model with four independent variables. 2. Gather data and research articles related to the multiple regression model. 3. Use a statistical software package to regress the original model. 4. Write a formal research report that defines and analyzes the evaluative information provided by a statistical software package, including the adjusted R-square, F statistic, t statistics, and correlation coefficients for multicollinearity. 5. Do hypothesis testing and determine confidence intervals using the t statistic. Also, conduct hypothesis testing using the F and Chi-square statistics. Core Assessment: DESCRIPTION OF CORE ASSESSMENT FOR EC 315: All Park University courses must include a core assessment that measures the course's Core Learning Outcomes. The purpose of this assessment is to determine if expectations have been met concerning mastery of learning outcomes across all instructional modalities. For this course, the core assessment is a research project that includes a written proposal and final research paper. 
This project is worth 20 percent of the student's final grade and will assess students' mastery of four Core Learning Outcomes (outcomes 1, 2, 3, and 4 listed on this syllabus). Although guidelines for the research proposal are listed below, the Core Assessment Rubric refers to the grading of the final research paper only. Research Topic Proposal As preparation for the final research paper, formulate an original theory about the correlation between four measurable independent variables (causes) and one measurable dependent variable (the effect). The topic proposal should include the following four items which serve as the foundation for the final research paper after instructor feedback is given. 1) Purpose Statement In one paragraph, state the correlation and identify the primary independent variable. State the correlation: “The dependent variable _______ is determined by independent variables ________, _________, ________, and ________.” Identify and defend the “primary” independent variable, or the variable believed to have the strongest impact on the dependent variable: “The most important independent variable in this relationship is ________ because _________.” 2) Definition of Variables For each variable, write a single definition paragraph (five paragraphs total). Paragraphs should be in this order: dependent variable, primary independent variable, and remaining three independent In addition to defining the independent variables, defend why each determines the dependent variable. For the primary independent variable, at least two research sources that discuss the variable also must be cited. These sources need not be technical documents but should contain evidence to justify the relationship between the primary independent variable and the dependent variable. List these sources on the Works Cited (reference) page. Citations from encyclopedias, abstracts, or non-governmental websites are not acceptable research sources. 3) Data Description For each of the five variables, at least 30 observations of cross-sectional data must be obtained. Thus for the final research paper, a data matrix that is at least 30 rows by five columns must be In one paragraph, identify the data sources and describe the data (i.e., which government agencies supply the data, which methods are used to compile them, when they were collected, etc.). Attach a Xerox copy of the original data tables from which the data will be compiled after the proposal is reviewed and approved by the instructor. 4) Works Cited Page The final page of the proposal should be a Works Cited page listing the two research sources for the primary independent variable and the data sources, with a separate citation for each table of data, including specific table numbers for each of the five sources. The appropriate format should be employed (see below). Final Research Paper Purpose Statement and Model 1) In the introductory paragraph, state why the dependent variable has been chosen for analysis. Then make a general statement about the model: “The dependent variable _______ is determined by variables ________, ________, ________, and ________.” 2) In the second paragraph, identify the primary independent variable and defend why it is important. “The most important variable in this analysis is ________ because _________.” In this paragraph, cite and discuss the two research sources that support the thesis, i.e., the model. 
3) Write the general form of the model, with the primary independent variable as X[1]: The model is: Y = Y: brief definition of Y X[1]: brief definition of X[1 ][etc. for each variable] Definition of Variables 4) Define and defend all variables, including the dependent variable, in a single paragraph for each variable. Also, state the expectations for each independent variable. These paragraphs should be in numerical order, i.e., dependent variable, X[1], then X[2], etc. In each paragraph, the following should be addressed: < How is the variable defined in the data source? < Which unit of measurement is used? < For the independent variables: why does the variable determine Y? < What sign is expected for the independent variable's coefficient, positive or negative? Why? Data Description 5) In one paragraph, describe the data and identify the data sources. < From which general sources and from which specific tables are the data taken? (Citing a website is not acceptable.) < Which year or years were the data collected? < Are there any data limitations? Presentation and Interpretation of Results 6) Write the estimated (prediction) equation: The results are: 7) Identify and interpret the adjusted R^2 (one paragraph): < Define “adjusted R^2.” < What does the value of the adjusted R^2 reveal about the model? < If the adjusted R^2 is low, how has the choice of independent variables created this result? 8) Identify and interpret the F test (one paragraph): < Using the p-value approach, is the null hypothesis for the F test rejected or not rejected? Why or why not? < Interpret the implications of these findings for the model. 9) Identify and interpret the t tests for each of the coefficients (one separate paragraph for each variable, in numerical order): < Are the signs of the coefficients as expected? If not, why not? < For each of the coefficients, interpret the numerical value. < Using the p-value approach, is the null hypothesis for the t test rejected or not rejected for each coefficient? Why or why not? < Interpret the implications of these findings for the variable. < Identify the variable with the greatest significance. 10) Analyze multicollinearity of the independent variables (one paragraph): < Generate the correlation matrix. < Define multicollinearity. < Are any of the independent variables highly correlated with each other? If so, identify the variables and explain why they are correlated. < State the implications of multicollinearity (if found) for the model. 11) Other (not required): < If any additional techniques for improving results are employed, discuss these at the end of the paper. Works Cited Page 12) Use the proper format to list the works cited under two headings: Research: two sources Data: a separate citation for each of the five variables Link to Class Rubric Class Assessment: There will be two exams, homework and a written research project in this course. The research paper is the core assessment for the course. The comprehensive final is not a take home exam. The comprehensive final is not open book or open notes. Homework 10% 100 points Midterm test 30% 100 points Final 30% 100 points Research Paper 30% 100 points TOTAL 400 points Grades will be determined as follows: A 90% to 100% 360 to 400 points B 80% to <90% 320 to <360 points C 70% to <80% 280 to <320 points D 60% to <70% 240 to <280 points F <60% <240 points All final exams will be comprehensive and will be closed book and closed notes. 
If calculators are allowed, they will not be multifunctional electronic devices that include features such as: phones, cameras, instant messaging, pagers, and so forth. Electronic Computers will not be allowed on final exams unless an exception is made by the Associate Dean. Late Submission of Course Materials: Homework will be accepted late, the research project CAN NOT AND WILL NOT be accepted late! Classroom Rules of Conduct: Students are encouraged to read the current Park University catalog for standards, policies and SOP's. Course Topic/Dates/Assignments: Week 1: Class introduction and Admin Probability Distributions, Chapter 7 Estimation and Confidence Intervals Chapter 9 Week 2: Intro to Research Reqmts Central Limit Theorem Chap 8 pp 258 - 269 Hypothesis testing Chap 10 Intro to Regression Chap 13 pp 429 - 444 Homework Problem Set #1 due Week 3: Standard Error Chap 13 pp 446 - 459 Two Sample Test of Hypothesis Large Sample Chap 11 pp 356 - 363 Homework Problem Set #2 Due Review Progress of Research Projects Week 4: Two Sample Test of Hypothesis Small Sample Chapter 11 pp 356 - 366 Review Progress of Research Projects Midterm Test Week 5: Review results of Midterm test ANOVA and F Distributions Chapter 12 Multiple Regression Chapter 14 Problem Set # 3 Due Review Progress of Research Project Week 6: Chi Sqr applications, Chap. 15 Times Series and Forecasting, Chap. 19 Problesm Set # 4 Due Week 7: Time Series and Forecasting Chap. 19 Research Projects Due Week 8: Final Test Academic Honesty: Academic integrity is the foundation of the academic community. Because each student has the primary responsibility for being academically honest, students are advised to read and understand all sections of this policy relating to standards of conduct and academic life. Park University 2006-2007 Undergraduate Catalog Page 87-89 Plagiarism involves the use of quotations without quotation marks, the use of quotations without indication of the source, the use of another's idea without acknowledging the source, the submission of a paper, laboratory report, project, or class assignment (any portion of such) prepared by another person, or incorrect paraphrasing. Park University 2006-2007 Undergraduate Catalog Page 87 Attendance Policy: Instructors are required to maintain attendance records and to report absences via the online attendance reporting system. 1. The instructor may excuse absences for valid reasons, but missed work must be made up within the semester/term of enrollment. 2. Work missed through unexcused absences must also be made up within the semester/term of enrollment, but unexcused absences may carry further penalties. 3. In the event of two consecutive weeks of unexcused absences in a semester/term of enrollment, the student will be administratively withdrawn, resulting in a grade of "W". 4. A "Contract for Incomplete" will not be issued to a student who has unexcused or excessive absences recorded for a course. 5. Students receiving Military Tuition Assistance or Veterans Administration educational benefits must not exceed three unexcused absences in the semester/term of enrollment. Excessive absences will be reported to the appropriate agency and may result in a monetary penalty to the student. 6. Report of a "F" grade (attendance or academic) resulting from excessive absence for those students who are receiving financial assistance from agencies not mentioned in item 5 above will be reported to the appropriate agency. 
Park University 2006-2007 Undergraduate Catalog Page 89-90 Disability Guidelines: Park University is committed to meeting the needs of all students that meet the criteria for special assistance. These guidelines are designed to supply directions to students concerning the information necessary to accomplish this goal. It is Park University's policy to comply fully with federal and state law, including Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990, regarding students with disabilities. In the case of any inconsistency between these guidelines and federal and/or state law, the provisions of the law will apply. Additional information concerning Park University's policies and procedures related to disability can be found on the Park University web page: http://www.park.edu/disability . ┃ Competency │ Exceeds Expectation (3) │ Meets Expectation (2) │ Does Not Meet Expectation (1) │ No Evidence (0) ┃ ┃Evaluation │Four independent variables are appropriately│Two to three independent variables are │One independent variable is appropriately chosen│No independent variable is appropriately ┃ ┃Outcomes │chosen and are measurable. │appropriately chosen and are measurable. │and is measurable. │chosen and no variable is measurable. ┃ ┃1 │ │ │ │ ┃ ┃Synthesis │Data for five variables are appropriate and │Data for two to four variables are appropriate │Data for one variable are appropriate and │No data are appropriate or documented; no┃ ┃Outcomes │documented; two research articles are cited.│and documented; two research articles are │documented; OR less than two research articles │research articles are cited. ┃ ┃2 │ │cited. │are cited. │ ┃ ┃Analysis │All of the following statistics are │Three to seven of the following statistics are │Less than three of the following statistics are │None of the following statistics is ┃ ┃Outcomes │perfectly analyzed: R2, F statistic, four t │perfectly analyzed: R2, F statistic, four t │perfectly analyzed: R2, F statistic, four t │analyzed: R2, F statistic, four t ┃ ┃4 │statistics, and correlation coefficients for│statistics, and correlation coefficients for │statistics, and correlation coefficients for │statistics, and correlation coefficients ┃ ┃ │multicollinearity │multicollinearity │multicollinearity │for multicollinearity ┃ ┃Application │All statistical results are generated with │All statistical results are generated with only│All statistical results are generated with two │ ┃ ┃Outcomes │no errors. │one error. │or more errors. │Statistical results are not generated. ┃ ┃3 │ │ │ │ ┃ ┃Content of │ │ │ │ ┃ ┃Communication│Works Cited page is properly formatted and │Works Cited page has one to two errors. │Works Cited page has three or more errors. │Works Cited page is not present. ┃ ┃Outcomes │complete. 
│ │ │ ┃ ┃4 │ │ │ │ ┃ ┃Technical │All of the following statistics are │Two to three of the following statistics are │ │None of the following statistics is ┃ ┃Skill in │perfectly defined: R2, F statistic, t │correctly defined: R2, F statistic, t │One of the following statistics is correctly │defined: R2, F statistic, t statistic, ┃ ┃Communicating│statistic, and correlation coefficients/ │statistic, and correlation coefficients/ │defined: R2, F statistic, t statistic, and │and correlation coefficients/ ┃ ┃Outcomes │multicollinearity │multicollinearity │correlation coefficients/multicollinearity │multicollinearity ┃ ┃4 │ │ │ │ ┃ ┃First │ │ │ │ ┃ ┃Disciplinary │The p-value approach to hypothesis testing │The p-value approach to hypothesis testing is │The p-value approach to hypothesis testing is │The p-value approach to hypothesis ┃ ┃Competency │is perfectly defined and analyzed for all │correctly defined and analyzed for two to three│correctly defined and analyzed for one t-test. │testing is not defined or analyzed for ┃ ┃Outcomes │four t-tests. │t-tests. │ │any t-test. ┃ ┃4 │ │ │ │ ┃ ┃Second │ │ │ │ ┃ ┃Disciplinary │The p-value approach to hypothesis testing │The p-value approach to hypothesis testing is │The p-value approach to hypothesis testing is │The p-value approach to hypothesis ┃ ┃Competency │is perfectly defined and analyzed for the │defined and analyzed with at least one error │not defined and analyzed with at least one error│testing is neither defined nor analyzed ┃ ┃Outcomes │F-test. │for the F-test. │for the F-test. │for the F-test. ┃ ┃4 │ │ │ │ ┃ This material is protected by copyright and can not be reused without author permission. Last Updated:2/13/2007 9:43:11 AM
{"url":"https://app.park.edu/syllabus/syllabus.aspx?ID=40289","timestamp":"2014-04-24T11:06:38Z","content_type":null,"content_length":"154296","record_id":"<urn:uuid:c844eaa8-0055-4f3d-9f4f-b38408235d5d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic quantities Temperature A map of global long term monthly average surface air temperatures in Mollweide projection. A temperature is a numerical measure of hot or cold. Its measurement is by detection of heat radiation or particle velocity or kinetic energy, or by the bulk behavior of a thermometric material. It may be calibrated in any of various temperature scales, Celsius, Fahrenheit, Kelvin, etc. In thermodynamics, entropy (usual symbol S) is a measure of the number of specific ways in which a thermodynamic system may be arranged, often taken to be a measure of disorder, or a measure of progressing towards thermodynamic equilibrium. The entropy of an isolated system never decreases, because isolated systems spontaneously evolve towards thermodynamic equilibrium, which is the state of maximum entropy. which is found from the uniform thermodynamic temperature (T) of a closed system dividing an incremental reversible transfer of heat into that system (dQ). The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic picture of the contents of a system. In thermodynamics, entropy has been found to be more generally useful and it has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state, as a consequence of the second law of Acceleration For example, an object such as a car that starts from standstill, then travels in a straight line at increasing speed, is accelerating in the direction of travel. If the car changes direction at constant speedometer reading, there is strictly speaking an acceleration although it is often not so described; passengers in the car will experience a force pushing them back into their seats in linear acceleration, and a sideways force on changing direction. If the speed of the car decreases, it is usual and meaningful to speak of deceleration; mathematically it is acceleration in the opposite direction to that of motion. Definition and properties[edit] Acceleration is the rate of change of velocity. If there is a change in speed, direction, or both, then the object has a changing velocity and is said to be undergoing an acceleration. Constant velocity vs acceleration[edit] To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path (the object's path does not curve). Momentum Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude: Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of linear momentum is implied by Newton's laws; but it also holds in special relativity (with a modified formula) and, with appropriate definitions, a (generalized) linear momentum conservation law holds in electrodynamics, quantum mechanics, quantum field theory, and general relativity. Newtonian mechanics The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object. 
As a formula, this is expressed as F = dp/dt: the net force equals the time rate of change of the momentum p.

Force

Potential

In linguistics, the potential mood is a grammatical mood. The mathematical study of potentials is known as potential theory; it is the study of harmonic functions on manifolds. This mathematical formulation arises from the fact that, in physics, the scalar potential is irrotational, and thus has a vanishing Laplacian — the very definition of a harmonic function. In physics, a potential may refer to the scalar potential or to the vector potential. In either case, it is a field defined in space, from which many important physical properties may be deduced.

Matter is a poorly defined term in science (see definitions below). The term often refers to a substance (often a particle) that has rest mass. Matter is also used loosely as a general term for the substance that makes up all observable physical objects.[1][2] All objects we see with the naked eye are composed of atoms.

Potential energy is energy stored by virtue of the position of an object in a force field, such as a gravitational, electric or magnetic field. For example, lifting an object against gravity performs work on the object and stores gravitational potential energy; if it falls, gravity does work on the object which transforms the potential energy to kinetic energy associated with its speed. Some specific forms of energy include elastic energy due to the stretching or deformation of solid objects, chemical energy such as is released when a fuel burns, and thermal energy, the microscopic kinetic and potential energies of the disordered motions of the particles making up matter.

Electric charge

Electric charge is a physical property of matter that causes it to experience a force when near other electrically charged matter. There exist two types of electric charges, called positive and negative. Positively charged substances are repelled from other positively charged substances, but attracted to negatively charged substances; negatively charged substances are repelled from negative and attracted to positive. An object will be negatively charged if it has an excess of electrons, and will otherwise be positively charged or uncharged. The SI unit of electric charge is the coulomb (C), although in electrical engineering it is also common to use the ampere-hour (Ah), and in chemistry it is common to use the elementary charge (e) as a unit.

Mass

In physics, mass (from Greek μᾶζα "barley cake, lump [of dough]") is a property of a physical body which determines the body's resistance to being accelerated by a force and the strength of its mutual gravitational attraction with other bodies. The SI unit of mass is the kilogram (kg). As mass is difficult to measure directly, usually balances or scales are used to measure the weight of an object, and the weight is used to calculate the object's mass. For everyday objects and energies well-described by Newtonian physics, mass describes the amount of matter in an object. However, at very high speeds or for subatomic particles, general relativity shows that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy, and all forms of energy resist acceleration by a force and have gravitational attraction.

The flow of sand in an hourglass can be used to keep track of elapsed time. It also concretely represents the present as being between the past and the future.
Time is a dimension in which events can be ordered from the past through the present into the future,[1][2][3][4][5][6] and also the measure of durations of events and the intervals between them.[3][7][8] Time has long been a major subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.[3][7][8][9][10][11] Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.[12][13][14] Some simple, relatively uncontroversial definitions of time include "time is what clocks measure"[7][15] and "time is what keeps everything from happening at once".[16][17][18][19]

Length

In geometric measurements, length is the longest dimension of an object.[1] In other contexts "length" is the measured dimension of an object. For example it is possible to cut a length of a wire which is shorter than wire thickness. Length may be distinguished from height, which is vertical extent, and width or breadth, which are the distance from side to side, measuring across the object at right angles to the length.

In the 19th and 20th centuries mathematicians began to examine non-Euclidean geometries, in which space can be said to be curved, rather than flat. According to Albert Einstein's theory of general relativity, space around gravitational fields deviates from Euclidean space.[4] Experimental tests of general relativity have confirmed that non-Euclidean space provides a better model for the shape of space.

Philosophy of space

Leibniz and Newton
{"url":"http://www.pearltrees.com/cleverchris/basic-quantities/id2067526","timestamp":"2014-04-18T16:31:09Z","content_type":null,"content_length":"27914","record_id":"<urn:uuid:5f457701-71e9-49d2-af41-05b196142a87>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

What is the maximum length of rod that can pass through this L-shaped corridor? (corridor diagram not shown)
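The corridor widths are not given in the surviving text, so only the general answer can be stated. Assuming widths a and b for the two corridors meeting at a right angle (my notation, not from the original post), the standard optimization runs as follows:

% Longest rod that fits around a right-angle corner between corridors
% of widths a and b (rod kept horizontal, thickness ignored).
% A segment touching the inner corner at angle \theta has length
% L(\theta) = a/\sin\theta + b/\cos\theta; the longest rod that passes
% is the minimum of L over 0 < \theta < \pi/2. Setting L'(\theta) = 0
% gives \tan^3\theta = a/b, and substituting back:
\[
  L_{\max} \;=\; \min_{0<\theta<\pi/2}\left(\frac{a}{\sin\theta}+\frac{b}{\cos\theta}\right)
  \;=\; \left(a^{2/3}+b^{2/3}\right)^{3/2}.
\]
% Example: a = b = 1 gives L_max = 2^{3/2} \approx 2.83.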
{"url":"http://openstudy.com/updates/5051f753e4b02b4447c15384","timestamp":"2014-04-20T10:50:49Z","content_type":null,"content_length":"1041131","record_id":"<urn:uuid:6dc03016-ad7c-4660-b1ca-156ef4409264>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
South Gate Science Tutor Find a South Gate Science Tutor ...I am also currently serving as a mentor/adviser for pre-health students at Duke University, so questions about the application process are welcome too! I have studied math and science all my life and really enjoy helping others. I am calm, friendly, easy to work with and have been tutoring for many years. 24 Subjects: including genetics, microbiology, GED, ACT Science ...I like to use repetition, key phrases and mnemonics to help my students remember the concepts. For example, to remember the relationship of the sides of a right triangle and the functions sine, cosine, tangent: Oscar Had A Headache Over Algebra (sine=Opposite/Hypotenuse, cosine=Adjacent/Hypotenu... 31 Subjects: including physics, ADD/ADHD, Aspergers, autism ...In the meantime, I would love to get experience working with students and strengthening my skills in teaching strategies that I am acquiring during my time in the credential program. During my time as a graduate student, I taught the Ecology & Evolution lab (Biology 171L) at CSU Fullerton for ap... 5 Subjects: including zoology, botany, genetics, biology ...During this time I was trained on the use of Macintosh operating systems (OSX and iOS) as well as software. In addition, I lead workshops for people to learn both operating systems and how to use Mac software. I possess a certificate of proficiency on Microsoft Office, from the Spring of 2001 (... 10 Subjects: including philosophy, Spanish, literature, elementary math Hello! My name is Brigitte and I will be in the UCLA area for 6 weeks! I am willing to tutor for many subjects (see description).I am an English major at Princeton University. 24 Subjects: including anatomy, algebra 1, biology, English
{"url":"http://www.purplemath.com/South_Gate_Science_tutors.php","timestamp":"2014-04-20T02:30:12Z","content_type":null,"content_length":"23794","record_id":"<urn:uuid:0ac041b0-8812-4914-96f9-cd7bbf259dc8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Bitwise operations in C are used to implement masking of variables, produce logical AND's, logical OR's, XOR operations on two variables, the occasional NOT, and even some bit shifting. The operations are called bitwise operations because each operation is done bit by bit for each variable. Here are the most common and supported bitwise operations that C provides.

Bitwise AND Operation In C

The AND operation, also known as a conjunction, is true if and only if both values are true. The operator is a single ampersand (&) sign in C, so for example a & b evaluates the bitwise AND of a and b. To produce a compound assignment where 'a' receives the value of "a & b" we can do:

a &= b;

An example of how the bitwise operation takes place:

    0101 (5)
AND 0011 (3)
  = 0001 (1)

Working from right to left we produce each compare vertically, as if we were adding two large numbers. So, 1 & 1 = 1, 0 & 1 = 0, 1 & 0 = 0, 0 & 0 = 0. Thus 0001.

Bitwise OR Operation in C

The OR operation, aka the disjunction, is true if at least one of its operands is true. In C its symbol is a single bar |, affectionately known as a pipe in Linux/Unix circles. To produce a compound assignment where 'a' receives the value of "a | b" we can do:

a |= b;

An example of how the OR operation takes place:

   0111 (7)
OR 0101 (5)
 = 0111 (7)

Execution occurs in the same manner mentioned under the AND operation.

Bitwise NOT Operation In C

The bitwise not symbol in C is the tilde, ~ symbol. The not just inverses each bit within your variable:

NOT 0011 (3)
  = 1100 (12)

So in this case, 3 becomes 12 as we flip the bits.

Bitwise XOR Operation In C

An exclusive disjunction or XOR is true if one but not both of its operands is true. This is sometimes used for basic encryption. To produce an XOR in C we use the ^ symbol, also known as a caret. To produce a compound assignment where 'a' receives the value of "a ^ b" we can do:

a ^= b;

For example:

    0101 (5)
XOR 0011 (3)
  = 0110 (6)

So following the same methodology above, we XOR 5 and 3 and receive 6.

Bit-shifting Operations In C

The bit shifting that occurs in C is known as a logical shift because the bits that are shifted out are lost, rather than circulating to the front of the variable. To execute a shift in C, we use << or >> for left and right shifts respectively. The programmer may specify how many bits to shift by providing a number, for example:

a = b << 2;

This example will shift 'b' two bits to the left, and assign that value to 'a'. To shift to the right instead, use the >> symbol. What will happen with the left shift is, if our variable contained 2 in it:

0010 (2)
// We execute the shift two places and notice the 1 moved left two spots
1000 (8)

Bitwise operations on older systems were faster than addition and subtraction and much faster than multiplication and division; however, on today's architectures bitwise operations are essentially as efficient, with multiplication and division still requiring more operations.
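As a quick, self-contained illustration of the operations described above, the following small C program (my own sketch, not part of the original article) prints the results of each operator for the sample values 5 and 3 used in the text:

#include <stdio.h>

int main(void)
{
    unsigned int a = 5;   /* 0101 */
    unsigned int b = 3;   /* 0011 */

    printf("a & b  = %u\n", a & b);      /* AND: 0001 -> 1  */
    printf("a | b  = %u\n", a | b);      /* OR : 0111 -> 7  */
    printf("a ^ b  = %u\n", a ^ b);      /* XOR: 0110 -> 6  */
    printf("~b     = %u\n", ~b & 0xFu);  /* NOT, masked to 4 bits: 1100 -> 12 */
    printf("b << 2 = %u\n", b << 2);     /* left shift : 1100 -> 12 */
    printf("a >> 1 = %u\n", a >> 1);     /* right shift: 0010 -> 2  */

    a &= b;                              /* compound AND assignment */
    printf("a &= b -> %u\n", a);         /* 1 */
    return 0;
}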
{"url":"http://www.lainoox.com/bitwise-operations-c/","timestamp":"2014-04-19T14:29:56Z","content_type":null,"content_length":"53744","record_id":"<urn:uuid:d45b4f6d-358c-4fea-ac24-ff9f90fd08de>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: Pythagoras' Revenge: A Mathematical Mystery (Arturo Sangalli ) Freelance science journalist Sangalli has written a book which presents some historical information about Pythagoras and his beliefs in the form of a novel of the detail driven conspiracy theory adventure genre that is so popular these days. The plot involves a large number of characters (including an academic mathematician going through a midlife crisis, his sister who works as a computer security consultant, and a world famous mathematician) who get caught up in a neo-Pythagorean cult's attempt to discover the modern reincarnation of their hero. You essentially have to accept the notion of reincarnation and fate (because of all of the unlikely coincidences) to get caught up in the plot, which was difficult for a skeptic like me. But, if the purpose of the plot was merely to get me to absorb the facts about history, philosophy and math that the author was embedding in the novel, then it was successful. Mathematical topics discussed include the notion of proof itself, Platonic solids, what it means to be ``random'', and (of course) the Pythagorean Theorem. Yet, the main point seems to be to contrast Pythagoras' notion that ``all is number'' with more modern discoveries of Gödel (whose name is not mentioned in the book), Turing and Chaitin which indicate the limitations of mathematics. (Personally, I do not see these ideas as being as much in conflict as the book suggests.) At times, the author seems to lose his literary voice and collapses into journalistic writing. For instance, when Chapter 8 begins with a present tense description of one of Pythagoras' contemporaries waking up from a nightmare, I expected the chapter to continue as a bit of historical fiction that would reveal events through the eyes of someone from the past. However, it soon devolves into saying things like ``How the Pythagoreans...discovered the existence of incommensurable lengths has been the subject of much speculation." I agree with Doron Zeilberger's assessment on the back cover that this was a ``fun read''. As compared with books like A Certain Ambiguity and Pythagorean Crimes, this one is rather light on mathematics, despite some mathematical details contained in the appendices at the end. Rather than feeling enlightened about math, I feel instead as if I've learned more about Pythagoras as a philosopher and religious figure from this entertaining book.
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf713","timestamp":"2014-04-21T04:32:04Z","content_type":null,"content_length":"11359","record_id":"<urn:uuid:b8366ec1-f5c7-4388-be3d-0af19edb7429>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Gravity

This series consists of talks in the area of Quantum Gravity.

It has recently been uncovered that the intertwiner space for LQG carries a natural representation of the U(N) unitary group. I will describe this U(N) action in detail and show how it can be used to compute the LQG black hole entropy, to define coherent intertwiner states and to reformulate the LQG dynamics in new terms.

15 years ago Ishibashi, Kawai and collaborators developed non-critical string field theory, starting with the formalism of dynamical triangulations. The same construction can be repeated using causal dynamical triangulations, and in this case one can actually sum explicitly over all genera. The theory can be viewed as stochastic quantization of space, proper (world sheet) time playing the role of stochastic time.

Path integral formulations for gauge theories must start from the canonical formulation in order to obtain the correct measure. A possible avenue to derive it is to start from the reduced phase space formulation. We review this rather involved procedure in full generality. Moreover, we demonstrate that the reduced phase space path integral formulation formally agrees with Dirac's operator constraint quantisation and, more specifically, with the Master constraint quantisation for first class constraints.

The asymptotic formula for the Ponzano-Regge model amplitude is given for non-tardis triangulations of handlebodies in the limit of large boundary spins. The formula produces a sum over all possible immersions of the boundary triangulation in three dimensional Euclidean space weighted by the cosine of the Regge action evaluated on these immersions. Furthermore the asymptotic scaling registers the existence of flexible immersions.

The scaling analysis in the large spin limit of Feynman amplitudes for the Bosonic colored group field theory is considered in any dimension starting with dimension 4. By an explicit integration of two colors, we show that the model is positive. This formulation could be useful for the constructive analysis of this type of model.

After implementing an effective minimal length, we will present a new class of spacetimes, describing both neutral and charged black holes. As a result, we will improve the conventional Schwarzschild and Reissner-Nordstroem spacetimes, smearing out their singularities at the origin. On the thermodynamic side, we will show how the new black holes admit a maximum temperature, followed by the "SCRAM phase", a thermodynamically stable shutdown, characterized by a positive black hole heat capacity.

The relation between loop quantum gravity (LQG) and ordinary quantum field theory (QFT) on a fixed background spacetime still bears many obstacles. When looking at LQG and ordinary QFT from a mathematical perspective it turns out that the two frameworks are rather different: Although LQG is a true continuum theory its Hilbert space is defined in terms of certain embedded graphs which are labeled by irreducible representations of SU(2).
The natural arena for ordinary QFT, on the other hand, is a Fock space which strongly uses the metric properties of the underlying continuum.

How sure are you that spacetime is continuous? One approach to quantum gravity, causal set theory, models spacetime as a discrete structure: a causal set. This talk begins with a brief introduction to causal sets, then describes a new approach to modelling a quantum scalar field on a causal set. We obtain the Feynman propagator for the field by a novel procedure starting with the Pauli-Jordan commutation function. The candidate Feynman propagator is shown to agree with the continuum result. This model opens the door to physical predictions for scalar matter on a causal set.

We review some recent results on tachyon nonperturbative solutions of the nonlocal, lowest-level, effective action of string field theory. It is shown how nonlocality is encoded in a spacetime diffusion equation and how the latter emerges from the symmetries of the full, background-independent theory.
{"url":"http://www.perimeterinstitute.ca/video-library/collection/quantum-gravity?page=9","timestamp":"2014-04-17T22:47:56Z","content_type":null,"content_length":"65305","record_id":"<urn:uuid:263563ee-3b9d-4a7f-8831-6209d07c7c8a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Nested quantifiers and determining truth value

January 24th 2010, 05:04 PM #1

I must be misunderstanding something... Here's the issue. I'm going to use A and E as my universal and existential quantifiers because I don't know how to type upside down A's and backwards E's.

Q(x,y) is "x + y = x - y"; the domain is all integers.

For every integer x there is an integer y such that Q(x,y).

My understanding is... this is true when, for every x, there is a y for which Q(x,y) is true. Looking at x + y = x - y... if y=0, then x=x. So this whole thing should be true. Correct?

BUT... my other understanding is that, when there is an x such that Q(x,y) is false for every y, then the whole thing is false. Is this correct? So if you let x=0... then y=-y, which is always false. So then the whole thing is false.

It can't be both! What am I not getting right?

Oops... never mind... I'm dumb. For x+y=x-y, setting x=0 is not false for the case y=0. And there is no x such that Q(x,y) is false for every y. So the entire logic statement cannot be false. I think I got it. However, if I'm still stupid, please let me know! Thanks!
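To make the poster's resolution explicit, here is the statement written with standard quantifiers and the one-line verification (my own summary of the reasoning above, not part of the original thread):

% Claim: \forall x \in \mathbb{Z}\;\exists y \in \mathbb{Z}:\; x + y = x - y.
% Verification: given any integer x, take y = 0; then x + 0 = x - 0 = x.
\[
  \forall x\,\exists y\; Q(x,y) \quad\text{holds, since } y = 0 \text{ gives } x + 0 = x - 0 .
\]
% The statement would be false only if some x had Q(x,y) false for every y;
% no such x exists, because y = 0 works for every x.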
{"url":"http://mathhelpforum.com/discrete-math/125295-nest-quantifiers-determining-truth-value.html","timestamp":"2014-04-19T20:26:03Z","content_type":null,"content_length":"31616","record_id":"<urn:uuid:4903b475-0382-4fa5-82cb-93a8d2491ec0>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Area, Volume, and Arc Length

True or False

1. Let R be the finite region bounded by the graphs y = x^2, y = 6 – x, and x = 4. Which picture shows R as the shaded region?

2. Which of the following integrals give the area of the shaded region?

3. Let R be the region bounded by the graphs of y = x and … R?

4. Let R be the region graphed below. Which of the following integral expressions give the area of R?

5. Which of the following integrals gives the area of a right triangle whose non-hypotenuse sides have lengths 7 and 9?

6. Which of the following is the area of the unit circle?

7. A square has a diagonal of length 10. Which of the following expressions does NOT give the area of the square?

8. Below is a graph of the polar function r = f(θ). The area of the shaded region is

9. What is the area of one petal of the graph of the polar function r = 1 + cos(3θ)?

10. What is the area of the shaded region?
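The answer choices for these questions were images that did not survive, so for reference here are worked values for two of the questions above; these are my own calculations, not the quiz's answer key:

% Question 6: the area of the unit circle, written as an integral.
\[
  4\int_0^1 \sqrt{1-x^2}\,dx \;=\; 4\cdot\frac{\pi}{4} \;=\; \pi .
\]
% Question 7: a square with diagonal d = 10 has side s = d/\sqrt{2}, so its area is
\[
  s^2 \;=\; \frac{d^2}{2} \;=\; \frac{10^2}{2} \;=\; 50 .
\]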
{"url":"http://www.shmoop.com/area-volume-arc-length/quiz-1-true-false.html","timestamp":"2014-04-19T00:11:55Z","content_type":null,"content_length":"40756","record_id":"<urn:uuid:635ccc9a-d3cc-4d14-8253-c333e6e4d1ce>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Cave Creek Precalculus Tutor ...I also help students with SAT Prep, and AIMS prepI've been a teacher for 30 years. I'm good at what I do. I've helped prepare students for the ACT test for the past 5 years. 15 Subjects: including precalculus, English, geometry, statistics ...Also, having being born and raised in Tokyo, I am able to teach proper Japanese. I have all the necessary qualifications to be an effective tutor and will be able to guide you through an enjoyable learning process. Thank you for reading my profile. 12 Subjects: including precalculus, physics, geometry, calculus ...I am now an anatomy professor at my Alma mater. My expertise is condensing the information and helping my students understand what it all means and why it is important. I took physiology in undergraduate school and then again in medical school. 14 Subjects: including precalculus, chemistry, physics, calculus ...I currently am an Information Systems Supervisor, and an adjunct Mathematics and Computer Science instructor at a college. I have a diverse background of systems administration, systems integration, computer programming and database programming. I have had several college-level courses in Fortran using both VAX/VMS and IBM Fortran compilers. 23 Subjects: including precalculus, geometry, statistics, Pascal ...I will work hard to help you or your child develop the skills and confidence needed to tackle difficult subjects. I have dual degrees in Molecular and Cellular Biology and Near Eastern Studies from the University of Arizona, and am currently pursuing graduate school. I specialize in math and science. 15 Subjects: including precalculus, chemistry, writing, geometry
{"url":"http://www.purplemath.com/cave_creek_precalculus_tutors.php","timestamp":"2014-04-18T00:56:44Z","content_type":null,"content_length":"23950","record_id":"<urn:uuid:ddd32e3b-9df3-488b-87cb-4c67a0b12417>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Rosslyn, VA Prealgebra Tutor Find a Rosslyn, VA Prealgebra Tutor ...I like to get feedback from my students often in order to improve their experience with tutoring continuously and to be able to cater to their specific needs. I have gotten tremendous satisfaction from seeing my students' grades improve and from hearing positive feedback from them (including con... 40 Subjects: including prealgebra, English, reading, chemistry ...I have several years of experience tutoring elementary level math from kindergarten up to sixth grade. I am very patient with students who are struggling with math, and I can help students become less anxious about the subject and more confident in their skills. I have several years experience tutoring elementary students. 46 Subjects: including prealgebra, English, Spanish, algebra 1 ...I am also a great test taker and I can pass on some of my strategies to my students. I have taken a number of classes on digital signal processing which are rooted in discrete math. These are required for the Electrical Engineering diploma. 23 Subjects: including prealgebra, Spanish, calculus, statistics My name is Bekah and I graduated from BYU with a degree in Math Education. While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice a week, and working one-on-one with students. After graduating, I taught high school math for one year... 10 Subjects: including prealgebra, calculus, geometry, algebra 1 ...All of my students received a grade of B or higher, which I am quite proud of. While in high school, I also tutored classmates in various math courses. I am an active volunteer in the community, working with children to teach them about science concepts. 25 Subjects: including prealgebra, chemistry, physics, calculus Related Rosslyn, VA Tutors Rosslyn, VA Accounting Tutors Rosslyn, VA ACT Tutors Rosslyn, VA Algebra Tutors Rosslyn, VA Algebra 2 Tutors Rosslyn, VA Calculus Tutors Rosslyn, VA Geometry Tutors Rosslyn, VA Math Tutors Rosslyn, VA Prealgebra Tutors Rosslyn, VA Precalculus Tutors Rosslyn, VA SAT Tutors Rosslyn, VA SAT Math Tutors Rosslyn, VA Science Tutors Rosslyn, VA Statistics Tutors Rosslyn, VA Trigonometry Tutors Nearby Cities With prealgebra Tutor Arlington, VA prealgebra Tutors Avondale, MD prealgebra Tutors Baileys Crossroads, VA prealgebra Tutors Cameron Station, VA prealgebra Tutors Chillum, MD prealgebra Tutors Crystal City, VA prealgebra Tutors Fort Hunt, VA prealgebra Tutors Fort Myer, VA prealgebra Tutors Landover, MD prealgebra Tutors Lewisdale, MD prealgebra Tutors Lincolnia, VA prealgebra Tutors North Springfield, VA prealgebra Tutors Pimmit, VA prealgebra Tutors Seven Corners, VA prealgebra Tutors Tysons Corner, VA prealgebra Tutors
{"url":"http://www.purplemath.com/rosslyn_va_prealgebra_tutors.php","timestamp":"2014-04-16T19:28:51Z","content_type":null,"content_length":"24248","record_id":"<urn:uuid:4d4c63a2-860f-4d0b-ba80-cd4ea2164ee1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Gabriel Cramer Gabriel Cramer certainly moved rapidly through his education in Geneva, and in 1722 while he was still only 18 years old, he was awarded a doctorate having submitted a thesis on the theory of sound. Two years later he was competing for the chair of philosophy at the Académie de Clavin in Geneva. He ended up sharing a chair of mathematics there with Calandrini. They divided up the mathematics courses each would teach - Cramer taught geometry and mechanics while Calandrini taught algebra and astronomy. While there, Cramer proposed a major innovation, which the Academy accepted, which was that he taught his courses in French instead of Latin, the traditional language of scholars at that time. As part of his appointment, Cramer did lots of travelling. He visited leading mathematicians in many different cities and countries of Europe. He headed straight away for Basel where many leading mathematicians were working, spending five months working with Johann Bernoulli, and also Euler who soon afterwards headed off to St. Petersburg to be with Daniel Bernoulli. Cramer then visited England where he met Halley, de Moivre, Stirling, and other mathematicians. From England, Cramer made his way to Paris where he had discussions with Fontenelle, Maupertuis, Buffon, Clairaut, and others. His discussions with these mathematicians and the continuing correspondence with them after he returned to Geneva had a big influence on Cramer's work. Back in Geneva in 1729, Cramer was at work on an entry for the prize set by the Paris Academy for 1730. Cramer's entry was judged as the second best of those received by the Academy, the prize being won by Johann Bernoulli. In 1734, Calandrini was appointed to the chair of philosophy and Cramer became the sole holder of the Chair of Mathematics. Cramer lived a busy life, for in addition to his teaching and correspondence with many mathematicians, he produced articles of considerable interest, although these are not of the importance of the articles written by most of the top mathematicians with whom he corresponded. He published articles covering a wide range of subjects including the study of geometric problems, the history of mathematics, philosophy, and the date of Easter. He published an article on the aurora borealis, and one on law where he applied probability to demonstrate the significance of having independent testimony from 2 or 3 witnesses rather than from a single witness. His work was not confined to academic areas for he was also interested in local government and served as a member of the Council of Two Hundred in 1734 and of the Council of Seventy in 1749. His work on these councils involved him using his broad mathematical and scientific knowledge, for he undertook tasks involving artillery, fortification, reconstruction of buildings, excavations, and he acted as an archivist. Cramer gained a reputation as a skilled editor. Johann Bernoulli's Complete Works was published by Cramer in four volumes in 1742. It shows how much respect that Bernoulli had for Cramer that he insisted that no other edition of his works be published by any editor other than Cramer. In 1745, jointly with Johann Castillon, Cramer published the correspondence between Johann Bernoulli and Leibniz. Cramer also edited a 5 volume work by Christian Wolff. Cramer's most famous book was Introduction à l'analyse des lignes courbes algébraique. It is a work which Cramer modelled on Newton's memoir on cubic curves. 
After an introductory chapter in which types of curves are defined and techniques for drawing their graphs are discussed, Cramer goes on to a second chapter in which transformations to simplify curves are studied. The third chapter looks at a classification of curves and it is in this chapter that the now famous Cramer's rule is given. After giving the number of arbitrary constants in an equation of degree n as n(n+3)/2, he deduces that an equation of degree n can be made to pass through n(n+3)/2 points. Taking n = 2, he gives an example of finding the 5 constants involved in making an equation of degree 2 pass through 5 points. This leads to 5 linear equations in 5 unknowns and he refers the reader to an appendix containing Cramer's rule for their solution. We should remark, of course, that Cramer was certainly not the first to give this rule. Cramer's name has sometimes been attached to another problem, namely the Castillon-Cramer problem. This problem, proposed by Cramer to Castillon, asked how to inscribe a triangle in a circle so that its sides pass through three given points. Castillon solved the problem 25 years after Cramer's death, and the problem went on to various generalisations about inscribed polygons in a conic section. Cramer is also known for Cramer's paradox.
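For readers unfamiliar with it, the rule referred to above can be stated in modern notation as follows (a standard formulation, not a quotation from Cramer's Introduction):

% Cramer's rule: for a system Ax = b with A an n x n matrix and det(A) != 0,
% the unique solution has components
\[
  x_i \;=\; \frac{\det(A_i)}{\det(A)}, \qquad i = 1,\dots,n,
\]
% where A_i is the matrix A with its i-th column replaced by the vector b.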
{"url":"http://www2.stetson.edu/~efriedma/periodictable/html/Cm.html","timestamp":"2014-04-18T15:56:14Z","content_type":null,"content_length":"5343","record_id":"<urn:uuid:6e259bdc-580d-45c2-ac15-96362e57b563>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
NSDUH 2002 SoF - Appendix B - Description of the Eleventh Equation
Cap pr, a function of d, a, and t is the ratio of two quantities. The numerator is the weighted sum across individuals i, denoted by w sub i, of y sub i of d, a, and t. The denominator is the weighted sum, w sub i, across individuals i of x sub i of a and t.
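The verbal description above appears to be the accessible-text rendering of a displayed formula; written out (with the symbol names inferred from that description, so the exact notation is an assumption), it reads:

\[
  P_r(d,a,t) \;=\; \frac{\displaystyle\sum_i w_i\, y_i(d,a,t)}{\displaystyle\sum_i w_i\, x_i(a,t)},
\]
% where w_i is the weight for individual i and both sums run over individuals.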
{"url":"http://samhsa.gov/data/nhsda/2k2nsduh/results/DfigBeq11.htm","timestamp":"2014-04-20T05:41:56Z","content_type":null,"content_length":"1719","record_id":"<urn:uuid:96600dd8-bf96-4c96-a766-c53e56e51b53>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
If your salary of 33,000 this year was 10% more than last year, what was last year's salary?

If last year's salary was 100, then this year it will be 110. So when this year's salary is 110, last year it was 100; then if this year's salary is 33000, last year it was 33000 × 100/110 = 300 × 100 = 30000.

----
Yet another way: Suppose your salary last year was "A". 10% of that is 0.10A, and your last year's salary plus 10% is A + 0.10A = (1 + 0.1)A = 1.1A. Since 33,000 is 10% more than last year, 1.1A = 33000. Divide both sides by 1.1 to get mush's solution.
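A quick check of the answer (my own arithmetic, consistent with both solutions above):

\[
  30000 \times 1.10 \;=\; 33000, \qquad\text{so last year's salary was } 30000 .
\]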
{"url":"http://mathhelpforum.com/algebra/71456-if-your-salary-33-000-year-10-more-than-last-year-what-last-years.html","timestamp":"2014-04-20T17:52:01Z","content_type":null,"content_length":"38988","record_id":"<urn:uuid:f32404b2-0444-4217-b3b5-2605ba7518a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
year 13 algebra printable worksheet Author Message Xoywaad Jahempni Posted: Friday 29th of Dec 20:46 Hey friends , I have just completed one week of my high school , and am getting a bit upset about my year 13 algebra printable worksheet home work. I just don’t seem to grasp the topics. How can one expect me to do my homework then? Please help me. From: San Diego Jahm Xjardx Posted: Saturday 30th of Dec 09:12 Hey dude, I was in a similar situation a month ago and my friend suggested me to have a look at this site, http://www.algebra-equation.com/solving-multiple-step-equations.html. Algebrator was very useful since it gave all the basics that I needed to solve my assignment in Algebra 1. Just have a look at it and let me know if you need further information on Algebrator so that I can offer assistance on Algebrator based on the experience that I currently posses. From: Odense, Denmark, EU Dxi_Sysdech Posted: Saturday 30th of Dec 14:01 I didn’t encounter that Algebrator program yet but I heard from my friends that it really does assist in answering math problems. Since then, I noticed that my peers don’t really have troubles answering some of the problems in class. It might really have been efficient in improving their solving abilities in algebra. I can’t wait to use it someday because I think it can be very effective and help me have a good grade in algebra. From: Right here, can't you see me? Voumdaim of Posted: Sunday 31st of Dec 13:16 Obpnis A truly piece of math software is Algebrator. Even I faced similar difficulties while solving difference of cubes, radical expressions and multiplying matrices. Just by typing in the problem from homework and clicking on Solve – and step by step solution to my math homework would be ready. I have used it through several math classes - Intermediate algebra, Pre Algebra and Remedial Algebra. I highly recommend the program. From: SF Bay Area, CA, USA phbeka Posted: Tuesday 02nd of Jan 12:09 Amazing! This really sounds great. Do you know where I can obtain the software ? From: UK MoonBuggy Posted: Tuesday 02nd of Jan 20:52 You can find out all about it at http://www.algebra-equation.com/solving-quadratic-equation.html. It is really the best algebra help program available and is available at a very reasonable price. From: Leeds, UK
{"url":"http://www.algebra-equation.com/solving-algebra-equation/like-denominators/year-13-algebra-printable.html","timestamp":"2014-04-18T10:33:28Z","content_type":null,"content_length":"25206","record_id":"<urn:uuid:cd2ba9b6-29dd-439b-9bae-a825ed2b6827>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Wayne, NJ Algebra 1 Tutor Find a Wayne, NJ Algebra 1 Tutor ...I am going for training in the new Common Core curriculum. I have a B.A in Economics from Queens College and an MBA in accounting from the University of Massachusetts. I have 20 years financial experience in financial services. 20 Subjects: including algebra 1, geometry, economics, finance ...The different properties of the Lines, polygons and other elements are essential to understand advance skills such as trigonometry and calculus. Solving equations with trigonometric identities. The trigonometric functions correspond to the lengths of various line segments in a right triangle. 13 Subjects: including algebra 1, English, algebra 2, trigonometry ...Properties of logaritmics. Arithmic and geometric sequences and series. Counting and probabilities. 4 Subjects: including algebra 1, algebra 2, elementary math, prealgebra ...As for my tutoring style, the needs of the student play a large role in that determination. I am not a fan of punishment, and prefer to stay positive with my interactions, and I find that brings the best results. We can work together to agree upon specific goals, identifying areas on which we will focus, and I will provide weekly feedback. 17 Subjects: including algebra 1, reading, writing, geometry ...Please note that unless you have already submitted a credit card number to Wyzant, I will not be able to see your phone number or email address in your initial message, and the Wyzant system will be our only available mode of contact. Thank you for your consideration. Algebra 1 is a textbook title or the name of a course, but it is not a subject. 25 Subjects: including algebra 1, chemistry, calculus, physics Related Wayne, NJ Tutors Wayne, NJ Accounting Tutors Wayne, NJ ACT Tutors Wayne, NJ Algebra Tutors Wayne, NJ Algebra 2 Tutors Wayne, NJ Calculus Tutors Wayne, NJ Geometry Tutors Wayne, NJ Math Tutors Wayne, NJ Prealgebra Tutors Wayne, NJ Precalculus Tutors Wayne, NJ SAT Tutors Wayne, NJ SAT Math Tutors Wayne, NJ Science Tutors Wayne, NJ Statistics Tutors Wayne, NJ Trigonometry Tutors Nearby Cities With algebra 1 Tutor Clifton, NJ algebra 1 Tutors Fair Lawn algebra 1 Tutors Fairfield, NJ algebra 1 Tutors Fairlawn, NJ algebra 1 Tutors Garfield, NJ algebra 1 Tutors Haledon algebra 1 Tutors Hawthorne, NJ algebra 1 Tutors Little Falls, NJ algebra 1 Tutors North Haledon, NJ algebra 1 Tutors Passaic algebra 1 Tutors Passaic Park, NJ algebra 1 Tutors Paterson, NJ algebra 1 Tutors Preakness, NJ algebra 1 Tutors Totowa algebra 1 Tutors Woodland Park, NJ algebra 1 Tutors
{"url":"http://www.purplemath.com/wayne_nj_algebra_1_tutors.php","timestamp":"2014-04-16T16:19:16Z","content_type":null,"content_length":"23845","record_id":"<urn:uuid:14c7b88d-bade-4c26-ae76-0dbcc92d6603>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Half Moon Bay Calculus Tutors ...It can be an interesting and sometimes exciting change in Mathematics! Geometry is a study of our spacial world that surrounds us. It builds a foundation of logic and proofs that is a change from the fundamentals of Algebra. 13 Subjects: including calculus, statistics, algebra 2, geometry ...I'm an enthusiastic teacher, who loves helping students to succeed to the best of their ability, and loves all facets of mathematics and physics. I will teach all levels from middle school mathematics up to calculus. I have extensive experience teaching international baccalaureate mathematics. 11 Subjects: including calculus, chemistry, physics, statistics ...I also greatly appreciate feedback so I can continually improve as a tutor. I have a somewhat busy schedule, though I will be as flexible as possible with time. I currently have a 4-hour cancellation policy, though I will do my best to work with you to find a system that works best. 27 Subjects: including calculus, chemistry, physics, geometry ...My experience in education includes >1000 hours of private tutoring over the course of 3 years in addition to classroom level student teaching at Mountain View High School and Del Mar High School. As a private tutor, I've had the privilege of guiding a wide variety of students through every l... 14 Subjects: including calculus, chemistry, physics, algebra 1 ...I have designed curricula for several math courses, ranging from algebra to calculus, and a number of astronomy-related courses specifically targeting younger students. I also have experience working with pre-written curricula provided to me by supervising organizations, such as Mad Science (in ... 29 Subjects: including calculus, English, physics, French
{"url":"http://www.algebrahelp.com/Half_Moon_Bay_calculus_tutors.jsp","timestamp":"2014-04-18T00:19:43Z","content_type":null,"content_length":"25229","record_id":"<urn:uuid:465fbc92-b0c0-4e39-bd79-d56289f25e6e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
probability puzzle - selecting a person up vote 4 down vote favorite there are n people on a round table. one of the them is the head and he plans to make another person from the rest the new head. he has a coin. he flips the coin. if he gets a head he gives the coin to the person to his left and if he gets a tail he gives the coin to the person to his right. anyone who receives the coin can no longer become the head. the person who receives the coin repeats the similar procedure. when there is only one person remaining who has not received the coin, he becomes the head. what is the probability of each of the n-1 people becoming a head. for n = 3, 4 we get a uniform probability distribution. how to solve it for higher n? i have struggled with this for some time but could not solve yet. it seems you are having a tough time solving this! i could not find the solution on google. approach that i was using: let ai be the probability of that the ith person is not selected. summation i:1 to n (1-ai) = 1; a1 = 0; we need another equation to use the fact that the probability of getting a head on a coin flip is 1/2. tried Bayes etc. could not get it. try induction anything else you might like I don't know if this question is proper here. I think you might want to look at random walk. See en.wikipedia.org/wiki/Random_walk – Tran Chieu Minh Jan 16 '10 at 14:16 friends at stackoverflow told me it is not programming related. it is mathematics related. right? – Rohit Banga Jan 16 '10 at 14:46 3 BTW, saying "it seems you are having a hard time solving this" is being a bit presumptuous, unless you have some way of knowing how many people are actually trying to answer your question – Yemon Choi Jan 16 '10 at 15:34 That looks like part of a conversation with someone else which was accidentally left in. – Douglas Zare Jan 16 '10 at 17:29 4 This is an example of a result I put in the category of "hidden independence and uniformity." For more such problems, see math.mit.edu/~rstan/s34/indep.pdf. – Richard Stanley Apr 10 '10 at 0:30 show 1 more comment 5 Answers active oldest votes The probability distribution is uniform. Each person P has two neighbors, R and L. One is eliminated before the other, say R. Then the probability P is selected is the conditional up vote 14 probability that L is eliminated before P, with the coin starting at R. By symmetry, this is the same for each person. down vote not very clear. how is it the same for each person. – Rohit Banga Jan 16 '10 at 14:47 For each person P, it's the same random walk problem: One neighbor is eliminated. Will the coin go around the table to the other neighbor before getting to P? If so, P is selected. If not, P is eliminated. It's also a standard exercise that the probability is 1/(n-1) as discussed <a href="mathoverflow.net/questions/7004/…;. – Douglas Zare Jan 16 '10 at 14:54 1 In fact, this is clearly not right for n=4: heads-heads selects person 0, and tails-tails person 2, but person 1 is selected for either heads-tails or tails-heads. Thus, the probability is not uniform. – Neil Apr 13 '10 at 22:13 1 @Neil If there are 4 people at the table, then when the coin is passed left and then right, the second pass does not eliminate a person. The coin is back in the original position, and there are two people who have not received the coin. The player to the right of the head has probability 1/3 of being the last not to hold the coin, and the player opposite the head now has a 2/3 chance to be the last. 
At the start, the probability distribution is uniform over all people except the head. – Douglas Zare Apr 13 '10 at 22:48 2 Oh, I assumed that people get up from the table when they can no longer be head of the table... Sorry. – Neil Apr 14 '10 at 0:23 show 1 more comment The following argument is from A note on the last new vertex visited by a random walk, L. Lovasz, P. Winkler, Journal of Graph Theory, Vol. 17, No. 5, 1993. Each person P has two neighbors R and L. Let $p$ be the probability that starting from node R you visit P last. By symmetry, $p$ is also the probability that starting from node L you visit P last. Also by symmetry, $p$ does not depend on the starting person $P$ - it only depends on $n$. up vote 5 Claim: If your starting person is not P, then $p$ is the probability that P is visited last. down vote Proof: You will get to R or L before P with probability 1. The probability that $P$ is then visited last is $p$. This proves that the probability of P being last does not depend on the person P, so it must be $1/(n-1)$. add comment It's uniform for all n. First consider the person on the immediate right of the head. The probability he gets it last can be calculated using random walk theory as 1/(n-1). The same argument works for the person on the immediate left of the head. Now consider someone in an arbitrary position (not next to the head) He can't receive the coin before both the person on his right and before the person on his left. call him k , the up vote 3 person on his left k-1 and the person on his right k+1. Then P[k is last]= P[K+1 before k-1 before k]+P[k-1 before k+1 before k] =P[k+1 before k-1]/(n-1)+P[k-1 before k+1]/(n-1) this is down vote because once k-1 or K+1 gets it were in the position on the person immediately to the right on the head. add comment This is to answer a question raised by unknown: in the asymmetric case where the coin moves clockwise with probability $p$ and counterclockwise with probability $1-p$, the new head is located $k$ seats away from the former head, counting seats clockwise, with probability $r^k(r-1)/(r^n-r)$, for every $1\le k\le n-1$, where $r=(1-p)/p$. (When $p\to1/2$, $r\to1$ and the up vote 2 limit is indeed uniform, that is, equal to $1/(n-1)$ for every $k$.) down vote add comment I have written an article about this game, and solved it numerically for the case of 10 person. I computed that the probability is the same for each of the 9 person. In short, I have defined the following: Let P(n,i,j,k) be the probability of at the n-th round, the k person is having the coin, while the people from i counting clockwise to k have all received the coin before. And 3 recurrence equations are formulated: $P (n+1,i,j,k)= \frac{1}{2} P (n,i,j,k-1)+ \frac{1}{2} P (n,i,j,k+1)$ ........(1) up vote -1 $P (n+1,i,j,j)=\frac{1}{2} P (n,i,j-1,j-1)+ \frac{1}{2} P(n,i,j,j-1)$ ........(2) down vote For details, please refer to: Solving a probability game using recurrence equations and python In this article, a(n,i) which denotes the probability of the i-th person being the head in the n-th round is found and plotted out as well. You can make explicit formulas involving a single summation for your a(n,i). The number of walks which stay within an interval can be counted, and you want the walks from the origin to L or R which do stay inside the path excluding i, but which do not stay inside the path excluding i and the other neighbor. 
See mathoverflow.net/questions/16892/… – Douglas Zare Apr 9 '10 at 18:43 @Douglas: I read your comment of random walk before giving the answer. I gave this answer, since unknown (google) asked for any approach for this problem. I am just giving a general approach for a complicated probability problem. Sorry that I can't solve the equations analytically. Would you mind giving me some guide in solving these kinds of partial difference equations? I don't know if generating function works in this case, due to the circle index. – Ross Tang Apr 17 '10 at 23:58 add comment Not the answer you're looking for? Browse other questions tagged pr.probability or ask your own question.
{"url":"http://mathoverflow.net/questions/11984/probability-puzzle-selecting-a-person?sort=oldest","timestamp":"2014-04-19T12:07:28Z","content_type":null,"content_length":"82566","record_id":"<urn:uuid:308c5daa-c91e-4e40-abb1-e8ae8230baf1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
log differentiation May 25th 2010, 10:07 AM #1 Super Member Sep 2008 log differentiation Differentiate with respect to 'x'. $log_{4}(x^{2})$ I know that $\frac{d}{dx}(log_a{x}) = \frac{1}{lna} \times \frac{1}{x}$ so, based on this formula, I got the answer: $\frac{1}{ln4} \times \frac{1}{x^{2}}$ But this is not the correct answer, can someone please show me how to correctly differentiate this function? Thank you! Differentiate with respect to 'x'. $log_{4}(x^{2})$ I know that $\frac{d}{dx}(log_a{x}) = \frac{1}{lna} \times \frac{1}{x}$ so, based on this formula, I got the answer: $\frac{1}{ln4} \times \frac{1}{x^{2}}$ But this is not the correct answer, can someone please show me how to correctly differentiate this function? Thank you! Hint: $\log_4(x^2)=2\log_4x$. Can you continue? EDIT: If you continue doing it your way, you have to apply chain rule to the $x^2$ term. Differentiate with respect to 'x'. $log_{4}(x^{2})$ I know that $\frac{d}{dx}(log_a{x}) = \frac{1}{lna} \times \frac{1}{x}$ so, based on this formula, I got the answer: $\frac{1}{ln4} \times \frac{1}{x^{2}}$ But this is not the correct answer, can someone please show me how to correctly differentiate this function? Thank you! You need to apply the chain rule: $\frac{d}{dx}\left(f(g(x))\right) = f'(g(x))\cdot g'(x)$ In this case $f(x) = \log_4(x)$ and $g(x)=x^2$. So $\left(\log_4(x^2)\right)' = \frac{1}{\ln(4)}\frac{1}{x^2}\cdot 2x = \frac{2}{x\cdot \ln(4)}$ I think so.... $2 \times \frac{1}{ln4} \times \frac{1}{x}$ $\frac{2}{ln4} \times \frac{1}{x}$ $\frac{1}{ln2} \times \frac{1}{x}$ Are you 'allowed' to factor out the '2' like this? Even though there is ln 'attached' to 4? Is this method correct? But is my method correct? As I simply used the formula in my book, I did not use the chain rule. As a matter of learning, i think it's important to identify that the chain rule is still used! But it's the fact that the derivative of x is equal to 1, which allows us to neglect directly thinking about the chain rule. This is an obvious result but I think for those having trouble, it might be important! May 25th 2010, 10:09 AM #2 May 25th 2010, 10:10 AM #3 May 25th 2010, 10:14 AM #4 Super Member Sep 2008 May 25th 2010, 10:15 AM #5 May 25th 2010, 10:16 AM #6 May 25th 2010, 10:18 AM #7 Super Member Sep 2008 May 25th 2010, 10:19 AM #8 May 25th 2010, 10:21 AM #9 May 25th 2010, 11:53 AM #10
{"url":"http://mathhelpforum.com/calculus/146372-log-differentiation.html","timestamp":"2014-04-16T06:11:30Z","content_type":null,"content_length":"67210","record_id":"<urn:uuid:4e2e9302-8dc1-46e9-8254-47021e8a1542>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Colin Morningstar Associate Professor Ph.D., University of Toronto Email: colin_morningstar@cmu.edu Phone: 1-412-268-2728 FAX: 1-412-681-0648 My research interests primarily concern nonperturbative phenomena in quantum field theories, with particular emphasis on the study of hadron formation and confinement in quantum chromodynamics using computer simulations of quarks and gluons. Selected talks □ Progress in Excited Hadron States in Lattice QCD &nbsp &nbsp Colloquium at Florida Int. University, Miami, November 20, 2009. □ Exploring Excited Hadrons &nbsp &nbsp Plenary talk at the 26th International Symposium on Lattice Field &nbsp &nbsp Theory (Lattice 2008), Williamsburg, Virginia, 14-20 Jul 2008. □ The Monte Carlo Method in Quantum Field Theory &nbsp &nbsp Lectures given at the Hampton University Graduate Studies at JLab, June 12-16, 2006 □ The Monte Carlo method in Quantum Mechanics &nbsp &nbsp CMU Physics Undergraduate Colloquium, April 20, 2004 □ Bayesian curve fitting for lattice gauge theorists (talk) &nbsp &nbsp (write up) &nbsp &nbsp Workshop on Lattice Hadron Physics, Cairns, Australia, July 13, 2001 □ Gluonic excitations from the lattice: A brief survey &nbsp &nbsp Hadron 2001, Protvino, Russia, August 28, 2001 □ Quarks, Gluons, and Computers: Towards an understanding of the Strong Force &nbsp &nbsp CMU Physics Colloquium, September 10, 2001 (3Mb pdf file) □ Study of Exotic Hadrons in Lattice QCD &nbsp &nbsp Workshop on Advanced Computing in Nuclear &nbsp &nbsp and Hadronic Physics, Santorini, Greece, October 2, 2001 □ Fall 2009 -- 33770 Quantum Mechanics IV (Field Theory) □ Spring 2010 -- 33771 Quantum Mechanics V (Field Theory) Other links
{"url":"http://www.andrew.cmu.edu/user/cmorning/","timestamp":"2014-04-21T01:03:23Z","content_type":null,"content_length":"7479","record_id":"<urn:uuid:1bc6b7f2-a841-476e-92d9-fc52d5ae8787>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic topology I will summarize the results I developed and presented another thread in a more coherent and easier-to-follow fashion here. 1. HOMOTOPY You have two topological spaces X and Y and two continuous functions . Suppose for each , there corresponds a continuous function such that and for all . A continuous function is callled a homotopy from . We say that are homotopic iff there exists a homotopy from , and we write ; we can also express the fact that is a homotopy from by writing . The relation “is homotopic to” is an equivalence relation on , the class of all continuous functions from (i) For any , the function is a homotopy from to itself. (ii) For any , if , then is a homotopy from (ii) For any , if and , define as follows. Then . the equivalence classes under this equivalence relation are called homotopy classes.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=97278","timestamp":"2014-04-20T16:35:31Z","content_type":null,"content_length":"27011","record_id":"<urn:uuid:ae14521f-de4b-455d-b342-b2d36c6551e0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
North Riverside, IL Algebra Tutor Find a North Riverside, IL Algebra Tutor ...These methods are nothing unusual, but on the part of the teacher, they require a great deal of self-discipline, and willingness to "push" the students. I completed a Discrete math course (included formal logic, graph theory, etc.) in college, and computer science courses that handled automata t... 21 Subjects: including algebra 1, algebra 2, chemistry, calculus ...I spent a summer teaching at a high school math camp, I have been a coach for the Georgia state math team for a few years now, and I was a grader for the IU math department. I've tutored many people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside... 13 Subjects: including algebra 2, algebra 1, calculus, statistics ...I have over 10 years of experience teaching high school students and high school graduates especially, to prepare them for college entrance examination. I have a Bachelor's degree in Electrical / Electronics Engineering from Yaba College of Technology in Nigeria. I have taken and passed the Illinois State Board Examination. 9 Subjects: including algebra 1, algebra 2, geometry, GED I will be teaching honors physics and chemistry this year. This summer, I worked for ComEd's "smart grid" education program. I also spent a year doing ACT tutoring at Huntington Learning Center. I am available for tutoring chemistry, physics, earth science, math, and ACT on the weekends 12 Subjects: including algebra 1, algebra 2, chemistry, physics ...Working one-on-one with me enables you to learn at your own pace. We will review a concept until you have mastered that material, and will introduce new concepts only when you are ready. Most importantly, my goal is to help students take ownership of their learning, with the goal of making you an independent and successful Spanish student. 28 Subjects: including algebra 1, Spanish, English, world history
{"url":"http://www.purplemath.com/North_Riverside_IL_Algebra_tutors.php","timestamp":"2014-04-16T13:44:33Z","content_type":null,"content_length":"24349","record_id":"<urn:uuid:66fc9248-079f-4975-9ad3-799c7140a02d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Does a simple explanation of group theory exist? - Inorganic Chemistry

Hey guys, I had a couple of lectures on group theory this week, but it's only an undergraduate inorganic chemistry class and the whole concept is very abstract and hard for me to grasp, specifically once I get past the basic operations. I understand the basic applications, such as how to calculate the number of vibrational modes for nonlinear and linear molecules, but my problem is explaining and interpreting character tables. I don't understand what the A, B, etc. are, and I have trouble calculating their values. So my question is, does anyone know of a good resource on the web to assist me? I've looked for some, but I was hoping someone on here knows something that has helped them or is just in general very helpful.
{"url":"http://www.scienceforums.net/topic/44983-does-a-simple-explanation-of-group-theory-exist/","timestamp":"2014-04-17T01:12:54Z","content_type":null,"content_length":"43145","record_id":"<urn:uuid:01b93078-9853-4cd5-92ac-4718e307a090>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Good uses of Siegel zeros?

The short version of my question goes: What is known to follow from the existence of Siegel zeros?

A longer version to give an idea of what I have in mind: The "exceptional zeros" of course first cropped up in the work of Deuring and Heilbronn on the Gauss class number problem: the finitude of discriminants $D$ with $h(D) = 1$ (or $h(D) = c$ in general) is shown to follow from the failure of the RH (Deuring-Mordell) or, more generally, the failure of the GRH (Heilbronn). Since the finitude also follows (with good bounds) on the assumption of the GRH (Hecke-Landau), this settles the result unconditionally. (A more careful analysis by Heilbronn-Linfoot gives explicit bounds so that $h(D) = 1$ can happen at most 10 times.) Siegel later crystallised out the peculiar case of the exceptional real zero and proved the result in a very sharp form. Linnik then used these ideas to show that $P(a,q) \ll q^L$, $P(a,q)$ being the least prime $\equiv a \pmod q$ and $L$ an explicit constant. Again this was known to follow on the GRH (with $L < 2 + \varepsilon$), hence the result is unconditional.

Iwaniec [Conversations on the Exceptional Character, p. 118] mentions on the other hand that Siegel zeros can be exploited to give even stronger results than what is available on the (presumably true) GRH; for example it follows that $L < 2$, hence improving on the bound known on the GRH. This is related to earlier work by Heath-Brown showing that the existence of exceptional zeros implies both that $L < 3$ (which is better than what is known unconditionally, although not better than the $L < 2 + \varepsilon$ that follows on the GRH) and the existence of infinitely many twin primes.

The results of Heath-Brown and Iwaniec, however, assume an infinite number of exceptional zeros, to increasing modulus $q$, to derive results stronger than what is known unconditionally (or even on the GRH). What I would like to know is whether similar results are proved in the literature working only from a single exceptional zero (or even from the weaker assumption of the mere existence of any zeros off the critical line, like Deuring and Heilbronn did)?

Putting $\Theta =$ the supremum of the real parts of the zeros, we have of course that $1/2 < \Theta$ (i.e. the failure of the RH) would mess up the approximate formulas for different prime counting functions (e.g. by a theorem of Schmidt we have $\Pi(x) - {\rm li}(x) = \Omega_\pm(x^{\Theta-\varepsilon})$, thus $= \Omega_\pm(x^{1/2})$ in case RH fails), but as long as the mere existence of any zero off the line is assumed (not some horrifying disaster like $\Theta = 1$), I know of no results stronger than the unconditionally known irregularities in the approximation (e.g. from Littlewood's theorem). The assumption of a Siegel zero, of course, is an assumption of such a special failure of the GRH, so perhaps I'm missing some rather obvious examples?

@Kálmán: I am not sure if you understood what kiskis said below. Heath-Brown proved that the existence of an exceptional zero implies the twin prime conjecture: Prime twins and Siegel zeros, Proc. London Math. Soc. (3) 47 (1983), no. 2, 193-224. – GH from MO May 19 '13 at 21:44

@GH: I mentioned this result in my question above, but "an" exceptional zero isn't quite right: H-B crucially relies on an infinitude of them.
What kiskis was hinting at was a possible unconditional proof of the twin prime conjecture though (unless I misunderstood); to use the result of H-B for that you need to show that the conjecture holds also conditioned on the absence of (an infinitude of) Siegel zeros. – Kálmán Kőszegi May 19 '13 at 21:59

@Kálmán: The notion of exceptional zero does not make sense for a single zero (this is also what Terry Tao tried to say). At any rate, HB's result can be formulated as: if the twin prime conjecture is false, then there is no exceptional zero (with an appropriate constant in the definition of the exceptional zero). In other words, there is $c>0$ such that if there exist a quadratic $\chi$ modulo $q$ and a real zero $\sigma>1-c/\log q$ of $L(s,\chi)$, then the twin prime conjecture is true. I continue in next comment. – GH from MO May 19 '13 at 23:00

Also, I think your opening line "What is known to follow from the existence of Siegel zeros?" does not reflect what you want (in the light of your comments). You really want: What is known to follow from both the existence and the absence of Siegel zeros. That is, what can we prove unconditionally by using the notion of Siegel zeros on the way. – GH from MO May 19 '13 at 23:01

@GH: The constant $c$ in your statement above (essentially Theorem 2 of HB's paper) is not computable however, and it depends on whether the infinite sequence of zeros HB used in his Theorem 1 exists (if it doesn't, we just put $c$ small enough that no zero as specified exists and the implication is trivially true). [HB has a warning about this right after stating Theorem 2.] For my question to make proper sense you should assume that $c$ is given once and for all (cf. my first comment to Tao's answer below); in that case there is no trouble with the notion of a single Siegel zero. – Kálmán Kőszegi May 20 '13 at 0:10

4 Answers

With the usual definition of a Siegel zero (involving an unspecified constant $C_\varepsilon$ for each $\varepsilon>0$), it is not easy to talk about a "single" Siegel zero unless one decides to fix exactly how $C_\varepsilon$ is to depend on $\varepsilon$. On the other hand, the classical proof of the prime number theorem also shows that $L(\sigma+it,\chi)$ has no zeroes in the region $\sigma \geq 1-\frac{c}{\log q(|t|+1)}$ for some effective (and very explicit) $c>0$, with at most one exception. This gives an effective prime number theorem in arithmetic progressions $$ \psi(x; a,q) = \frac{x}{\phi(q)} - \frac{\chi(a)}{\phi(q)} \frac{x^\beta}{\beta} + O( x \exp(-b \sqrt{\log x}))$$ for an absolute and effective constant $b>0$, where $\beta$ is the exceptional zero (if it exists) of the exceptional quadratic character $\chi$. (If there is no exceptional zero, the second term on the right-hand side is simply deleted.) This formula can then be used as a partial but effective substitute for the Siegel-Walfisz theorem for all sorts of number-theoretic applications, e.g. this formula (or something very close to it) is used in all the known effective unconditional proofs of Vinogradov's three primes theorem. In many cases the results are actually easier to prove if the exceptional zero is present. Iwaniec's ICM survey at http://www.icm2006.org/proceedings/Vol_I/16.pdf discusses these issues in more detail.
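To get a rough quantitative feel for the exceptional-zero term in the formula above, here is a small illustrative computation. It is purely a sketch: the modulus $q$ and the zero $\beta$ below are hypothetical choices (no Siegel zero is known to exist), and the helper `phi` is just a minimal totient implementation written for this example.

```python
from math import log

def phi(q):
    """Euler's totient function by trial division (fine for small q)."""
    result, n, p = q, q, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def psi_terms(x, q, beta):
    """Main term and hypothetical exceptional-zero term in
    psi(x; a, q) ~ x/phi(q) - chi(a) * x**beta / (beta * phi(q))."""
    main = x / phi(q)
    exceptional = x**beta / (beta * phi(q))
    return main, exceptional

# Hypothetical example: q = 10**4 and a zero at beta = 1 - 1/(10*log(q)).
q = 10**4
beta = 1 - 1 / (10 * log(q))
for x in (10**8, 10**12, 10**16):
    main, exc = psi_terms(x, q, beta)
    print(f"x = {x:.1e}:  main = {main:.3e},  exceptional = {exc:.3e},  ratio = {exc/main:.3f}")
```

Even at $x = 10^{16}$ this hypothetical correction term is still roughly two thirds of the main term, which is one way to see how strongly a genuine exceptional zero would distort the distribution of primes in progressions modulo $q$.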
ADDED LATER: Another interesting phenomenon, first observed by Montgomery and Weinberger, is that the existence of a single Siegel zero $L(\sigma,\chi)=0$ forces many other L-functions $L(s,\psi)$ to have most of their zeroes (at a certain height) arranged on the critical line and to lie close to an arithmetic progression (this type of behaviour is occasionally referred to as the "Alternative Hypothesis", being the extreme opposite of the more commonly believed "GUE hypothesis", but which thus far has proven impossible to completely exclude). Roughly speaking, the reason for this is that if $L(\sigma,\chi)=0$ for some $\sigma$ close to $1$, then the residue of $\zeta(s) L(s,\chi)$ is unexpectedly small at $1$, making the Dirichlet convolution $1*\chi$ much sparser than expected. For any other Dirichlet character $\psi$, $\psi*\chi\psi$ is pointwise dominated by $1*\chi$ and is similarly sparse. This means that the function $L(s,\psi) L(s,\psi\chi)$ is very well behaved for typical $\psi$ and for $s$ near the critical line; indeed, it is dominated by the initial segment of the Dirichlet series $\sum_n \frac{\psi*\psi\chi(n)}{n^s}$ (which is very smooth in $s$), plus the complementary term coming from the functional equation (or equivalently, from Poisson summation), which oscillates at a precise frequency depending on the height of $s$ and the conductor of $\psi$ and $\psi \chi$. The interaction between these two terms is what places the zeroes of $L(s,\psi) L(s,\psi\chi)$, and hence of $L(s,\psi)$, near an arithmetic progression.

Is the usual definition $\varepsilon$-dependent like that? At p. 293 in Iwaniec's survey he seems to be providing a definition of "exceptional zero" as a zero $\beta > 1 - \frac{c}{\log D}$, $c$ being "some fixed sufficiently small positive constant". (My understanding was that $c$ might as well be given some numerical value and was left unspecified for convenience only.) – Kálmán Kőszegi Mar 23 '13 at 0:07

The formula above (as well as Ralph's class number formula below) is of course a non-trivial consequence of the existence of $\beta$, but maybe not a truly "good" use of Siegel zeros, since $\beta$ itself is turning up in the formula we prove. Since $\beta$ is (we all think?) a fiction, this doesn't actually provide us with any true mathematical results we couldn't derive without assuming the existence of $\beta$, even though the formula might be useful for obtaining unconditional results. H-B's twin prime result would be a nice example of what I want, but requires an infinity of zeros... – Kálmán Kőszegi Mar 23 '13 at 0:26

I didn't know this nice survey by Iwaniec. Thanks for the link! – Joël Mar 23 '13 at 5:55

The main application that Montgomery & Weinberger give [the paper is available at: matwbn.icm.edu.pl/ksiazki/aa/aa24/aa2457.pdf] is the determination of all $D$ with $h(D) = 2$; this is just in line with the original Deuring-Heilbronn result though: a known consequence of GRH is shown to follow in the presence of Siegel zeros and we get an unconditional result. I would be interested in the best known bounds for the phenomenon though: assuming a Siegel zero to modulus $q$, how far up the strip can we guarantee that zeros of other L-functions lie on the critical line?
– Kálmán Kőszegi May 20 '13 at 0:39

There is an asymptotic formula for the relative class number of $p$th cyclotomic fields where a Siegel zero $\beta$ occurs: $$ h^-(p)=\frac{p+3}{4}\log p-\frac{p}{2}\log 2\pi + \log (1-\beta) +O(\log^2_2 p)$$ If no Siegel zero exists or if $p\equiv 1(4)$, the term $\log (1-\beta)$ disappears in the formula. This is Theorem 1 from Puchta: On the class number of $p$-th cyclotomic field. Arch. Math. 74 (2000), 266-268. But note the misprint of the sign in the theorem: $\frac{p-3}{4}$ should be $\frac{p+3}{4}$. Compare also with a classical formula of Masley, Montgomery (Cyclotomic fields with unique factorization. J. Reine Angew. Math., 286/287 (1976), 248-256. Theorem 1): $$h^-(p)=\frac{p+3}{4}\log p-\frac{p}{2}\log 2\pi+O(\log p)$$

Is this really a formula for $h^-(p)$, not $\log h^-(p)$? – Noam D. Elkies May 19 '13 at 19:01

I just ran across this paper. I don't know if it is a "good" use or not. It is in the spirit of Heath-Brown I presume. Lithuanian Mathematical Journal, January 2010, Volume 50, Issue 1, pp 43-53: L. Germán, I. Kátai, On multiplicative functions on consecutive integers, http://link.springer.com/article/10.1007%2Fs10986-010-9070-8. From the abstract: Concerning the Liouville function $\lambda$, we find an upper estimate for ${1\over x}|\sum_{n\le x}\lambda(n)\lambda(n+1)|$ under the unproved hypothesis that $L(s,\chi)$ have Siegel zeros for an infinite sequence of L-functions.

Siegel zeros are motivated by a possible problem with the uniformity of the distribution of primes in progression. As such, if there are only finitely many Siegel zeros then the uniformity problem would not arise, and we would say that no Siegel zeros exist! One of the most striking consequences of the existence of Siegel zeros that I know of is Heath-Brown's proof that Siegel zeros imply the twin prime conjecture. It will be interesting to watch and see if this result will be used in the eventual proof of the twin prime conjecture (now that we have bounded gaps between primes).

Getting the twin prime conjecture from some niceness property of zeros of L-functions (even the full GRH) seems out of reach at present, so in terms of recent results it seems that Helfgott's proof of weak Goldbach is a more promising place to look for applications of anomalous zeros; since WGC is known to follow on the GRH, we are free to assume a counterexample $\beta$ for an unconditional proof. From what I gather from skimming the articles this isn't the strategy he adopted though, only a numerical verification of GRH for many values. – Kálmán Kőszegi May 19 '13 at 21:13

Dear Kalman, as you might know, Yitang Zhang has recently shown that there exist infinitely many primes with gaps less than 70 million; for the proof see here: annals.math.princeton.edu/articles/7954. Many mathematicians have started to think about reducing the 70 million down to a smaller number and eventually to two. In the final attack on the twin prime conjecture, we can assume that Siegel zeros do not exist because of Heath-Brown's result. – kiskis May 23 '13 at 8:05

Another comment on the Goldbach conjecture: Vinogradov's first proof of the ternary Goldbach relied on the Siegel-Walfisz theorem, which in turn depends on Siegel's bound for the Siegel zero. As a result the constant C, such that all odd n > C are a sum of three primes, was ineffective.
In order to make the constant effective you have to use the weaker, but effective, analogue of Siegel-Walfisz. This produces an enormous constant. In order to bring the constant down you have to play games with the Siegel zero. – kiskis May 23 '13 at 8:19
{"url":"http://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros/132868","timestamp":"2014-04-17T12:52:23Z","content_type":null,"content_length":"86128","record_id":"<urn:uuid:6f1d33c5-71bb-4f04-b08e-26281a56232f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
5x^2-6-12x^2+12+2x - WyzAnt Answers

Hi there. Like terms are terms that have the same variable factors. 2x and 2y are not like terms, but 5x and -8x are like terms. x^2 and x are NOT like terms, because x^2 = x*x (it has two variable factors). In an expression, find the like terms and add their coefficients to combine like terms:

5x^2 - 6 - 12x^2 + 12 + 2x = (5 - 12)x^2 + 2x + (-6 + 12) = -7x^2 + 2x + 6

Don't add exponents. Remember that x*x = x^2 but 2x = x + x.
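If you want to double-check a simplification like this, a computer algebra system will combine the like terms for you. A tiny sketch using the sympy library (assuming it is installed):

```python
import sympy as sp

x = sp.symbols('x')
expr = 5*x**2 - 6 - 12*x**2 + 12 + 2*x
print(sp.expand(expr))   # prints -7*x**2 + 2*x + 6
```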
{"url":"http://www.wyzant.com/resources/answers/6299/5x_2_6_12x_2_12_2x","timestamp":"2014-04-18T06:40:14Z","content_type":null,"content_length":"35728","record_id":"<urn:uuid:bccfe6b4-8c9e-4d06-b601-3f24f4f92791>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US20040087835 - Methods for consistent forewarning of critical events across multiple data channels [0001] This invention was made with assistance under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The Government has certain rights in this invention. [0002] The field of the invention is methods of computer analysis for forewarning of critical events, such as epileptic seizures in human medical patients, and mechanical failures in machines and other physical processes. [0003] Hively et al., U.S. Pat. Nos. 5,743,860 and 5,857,978 disclose methods for detecting and predicting epileptic seizures by acquiring brain wave data from a patient, and analyzing the data with traditional nonlinear methods. [0004] Many of the prior art methods of epileptic forewarning were based on intercranial electroencephalogram (EEG) data. The present invention can be practiced with EEG data obtained from sensors applied to the scalp of the patient. Prior advances using scalp EEG data removed artifacts with a zero-phase quadratic filter to permit analysis of single-channel scalp EEG data. Hively et al., U.S. Pat. No. 5,815,413, disclosed the use of phase space dissimilarity measures (PSDM) to forewarn of impending epileptic events from scalp EEG in ambulatory settings. Despite noise in scalp EEG data, PSDM has yielded superior performance over traditional nonlinear indicators, such as Kolmogorov entropy, Lyapunov exponents, and correlation dimension. However, a problem still exists in forewarning indicators, because false positives and false negatives may occur. [0005] Hively et al., U.S. Pat. No. 5,815,413, also discloses the applicability of nonlinear techniques to monitor machine conditions such as the condition of a drill bit or the performance of an electrical motor driving a pump. [0006] The present invention uses prior advances in the application of phase space dissimilarity measures to provide forewarning indications. In this method, a renormalized measure of dissimilarity, e.g., U(χ^2) or U(L), is compared to a threshold value (U[C]) and upon exceeding the threshold value for a sequential number of occurrences (N[occ]), a forewarning indication is determined. [0007] Forewarning indications are further resolved into true positives, true negatives, false positives and false negatives in multiple data sets taken from the same patient over multiple channels, where it is known whether the patients experienced biomedical events or did not experience such events. The results are then used to calculate a channel-consistent total true rate (f[T]). This approach allows the observation of a channel or channels providing the largest channel-consistent total-true forewarning indications. Test data is then processed from the selected channel or channels to develop measures of dissimilarity and forewarning indications, which are most likely to be true forewarning indications. [0008] Forewarning indications are used to forewarn of critical events, such as various biomedical events. Typical biomedical events and sources of data include, but are not limited to, epileptic seizures from EEG, cardiac fibrillation from EKG, and breathing difficulty from lung sounds. [0009] The methods of the present invention also include determining a trend in renormalized measures, e.g., U(χ^2) or U(L), based on phase space dissimilarity measures (χ^2, L) for data sets collected during increasing fault conditions in machines or other physical processes. 
The invention then uses a "least squares" analysis to fit a straight line to the sum of the renormalized measures in order to forewarn of a critical event, such as a machine or process failure.

[0010] Typical machines include, but are not limited to, motors, pumps, turbines, and metal cutting. Typical time-serial machine data include, but are not limited to, electrical current, voltage, and power; position, velocity, and acceleration; and temperature and pressure. Other physical processes capable of being monitored by sensors can also be observed to forewarn of malfunctions or failures.

[0011] In the present invention, the data can also be analyzed to determine values for specific parameters that maximize the total true rate for one or more respective channels.

[0012] A further aspect of the invention enhances techniques by utilizing equiprobable symbols for computing the distribution functions from a connected or unconnected phase space.

[0013] Other objects and advantages of the invention, besides those discussed above, will be apparent to those of ordinary skill in the art from the description of the preferred embodiments, which follows. In the description, reference is made to the accompanying drawings, which form a part hereof, and which illustrate examples of the invention. Such examples, however, are not exhaustive of the various embodiments of the invention, and therefore reference is made to the claims, which follow the description, for determining the scope of the invention.

[0019] In a first embodiment of the present invention, a database of forty (40) data sets was collected, each with at least one electrographic temporal lobe (TL) event, as well as twenty (20) data sets without epileptic events, as controls. Data sets were obtained from forty-one (41) different patients with ages between 4 and 57 years. This data included multiple data sets from eleven (11) patients (for a total of 30 events), which were used for the channel consistency analysis. Such data can be collected (for example) with a 32-channel EEG instrument (Nicolet-BMSI, Madison, Wis.) with 19 scalp electrodes in the International 10-20 system of placement as referenced to the ear on the opposing hemisphere. Each channel of scalp potential was amplified separately, band-pass filtered between 0.5 and 99 Hz, and digitized at 250 Hz. The 19 EEG channels in each of these data sets had lengths between 5,016 seconds (1 hour and 23 minutes) and 29,656 seconds (8 hours and 14 minutes).

[0020] The following description incorporates the methods first disclosed in Hively, U.S. patent application Ser. No. 09/520,693, filed Mar. 7, 2000 and now pending. To the extent that the methods of the present invention build upon the methods disclosed there, the disclosure of that application is hereby incorporated by reference.

[0021] In the present invention, eye blink artifacts from scalp EEG are removed with a zero-phase quadratic filter, which is more efficient than conventional linear filters. This filter uses a moving window of data points, e[i], with the same number of data points, w, on either side of a central point. A quadratic curve is fitted to these 2w+1 data points, taking the central point of the fit as the low-frequency artifact, f[i]. The residual, g[i]=e[i]−f[i], has essentially no low-frequency artifact activity. All subsequent analysis uses this artifact-filtered data.

[0022] Next, each artifact-filtered value is converted into a symbolized form, s[i], that is, one of S different integers,
0, 1, . . . , S-1:

0 ≦ s[i] = INT[S(g[i] − g[min])/(g[max] − g[min])] ≦ S−1.   (1)

[0023] The function INT converts a decimal number to the closest lower integer; g[min] and g[max] denote the minimum and maximum values of g[i], respectively, over the base case (reference data). To maintain S distinct symbols, the following expression holds, namely s[i]=S−1 when g[i]=g[max]. Expression (1) creates symbols that are uniformly spaced between the minimum and maximum in signal amplitude (uniform symbols). Alternatively, one can use equiprobable symbols, by ordering all N base case data points from the smallest to the largest value. The first N/S of these ordered data values correspond to the first symbol, 0. Ordered data values (N/S)+1 through 2N/S correspond to the second symbol, 1, and so on up to the last symbol, S-1. By definition, equiprobable symbols have non-uniform partitions in signal amplitude and present the advantage that dynamical structure arises only from the phase-space reconstruction. Moreover, large negative or positive values of g[i] have little effect on equiprobable symbolization, but significantly change the partitions for uniform symbols. Finally, the mutual information function is a smooth function of the reconstruction parameters for equiprobable symbols, but is a noisy function of these same parameters for uniform symbols. Thus, equiprobable symbols provide better discrimination of condition change than uniform symbols when constructing a connected phase space.

[0024] Phase-space (PS) construction uses time-delay vectors, y(i)=[s[i], s[i+λ], . . . , s[i+(d−1)λ]], to unfold the underlying dynamics. Critical parameters in this approach are the time delay, λ, the system dimensionality, d, and the type and number of symbols, S. Symbolization divides the phase space into S^d bins. The resulting distribution function (DF) is a discretized density on the attractor, which is obtained by counting the number of points that occur in each phase space bin. The population of the ith bin of the distribution function is denoted Q[i] for the base case and R[i] for a test case, respectively. The test case is compared to the base case by measuring the difference between Q[i] and R[i] as:

χ^2 = Σ[i] (Q[i] − R[i])^2/(Q[i] + R[i])   (2)

L = Σ[i] |Q[i] − R[i]|   (3)

[0025] Here, the summations run over all of the populated phase space cells. These measures account for changes in the geometry, shape, and visitation frequency of the attractor, and are somewhat complementary. The χ^2 measure is one of the most powerful, robust, and widely used statistics for comparison between observed and expected frequencies. In this context, χ^2 is a relative measure of dissimilarity, rather than an unbiased statistic for accepting or rejecting a null statistical hypothesis. The L distance is the natural metric for distribution functions by its direct relation to the total invariant measure on the attractor and defines a bona fide distance. Consistent calculations of these measures obviously require the same number of points in both the base case and test case distribution functions, identically sampled; otherwise, the distribution functions must be properly rescaled.

[0026] The connected PS is constructed by connecting successive PS points as prescribed by the underlying dynamics, y(i)→y(i+1). Thus, a discrete representation of the process flow is obtained in the form of a 2d-dimensional vector, Y(i)=[y(i), y(i+1)], that is formed by adjoining two successive vectors from the d-dimensional reconstructed PS. Y(i) is a vector for the connected phase space (CPS).
As before, Q and R denote the CPS distribution functions for the base case and test case, respectively. The measures of dissimilarity between the two distribution functions for the CPS, signified by the "c" subscript, are thus defined as follows:

χ[c]^2 = Σ[ij] (Q[ij] − R[ij])^2/(Q[ij] + R[ij])   (4)

L[c] = Σ[ij] |Q[ij] − R[ij]|   (5)

[0027] The first subscript in Eqs. (4)-(5) denotes the initial PS point, and the second subscript denotes the sequel PS point in the PS pair. These CPS measures have higher discriminating power than unconnected PS measures of dissimilarity. Indeed the measures defined in Eqs. (4)-(5) satisfy the following inequalities: χ^2 ≦ L, χ[c]^2 ≦ L[c], L ≦ L[c], and χ^2 ≦ χ[c]^2, where χ[c]^2 and L[c] are dissimilarity measures for the connected phase space and χ^2 and L are dissimilarity measures for the unconnected PS.

[0028] To assure robustness, the construction of the base case data requires careful statistics to eliminate outlier base case cutsets. The first B non-overlapping windows of N points (cutsets) for each dataset become the base case cutsets. A few of these base case cutsets can be very atypical, causing a severe bias in the detection of condition change. The base case cutsets are tested for outliers as follows. A comparison of the B(B−1)/2 unique pairs among the B base case cutsets via Eqs. (2)-(5) yields dissimilarities, from which we obtain an average, V, and sample standard deviation, σ, for each of the four measures of dissimilarity, V={L, L[c], χ^2, and χ[c]^2}, where the subscript, c, denotes the measures of dissimilarity for the CPS. Then, a χ[j]^2 statistic, χ[j]^2=Σ[i](V[ij]−V)^2/σ, is calculated for each of these four dissimilarity measures. The index i runs over the B non-overlapping base case cutsets. The index j is fixed, to test the jth cutset against the other B−1 cutsets, thereby giving B−1 degrees of freedom in the χ[j]^2 statistic. The null statistical hypothesis allows a random outlier with a probability less than 2/B(B−1). If this hypothesis is not satisfied, we identify an outlier cutset as having χ[j]^2>19.38 for at least one of the four dissimilarity measures, which corresponds to a probability larger than 1/45 for B=10. If this analysis does not identify an outlier, then the previous values of V and σ are used for subsequent renormalization, as described below. If this analysis identifies an outlier, the cutset is removed. The analysis is repeated with a new value, B=9, for the remaining base case cutsets to identify any additional outliers. Their presence is indicated by the largest χ[j]^2 statistic exceeding the new threshold of 17.24, corresponding to a random probability larger than 1/36, as interpolated from standard statistical tables for 8 degrees of freedom. Rejection of the null hypothesis for even fewer remaining cutsets (degrees of freedom) corresponds to a χ[j]^2 statistic larger than 15.03, 12.74, and 10.33, for B=8, 7, and 6, respectively. If the analysis identifies five (or more) outliers, we would have to reject all of the base cases as unrepresentative, and acquire a new set of ten cutsets as base cases. However, in the present analysis, more than four outliers were not seen.

[0029] The disparate range and variability of these measures are difficult to interpret for noisy EEG, so a consistent method of comparison is needed. To this end, the dissimilarity measures are renormalized, as described below. The B non-outlier base case cutsets are compared to each test case cutset, to obtain the corresponding average dissimilarity value, V[i], of the ith cutset for each dissimilarity measure.
Here, V denotes each dissimilarity measure from the set, V={L, L[c], χ^2, and χ[c] ^2}. The mean value, V, and the standard deviation, σ, of the dissimilarity measure V are calculated using the remaining base case cutsets, after the outliers have been eliminated, as discussed above. The renormalized dissimilarity is the number of standard deviations that the test case deviates from the base case mean: U(V)=|V[i]−V|/σ. [0030] Once the renormalized measures for the test and base cases have been obtained, a threshold, U[C], is selected for each renormalized measure U to distinguish between normal (base) and possibly abnormal (test) regimes. The choice of a reasonable threshold is critical for obtaining robust, accurate, and timely results. A forewarning indication is obtained when a renormalized measure of dissimilarity exceeds the threshold, U≧U[C], for a specified number, N[OCC], of sequential occurrences within a preset forewarning window. [0031] According to the present invention, and as illustrated in FIG. 1, N[OCC ]sequential occurrences above the threshold (U>U[C]) are interpreted as a forewarning indication. Such forewarning indications are true positives, TP, if they occur within a forewarning window 20. Other performance metrics include true negatives, TN, false positives, FP, and false negatives, FN, also defined with respect to the same preset forewarning window, as shown in FIG. 1. The horizontal axis represents time, t. The thick vertical line at T[EVENT ]denotes an event onset time. The thin vertical lines delimit the forewarning-time window 20, during which T[1]≦t≦T[2]<T[EVENT]. For illustration, “reasonable” forewarning windows are set at T[1]=T[EVENT]−60 min and T[2]=T[EVENT]−1 min, for biomedical events. A typical forewarning window for machine failure is on the order of either hours or days. The vertical axis corresponds to a renormalized measure of dissimilarity, U, as discussed above. The horizontal dashed line (--) shows the threshold, U[C]. A forewarning time in one channel, T[FW], is that time when the number of simultaneous indications, N[SIM], among the four dissimilarity measures exceeds some minimum value. The best elimination of FPs occurs for a value of N[SIM]=4. Analysis starts at t=0, and proceeds forward in time until the first forewarning occurs, as defined above. The algorithm then obtains the forewarning statistics by an ordered sequence of logical tests for each channel: [0032] FP=false positive=forewarning at any time, when no event occurs, or forewarning with T[FW]<T[1], or T[FW]>T[2], for an event at t=T[EVENT]; [0033] TP=true positive=forewarning with T[1]≦T[FW]≦T[2 ]for an event at t=T[EVENT]; [0034] TN=true negative=no forewarning, when no event occurs; and [0035] FN=false negative=no forewarning for t≦T[EVENT ]with an event at t=T[EVENT]. [0036] The i-th dataset is referred to as TP if at least one channel shows forewarning within the desired window, T[1]≦T[FW]≦T[2]. This indication is equivalent to TP[i]=1 in the equations below. A TN dataset shows no forewarning in at least one channel when no event occurs. This indication is equivalent to TN[i]=1 in the equations below. The total true rate, T=Σ[i](TP[i]+TN[i])/Σ[i](TP[i]+TN [i]+FP[i]+FN[i]), and the total false rate, F=Σ(FP[i]+FN[i])/Σ[i](TP[i]+TN[i]+FP[i]+FN[i]), where the sums run over all data sets. This approach allows selection of an appropriate channel for subsequent real-time forewarning, consistent with the previous characterization of the data. 
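A compact way to see how the pieces in paragraphs [0022] through [0031] fit together is a small numerical sketch. The code below is only an illustrative reimplementation of the described steps, not the patented software: the parameter values are placeholders, the function names are invented for this example, and the base-case outlier screening and the connected-phase-space measures (χ[c]^2, L[c]) are omitted for brevity.

```python
import numpy as np

# Placeholder analysis parameters (the patent tunes these per application)
S, d, lam = 20, 2, 17            # symbols, embedding dimension, time-delay lag
U_C, N_OCC = 4.0, 5              # forewarning threshold and required sequential occurrences

def equiprobable_symbols(g, g_base, S=S):
    """Map filtered data g onto S equiprobable symbols using base-case quantiles."""
    edges = np.quantile(g_base, np.linspace(0, 1, S + 1)[1:-1])
    return np.searchsorted(edges, g)                 # integers in 0 .. S-1

def distribution_function(sym, S=S, d=d, lam=lam):
    """Visitation frequency of the S**d phase-space bins for d-dimensional delay vectors."""
    n = len(sym) - (d - 1) * lam
    idx = np.zeros(n, dtype=np.int64)
    for k in range(d):                               # encode each delay vector as one bin label
        idx = idx * S + sym[k * lam : k * lam + n]
    counts = np.bincount(idx, minlength=S**d).astype(float)
    return counts / counts.sum()

def dissimilarity(Q, R):
    """Chi-squared and L1 dissimilarity between two distribution functions (Eqs. (2)-(3))."""
    m = (Q + R) > 0
    chi2 = np.sum((Q[m] - R[m]) ** 2 / (Q[m] + R[m]))
    L = np.sum(np.abs(Q - R))
    return chi2, L

def renormalized_chi2(test_cutsets, base_cutsets):
    """U(chi^2) = |V_i - mean(V)| / std(V), with mean/std taken over base-case pairs."""
    g_base = np.concatenate(base_cutsets)
    base_dfs = [distribution_function(equiprobable_symbols(c, g_base)) for c in base_cutsets]
    pairs = [dissimilarity(base_dfs[a], base_dfs[b])[0]
             for a in range(len(base_dfs)) for b in range(a + 1, len(base_dfs))]
    V_mean, V_std = np.mean(pairs), np.std(pairs, ddof=1)
    U = []
    for c in test_cutsets:
        R = distribution_function(equiprobable_symbols(c, g_base))
        V_i = np.mean([dissimilarity(Q, R)[0] for Q in base_dfs])
        U.append(abs(V_i - V_mean) / V_std)
    return np.array(U)

def forewarning_index(U, U_C=U_C, N_occ=N_OCC):
    """First cutset index at which U has exceeded U_C for N_occ sequential cutsets."""
    run = 0
    for i, u in enumerate(U):
        run = run + 1 if u > U_C else 0
        if run >= N_occ:
            return i
    return None
```

In a full implementation the same renormalization would be repeated for L, χ[c]^2, and L[c], and a forewarning would be declared only when N[SIM] of the four measures indicate simultaneously, as described in paragraph [0031].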
[0037] Improvement in the channel-consistent total-true rate is carried out by maximizing an objective function that measures the total true rate for any one channel, as well as channel consistency. For this analysis, 30 data sets were used from 11 different patients with multiple data sets as follows: 7 patients with 2 data sets, one patient with 3 data sets, 2 patients with 4 data sets, and one patient with data sets. To quantify channel consistency, the following notation and definitions are used: [0038] i=dataset number; [0039] j=channel number in which forewarning is determined (1≦j≦19); [0040] k=patient number; [0041] M(k)=number of data sets for the k-th patient; [0042] P=number of patients with multiple data sets (eleven for the present analysis); [0043] TN[ijk]=1 for a true negative indication in the j-th channel of the i-th dataset for the k-th patient, and =0 for a false negative indication in the j-th channel of the i-th dataset for the k-th patient; [0044] TP[ijk]=1 for a true positive indication in the j-th channel of the i-th dataset for the k-th patient, and =0 for a false positive indication in the j-th channel of the i-th dataset for the k-th patient. [0045] The total-true rate for the j-th channel of the k-th patient is T[jk]=Σ[i][TP[ijk]+TN[ijk]], by summing over the datasets, i=1 to M(k). The occurrence of more than one true positive and/or true negative in the j-th channel is indicated by T[jk]≧2, while T[jk]≦1 means that the j-th channel provides no consistency with other data sets for the same patient. Consequently, the channel overlap is defined as: [0046] The channel-consistent total-true rate is the average, f[T]=[Σ[k]c[k]]/[Σ[k]M(k)], where the index, k, sums over all P patients, weighting each dataset equally. If the channel-consistent total-true rate had been defined as [Σ[k]max (T[jk])/M(k)]/P, then patients with only one dataset would have been improperly weighted the same as patients with several data sets). For selected values of the parameters (e.g., N, w, S, d), the renormalized measures of PS dissimilarity are computed with these parameters for each dataset, and then exhaustively searched over N[OCC ]and U[C ]to find the largest f[T ]value. [0047]FIGS. 2a-2 d illustrate a series of single parameter searches to maximize the channel-consistent total-true rate for forewarning of epileptic seizures. The value of one parameter is systematically changed, while the others are fixed. In this example, the first-round optimization uses the parameter pair, {S, d}, constrained by a computational limit on the numeric labels for the CPS bins using modular arithmetic. This limit arises from the largest double-precision real number 2^52 that can be distinguished from one unit larger: S^2d≦2^52, or d≦INT(26ln2/lnS). Two PS symbols (S=2) limit the search to 2≦d≦26; three PS symbols (S=3) correspond to 2≦d≦16; and so forth. Since equiprobable symbols always yield larger values for f[T], the analysis in this example was performed using this symbolization. [0048]FIG. 2a shows f[T ]as a noisy function of the number of equiprobable symbols, S, with the number of PS dimensions, d=2. The largest channel-consistent total-true rate is f[T]=0.8667 at S=20 and S=26. Next, the largest value of f[T ]was obtained over all of the possible equiprobable symbols for each value of PS dimension in the range, 2≦d≦26. These results are displayed as the solid curve in FIG. 
2b, showing that f[T ]decreases non-monotonically from a maximum at d=2 to a minimum at d=17-18, and then rises somewhat for still larger values of d. For completeness, FIG. 2b also shows similar analysis only for an even number of uniform symbols (dash-dot curve), because the central bin for an odd number of uniform symbols accumulates the vast majority of the PS points, thus degrading the results in every case. Based on these results, further analysis for uniform symbols is unnecessary. Thus, we set d=2 and S=20 (equiprobable symbols) while varying the half-width of the artifact filter window, w, as shown in FIG. 2c. It is observed that f[T ]also is a noisy function of w with a maximum value of f[T]=0.8833 at w=54 and w=98. The former value (w=54) is selected for the next parameter scan, because a slight trend for larger f[T ]lies in that region. FIG. 2d displays f[T ]versus the number of points in each cutset, N, in increments of 1000 with the other parameters fixed at S=20, d=2, and w=54. This plot shows that f[T ]rises non-monotonically with increasing cutset length to a noisy plateau for N>21,000. The largest value, f[T]=0.8833, occurs at N= 22,000; N=54,000, and N=61,000. All of the above analysis used a time delay, λ=INT[0.5+M[1]/(d−1)], which in general is different for every channel of each dataset, as discussed previously. However, this parameter also is a variable. Consequently, variation of f[T ]was determined as a function of the time delay, λ, which was set to the same value for every channel of every dataset. A single peak occurs at f[T]=0.9 for λ=17, with the other parameters fixed at S=20, d=2, w=54, and N=22 000. [0049] The above results show that a substantial improvement in the rate of channel-consistent total trues (f[T]=0.9) is obtained for sub-optimal choices of the analysis parameters. Moreover, the new set of analysis parameters yielded credible event forewarning (29 total trues) for solitary data sets from each of 30 different patients. It is expected that a robust choice of the parameters can be obtained by analysis of much more data, and subsequently fixed for an ambulatory device. Alternatively, initial clinical monitoring might be used to determine the best patient-specific analysis parameters, which would be fixed subsequently for ambulatory monitoring. This sequence of single parameter searches for maximizing the objective function was necessary because the computational effort for an exhaustive search is excessive. However, a small amount of experimental data or very narrow parameter ranges make an exhaustive parameter search feasible, as one normally skilled in the art can appreciate. [0050] The best choice of the parameter set for analyzing the dissimilarity measures for each channel, (e.g., N, w, S, d, B, N[OCC], and U[C]), depends not only on the system, but also on the specific data under consideration. A “reasonable” value for the number of base case cutsets, 5≦B≦10, was selected as a balance between a reasonably short quasi-stationary period of “normal” dynamics and a sufficiently long period for statistical significance. 
This method of the present invention involves: selecting the parameters to be included in a parameter set, such as {N, w, S, and d}, finding specific values for the parameters that maximize the objective function for the respective channels, computing the renormalized measures of PS dissimilarity for the specific data sets with the parameters set to their best values, and systematically searching over the values of N[OCC ]and U[C ]to find the best channel for forewarning indication. [0051] Besides epileptic seizures, the above methods can be applied to detect condition change in patients having cardiac or breathing difficulties. [0052] The above methods can also be applied to electric motor predictive maintenance, other machinery, and physical processes. In the second example, data sets were recorded in snapshots of 1.5 seconds, sampled at 40 kHz (60,000 total time-serial samples), including three-phase voltages and currents, plus tri-axial accelerations at inboard and outboard locations on a three-phase electric motor. The subsequent description describes analysis of one seeded fault. [0053] The test sequence began with the motor running in its nominal state (first dataset), followed by progressively more severe broken rotor bars. The second dataset involved a simulated failure that was one rotor bar cross section cut through by 50%. The third dataset was for the same rotor bar now cut through 100%. The fourth dataset was for a second rotor bar cut 100%, exactly 180° from and in addition to the first rotor fault. The fifth dataset was for two additional rotor bars cut adjacent to the first rotor bar, with one bar cut on each side of the original, yielding four bars completely open. These five datasets were concatenated into a single long dataset for ease of analysis. The three-phase voltages, V[i], and currents, I[i], were converted into instantaneous power, P= Σ[i ]I[i]V[i], where the sum runs over the three phases. We split each of the five datasets into five subsets of 12,000 points each, giving twenty-five (25) total subsets. The power has a slow, low-amplitude variation with a period of roughly 0.1 s. To avoid confounding the analysis, this artifact was removed with the zero-phase quadratic filter. [0054] The PS reconstruction parameters were systematically varied, as before, to obtain the most linear increase in the logarithm of condition change, in a least-squares sense, for the broken-rotor test sequence. FIG. 3 shows that the phase-space dissimilarity measures rise by ten-fold over the test sequence. The parameters are: S=88 (number of equiprobable phase-space symbols), d=4 (number of phase-space dimensions), λ=31 (time delay lag in time steps), and w=550 (half width of the artifact filter window in time steps). The exponential rise in the severity of the broken-rotor faults (doubling from 0.5 to 1.0 to 2.0 to 4.0) is mirrored in FIGS. 3a-3 d by a linear rise (solid line) in the logarithm of all four dissimilarity measures (*) for the chosen set of analysis parameters. [0055] The present invention not only responds to the problem of false positives and false negatives in forewarning of events from biomedical data, but also is also applicable to forewarning of machine failures and even failures in other physical processes capable of being measured through sensors and transducers. [0056] A third example involves tri-axial acceleration data from a motor connected to a mechanical load via a gearbox. Application of excess load causes accelerated failure of the gears. 
The data were obtained at ten-minute intervals through the test sequence, sampled at 102.4 kHz. The total amount of data was 4.5 GB (three accelerometer channels, times 401 snapshots for a total of 1203 files). The 100,000 data points were serially concatenated from each of the data files into a single three-channel dataset for ease of analysis (1.6 GB). Each 100,000-point snapshot was divided into ten 10,000-point subsets for this analysis; the results were then averaged over these ten cutsets to obtain a typical value for the entire snapshot. The accelerometer data shows quasi-periodic, complex, nonlinear features. [0057] The use of tri-axial acceleration has an important advantage, which can be explained as follows. Acceleration is a three-dimensional vector that can be integrated once in time to give velocity (vector). Mass times acceleration (vector) is force (vector). The vector dot-product of force and velocity is power (scalar). Thus, three-dimensional acceleration data can be converted directly into a scalar power (within a proportionality constant), which captures the relevant dynamics and has more information about the process than any single accelerometer channel. The resulting accelerometer power also has very complex, nonlinear features. [0058]FIG. 4 shows a systematic rise in a composite PS dissimilarity of accelerometer power as the test progresses, with an additional abrupt rise at the onset of failure, which occurs at dataset # 394. This result was obtained by constructing a composite measure, C[i], of condition change, namely the sum of the four renormalized measures of dissimilarity in accelerometer power for each of the datasets in the test sequence. The following method was used to obtain this result: [0059] 1) Construct a composite measure, C[i]=U(χ^2)+U(χ[c] ^2)+U(L)+U(L[C]), for the i-th dataset; [0060] 2) Fit C[i ]to a straight line, y[i]=ai+b via least-squares over a window of m datasets (datasets #194-393 in this case), also shown in FIG. 4; [0061] 3) Obtain the variance, σ[1] ^2=Σ[i](y[i]−C[i])^2/(m−1), of C[i], about the straight-line fit from step 2; [0062] 4) Determine the statistic, χ^2=Σ[i](y[i]−C[i])^2/σ[1] ^2, from this straight-line fit for datasets #394-400; [0063] 5) Maximize the value of χ^2 from step 4 over the parameters (d, S, λ). [0064] The variance, σ[1] ^2, in step 3 measures the variability of C[i ]about the straight-line fit over the window of m datasets (#194-393). [0065] The statistic, χ^2, in step 4 measures the variability of datasets #394-400 from the straight-line fit. The value from step 4 is χ^2=180.42, which is inconsistent with a normal distribution for n=7 degrees of freedom (corresponding to the seven datasets in the computation of the χ^2 statistic in step 4), and is a strong indication of the failure onset. Indeed, FIG. 5 shows a clear statistical indication of failure onset. The bottom plot (labeled “normal distribution”) in FIG. 5 depicts the maximum value of the χ^2 statistic for n sequential values out of 200 samples from a Gaussian (normal) distribution with zero mean and a unity sample standard deviation. The middle curve in FIG. 5 is the maximum value of the χ^2 statistic, using step 4 above, for n sequential values of the composite measure, C[i], over the window of m=200 datasets that span the straight-line fit (datasets #194-393). The upper curve in FIG. 5 is the χ^2 statistic, also using step 4 above, for n sequential values from datasets #394-400. 
This upper curve (labeled “failure onset”) deviates markedly from the lower curves after two datasets (#394-395), with overwhelming indication for three and more datasets. Thus, the composite PS dissimilarity measure provides an objective function that shows consistent indication of condition change, as well as clear indication of the failure onset. [0066] The fourth and final example used the same overloaded gearbox test bed, as in the third example. A separate test sequence acquired load torque that was sampled at 1 kHz. Each ten-second dataset had 10,000 data points, all of which were concatenated serially into a single data file for ease of analysis. These data are quasi-periodic with complex, nonlinear features. The analysis parameters were varied, as described above, to obtain phase-space dissimilarity measures that remain below a threshold for datasets #1-29. All four dissimilarity measures subsequently rise, beginning at dataset #30, and remain above threshold (U>U[C]=0.894) for the remainder of the test sequence until final failure at dataset #44. These results illustrate that the phase-space dissimilarity measures can provide forewarning of an impending machine failure, not unlike the first example for forewarning of an epileptic event from EEG data. [0067] This has been a description of detailed examples of the invention. These examples illustrate the technical improvements, as taught in the present invention: use of equiprobable symbols, quantitification of channel-consistent total-true rate of forewarning, various objective functions for event forewarning, different search strategies to maximize these objective functions, and forewarning of various biomedical events and failures in machines and physical processes. Typical biomedical events and data include, but are not limited to, epileptic seizures from EEG, cardiac fibrillation from EKG, and breathing difficulty from lung sounds. Typical machines include, but are not limited to, motors, pumps, turbines, and metal cutting. Typical time-serial machine data include, but are not limited to, electrical current, voltage, and power; position, velocity, and acceleration; and temperature and pressure. It will apparent to those of ordinary skill in the art that certain modifications might be made without departing from the scope of the invention, which is defined by the following claims. [0014]FIG. 1 is a graph of renormalized dissimilarity as a function of forewarning time (t) for determining true positive (TP), true negative (TN), false positive (FP) and false negative (FN) forewarning indications. [0015]FIGS. 2a-2 d are graphs of channel-consistent total-true rates for forewarning of epileptic seizures, f[T ]vs. selected parameters: (a) f[T ]versus S (number of phase space symbols) for d=2, w= 62, N=22,000; (b) largest f[T ]versus d (number of phase space dimensions) for w=62, N=22,000 using equiprobable symbols (solid curve) and uniform symbols (dash-dot curve); (c) f[T ]versus w (half width of the artifact-filter window width) for d=2, S=20, N=22,000; and (d) f[T ]versus N (number of data points in each cutset) for d=2, S=20, and w=54. [0016]FIGS. 3a-3 d are semi-log[10 ]plots of the four nonlinear dissimilarity measures for a set of broken-rotor seeded-fault power data. Dataset #1 is for the nominal (no fault) state. Dataset #2 is for the 50% cut in one rotor bar. Dataset #3 is for the 100% cut in one rotor bar. Dataset #4 is for two cut rotor bars. Dataset #5 is for four cut rotor bars. 
The exponential rise in the severity of the seeded faults is shown as an almost linear rise (solid line) in the logarithm of all four dissimilarity measures (*) for the chosen set of phase-space parameters. [0017]FIG. 4 is a plot of the composite PS dissimilarity measure, C[i], versus dataset number for a gearbox failure (d=2, S=274, and λ=1). [0018]FIG. 5 is a plot of the maximum value of the χ^2 statistic versus the number (n) of sequential points from the sample distribution for (bottom) a normal distribution with zero mean and unity sample standard deviation; (middle) composite measure, C[i], of condition change from the 200 datasets that span the straight-line fit; (top) composite measure, C[i], of condition change during failure onset (datasets #394-400). The middle and top curves use the same analysis parameters as in FIG. 4.
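For the machine examples, the composite-measure procedure in paragraphs [0058] through [0063] (sum the four renormalized measures, fit a straight line over a reference window of datasets, then test later datasets against that fit with a χ^2 statistic) can also be sketched briefly. This is illustrative only: the window boundaries below are hypothetical 0-based stand-ins for the dataset numbers quoted in the text, and the U(...) arrays are assumed to have been computed as in the earlier sketch.

```python
import numpy as np

def composite_measure(U_chi2, U_chi2c, U_L, U_Lc):
    """Step 1: C_i = U(chi^2) + U(chi_c^2) + U(L) + U(L_c) for each dataset i."""
    return U_chi2 + U_chi2c + U_L + U_Lc

def failure_onset_statistic(C, fit_window, test_window):
    """Steps 2-4: least-squares line over fit_window, variance about that fit,
    then the chi-squared statistic of the test_window datasets about the same line."""
    i_fit = np.arange(*fit_window)
    a, b = np.polyfit(i_fit, C[i_fit], 1)                     # step 2: y_i = a*i + b
    resid = C[i_fit] - (a * i_fit + b)
    var = np.sum(resid ** 2) / (len(i_fit) - 1)               # step 3
    i_test = np.arange(*test_window)
    chi2 = np.sum((C[i_test] - (a * i_test + b)) ** 2) / var  # step 4
    return chi2, len(i_test)

# Hypothetical usage mirroring the gearbox example (0-based indices):
# chi2, n = failure_onset_statistic(C, fit_window=(193, 393), test_window=(393, 400))
# A chi2 value far larger than n samples from a normal distribution could plausibly
# produce (the comparison shown in FIG. 5) signals the onset of failure.
```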
{"url":"http://www.google.com/patents/US20040087835?ie=ISO-8859-1&dq=6480844","timestamp":"2014-04-20T14:35:46Z","content_type":null,"content_length":"92087","record_id":"<urn:uuid:10d709d4-b5a2-496f-a3bc-219e06f63fef>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra Tutors Los Angeles, CA 90027 Ivy League grad specializing in Math, Writing, & Test Prep ...I have a penchant for numbers, conquering multivariable calculus and the fundamentals of computer science by the end of high school, and continuing on to linear algebra and number theory in college. Additionally, I am a trained and Federally authorized tutor... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/geo_Pasadena_CA_linear_algebra_tutors.aspx?d=20&pagesize=5&pagenum=3","timestamp":"2014-04-20T16:55:05Z","content_type":null,"content_length":"61416","record_id":"<urn:uuid:9727d719-f83d-42d3-b7ba-4cdb7d427b51>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
parametric dy/dx

Can somebody tell me if I am correct? Find dy/dx in terms of t for the curve with the parametric representation:
x = (1-t)(1+3t)
y = (1+t)(1-3t)

My solution:
x = 1 + 2t - 3t^2
y = 1 - 2t - 3t^2
dx/dt = 2 - 6t
dy/dt = -2 - 6t
dy/dx = (dy/dt)/(dx/dt)

Can somebody tell me if this is correct, and can somebody recommend a good website with more parametric dy/dx examples? My maths book is not very good.

Re: parametric dy/dx
I think it's correct! Sorry, I don't know any site with more exercises.

Re: parametric dy/dx
Thanks Siron, yet again! The problem with my maths book is the examples are really basic and the self tests at the end of the chapters are really difficult. Not all the answers are available. Thanks for your help.

Re: parametric dy/dx
You're welcome! You can create exercises by yourself if you want and make them more difficult.

Re: parametric dy/dx
You could, of course, cancel. dy/dx = (-1-3t)/(1-3t) might be considered a better answer.
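A quick way to check an answer like this is to let a computer algebra system do the differentiation. A small sketch with sympy (assuming it is installed):

```python
import sympy as sp

t = sp.symbols('t')
x = (1 - t) * (1 + 3*t)
y = (1 + t) * (1 - 3*t)
dydx = sp.simplify(sp.diff(y, t) / sp.diff(x, t))
print(dydx)   # (3*t + 1)/(3*t - 1), equivalent to (-1 - 3t)/(1 - 3t)
```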
{"url":"http://mathhelpforum.com/calculus/186444-parametric-dy-dx.html","timestamp":"2014-04-20T11:56:22Z","content_type":null,"content_length":"40567","record_id":"<urn:uuid:e949564b-ee9c-4418-b5ef-7274ddf32e0e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Reducible Equations

a) $4^{2x} - 5.4^x + 4 = 0$

I know I have to make $u = 4x$ or something along those lines, but I have no idea what to do next.

b) Find the value of A, B, C if $A(x+1)^2 + Bx+C \equiv 2x^2 + 8x -6$

So I expand the first one...
$Ax^2 + 2Ax + A + Bx + C$
$Ax^2 + x(2A+B)+A+C$

With this I know that A = 2, B = -4, but I can't figure out C. I get -2 but it's wrong. What did I do wrong?

a) You actually let $u = 4^x$ so that you get a quadratic equation in $u$: $u^2 - 5u + 4 = 0$. Solve it for $u$, then solve for $x$.

b) $A + C = -6$ and you know $A = 2$. Solve for $C$. I think you'll also find that $B = 4$, not $-4$.

Oh, I thought $5.4^{x}$ meant $\left(\dfrac{54}{10}\right)^x$... I think it would be better to use \cdot to avoid confusion, as in $5\cdot4^x$.

ok, I get it now! $2 + C = -6$, so $C = -8$.

Here is another (sorry, I'm trying to pick this up myself with only the textbook and I'm trying my best). Find the value of A, B, C if $Ax(x-1)+Bx^2+C(x-1)\equiv x-4-3x^2$

Expand: $Ax^2 - Ax+Bx^2+Cx-C$

I'm not so sure what to do next, so I group the terms into groups? $x^2(A+B) -x(A-C) -C$

I'm pretty sure this is incorrect. Could someone help me out (not necessarily give me the answer)? Thanks for all the help!!!!

Technically what you have is right, but it's better to write it as $(A + B)x^2 + (C-A)x - C \equiv -3x^2 + x - 4$. That means $A + B = -3$, $C - A = 1$, $-C = -4$. Solve for $A, B, C$.

Thanks!! I finally get it!!! I appreciate your help so much!! Thanks!!!!!!!!
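For completeness, finishing part a) with the substitution suggested above: putting $u = 4^x$ gives $u^2 - 5u + 4 = 0$, so $(u - 1)(u - 4) = 0$ and $u = 1$ or $u = 4$. Then $4^x = 1$ gives $x = 0$, and $4^x = 4$ gives $x = 1$.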
{"url":"http://mathhelpforum.com/pre-calculus/150902-reducible-equations.html","timestamp":"2014-04-19T05:46:58Z","content_type":null,"content_length":"57131","record_id":"<urn:uuid:407987a4-29be-407f-b0d5-be630114acd2>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry Applications

April 27th 2010, 09:51 AM  #1
Jamison takes a 20 foot board and props it up with a one foot block at the far end to make a daredevil jump ramp for his bike. What is the measure of the angle of elevation of the ramp to the nearest degree?

April 27th 2010, 09:59 AM  #2
Ignoring the thickness of the board, it looks like an application of the sine function. Let's see what you get.

April 27th 2010, 10:02 AM  #3
What do you mean? I don't know how to solve it.

April 27th 2010, 10:21 AM  #4
Hi triknobchin,
The 20 foot board will be the hypotenuse of a right triangle formed by the board, the road, and the 1 foot block. The angle of elevation will be the angle formed where the board meets the highway. You have the opposite side and the hypotenuse, so use the inverse sine to find the angle of elevation.

April 27th 2010, 10:48 AM  #5
I found it's 3 degrees. Thanks so much!
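As a quick numeric check, my own addition rather than part of the thread, the angle comes from sin(theta) = opposite/hypotenuse = 1/20:

    import math

    board = 20.0   # length of the board, the hypotenuse (feet)
    block = 1.0    # height of the block, the side opposite the angle (feet)

    theta = math.degrees(math.asin(block / board))
    print(round(theta), theta)   # 3  (about 2.87 degrees)

Rounded to the nearest degree, this agrees with the 3 degrees found in the thread.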
{"url":"http://mathhelpforum.com/trigonometry/141715-trigonometry-applications.html","timestamp":"2014-04-20T02:48:48Z","content_type":null,"content_length":"41535","record_id":"<urn:uuid:c487db76-296a-4cf2-b162-6d9f453603be>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Orinda Algebra 2 Tutor

Find an Orinda Algebra 2 Tutor

"...And finally, as a PhD student at Cal, I have been a TA for an undergrad-level intro to discrete math and probability class required for all computer science undergraduate students at Cal. In addition to my experience teaching recitation sections of up to 40 students, I am very comfortable (and e..."
27 subjects: including algebra 2, chemistry, physics, geometry

"I am a highly successful high school physics and earth science teacher and tutor since 1996. I have been certified to teach physics and earth science/geosciences in three states including California. Courses that I have taught include Conceptual Physics, General Physics, Honors Physics, Advanced P..."
32 subjects: including algebra 2, reading, calculus, physics

"I am an experienced and licensed teacher. I have taught 6th grade earth science and 8th grade physical science. I have tutored algebra 2, geometry, and Spanish as well as various sciences."
24 subjects: including algebra 2, chemistry, Spanish, reading

"...I am also currently tutoring elementary students in reading comprehension, vocabulary, and reading fluency. I have helped students raise their scores by 275+ points on the math, verbal, and critical reading sections between their first practice tests and their actual test scores after receiving ..."
34 subjects: including algebra 2, Spanish, reading, English

"...At UC Davis, I worked at the Student Academic Success Center as an organic chemistry tutor. It was my job to assist students understand challenging concepts by breaking them down into smaller components and going through every step slowly and thoroughly. I often had to adjust my teaching methods to the different learning styles."
6 subjects: including algebra 2, chemistry, physics, biology
{"url":"http://www.purplemath.com/orinda_algebra_2_tutors.php","timestamp":"2014-04-20T13:23:39Z","content_type":null,"content_length":"23739","record_id":"<urn:uuid:ac8229c5-9a5b-4ea4-9b1f-fa85deced8ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00125-ip-10-147-4-33.ec2.internal.warc.gz"}