| content (stringlengths 86 to 994k) | meta (stringlengths 288 to 619) |
|---|---|
Here's the question you clicked on:
What is the solution of the system of equations?
|
{"url":"http://openstudy.com/updates/50f98dc4e4b007c4a2ec24a4","timestamp":"2014-04-18T16:11:20Z","content_type":null,"content_length":"49945","record_id":"<urn:uuid:32839950-dfe4-4860-915c-bd390c79892c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integral closure
But for the quadratic case, you can do this by **hand**, and the ring of integers in [tex]\mathbb{Q}(\sqrt{2})[/tex] is just [tex]\mathbb{Z}[\sqrt{2}][/tex].
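For reference, the general quadratic case is equally elementary: for a squarefree integer [tex]d[/tex], the ring of integers of [tex]\mathbb{Q}(\sqrt{d})[/tex] is [tex]\mathbb{Z}[\sqrt{d}][/tex] when [tex]d \equiv 2,3 \pmod 4[/tex] and [tex]\mathbb{Z}\left[\frac{1+\sqrt{d}}{2}\right][/tex] when [tex]d \equiv 1 \pmod 4[/tex], with discriminant [tex]4d[/tex] or [tex]d[/tex] respectively; [tex]d=2[/tex] is the case above.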
Do people read anymore?
And discriminants, ramified primes, etc. are pretty basic concepts you learn in a first course in algebraic number theory. It's common for students to ask, "So, how do you find the integral closure of a number field in general?" after having seen a direct (and very elementary) calculation of the integral closure of a quadratic extension.
I apologize so much for having given a student a little perspective on how discriminants can be used, rather than just repeating what's in every number theory textbook (which is to say, the very basic calculation of integral closure in quadratic extensions).
So we should not answer that question until they have learned some more class field theory. Oh, and while we're at it, we should not discuss these very basic ideas until we have discussed Langlands.
Quite ridiculous.
|
{"url":"http://www.physicsforums.com/showthread.php?p=2953872","timestamp":"2014-04-19T02:23:47Z","content_type":null,"content_length":"82206","record_id":"<urn:uuid:25d6bab9-7f0a-4c57-ad77-446d1b8a634f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Does the conservation law prove that energy is eternal?
That doesn't strike me so much as an assumption as a statement that it's always going to be possible to describe the world in terms of
symmetries. One can show, for instance, that it is possible to take purely random, time-variant laws of nature and show that, due to the ambiguity of the time coordinate, there exists a
time-invariant way of describing the system.
I tend to expect that in a sense, then, the existence of
symmetries is an inevitability, just based upon how we approach understanding the world.
That said, physical theories don't just rest at the most basic of symmetries like time invariance. Hypothetical high-energy laws of physics typically consider all of physical law as stemming from
some fundamental construct (particles, strings, what have you) that obey some very specific symmetries.
I don't think the most basic of symmetries are really an assumption so much as order we impose on the world by the way in which we describe it.
|
{"url":"http://www.physicsforums.com/showpost.php?p=2316843&postcount=24","timestamp":"2014-04-21T02:15:09Z","content_type":null,"content_length":"7921","record_id":"<urn:uuid:dab5469c-e22d-4a7e-850f-f18abe89421d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Let's imagine, instead of numbers, we have a sequence of animals. The sequence begins with three animals: a green, tap-dancing elephant; a purple, singing cockatoo; and an orange, quarterback tiger. We
could refer to the first term as the green, tap-dancing elephant, the second as the purple, singing cockatoo, and so on. But we will just become confused in our circus of exotic animal talent.
Or, we could just give each term of the sequence a short label.
We call the first term of a sequence a[1] (the elephant in our circus), the second term a[2] (the cockatoo), and so on:
a[1], a[2], a[3], ..., a[n], ....
The n^th term, a[n], is called the general term of a sequence.
Sometimes it's useful to think of a sequence as starting with a "zero^th" term instead of a "first" term. Then we would think of the sequence as
a[0], a[1], a[2], a[3], ..., a[n], ....
It sounds like we are just being difficult, defining a term by a number that means none. We aren't. We'll tell you why later. For now, assume sequences start with n = 1 unless told otherwise.
With sequences, we like to know how to get there from here. To describe or define a sequence we need to know how to find the general term. It's like knowing how to get from your couch, where you were
reading up on Greek mythology for giggles, to the kitchen to fetch a slice of watermelon.
This description may or may not involve a mathematical formula. Some sequences can't be described by a nice tidy formula.
Sample Problem
Define a sequence by
a[n] = high temperature in Mountain View, CA on the n^th day after January 1, 2010.
This is a perfectly reasonable definition. However, as far as anyone knows, there is no mathematical formula that can produce the n^th term. Despite our extensive knowledge of rain dancing, we can't
predict the weather that accurately.
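To make the distinction concrete, here is a small illustrative sketch in Python (ours, not part of the original lesson); the temperature values below are invented placeholders, not real Mountain View data.

```python
# A sequence with a tidy formula: a[n] = 2n + 1 for n = 1, 2, 3, ...
def a(n):
    return 2 * n + 1

# A sequence defined by lookup rather than by a formula, like the
# "high temperature on the n-th day" example above (values are made up).
high_temp = {1: 59, 2: 61, 3: 57}

print([a(n) for n in range(1, 6)])  # [3, 5, 7, 9, 11]
print(high_temp[2])                 # 61
```

The first sequence can be evaluated at any n; the second only where a value happens to have been recorded.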
|
{"url":"http://www.shmoop.com/sequences/evaluating-sequence-term.html","timestamp":"2014-04-16T10:29:09Z","content_type":null,"content_length":"27437","record_id":"<urn:uuid:049d99ca-4e45-452b-a491-445270a6d19a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: RE: problem with reshape
From jaks@zoom.co.uk
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: problem with reshape
Date Wed, 23 Jul 2003 14:22:03 +0100
My variables are
i=hhid (household id, for each household),
j= memno (line number, different for every member of each household. e.g the
2nd person in household 1 will have memno=2, as would the 2nd person of
household 3,4,5,6...and so on)
The variables such as age, sex, relation to household head etc go from 01 to
22, assuming that there is at least one household in the sample with up to 22
members, and are thus currently suffixed by their number. e.g memage_04
represents the age of the 4th member of any given household in the sample.
My data is currently wide, i.e. the title row is: hhid
with columns going across as memage_01 memsex_01 reltohead_01 memage_02
memsex_02 reltohead_02 ... up to 22.
When typing in reshape long (varname), i(hhid) j(memno)
I get the message 'memno contains all missing values' and it will not allow the
reshape to commence. I would like to reshape the data from wide to long.
My aim is to get a column for memno alongside columns for variables memage,
memsex, reltohead without the numbered suffix. i.e.
memno memsex memage reltohead
1 . . .
2 . . .
. . . .
. . . . (for household 1)
. . . .
22 . . .
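For illustration only, here is a sketch of the same wide-to-long reshape done in pandas rather than Stata (an analog, not a fix for the Stata error; the column names follow the post and the values are invented). In Stata itself, reshape long takes the stub names listed without parentheses, e.g. something like reshape long memage_ memsex_ reltohead_, i(hhid) j(memno), followed by renaming the trailing-underscore variables.

```python
# A pandas analog of the desired wide-to-long reshape (illustration only).
import pandas as pd

wide = pd.DataFrame({
    "hhid": [1, 2],
    "memage_01": [34, 51], "memsex_01": [1, 2], "reltohead_01": [1, 1],
    "memage_02": [30, 16], "memsex_02": [2, 1], "reltohead_02": [2, 3],
})

long = pd.wide_to_long(
    wide,
    stubnames=["memage", "memsex", "reltohead"],
    i="hhid",              # household id
    j="memno",             # member line number, taken from the _01, _02 suffixes
    sep="_",
    suffix=r"\d+",
).reset_index()

print(long.sort_values(["hhid", "memno"]))
```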
|
{"url":"http://www.stata.com/statalist/archive/2003-07/msg00528.html","timestamp":"2014-04-16T10:16:10Z","content_type":null,"content_length":"6374","record_id":"<urn:uuid:db3e456b-d668-4c68-950d-ade0b2250321>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US5202847 - Digital signal processing
The present invention relates to digital signal processing and particularly to apparatus and methods for use in computing separable two dimensional linear transforms on blocks of digital data.
It is known to calculate separable two dimensional linear transforms on blocks of data elements for various purposes in digital signal processing. Such transforms may include the discrete cosine
transform (DCT), the inverse discrete cosine transform, a low pass filter and identity. Discrete cosine transforms are particularly useful in image compression systems and may be used to limit the amount
of data necessary for transmission and storage of video signals. The above mentioned transforms may be used to meet requirements of the CCITT Video Codec standard.
It is the object of the present invention to provide improved apparatus and methods for computing separable two dimensional linear transforms with improved circuitry which may permit more than one
type of transform to be carried out on data passing through the circuitry. It is a further object of the invention to provide improved circuitry requiring a reduced number of transistors thereby
providing benefits in the chip area required, cost and power dissipation.
The invention provides digital signal processing circuitry comprising a first and second processor each for effecting a linear transform on respective blocks of digital data, said first and second
processors being coupled as a linear pipeline having timing control circuit whereby a succession of blocks of digital data may be processed by each of said first and second processors in succession,
each of said first and second processors being arranged to effect one of a selection of different linear transforms and each including selection circuitry independently operable of the other
processor whereby said first and second processors may effect simultaneously different transforms on respective blocks of data.
The invention also provides a method of computing linear transforms on blocks of digital data, said method comprising transmitting data successively through first and second processors each arranged
to carry out a linear transform, controlling flow of data through said first and second processors in a time controlled manner to effect a linear pipeline, selecting a first set of coefficients for
use in said first processor, selecting a second set of coefficients for use in the second processor and transmitting blocks of data successively through the two processors whereby different
transforms may be carried out simultaneously by the first and second processors on respective blocks of data.
Preferably each processor includes a bank of carry save adders to effect multiplication of transform coefficients by repeated addition to form a plurality of inner products, each addition
corresponding to one bit position of a data word in said blocks of data.
Preferably sum and carry signals from said bank of carry save adders are resolved by a carry propagate adder before data is supplied to the second processor.
In a preferred embodiment said first and second processors comprise substantially the same circuitry.
Preferably the inner product computation is effected by distributed arithmetic techniques.
Preferably each bank of adders operates in 2's complement format and inverter circuitry is provided to invert data supplied to the adder corresponding to the most significant bits of data supplied
from each word.
Preferably the inversion is effected by one or more cross-over switches used to interchange the connection of a pair of lines to an array of adders forming a succession of addition stages, the or
each cross-over switch connecting a pair of lines to adders in successive stages and interchanging the connection of the two lines on operation of the switch.
In a preferred embodiment the digital signal processing circuitry is implemented in CMOS.
Preferably said circuitry is arranged to effect two dimensional separable transforms.
Preferably the transform is effected as two successive one dimensional transforms.
Preferably the invention is arranged to effect a discrete cosine transform. The invention may also be arranged to provide an inverse discrete cosine transform, a low pass filter and identity.
FIG. 1 is a schematic view of the system used to compute a separable two dimensional linear transform,
FIG. 2 is a block diagram of the system used to compute an inner product in accordance with FIG. 1,
FIG. 3 is a block diagram of an apparatus for implementing the system shown in FIG. 1, in accordance with the invention,
FIG. 4 shows a register system for use in FIG. 3,
FIG. 5 is a block diagram of part of the apparatus shown in FIG. 3,
FIG. 6 shows an adder network for use in FIG. 5,
FIG. 7 shows a further part of the apparatus of FIG. 3,
FIG. 8 shows an inverter arrangement for use in FIGS. 5 and 6,
FIG. 9 is a circuit diagram of one implementation for use in FIG. 8 and
FIG. 10 shows a pipeline control arrangement for the apparatus of FIG. 3.
The embodiment of the invention which is shown in the drawings is arranged to compute separable two dimensional linear transforms on 8×8 blocks of 16 bit data elements. Processing of contiguous blocks is
continuous and the transform coefficients are programmed in a ROM. The apparatus of this example may carry out four different transforms which are discrete cosine transform, inverse discrete cosine
transform, a low pass filter and identity.
Separable 2-D Linear Transforms
A one dimensional linear transform, which transforms an N element input vector f into an N element vector T of transform coefficients, is
$$T(k)=\sum_{n=0}^{N-1} f(n)\,G(n,k),\qquad k=0,1,\ldots,N-1,$$
where G is the forward transform kernel. In matrix form, the 1-D transform can be written as $T=G^{T}f$. This transform can be viewed as N inner products, each of which requires N multiplications and N-1 additions.
The inverse transform, which transforms vector T back into f, is
$$f(n)=\sum_{k=0}^{N-1} T(k)\,H(k,n),$$
where H is the inverse transform kernel.
A two dimensional linear transform maps an M×N input array f(m,n) into an M×N array of coefficients
$$T(u,v)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f(m,n)\,g(m,n,u,v),$$
and the inverse transform is
$$f(m,n)=\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} T(u,v)\,h(m,n,u,v).$$
The forward kernel is separable if $g(m,n,u,v)=g_{1}(m,u)\,g_{2}(n,v)$ and symmetric if $g_{1}=g_{2}$. The same applies to the reverse kernel. Separable transforms can be performed in two steps, each of which is a 1-D transform. First the rows of the input array are transformed, $F'=FG_{2}$, then the columns, $T=G_{1}^{T}F'$, or $T=G_{1}^{T}FG_{2}$.
If $G_{1}=G_{2}=G$ (as is the case with the DCT), then
$$T=G^{T}FG.$$
The Discrete Cosine Transform (DCT)
A special case of the above is the discrete cosine transform and its inverse. The 2-D DCT of the array of data points y(m,n), m=0, 1, ..., (M-1), n=0, 1, ..., (N-1) is, in the usual normalisation,
$$Y(u,v)=\frac{4\,c(u)\,c(v)}{MN}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} y(m,n)\cos\frac{(2m+1)u\pi}{2M}\cos\frac{(2n+1)v\pi}{2N}.$$
The transform is obviously separable, and if M=N, it is symmetric and can be expressed as two 1-D transforms. Thus the DCT of a data sequence x(n), n=0, 1, ..., (N-1) is defined as
$$G_{x}(k)=\frac{2\,c(k)}{N}\sum_{n=0}^{N-1} x(n)\cos\frac{(2n+1)k\pi}{2N},\qquad c(0)=\tfrac{1}{\sqrt{2}},\ c(k)=1\ \text{for}\ k\neq 0,$$
where $G_{x}(k)$ is the kth DCT coefficient.
For example, if N=4, the DCT computation can be written as the product of a 4×4 matrix of these cosine terms with the data vector (x(0), x(1), x(2), x(3)). The inverse DCT is defined as
$$x(n)=\sum_{k=0}^{N-1} c(k)\,G_{x}(k)\cos\frac{(2n+1)k\pi}{2N}.$$
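The separability can also be checked numerically. The following sketch is illustrative only and is not part of the patent; it uses the common orthonormal DCT-II scaling (an assumption, since the text above does not fix the normalisation) and verifies that T = G^T F G agrees with evaluating the double sum directly.

```python
# Numerical check that a separable 2-D transform reduces to two 1-D transforms.
import numpy as np

N = 8
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
c = np.where(np.arange(N) == 0, 1.0 / np.sqrt(2.0), 1.0)
# Orthonormal DCT-II basis: G[n, k] = sqrt(2/N) * c(k) * cos((2n+1) k pi / 2N)
G = np.sqrt(2.0 / N) * c[None, :] * np.cos((2 * n + 1) * k * np.pi / (2 * N))

F = np.random.default_rng(0).standard_normal((N, N))  # an arbitrary 8x8 block

T_separable = G.T @ F @ G                              # rows, then columns
T_direct = np.einsum("mu,nv,mn->uv", G, G, F)          # the double sum directly

assert np.allclose(T_separable, T_direct)
assert np.allclose(G @ T_separable @ G.T, F)           # inverse transform recovers F
```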
This example comprises CMOS circuitry for implementing transforms of the above type. As the transforms are separable, they are carried out as two successive one dimensional transforms. In FIG. 1 a
first one dimensional transform is carried out in a processor 11 on a matrix or block F in accordance with coefficients G stored in a ROM 12. The output 13 of the first transformation must be
transposed in a transposer 14 so that rows become columns and vice versa. This is necessary as the inputs to the processor 11 enter row by row but the outputs are formed column by column. The second
transformation is carried out by processor 15 in accordance with coefficients stored in a further ROM 16. This leads to an output 17 which again requires transposition of rows for columns. In this
way the two dimensional transform is effected as two successive one dimensional transforms. Each of the transforms requires performance of all the multiplications and additions of the matrix
elements. The same structure used to implement the arrangement in FIG. 1 can be used for any transform of the same size. It is merely necessary to select the correct coefficients in the ROMs 12 and
16. The multiplications and additions needed are implemented in this example by use of distributed arithmetic which is a method of computing inner products of two N-bit vectors one of which is a
constant coefficient vector. This method of computing using distributed arithmetic is known and described in an article by S.A. White "Applications of distributed arithmetic to digital signal
processing: a tutorial review" IEEE ASSP Magazine July 1989 pages 4-19. The contents of that article are incorporated herein by cross-reference.
The method of computing an N-bit inner product using distributed arithmetic is shown in FIG. 2.
Data representing one row or column of the matrix F consists of eight 16 bit words which are supplied on a bus 21 to a stack of registers or memory 22 which act as a corner turning device. Each word
is loaded in parallel into a respective register and the bits of all the registers are then shifted out in bit wise manner on outputs 23 forming an address for use in a 2.sup.N word memory 24. In
accordance with known distributed arithmetic techniques, the memory 24 contains precomputed combinations of the transform coefficients depending on the type of transform which is to be effected. As
the contents of the registers 22 are shifted out in bit wise manner the memory 24 is successively addressed at different locations and the output of the memory is formed on bus 25 and added into the
most significant part 26 of an accumulator 27 which performs a right shift (that is to positions of lower significance) between additions. As shown in FIG. 2, an output from part 26 is provided on
line 28 which is fed back through line 29 to a summing device 30 which combines the new output 25 from the memory 24 with the existing contents of part 26 of the accumulator 27. As shown in FIG. 2
the least significant part of the accumulator 27 is indicated at 31 and provides a separate output 32. The number of addition cycles required is equal to the number of bits in the input words on bus
21. A complete one dimensional transform requires N inner product units of the type shown in FIG. 2 each of which will have a different memory 24 with combinations of another N transform
coefficients. An N in FIG. 2. After the required number of addition cycles by each inner product unit of the type shown in FIG. 2 the combined outputs 28 and 32 of each of the inner product units
will provide the transform output 13 shown in FIG. 1.
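The following is a small software model of that inner-product computation (a sketch for illustration only; the patent realises it with ROMs, shift registers and carry-save adders rather than Python integers). The "ROM" holds every possible sum of the constant coefficients selected by one bit-slice of the inputs, and the most significant slice is given negative weight to handle 2's complement.

```python
def distributed_arithmetic_dot(coeffs, words, bits=16):
    """Inner product of constant `coeffs` with `bits`-bit two's-complement `words`,
    computed one bit-slice at a time from a precomputed lookup table."""
    n = len(coeffs)
    # rom[addr] = sum of the coefficients whose corresponding address bit is set.
    rom = [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
           for addr in range(1 << n)]
    acc = 0
    for b in range(bits):
        addr = sum(((w >> b) & 1) << i for i, w in enumerate(words))
        term = rom[addr] << b
        acc += -term if b == bits - 1 else term   # MSB slice has negative weight
    return acc

# Check against an ordinary dot product (arbitrary test values).
coeffs = [3, -7, 5, 2]
values = [12345, -678, 31000, -1]
words = [v & 0xFFFF for v in values]              # 16-bit two's-complement encoding
assert distributed_arithmetic_dot(coeffs, words) == sum(c * v for c, v in zip(coeffs, values))
```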
FIG. 3 shows in more detail the implementation of the present example for carrying out the system shown in FIG. 1. Two identical signal processing units 11 and 15 are used to carry out successive one
dimensional transforms. The processing units 11 and 15 are coupled together as a linear pipeline. Processing unit 11 has a parallel-serial converter 22 at its input followed by eight inner product
accumulators 33 and eight registers 34 providing an output 40 to an add/round and select circuit 41. The output 42 of circuit 41 is supplied to the transposer 14 which provides an input 21a to a
further parallel-serial converter 22a forming an input to the second processing unit 15. The second processing unit 15 has the same circuitry as processing unit 11 and similar parts have been marked
with the same reference numeral having the suffix a. The output 43 from the processing unit 15 is fed to a further add/round and select circuit 44 similar to circuit 41. This provides an output on
bus 45 representing the two dimensional linear transformation. As the processing units 11 and 15 have the same construction the following description will relate to the one processing unit 11 and it
will be understood that a similar description applies to processing unit 15. The parallel-serial converter 22 is shown in more detail in FIG. 4. The inner product accumulator of processor 11 which is
provided by ROMs 24 and accumulators 27 is shown in more detail in FIGS. 5 and 6 and the add/round and select unit 41 is shown in more detail in FIG. 7.
As already explained, data is fed word by word representing elements of a row or column of a matrix along bus 21 into the parallel serial converter 22. Each word in this example is sixteen bits long
and the rows and columns of each vector have eight elements. As shown in FIG. 4, the sixteen bits of each word are fed in parallel into a first register 50 and as each successive word arrives the
contents of register 50 are advanced to the next register 51 and so on until all eight words have been loaded into eight registers 50-57. In the next cycle of operation the contents of registers
50-57 are transferred into shift registers 58-65 each associated with a respective one of the registers 50-57. In subsequent cycles the contents of the shift registers 58-65 are output in bit wise
fashion two bits at a time starting with the least significant bits. Odd numbered bits are output on line 66 and even numbered bits are output on line 67. This provides two simultaneous bit slices on
lines 66 and 67. It will be understood that while the bits of the shift registers are being output in eight successive cycles registers 50-57 are being reloaded with new data words so that the
process is continuous.
As shown in FIG. 3, processor 11 has eight similar inner product accumulators 33 one of which is shown in more detail in FIG. 5. Lines 66 and 67 are each connected to each of the inner product
accumulators 33. As shown in FIG. 5, each of the input buses 66 and 67 is separated into two four bit signals and this reduces the ROM size necessary for the memories 24. Each of the four bit signals
is supplied to a respective one of four rows 68, 69, 70 and 71 each row having four ROMs 24. The ROMs 24 are arranged in four banks 72, 73, 74 and 75 each bank corresponding to a different transform.
Each of the memories 24 contains pre-computed combinations of transform coefficients depending on which of the four transforms discrete cosine transform, inverse discrete cosine transform, low pass
filter or identity is to be performed. A transform selector 77 is provided which outputs signals on line 76 to couple the inputs from line 66 and 67 to the bank of ROMs 24 related to the selected
transform. The non-selected ROMs are inoperative. The precomputed combinations from the ROMs 24 corresponding to the locations addressed by signals on the inputs 66 and 67 are output on lines 78, 79,
80 and 81 each of which consists of a sixteen bit bus. The outputs are summed by a bank of adders 82 consisting of a first stage of fifteen carry save adders 83, a second stage of seventeen carry save
adders 84, a third stage of eighteen carry save adders 85 and a fourth stage 86 consisting of nineteen carry save adders. As is shown in FIG. 6, the adder network comprises an array of full adders 87
each having three inputs and both sum and carry outputs. Each of the inputs on lines 78-81 is a sixteen bit input and the connection of these inputs with the stages of the adders is indicated in FIG.
6. Some are connected to stage 1, some to stage 2, one to stage 3 and two to stage 4 as shown. The output signals 90 from the last stage 86 of adders are provided on two nineteen bit buses 91 and 92 to
the accumulator 27.
The outputs from the ROMs 24 are in two's complement format and consequently it is necessary to invert and thereby make negative signals on lines 80 and 81 corresponding to the most significant bit
slice from the shift registers 58-65. These will occur on line 67 as this provides slices for even numbered bits. When the data on lines 80 and 81 corresponds to the most significant bit, the signals
are inverted by inverters 93 and 94 in lines 80 and 81 respectively. The inverters 93 and 94 are activated by a signal on line 95 from a control unit 96 responsive to the most significant and least
significant slices. In addition to inverting signals on lines 80 and 81 it is necessary for the inversion to add 1 in the least significant position and this is done by a signal on line 97 from the
control unit 96. As it is necessary to add 1 for each of the inversions on lines 80 and 81, a single 1 is added into the bit 2 position as shown at 97 in FIG. 6 and this addition into the bit 2
position has the same effect as adding one twice in the bit 1 position. When adding two numbers of unequal bit length in two's complement it is necessary to repeat the most significant bit of the
shorter number for each of the remaining bit positions of the longer number. From FIG. 6 it can be seen that in stage 84 an input to bit 16 is also input to bit 17 and similarly for bits 17 and 18 in
stage 85.
The accumulator 27 provides two nineteen bit output signals on line 98 and this output is also fed back on lines 99 and 100 to stages 85 and 86 respectively of the adder array. A switch 101 is
arranged to supply zeros on lines 99 and 100 in response to a signal from control unit 96 on a line 102 indicating that the data corresponds to the least significant slice. In this way the least
significant slice does not have added data corresponding to a previous block of data. For all other conditions the switch 101 allows the feedback from line 98 through lines 99 and 100 to the adder
network. The register 27 is a four bit shift register so that in each cycle of operation it shifts four bits towards the least significant position. These four bits are fed through a serial
adder 103 for resolving the carry signals on the four bits shifted. A one bit register 104 is provided to store any carry bit from the adder 103. The result output of the adder 103 is fed to a
register 105 from which a fourteen bit output is provided on line 106. Any carry signal from the register 104 is provided on line 107. A switch 108 similar to switch 101 is provided to respond to a
signal from the control unit 96 indicating that the data corresponds to the least significant slice so as to supply a zero in the carry signal to the serial adder 103 at the beginning of data
corresponding to each new block of data. The output on lines 106, 107 and 98 from each of the inner product accumulators 33 is held in a respective 53 bit register 34 as shown in FIG. 3.
It will be appreciated that the use of an adder network having carry save adders 87 avoids the need for carry propagate adders in every inner product accumulator. Some simplification is achieved in
each inner product accumulator by use of the serial adder 103 which reduces the bits of lower significance into normal binary form as the shifts by register 27 are effected. The outputs from
registers 34 are fed on line 40 to the circuit 41 which is shown in more detail in FIG. 7. In FIG. 7 the outputs from lines 98 and 107 in FIG. 5 form an input 108 to a carry propagate adder 109. This
is of conventional design and reduces the 39 input bits to an output of nineteen bits on lines 110 which is fed to a selector 111. The output 106 from register 105 in FIG. 5 is fed directly to the
selector 111. The selector is programmed by use of a control ROM 112 to route any contiguous row of 9 to 16 of the 33 input bits to a sixteen bit output on line 113. If less than sixteen bits are
selected one or more of the lower order bits are set to zero. The selector 111 receives three programmed inputs from the control ROM 112 on lines 114, 115 and 116. These correspond respectively to a
five bit signal on line 114 defining the top bit of the word selected. The input on line 115 is a three bit signal defining the width of the output word. Signal on line 116 gives a signed saturation
signal to the selector 111. Data is output from the selector on line 113 to a saturator and rounding adder 117. This also receives control signals on lines 118 and 119 from the selector 111. There is
an overflow condition if the most significant bits rejected are not all 1 or 0. The rounding condition is set when the least significant discarded bits have the pattern 100 . . . . If there is a 011
. . . pattern in the output word then a maximum integer condition is detected. Three saturation control output signals are generated by the selector and supplied on line 118 whereas the rounding
signal is provided on line 119. On line 118 a "satpos" signal is set if there is an overflow and the input word is positive. The condition "satneg" is set if an overflow is detected and the input
number is negative and if the input "signed saturation" on line 116 is set. The condition "satzero" is set if the input number is negative and the "signed saturation" signal is not set on lines 116.
A rounding bit is set on line 119 if the rounding condition is detected, the output does not represent the maximum integer, there is no overflow and saturation to zero is not being generated.
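A much-simplified software model of that round-and-saturate step is sketched below (illustrative only; it drops low-order bits with round-to-nearest-up and clamps to a signed output range, whereas the patent's selector additionally handles the signed-saturation modes and the maximum-integer corner case described above).

```python
def round_and_saturate(value, drop_bits, out_bits):
    """Drop `drop_bits` low bits with rounding, then clamp to a signed
    `out_bits`-bit range. `value` is an ordinary (signed) Python integer."""
    kept = value >> drop_bits
    if drop_bits > 0 and (value >> (drop_bits - 1)) & 1:
        kept += 1                              # most significant discarded bit set
    hi = (1 << (out_bits - 1)) - 1
    lo = -(1 << (out_bits - 1))
    return max(lo, min(hi, kept))

assert round_and_saturate(44, drop_bits=4, out_bits=4) == 3     # 44/16 = 2.75 rounds to 3
assert round_and_saturate(300, drop_bits=4, out_bits=4) == 7    # overflow saturates to +7
assert round_and_saturate(-300, drop_bits=4, out_bits=4) == -8  # saturates negative
```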
The control ROM 112, control unit 96 and transform selector 77 may form part of a common control unit.
The invertor circuits 93 and 94 are shown in more detail in FIGS. 8 and 9. FIG. 8 illustrates the output from the memories 24 on lines 78-81 at for example bit position 15 shown in FIG. 6. This
indicates that the signals on lines 78 and 79 are supplied directly to adder 87 in stage 83. The signal on line 80 is fed to the adder in stage 83 through a cross-over switch 120 forming part of the
inverter circuitry 93 and 94. The signal on line 81 is normally supplied through the switch 120 to the adder 87 in stage 84. In order to invert signals 80 and 81 when the most significant slice
signal is received on line 95 the switch 120 is switched over so that the signal on line 80 is supplied to the adder in stage 84 and the signal on line 81 is supplied to the adder in stage 83. This
has the effect of forming the negative of both signals on lines 80 and 81. As already explained, the circuitry used in this embodiment is CMOS circuitry and the adders 87 used in this example all
cause inversion of the output signals relative to the input signals. For this reason the signals which are normally supplied directly to the second stage 84 of adders are stored inverted in the
memories 24 whereas the signals fed directly to the adders in the first stage 83 are not inverted in the memories 24.
The conventional manner of effecting inversion to negate signals has involved putting an exclusive OR gate in the signal line where inversion is required. To form the negative of the signals on lines
80 and 81 would therefore require by conventional circuitry two exclusive OR gates and in CMOS technology each OR gate would normally require six transistors thereby requiring twelve transistors for
each pair of lines 80 and 81. By use of the cross-over switch 120 in place of exclusive OR gates the circuitry may be considerably simplified as shown in FIG. 9. In this example, line 80 includes a
transistor 121 for supplying the signal to the adder in stage 83 when inversion is not required and line 81 includes a similar transistor 122 to allow supply of signal to the adder 87 in stage 84
when inversion is not required. The gates of transistors 121 and 122 are each connected to the control line 95 from the control unit 96. The circuit includes two cross-over transistors 124 and 125
whose gates are connected through an inverter 123 to the control line 95. When the signal on line 95 indicates that the data corresponds to the most significant slice, transistors 121 and 122 are
turned off and transistors 124 and 125 are turned on. In this state the signal on line 80 is connected to the adder in stage 84 and the signal on line 81 is connected to the adder in stage 83. The
connections are reversed when the signal on line 95 reverts to a normal state not indicating that the data corresponds to the most significant slice. It will therefore be seen that by use of the
circuit in FIG. 9 the inversion of signals on two lines in the adder array previously described is achieved by use of only four transistors rather than twelve transistors needed with exclusive OR
gates. It will be appreciated that the circuit described in FIGS. 8 and 9 is repeated for each of the sixteen bit positions on the signal outputs for lines 80 and 81 thereby resulting in a
considerable reduction in the number of transistors required. This provides substantial benefits in the space required on the chips needed for the circuitry together with reduction in cost and power dissipation.
As already described, the above example operates as a single pipeline in which data flows in continuously word by word and the transform data flows out at the same rate a fixed number of clock cycles later. As shown in FIG. 10 the pipeline consists of a sequential connection of a first serial/parallel converter 22, a first set of inner product accumulators 33, a first add/round and
cycles later. As shown in FIG. 10 the pipeline consists of a sequential connection of a first serial/parallel converter 22, a first set of inner product accumulators 33, and first add/round and
selector 41, a transposer 14, a second serial/parallel converter 22, a second set of inner product accumulators 33 and a second add/round and selector 41. The pipeline is controlled by a control unit
125 which includes the transform selector 77, control unit 96 and control ROM 112. The control unit 125 consists of two shift registers 126 and 127 for each of the units 22, 33, 41 and 14 in the pipeline
and each of the shift registers 126 and 127 requires the same number of clock cycles for passage of signals through those registers as the associated units 22, 33, 41 and 14 require for processing of
data through those units. The control of all circuitry is in accordance with clock pulses from a clock 128. To start processing a first 64 element block of data, a GO signal is supplied to register
126 associated with the parallel/serial converter 22 and a selection signal determining the type of transform to be effected is supplied to the beginning of the shift register 127 associated with the
parallel/serial converter 22. The GO and selection signals are shifted sequentially along the shift registers 126 and 127 in the pipeline in synchronism with each other. After eight cycles a signal
is supplied on line 130 to the parallel serial converter 22 to indicate that data can now be shifted out to the inner product accumulator 33 which on the next cycle is reset by a signal on line 131
and has the transform type selected by a signal on line 76. After the required number of clock cycles by the inner product accumulator 33 the GO and select signals will have entered the next shift
registers corresponding to unit 41. This again will provide a reset signal on line 132 and selection signals on lines 114-116. After the number of cycles required by unit 41 GO and select signals in
the shift registers 126 and 127 will reset the transposer 14. This consists of two sixtyfour word memories which is reset when the GO signal arrives and writing to the first memory starts at the
first location in the memory. The data is subsequently read out orthogonally in order to make the transposition. When in continuous use data is read out from one memory while data is being written
into the other. The data is subsequently passed through the second set of units 22, 33 and 41 and similarly controlled by the passage of the GO and select signals through the clock controlled shift
registers 126 and 127. As can be seen from FIG. 10, the pipeline is long enough to have parts of three blocks of data in the pipeline simultaneously. In the case shown in FIG. 10 data from block i+2
is being loaded into the parallel serial converter 22 while data from block i+1 is being processed in the first set of inner product accumulators 33, unit 41 and transposer 14. At the same time data
from block i is being read out of the transposer 14 and processed by the second set of units 22, 33 and 41.
It will therefore be seen that the chip used for implementing the above described circuitry may be carrying out two different transforms simultaneously (that is on the second part of one block of
data and on the first part of a subsequent block of data). The two transforms on the two blocks of data need not be the same. Similarly the two transforms carried out on a single block of data need
not necessarily be the same. By selecting the identity transform for the second processing unit, the chip may carry out a 1D transform. When used to carry out discrete cosine transforms, inverse
discrete cosine transforms and low pass filter transforms the apparatus may be used for image compression purposes in transmitting or storing video signals.
The invention is not limited to the details of the foregoing examples.
|
{"url":"http://www.google.ca/patents/US5202847","timestamp":"2014-04-17T15:54:44Z","content_type":null,"content_length":"104557","record_id":"<urn:uuid:00a72669-ca14-45db-8703-9f8d60327fad>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Barrington Hills, IL Statistics Tutor
Find a Barrington Hills, IL Statistics Tutor
...I must be notified of cancellations or rescheduling of tutor sessions 24 hours before the scheduled session. The student's, the parent's, and my time are all valuable, and I don't want to
compromise any potential for students to learn. I greatly appreciate your understanding.
15 Subjects: including statistics, chemistry, Spanish, biology
...Finally, I have been using algebra ever since, from teaching college-level physics concepts to building courses for professional auditors. I was an advanced math student, completing calculus as
a junior in high school and getting a 5 on the AP calculus BC exam. I went on to study engineering in college and while tackling the advanced courses, I tutored calculus on the side.
13 Subjects: including statistics, calculus, geometry, algebra 1
...Having completed six probability and statistics classes, including graduate-level Time-Series Analysis and Cross-Section Econometrics, I have tutored dozens of students on statistical concepts
and helped them craft their own original statistical research projects in psychology, economics, and nur...
57 Subjects: including statistics, chemistry, calculus, English
...I completed a Discrete math course (included formal logic, graph theory, etc.) in college, and computer science courses that handled automata theory, finite state machine, etc. I completed a
semester course on Ordinary Differential Equations (ODE's) at Caltech. My course textbook was Elementary...
21 Subjects: including statistics, chemistry, calculus, geometry
I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background
in science and math, a love of written and oral communication and a strong desire to share the knowledg...
35 Subjects: including statistics, English, chemistry, reading
|
{"url":"http://www.purplemath.com/Barrington_Hills_IL_Statistics_tutors.php","timestamp":"2014-04-21T02:33:21Z","content_type":null,"content_length":"24676","record_id":"<urn:uuid:854f9401-9881-493c-adb3-2f87ff19bcc7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simulating physics with computers, Internat
Results 1 - 10 of 12
- SIAM J. on Computing , 1997
"... A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation
time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. ..."
Cited by 882 (2 self)
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by
at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are
generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a
hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
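As a side illustration (not from the cited paper), the classical reduction the abstract alludes to can be sketched in a few lines: factoring N reduces to finding the multiplicative order r of a random a modulo N. Below the order is found by brute force, which is exponential; the quantum speedup lies entirely in that order-finding step.

```python
# Toy classical version of the factoring-to-order-finding reduction used by Shor.
import math, random

def factor_via_order(N, tries=50):
    for _ in range(tries):
        a = random.randrange(2, N)
        d = math.gcd(a, N)
        if d > 1:
            return d                        # lucky: a already shares a factor with N
        r, x = 1, a % N
        while x != 1:                       # brute-force order finding (exponential)
            x = (x * a) % N
            r += 1
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            d = math.gcd(pow(a, r // 2, N) - 1, N)
            if 1 < d < N:
                return d
    return None

print(factor_via_order(15))   # 3 or 5
```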
- in Proc. 25th Annual ACM Symposium on Theory of Computing, ACM , 1993
"... Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model
of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This constructi ..."
Cited by 482 (5 self)
Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a
quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing
machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar
primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of
polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T) bits of precision suffice to
support a T step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal
evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved
in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus not in the class BPP. The class BQP of languages that are
efficiently decidable (with small error-probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P^#P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing
machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
- SIAM JOURNAL OF COMPUTATION , 1997
"... Recently a great deal of attention has been focused on quantum computation following a ..."
- In Proc. 37th FOCS , 1996
"... It has recently been realized that use of the properties of quantum mechanics might speed up certain computations dramatically. Interest in quantum computation has since been growing. One of the
main difficulties in realizing quantum computation is that decoherence tends to destroy the information i ..."
Cited by 201 (4 self)
It has recently been realized that use of the properties of quantum mechanics might speed up certain computations dramatically. Interest in quantum computation has since been growing. One of the main
difficulties in realizing quantum computation is that decoherence tends to destroy the information in a superposition of states in a quantum computer, making long computations impossible. A further
difficulty is that inaccuracies in quantum state transformations throughout the computation accumulate, rendering long computations unreliable. However, these obstacles may not be as formidable as
originally believed. For any quantum computation with t gates, we show how to build a polynomial size quantum circuit that tolerates O(1/log^c t) amounts of inaccuracy and decoherence per gate, for
some constant c; the previous bound was O(1/t). We do this by showing that operations can be performed on quantum data encoded by quantum error-correcting codes without decoding this data.
- SIAM Journal of Computation , 1997
"... Abstract. In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum
Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum ..."
Cited by 114 (0 self)
Abstract. In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum
Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on
Theory of Computation, 1993, pp. 11–20, SIAM J. Comput., 26 (1997), pp. 1411–1473]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex
amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of
computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity
classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial
time (NQP) are all contained in PP, hence in P^#P and PSPACE. A potentially practical issue of designing “machine independent” quantum programs is also addressed. A single (“almost universal”)
quantum algorithm based on Shor’s method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to
the programmer and the device builder.
- in Math. and Computers in Simulation 28(1986
"... We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church’s Thesis: that any analog computer can be simulated
efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP w ..."
Cited by 36 (0 self)
We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church’s Thesis: that any analog computer can be simulated
efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP we can draw conclusions about the operation of physical devices used for computation. An
NP-complete problem, 3-SAT, is reduced to the problem of checking whether a feasible point is a local optimum of an optimization problem. A mechanical device is proposed for the solution of this
problem. It encodes variables as shaft angles and uses gears and smooth cams. If we grant Strong Church’s Thesis, that P ≠ NP, and a certain “Downhill Principle” governing the physical behavior of
the machine, we conclude that it cannot operate successfully while using only polynomial resources. We next prove Strong Church’s Thesis for a class of analog computers described by well-behaved
ordinary differential equations, which we can take as representing part of classical mechanics. We conclude with a comment on the recently discovered connection between spin glasses and combinatorial
optimization.
- RAIRO Theor. Inform. Appl , 1998
"... Foundations of the notion of quantum Turing machines are investigated. According to Deutsch’s formulation, the time evolution of a quantum Turing machine is to be determined by the local
transition function. In this paper, the local transition functions are characterized for fully general quantum Tu ..."
Cited by 21 (5 self)
Foundations of the notion of quantum Turing machines are investigated. According to Deutsch’s formulation, the time evolution of a quantum Turing machine is to be determined by the local transition
function. In this paper, the local transition functions are characterized for fully general quantum Turing machines, including multi-tape quantum Turing machines, extending an earlier attempt due to
Bernstein and Vazirani.
, 2001
"... Abstract. These notes discuss the quantum algorithms we know of that can solve problems significantly faster than the corresponding classical algorithms. ..."
, 1999
"... Deutsch proposed two sorts of models of quantum computers, quantum Turing machines (QTMs) and quantum circuit families (QCFs). At present quantum algorithms are represented by these two models.
This paper shows the equivalence of the computational powers of these two models. For this purpose, we int ..."
Cited by 17 (7 self)
Deutsch proposed two sorts of models of quantum computers, quantum Turing machines (QTMs) and quantum circuit families (QCFs). At present quantum algorithms are represented by these two models. This
paper shows the equivalence of the computational powers of these two models. For this purpose, we introduce two notions of uniformity for QCFs and complexity classes based on uniform QCFs. For Monte
Carlo algorithms, it is proved that the complexity classes based on uniform QCFs are identical with the corresponding classes based on QTMs. For Las Vegas algorithms, various complexity classes are
introduced for QTMs and QCFs according to constraints on the algorithms and their interrelations are investigated in detail. In addition, we generalize Yao’s construction of quantum circuits
simulating single tape QTMs to multi-tape QTMs and give a complete proof of the existence of a universal QTM simulating multi-tape QTMs efficiently.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3044692","timestamp":"2014-04-21T00:44:27Z","content_type":null,"content_length":"37446","record_id":"<urn:uuid:495d0aac-8fe9-40ad-9d3d-812fb3a7be04>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gary P. Morriss
BMath N'cle (NSW), PhD Melb.
Department and Research Groups
Theoretical Physics
Research Interests
Nonequilibrium statistical mechanics and dynamical systems. New ideas and techniques from the modern theories of chaotic dynamical systems suggest that a new fundamental conceptual basis for a
theory of nonequilibrium steady states is possible. We have proved the conjugate pairing rule for a class of thermostatted nonequilibrium steady states. This, together with the Gallavotti-Cohen
fluctuation theorem, makes up the two new results of the 90's.
Statistical hydrodynamics. I am interested in the interface between statistical mechanics and hydrodynamics, particularly for convective flows.
Selected Publications
• Dimensional Contraction in Nonequilibrium Systems. G.P. Morriss, Physics Letters, A143, 307-313 (1989).
• Statistical Mechanics of Nonequilibrium Liquids. D.J. Evans & G.P. Morriss, Academic Press, (1990).
• The viscosity of a simple fluid from its maximal Lyapunov exponents. D.J. Evans, E.G.D. Cohen & G.P. Morriss, Phys. Rev., A42, 5990 (1990).
• The nonequilibrium Lorentz gas. J. Lloyd, M. Niemeyer, L. Rondoni & G.P. Morriss, Chaos, 5, 536 (1995).
Contact Details
Mail Address
School of Physics
The University of New South Wales
SYDNEY 2052
|
{"url":"http://www.phys.unsw.edu.au/STAFF/ACADEMIC/morriss.html","timestamp":"2014-04-21T16:11:09Z","content_type":null,"content_length":"18446","record_id":"<urn:uuid:20b07ad3-131a-438b-9f03-698e7ef06333>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hayward, CA Precalculus Tutor
Find a Hayward, CA Precalculus Tutor
...I have designed and written an approved AP CollegeBoard Audit for more than one high school for AP Statistics and taught the course with success in high school and community college. I have,
during my teaching career, attended many professional development sessions in this area and the continuin...
13 Subjects: including precalculus, calculus, statistics, geometry
...Seeing many advanced students who struggle with algebra 1 concepts makes me feel good about my algebra 1 students, because I help them to learn it properly from the beginning. It helps for many
years to come. "Great Tutor" - Margaret P. (Oakland, CA): Andreas is a fabulous tutor.
41 Subjects: including precalculus, calculus, geometry, statistics
Hello! I have been a professional tutor since 2003, specializing in math (pre-algebra through AP calculus), AP statistics, and standardized test preparation. I am very effective in helping
students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do.
14 Subjects: including precalculus, calculus, statistics, geometry
...In fact, I enjoyed teaching so much that I kept on doing it to this day. Through the years, I have worked mostly with junior high and high schoolers, but I have also worked with kids as young
as 4th graders and adults at university or community colleges. My number one goal is the academic success of my students.
11 Subjects: including precalculus, chemistry, algebra 2, calculus
...I have been a tutor to numerous students over several years, ranging from middle school to graduate students. One of the things I have learned through my education is that it is not enough to
work hard, you have to learn to work smart. The methods I teach will not only help you learn the materi...
29 Subjects: including precalculus, calculus, statistics, geometry
|
{"url":"http://www.purplemath.com/hayward_ca_precalculus_tutors.php","timestamp":"2014-04-21T00:15:53Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:8383b637-2b8f-425b-98fb-3e0f413d42de>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thomas J. Sargent
Math courses
The opinions expressed on this page are my own and do not represent the policy or opinions of the economics department. The recommendations here are based primarily on the success of students who
have taken the path I describe.
Math is the language of economics. If you are an NYU undergraduate, studying math will open doors to you in terms of interesting economics courses at NYU and job opportunities afterwards. Start with
the basics: take three calculus courses (up to and including multivariable calculus), linear algebra, and a good course in probability and statistics. These basic courses will empower you. After you
have these under your belt, you have many interesting options all of which will further empower you to learn and practice economics. I especially recommend courses in (1) Markov chains and stochastic
processes, and (2) differential equations.
Superb economists at NYU (e.g., Adam Brandenburger, Robert Engle, Roy Radner, Stanley Zin, Jess Benhabib, Douglas Gale, Boyan Jovanovic, David Pearce, Debraj Ray, Ennio Stacchetti, Charles Wilson,
and others) have made notable contributions to economics partly because they are creative but also because they studied more math than others.
My personal opinion is that if you are an undergraduate at Stern or the NYU CAS economics department and you are seriously interested in learning as much rigorous economics as you can at NYU, you
will be much better off taking one or two additional math and statistics courses rather than spending time and credits writing an undergraduate honors thesis. This will also look better on your
transcript if you plan to apply to graduate school.
These courses listed above are very useful courses for applied work in econometrics, macroeconomic theory, and applied industrial organization. They describe the foundations of methods used to
specify and estimate dynamic competitive models.
Just as in jogging, I recommend not overdoing it. Rather, find a pace that you can sustain throughout your years here. You will find that taking these courses doesn't really cost time, because of
your improved efficiency in doing economics.
There are many other courses that are interesting and useful. The most important thing is just to get started acquiring the tools and habits these courses will convey.
|
{"url":"https://files.nyu.edu/ts43/public/math_courses.html","timestamp":"2014-04-19T11:57:36Z","content_type":null,"content_length":"5959","record_id":"<urn:uuid:481e51fb-75c4-4677-affb-c34a7e68010a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Difference equation
Posted by Christian Oehreneder on August 13, 1998 at 12:10:08:
I need to solve a problem of the following kind:
d^2 U(x,y)/dx^2 + d^2 U(x,y)/dy^2 - U(x,y)*f(x,y) = - g(x,y)
or in discretized form
U[m-1,n] + U[m+1,n] + U[m,n-1] + U[m,n+1] - 4*U[m,n] - U[m,n]*f[m,n] = -g[m,n]
f >= 0
The problem is to be solved on a square domain.
At the boundary I use a symmetric continuation of U
to give the above equation meaning.
Is this a "known" problem. If yes, under what name is
it referenced in the literature?
For the 1D case it involves the solution of a tridiagonal
symmetric Matrix with subdiagonal elements all the same.
I solved with some special solver for tridiagonal Matrizes
which works fine.
For the 2D case everything seems more complicated. In view
of the very regular structure of the equation I thought
there might be a special purpose solver for that type
of problems. It seems to be in close relation to other
finite difference problems.
Can anyone give me an advise?
Many Thanks
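The discretized equation above is just one large sparse linear system (a screened-Poisson / Helmholtz-type operator), so one plausible route today is to assemble it with Kronecker products and hand it to a sparse solver. The sketch below is an assumption-laden illustration, not the poster's code: the function name, unit grid spacing, and the Neumann-style handling of the symmetric boundary continuation are all choices made here.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_screened_poisson(f, g):
    """Solve U[m-1,n]+U[m+1,n]+U[m,n-1]+U[m,n+1]-4*U[m,n]-f[m,n]*U[m,n] = -g[m,n]
    on a square n-by-n grid with a symmetric (reflective) boundary continuation."""
    n = f.shape[0]
    # 1D second-difference operator with reflective boundaries: U[-1] = U[1], U[n] = U[n-2]
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    D = sp.diags([off, main, off], [-1, 0, 1], format="lil")
    D[0, 1] = 2.0
    D[n - 1, n - 2] = 2.0
    D = D.tocsr()
    I = sp.identity(n, format="csr")
    # 2D operator via Kronecker sums, minus the diagonal f term
    A = sp.kron(I, D) + sp.kron(D, I) - sp.diags(f.ravel())
    u = spla.spsolve(A.tocsr(), -g.ravel())
    return u.reshape(n, n)

# Small smoke test: point source in the middle of a 64x64 grid with f = 1
if __name__ == "__main__":
    n = 64
    f = np.ones((n, n))
    g = np.zeros((n, n))
    g[n // 2, n // 2] = 1.0
    U = solve_screened_poisson(f, g)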
|
{"url":"http://www.netlib.org/utk/forums/netlib/messages/482.html","timestamp":"2014-04-18T15:44:50Z","content_type":null,"content_length":"2008","record_id":"<urn:uuid:6244e701-a9c2-4ad4-9c45-5557c58644c6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Universal voting protocol tweaks to make manipulation hard
Results 1 - 10 of 86
- In AAMAS , 2006
"... Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding
strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used N P-ha ..."
Cited by 91 (23 self)
Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic
behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee
of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with
some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are
susceptible to manipulation by coalitions, when the number of candidates is constant.
, 2006
"... ... problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are
three or more candidates, all reasonable voting rules are manipulable (in the sense that there exist s ..."
Cited by 76 (6 self)
... problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are three
or more candidates, all reasonable voting rules are manipulable (in the sense that there exist situations in which a voter would benefit from reporting its preferences insincerely). To circumvent
this impossibility result, recent research has investigated whether it is possible to make finding a beneficial manipulation computationally hard. This approach has had some limited success,
exhibiting rules under which the problem of finding a beneficial manipulation is NP-hard, #P-hard, or even PSPACE-hard. Thus, under these rules, it is unlikely that a computationally efficient
algorithm can be constructed that always finds a beneficial manipulation (when it exists). However, this still does not preclude the existence of an efficient algorithm that often finds a successful
manipulation (when it exists). There have been attempts to design a rule under which finding a beneficial manipulation is usually hard, but they have failed. To explain this failure, in this paper,
we show that it is in fact impossible to design such a rule, if the rule is also required to satisfy another property: a large fraction of the manipulable instances are both weakly monotone, and
allow the manipulators to make either of exactly two candidates win. We argue why one should expect voting rules to have this property, and show experimentally that common voting rules clearly
satisfy it. We also discuss approaches for potentially circumventing this impossibility result.
- in Proc. IJCAI-05 Multidisciplinary Workshop on Advances in Preference Handling , 2005
"... We extend the application of a voting procedure (usually defined on complete preference relations over candidates) when the voters ’ preferences consist of partial orders. We define possible
(resp. necessary) winners for a given partial preference profile R with respect to a given voting procedure a ..."
Cited by 72 (13 self)
We extend the application of a voting procedure (usually defined on complete preference relations over candidates) when the voters ’ preferences consist of partial orders. We define possible (resp.
necessary) winners for a given partial preference profile R with respect to a given voting procedure as the candidates being the winners in some (resp. all) of the complete extensions of R. We show
that, although the computation of possible and necessary winners may be hard in general case, it is polynomial for the family of positional scoring procedures. We show that the possible and necessary
Condorcet winners for a partial preference profile can be computed in polynomial time as well. Lastly, we point out connections to vote manipulation and elicitation. 1
- In Proceedings of the Ninth ACM Conference on Electronic Commerce (EC , 2008
"... We introduce a class of voting rules called generalized scoring rules. Under such a rule, each vote generates a vector of k scores, and the outcome of the voting rule is based only on the sum of
these vectors—more specifically, only on the order (in terms of score) of the sum’s components. This clas ..."
Cited by 61 (18 self)
We introduce a class of voting rules called generalized scoring rules. Under such a rule, each vote generates a vector of k scores, and the outcome of the voting rule is based only on the sum of
these vectors—more specifically, only on the order (in terms of score) of the sum’s components. This class is extremely general: we do not know of any commonly studied rule that is not a generalized
scoring rule. We then study the coalitional manipulation problem for generalized scoring rules. We prove that under certain natural assumptions, if the number of manipulators is O(n^p) (for
any p < 1/2), then the probability that a random profile is manipulable is O(n^(p − 1/2)), where n is the number of voters. We also prove that under another set of natural assumptions, if the number of
manipulators is Ω(n^p) (for any p > 1/2) and o(n), then the probability that a random profile is manipulable (to any possible winner under the voting rule) is 1 − O(e^(−Ω(n^(2p−1)))). We also show that
common voting rules satisfy these conditions (for the uniform distribution). These results generalize earlier results by Procaccia and Rosenschein as well as even earlier results on the probability
of an election being tied.
- In Proceedings of the 16th International Symposium on Algorithms and Computation , 2005
"... This paper addresses the problem of constructing voting protocols that are hard to manipulate. We describe a general technique for obtaining a new protocol by combining two or more base
protocols, and study the resulting class of (vote-once) hybrid voting protocols, which also includes most previous ..."
Cited by 55 (4 self)
This paper addresses the problem of constructing voting protocols that are hard to manipulate. We describe a general technique for obtaining a new protocol by combining two or more base protocols,
and study the resulting class of (vote-once) hybrid voting protocols, which also includes most previously known manipulationresistant protocols. We show that for many choices of underlying base
protocols, including some that are easily manipulable, their hybrids are NP-hard to manipulate, and demonstrate that this method can be used to produce manipulationresistant protocols with unique
combinations of useful features. 1
"... The Gibbard-Satterthwaite theorem states that every non-trivial voting method between at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the
Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probab ..."
Cited by 55 (1 self)
The Gibbard-Satterthwaite theorem states that every non-trivial voting method between at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the
Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method between 3 alternatives that is far from
being a dictatorship.
- IN PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI , 2004
"... Coalition formation is a key problem in automated negotiation among self-interested agents. In order for coalition formation to be successful, a key question that must be answered is how the
gains from cooperation are to be distributed. Various solution concepts have been proposed, but the computati ..."
Cited by 53 (7 self)
Coalition formation is a key problem in automated negotiation among self-interested agents. In order for coalition formation to be successful, a key question that must be answered is how the gains
from cooperation are to be distributed. Various solution concepts have been proposed, but the computational questions around these solution concepts have received little attention. We study a concise
representation of characteristic functions which allows for the agents to be concerned with a number of independent issues that each coalition of agents can address. For example, there may be a set
of tasks that the capacity-unconstrained agents could undertake, where accomplishing a task generates a certain amount of value (possibly depending on how well the task is accomplished). Given this
representation, we show how to quickly compute the Shapley value—a seminal value division scheme that distributes the gains from cooperation fairly in a certain sense. We then show that in
(distributed) marginal-contribution based value division schemes, which are known to be vulnerable to manipulation of the order in which the agents are added to the coalition, this manipulation is
NP-complete. Thus, computational complexity serves as a barrier to manipulating the joining order. Finally, we show that given a value division, determining whether some subcoalition has an incentive
to break away (in which case we say the division is not in the core) is NP-complete. So, computational complexity serves to increase the stability of the coalition.
- In The ACM-SIAM Symposium on Discrete Algorithms (SODA , 2008
"... We investigate the problem of coalitional manipulation in elections, which is known to be hard in a variety of voting rules. We put forward efficient algorithms for the problem in Scoring rules,
Maximin and Plurality with Runoff, and analyze their windows of error. Specifically, given an instance on ..."
Cited by 46 (10 self)
We investigate the problem of coalitional manipulation in elections, which is known to be hard in a variety of voting rules. We put forward efficient algorithms for the problem in Scoring rules,
Maximin and Plurality with Runoff, and analyze their windows of error. Specifically, given an instance on which an algorithm fails, we bound the additional power the manipulators need in order to
succeed. We finally discuss the implications of our results with respect to the popular approach of employing computational hardness to preclude manipulation. 1
, 2006
"... We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by a certain amount of bribing voters a specified
candidate can be made the election’s winner? We study this problem for election systems as varied as scorin ..."
Cited by 45 (18 self)
We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by a certain amount of bribing voters a specified
candidate can be made the election’s winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding the nature of the
voters, the size of the candidate set, and the specification of the input. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery. Our results indicate that the
complexity of bribery is extremely sensitive to the setting. For example, we find settings where bribing weighted voters is NP-complete in general but if weights are represented in unary then the
bribery problem is in P. We provide a complete classification of the complexity of bribery for the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols.
- IN UNCERTAINTY IN ARTIFICIAL INTELLIGENCE: PROCEEDINGS OF THE TWENTIETH CONFERENCE (UAI2005 , 2005
"... Voting is a very general method of preference aggregation. A voting rule takes as input every voter's vote (typically, a ranking of the alternatives), and produces as output either just the
winning alternative or a ranking of the alternatives. One potential view of voting is the following. The ..."
Cited by 45 (13 self)
Voting is a very general method of preference aggregation. A voting rule takes as input every voter's vote (typically, a ranking of the alternatives), and produces as output either just the winning
alternative or a ranking of the alternatives. One potential view of voting is the following. There exists a "correct" outcome (winner/ranking), and each voter's vote corresponds to a noisy perception
of this correct outcome. If we are given the noise model, then for any vector of votes, we can compute the maximum likelihood estimate of the correct outcome. This maximum likelihood estimate
constitutes a voting rule. In this paper, we ask the following question: For which common voting rules does there exist a noise model such that the rule is the maximum likelihood estimate for that
noise model? We require that the votes are drawn independently given the correct outcome (we show that without this restriction, all voting rules have the property). We study the question both for
the case where outcomes are winners and for the case where outcomes are rankings. In either case, only some of the common voting rules have the property.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=124473","timestamp":"2014-04-17T22:49:44Z","content_type":null,"content_length":"40993","record_id":"<urn:uuid:bb85ef9f-733c-4f16-ac27-02608408ff69>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bitwise operators in c# OR(|), XOR(^), AND(&), NOT(~)
All posts have moved to Typps
See you there.
Seshasai said:
Its excellent. Great job.
Vesko Kolev said:
I think that it is a very good article, too! In the good old days the bitwise operators were part of our everyday life. Now they are used only by the good devs who still think about performance.
P.S. Why not add some words about << and >> just to cover the whole topic?
Thanks again!
Brad said:
It is not helpful, as I am looking for an operator for the XNOR operation.
I encrypt the password using XOR operator, now i want to decrypt it. What is the process to achieve this?
Brad2 said:
Brad, if you XOR something, just XOR it again to get back your original value.
chethan said:
i want the logic as how to write bitwise xor like
for 101 we get 0 by doing bitwise xor
for 100 we get 1 by doing bitwise xor
ARUNIMA BHATTACHARYA said:
Good stuff indeed. I want to add: if one XORs anything twice, he will get the actual number again. That is how one can use it to encrypt and decrypt.
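The encrypt/decrypt point can be seen in a couple of lines. This is a generic illustration (written in Python rather than C#, since ^ behaves the same way on integers), not code from the article, and the key value here is made up:
key = 0b10110010            # any byte-sized key
plain = 0b01101101
cipher = plain ^ key        # "encrypt"
recovered = cipher ^ key    # "decrypt": XORing with the same key undoes the first XOR
assert recovered == plain   # (p ^ k) ^ k == p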
Arunima Bhattacharya said:
If we want to get 0 for 101 and 1 for 100,it can be assumed that only the LSB is used for XOR operation,now if we XOR the number with 1,i.e.001 then we get the desired output.
fako said:
this article really helps
Vojtěch Vít said:
My Thanks to the author. This article is short but very, very clear:-)
laila said:
great job :)
but these are built in functions
F2F said:
i don't understand this article.
for example:
var a = 1;
var b = 2;
if (a == b ) {
//do something.
if i use bitwise operators, i will write:
if (a & b == a) {
//do something
is it true? how about other operators?
Please help me.
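To the question above: a & b == a is not an equality test, and in C-family languages the precedence of == relative to & makes the unparenthesized form even more surprising. With explicit parentheses, (a & b) == a only checks that every bit set in a is also set in b. A small illustration (shown in Python for brevity; the parenthesized logic is the same in C#):
a, b = 1, 3
print(a == b)          # False - the values differ
print((a & b) == a)    # True  - every bit of a is also present in b
So bitwise AND cannot replace ==; use the ordinary equality operator for comparisons.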
Alex van Beek said:
Cool use of the ~ operator is the binary search method in the List class. It returns a negative integer when it can't find the specified item. This negative integer becomes the correct insertion
point to keep the list sorted, when you apply the ~ operator.
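The reason the ~ trick works is the two's-complement identity ~x == -x - 1: if a binary search reports a missing key as -(insertionPoint) - 1 (which is how .NET's List<T>.BinarySearch is documented to behave), then ~result recovers insertionPoint. A tiny check of the identity, written in Python only because it is compact:
for insertion_point in range(5):
    encoded = -insertion_point - 1   # what the search would return for a missing key
    assert ~encoded == insertion_point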
Kyi Thar said:
Pls help me..
I have to compute binary Not AND (NAND), NOR and XNOR (Exclusive NOR) in C#. The operators ~ and ! do not directly give NAND, NOR and XNOR, so please tell me how to ..
NAND (Not AND)
0 NAND 0 = 1
0 NAND 1 = 1
1 NAND 0 = 1
1 NAND 1 = 0
NOR (Not OR)
0 NOR 0 = 1
0 NOR 1 = 0
1 NOR 0 = 0
1 NOR 1 = 0
XNOR (Exclusive NOR)
0 XNOR 0 = 1
0 XNOR 1 = 0
1 XNOR 0 = 0
1 XNOR 1 = 1
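There is no single NAND/NOR/XNOR operator in C# (or most C-family languages), but each is just the negation of an operator that does exist; for single-bit operands you can mask the complement down to one bit. A hedged sketch, written in Python but using only operators that C# shares (~, &, |, ^), so the expressions carry over directly:
def nand(a, b): return (~(a & b)) & 1   # NOT AND, restricted to one bit
def nor(a, b):  return (~(a | b)) & 1   # NOT OR
def xnor(a, b): return (~(a ^ b)) & 1   # NOT XOR (equivalence)
for x in (0, 1):
    for y in (0, 1):
        print(x, y, nand(x, y), nor(x, y), xnor(x, y))   # reproduces the truth tables above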
Chris said:
Thanks, very well written and easy to understand post.
Ganesh said:
Waw gr8 examples,
It's really helpfull for me.
Jaroslaw Dobrzanski said:
Exactly what I was looking for. Thanks!
arshpreet said:
u hav done a great job....
i was trying to get it form months.........
thanks a lot>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Brice said:
Very impressive. Short, but to the point. I skipped class yesterday an apparently they spent the whole hour going over c# bitwise operators. Took ten min of reading this article to catch up lol.
Deepak said:
Thanks for writing such an understanding article on Bitwize
Jake said:
really helpful, thx. some tricks with ~, good explained. As <keep complex...> asked: what's one's complement?
|
{"url":"http://weblogs.asp.net/alessandro/archive/2007/10/02/bitwise-operators-in-c-or-xor-and-amp-amp-not.aspx","timestamp":"2014-04-21T14:49:28Z","content_type":null,"content_length":"33327","record_id":"<urn:uuid:04eb6e6e-e485-48ad-853f-c70c6d3e86fe>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Assessment of a simple correction for the long-range charge-transfer problem in time-dependent density-functional theory
FIG. 1.
Schematic representation of the (uncorrected) coupling matrix for a system consisting of two fragments with a large separation. Left: full coupling matrix in the basis of all occupied (o1/o2)-virtual
(v1/v2) orbital pairs for fragments 1 and 2. Right: coupling matrix after the removal of the orbital pairs corresponding to CT excitations. The white areas correspond to matrix elements that will be
(close to) zero due to a zero differential overlap.
FIG. 2.
Excitation energies for the system He⋯Be as a function of the internuclear distance from SAOP/TZ2P calculations [fcorr: corrected according to Eq. (4)]. CISD data from Ref. 9 are given for comparison.
FIG. 3.
Isosurface plots of orbitals around the HOMO-LUMO gap involved in some of the low-lying CT excitations of the ethylene-tetrafluoroethylene complex (ascending orbital energies from left to right;
distance: ).
FIG. 4.
Adiabatic excited-state potential energy curves (solid lines) for irrep of the ethylene-tetrafluoroethylene complex (SAOP/TZP; zero point: ground-state energy at ). Top: no kernel correction; bottom:
kernel correction applied. Labels correspond to the character of the excitation at a distance of ; the character of the excitations may change due to avoided crossings. In the lower diagram, also a
pure -like curve for the state (dotted line; shifted by for clarity of presentation) as well as “intuitive” diabatic states are shown. The latter curves connect data points of states with similar
characters (dashed lines; shifted by for clarity of presentation).
FIG. 5.
Adiabatic excited-state potential energy curves for irrep of the ethylene-tetrafluoroethylene complex (SAOP/TZP; zero point: ground-state energy at ). Top: no kernel correction; bottom: kernel
correction applied. Labels correspond to the characters of the excitation at a distance of ; the character of the excitations may change due to avoided crossings.
FIG. 6.
Adiabatic excited-state potential energy curves (solid lines) for irrep of the ethylene-tetrafluoroethylene complex (CC2/TZVP; zero point: ground-state energy at ). We also show a pure -like curve
for the CT-state (dotted line; shifted by for clarity of presentation) as well as the “intuitive” diabatic potential energy curve for the lowest CT-like transition (dashed lines; shifted by for
clarity of presentation). For short distances, the character of this excitation spreads over the three lowest excitations in this irrep (indicated by additional dashed lines).
FIG. 7.
Isosurface plots of the orbitals of the ethylene-tetrafluoroethylene complex showing a pronounced mixing for a distance of .
FIG. 8.
Excitation energies obtained for different numbers of optimized states in irrep of the ethylene-tetrafluoroethylene complex. Left: default guess (orbital energy differences) used to construct guesses
for the lowest excitations; right: corrected guess [Eq. (16)] applied.
FIG. 9.
Structure of the acetone-water cluster and isosurface plot of one of the orbitals with a partial lone pair character .
FIG. 10.
Spectra (SAOP/TZP/DZ) of the acetone∙20 cluster shown in Fig. 9 from a conventional TDDFT calculation (“no correction”) as well as from two calculations using the asymptotic correction to the
coupling matrix with different values of the switching parameter . The spectra are modeled by applying a Gaussian broadening of (dotted lines) and (solid lines). For the spectra with a half width of
, also the positions of the maxima are indicated.
Table I.
Number of matrix-vector products needed to converge roots (irrep ) in the TDDFT calculation. A: default zero-order guess used to construct lowest-energy eigenvectors and preconditioner; B: guess
vectors based on corrected guess energies [Eq. (16)]; and C: guess vectors and preconditioner based on Eq. (16). Note that scheme A converges to eigenvalues different from those obtained in schemes B
and C for small (see Fig. 8).
Table II.
Excitation energies (SAOP/TZP/DZ; in units of eV) of the lowest transitions of the acetone-water cluster shown in Fig. 9 from a conventional TDDFT calculation (“conv.”) and calculations with the
asymptotic correction. In the latter case, we either used the default switching parameter or a larger value of . Also given are the oscillator strengths (in a.u.) from the conventional calculation
and the dominant orbital contributions; the orbitals are characterized in Table III.
Table III.
Characterization of the orbitals (SAOP/TZP/DZ) of the acetone-water cluster shown in Fig. 9.
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/124/21/10.1063/1.2197829","timestamp":"2014-04-18T12:08:09Z","content_type":null,"content_length":"97797","record_id":"<urn:uuid:faad2fc3-7874-4865-8504-efae26322d23>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra
KASKASKIA COLLEGE
MATH 134 COLLEGE ALGEBRA
INSTRUCTOR: ERIC HOFELICH
Office Hours: MW 9:00am -9:30am, 12:15pm - 1:00pm
TR 8:30am - 9:30am, 1:00pm - 2:15pm
OFFICE LOCATION: ST. – 116
OFFICE PHONE: 545-3359
PLACEMENT REQUIREMENTS: Math 107 or ACT Math Score of 23-25, or KC Asset Exam Score of 41-55 on Int.Alg. test
COURSE DESCRIPTION: This course will consider basic algebraic operations and expand their use to cover major topics of factoring; working with exponents; solving equations, including linear,
quadratic and systems; graphing; and functions.
TEXTBOOK: College Algebra, by Larson, Hostetler (7th edition, 2007)
EVALUATION: Five 50-minute exams will be given during the semester. 100 pts. Each.
Test 1 Chapter 1 (sec. 1.4 – 1.7) Quadratic Equations and Inequalities
Chapter 4 (sec. 4.3 - 4.4) Conics
Test 2 Chapter 2 Functions and Graphs
Test 3 Chapter 3 Polynomial Functions
Test 4 Chapter 5 Exponential and Logarithmic Functions
Chapter 6 Systems of Linear Equations
Test 5 Chapter 7 Matrices and Determinants
Homework and announced/unannounced quizzes 100 pts.
The lowest exam score may be dropped
Thus, TOTAL POINTS FOR THE CLASS would be 500 pts.
Grades will be assigned as follows:
450 – 500 A, 400 – 449 B, 350 – 399 C, 300 – 349 D, below 300 F
CHEATING POLICY: If caught cheating in any way, the student will receive an F for the final grade.
1. The College Enhancement Center @ Kaskaskia College does tutoring 8-5pm, Monday through Friday
The following web sites will be extremely helpful with many of the concepts of college algebra. I highly recommend you visit and try these sites out.
• Geocities.com (http://www.geocities.com/CapeCanaveral/Launchpad/2426/index1.html)
You need to scroll down and select the pop-up menu to see the index of lessons and practice quizzes on a wide variety of college algebra concepts. This is a must-see site. Highly recommended for you to
see what is available.
• Purplemath.com (http://www.purplemath.com/modules/index.htm)
Scroll down to advanced algebra topics to find help with an extremely large variety of college algebra concepts. You should review this site to observe what topics are covered and note when you
will need them.
• Expage.com (http://expage.com/teachermathpage)
Scroll down to the all about algebra link to access a wide variety of useful algebra links.
• Sosmath.com (http://www.sosmath.com/algebra/algebra.html)
This site provides explanations on quadratic functions (completing the square, quadratic formula, etc), composite functions, etc. Scroll down to see the index of lessons. You can also get some
good explanations on factoring as a review for chapter 1 of the text. There are also lessons on rational functions, polynomial functions, inverse functions, exponential functions, and logarithmic
ATTENDANCE POLICY: To be successful in a math course, attendance would be very important, almost critical. If more than two weeks of classes are missed without a valid excuse ( death in family,
hospitalization, nuclear blast, etc.) I reserve the right to withdraw you from class with an F. If you know in advance that you cannot attend class on a certain day, you may possibly get my prior
approval. There are no make-up exams or quizzes. If you come to class late, you will not receive extra time for exams or quizzes. You must take the Final Exam (test 5) to have your lowest test score
Math 134 College Algebra Outcomes
After successful completion of Math 134 a student should be able to perform the following at a 70% success rate. (C or better)
Find the domain & range of a function from its equation and graph.
Sketch a graph of a function using transformations.
Find combinations and composition of functions.
Determine if a function has an inverse.
Find the inverse of a function algebraically and graphically.
Sketch the graph of polynomial functions by applying the leading coefficient test and rational zero test.
Use the remainder theorem and synthetic division to evaluate a polynomial.
Perform operations with complex numbers and write the results into standard form.
Sketch the graph of a rational function.
Sketch the graph of an exponential or logarithm function.
Use the compound interest formula.
Evaluate logarithms by using the change of base formula.
Solve exponential and logarithmic equations.
Solve linear systems of equations in two variables graphically and algebraically.
Solve a linear system of three variables using Gauss-Jordan elimination and matrices.
Perform basic operations with matrices.
Use an inverse matrix to solve a system of linear equations.
Evaluate the determinant of a 2x2 or 3x3 matrix.
|
{"url":"http://kaskaskia.edu/EHofelich/math134.html","timestamp":"2014-04-19T09:38:47Z","content_type":null,"content_length":"7776","record_id":"<urn:uuid:20c7f76d-63da-42b6-8223-829f171ca992>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Python - Interpolating between lines of data
I have data on a 2d grid characterized by points (X, Y, Z). The X and Y values indicate each point's position and Z is "height" or "intensity" at each point.
My issue is that my data coordinates along the X axis are extremely closely spaced (~1000 points), while my Y coordinates are spread out (~50 points). This means that when plotted on a scatter plot,
I essentially have lines of data with an equal amount of blank space between neighboring lines.
Example of how my data is spaced on a scatter plot:
I want to interpolate these points to get a continuous surface. I want to be able to evaluate the "height" at any position on this surface. I have tried what seems like every scipy interpolation
method and am not sure of what the most "intelligent" method is. Should I interpolate each vertical slice of data, then stitch them together?
I want as smooth a surface as possible, but need a shape preserving method. I do not want any of the interpolated surface to overshoot my input data.
Any help you can provide would be very helpful.
As I think about the problem more, it seems that interpolating the vertical slices and then stitching them together wouldn't work. That would cause the value along a vertical slice to only be
affected by that slice. Wouldn't that result in an inaccurate surface?
python numpy matplotlib scipy
What do you mean by 'overshoot'? – Jon Cage Jan 2 '13 at 13:28
Did you ever get this sorted out? – tcaswell Oct 5 '13 at 0:51
2 Answers
If you're looking for the surface, my assumption would be that you can get by using vertical slices, and then plotting the filled out data.
I recommend this tutorial. The guts of it are (lifted from link):
>>> import numpy as np
>>> from scipy.interpolate import griddata
>>> # points: (N, 2) array of scattered (x, y) sample positions; values: their heights
>>> grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]
>>> grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')
>>> grid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')
>>> grid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')
Which will get you three different levels of interpolation of your data (doc).
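Since the question's samples fall on dense lines with only ~50 distinct values in the sparse direction, a hedged alternative (not from the original thread) is a shape-preserving 1-D PCHIP fit along that sparse direction, column by column; unlike a cubic spline or method='cubic', PCHIP avoids overshooting between the data points. The array layout below is an assumption — Z[i, j] is the height at (x[j], y[i]) on a rectangular grid:
import numpy as np
from scipy.interpolate import PchipInterpolator

def fill_sparse_direction(x, y, Z, y_new):
    # Interpolate each x-column across the ~50 known y values onto a finer y grid.
    out = np.empty((len(y_new), len(x)))
    for j in range(len(x)):
        out[:, j] = PchipInterpolator(y, Z[:, j])(y_new)
    return out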
Thanks for the link. This is actually one of the many methods I am implementing already. Cubic is out of the questions since it overshoots the input data. Linear interpolation doesn't
seem to fit very well, but it is an option. – user1764386 Jan 1 '13 at 2:07
Did you also look at the spline fitting further down that page? I have found the 1D splines to work very well for noisy data. Can you smooth your data at all (that will help with the
over-shooting which is caused by fluctuations in the 2nd derivative of the data). – tcaswell Jan 1 '13 at 3:32
|
{"url":"http://stackoverflow.com/questions/14106912/python-interpolating-between-lines-of-data","timestamp":"2014-04-18T01:29:40Z","content_type":null,"content_length":"72969","record_id":"<urn:uuid:5c480f57-9cbb-4c0d-a70c-ad3112149a72>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Programming Language Semantics Seminar
Programming Language Semantics Seminar, 1997-98
Mitchell Wand
College of Computer Science, Northeastern University
360 Huntington Avenue #161CN,
Boston, MA 02115
Internet: wand@ccs.neu.edu
Phone: (617) 373-2072 / Fax: (617) 373-5121
Mon 7/6/98
Erik Meijer, Utrecht University, will present:
Haskell as an ActiveX Scripting Engine
Microsoft's ActiveX Scripting Architecture is a set of standard COM interfaces that defines a language independent protocol for connecting a scripting engine to a host application. Microsoft provides
standard scripting engine implementations for JScript and VBScript to script standard scripting hosts such as Internet Explorer and the Windows Scripting Host.
Based on Yale-Nottingham Hugs, we have implemented HaskellScript, a scripting engine for the purely functional language Haskell. In this talk we discuss HaskellScript, including implementation
features and design decisions, and we will give numerous examples of how to use HaskellScript for client-side web scripting and automating application such as Word, Excel, and Visio. In particular we
will compare the implementation of the "Nervous Text" applet written in HaskellScript to ones written in Java and JavaScript, and show how Haskell-specific features such as monads and
list-comprehension make it possible to come up with a concise solution.
Wed, 7/1/98
Greg Sullivan will present
"Typechecking and Modules for Multimethods", by Craig Chambers and Gary T. Leavens, (TOPLAS, Nov. 1995)
Abstract: Two major obstacles hindering the wider acceptance of multi-methods are concerns over the lack of encapsulation and modularity and the absence of static typechecking in existing
multi-method-based languages. This paper addresses both of these problems. We present a polynomial-time static typechecking algorithm that checks the conformance, completeness, and consistency of a
group of method implementations with respect to declared message signatures. This algorithm improves on previous algorithms by handling separate type and inheritance hierarchies, abstract classes,
and graph-based method lookup semantics. We also present a module system that enables independently-developed code to be fully encapsulated and statically typechecked on a per-module basis. To
guarantee that potential conflicts between independently-developed modules have been resolved, a simple well-formedness condition on the modules comprising a program is checked at link-time. The
typechecking algorithm and module system are applicable to a range of multi-method-based languages, but the paper uses the Cecil language as a concrete example of how they can be applied.
Wed, 5/27/98
Allyn Dimock will continue "Single and Loving It" (with recapitulation for those who weren't there on 5/13).
Abstract of Abstract:
In standard control-flow analyses for higher-order languages, a single abstract binding for a variable represents a set of exact bindings, and a single abstract reference cell represents a set of
exact reference cells. While such analyses provide useful may-alias information, they are unable to answer must-alias questions about variables and cells, as these questions ask about equality of
specific bindings and references.
In this paper, we present a novel program analysis for higher-order languages that answers must-alias questions.
Must-alias information facilitates various program optimizations such as lightweight closure conversion. In addition, must-alias information permits analyses to perform strong updates on abstract
reference cells known to be single.
Wed, 5/20/98
Will Clinger will present his paper "Proper tail recursion and space efficiency." (PLDI 98)
The IEEE/ANSI standard for Scheme requires implementations to be properly tail recursive. This ensures that portable code can rely upon the space efficiency of continuation-passing style and other
idioms. On its face, proper tail recursion concerns the efficiency of procedure calls that occur within a tail context. When examined closely, proper tail recursion also depends upon the fact that
garbage collection can be asymptotically more space-efficient than Algol-like stack allocation.
This paper offers a formal and implementation-independent definition of proper tail recursion for Scheme. It also shows how an entire family of reference implementations can be used to characterize
safe-for-space properties, and proves the asymptotic inequalities that hold between them.
The paper is available.
Wed, 5/13/98
Will Clinger will present: "Set Constraints for Destructive Array Update Optimization" by Wand and Clinger. This is a preview of his ICCL98 presentation.
Allyn Dimock will present "Single and Loving It: Must-Alias Analysis for Higher-Order Languages" by Jagannathan, Thiemann, Weeks, and Wright. (POPL98)
Abstract of Abstract:
In standard control-flow analyses for higher-order languages, a single abstract binding for a variable represents a set of exact bindings, and a single abstract reference cell represents a set of
exact reference cells. While such analyses provide useful may-alias information, they are unable to answer must-alias questions about variables and cells, as these questions ask about equality of
specific bindings and references.
In this paper, we present a novel program analysis for higher-order languages that answers must-alias questions.
Must-alias information facilitates various program optimizations such as lightweight closure conversion. In addition, must-alias information permits analyses to perform strong updates on abstract
reference cells known to be single.
We will meet in 107 CN from 10 to 12, as usual.
Wed, 4/29/98
Will Clinger will talk about numerical benchmarks, mostly for functional and higher order programming languages, using material from five sources:
Jeffrey Mark Siskind's EM benchmarks, recently posted to comp.lang.ml, comp.lang.functional, comp.lang.scheme, and comp.lang.lisp.
Hartel, Feeley, et al. Benchmarking implementations of functional languages with "Pseudoknot", a float-intensive benchmark. Journal of Functional Programming 6(4), pages 621-655, 1996.
Hammes, Sur, and B\:ohm. On the effectiveness of functional language features: NAS benchmark FT. Journal of Functional Programming 7(1), pages 113-124, 1997.
Andrew Appel. Intensional equality ;-) for continuations. ACM SIGPLAN Notices 31(2), pages 55-57, February 1996.
William Kahan. The baleful effect of computer languages and benchmarks upon applied mathematics, physics and chemistry. The John von Neumann Lecture at the 45th annual meeting of SIAM, Stanford, 15
July 1997.
Wed, 4/22/98
Patrik Jansson, Chalmers Institute (Sweden)
Polytypic Programming
Many functions have to be written over and over again for different datatypes, either because datatypes change during the development of programs, or because functions with similar functionality are
needed on different datatypes. Examples of such functions are pretty printers, pattern matchers, equality functions, unifiers, rewriting functions, etc. Such functions are called polytypic functions.
A polytypic function is a function that is defined by induction on the structure of user-defined datatypes. This talk introduces polytypic functions, shows how to construct and reason about polytypic
functions and says a few words about the implementation of the polytypic programming system PolyP.
PolyP extends a functional language (a subset of Haskell) with a construct for writing polytypic functions. The extended language type checks definitions of polytypic functions, and infers the types
of all other expressions. Programs in the extended language are translated to Haskell.
Wed, 4/1/98
John Kalamatianos (ECE) Temporal-based Procedure Reordering for Improved Instruction Cache Performance
As the gap between memory and processor performance continues to grow, it becomes increasingly important to exploit cache memory effectively. Both hardware and software techniques can be used to
better utilize the cache. Hardware solutions focus on organization, while most software solutions investigate how to best lay out a program in the available memory space.
In this talk we present a new link-time code reordering algorithm targeted at reducing the frequency of misses in the cache. Past work has focused on eliminating first-generation cache conflicts
(i.e., conflicts between a procedure, and any of its immediate callers or callees) based on calling frequencies. In this work we exploit procedure-level temporal interaction, using a structure called
a Conflict Miss Graph (CMG). In the CMG every edge weight is an approximation of the worst-case number of misses two competing procedures can inflict upon one another. We use the ordering implied by
the edge weights to apply color-based mapping and eliminate conflict misses between procedures lying either in the same or in different call chains.
Using programs taken from SPEC 95, Gnu applications, and C++ applications, we have been able to improve upon previous algorithms, reducing the number of instruction cache conflicts by 20% on average
compared to the best procedure reordering algorithm.
Wed, 3/4/98
Mitch Wand will present "A Formal Basis for Architectural Connection" by Allen and Garlan (TOSEM, 7/97)
Condensed Abstract:
We present a formal approach to one aspect of architectural design: the interactions among components. The key idea is to define architectural connectors as explicit semantic entities. These are
specified as a collection of protocols that characterize each of the participant roles in an interaction and how these roles interact. We provide a formal semantics and show how this leads to a
system in which architectural compatibility can be checked in a way analogous to type-checking in programming languages.
Wed, 2/25/98
Allyn Dimock will present Mossin's "Exact Flow Analysis" (SAS '97)
We present a flow analysis based on annotated types. The analysis is exact in the following sense: if the analysis predicts a redex, then there exist a reduction sequence (using standard reduction
plus context propagation rules) such that this redex will be reduced. The precision is accomplished using intersection typing.
It follows that the analysis is non-elementary recursive - more surprisingly, the analysis is decidable. We argue that the specification of such an analysis provides a good starting point for
developing new flow analyses and an important benchmark against which other flow analyses can be compared. Furthermore, we believe that the methods employed for stating and proving exactness are of
independent interest: they provide methods for reasoning about the precision of program analyses.
Wed, 2/18/98
John Ramsdell will present "The Tail-Recursive SECD Machine"
A tail recursive implementation of a programming language allows the execution of an iterative computation in constant space, even if the iterative computation is described by a syntactically
recursive procedure. With a tail recursive implementation, iteration can be expressed using the ordinary procedure call mechanics, so that special iteration constructs are useful only as syntactic sugar.
The standard which specifies some programming languages requires implementations that are tail recursive. With these languages, a new style of programming is available which relies on the fact that
implementations have this property. Given the importance of tail recursion in these languages, it is distressing that every standard simply requires tail recursive implementations without defining
the requirement.
In this talk, I will present two versions of an abstract machine for Landin's functional programming language ISWIM. One of the two machines is tail recursive and a comparison between the two will
show the essence of tail recursion.
An automated correctness proof of both abstract machines has been performed using the Boyer-Moore Theorem Prover. The correctness proof for the tail recursive abstract machine suggests how to define
part of the tail recursion requirement for real programming languages. A soon to be released revision of the Scheme Programming Language standard will contain text defining its tail recursion
requirement, which was motivated by this work. The talk will conclude with a description of the requirement and how it was motivated by the correctness proof.
Paper: The Tail Recursive SECD Machine
URL: http://www.ccs.neu.edu/home/ramsdell/papers/trsecd.ps.gz
Wed, 2/11/98
Ali Ozmez will present "Linear-time Subtransitive Control-Flow Analysis" by Heintze & McAllester (PLDI '97).
Wed, 2/4/98
Lars Hansen will present "The Measured Cost of Copying Garbage Collection Mechanisms," by Michael W. Hicks, Jonathan T. Moore, and Scott M. Nettles (ICFP 1997, p 292-305)
We examine the costs and benefits of a variety of copying garbage collection mechanisms across multiple architectures and programming languages. Our study covers both low-level object representation
and copying issues as well as the mechanisms needed to support more advanced techniques such as generational collection, large object spaces, and type-segregated areas.
In general, we found that careful implementation of GC mechanisms can have a significant benefit. For a simple collector, we measured improvements of as much as 95%. We then found that while the
addition of advanced features can have a sizeable overhead (up to 15%), the net benefit is quite positive, resulting in additional gains of up to 42%. We also found that results varied depending upon
the platform and language. Machine characteristics such as cache arrangements, instruction set (RISC/CISC), and register pool were important. For different languages, average object size seemed to be
most important.
[I will include a short overview of the basics of copying GC in the talk, so attendees should not worry about not being up on their GC algorithms.]
Wed, 1/7/98
Ali Ozmez will present:
Linear-time Subtransitive Control-Flow Analysis by Heintze and McAllester (PLDI '97)
We present a linear-time algorithm for bounded-type programs that builds a directed graph whose transitive closure gives exactly the results of the standard (cubic-time) Control-Flow Analysis
algorithm. Our algorithm can be used to list all function calls from all call sites in (optimal) quadratic time. More importantly, it can be used to give linear-time algorithms for CFA-consuming
applications such as: effects analysis, k-limited CFA, and called-once analysis.
Wed 12/3
Johan Ovlinger will present "Three Approaches to Type Parameterization in Java"
This will summarize three papers on this subject: Pizza (by Odersky and Wadler), the Myers-Bank-Liskov proposal (both in POPL 97) and the Ageson-Freund-Mitchell proposal (OOPSLA 97).
Wed 11/19
Mira Mezini will present "Variation-Oriented Programming: Beyond Classes and Inheritance"
In my work I argue that the basic mechanisms of object-oriented languages, classes and inheritance, while perfectly enabling software to be organized in a way that allows incremental modeling of new
kinds of abstractions, do not suffice when other kinds of behavior variations are needed. Variations that depend on factors, such as the internal state of objects, perspectives or aspects,
application requirements, or characteristics of the environment, are not as properly modeled with classes and inheritance alone. For this reason, a new language model called Rondo is proposed. It
enriches the design space of object-oriented languages with more powerful mechanisms that enable Rondo software to be more robust in terms of extensibility and reusability.
We will meet in 107 CN from 1000 til 1200, as usual.
Wed 11/12
Harry Mairson will present his paper: "Parallel beta reduction is not elementary recursive" (POPL '98, joint work with Andrea Asperti).
Condensed abstract:
We analyze the inherent complexity of implementing Levy's notion of optimal evaluation for the lambda-calculus, where similar redexes are contracted in one step via ``parallel beta-reduction.'' We
prove that the cost of parallel beta-reduction is not bounded by any Kalmar-elementary recursive function. Not merely do we establish that the parallel beta-step cannot be a unit-cost operation, we
demonstrate that the time complexity of implementing a sequence of $n$ parallel beta-steps is not bounded as $O(2^n)$, $O(2^{2^n})$, $O(2^{2^{2^n}})$, or in general, $O(K_p(n))$ where $K_p(n)$ is a
fixed stack of $p$ 2s with an $n$ on top.
The POPL98 version can be retrieved at the URL: http://www.cs.brandeis.edu/~mairson/Papers/am97-abs.ps.gz
We will meet in 107 CN from 1000 til 1200, as usual.
Wed 10/29/97
Arthur Nunes will continue his presentation on "Compilation and Equivalence of Imperative Objects", by Andy Gordon, Paul Hankin, and S. Lassen. We had only a brief introduction on 10/22, so newcomers
can probably catch up.
Condensed abstract:
We adopt the untyped imperative object calculus of Abadi and Cardelli as a minimal setting in which to study problems of compilation and program equivalence that arise when compiling object-oriented
languages... Our first result is a direct proof of the correctness of compilation to a stack-based abstract machine via a small-step decompilation algorithm. Our second result is that contextual
equivalence of objects coincides with a form of Mason and Talcott's CIU equivalence; the latter provides a tractable means of establishing operational equivalences. Finally, we prove correct an
algorithm, used in our prototype compiler, for statically resolving method offsets. This is the first study of correctness of an object-oriented abstract machine, and of operational equivalence for
the imperative object calculus.
For NU users, the paper is available at /home/wand/people/gordon/imperative-objects.ps .
Wed 10/8/97
Allyn Dimock will present "From Polyvariant Flow Information to Intersection and Union Types" by Jens Palsberg and Christina Pavlopoulu (POPL 98)
Condensed Abstract:
Many polyvariant program analyses have been studied in the 1990s... The idea of polyvariance is to analyze functions more than once and thereby obtain better precision for each call site. In this
paper we present the first formal relationship between polyvariant analysis and standard notions of type. We present a parameterized flow analysis and prove that if a program can be safety-checked by
a finitary instantiation of the flow analysis, then it can also be typed in a type system with intersection types, union types, subtyping, and recursive types, but no polymorphism.
Wed 10/1/97
*** OPENING MEETING ***
We will have short presentations on open problems and projects from me, Will, and maybe others.
We'll deal with scheduling problems, etc; hopefully we can continue with our traditional Wednesday morning meeting time.
And of course we'll get some folks signed up to give talks. I've got a whole new stratum on the top of the Book of Sand, in case anyone is looking for things to present.
|
{"url":"http://www.ccs.neu.edu/home/wand/pl-seminar/97-98.html","timestamp":"2014-04-16T04:58:28Z","content_type":null,"content_length":"21119","record_id":"<urn:uuid:0a849fd2-12de-48d0-8213-74466a0328e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[sbml-discuss] response to "fast" attribute action item
Schaff,Jim Schaff at NEURON.UCHC.EDU
Wed Mar 29 14:58:37 PST 2006
(same as previous post - this time with "[sbml-discuss]" in title ... sorry).
I was given an action item at the last SBML workshop to clarify the meaning of the "fast" attribute for reactions within SBML. In response, I have both a concise definition (below) and a supplemental guide that discusses some of the implementation issues for continuous systems (see attached).
---- A concise definition that could be included in spec ----
The set of reactions that have the "fast" attribute set to "true" defines those reactions whose time scales are sufficiently fast, relative to the remaining reactions, that together they form a subsystem that is well described by a pseudo steady state approximation. Under this approximation, any initial condition or perturbation away from this pseudo steady state would relax infinitely fast. It is important to note that the correctness of this approximation requires a significant separation of time scales.
----- Some Discussion -----
I didn't include (in the attachment) any discussion of automated time scale analysis algorithms which could determine which reactions should be "fast". The "fast" attribute is an assumption to be encoded in a model rather than a numerical technique used to solve the "full" problem. Although the Virtual Cell implements the PSSA slightly differently (as a time splitting method that is very useful for solving PDEs ... without splitting some inconvenient spatial operators can appear), the paper that I provided describes an approach to formulate these systems as a traditional DAE (although the derivation is not rigorous).
On the one hand, just like many other aspects of SBML models, consistency of modeling assumptions (e.g. proper use of HMM kinetics) is not yet within the scope of SBML language itself. On the other hand, when introducing a "new" feature (although "fast" is not technically new SBML), a "best practices" guide should be appropriately informative.
In one class of applications, the modeler is explicitly introducing one or more reactions that are assumed to be in "fast equilibrium", and where the actual time scale is assumed to be much faster than the other dynamics, but need not be exactly known (e.g. only Kd's known rather than both kforward and kreverse).
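As a concrete illustration of this first class (not taken from the post or from any SBML tool, and with made-up names): for a single fast reversible binding step A + B <-> C with dissociation constant Kd, the pseudo steady state assumption replaces the fast ODE with the algebraic condition [A][B] = Kd[C], solved subject to the conserved totals. A hedged Python sketch:
import numpy as np

def fast_equilibrium(A_tot, B_tot, Kd):
    # Partition the totals between free A, free B and complex C under A*B = Kd*C.
    # C satisfies C^2 - (A_tot + B_tot + Kd)*C + A_tot*B_tot = 0; take the physical root.
    b = A_tot + B_tot + Kd
    C = 0.5 * (b - np.sqrt(b * b - 4.0 * A_tot * B_tot))
    return A_tot - C, B_tot - C, C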
In another class of applications, the time scale of all processes are well described, but inconveniently fast and very well separated from the set of slower dynamics in the model. In this class the PSSA can be used to reduce the order of the system (at least eliminate one parameter .. kforward or kreverse) and to allow an efficient solution where any initial fast transients are not resolved (like a fast boundary layer), but subsequent dynamics are faithfully reproduced.
So, when is "fast" fast enough? When does a modeling assumption become sufficiently justified (or wholy inappropriate)? There are techniques for determining time scales based on a local linearization along a trajectory (evaluating Jacobians along the solution) that I have seen referenced but have not investigated myself. There are other techniques from combustion (lots of fast intermediates), and other more heuristic methods of time-scale based model reduction that we have kicked around. A problem is that for nonlinear systems, the time scales are time/state dependent. For automatically analyzing time scales, maybe others in the SBML community have more actual experience (I could look into this but find other obligations to be calling me).
The best answer is that this option should not be used unless the modeler is aware of the implications.
Jim Schaff
Software Lead - Virtual Cell Project (http://vcell.org)
Richard D. Berlin Center for Cell Analysis and Modeling
University of Connecticut Heath Center
263 Farmington Ave, MC-1507
Farmington, CT 06030
-------------- next part --------------
A non-text attachment was scrubbed...
Name: SBML_FastReactions_schaff.doc
Type: application/msword
Size: 62464 bytes
Desc: SBML_FastReactions_schaff.doc
Url : https://utils.its.caltech.edu/pipermail/sbml-discuss/attachments/20060329/2aff60ef/SBML_FastReactions_schaff-0001.doc
|
{"url":"https://lists.caltech.edu/pipermail/sbml-discuss/2006-March/001638.html","timestamp":"2014-04-20T20:56:49Z","content_type":null,"content_length":"7074","record_id":"<urn:uuid:d71f7bf4-103f-49e1-9d9f-071d5b23336b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What should a Physics Major Know?
An incomplete list
Based on quick analysis of two old Physics GRE exams
Atomic - Bohr model (energies)
Atomic - Bohr Model - Def'n of K transition
Atomic - Bohr model, 1/n^2 dependence for energy
Atomic - Emission lines in a magnetic field, general properties
Atomic - filling the levels (order, total number)
Atomic - first order Stark effect for H in ground state.
Atomic - ground state spin of Helium atom
Atomic - Hydrogen atom - energy levels
Atomic - life-time broadening
Atomic - notation ("^3S ground state")
Atomic - notation (1s^2 2s^2 2p^6 , etc)
Atomic - positronium (Bohr model for)
Atomic - Scattering - cross section, differential (order of mag. from data)
Atomic - selection rules (electric dipole)
Atomic - typical K series transition energies
Circuits - RC time constant
Circuits - RLC circuit with damping - natural frequency of
Circuits - current divider
Circuits - finding currents in branches
Circuits - impedance matching
Circuits - LR Time constant
Circuits - Ohm's law
Circuits - P = I^2 R
Circuits - parallel and series resistors, equivalents
Circuits - voltage divider, voltages in simple
Constants - Speed of light
Constants - hc (or equivalent)
Constants - mass of electron (511 keV/c^2 )
E&M (Waves) - reflection of plane waves from conductor
E&M - accelerating charge through a potential (Energy = qV)
E&M - boundary conditions at conductor
E&M - Capacitor - energy stored
E&M - charged particle in E and M fields, general properties of motion
E&M - Coulomb's law (in MKS)
E&M - Current in wire, drift velocity
E&M - current loop is magnetic dipole (far away)
E&M - Cyclotron Frequency
E&M - definition of capacitance
E&M - direction of M field from current carrying wire
E&M - effect of dielectric on capacitance
E&M - F = q v cross B
E&M - fall off of fields from dipole
E&M - field from finite charged rod, along axis
E&M - Gauss's Law - indicator of charge present
E&M - Gauss's law, field inside sphere
E&M - general knowledge - exp. to determine exponent in coulomb's law accurately
E&M - image charges (for infinite conducting plane)
E&M - internal resistance of battery
E&M - Magnetism - Ampere's law (simple path)
E&M - Magnetism - Faraday's law of induction
E&M - Magnetism - force between two (parallel) wires
E&M - Magnetism - Lenz's law
E&M - Maxwell's equations
E&M - Maxwell's equations - meaning of terms
E&M - polarization of dielectric, surface charge density
E&M - Potential difference - from electric field and distance (simple cases)
E&M - radiation from accelerating charge (general properties)
E&M - Radiation from oscillating charge, general properties
E&M - Superposition principle
E&M - surface charge density on conductor with charges nearby
General - Angular frequencies (omega)
General - Circumference of circle
General - Complex exponentials
General - Exponentials and natural logarithms - time constants
General - field lines, interpreting
General - Fourier series - simple cases (odd/even fcn, odd/even harmonics)
General - Matrices - moving rows and columns
General - Matrices - recognizing eigenvalues
General - meaning of "divergence is zero"
General - Right Hand Rule for cross products
General - Units (checking)
General - Vectors - i, j, k notation
General - Vectors - resolving components
General - Volume of sphere
Mechanics - accel near earth = g
Mechanics - Circular motion - separating components
Mechanics - Circular motion - uniform, description in x,y coordinates.
Mechanics - Circular motion, uniform, F=ma for
Mechanics - Collisions, Simple
Mechanics - Cons. Of Energy
Mechanics - Conservation of angular momentum
Mechanics - Conservation of momentum
Mechanics - Constant acceleration
Mechanics - Eigenfrequencies (of normal modes)
Mechanics - elastic collisions (cons. of p)
Mechanics - extremums (variational methods)
Mechanics - F=dp/dt , mass not constant (rocket problem)
Mechanics - Falling body (constant acceleration)- with initial conditions
Mechanics - Falling body with air resistance - general properties
Mechanics - Forces - resolving components
Mechanics - Forces - resolving components - tension in string
Mechanics - frequency of harmonic oscillator (mass on spring)
Mechanics - Friction - simple model, maximum static
Mechanics - friction, dynamic
Mechanics - getting F from V
Mechanics - Gravitation - Force inside spherical shell is zero
Mechanics - Gravitation - Universal Law (1/r^2 dependence)
Mechanics - Hamiltonian, writing down for simple case
Mechanics - Impulse - change of speed by
Mechanics - inelastic collision
Mechanics - Kinetic energy ( 1/2 mv^2)
Mechanics - Lagrangians - def'n of generalized momentum
Mechanics - Lagrangians - what is for simple cases (including simple constraint)
Mechanics - Lagrangians - when is generalized momentum conserved
Mechanics - Mass - computed from volume and density
Mechanics - Moment of inertia - simple systems
Mechanics - Moments of Inertia - Rod about its end
Mechanics - Normal Modes
Mechanics - Orbits, relating periods for different radii
Mechanics - Oscillations, velocity from A and f
Mechanics - Pendula
Mechanics - Potential energy from F(r)
Mechanics - Reduced mass (positronium)
Mechanics - Rigid pendulum - frequency for small oscillations (changing I)
Mechanics - Rolling bodies (down incline)
Mechanics - Rolling bodies - contact point acceleration
Mechanics - Rotating bodies - kinetic energy (1/2 I omega^2)
Mechanics - Rotating bodies - tau = I alpha (for constant alpha)
Mechanics - satellite orbits, perturbing circular
Mechanics - Simple pendulum - omega = sqrt(g/l)
Mechanics - Small angle approximation (pendula)
Mechanics - Small oscillation approximation - getting frequencies of
Mechanics - Speed = distance/time
Mechanics - Symmetry - use of
Mechanics - Torques - as vectors
Mechanics - Torques - balancing
Mechanics - Torques - r cross F
Mechanics - Work, simple computations
Mechanics - work-energy theorem
Mechanics - zero of potential arbitrary
Nuclear - basic nuclear decay equations
Nuclear - binding energies, general trends in periodic table
Nuclear - Cerenkov Radiation - conditions for
Nuclear - pair production, general
Nuclear - Radioactive decay - half life from counts per minute
Nuclear - scattering cross section
Nuclear - types of decay
Optics - diffraction gratings
Optics - diffraction limit
Optics - group vs phase velocity in materials
Optics - how do holograms work?
Optics - lens coating thickness, understanding non-reflective
Optics - lens formula (simple telescope)
Optics - Michelson interferometer (basic idea, conditions for fringes)
Optics - multiple polarizers in path
Optics - phase velocity in dielectric
Optics - polarizers, behavior of
Optics - refractive index (speed of light)
Particles - Muon, general properties
Particles - what is a "decay due to the weak interaction"
Practical - Amplifier gain fall-off from log-log plot
Practical - Count rate errors ( N ^1/2 ), nuclear counting
Practical - Errors, combining two uncorrelated errors for total error
Practical - Ideal Diode behavior
Practical - Lissajous figures - interpreting
Practical - Mass of Earth - estimate within 3 orders of magnitude given radius
Practical - nuclear radiation - typical penetration depths for various types
Practical - OR gate, what is
Practical - Oscilloscope (what it shows)
Quantum - adding angular momenta (max and min possible)
Quantum - Bohr magneton (mass dependence)
Quantum - common particles, fermion or boson?
Quantum - commutation - simultaneous eigenvalues
Quantum - Compton scattering - basics of
Quantum - computing probabilities
Quantum - deBroglie wavelength
Quantum - E[photon] = hc/lambda (Need to know value for hc or equivalent!)
Quantum - finite square well - general form of wavefunctions
Quantum - form of wavefunctions for H.O. (Odd/Even parity)
Quantum - Franck-Hertz experiment, what does it show?
Quantum - ground state energy (infinite square well)
Quantum - ground state wavefunction of H.O., recognizing
Quantum - Hamiltonian from classical Hamiltonian
Quantum - Harmonic Oscillator - ground state energy of
Quantum - how operators used to get expectation values
Quantum - Hydrogen atom - Spherical harmonics and orbital quantum numbers
Quantum - impact of (electron) spin on properties (of materials)
Quantum - infinite square well - energy eigenvalues
Quantum - infinite square well - momentum of eigenstate = 0
Quantum - infinite wall boundary conditions
Quantum - infinite well, n dependence of E
Quantum - infinite well, perturbation theory, general (odd/even)
Quantum - infinite well, recognize n from graph of wavefunction
Quantum - normalizing a wavefunction, rigid rotator
Quantum - orthogonality (def'n of).
Quantum - Pauli Exclusion Principle
Quantum - photoelectric effect
Quantum - spacing of rotational levels (free rotor)
Quantum - two particle wavefunctions, fermions vs bosons
Quantum - Uncertainty principle
Quantum - Wavefunction of free particle
Solid State - Bragg Reflection
Solid State - conductivities for semiconductors vs metals, general trends and magnitudes
Solid State - Debye and Einstein theory, specific heat from
Solid State - Effective mass from E(k)
Solid State - Fermi temperature - kinetic energy of conduction electron
Solid State - Hall effect, general
Solid State - types of binding in
Solid State - Why E of conduction electrons > kT ?
Special Relativity - conditions to move at c (on mass, spin)
Special relativity - E^2 = (pc)^2 + (mc^2)^2
Special Relativity - Length contraction
Special relativity - Lorentz transformation (invariants)
Special relativity - x^2 + y^2 + z^2 - t^2 = frame independent
Special relativity - space-time interval - computing from coordinates
Special relativity - speed of light = constant
Special relativity - time dilation (half-lives)
Special relativity - Transformation of EM field, general properties
Thermo - (differential) relations between thermo quantities
Thermo - ave energy of particles with 3 states
Thermo - Average energies in equilibrium
Thermo - Blackbody radiation - T^4 law
Thermo - Boltzmann to quantum transition, fermions vs bosons, general
Thermo - Carnot cycle, general properties
Thermo - diatomic gas, specific heat for dumbbell vs masses on spring, general properties
Thermo - entropy of ensemble of two-state particles - high and low T
Thermo - Entropy related to laws of thermo
Thermo - Heat capacitance from internal energy vs T
Thermo - Heat engine - Carnot cycle (work done during)
Thermo - ideal gas, specific heats (why const. Vol & const. Press. Different?)
Thermo - isotherms, def'n
Thermo - phase transitions (qualitative), critical temperatures, co-existence (from diagram)
Thermo - probability of occupation of states
Thermo - Probability related to entropy
Thermo - Specific Heat - diatomic molecule (masses on spring model)
Thermo - Work done by expanding gas (reversible, isothermal)
Waves - Doppler effect (sound), general properties
Waves - group velocity and phase velocity (from dispersion curves)
Waves - group velocity from dispersion relation
Waves - Interference - from soap film (explain)
Waves - Interference - two slit, finite width slit
Waves - Interference from two coherent sources, diff. phases
Waves - reflection of
Waves - single slit diffraction (circular hole)
Waves - Transmission lines - characteristic impedance, terminating in
Waves - Travelling Waves - plane waves
Waves - wavenumbers (k)
|
{"url":"http://www.phy.mtu.edu/q2s/phystopics.html","timestamp":"2014-04-21T14:39:40Z","content_type":null,"content_length":"13132","record_id":"<urn:uuid:dc39aaa8-625a-4e79-8313-b45f802b26f7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiply. (2-square root 2)(3 square root 6+1) Simplify your answer as much as possible.
|
{"url":"http://openstudy.com/updates/4faf2d94e4b059b524fa4199","timestamp":"2014-04-19T15:35:46Z","content_type":null,"content_length":"529816","record_id":"<urn:uuid:a59c753c-6901-413e-832e-31b3bc349a39>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bounded subset of R
September 7th 2010, 10:47 PM
Bounded subset of R
I'm having trouble with this problem. Could someone give me a hand?
Let S be a nonempty subset of R that is bounded above. Show that if sup S is not in S, then for every $\epsilon >0$, the interval $(\sup S-\epsilon, \sup S)$ contains infinitely many elements of S.
September 8th 2010, 12:09 AM
Suppose for some $\epsilon >0$ that $(\sup S-\epsilon, \sup S)$ contains only finitely many points of $S$ (that it is non-empty should be obvious).
September 8th 2010, 06:14 AM
Thanks a lot for your help, CB. I actually tried to assume the contrary as you suggested, but I got stuck. I have $x_1, x_2, \dots, x_n$ in $(\sup S - \epsilon, \sup S)$. So, $\sup S - \epsilon < x_1, x_2, \dots, x_n < \sup S$. I think I should be able to show that this leads to $\sup S$ being in $S$, but I don't know how.
September 8th 2010, 10:05 AM
Thanks a lot for your help, CB. I actually tried to assume the contrary as you suggested, but I got stuck. I have $x_1, x_2, \dots, x_n$ in $(\sup S - \epsilon, \sup S)$. So, $\sup S - \epsilon < x_1, x_2, \dots, x_n < \sup S$. I think I should be able to show that this leads to $\sup S$ being in $S$, but I don't know how.
A finite subset of the reals has a largest element, which will be an upper bound for $S$ and less than ${\text{sup}}(S)$ which is a contradiction etc...
|
{"url":"http://mathhelpforum.com/differential-geometry/155529-bounded-subset-r-print.html","timestamp":"2014-04-19T05:15:10Z","content_type":null,"content_length":"8784","record_id":"<urn:uuid:6028bb9c-6633-47f1-8a45-f7e36871653a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dirac delta function
A function, eh?
Not a proper function, in the strict sense, but very useful all the same. It is defined by its properties when integrated.
Definition: The Dirac delta function δ(x) satisfies (integral) δ(x) dx = 1 if the range of integration includes zero, = 0 if not.
Take a moment to convince yourself that no proper function satisfies this. Note that for any function continuous at zero, (integral) f(x)δ(x) dx = f(0).
What use is it?
The neatest way to convince yourself of its necessity is to think of it as a generalisation of the Kronecker delta:
(sum) f[i]δ[ij] = f[j]
(integral) f(x)δ(x) dx = f(0)
As such, the notion of orthonormal eigenstates in Dirac's own formalism can be generalised from the discrete eigenvalue case to the continuous case:
<i|j> = δ[ij] becomes
<x|y> = δ(x-y)
...which you need for continuous eigenvalues such as position and momentum. This, and the above stated property, make the sum-over-all-intermediate-states rule work out as it should:
|x> = (integral) |y><y|x> dy
Generalisation to R^n:
Definition: δ^(n)(x) = δ(x[1])δ(x[2])...δ(x[n])
...and then if you integrate this over some (n-dimensional) volume, the result is 1 if zero is in the volume, 0 otherwise.
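The same defining properties, restated in standard notation:
$\int_{-\infty}^{\infty} \delta(x)\,dx = 1$, $\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)$, and, as the continuum analogue of $\sum_i f_i\,\delta_{ij} = f_j$, $\int f(x)\,\delta(x-y)\,dx = f(y)$, i.e. $\langle x | y \rangle = \delta(x-y)$.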
|
{"url":"http://everything2.com/title/Dirac+delta+function","timestamp":"2014-04-20T21:14:29Z","content_type":null,"content_length":"20537","record_id":"<urn:uuid:228e74e2-0447-4c25-b9da-8e80026c5d10>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does volt amps = watts?
01-17-08, 01:30 AM #1
Senior Member
Join Date
Oct 2007
Does volt amps = watts?
Pardon my ignorance. I am a mechanic and a master of digging ditches, running conduit, pulling wire and troubleshooting but get easily confused on simple technical issues.
Is VA basically the same as watts?
If you have unity power factor then VA=Watts. This means that the phase angle between the voltage and current is zero. If that is not the case then VA will be larger than Watts.
If you have unity power factor then VA=Watts. This means that the phase angle between the voltage and current is zero. If that is not the case then VA will be larger than Watts.
If the phase angle (whatever that is) is not zero, is the difference between VA and watts significant?
I am installing some 120v plug mold in a commercial building and code says 90 volt amps per outlet. In this case is 90 VA equal to 90 watts? Does the voltage/phase figure into the unity power factor?
The plug mold is rated in VA because the power factor the load it may supply is unknown. power factor = watts/VA
Ex. 1: A 90 watt incandescent bulb has a power factor of 1. In this case, 90 VA = 90 watts.
Ex. 2: An electric motor drawing 90 VA with a power factor of 0.8: VA = 90, watts = 90 × 0.8 = 72. The conductors supplying the load are sized to carry the VA, not only the watts.
VA = Volts X Amps. That is all that is necessary to consider in the case of the plug mold.
Everything I say is fully substantiated by my own opinion
Where do you get the power factor from? is it on specs of motor,device, etc.
Geoffrey Lyons
VA = Apparent Power
Watts = True Power
Ex. Line side of xfmr = apparent power (VA) expressed in kVA. The load side of xfmr would be true power (watts), which is the power actually being used, and not lost through the xfmr for various
reasons. In the case of plug mold I wouldn't worry about it.
Here's a plain old English explanation.
Simply put, a wattmeter is an electric motor whose speed depends on both the voltage between line terminals and the current through the lines. The wattmeter actually responds to volts and amps
(volt-amps), not watts.
Voltage peaks twice per cycle, and current peaks twice per cycle. A wattmeter measures power by simultaneously measuring the voltage and the current, and then 'calculates' the power consumed as
what we call watts.
If the voltage and current peaks occur simultaneously, the volt-amps and watts are genuinely the same, but if there is a time difference between the two peaks, the wattmeter turns slower for a
given amount of volt-amps.
However, in spite of a deceptively-low wattmeter reading, the power system must be sized to safely carry both peaks, whether they occur simultaneously or not. Since the voltage is a given, the
current is really the variable.
The circuit must be designed for the voltage and the current. The insulation doesn't care what the current is (of properly-sized conductors, of course), and the (properly-insulated) conductor
doesn't care what the voltage is.
Power companies and electricians alike must design the system components to safely carry the voltage and the current, not the power. Poor power-factor results in a system that must carry current
not useable by the load.
To answer the OP directly, yes, for the sake of Plugmold, you can consider VA to equal wattage. Since voltage is considered to be a constant, amperage is the variable that must be accomodated. I
hope this made some sense.
Code references based on 2005 NEC
Larry B. Fine
Master Electrician
Electrical Contractor
Richmond, VA
A slightly different way to think about it. As best I can determine, the below is consistent with what Larry wrote above, just from a different perspective:
instantaneous volts * instantaneous amps = instantaneous watts
In an AC circuit, volts, amps and watts are constantly changing.
To describe these constantly changing values, we use 'RMS' measurements. RMS measurements are a type of average suited for measuring electrical quantities. (Side note: the most common meaning of
the term "average" describes a specific mathematical operation, and RMS is _not_ that operation, so if you prefer, think of RMS as being analogous to "average", but for electrical measurements.)
RMS voltage is a useful measurement, because when you apply AC of X volts RMS to a resistor, you dissipate the same amount of power as applying DC of X volts to that resistor.
But like any averaging process, you are throwing away a bunch of information.
For 'pure resistive loads', RMS amps * RMS volts = average watts
For many loads, the above equation is not true.
So we use the term 'power factor' to describe the difference.
average watts = RMS amps * RMS voltage * PF
There are many reasons that a load could have a power factor, but what it boils down to is that voltage and current are not changing in lock step. Both a time difference between peak voltage and
peak current, or a difference in _shape_ between voltage waveform and current waveform, can result in a power factor less than 1.
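A small numerical check of the relations above (an illustrative sketch; the 60 Hz frequency, the amplitudes and the 30 degree current lag are arbitrary example values):

import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 1000, endpoint=False)    # one 60 Hz cycle
v = 170.0 * np.sin(2 * np.pi * 60 * t)                     # roughly 120 V RMS
i = 10.0 * np.sin(2 * np.pi * 60 * t - np.radians(30))     # current lagging by 30 degrees

v_rms = np.sqrt(np.mean(v ** 2))
i_rms = np.sqrt(np.mean(i ** 2))
true_power = np.mean(v * i)        # average of instantaneous volts * amps = watts
apparent_power = v_rms * i_rms     # volt-amps
power_factor = true_power / apparent_power

print(f"VA = {apparent_power:.1f}, W = {true_power:.1f}, PF = {power_factor:.3f}")

The power factor comes out near cos(30°) ≈ 0.866, so the watts are smaller than the volt-amps whenever the peaks do not line up.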
Here's a plain old English explanation.
Simply put, a wattmeter is an electric motor whose speed depends on both the voltage between line terminals and the current through the lines. The wattmeter actually responds to volts and amps
(volt-amps), not watts.
Voltage peaks twice per cycle, and current peaks twice per cycle. A wattmeter measures power by simultaneously measuring the voltage and the current, and then 'calculates' the power consumed as
what we call watts.
If the voltage and current peaks occur simultaneously, the volt-amps and watts are genuinely the same, but if there is a time difference between the two peaks, the wattmeter turns slower for a
given amount of volt-amps.
However, in spite of a deceptively-low wattmeter reading, the power system must be sized to safely carry both peaks, whether they occur simultaneously or not. Since the voltage is a given, the
current is really the variable.
The circuit must be designed for the voltage and the current. The insulation doesn't care what the current is (of properly-sized conductors, of course), and the (properly-insulated) conductor
doesn't care what the voltage is.
Power companies and electricians alike must design the system components to safely carry the voltage and the current, not the power. Poor power-factor results in a system that must carry current
not useable by the load.
To answer the OP directly, yes, for the sake of Plugmold, you can consider VA to equal wattage. Since voltage is considered to be a constant, amperage is the variable that must be accomodated. I
hope this made some sense.
The simplest way to think of it is two people trying to pull a block of concrete; one guy is named Mr. Amperes and the other one is simply known as Volts. If they line up along a single line to pull, all their strength will be fully additive. When they are trying to pull in different directions, say Mr. Amperes is pulling toward 12 o'clock but Volts is heading to 10 o'clock, they each will need to exert more power than before, and eventually they will produce no movement at all, regardless of the strength exerted, when Mr. Amperes tries to head toward 9 o'clock while Volts is pulling toward 3 o'clock.
{"url":"http://forums.mikeholt.com/showthread.php?t=94657","timestamp":"2014-04-17T00:59:18Z","content_type":null,"content_length":"76546","record_id":"<urn:uuid:aa8c2385-384f-42a2-a414-a964fa6df584>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Control Theory 101
I find control theory and PID algorithms very fascinating (at least at a simple level) so I want to share a few basics. Although the FlightGear PID algorithm is a bit more sophisticated, effective
control modules can be built with just a few simple building blocks.
Typical autopilots are built using a PID algorithm. PID stands for proportional, integral, and derivative. Typically a PID controller manipulates one control output to force a process value towards a
reference point.
Let me say that again an a bit different way. Imagine a cruise control on a car. We know the current speed. We know the target speed. And we know the accelerator position. The cruise control will
manipulate the accelerator position (control output) to try to make the current speed (process value) equal to the target speed (reference point.) How the cruise control calculates what accelerator
position is needed to hold the desired speed (even going up and down hills) is where the magic happens.
I'll explain the three components (proportional, integral, and derivative) of a PID controller next.
All three components of the PID algorithm are driven by the difference between the process value (i.e. the current speed) and the reference point (i.e. the target speed.) We will call this difference (or error) for one particular time step e_i. For that same time step, we call the process value v_i and the reference point r_i, so e_i = r_i - v_i.
The output value (i.e. the accelerator position) is called o_i.
The proportional component simply calculates o_i based on the size of the error term by multiplying it by a constant, Kp: o_i = Kp * e_i.
For simple situations, this all by itself can be a very effective control algorithm. Typically this works best when you know that when e_i = 0 then o_i = 0. For example, imagine a simple wing leveler in an aircraft.
The process value is going to be bank angle, the reference point is going to be zero (zero bank angle means the wings are level.) Assume a well trimmed aircraft with neutral stability so that when
the ailerons are zero there is no change in bank. A proportional only control would set the aileron deflection inversely proportional to the bank angle. As the bank angle gets closer to zero, the
aileron deflection gets closer to zero. Something as simple as this (a formula with one multiply operation) can be an amazingly effective and stable controller.
Unfortunately life is often more complicated than we'd like, and even in the case of a simple wing leveler, you encounter situations where the aircraft isn't perfectly trim and zero aileron
deflection does not always equal zero roll motion. In an aircraft such as a Cessna 172, the amount of aileron deflection needed to keep the wing level can vary with speed. In these cases, a
proportional only controller will stabilize out quickly, but will stabilize to the wrong value. We need a way to drive the error in the proportional only controller to zero.
Enter the Integral component of the PID algorithm. Remember back to your calculus days, integral refers to the area under a curve. If you have a function, the integral of that function produces a
second function which tells you the area under curve of the first function.
Fortunately we usually don't have a formula for the first function since it changes depending on external conditions (i.e. current speed in a car.) That means we can't integrate this function
directly and we are spared all the potentially messy calculus.
So we use an alternative approach to approximate the area under the error curve. At each time step we know e_i, which is the difference between the process value and the reference point. If we multiply this distance times dt (the time step) we get an area which approximates the error under the curve just for this time step. If we add these areas up over time, we get a very reasonable approximation of the area under the curve.
Essentially what this does is that the longer time passes with us not at our target value, the larger the sum of the (error dt)'s becomes over time. If we use this sum to push our output value (i.e.
our accelerator position) then the longer we don't quite hit our target speed, the further the system pushes the accelerator pedal. Over time, the integral component compensates for the error in the
proportional component and the system stabilizes out at the desired speed.
Hopefully someone else can chip in and add more explanation to this section. But going back again to calculus. The derivative of a function implies the rate of change of the function output. If you
know the function, you can take the derivative of that function to produce a second function. For any point in time, the derivative function will tell you the rate of change (or slope) of the first
Conceptually, this makes sense in the context of a controller. How quickly we are closing on our target value (i.e. the rate of change from each time step to the next) is an important piece of
information that can help us build a more stable system that more quickly achieves the target value.
For a car cruise control, we are measuring velocity at each time step. The rate of change of velocity is defined as acceleration (for those that remember your physics.)
Now don't you wish you took calculus and physics in school? Or if you did take them, don't you wish you had been paying attention? Me too. :-)
Here is a key point to understand. The proportional component is very stable. The Integral and Derivative components are very unstable.
If we build a proportional only controller, it will be very stable but will stabilize to the wrong value. (i.e. if we want to go 90km/hr, it might stabilize out to 82km/hr.)
If we build an integral only controller it will quickly hit the target value, but will overshoot, then overcompensate, and will oscillate wildly around the target value. It is very unstable.
The trick then is to combine these components together by summing them. The actual output is equal to what the P component says the output should be plus what the I component says the output should
be plus what the D component says the output should be. You can assign a weighting value to each component to increase or decrease it's relative power to influence the final output value.
As you can see, the actual math involved in a PID controller (while rooted in some deep theory) is actually quite simple to implement. The real trick for creating a well behaved PID controller and a
well behaved autopilot is tuning the relative weights of each of the P, I, and D components.
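To make the summing concrete, here is a minimal Python sketch of a discrete PID step (an illustration only, not FlightGear's actual implementation; the gains, the 0..1 output clamp and the toy vehicle model are arbitrary example values):

class PID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, process_value, dt):
        error = reference - process_value              # e = r - v
        self.integral += error * dt                    # running area under the error curve
        derivative = (error - self.prev_error) / dt    # rate of change of the error
        self.prev_error = error
        output = (self.kp * error +
                  self.ki * self.integral +
                  self.kd * derivative)
        return min(max(output, self.out_min), self.out_max)

# Example: a crude cruise control pushing the current speed toward 90 km/h.
pid = PID(kp=0.05, ki=0.01, kd=0.002)
speed = 82.0
for _ in range(600):
    throttle = pid.update(90.0, speed, dt=0.1)
    speed += (throttle * 5.0 - 0.5) * 0.1              # toy vehicle response, for illustration only

Tuning the three gains against each other is exactly the weighting exercise described above.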
Curtis L. Olson 2004-02-04
|
{"url":"http://www.flightgear.org/Docs/XMLAutopilot/node2.html","timestamp":"2014-04-19T20:11:05Z","content_type":null,"content_length":"12521","record_id":"<urn:uuid:8e9543b6-fa92-4904-925f-27f0c1f512fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need a walkthrough for a quick Polynomial Multiplication worksheet problem
October 11th 2010, 12:07 AM #1
Oct 2010
Need a walkthrough for a quick Polynomial Multiplication worksheet problem
Ok so basically I'll open this up with a little about my situation here. So back when I was a teenager in high school I was able to do all sorts of equations in my sleep. Now I am grown up 15 years later, trying to relearn everything starting from the basics. I learn quickly, mostly by watching people solve problems in front of me and then mimicking their pattern.
In my most recent classes we are now starting on Polynomial Multiplication. The teacher has so far taught us the First and Second Laws of exponents AND the Distributive Property. He has now given us an assignment with some questions I haven't seen yet.
This is question #1.
(x³ - y² +12) -3[( x² - 6x + 10)+4]
Can anyone walk me through this one?
EDIT: Gah sorry this is my first time on the forum I believe I posted this in the entire wrong section SORRY.
Last edited by jbwut; October 11th 2010 at 12:20 AM. Reason: edit: wrong section please delete mod
I'm not seeing any polynomial multiplication in this problem, but this is what I got from it:
I believe this is about as simplified as the expression can get.
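For reference, a worked expansion (distribute the −3, then combine like terms):
$(x^3 - y^2 + 12) - 3[(x^2 - 6x + 10) + 4] = x^3 - y^2 + 12 - 3(x^2 - 6x + 14) = x^3 - 3x^2 + 18x - y^2 - 30$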
October 11th 2010, 06:25 AM #2
|
{"url":"http://mathhelpforum.com/algebra/159141-need-walkthrough-quick-polynomial-multiplication-worksheet-problem.html","timestamp":"2014-04-21T05:05:22Z","content_type":null,"content_length":"33660","record_id":"<urn:uuid:a59c01d8-f49e-4f32-9bd1-7e42a5176ea4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calhoun, GA Precalculus Tutor
Find a Calhoun, GA Precalculus Tutor
...I currently work in the math lab at school, which provides tutoring services to the Berry College community, but I would like to expand my tutoring to the rest of Rome and surrounding
communities. As a rising senior, I have taken my fair share of math classes, so almost all subjects are open for...
23 Subjects: including precalculus, reading, calculus, geometry
...My grade in this course was an A-. Upon transferring to the University of Tennessee Chattanooga engineering school, I enrolled in and completed a programming course in simple C structured
programming using the compiler DevC++. In this course, I receive an A letter grade. At Milwaukee School of...
52 Subjects: including precalculus, reading, English, Spanish
...I have tutored students in Mathematics for over 30 years. A few years ago I took the Graduate Records Exam to work on a post graduate degree and received a perfect score of 800 on the
Mathematics portion. The GRE and GED are two very different exams, but the study and preparation methods are very similar for both.
28 Subjects: including precalculus, chemistry, English, calculus
...I started teaching violin myself when I was 16. I have continued to improve over the last 12 years and do not plan on stopping. I have been playing drums since I was 11 years old.
19 Subjects: including precalculus, chemistry, physics, calculus
...Effort is always required on the part of the student and the teacher for this guarantee to become a reality. I taught a unit of logic every year for 14 years that I taught geometry. I have
taught Math 1001 at Georgia Highlands that included a unit on logic.
12 Subjects: including precalculus, calculus, geometry, ASVAB
|
{"url":"http://www.purplemath.com/Calhoun_GA_precalculus_tutors.php","timestamp":"2014-04-21T04:42:42Z","content_type":null,"content_length":"23969","record_id":"<urn:uuid:548edda2-df55-42a9-b834-e31fcc79123e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Peak-Finding
No matter how far we are in our careers as professional developers, it’s great to freshen up on our fundamentals. Be it the importance of Memory Access Patterns or algorithms in general, it’s really
beneficial. I find it quite interesting that it's been a pretty long time since I sat in the algorithms and data structures course at my technical institute and I tend to understand it completely
different now. I heard a really great thing from a professor at MIT who said the following:
You can practice really hard for two years to become a great programmer and you can practice for 10 years to become an excellent programmer. Or you can practice for two years and take an
algorithms course and become an excellent programmer
A lot of us might not think about the daily algorithms and data structures that we use, in fact, we hide behind ORMs and such that hides complexity and introduces behavior that we might not be aware
of. Which is one of the reasons I personally like to read up on algorithms from time to time. This time though, I’ve decided to share some of the things I learn and like, hopefully you’ll like it and
it will help you to become an excellent programmer.
If you haven’t seen this, MIT has a website called Open Courseware where they have video recordings, lecture notes, assignments and exams from their courses. There is one in particular which I
recently found and it’s excellent so far, it’s called Introduction to Algorithms. If it’s been a while since you looked into these topics, have a look at their content. Some of the examples and
snippets here are from the lectures in this particular course.
Let’s get to it! Back to basics!
What is Peak-Finding?
Imagine you have a set of numbers, these numbers are stored in a one dimensional array; hence a normal array. Now you want to find one of the elements where the element peaks. Notice that we don’t
want to find the highest peak, we just want to find a peak. As to any problem there are multiple solutions and these solutions might differentiate from one and another. Some might be faster and some
might be slower.
Let’s say that we have the following set of numbers: {1, 2, 4, 3, 5, 1, 3}
How would you find the peak in that?
First we need to define the requirements for it to be a peak: The element needs to be larger or equal to both the elements on its sides
There’s one really obvious way to solve this, can you think of it?
Finding the peak in one dimension (slow) O(n)
How about if we just iterate over each element and make sure that the elements surrounding it are less or equal? It’s a simple solution, but is it the best and fastest? Remember that we just need to
find if there is a peak somewhere in the array, it doesn’t have to be the highest point.
I won’t bother with showing the code for this one, it’s just a simple loop with some boundary checks. The problem here is that we need to look at every element in the collection, which makes the time
to run the algorithm grow linear with the growth of n.
Finding the peak in one dimension (fast) O(log n)
As the heading says, this is logarithmic, base 2 logarithmic to be exact. This means that somewhere in our algorithm we are dividing the set in two and doing so as n grows. So what might this mean,
in terms of solving the problem? We’re taking a divide and conquer approach! Just as you would with binary search. Binary search divides the array in half until it finds the correct element.
Searching a phone book with 2^32 amount of records would take only 32 tries because we know it is sorted!
The same approach is applicable for the peak finding. If we take a look at the set of numbers we have again: {1, 2, 4, 3, 5, 1, 3} we know that if we start in the middle we will look at the value 3, which is less than both 4 and 5. So what now? Which side do we jump to? We can jump to the left here and divide the set in half, leaving us with the following: {1, 2, 4} and we're in the middle so we've selected the two here. But, two is only larger than 1 and less than 4 so we have another step to do here and that is to jump to the right, this time we only have {4} left so this is our base case, we only have one item and as such this is a peak.
Here’s a breakdown of the algorithm where a defines the array and n the amount of elements.
if a[n/2] < a[n/2 - 1]
    then only look at the left half: a[1 ... n/2 - 1]
else if a[n/2] < a[n/2 + 1]
    then only look at the right half: a[n/2 + 1 ... n]
else
    a[n/2] is a peak
There’s some boundary checks that needs to go into it as well, but you get the idea and you can play around with the implementation of this.
Two Dimensional Peak-Finder
Things are about to get interesting, we’ve looked at the one dimensional array which is sort of just divide and conquer. Now how about adding another dimension to it and looking at a 2D array? If
you’re unaware of what a 2D array looks like, here’s a good example of that:
{0, 0, 9, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0},
{0, 2, 0, 0, 0, 0, 0},
{0, 3, 0, 0, 0, 0, 0},
{0, 5, 0, 0, 0, 0, 0},
{0, 4, 7, 0, 0, 0, 0}
It’s simply represented by a int[][].
In a one dimensional approach we looked at our neighbors and we’re going to do the exact same thing in this scenario as well, however in this case we’ve got two more that just moved into our block.
If we had people living on our west and east sides we now also have someone living on north and south. There are of course edge cases where we need to check the boundary of the lonely soles that have
no one living to their west, north, east or south.
If you think about it, how would you approach this? The MIT course that I listed above has a great Python example that you can download and play with, it comes with a interactive html export when you
generate the result. I’ve recorded how the algorithm behaves which might make it easier for you to figure out what happens in this algorithm. In the below animation, when the 5 turns pink, that is
when it found the peak.
There are of course faster and slower approaches to this problem as well, this is not the fastest one and it is not the slowest one. Let’s just say it’s one of the ones in the middle. Here’s a
breakdown of what the algorithm does where m is the amount of columns, n the amount of rows.
Pick the middle column j = m/2
Find the largest value in the current column span; call it the global max
Compare it to its neighbors; if it is larger than all of them, this is the 2D peak
Otherwise, jump to the left or right depending on the comparison (divide and conquer) and run recursively
If you are at the last column, the current global max is a 2D peak
There’s a bit more to this than with a single dimension and there is also room for improvement, but read the definition of finding the 2D peak a couple of times, look at the animation and you will
see this pattern. Remember that it won’t find the largest peak, just one of the peaks where it is a peak according to our rules.
Finding the 2D Peak
Consider that we have the following method signature for our method that looks for a 2D peak: int FindPeak(int[][] problem, int left = 0, int right = -1). Now as you might have seen above, this is
a recursive method so instead of slicing the array, we just pass a reference to the array and a point to where it starts and where it ends.
We then call it like this:
var problem = new[]
{
    new[] {0, 0, 9, 0, 0, 0, 0},
    new[] {0, 0, 0, 0, 0, 0, 0},
    new[] {0, 1, 0, 0, 0, 0, 0},
    new[] {0, 2, 0, 0, 0, 0, 0},
    new[] {0, 3, 0, 0, 0, 0, 0},
    new[] {0, 5, 0, 0, 0, 0, 0},
    new[] {0, 4, 7, 0, 0, 0, 0}
};
int peak = FindPeak(problem);
There are a couple of edge cases that we might want to handle while we are at it, such as if the array us empty. The beginning if our FindPeak method will look something like this:
if (problem.Length <= 0) return 0;
if (right == -1) right = problem.Length;
int j = (left + right) / 2;
int globalMax = FindGlobalMax(problem, j);
As you see here, we handle the case of when we first call our method with the value of right being -1. We initialize this with the length of the array, we could move this outside the method to reduce
some branches in each recursion. Now we compute the current column (middle) of our start and stop. After that we look for the global max, I introduced a helper method to do this. All it does is that
it goes over the same column position for each row in the array. This way we can find the index of the largest element in that column. This method can look like this:
int FindGlobalMax(int[][] problem, int column)
int max = 0;
int index = 0;
for (int i = 0; i < problem.Length; i++)
if (max < problem[i][column])
max = problem[i][column];
index = i;
return index;
We use the top rows column if we can’t find a value that is larger than it, if we do we just increase the index until we can’t find a larger one. It’s time to check the neighbors and see how they are
doing, this statement can be simplified and refactored into multiple methods but let’s leave it verbose for now, you can refactor it all you want and play with it on your own:
if (
(globalMax - 1 > 0 &&
problem[globalMax][j] >=
problem[globalMax - 1][j]) &&
(globalMax + 1 < problem.Length &&
problem[globalMax][j] >=
problem[globalMax + 1][j]) &&
(j - 1 > 0 &&
problem[globalMax][j] >=
problem[globalMax][j - 1]) &&
(j + 1 < problem[globalMax].Length &&
problem[globalMax][j] >=
problem[globalMax][j + 1])
return problem[globalMax][j];
We’re checking 4 things, actually in this case we are only going to check 3 things because as we selected the middle column that has only 0s, there is no global max and it will use the top one when
checking the neighbors as seen in this picture:
Which is also why we are doing the boundary checks so that we are not doing any Index out of Bounds exceptions! If this were the largest one of its neighbors, we would simply return from here. While
writing up this article I found some interesting edge cases which I hadn’t thought of in the first implementation. Play around with different values yourself and see if you can find some errors.
After checking the neighbors we know that we need to either jump somewhere if there is a place to jump to, or we are at the current global max. If we jump to the left, we set the new right position
to our current middle and if we jump to the right we set the new left to the current middle. Then we simply call ourselves like this:
else if (j > 0 && problem[globalMax][j - 1] > problem[globalMax][j])
right = j;
return FindPeak(problem, left, right);
else if (j + 1 < problem[globalMax].Length && problem[globalMax][j + 1] > problem[globalMax][j])
left = j;
return FindPeak(problem, left, right);
return problem[globalMax][j];
Now let us take a look at that animation again and see if we can follow along and do the programming steps in our head.
We’ve now found the peak in our 2D array! Here’s a question for you: What is the time time complexity of this algorithm?
The complete code is available on GitHub in my Algorithms repository.
Keep learning, keep coding and keep solving problems! Let me know if you liked this and if you found something to optimize or fix in my examples!
|
{"url":"http://blog.filipekberg.se/2014/02/10/understanding-peak-finding/","timestamp":"2014-04-19T01:54:25Z","content_type":null,"content_length":"67824","record_id":"<urn:uuid:bbb0e6eb-fb66-43f2-a9ba-7645c08ef425>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Incomplete Star: An Incrementally Scalable Network Based on the Star Graph
January 1994 (vol. 5 no. 1)
pp. 97-102
Introduces a new interconnection network for massively parallel systems called the incomplete star graph. The authors describe unique ways of interconnecting and labeling the nodes and routing point-to-point communications within this network. In addition, they provide an analysis of a special class of incomplete star graph called the C^(n-1) graph and obtain the diameter and average distance for this network. For the C^(n-1) graph, an efficient broadcasting scheme is presented. Furthermore, it is proven that a C^(n-1) graph with N nodes (i.e., N = m(n-1)!) is Hamiltonian if m = 4 or m = 3k, and k ≠ 2.
Index Terms:
multiprocessor interconnection networks; graph theory; network routing; interconnection network; massively parallel systems; star graph; incomplete star graph; incrementally scalable network; interconnecting; labeling; point-to-point communications; C^(n-1) graph; Hamiltonian; Cayley graph; routing
S. Latifi, N. Bagherzadeh, "Incomplete Star: An Incrementally Scalable Network Based on the Star Graph," IEEE Transactions on Parallel and Distributed Systems, vol. 5, no. 1, pp. 97-102, Jan. 1994,
|
{"url":"http://www.computer.org/csdl/trans/td/1994/01/l0097-abs.html","timestamp":"2014-04-18T11:15:48Z","content_type":null,"content_length":"51586","record_id":"<urn:uuid:9a15f647-bfda-4516-a78c-c2132052c903>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Example 1. Read this number: 256,312,785,649,408,163
Answer. Starting from the left, 256, read each three-digit group. Then say the name of the class.
"256 Quadrillion, 312 Trillion, 785 Billion, 649 Million, 408 Thousand, 163."
Do not say the class name "Ones."
Example 2. To distinguish the classes, place commas in this number: 8792456
Answer. Starting from the right, place commas every three digits: 8,792,456
Read the number:
"8 million, 792 thousand, 456."
Example 3. Read this number: 7,000,020,002
Answer. "Seven billion , twenty thousand, two."
When a class is absent, we do not say its name; we do not say, "Seven billion, no million, ..."
Also, every class has three digits and so we must distinguish the following:
002 "Two"
020 "Twenty"
200 "Two hundred"
As for "and," in speech it is common to say "Six hundred and nine," but in writing we should reserve "and" for the decimal point, as we will see in the next Lesson. (For example, we should write
$609.50 as "Six hundred nine dollars and fifty cents." Not "Six hundred and nine dollars.")
Example 4. Write in numerals:
Four hundred eight million, twenty-nine thousand, three hundred fifty-six.
Answer. Pick out the classes: "million", "thousand". Each class (except perhaps the first class on the left) has exactly three digits: 408,029,356
Example 5. Write in numerals:
Five billion, sixteen thousand, nine.
Answer. After the billions, we expect the millions, but it is absent. Therefore write 5,000,016,009
Again, we must write "sixteen thousand" as 016; and "nine" as 009; because each class must have three digits. The exception is the class on the extreme left. We may write "Five" as 5 rather than 005.
When writing a four-digit number, such as Four thousand five hundred, it is permissible to omit the comma and write 4500. In fact, we often read that as "Forty-five hundred." But when a number has
more than four digits, then for the sake of clarity we should always place the commas.
Example 6. Distinguish the following:
a) Two hundred seventeen million b) Two hundred million seventeen
a) 217,000,000 b) 200,000,017
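If it helps to check comma placement mechanically, most programming languages group digits the same way. For example, in Python (illustrative only):

n = 8792456
print(f"{n:,}")            # 8,792,456
print(f"{7000020002:,}")   # 7,000,020,002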
At this point, please "turn" the page and do some Problems.
{"url":"http://www.themathpage.com/ARITH/powers-of-10.htm","timestamp":"2014-04-21T07:56:15Z","content_type":null,"content_length":"19841","record_id":"<urn:uuid:7a57ae1e-5259-4109-abf8-d2c8f70b8a82>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sensors (ISSN 1424-8220) 2013, 13(1), 1231-1246; doi:10.3390/s130101231
Article
A Comparative Study on Three Different Transducers for the Measurement of Nonlinear Solitary Waves
Xianglei Ni, Luyao Cai and Piervincenzo Rizzo *
Laboratory for Nondestructive Evaluation and Structural Health Monitoring Studies, Department of Civil and Environmental Engineering, University of Pittsburgh, 3700 O'Hara Street, Pittsburgh, PA 15261, USA; E-Mails: xin1@pitt.edu (X.N.); luc21@pitt.edu (L.C.)
* Author to whom correspondence should be addressed; E-Mail: pir3@pitt.edu; Tel.: +1-412-624-9575; Fax: +1-412-624-0135.
Received: 30 November 2012; in revised form: 28 December 2012; Accepted: 11 January 2013; Published: 18 January 2013
© 2013 by the authors; licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
In the last decade there has been an increasing interest in the use of highly- and weakly- nonlinear solitary waves in engineering and physics. Nonlinear solitary waves can form and travel in
nonlinear systems such as one-dimensional chains of particles, where they are conventionally generated by the mechanical impact of a striker and are measured either by using thin transducers embedded
in between two half-particles or by a force sensor placed at the chain's base. These waves have a constant spatial wavelength and their speed, amplitude, and duration can be tuned by modifying the
particles' material or size, or the velocity of the striker. In this paper we propose two alternative sensing configurations for the measurements of solitary waves propagating in a chain of spherical
particles. One configuration uses piezo rods placed in the chain while the other exploits the magnetostrictive property of ferromagnetic materials. The accuracy of these two sensing systems on the
measurement of the solitary wave's characteristics is assessed by comparing experimental data to the numerical prediction of a discrete particle model and to the experimental measurements obtained by
means of a conventional transducer. The results show very good agreement and the advantages and limitations of the new sensors are discussed.
highly nonlinear solitary waves discrete particle model transducers magnetostrictive sensors
In the last fifteen years the numerical and experimental studies on the propagation of nonlinear solitary waves in one-dimensional chains of granular media, and in particular of spherical elastic
beads, have thrived [1–20]. The nonlinearity arises from the Hertzian type contact between two adjacent particles and zero tensile force. When the chain is non- or weakly-compressed by means of its
self-weight or by the action of some form of static pre-compression, highly nonlinear solitary waves (HNSWs) can form and propagate in the chain. The term “weakly” implies that the pre-compression is
very small compared to the dynamic force amplitude associated with the wave propagation.
It has been demonstrated that HNSWs propagating in granular crystals have the potential to be used as acoustic lenses [21], vibration absorbers [22], impurity detectors [15,23], acoustic diodes [24]
and as a tool for nondestructive testing [25–28]. In some of these engineering applications, the measurement of the dynamic force is necessary. To date, this measurement is attained either by means
of one or more sensor beads placed in the chain [6–14,25–30] or by using a force sensor mounted at the base of the chain [4–8]. The former usually consists of two half beads bonded to a thin
piezoelectric crystal in order to form a sensor particle able to measure the dynamic force at its center. The main advantage of this assembly is two-fold: it can be placed anywhere in the chain; it
does not alter the characteristics of the propagating wave, since the geometry and the material property of the sensor are essentially identical to those of the other particles composing the chain.
However, the manufacture of sensor beads requires machining and therefore can be time-consuming and costly. Moreover, once in place, the bead should not be allowed to rotate in order to maintain its
sensitivity constant and the wires are prone to accidental breakages. A force sensor instead measures the characteristics of the wave at the end of the chain. This configuration is unpractical when
the same end needs to be in contact with another material or structure. Finally, few studies exploited the photoelasticity to measure the stress wave propagation in photoelastic grains [31,32].
However, this approach is not suitable for metallic or other non-photoelastic materials and might be expensive.
In the study presented here, we investigated numerically and experimentally two alternative sensing systems to measure the propagation of solitary waves in a 1-D chain of metallic particles. The
first design replaces the sensor beads with piezo rods having thickness and diameter comparable to the size of the particles composing the chain. The second system considers the use of coils wrapped
around a segment of the chain to create a magnetostrictive sensor (MsS). To the best of the authors' knowledge, the use of magnetostriction or piezoelectric cylinders to measure the propagation of
HNSWs was never reported in the past. In this paper the working principles of these novel transducers are introduced and the experimental results are compared to the measurements obtained using
conventional instrumented beads and to the numerical prediction derived with a discrete particle model.
The paper is organized as follows: the experimental setup is described in Section 2. The principles of the three types of sensors are introduced in Section 3. Section 4 presents the numerical model
of wave propagation in a chain of spherical particles. In Section 5, the experimental results are presented. Finally, Section 6 concludes the paper with a discussion on the advantages and
disadvantages of the three sensing configurations.
In order to compare the novel sensing systems to the conventional one, a plastic tube with inner diameter of 4.8 mm and outer diameter of 12.7 mm was filled with twenty nine 4.76 mm-diameter, 0.45
gr, low carbon steel beads (McMaster-Carr product number 96455K51). An identical bead was used as striker. For convenience, the particles are herein numbered 1 to 30 where particle 1 identifies the
striker and particle 30 represents the sphere at the opposite end of the chain. The stroke of the particle 1, equal to 7.2 mm, was governed by an electromagnet mounted on top of the tube and remotely
controlled by a switch circuit connected to a National Instruments PXI running in LabVIEW. Figure 1 schematizes the setup described above.
Three pairs of sensors were used in this study: bead sensors, rod-form piezos, and MsSs. Each bead sensor was assembled by embedding a zirconate titanate based piezogauge (3 mm by 3 mm by 0.5 mm)
inside two half steel spheres, as shown in Figure 2(a). They were located at the positions 13 and 18 in the tube. Figure 2(b) shows instead one of the two piezoelectric cylinders. They were custom
made (Piezo Kinetics Inc. ND0.187-0.000-0.236-509) with 36AWG × 25.4 mm soldered tinned copper lead wires. The rods had nominal dimension 4.76 mm outer diameter and 6 mm height. According to the
manufacturer, their mass was 0.8144 g, Young's modulus 63 GPa, and Poisson's ratio equal to 0.31. When they were used, the piezo cylinders replaced the bead sensors at location 13 and 18 in the tube.
Finally, each MsS consisted of a 7 mm coil made of AWG36 magnetic wire and 1100 turns wrapped around a plastic tube having inner diameter of 12.7 mm and 1.6 mm thick. A permanent bridge magnet
(McMaster-Carr product number 5841K12) was fixed to the coil as shown in Figure 2(c) in order to create a constant magnetic field parallel to the longitudinal axis of the chain. Figure 2(d) shows the
schematic of the sensor once the plastic tube containing the chain of particles was inserted. The two magnetostrictive transducers were mounted such that their centers were located at the same
elevation of particles 13 and 18.
The capability and repeatability of each sensing configuration in measuring the amplitude and speed of the HNSWs were evaluated by taking 500 measurements at a 10 MHz sampling rate.
In a one-dimensional chain of spherical particles, the interaction between two adjacent beads is governed by Hertz's law [1,4]:

F = A δ^{3/2}    (1)

where F is the compression force between two beads, δ is the closest approach of the particle centers, and A is a coefficient given by:

A = E √(2R) / [3 (1 − ν²)]    (2)

where R is the radius of the particles, and ν and E are the Poisson's ratio and Young's modulus of the material constituting the particles, respectively. The combination of this nonlinear contact interaction and a zero tensile force in the chain of spheres leads to the formation and propagation of compact solitary waves. A more detailed formulation of the analytical foundation of HNSW propagation may be found in several references [1–12].
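As a quick illustration of Equations (1) and (2), the following Python sketch evaluates the contact coefficient A and the Hertzian force for beads of the size used here. The Young's modulus and Poisson's ratio below are assumed, typical values for low-carbon steel; the paper does not list them explicitly.

```python
import numpy as np

# Hertzian contact between two identical spheres (Equations (1) and (2)).
# E and nu are assumed typical values for low-carbon steel; R matches the
# 4.76 mm beads used in the experiments.
E = 200e9
nu = 0.3
R = 4.76e-3 / 2

A = E * np.sqrt(2 * R) / (3 * (1 - nu**2))

def hertz_force(delta):
    """Contact force [N] for an approach delta [m]; zero in tension."""
    return A * np.maximum(delta, 0.0) ** 1.5

print(A, hertz_force(1e-6))   # contact coefficient and the force at 1 micron overlap
```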
A single pulse is commonly induced by mechanically impacting the first bead of the chain with a striker having the same mass of the particles composing the chain. Ni et al. [13] showed that
laser-pulses can also be used in lieu of mechanical impacts. When a bead sensor is inserted in a chain, the force measured is the average of the dynamic forces at the bead's ends [6]. Similarly, the
force measured by a piezoelectric cylinder is the average of the dynamic forces at the two cross-section ends. Both sensing configurations are based on the use of piezoelectric crystals.
The MsS takes advantage of the efficient coupling between the elastic and magnetic states of the ferromagnetic particles and in particular of the magnetostrictive phenomena that convert magnetic
energy into mechanical energy and vice versa [33]. The magnetostriction principle can be used in the active or in the passive mode, based upon Faraday's law and the Villari effect, respectively. According to Faraday's law, an electrical current passing along a coil induces a magnetic field that is perpendicular to the current's direction. If the coil enwinds a ferromagnetic material, an alternating current passing through the wire creates a time-varying magnetic field within the coil that, in turn, produces a change of magnetostriction of the material. The subsequent deformation, known as the Joule effect [34], produces a stress wave. According to Faraday's law, the voltage output in the coil can be expressed as [33,35–37]:

V = − N dφ/dt    (3)

where V is the induced voltage in the coil, N is the number of turns of the coil, and φ = B S_c is the magnetic flux. Here B is the magnetic induction and S_c is the area of the coil in the magnetic field. Equation (3) can be written as:

V = − N S_c dB/dt    (4)
The inverse mechanism can be used for the detection of waves. A pulse propagating in the ferromagnetic material modulates an existing magnetic field by means of the Villari's effect [33,38], thereby
exciting a voltage pulse in the receiver coil. In both transduction and detection, a constant magnetic field (bias) is superimposed to enhance the coupling between the elastic and magnetic states,
i.e., to increase the signal-to-noise ratio of the stress wave generated and detected by the magnetostrictive transducer. One of the authors has designed, built, and used magnetostrictive
transducers for the generation and detection of ultrasonic guided waves in strands, solid cylinders, and pipes [39–44].
In the design of the MsS used in the present study we exploited the Villari effect to detect the propagation of nonlinear solitary waves across the chain. The particles are the magnetostrictive
material subjected to a biased magnetic field and are surrounded by a coil. We hypothesized that the change of the magnetic induction is proportional to the change of the dynamic contact force
between neighboring particles. The output voltage was proportional to the time-derivative of the dynamic contact force:

V ∝ dF/dt    (5)
Therefore, the dynamic force associated with the solitary wave propagation is proportional to the integral of the sensor output voltage. Based upon the geometry of the MsS [see Figure 2(d)], we
assumed that the permanent magnet biased four contact points, which implied that the dynamic force measured by the coils sensor could be reasonably considered the average of four dynamic forces at
these points.
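Since the coil output is proportional to dF/dt (Equation (5)), the dynamic force is recovered by numerically integrating the measured voltage and scaling by a calibration factor. A minimal sketch of that step is shown below; the function name and arguments are illustrative, not from the paper.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def mss_voltage_to_force(voltage, fs, K):
    """Recover the dynamic force from a magnetostrictive-coil record.
    voltage: coil output [V]; fs: sampling rate [Hz] (10 MHz in this setup);
    K: calibration factor [N/(V*s)], cf. Table 1."""
    t = np.arange(len(voltage)) / fs
    # the force is proportional to the running integral of the voltage (Eq. (5))
    return K * cumulative_trapezoid(voltage, t, initial=0.0)
```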
The experimental setup was simulated using a chain of spherical particles in contact with a wall which was considered as a half-infinite medium, as shown in Figure 3. We adopted the discrete particle
model [1,11] to predict the characteristics of the nonlinear solitary pulses generated by the impact of a striker. In the model the motion of the particles was considered in the axial direction, and
the interaction between two adjacent spheres was governed by Hertz's law (Equation (1)). The equation of motion of the i-th particle can be expressed as [1,11,12]:

m_i ü_i = A_{i−1} δ_{i−1}^{3/2} − A_i δ_i^{3/2} + γ_{i−1} δ̇_{i−1} − γ_i δ̇_i + F_i ,   i = 1, 2, …, N    (6)

where:

A_i = 0 for i = 0;   A_i = A_c = E √(2R) / [3 (1 − ν²)] for i = 1, 2, …, N − 1;   A_i = A_w = (4 √R / 3) [ (1 − ν²)/E + (1 − ν_w²)/E_w ]^{−1} for i = N

γ_i = 0 for i = 0;   γ_i = γ_c for i = 1, 2, …, N − 1;   γ_i = γ_w for i = N

δ_i = [−u_1]_+ for i = 0;   δ_i = [u_i − u_{i+1}]_+ for i = 1, 2, …, N − 1;   δ_i = [u_N]_+ for i = N
Here, the subscripts c and w refer to the point of contact between two neighboring particles and the point of contact between the last particle and the half-infinite wall, respectively. The values of
R, m, and u are respectively the radius, mass, and axial displacement from the equilibrium position of the particle. A is the contact stiffness between adjacent beads (A[c]) or between the last bead
and the wall (A[w]). γ is a coefficient that takes into account the dissipative effects associated with the contact of the chain with the inner tube's surface and the wall [11,12]. F is the sum of
the body force, e.g., gravity, and external forces applied on the chain. The dot represents the time-derivative while the operator [][+] returns the value of the variable if the variable is positive,
otherwise it returns 0. Finally, E and ν are the Young's Modulus and the Poisson's ratio, respectively, of the beads (E[c], ν[c]) and of the wall (E[w], ν[w]).
The impact of the striker was simulated by setting the initial displacement u_1(0) = 0 and the initial velocity u̇_1(0) = √(2gh), where g is the gravitational constant and h is the experimental falling height (h = 7.2 mm). The other initial conditions were u_i(0) = 0 and u̇_i(0) = 0 for i = 2, …, N. The differential Equation (6) was solved to calculate u and u̇ for each
particle by using the fourth order Runge-Kutta method.
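A minimal numerical sketch of Equation (6) is given below. It integrates a chain of identical beads impacting a rigid wall; the bead size, mass, drop height, and the dissipation coefficient γ_c follow the paper, while the steel elastic constants and the wall properties are assumed (wall taken as the same steel, for simplicity). The paper uses a fourth-order Runge–Kutta scheme; here SciPy's adaptive RK45 integrator stands in for it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of Equation (6): a vertical chain of N identical steel beads
# (particle 1 is the striker) hitting a rigid wall.
N = 30
E, nu = 200e9, 0.3            # assumed steel properties
R = 4.76e-3 / 2
m = 0.45e-3
g, h = 9.81, 7.2e-3
gamma_c, gamma_w = 4.8, 0.0   # kg/s, from Section 5.1

A_c = E * np.sqrt(2 * R) / (3 * (1 - nu**2))        # Eq. (2), bead-bead contact
A_w = (4 * np.sqrt(R) / 3) / (2 * (1 - nu**2) / E)  # bead-wall contact, same material assumed

def rhs(t, y):
    u, v = y[:N], y[N:]
    d = np.maximum(u[:-1] - u[1:], 0.0)             # overlaps between neighbours
    dd = v[:-1] - v[1:]
    F = np.full(N, m * g)                           # self-weight of each bead
    Fc = A_c * d**1.5 + gamma_c * dd * (d > 0)      # contact force; dissipation only while in contact
    F[:-1] -= Fc
    F[1:] += Fc
    dw = max(u[-1], 0.0)                            # overlap with the wall
    F[-1] -= A_w * dw**1.5 + gamma_w * v[-1] * (dw > 0)
    return np.concatenate([v, F / m])

y0 = np.zeros(2 * N)
y0[N] = np.sqrt(2 * g * h)                          # striker velocity at impact
sol = solve_ivp(rhs, (0.0, 1e-3), y0, max_step=1e-7)  # displacements/velocities vs. time
```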
The model of the chain with the instrumented beads did not consider the fact that they are slightly heavier than the other spheres. This is because it was demonstrated [6,8] that the effect of the
mass difference on the wave propagation can be neglected in the model. As such, particles 13 and 18 had the same mass as all the other particles. Moreover, because the piezogauge-half sphere contact
stiffness is much higher than the sphere-sphere contact stiffness, the sensor still could be considered as a rigid body.
In order to model the presence of the piezo rods, the modeled chain comprised two solid rods having the same geometric and material properties as the piezoelectric cylinders at positions 13 and 18, and the mass and contact stiffness (terms A_i) at these positions were replaced accordingly. Finally, the model relative to the presence of the coil and the bias magnet considered the tube
filled with 29 particles. The presence of the magnetic bias was included by considering four particles per coil subjected to a static compressive force equal to 1.8 N. This value was estimated by comparing the experimental wave speed of the incident solitary wave to the analytical prediction provided by the long-wavelength limit [1,7,13]. Irrespective of the sensing system considered, the
model included the static pre-compression due to the self-weight of the particles, and the dissipation coefficients γ, whose determination will be discussed in Section 5.1.
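For reference, the gravitational pre-compression can be added to such a model by giving each contact an initial static force equal to the weight of the beads above it and, for the contacts under a coil, the additional magnetically induced 1.8 N. The short sketch below computes these static forces and the corresponding static overlaps from Equation (1); it illustrates the idea and is not code from the paper — the contact indices under the coil and the steel constants are assumptions.

```python
import numpy as np

# Static pre-compression at each contact of the vertical chain: contact i
# (between particles i and i+1) carries the weight of the i beads above it;
# contacts spanned by a coil carry an extra 1.8 N of magnetically induced force.
N = 30
m, g = 0.45e-3, 9.81
E, nu, R = 200e9, 0.3, 4.76e-3 / 2               # assumed steel properties
A_c = E * np.sqrt(2 * R) / (3 * (1 - nu**2))     # Eq. (2)
biased = {11, 12, 13, 14}                        # contacts under one coil (illustrative)

F_static = np.array([i * m * g + (1.8 if i in biased else 0.0)
                     for i in range(1, N)])
delta_static = (F_static / A_c) ** (2.0 / 3.0)   # static overlaps from Eq. (1)
```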
The numerical model was applied to simulate the three setups described in Section 2. To predict the measurements of the three types of transducers, the numerical values of the force-time profiles at
contact points c[8], c[11], c[12], and c[13] (see the notation in Figure 3) are presented in Figure 4. The figure shows the presence of a single solitary wave, whose amplitude and time of arrival are
almost the same for all three types of sensors. The pulse measured by MsS at contact points c[11]-c[13] has a slightly earlier arrival due to the presence of the magnetically induced precompression.
The presence of the cylinder alters the temporal force profile due to the following mechanism. As the single pulse propagates, the particle 11 compresses the next particle giving rise to the first
peak visible in Figure 4(b). Because the piezo rod is heavier than particle 12, particle 12 bounces back. Thus, the corresponding amplitude of the dynamic force is higher and it is visible in Figure
4(c). By bouncing back, particle 12 gets closer to sphere 11 originating the small hump visible in Figure 4(b). This reflected wave propagates backward to the top of the chain and it appears also in
Figure 4(a) at around 170 microseconds. Meanwhile, after particle 12 is compressed by particle 11 for the second time, its velocity becomes slightly larger than that of the cylinder. This presses particle 12 against the rod again, as demonstrated by the small pulse in Figure 4(c) at 200 microseconds. This creates a state of compression between the cylinder and its neighboring particle 14,
which gives rise to a secondary pulse that trails the incident wave and is visible in Figure 4(d) at 220 microseconds. The figure also indicates that the amplitude of the wave passing through the
cylindrical sensor reduces significantly when compared to the same pulse monitored by the bead sensor or the MsS.
The numerical results of the force profiles at position 13 for the three sensing systems are shown in Figure 5. Figure 5(a) refers to the sensor bead. The dashed lines represent the dynamic forces at
the contact points c[12] and c[13], whereas the continuous line is the average value of the two dynamic contact forces and it represents the force measured by the sensor bead [6]. Figure 5(b)
presents the results relative to the rod. Similar to Figure 5(a), Figure 5(b) shows the values of the force at the contact points of the cylinder with the particles 12 and 14 (dashed lines) and the
averaged value (continuous line). Because the cylinder's mass and stiffness are different than those of the spheres, the incoming wave is partially reflected at the interface, thus reducing the
amplitude of the transmitted pulse.
Finally, Figure 5(c) shows the results associated with the presence of the coil centered at location 13. Owing to the length of the coil and the position of the magnetic bias (Figure 2(d)) the
dynamic forces at contact points c[11] to c[14] are presented. Because the force measured by the MsS is the average of four dynamic contact forces at contact points 11 to 14, its amplitude is smaller
and its duration is longer with respect to the dynamic force measured by the other two sensing systems.
Although the dissipation coefficients can be determined empirically by measuring the magnitudes of the dynamic forces at different positions in the chain [11,12], we adopted another approach that is
illustrated here. Figure 6 shows the voltage output measured by both spherical sensors. The first pulses represent the main solitary wave traveling from the impact point to the wall, while the second
pulses are the waves reflected from the rigid wall. We characterized the HNSW by means of three parameters: the time-of-flight (TOF), the speed, and the amplitude ratio (AR). The first parameter
denotes the difference in the transit time at a given position between the incident and the reflected wave. The speed can be simply computed by dividing the distance between the two sensors by the difference in the arrival time of the two amplitude peaks. Finally, we define the AR as the ratio of the reflected wave amplitude to the incident pulse amplitude.
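A small sketch of how these three features can be extracted from a pair of digitized sensor records follows. The peak-picking threshold and minimum peak spacing are arbitrary choices for illustration; any robust peak detector would do.

```python
import numpy as np
from scipy.signal import find_peaks

def hnsw_features(sig_top, sig_bottom, fs, sensor_spacing):
    """Time of flight (at the top sensor), incident-wave speed, and amplitude
    ratio from two records that each contain one incident and one reflected
    pulse.  fs is the sampling rate [Hz]; sensor_spacing is the distance
    between the two sensors [m]."""
    def first_two_peaks(sig):
        # crude peak picking; threshold and spacing are arbitrary choices
        idx, _ = find_peaks(sig, height=0.3 * np.max(sig), distance=int(20e-6 * fs))
        return idx[0], idx[1]                    # incident peak, reflected peak

    inc_top, ref_top = first_two_peaks(sig_top)
    inc_bot, ref_bot = first_two_peaks(sig_bottom)

    tof = (ref_top - inc_top) / fs
    speed = sensor_spacing * fs / (inc_bot - inc_top)
    ar = sig_top[ref_top] / sig_top[inc_top]
    return tof, speed, ar
```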
In order to calibrate the numerical model to our experimental setup, the dissipation was taken into account. Because the force amplitudes of both incident and reflected solitary pulses were
proportional to the voltage-force conversion factor, their AR was independent of this conversion factor. We computed the dissipation coefficients by considering the 500 experimental measurements of the amplitude ratios AR[top] and AR[bottom] measured by the sensors located at positions 13 and 18, respectively. For different combinations of the dissipation coefficients γ[c] and γ[w], the numerically
predicted AR[top-num] and AR[bottom-num] were calculated. Then an objective function y:

y(γ_c, γ_w) = norm( [ (AR[top-num] − AR[top]) / AR[top] ,  (AR[bottom-num] − AR[bottom]) / AR[bottom] ] )

was
defined in terms of the difference between numerical and experimental results. We optimized this function by finding the combination of γ[c] and γ[w] that minimized the value of the objective
function subjected to the following constraints: γ[c] ≥ 0 and γ[w] ≥ 0. The result of the optimization process yielded γ[c] = 4.8 kg/s and γ[w] = 0. As it is not very plausible that there is no
dissipation at the wall, the result of the optimization may suggest that the dissipation along the chain might be more dominant than the energy loss from the last particle-wall interaction.
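A sketch of this calibration step is given below. The function that maps (γ_c, γ_w) to numerical amplitude ratios is a placeholder surrogate so that the snippet runs end-to-end; in practice it would call the discrete particle model of Section 4. The starting point, surrogate coefficients, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

AR_top_exp, AR_bottom_exp = 0.55, 0.73   # experimental amplitude ratios (cf. Table 2)

def simulate_amplitude_ratios(gamma_c, gamma_w):
    # Placeholder surrogate so the sketch runs end-to-end; in practice this
    # would run the discrete particle model of Section 4 with (gamma_c, gamma_w)
    # and return the numerical AR at positions 13 and 18.
    return (np.exp(-0.11 * gamma_c - 0.01 * gamma_w),
            np.exp(-0.06 * gamma_c - 0.01 * gamma_w))

def objective(x):
    ar_t, ar_b = simulate_amplitude_ratios(*x)
    return np.linalg.norm([(ar_t - AR_top_exp) / AR_top_exp,
                           (ar_b - AR_bottom_exp) / AR_bottom_exp])

res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.0, None), (0.0, None)],
               method="L-BFGS-B")
print(res.x)   # optimal (gamma_c, gamma_w) for the surrogate
```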
As stated earlier, when the piezoelectric cylinders and MsSs were used, the sensor beads were replaced by the two piezo rods and two spheres, respectively. Because the coefficient γ[c] accounts for
the dissipation along the entire chain, we assumed that the replacement of two particles out of 29 had negligible effect on the value of γ[c]. Thus, the value γ[c] = 4.8 kg/s was applied to all three
kinds of sensing technology.
In applications such as nondestructive testing, voltage measurements are sufficient to correlate the characteristics of the solitary pulses to the properties of the structure or material under
inspection. However, other engineering applications may require the quantitative measurement of the dynamic force associated with the traveling pulse. Thus, the relationship between this force and
the output voltage from the sensor needs to be known. To establish this relationship, we adopted the following procedure for all sensing configurations. The experimental time profiles, expressed in Volts and collected using the experimental setup described in Section 2, were compared to the force profiles (expressed in Newtons) computed with the discrete particle model described in Section 4. The model considered the effect of dissipation. Figure 7(a,c,e) shows the voltage output associated with the three transducer pairs. One out of 500 measurements is displayed. With respect to Figure 7
(a,c), Figure 7(e) shows negative values of the output voltage. When the solitary pulse travels through the particles surrounded by the MsS, there is an increase in compression due to the
contribution of the dynamic contact force. This increase generates a positive gradient of the magnetic flux visible in the positive output voltage. When the pulse propagates away, the compression
between adjacent spheres decreases, creating a negative gradient of the magnetic flux. This negative gradient is represented by the negative portion of the signal in Figure 7(e). Similarly, the
dashed lines in Figure 7(b,d,f) shows the corresponding numerical predictions. Because the overall shapes of the incident experimental and numerical temporal profiles were almost identical, a
conversion factor K[i] associated with each measurement i was calculated as:

K_i = F_Num / V_i,Exp    (i = 1, 2, …, 500)
Here V[i,Exp] is the maximum experimental output voltage, and F[Num] is the maximum amplitude of the dynamic force determined numerically. A unique conversion factor K was then established by
averaging 500 ratios. Table 1 summarizes the coefficients determined for every transducer. The small standard deviations prove the repeatability of the setup and the consistency of the novel sensing
systems. It is noted that, as the force measured by the MsS is related to the integral of the voltage (see Equation (5)), the corresponding units for factor K are different from the others.
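The averaging step is trivial but is sketched below for completeness; peak_voltages would hold the 500 measured peak outputs of one transducer (for the MsS, the peaks of the integrated voltage) and F_num the corresponding numerical peak force.

```python
import numpy as np

def calibration_factor(peak_voltages, F_num):
    """peak_voltages: the 500 measured peak outputs of one transducer [V];
    F_num: numerically predicted peak force [N].
    Returns K = mean(F_num / V_i) and its relative standard deviation [%]."""
    K_i = F_num / np.asarray(peak_voltages)
    K = K_i.mean()
    return K, 100.0 * K_i.std() / K
```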
In the last part of our study, we quantitatively compared the shape of the experimental and numerical force profiles. Figure 7(b) shows the time series associated with the bead sensors. For
convenience of representation, the experimental data are shifted horizontally in order to overlap the numerical and experimental peak of the incident pulse at the particle 13. Clearly, the plot shows
good agreement between the experiment and the numerical model and a slight discrepancy is only visible for the reflected waves.
Similarly, Figure 7(d) refers to the results relative to the piezo rod. The figure shows that the amplitude of the incident pulse measured by the bottom cylindrical sensor is slightly smaller than
the numerical prediction and the amplitudes of the reflected pulses are smaller than the corresponding numerical predictions. One possible reason is that the numerical model may underestimate the
attenuation at the bead-cylinder interface, when the effect of possible static friction between the cylinder and the inner wall of the tube is ignored.
Finally, Figure 7(f) shows the experimental force profiles measured by the MsSs and the corresponding numerical result. By integrating the signal in Figure 7(e) and multiplying the integral with the
calibration coefficient K listed in Table 1, Figure 7(f) is obtained. The presence of the magnetically induced precompression reduces the attenuation and increases the pulse's velocity, thus reducing the TOF. The figure reveals the presence of a trough preceding the arrival of the main peak at both sensors. The origin of this response is not fully understood yet, but we speculate that it might be
associated with the presence of the permanent magnetic field. Nonetheless, the ability of the MsS to capture the presence and characteristics of HNSWs is evident. Overall Figure 7(b,d,f) shows very
good agreement between the numerical and the experimental results relative to the incident pulse. There is some noticeable discrepancy in the time of arrival and amplitude of the reflected wave. This
is likely due to the modeling of the rigid wall at the base of the chain.
To assess and compare the capability of the novel transducers to measure the characteristics of the HNSW propagating in a straight chain of particles, Tables 2 and 3 are presented. Table 2 summarizes
the numerical and experimental values of the TOF and AR associated with the propagating pulse. The experimental values represent the average and the standard deviation (std) of the 500 measurements.
The small variation between numerical and experimental values suggests that the two novel transducers perform as well as the sensor beads. The small standard deviation also confirms the repeatability of the sensing systems. The speeds of the incident and reflected solitary pulses are presented in Table 3. The small differences between numerical and experimental results indicate that
all three types of sensors are able to measure the wave speed accurately. The smallest standard deviation, which was found for the magnetostrictive transducer, indicates the highest degree of
repeatability of the setup. The speed of the solitary wave depends upon the setup. This is not surprising as the solitary waves exhibit unique physical properties when compared to linear elastic
waves. One of these properties is the dependency of the speed to the pulse's amplitude and to the level of precompression in the chain. Thus, the wave speed measured by the MsSs was the largest
because they induce precompression magnetically. The speed measured by the piezo rods was the smallest as the presence of the solid rod diminishes the amplitude of the traveling pulse.
In this paper we investigated numerically and experimentally three sensing systems for the measurements of highly nonlinear solitary pulses propagating in a chain of spherical particles. The
transducers were a conventional pair of instrumented beads, and two novel designs based on the utilization of a pair of piezo rods, and a pair of coils. The latter aimed at exploiting the
magnetostrictive properties of the particles.
We compared the experimental results to the numerical predictions obtained by means of a discrete particle model. We found that the two novel designs performed equally well when compared to the
conventional sensor beads, which require micromachining and must not be allowed to rotate in order to keep their sensitivity constant. From this study the following consideration can be made. Owing
to its geometry the piezoelectric cylinder is not prone to rotation and does not require the machining of half-particles. However, it may originate unwanted secondary pulses that trail the incident wave, and it attenuates the amplitude of the incident pulse. To prevent this problem, the cylindrical sensor should have the same mass as the other particles composing the chain, and the sphere-cylinder
contact stiffness should be the same as the sphere-sphere contact stiffness. Based on the contact mechanics theory, in order to have same contact stiffness, the elastic modulus of the cylinder should
be approximately equal to 55% of the particle material's elastic modulus [11], which may not be feasible in practice for piezoelectric material. Moreover the wiring of the cylinder may be too brittle
and therefore prone to rupture. Conversely, the use of coils has a multi-fold advantage: the coil can be mounted outside the chain; it is noncontact; it can slide at convenience to any position along the
chain. However, this design can be used only when the particles composing the chain are sensitive to magnetostriction, i.e., they are able to convert magnetic energy into mechanical energy and vice
versa. In fact, the magnetostrictive sensors exploit the coupling between the dynamic deformation of the magnetostrictive material to which they are applied and the variation of magnetic field
surrounding the material. As confirmed by the empirical results, the voltage output is proportional to the dynamic deformation caused by the passage of the solitary wave pulse (Equation (5)). Future
studies may look at improving the design of the magnetostrictive transducers and at investigating the response of the novel sensing system design in presence of much smaller and much larger crystals.
The authors acknowledge the support of the University of Pittsburgh's Mascaro Center for Sustainable Innovation seed grant program, the Federal Railroad Administration under contract
DTFR53-12-C-00014 (Leith Al-Nazer was the Program Manager), and the U.S. National Science Foundation (CMMI 1200259).
References
1. Nesterenko, V.F. Propagation of nonlinear compression pulses in granular media. 1983, 24, 733–743.
2. Lazaridi, A.N.; Nesterenko, V.F. Observation of a new type of solitary waves in one-dimensional granular medium. 1985, 26, 405–408, doi:10.1007/BF00910379.
3. Nesterenko, V.F.; Lazaridi, A.N.; Sibiryakov, E.B. The decay of soliton at the contact of two "acoustic vacuums". 1995, 36, 166–168, doi:10.1007/BF02369645.
4. Coste, C.; Falcon, E.; Fauve, S. Solitary waves in a chain of beads under Hertz contact. 1997, 56, 6104–6117, doi:10.1103/PhysRevE.56.6104.
5. Coste, C.; Gilles, B. On the validity of Hertz contact law for granular material acoustics. 1999, 7, 155–168, doi:10.1007/s100510050598.
6. Daraio, C.; Nesterenko, V.F.; Herbold, E.B.; Jin, S. Strongly nonlinear waves in a chain of Teflon beads. 2005, 72, 016603:1–016603:9.
7. Daraio, C.; Nesterenko, V.F.; Herbold, E.B.; Jin, S. Tunability of solitary wave properties in one-dimensional strongly nonlinear phononic crystals. 2006, 73, 026610:1–026610:10.
8. Job, S.; Melo, F.; Sokolow, A.; Sen, S. How Hertzian solitary waves interact with boundaries in a 1D granular medium. 2005, 94, 178002:1–178002:4.
9. Job, S.; Melo, F.; Sokolow, A.; Sen, S. Solitary wave trains in granular chains: experiments, theory and simulations. 2007, 10, 13–20, doi:10.1007/s10035-007-0054-2.
10. Nesterenko, V.F.; Daraio, C.; Herbold, E.B.; Jin, S. Anomalous wave reflection at the interface of two strongly nonlinear granular media. 2005, 95, 158702:1–158702:4.
11. Yang, J.; Silvestro, C.; Khatri, D.; De Nardo, L.; Daraio, C. Interaction of highly nonlinear solitary waves with linear elastic media. 2011, 83, 046606:1–046606:12.
12. Carretero-González, R.; Khatri, D.; Porter, M.A.; Kevrekidis, P.G.; Daraio, C. Dissipative solitary waves in granular crystals. 2009, 102, 024102:1–024102:4.
13. Ni, X.; Rizzo, P.; Daraio, C. Laser-based excitation of nonlinear solitary waves in a chain of particles. 2011, 84, 026601:1–026601:5.
14. Ni, X.; Rizzo, P.; Daraio, C. Actuators for the generation of highly nonlinear solitary waves. 2011, 82, 034902:1–034902:6.
15. Sen, S.; Manciu, M.; Wright, J.D. Solitonlike pulses in perturbed and driven Hertzian chains and their possible applications in detecting buried impurities. 1998, 57, 2386–2397, doi:10.1103/PhysRevE.57.2386.
16. Chatterjee, A. Asymptotic solution for solitary waves in a chain of elastic spheres. 1999, 59, 5912–5919, doi:10.1103/PhysRevE.59.5912.
17. Manciu, F.S.; Sen, S. Secondary solitary wave formation in systems with generalized Hertz interactions. 2002, 66, 016616:1–016616:11.
18. Hong, J. Universal power-law decay of the impulse energy in granular protectors. 2005, 94, 108001:1–108001:4.
19. Vergara, L. Scattering of solitary waves from interfaces in granular media. 2005, 95, 108002:1–108002:4.
20. Rosas, A.; Romero, A.H.; Nesterenko, V.F.; Lindenberg, K. Observation of two-wave structure in strongly nonlinear dissipative granular chains. 2007, 98, 164301:1–164301:4.
21. Spadoni, A.; Daraio, C. Generation and control of sound bullets with a nonlinear acoustic lens. 2010, 107, 7230–7234, doi:10.1073/pnas.1001514107.
22. Fraternali, F.; Porter, M.A.; Daraio, C. Optimal design of composite granular protectors. 2009, 17, 1–19, doi:10.1080/15376490802710779.
23. Hong, J.; Xu, A. Nondestructive identification of impurities in granular medium. 2002, 81, 4868–4870, doi:10.1063/1.1522829.
24. Boechler, N.; Theocharis, G.; Daraio, C. Bifurcation-based acoustic switching and rectification. 2011, 10, 665–668.
25. Ni, X.; Rizzo, P.; Yang, J.; Katri, D.; Daraio, C. Monitoring the hydration of cement using highly nonlinear solitary waves. 2012, 52, 76–85, doi:10.1016/j.ndteint.2012.05.003.
26. Ni, X.; Rizzo, P. Use of highly nonlinear solitary waves in NDT. 2012, 70, 561–569.
27. Ni, X.; Rizzo, P. Highly nonlinear solitary waves for the inspection of adhesive joints. 2012, 52, 1493–1501, doi:10.1007/s11340-012-9595-3.
28. Yang, J.; Silvestro, C.; Sangiorgio, S.N.; Borkowski, S.L.; Ebramzadeh, E.; De Nardo, L.; Daraio, C. Nondestructive evaluation of orthopaedic implant stability in THA using highly nonlinear solitary waves. 2012, 21, 012002:1–012002:10.
29. Leonard, A.; Fraternali, F.; Daraio, C. Directional wave propagation in a highly nonlinear square packing of spheres. 2011, doi:10.1007/s11340-011-9544-6.
30. Leonard, A.; Daraio, C. Stress wave anisotropy in centered square highly nonlinear granular systems. 2012, 108, 214301:1–214301:4.
31. Zhu, Y.; Shukla, A.; Sadd, M.H. The effect of microstructural fabric on dynamic load transfer in two dimensional assemblies of elliptical particles. 1996, 44, 1283–1303, doi:10.1016/0022-5096(96)00036-1.
32. Geng, J.; Reydellet, G.; Clément, E.; Behringer, R.P. Green's function measurements of force transmission in 2D granular materials. 2003, 182, 274–303, doi:10.1016/S0167-2789(03)00137-4.
33. Calkins, F.T.; Flatau, A.B.; Dapino, M.J. Overview of magnetostrictive sensor technology. 2007, 18, 1057–1066, doi:10.1177/1045389X06072358.
34. Joule, J.P. On the effects of magnetism upon the dimensions of iron and steel bars. 1847, 30, 76–87.
35. Kleinke, D.K.; Uras, M.H. A magnetostrictive force sensor. 1994, 65, 1699–1710, doi:10.1063/1.1144863.
36. Kleinke, D.K.; Uras, M.H. Modeling of magnetostrictive sensors. 1996, 67, 294–301, doi:10.1063/1.1146584.
37. Tumanski, S. Induction coil sensors—A review. 2007, 18, R31–R46, doi:10.1088/0957-0233/18/3/R01.
38. Villari, E. Change of magnetization by tension and by electric current. 1865, 126, 87–122.
39. Lanza di Scalea, F.; Rizzo, P.; Seible, F. Stress measurement and defect detection in steel strands by guided stress waves. 2003, 15, 219–227, doi:10.1061/(ASCE)0899-1561(2003)15:3(219).
40. Rizzo, P.; Lanza di Scalea, F. Monitoring in cable stays via guided wave magnetostrictive ultrasonics. 2004, 62, 1057–1065.
41. Rizzo, P.; Lanza di Scalea, F. Ultrasonic inspection of multi-wire steel strands with the aid of the wavelet transform. 2005, 14, 685–695, doi:10.1088/0964-1726/14/4/027.
42. Rizzo, P.; Sorrivi, E.; Lanza di Scalea, F.; Viola, E. Wavelet-based outlier analysis for guided wave structural monitoring: application to multi-wire strands. 2007, 307, 52–68, doi:10.1016/j.jsv.2007.06.058.
43. Rizzo, P.; Bartoli, I.; Marzani, A.; Lanza di Scalea, F. Defect classification in pipes by neural networks using multiple guided ultrasonic wave features extracted after wavelet processing. 2005, 127, 294–303, doi:10.1115/1.1990213.
44. Rizzo, P.; Lanza di Scalea, F. Feature extraction for defect detection in strands by guided ultrasonic waves. 2006, 5, 297–308, doi:10.1177/1475921706067742.
Schematic diagram of the experimental setup.
Sensing technologies used in this study. (a) Bead sensor formed by a thin piezoelectric crystal embedded between two half particles, (b) commercial piezo rod, (c) magnetostrictive sensor formed by a
coil and a bridge magnet, (d) Schematic diagram of one magnetostrictive sensor assembled with the tube filled with spherical particles.
Schematic diagram of the one-dimensional discrete element model. The c[1], c[2], …, c[N] indicate the points of contact between two neighboring particles. When the presence of the piezo rod is
modeled, the spheres 13 and 18 are replaced by solid rods.
Discrete particle model results showing the temporal force profile for all three sensing configurations at contact points: (a) c[8], (b) c[11], (c) c[12], and (d) c[13].
Discrete particle model results showing the temporal force profile at some contact points (dashed lines) and as measured by three sensors (solid lines): (a) bead sensor, (b) piezo rod, (c)
magnetostrictive sensor.
Typical waveforms measured by the bead sensors.
(a) Experimental results for bead sensors, (b) comparison of experimental and numerical results for bead sensors, (c) experimental results for cylindrical sensors, (d) comparison of experimental and
numerical results for cylindrical sensors, (e) experimental results for magnetostrictive sensors, (f) comparison of experimental and numerical results for magnetostrictive sensors.
Calibration coefficients adopted in this study.
Calibration coefficient K Relative standard deviation (%) Unit of K
Bead sensor top 2.072 4.71 N/V
Bead sensor bottom 2.160 4.31 N/V
Cylindrical sensor top 0.632 4.00 N/V
Cylindrical sensor bottom 0.557 4.01 N/V
MSS top 0.131 1.44 N/V·sec
MSS bottom 0.156 1.56 N/V·sec
Experimental and numerical time-of-flight and amplitude ratio.
Sensor Type | Sensor Position | TOF Numerical (microsec) | TOF Experimental Mean (microsec) | TOF Experimental Std (microsec) | TOF Difference (%) | AR Numerical | AR Experimental Mean | AR Experimental Std | AR Difference (%)
Bead | Top | 318.7 | 309.8 | 2.6 | 2.79 | 0.57 | 0.55 | 0.04 | 3.51
Bead | Bottom | 227.5 | 219.9 | 1.6 | 3.34 | 0.68 | 0.73 | 0.02 | 7.35
Cylindrical | Top | 341.0 | 337.0 | 5.3 | 1.17 | 0.42 | 0.37 | 0.03 | 11.9
Cylindrical | Bottom | 234.9 | 228.4 | 3.0 | 2.77 | 0.68 | 0.60 | 0.07 | 11.8
MSS | Top | 298.8 | 287.3 | 0.9 | 3.85 | 0.70 | 0.69 | 0.02 | 1.43
MSS | Bottom | 217.6 | 207.0 | 0.7 | 4.87 | 0.79 | 0.86 | 0.01 | 8.86
Experimental and numerical speed of the incident and reflected HNSW pulse.
Sensor Type | Type of Wave | Numerical (m/s) | Experimental Mean (m/s) | Experimental Std (m/s) | Difference (%)
Bead | Incident | 540.0 | 547.0 | 13.1 | 1.30
Bead | Reflected | 505.6 | 514.1 | 16.0 | 1.68
Cylindrical | Incident | 486.3 | 512.0 | 14.6 | 5.28
Cylindrical | Reflected | 458.7 | 422.9 | 38.6 | 7.80
MSS | Incident | 596.8 | 602.1 | 1.75 | 0.89
MSS | Reflected | 576.6 | 584.8 | 3.36 | 1.42
|
{"url":"http://www.mdpi.com/1424-8220/13/1/1231/xml","timestamp":"2014-04-16T13:36:32Z","content_type":null,"content_length":"90568","record_id":"<urn:uuid:c6077362-9f48-4c95-b7a6-8a06d725e6cb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Moduli spaces of geometric structures on surfaces and modular operads
Seminar Room 1, Newton Institute
It is well-known that the moduli space of Riemann surfaces with nonempty boundary is homotopy equivalent to the moduli space of metric ribbon graphs. We generalize this to a large class of geometric
structures on surfaces, including unoriented, Spin, r-spin, and principal G-bundles, etc. For any class of geometric structures that can be described in terms of sections a suitable sheaf of spaces
on a surface, we define a moduli space and show that it is homotopy equivalent to a moduli space of graphs with appropriate decorations at the vertices. The construction rests on the contractibility
of the arc complex and can be interpreted in terms of derived modular envelopes of cyclic operads.
|
{"url":"http://www.newton.ac.uk/programmes/GDO/seminars/2013040513301.html","timestamp":"2014-04-18T23:22:05Z","content_type":null,"content_length":"6172","record_id":"<urn:uuid:432efcd4-418b-4b2c-bce3-0434ef919ff7>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Newton, MA Prealgebra Tutor
Find a West Newton, MA Prealgebra Tutor
...My main enterprise is a management consulting firm. I am considered an expert on business innovation, business-model innovation, and entrepreneurship. Rather than being someone who just
teaches how to do better, I'm someone who has led a life of high accomplishment.
55 Subjects: including prealgebra, reading, English, writing
...I would be happy to talk to you more about your student's goals and how I can help your child achieve them. I work with students on increasing their vocabulary knowledge through flashcards (and
teach them how to make the right kind of flashcard). I also teach students how to decipher meanings of w...
26 Subjects: including prealgebra, English, linear algebra, algebra 1
...I focus on giving my students the concrete tools they need to do a sophisticated rhetorical analysis, which not only allows them to get questions right, but also boosts their confidence --
because they can finally explain what makes the best answer better than all the others. I earned a perfect ...
47 Subjects: including prealgebra, English, chemistry, reading
...I have also been fortunate to travel to 40 plus countries, in many cases I had to understand the geography of the region before I could travel. I have been a student of the atlas my whole
life. I have a minor degree in history which includes knowledge of government and politics.
22 Subjects: including prealgebra, physics, biology, writing
I've been tutoring math to high-school and middle-school students in the Canton, Dedham, Sharon, Norwood and Stoughton area for ten years. I've tutored nearly all the students I've worked with
for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjo...
11 Subjects: including prealgebra, geometry, algebra 1, precalculus
|
{"url":"http://www.purplemath.com/West_Newton_MA_Prealgebra_tutors.php","timestamp":"2014-04-16T19:19:11Z","content_type":null,"content_length":"24365","record_id":"<urn:uuid:bd762dcb-5026-422f-8fa2-c27471ab9e3f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electric Field due to a Charged Cylinder
Doc Al
No. A "hollow disk" is a ring.
Oh...ok that makes sense now.
Doc Al
Start with the field from a ring of charge. Imagine the hollow cylinder as a stack of such rings and integrate.
Ok, so just to see if I am doing this right,
{{dE}_{net}} = {{dE}_{P1_{z}}}
{{dE}_{P1_{z}}} = {{dE}_{P1}}{{cos}{\beta}}
{{dE}_{P1}} = {\frac{{{k}_{e}}{dq}}{{\left({r}_{{}_{1P}}\right)}^{2}}}
{cos}{\beta} \equiv \frac{adj.}{hyp.}
{cos}{\beta} = {\frac{s}{{\sqrt{{{s}^{2}}+{{{R}_{0}}^{2}}}}}}
{{dE}_{P1_{z}}} = {{dE}_{P1}}{{cos}{\beta}}
{{dE}_{P1_{z}}} = {\left(\frac{{k_{e}}{dq}}{{\left(r_{_{1P}}\right)}^{2}}\right)}{{\left( \frac{s}{{\sqrt{{{s}^{2}}+{{{R}_{0}}^{2}}}}}\right)}}
{\lambda} = {\frac{dq}{ds}}
{dq} = {\lambda}{ds}
In addition,
{r_{_{1P}}} = \sqrt{{s}^{2}+{R_{0}}^{2}}
Ok, I am up to here, however I just want to make sure I understand the integration correctly.
For the charged ring a differential (length-wise) segment of the ring was the [itex]{ds}[/itex].
However for this charged cylinder we are letting a differential width (thickness) of the ring be [itex]{ds}[/itex].
Will the integral work anyway since we are using [itex]{\lambda}[/itex], but noting how [itex]{ds}[/itex] does not represent the same differential shape (in both the situations above)?
I get the feeling as if I should be using [itex]{\sigma}[/itex] (because we are dealing with width (thickness))...
So, why does this integration work anyway with [itex]{\lambda}[/itex] as opposed to [itex]{\sigma}[/itex] in
Thanks Doc Al.
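Not part of the original thread, but a quick numerical check of the point in question: the sketch below (with illustrative numbers) shows that summing the on-axis ring contributions with dq = λ ds gives exactly the same result as using the surface density σ with dq = σ·2πR₀·ds, because λ = σ·2πR₀ is the charge per unit length of the cylinder.

```python
import numpy as np

# On-axis field of a uniformly charged hollow cylinder built from rings.
# Each ring of width ds at distance s from the field point P contributes
#     dE_z = k * dq * s / (s**2 + R0**2)**1.5
# Using lambda (charge per unit length of the cylinder) with dq = lambda*ds is
# the same as using sigma (surface charge density) with dq = sigma*2*pi*R0*ds,
# because lambda = sigma*2*pi*R0.  All numbers are illustrative.
k = 8.99e9
R0, L, Q = 0.05, 0.20, 1e-9
z_near = 0.10                      # distance from P to the near end of the cylinder

lam = Q / L
sigma = Q / (2 * np.pi * R0 * L)

s = np.linspace(z_near, z_near + L, 20001)
ds = s[1] - s[0]

E_lambda = np.sum(k * (lam * ds) * s / (s**2 + R0**2) ** 1.5)
E_sigma = np.sum(k * (sigma * 2 * np.pi * R0 * ds) * s / (s**2 + R0**2) ** 1.5)
print(E_lambda, E_sigma)           # identical up to round-off
```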
|
{"url":"http://www.physicsforums.com/showthread.php?t=188011","timestamp":"2014-04-21T09:55:40Z","content_type":null,"content_length":"68000","record_id":"<urn:uuid:08dc89ff-3e21-4093-87d6-c77d7e4df823>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Eloy Algebra Tutor
Find an Eloy Algebra Tutor
MS Business Education -- Computer Science, Montana State University. BBA Business Administration -- Accounting, University of Alaska Southeast. With over twenty years of experience with computers, I
have worked on projects dealing with the following: Software Development, Data Systems Design, Software D...
12 Subjects: including algebra 1, algebra 2, Microsoft Excel, computer programming
...I am proficient in math, pre-algebra, college algebra, trigonometry, calculus 1-3, and statistics. I have had experience tutoring all of those classes with the exception statistics, however, I
am confident in my teaching abilities. I realize everyone learns differently and am very good at explaining things multiple ways.
28 Subjects: including algebra 1, algebra 2, reading, English
...Here's a big accomplishment that I want you to know: I am already used to hearing "I need your help," from elementary students. When I began tutoring I found out that a majority of the students
weren’t performing well on Arizona’s standardized test. The main reason I thought students weren’t performing well was due to a paucity of tutors.
67 Subjects: including algebra 2, American history, biology, chemistry
...My specialties are Algebra and Geometry. I am also proficient in Calculus. I have a passion for Math and am enthusiastic tutor with a lot of patience.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have taught English in Japan and in the U.S. I love meeting people from different countries, and I look forward to working with you!I received my K - 8 teaching credential from San Jose State
University in 1989. I taught elementary school for 3 years before having children, and since then I have substitute taught and tutored.
25 Subjects: including algebra 2, algebra 1, reading, English
|
{"url":"http://www.purplemath.com/Eloy_Algebra_tutors.php","timestamp":"2014-04-17T19:14:16Z","content_type":null,"content_length":"23598","record_id":"<urn:uuid:e163d225-cdff-4630-94c8-c4cfdfd0a67b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Commensurators of finitely generated non-free Kleinian groups
C. J. Leininger, D. D. Long & A. W. Reid
September 2, 2009
1 Introduction
Let G be a group and Γ_1, Γ_2 < G. Γ_1 and Γ_2 are called commensurable if Γ_1 ∩ Γ_2 has finite index
in both Γ_1 and Γ_2. The commensurator of a subgroup Γ < G is defined to be:
C_G(Γ) = {g ∈ G : gΓg^{-1} is commensurable with Γ}.
When G is a semi-simple Lie group, and Γ a lattice, a fundamental dichotomy established by
Margulis [25] determines that C_G(Γ) is dense in G if and only if Γ is arithmetic, and moreover,
when Γ is non-arithmetic, C_G(Γ) is again a lattice.
Historically, the prominence of the commensurator was due in large part to the density of the
commensurator in the arithmetic setting being closely related to the abundance of Hecke operators
attached to arithmetic lattices. These operators are fundamental objects in the theory of automor-
phic forms associated to arithmetic lattices (see [38] for example). More recently, the commensurator
of various classes of groups has come to the fore due its growing role in geometry, topology and ge-
ometric group theory; for example in classifying lattices up to quasi-isometry, classifying graph
manifolds up to quasi-isometry, and understanding Riemannian metrics admitting many "hidden
symmetries" (for more on these and other topics see [2], [4], [17], [18], [24], [34] and [37]).
In this article, we will study C_G(Γ) when G = PSL(2, C) and Γ a finitely generated non-
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/772/2475043.html","timestamp":"2014-04-16T13:48:46Z","content_type":null,"content_length":"8575","record_id":"<urn:uuid:2c701ee0-a31c-477b-9a87-e5016a35ac95>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Igcse Math Papers 2011
Igcse Math Papers 2011 PDF
Cambridge is publishing the mark schemes for the October/November 2011 question papers for most IGCSE, GCE Advanced Level and Advanced Subsidiary Level syllabuses and some Ordinary Level syllabuses.
Page 2 Mark Scheme: Teachers’ version Syllabus Paper
Cambridge International Examinations October / November 2011 IGCSE - DUBAI The United Kingdom’s international organisation for educational opportunities and cultural relations.
Introduction The Edexcel International General Certificate of Secondary Education (IGCSE) in Mathematics (Specification A) is designed for schools and colleges.
Cambridge International Examinations IGCSE - May / June 2011 Abu Dhab i The United Kingdom’s international organisation for educational opportunities and cultural relations.
Syllabus Cambridge IGCSE Mathematics Syllabus code 0580 Cambridge IGCSE Mathematics (with coursework) Syllabus code 0581 For examination in June and November 2012
Syllabus Cambridge IGCSE Mathematics (US) Syllabus Code 0444 For examination in 2013 This syllabus is only available to Centers taking part in the
IGCSE London Examinations IGCSE Mathematics (4400) For examination in May and November 2004, 2005, 2006, 2007 November 2003, Issue 2 ... • papers will have approximately equal marks available for
each of the targeted grades to be awarded
Additional Math International Math ... IGCSE ENGLISH 2011-2013 This two-year English course is well balanced. ... In the second year of the IGCSE two papers similar to the board exam are set for
the Mock Examinations: one paper ...
examinations of the IGCSE papers set by the Cambridge International Examinations Syndicate. ... CDL MATHEMATICS Course Structure 2011-2012 ... 10 Foundation 11 Math 2 12 Foundation
IGCSE Mathematics (4400) London Examinations November 2004 delivered locally, recognised globally Mark Scheme with Examiners’ Report. ... one of the more demanding questions on the papers. Some
mistakes occurred at the beginning with fg(x), ...
All candidates take two papers. ... IGCSE and Cambridge International Level 1/Level 2 Certificate syllabuses are at the same level. Calculating aids: Paper 1 – the use of all calculating aids is
prohibited. ... 4/1/2011 9:05:10 AM ...
Mathematics Specification 2011 onwards. This specification is available on the website http://web.aqa.org.uk/qual/igcse/maths.php The question papers are intended to represent the length and balance
of the papers that will be set for the examination and to indicate the types of
First examination 2011 IGCSE Mathematics (Specification A) Edexcel, a Pearson company, is the UK’s largest awarding body, ... Relationship of Assessment Objectives to Papers for IGCSE 32 Entering
your students for assessment 32 Student entry 32
IGCSE Weekly Assignments Grade Date IGCSE Coordinator Notes 11 A/B 01/05/11 to 05/05/11 Ms.Zahida English Prepare for Class test on Wednesday 4th May 2011 (Ex: 1-2 and 3-4).Solve M/J 2009 Past Paper
(1-2). Math Solve Revision Ex:1A Page:38. Environmental Management Revise the units: 2.1,2.2,2 .3 ...
September 2011 and first examination in May/June 2012 (unless otherwise specified) ... for use with IGCSE or AS / AL Biology, Chemistry, Physics, ... (Cambridge A Level Pure Math Papers 1, 2 and 3) T
/S Doublestruck www.qkit.co.uk
IGCSE Specimen Papers and Mark Schemes – London Examinations IGCSE in Mathematics (4400) Publication code: UG013054 Issue 1, July 2003 91 IGCSE Mathematics (4400) Mark Schemes for Specimen Papers
with Specification Grid Paper 4H (Higher Tier) Qu. Syllabus
IGCSE English Literature examination in the second year of the course. ... In 2010, the examination will consist of two papers: Component Duration Weighting Paper 4: Set Texts – Closed Books: A 2
hours 15 mins 75%
All candidates will take two written papers. The syllabus content will be assessed by Paper 1 and Paper 2. Paper 1 Duration Marks ... Please note that IGCSE, Cambridge International Level 1/Level 2
Certificates and O Level syllabuses are at
CIE is publishing the mark schemes for the May/June 2010 question papers for most IGCSE, GCE Advanced Level and Advanced Subsidiary Level syllabuses and some Ordinary Level syllabuses. Page 2 Mark
Scheme: Teachers’ version Syllabus Paper
Mathematics IGCSE – Frequently Asked Questions What is IGCSE (International GCSE)? It is a GCSE exam, originally designed for overseas candidates.
Friday 10 June 2011 – Morning Time: 1 hour 45 minutes Materials required for examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses,
pen, HB pencil, eraser. Tracing ...
UC IGCSE Timetable May 14 UNIVERSITY OF CAMBRIDGE INTERNATIONAL EXAMINATIONS IGCSE EXAMINATIONS – MAY/JUNE 2014 TIMETABLE Timetable Clash MORNING – STARTS AT 9:00 AM (All papers start at 9:00am
unless otherwise specified) DATE AFTERNOON – STARTS AT ...
papers restrict the available grades to C-G whereas extended papers allow access to A*- E grades ... IGCSE / A-LEVEL ENTRY DEADLINE 01/07/2011 RESULTS end January JANUARY EDEXCEL IGCSE /A-LEVEL ENTRY
DEADLINE 30/09/2011 RESULTS end March MAY/JUNE EDEXCEL
GCSE 2012 spec. IGCSE 2009/2011 spec. note 1, 2 May/Jun May/Jun January/February Accounting Arabic (First Language) Art ... (Foreign Language) having papers 1, 2, 3 and 4 is offered to extended
curriculum candidates. Sep 2013_V2 Subject IGCSE GCE O-level Oct/Nov May/Jun May/Jun
Ans: All the support materials such as Syllabi, Study guides, sample question papers, students guides, teacher training have been made available either through the ... Ans: O Level last session is in
January 2011 and IGCSE first session is May/June 2011. Q: ...
18 Math IGCSE Revision Guide for Mathematics Optional 1 56.90 ... AUGUST 2010 / JULY 2011 YEAR 10 No. Subject Item Qty RM/Unit RM/Subtotal ... 65 Graph papers Pack of 10 sheets 1 1.10 1.10 66 Note
Book Optional 1 1.20 1.20
International GCSE Mathematics (4MA0) Paper 4H January 2012 January 2012 International GCSE Mathematics (4MA0) Paper 4H Mark Scheme Apart from Questions 3, 13(b) and 17(f) (where the mark scheme
states otherwise), the correct answer,
School Math Papers Math Papers for 2nd Grade Mathematics Papers 1 2 3 4 5 ... IGCSE May 2012 Pure Maths Paper 1, ... Mathematics Papers Math Papers to Print Out Maths 2011. Title: 2012 dse maths
paper 1 - Bing Created Date:
International GCSE Mathematics (4MA0) Paper 3H January 2012 January 2012 International GCSE Mathematics (4MA0) Paper 3H Mark Scheme Question Working Answer Mark Notes
NILAI INTERNATIONAL SCHOOL ACADEMIC YEAR 2011/2012 YEAR 10/11 ... RM/UNIT 11/12 REMARK ... (Extended) Extended Mathematics for Cambridge IGCSE with CD-ROM (Third Edition) MYR 82.90 5 Add Math New
Additional ... Topic Extension 2 12 12 Rowland-Jones $36.95 $18.50 Past papers 3U Ext 2 1998 ...
Mark schemes must be read in conjunction with the question papers and the report on the ... CIE is publishing the mark schemes for the October/November 2010 question papers for most IGCSE, GCE
Advanced Level and Advanced Subsidiary Level syllabuses and some Ordinary Level syllabuses.
PEARSON GLOBAL SCHOOLS. Your local learning partner. Primary/Elementary Secondary IB Diploma. IGCSE
Gce o Level Question Papers Free Essays 1 - 20 ... 2008 question paper 0610 BIOLOGY 0610/02 ... for most IGCSE, GCE Advanced Level ... geimetry 2011 january regents answers diagram of weed eater
model sst25 ford 7700
Math mock exam. http://www.vakebooks ... interactive resource allowing learners to practice multiplication 2011 [Tue] 3:37 ... animal ... Doc Brown's Chemistry website Revision Notes KS4 GCSE/IGCSE
Science GCSE/IGCSE Chemistry GCSE Biology GCSE Physics GCE AS A2 A Advanced Level Chemistry
You must pass at least six O-levels and/or IGCSE/GCSE subjects, and two A-level subjects. 3. ... Level Title Code Papers Board GCSE Arabic * 1606/ 1607 1606/1607 Edexcel ... Math. + AS Arabic) ...
Sillabus Entry for IGCSE / AS & A Level, LSP Checkpoint October November 2011 ... must be indicated on all exam papers and correspondence with the school. Kind regards Heidi van Zyl ... Math 1112 LSP
1: Paper 1 ( Non-calculator) 2: Paper 2 (Calculator) Leave blank
Eight Students will write six papers (two x one hour papers in each of these three subjects). In Grade Nne, we will be integrating nine different Cambridge IGCSE Curricula with the equivalent Ontario
Secondary School (OSS ... SEPTEMBER 7 First Day of Classes for the 2011-2012 Academic ...
Math English Student Diary I.D. Card (one only) Escort Card ... (IGCSE / IB related syllabus past papers / guidelines /CDs) Title: Application for admission 2011.pmd Author: Admin Created Date: 5/10/
2011 1:17:06 PM ...
2011 Advanced Award 400 UMS A(320) B(280) C(240) D(200) E(160) Art & Design GCE ARTA1; Art and Design (Art, Crafts and Design) Unit 1 100 UMS; ... Language IGCSE; 8705/1F Paper 1 Foundation Tier; 139
UMS C(120) D(100) E(80) F(60) G(40) 8705/1H Paper 1 Higher Tier; 200 UMS A*(180) A(160) B(140) C ...
IGCSE Pass Rate: 100% ... above in ALL papers. S/N Candidates Name A-Level Subjects Grade 1 Kayanja Lawrence PCB/ Math A*A*AA 2 Aine Sheba PCB/ Math A*AAA ... January 2011 with their current mock
results. Students in UNEB S.1- S.3 can apply for
and"examination"papers"and"constantly"updates"thesethroughout"theSchool"Year.""In"addition,"both" ... Students"in"Year"11taken"IGCSE,"set"by"International"Edexcel"after"completing"atwo"year"courseof"
2011-2012 See also Deledda ... the first biennial program was changed to the IGCSE program and Deledda International was authorized as a ... short and extended responses, essays, research papers,
projects, portfolios, class discussions, group and indi-
MATH MATH MATH SPECIAL CLASSES PHYS# PRACTICAL No school 9:00-11:00 Sun 15TH Dec ... TEST & EXAMINATION PAPERS BREAK ... On December 11, 2011 NIS took the decision to allow children the choice ...
INTERNATIONAL Written specifically for the International GCSE Includes a component with audio Published in the UK ... 3 & Algebra Readiness ©2011 13-15 Math Navigator ©2012 Common Core 16-17 ... the
papers but also additional worked solution videos of AO2 and AO3 questions!
Education (IGCSE) which is taken at ... Math Quest and various community projects. Governance: The school is governed by a nine member Board of Trustees, which is elected by the School Association,
... ² Percentage of ISM papers graded 5 or higher .
2011/2011L (8 credits) United States: History. AMH 0301 (3 credits) AMH 2010 : And 2020 (6 credits) ... Math & Sciences. Group B Languages. ... Papers. AICE Thinking Skills. AICE Biology. Accelerated
Courses by Grade Level. 12 th Grade. AP
Education (IGCSE) which is taken at ... Math Quest and various community projects. Governance: The school is governed by a nine member Board of Trustees, which is elected by the School Association,
... ² Percentage of ISM papers graded 5 or higher
MATH MATH MATH MATH PHYS# PRACTICAL No school 9:00-11:00 Sun 15TH ... TEST & EXAMINATION PAPERS Please note that NIS does not review test and examination papers with parents ... 2011 NIS took the
decision to allow children the
IGCSE, IB, Qatar Independent school, American, British, Pakistani, ... Legal Papers required. UURGENTLY REQUIRESRGENTLY REQUIRES Send CV to email: ... 2011. 2 Units W. 500 Komatsu Wheel loader model
2011. Contact: 70018189, 33564090, ...
|
{"url":"http://ebookilys.org/pdf/igcse-math-papers-2011","timestamp":"2014-04-19T14:47:54Z","content_type":null,"content_length":"40783","record_id":"<urn:uuid:1a72cbb1-e4e2-4319-a9a2-f2a1002a9933>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Teach with Faculty-coached, In-class Problem Solving
Jump to: How do I make this approach work for me
How to set up and manage effective groups
How to coach students
How students receive feedback
How to design in-class problems
Success depends on developing problem sets that build student knowledge incrementally and that include content analysis, transfer, and synthesis skills. The problem solving approach works best when
interactive lectures and problems are integrated into an instructional unit. Problems which are relevant to students' lives increase their engagement and allow them to build on their previous knowledge.
Variation in problem types accommodates the needs of different learners, and exposes all students to different ways of learning. Problems can be grouped into one (or more) of the following categories:
• Problems requiring model building and other hands-on activities
• Problems requiring labeled diagrams of a mechanism or process
• Problems requiring synthesis (for example, a concept map)
• Checkpoints (problems providing a short immediate review of main concepts before moving on)
• Analysis-level, context-rich problems
• Case studies (for example, a case study about single nucleotide polymorphisms )
• Problems that build-in a study technique (for example a problem directing students to interpret a textbook figure)
• Problems that replace lecture (students use their reading and reasoning skills to independently learn new material--the "you're not going to get this in lecture" type of problem)
• Problems requiring data analysis, experimental design, or understanding of techniques
• Optional "challenge problems" that allow students who work quickly to continue to benefit from problem solving time
In Our Course
One problem we face is how to handle a wide range of previous science backgrounds among our students. We have found that using research-focused questions, such as those requiring data analysis or
understanding of research techniques, is one way to minimize the discrepancies in students' backgrounds; even advanced high school courses typically don't expose students to this type of science
problem. For an example of a research-focused problem see the
Malnutrition, DNA replication, development, and schizophrenia homework problem
Problem set development requires flexibility on the faculty member's part to respond to student needs. Ongoing formative assessments drive the development of new problems (Tanner and Allen, 2004).
Although writing problems is a time consuming effort, you can gradually adopt an in-class problem-solving approach. Each time we teach this course we incorporate less lecture and more problems; it
took time to develop a sufficient number of challenging, engaging problems. (See a list of
example problems.)
Most of the problems the students solve in groups are ungraded to emphasize the process of learning the material. A secondary advantage is the ability to reuse problems from year to year without advantaging students who could get course material from previous students.
Homework Problems
To increase individual accountability students are given a homework assignment prior to each exam. These assignments are based on current research and involve students interpreting data,
understanding techniques, and synthesizing concepts. At the beginning of a new unit, students are given a science news article summarizing a current finding that relates to the topics that will be
covered. Prior to the exam they are given the homework assignment that contains figures from a research paper related to the science news article they read, as well as some background text
summarizing the goals of the paper and key techniques. The students must interpret the figures to answer the questions. The final question asks the students to link all of the concepts covered in the
unit and to connect those concepts to this research article. They are asked to do this in a diagram form supported by text. (See an example homework problem.)
Problem Keys
Problem keys are essential. In our course, we make the keys available (online) after giving the students time to struggle through the problems on their own (at least two days before the exam). The
keys model the problem solving process for the students, and include thorough explanations. The keys provide an opportunity to reteach concepts or to make explicit connections between concepts in
response to student performance in class. Keys may include hand drawn figures to mimic student diagrams, as well as computer generated drawings where appropriate. Students may need to be reminded in
class to use the answer keys while studying. For an example of a problem key see the RNA processing and Northern blot analysis problem.
How do I make this approach work for me?
Back to top
The faculty-coached, in-class problem solving approach may require a restructuring of the material in an existing course. Rather than introducing new material primarily through lecture or interactive
lecture, in this approach, new material can be presented both by interactive lecture and the problems the students will solve. The problems are not an "add-on" to what is currently being done, but
rather can be thought of as a replacement for portions of existing lectures. The function of the lectures is to provide a starting point that allows students to solve the problems.
In designing the shorter, interactive lectures we recognized that covering a topic in class via lecture doesn't automatically mean that students understand and can apply the material. One way we have
reduced lecture time is by including fewer historical experiments (although of course we mention key players and their role in science). Instead we incorporate newer experiments into the problems the
students solve; this allows us to select a more diverse representation of scientists and exposes the students to current techniques used in biology.
Here are some additional strategies to consider:
• think restructuring, not "add-on"
• assign readings before class that will not be reintroduced in lecture
• convert current homework problems or think-pair-shares into in-class problems
• occasionally have students begin problems in class, finish at home, and allow time for questions the next day
• consider moving some of the problems to lab and linking to lab concepts
How to deal with a large class and many groups
Given a large number of groups in class, and the amount of time spent actively solving problems, it may be useful to hire graduate or undergraduate teaching assistants to supplement faculty
interactions during problem solving sessions. Coaching students requires a strong knowledge base, insight into common misconceptions, sensitivity to diversity, and an understanding of group dynamics,
skills that may require training for teaching assistants.
In Our Course
We felt it was beneficial to have two faculty members in class coaching. Our introductory courses are typically team-taught, so we chose to have both faculty members present in class throughout the
course. One faculty member is responsible for presenting new information and providing the problems, and the other is present to help student groups as they solve problems. We hire two undergraduate
teaching assistants who attend class to help with problem solving. We choose these TAs carefully, selecting students who have already completed this course. When possible we hire students from
underrepresented groups in order to provide peer models and to increase representation in the department. The primary interactions with these TAs take place in class; there are no weekly outside
meetings scheduled. Our goal is to help students make efficient use of their time in class, without adding any additional "recitation" style sessions.
The faculty-coached, in-class problem solving approach has not reduced the number of concepts we are able to teach in our course, Genes, Evolution, and Development. To see the concepts we include and how our example problems fit in with the course, we've included a link to the syllabus (Acrobat (PDF) 63kB Oct29 09). The course is part of a two-course introductory series and the second course is Energy Flow in Biological Systems.
How to set up and manage effective groups
Back to top
Embedding problem solving in the context of group work requires careful attention to the principles established by K. Heller and P. Heller (2004) and P. Heller and Hollobaugh (1992). As instructors,
• assign groups thoughtfully
□ set the group size at three students
□ assign groups to avoid individuals feeling excluded
□ assign new groups throughout the term to foster the sense of community in the class
• provide written guidelines for effective group work
□ summarize team-building skills
□ list the benefits of group work
□ describe the roles often adopted by members of well-functioning groups
• carefully monitor group interactions while solving problems
□ intervene when groups are not optimizing their potential
□ regroup students in response to particular group situations
In Our Course
Many of our students have expressed gratitude for being placed in groups rather than being left to organize their own groups. At the beginning of the term, we assign groups in class as part of an ice
breaker (based on a technique used in
Team Based Learning
). We ask students to line up according to the population of their hometown and then number off by threes. This makes the group assignment process transparent to the students, and helps ensure that
students from the same hometown are unlikely to be in the same group. To help build a sense of community, we reassign groups several times, typically after each exam. When we reassign groups, we try
to have groups composed of two female students and one male student, or three of one gender as suggested by P. Heller and Hollobaugh (1992). We get to know the students very well and groupings are
based on our own observations of individuals' working styles, attitudes toward solving problems, or interactions in previous groups.
We avoid grouping students with widely different working speeds. We have found that students are serious in their approach to the problems, and that even though they are not graded, nearly all
students complete nearly all problems. Students can become frustrated if group members either move too quickly for them to follow or hold back their thinking. In our experience, students are more
likely to work together productively when they work at a similar speed. For example, we may group three students who tend to work the most quickly, and who do well on exams. Our discussions with this
group often include enrichment material not covered in the course. We also group students who struggle on exams; these students benefit from being in the same group and having a faculty member
reiterate key points from the lecture.
How to coach students
Back to top
Coaching students is probably very similar to what faculty members do during office hours, and individual style will vary from one faculty member to another. Generally speaking, good coaches:
• often respond to a student's question by not answering it; instead, they:
□ ask a question in return
□ refer back to the text of the problem, and have students look there for help
□ tell students to look through their class notes for related material
□ remind students the answer to the problem will not be directly stated in their notes
• encourage students when they get frustrated or overwhelmed
• recognize when struggling students need an informal review of lecture material on the spot
• explain to resistant students why particular problems or types of problems are useful to their understanding
• help students stay focused on the problems at hand
• encourage students to be metacognitive, and think about how they arrived at a particular solution
• ensure that each member of the group understands a concept by asking individuals to explain the solution
• challenge students to understand the relevant concepts behind a solution
• help students connect solutions back to earlier topics in the course when appropriate
• provide enrichment to students ready for more of a challenge
• keep track of common misconceptions encountered by multiple groups, and follow up with lecture or additional problems to reinforce the correct concepts
How students receive feedback
Back to top
Students can check their understanding through:
• Informal interactions when solving problems
□ Students find out immediately if their solutions agree with those of their group members.
□ Through the process of solving problems, students recognize their points of confusion or their need to review their notes to find information.
□ Frequent interaction with faculty allows students to check their understanding.
• Exams, quizzes, and graded homework assignments
□ A graded homework assignment for each unit helps students synthesize unit concepts.
□ A quiz shortly before an exam provides individual accountability and helps students prepare for the exam.
□ Multiple exams (including an early, challenging exam) provide multiple opportunities for students to get individualized feedback and respond by changing their study habits.
• Answer keys
□ Detailed keys for all problems are made available (posted online).
□ These important learning tools model the use of labeled diagrams, contextual information, and multiple solutions where appropriate.
|
{"url":"http://serc.carleton.edu/sp/carl_ltc/coached_problems/how.html","timestamp":"2014-04-18T05:30:27Z","content_type":null,"content_length":"40374","record_id":"<urn:uuid:8b5f5419-daa2-4950-b83e-01888e896f12>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graph transformations - circle
March 19th 2008, 04:48 AM
Graph transformations - circle
I am unsure about how to determine if the origin is located inside, outside or on the circle or ellipse after a graph transformation. For example, given the equation of the circle to be (x+3)^2 +
(y-4)^2 = 5 , how do I know if the origin is within or outside or a point on the graph itself? How about in the case of an ellipse?
Thank you!
March 19th 2008, 07:38 AM
Hello, Tangera!
I am unsure about how to determine if the origin is located inside,
outside or on the circle or ellipse after a graph transformation.
For example, given the equation of a circle: $(x+3)^2 + (y-4)^2 \:= \:5^2$
how do I know if the origin is within or outside or a point on the graph itself?
How about in the case of an ellipse?
A little thought will give you the answer . . .
What does it mean when a point in on a graph?
. . It means that the coordinates of the point satisfy the equation.
Given the circle: $(x+3)^2 + (y-4)^2\:=\:5^2$
Where is the origin (0,0) relative to the circle?
. . Substitute $x=0,\:y=0\!:\;\;(0+3)^2 + (0-4)^2 \;=\;3^2 + (-4)^2 \;\;{\bf{\color{blue}= \;25}}$
(0,0) satisfies the equation . . . The origin is on the circle.
Where is (1,3) ?
. . Substitute $x=1,y=3\!:\;\;(1+3)^2 + (3-4)^2 \:=\:16 + 1 \:=\:17 \;\;{\bf{\color{blue}<\:25}}$
(1,3) is inside the circle.
Where is (3,5) ?
. . Substitute $x=3, y=5\!:\;\;(3+3)^2 + (5-4)^2 \:=\:37\;\;{\bf{\color{blue}> \;25}}$
(3,5) is outisde the circle.
The same procedure applies to ellipses.
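In code form, the same test is easy to automate. Here is a quick Python sketch (the function name and the radius-5 version of the circle from the reply above are used purely for illustration):

    def classify_point(px, py, h, k, r):
        """Locate (px, py) relative to the circle (x - h)^2 + (y - k)^2 = r^2."""
        value = (px - h) ** 2 + (py - k) ** 2
        if value == r ** 2:
            return "on the circle"
        return "inside the circle" if value < r ** 2 else "outside the circle"

    # Circle (x + 3)^2 + (y - 4)^2 = 5^2, i.e. centre (-3, 4) and radius 5
    print(classify_point(0, 0, -3, 4, 5))   # on the circle
    print(classify_point(1, 3, -3, 4, 5))   # inside the circle
    print(classify_point(3, 5, -3, 4, 5))   # outside the circle

    # For an ellipse ((x - h)/a)^2 + ((y - k)/b)^2 = 1, compare the
    # left-hand side with 1 in exactly the same way.

With exact integer coordinates the equality test is fine; with measured or rounded coordinates you would compare against r^2 within a small tolerance.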
March 20th 2008, 01:22 AM
I get it! I get it!! Thank you very very much!! :D
|
{"url":"http://mathhelpforum.com/pre-calculus/31416-graph-transformations-circle-print.html","timestamp":"2014-04-20T05:09:37Z","content_type":null,"content_length":"6618","record_id":"<urn:uuid:dd4e9c3d-e063-4a85-a395-41f9b95bac18>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Timing belts in linear positioning
Day 1
Topics of discussion:
Belt and pulley pitch
Belt length and center distance
Reinforced urethane timing belts work well in high-accuracy linear motion and conveying applications because they stretch very little, do not creep or slip, and are much stiffer than neoprene, which
means less tooth deflection. In linear positioning roles, however, belts are subject to distinctly different load patterns than in traditional power transmission and rotary motion applications. To
accurately assess the dynamics that affect performance in these applications, certain factors must be analyzed that previously were of no concern.
This four-part series begins with belt drive geometry, which applies to any application. Later installments will delve into the various forces and deflections acting within the system, as well as
linear position errors under load.
Belt and pulley pitch
Belt pitch p is the distance between centerlines of adjacent teeth. Pitch is measured along the belt pitch line, which corresponds to both the center of the reinforcing cords’ placement and the
neutral bending axis of the belt. (The neutral axis is the neutral plane edge-on. Under bending, axial strands along the neutral plane remain free of stress, while strands on one side compress and
those on the other stretch.)
Pulley pitch (or sprocket pitch) is, similarly, the arc length between the centerlines of the pulley grooves, measured along the pulley’s pitch circle. The pitch circle coincides with the pitch line
of a meshing belt, thus the pitch diameter d of a synchronous belt pulley is larger than the actual outside pulley diameter d[o]; this outside diameter is a concern with particular types of belting,
as we shall see. Figures 1 & 2 show relevant geometric parameters on two different belt-and-pulley mesh configurations.
Pitch diameter relates to belt pitch and the number of pulley teeth z[p] by the formula.
Outside pulley diameter relates to pitch differential, belt pitch, and number of pulley teeth as follows.
Metric AT series belts, on the other hand, are intended to contact the bottom lands of the pulley grooves with the belt teeth (see figure 2). As a result, errors in the pulley root diameter d[r] will
cause a mismatch between belt pitch and pulley pitch. The root diameter of a pulley is given by.
where u[r] is the radial distance between the pulley’s pitch diameter and root diameter. The parameter u[r] has standard values for given AT series belt sections; several AT types are given in table
Belt length and center distance
A length of belt must accommodate the size of the pulleys and their distance from one another, fitting snugly over them. But also, with toothed belts, an integer number of teeth of the right pitch
must be possible with a given pulley configuration. (For simplicity, this “Course audit” series will continually use a two-pulley arrangement to illustrate concepts that can be readily applied to
more elaborate systems.)
Belt length L is measured along the pitch line and is calculated as.
where z[b] is the number of belt teeth. Most linear actuators and conveyors contain two pulleys of equal diameter. In such cases, belt length relates to center distance C and pitch diameter d by the
When two pulleys do not have equal diameters, as shown in figure 3, you first need the angle of wrap around each pulley. The small pulley’s angle of wrap θ[1] is calculated as.
where d[1] and d[2] are (respectively) the small and large pulley diameters. The angle of wrap θ[2] around the large pulley is given as.
Span length L[S] refers to a section of belt that does not contact the pulley — there is a span length at both slack and taut sides. L[S] can be seen in figure 2 and is calculated thus.
The overall belt length for pulleys of unequal diameter can now be written.
Note that the small pulley’s angle of wrap θ[1] is a function of the center distance C, as is the overall belt length. Therefore, our most recent equation is not closed-form. Center distance,
however, can be calculated through numerical methods; a handful of iterations may suffice. Or, an approximate value can be obtained analytically.
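To experiment with these relationships numerically, the standard synchronous-belt geometry formulas can be coded directly. The Python sketch below uses the usual textbook relations (pitch diameter d = p·z_p/π, belt pitch length L = p·z_b, L = 2C + πd for equal pulleys, and the wrap-angle and span expressions for unequal pulleys); the function and variable names are illustrative only, and values should be checked against the belt manufacturer's data.

    from math import pi, asin, sqrt

    def pitch_diameter(p, z_p):
        # pitch circumference = p * z_p, so d = p * z_p / pi
        return p * z_p / pi

    def belt_length_from_teeth(p, z_b):
        # belt pitch length = pitch times number of belt teeth
        return p * z_b

    def center_distance_equal_pulleys(L, d):
        # for two equal pulleys, L = 2C + pi*d
        return (L - pi * d) / 2

    def belt_length_unequal(C, d1, d2):
        # d1 = small pulley pitch diameter, d2 = large pulley pitch diameter
        half_angle = asin((d2 - d1) / (2 * C))
        theta1 = pi - 2 * half_angle          # wrap angle, small pulley
        theta2 = pi + 2 * half_angle          # wrap angle, large pulley
        span = sqrt(C ** 2 - ((d2 - d1) / 2) ** 2)
        return 2 * span + theta1 * d1 / 2 + theta2 * d2 / 2

    # Example: 10 mm pitch belt with 180 teeth on two 20-tooth pulleys
    p = 10.0
    d = pitch_diameter(p, 20)                      # about 63.7 mm
    L = belt_length_from_teeth(p, 180)             # 1800 mm
    print(center_distance_equal_pulleys(L, d))     # about 800 mm

Because belt_length_unequal depends on the center distance C both through the wrap angles and through the span, finding C for a stock belt of known length requires a numerical search (bisection works well), which mirrors the point above that the relation is not closed-form.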
Information for Course Audit this month was provided by Krzysztof Kras, engineering manager, Mectrol Corp., Salem, N.H.
|
{"url":"http://machinedesign.com/print/linear-motion/timing-belts-linear-positioning","timestamp":"2014-04-17T22:34:56Z","content_type":null,"content_length":"19140","record_id":"<urn:uuid:87332c56-1f87-44fc-8b61-5ce709f9a4d1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Activity: 100m sprint times
AOs | Indicators | Outcomes | Snapshot | Learning experiences
Cross curricular | Assessment | Spotlight | Links | Connections
Students will investigate the progression of 100m sprint world record times since the start of the 20th century.
Achievement objectives
• NA6-7 Relate graphs, tables, and equations to linear, quadratic, and simple exponential relationships found in number and spatial patterns.
• NA6-8 Relate rate of change to the gradient of a graph.
• S6-1 Plan and conduct investigations using the statistical enquiry cycle:
□ Identifying and communicating features in context (trends, relationships between variables, and differences within and between distributions), using multiple displays.
□ Justifying findings, using displays and measures.
• Makes connections between representations such as number patterns, spatial patterns, tables, equations and graphs.
• Identifies and uses key features including gradient, intercepts, vertex, and symmetry.
• Calculates average rate of change for the given data.
• Relates average rates of change to the gradient of lines joining two points on the graph of linear, quadratic, or exponential functions.
Specific learning outcomes
Students will be able to:
• plot points and spot trends in the progression of 100m sprint times
• question, validate and critique the data they are using
• describe what is happening in particular time segments to the 100m sprint times, for example, statements like 'the world record time is reducing by x seconds per y time period'.
Diagnostic snapshot(s)
Students plot (x,y) coordinates using suitable linear scale:
Planned learning experiences
Source 100m world record times and provide these for students (or get the students to source them from the Internet).
• Students plot the 100m times and join the points and then a ‘best fit’ curve.
• Students investigate the data by asking questions such as:
□ What’s the overall trend?
□ What could happen eventually with the 100m sprint times?
□ What is more likely to happen? How can we ‘prove’ this?
□ What are the differences between men’s and women’s ‘curves’?
□ Using decade long periods, by how much do the times reduce on average?
Other investigations include:
• Students plotting the differences to see the non-linearity of the progression.
• Equation of line of best fit (linear) and using this to interpolate and extrapolate by using substitution (a short sketch of such a fit follows this list).
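The sketch below uses Python; the record times listed are rough illustrative values only, and students should replace them with the data they have sourced.

    import numpy as np

    # (year, men's 100m world record in seconds) - illustrative values, replace with sourced data
    data = [(1912, 10.6), (1936, 10.2), (1968, 9.95), (1991, 9.86), (2009, 9.58)]
    years = np.array([year for year, _ in data], dtype=float)
    times = np.array([time for _, time in data])

    # Least-squares line of best fit: time = gradient * year + intercept
    gradient, intercept = np.polyfit(years, times, 1)
    print(f"average change: {gradient:.4f} s per year ({gradient * 10:.2f} s per decade)")

    # Interpolate and extrapolate by substitution
    for year in (1950, 2030):
        print(year, round(gradient * year + intercept, 2))

Extrapolating the linear model far enough predicts impossibly small times, which links back to the question of what is more likely to happen to the record in the long run.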
Possible adaptations to activity
• There is scope to introduce non linear curves.
• Use of different scales, investigating how the gradient does not change even though the graphs may suggest different conclusions ('the eyes have it').
• Investigate other sports whose progression is characterised by other ‘decreases’ over time (skiing, marathon etc) or increases (long jump, highest cricket test score etc).
• Students could run 100m and compare their times with the early 20th century times.
Cross curricular links
There is a strong link to history (for example, Jesse Owens, 1936) and physical education (the anatomical reasons behind the progression of 100m sprint times).
Planned assessment
This teaching and learning activity could lead towards assessment in the following achievement standards:
• 1.3 Investigate relationships between tables, equations or graphs.
Spotlight on
• Encouraging reflective thought and action:
□ Supporting students to explain and articulate their thinking.
• Making connections to prior learning and experience:
□ Checking prior knowledge using a variety of diagnostic strategies.
• Teaching as inquiry:
□ What are next steps for learning?
Key competencies
• Thinking:
□ Students explore and use patterns and relationships in data and they predict and envision outcomes.
• Using language, symbols and text:
□ Students interpret visual representations such as graphs.
• Managing self:
□ Students develop skills of independent learning.
Values and principles
• Students will be encouraged to value innovation, inquiry, and curiosity, by thinking critically, creatively, and reflectively.
Planning for content and language learning
• Lovitt, C., & Clarke, D. (1992). Snippets. MCTP professional development package: Activity bank volume 1 (p. 31). Carlton, Victoria: Curriculum Corporation.
Download a Word version of this activity:
Last updated June 12, 2013
|
{"url":"http://seniorsecondary.tki.org.nz/Mathematics-and-statistics/Learning-programme-design/Year-11-programme-design/Level-5-6/Activity-100m-sprint-times","timestamp":"2014-04-19T22:18:11Z","content_type":null,"content_length":"238152","record_id":"<urn:uuid:8c92af59-cff4-4655-969b-deb29528c15a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: THE DENSEST LATTICES IN PGL3(Q2)
Abstract. We find the smallest possible covolume for lattices in PGL3(Q2), show that there are exactly two lattices with this covolume, and describe them explicitly. They are commensurable, and one of them appeared in Mumford's construction of his fake projective plane.
The most famous lattice in the projective group PGL3(Q2) over the 2-adic rational numbers Q2 is the one Mumford used to construct his fake projective plane [22]. Namely, he found an arithmetic group P1 (we call it PM) containing a torsion-free subgroup of index 21, such that the algebraic surface associated to it by the theory of p-adic uniformization [23, 24] is a fake projective plane. The full classification of fake projective planes has been obtained recently [26].
The second author and his collaborators have developed a diagrammatic calculus [8, 15] for working with algebraic curves (including orbifolds) arising from p-adic uniformization using lattices in PGL2 over a nonarchimedean local field. It allows one to read off properties of the curves from the quotient of the Bruhat-Tits tree and to construct lattices with various properties, or prove they don't exist. We hope
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/246/2084298.html","timestamp":"2014-04-17T02:12:14Z","content_type":null,"content_length":"8323","record_id":"<urn:uuid:364f6b89-0666-4dda-a4f6-ff9004919e12>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Q: How do you talk about the size of infinity? How can one infinity be bigger than another?
Q: How do you talk about the size of infinity? How can one infinity be bigger than another?
Physicist: When you have two finite sets it's easy to say which one has more things in it. You count up the number of things in each, compare the numbers, and whichever is more… is more. However,
with an infinite set you can’t do that. Firstly, you’ll never be done counting, and secondly, infinity isn’t a number.
So now you need to come up with a more rigorous definition of “same size”, that reduces to “same number of elements” in the finite case, but continues to work in the infinite case.
Here it is: instead of counting up the number of elements, and facing the possibility that you’d never finish, take the elements from each set one at a time and pair them up. If you can pair up
every element from one set with every element from another set, without doubling up and without leaving anything out, then the sets must be the same size.
Mathematicians, who enjoy sounding smart as much or more than they enjoy being smart, would call this “establishing a bijective mapping between sets”.
So the requirement for two sets to have the same size is that some pairing exists. For example, in the right side of the picture above you could have chosen to pair up every element in the left column with the element below and to the right forever, leaving the one element at the top unpaired.
Even worse, you can show that two sets that have “obviously” different sizes are in reality the same size. For example, the counting numbers (1, 2, 3, …) and the integers (…, -2 , -1, 0, 1, 2, 3,
$\begin{array}{rcccccccccccccccccc}\textrm{Counting numbers:}&\,&1&\,&2&\,&3&\,&4&\,&5&\,&6&\,&7&\,&8&\,& \cdots\\\textrm{Integers:}&\,&0&\,&1&\,&-1&\,&2&\,&-2&\,&3&\,&-3&\,&4&\,&\cdots\end{array}$
One of the classic “thought experiments” of logic is similar to this: You’re the proprietor of a completely booked up hotel with infinite rooms. Suddenly an infinite tour bus with infinite tourists
rolls up. What do you do? What… do you do?
Easy! Ask everyone in your hotel to double their room number, and move to that room (where there should be a gratis cheese basket with a note that says “sorry you had to move what was most likely an
infinite distance“). So now you’ve gone from having all of the rooms full to having only all of the even rooms full, while all of the odd rooms are vacant.
Another way to look at this is: ∞ + ∞ = ∞.
Here’s something even worse. There are an infinite number of primes, and you can pair them up with the counting numbers:
$\begin{array}{rccccccccccccccccc}\textrm{Counting numbers:}&1&\,&2&\,&3&\,&4&\,&5&\,&6&\,&7&\,& \cdots\\\textrm{Prime numbers:}&2&\,&3&\,&5&\,&7&\,&11&\,&13&\,&17&\,&\cdots\end{array}$
There are also an infinite number of rational numbers, and you can pair them up with the counting numbers.
$\begin{array}{rccccccccccccccccc}\textrm{Counting numbers:}&1&\,&2&\,&3&\,&4&\,&5&\,&6&\,&7&\,& \cdots\\\textrm{Rational numbers:}&\frac{1}{1}&\,&\frac{1}{2}&\,&\frac{2}{1}&\,&\frac{1}{3}&\,&\frac
By the way, you can include the negative rationals by doing the same kind of trick that was done to pair up the counting numbers and integers.
Now you can construct a pairing between the rational numbers and the primes:
$\begin{array}{rccccccccccccccccc}\textrm{Prime numbers:}&2&\,&3&\,&5&\,&7&\,&11&\,&13&\,&17&\,& \cdots\\\textrm{Counting numbers:}&1&\,&2&\,&3&\,&4&\,&5&\,&6&\,&7&\,& \cdots\\\textrm{Rational
For those of you considering a career in mathing, be warned. From time to time you may be called upon to say something as bat-shit crazy as “there are exactly as many prime numbers as rational
There are infinities objectively bigger than the infinities so far. All of the infinities so far have been “countably infinite”, because they’re the same “size” as the counting numbers. Larger
infinities can’t be paired, term by term, with smaller infinities.
Set theorists would call countable infinity “$\aleph_0$” (read “aleph null”). Strange as it sounds, it’s the smallest type of infinity.
The size of the set of real numbers is an example of a larger infinity. While rational numbers can be found everywhere on the number line, they leave a lot of gaps. If you went stab-crazy on a
piece of paper with an infinitely thin pin, you’d make a lot of holes, but you’d never destroy the paper. Similarly, the rational numbers are pin pricks on the number line. Using a countable
infinity you can’t construct any kind of “continuous” set (like the real numbers). You need a bigger infinity.
The number line itself, the real numbers, is a larger kind of infinity. There’s no way to pair the real numbers up with the counting numbers (it’s difficult to show this). The kind of infinity
that’s the size of the set of real numbers is called “$\aleph_1$“.
Before you ask: yes, there’s an $\aleph_2$, $\aleph_3$, and so forth, but these are more difficult to picture. To get from one to the next all you have to do is take the “power set” of a set that’s
as big as the previous $\aleph$. Isn’t that weird?
A commenter kindly pointed out that this “power set thing” is a property of “$\beth$ numbers” (“beth numbers”). But, if you buy the “generalized continuum hypothesis” you find that $\aleph_i = \
beth_i$. This is a bit more technical than this post needs, but it’s worth mentioning.
Quick aside: If A is a set, then the power set of A (written 2^A, for silly reasons) is the “set of all subsets of A”. So if A = (1,2,3), then 2^A = (Ø, (1), (2), (3), (1,2), (1,3), (2,3), (1,2,3)
). Finite power sets aren’t too interesting, but they make good examples.
Update: The Mathematician was kind enough to explain why the real numbers are the size of the power set of the counting numbers, in the next section.
Strangely enough, there don't seem to be infinities in between these sizes. That is, there doesn't seem to be an "$\aleph_{1.5}$" (e.g., something bigger than $\aleph_1$ and smaller than $\aleph_2$). This is called the "continuum hypothesis", and (as of this post) it's one of the great unsolved mysteries in mathematics. In fact it has been proven that, using the presently accepted
axioms of mathematics, the continuum hypothesis can’t be proven and it can’t be dis-proven. This may be one of the “incomplete bits” of logic that Godel showed must exist. Heavy stuff.
Mathematician: This isn’t rigorous, but gives the intuition perhaps.
Let’s suppose we think of each natural number as representing one binary digit of a number between 0 and 1 (so the nth natural number corresponds to the nth binary digit). Now, the power set is the
set of all subsets of the natural numbers, so let’s consider one such subset of the natural numbers. We can think of representing that subset as a binary number, with a 1 for each number in the
subset, and a 0 for each number not in the subset. Hence, each element of the power set corresponds to an infinite sequence of binary digits, which can just be thought of as a number between 0 and 1.
Then you just need a function from [0,1] to all of the real numbers, like $\frac{x-0.5}{x(x-1)}$, which leads us to believe that there should be a function mapping each real number into an element of
the power set of the natural numbers.
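Here is a small Python illustration of that construction (the helper names are made up for this sketch, and only finitely many binary digits are used, so it approximates the idea rather than giving the full bijection):

    def subset_to_unit_interval(subset, digits=32):
        """Read membership of 1..digits as binary digits after the point."""
        return sum(2.0 ** -n for n in range(1, digits + 1) if n in subset)

    def unit_to_reals(x):
        # the map mentioned above; it is undefined at the endpoints 0 and 1
        return (x - 0.5) / (x * (x - 1))

    s = {1, 3, 4}                       # binary 0.1011 = 0.6875
    x = subset_to_unit_interval(s)
    print(x, unit_to_reals(x))

So each subset of the natural numbers picks out a number between 0 and 1, which is then pushed out onto the whole real line.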
14 Responses to Q: How do you talk about the size of infinity? How can one infinity be bigger than another?
1. GCH isn’t exactly the type of “incompleteness bits” Godel showed must exist. I mean, of course it is an example of incompleteness, but it’s more sophisticated than the Godel incompleteness
theorems. It uses forcing. Whereas Godel’s theorems dealt with silly self-referential paradoxy sentences like “This sentence is false”, in order to prove the independence of GCH, you must
construct a model of ZFC where GCH is true (so ZFC can’t disprove GCH) and then construct a model of ZFC where GCH is false (so ZFC can’t prove GCH).
2. Could you explain why the cardinality of R should be the cardinality of the power set of N? I can’t imagine why that should be so.
3. How does one apply that to infinite points in space?
Or to rephrase, there are an equal amount of points in a marble as there are in the sun, correct?
If the center a sphere can be defined as the origin (0,0,0), there are infinite degrees upon which one can cut a cross-sectional 2D plane intersecting the origin, and upon each of those 2D planes
one can plot infinite points, thus both the marble and the sun, while having vastly different volumes, have infinite points
I know I answered it myself, but can you do an extended explanation similar to the original post?
Also, which power set does this infinite fall into?
The bijective map between the marble and the sun is just a scaling. So, line up the centers, then multiply the distance of each point in the marble by some huge number, so that it lines up with
the sun’s size.
All finite dimensional (Euclidean) spaces fall into the aleph 1 category.
To see that, what you can do is take a point in, say, 2-D space: (1.234567…, 9.876543…). Then define a bijection from 2-D real space, to 1-D real space (the real line) by “inter-weaving” the
digits: 19.283746556473…
You can extend the same idea to higher dimensions no problem. So, any number gives you a unique point in 3-D space, and any point in 3-D space gives you a unique number.
5. You are using aleph numbers incorrectly. aleph-1 is the infinity bigger than aleph-0. By the definition of aleph numbers, there is no such thing as aleph-1.5. I think what you are refering to is
actually the beth numbers, where each beth number is defined as the cardinality of the power set of the previous one with beth-0 equal to aleph-0.
6. You’re absolutely right.
My bad, I was assuming the Generalized Continuum Hypothesis (like a damn fool!).
7. Gregory Chaitin’s Incompleteness Theorem
Gregory Chaitin proved a theorem similar to Gödel’s incompleteness theorems, and the framework of Chaitin’s version makes a lot more intuitive sense, I find. I don’t really understand the
technicalities of these incompleteness theorems, but I do feel that Chaitin’s version is more enlightening.
Gödel’s incompleteness theorem says that, in any consistent axiomatic mathematical system, there will always be mathematical statements that can neither be proved true, nor proved false . That
is, there will always be mathematical statements whose truth or falsity is undecidable – beyond the ability of the system’s axioms to determine.
However, Gödel never gives any sense of why this might be the case. Gödel does not even hint at the reason why the truth or falsity of some mathematical statements may within the grip of the
axioms to determine, but other statements lie beyond the reach of the axioms.
Chaitin considers the algorithmic complexity of the axioms of mathematics. The algorithmic complexity of a string of data is defined as the shortest and most efficient program that can produce
the string – ie, the most compressed representation of that data string.
I believe Chaitin showed that a given axiomatic mathematical system can only determine the truth or falsity of a mathematical statement if the algorithmic complexity of that statement is less
than or equal to the algorithmic complexity of the set of axioms.
This makes more intuitive sense, and Chaitin’s theorem throws light on why there are mathematical statements that are not decidable: undecidable statements encode more information (algorithmic
complexity) than do the axioms themselves; so the “spec” of the axioms is in effect insufficient to cover these mathematical statements of greater information content.
Chaitin says: if you have ten pounds of axioms, and a twenty-pound theorem, then that theorem cannot be derived from those axioms.
Another way I understand this is as follows:
When you prove a given mathematical statement to be true with respect to the axioms of a mathematical system, what you are really saying is that this mathematical statement is in accord with the
foundational axioms, and this mathematical statement expresses some (but perhaps not all) of the information encoded in the axioms. (And if you prove the mathematical statement false, then you
are just saying that the mathematical statement contradicts your axioms – but in either case, you have a definite answer).
However, when a mathematical statement contains more information than is found in the foundational axioms, it’s as if the axioms do not have sufficient “authority” to “adjudicate” upon the truth
or falsity of such mathematical statements.
In the case of the continuum hypothesis being undecidable via the current axioms of mathematics, would I be right in thinking that Chaitin’s ideas suggest that the statement of the continuum
hypothesis contains more information (algorithmic complexity) than does the entire current set of axioms of mathematics?
Further speculations:
Is it possible that a mathematical statement may be considered as an axiom in its own right, and so can, if you like, act as a axiomatic specification, in some way, of its own mathematical
If so, I wonder whether, when a mathematical statement contains more information than is found in the entire set of foundational axioms, it is possible to consider a complete role reversal, and
instead ask if the “foundational axioms” can be proved true or false, with respect to the mathematical statement, this statement now raised the status of a defining foundational axiom?!
Might it also be possible to see whether working backwards in this way throws light on what augmentation of the foundational axioms might be necessary in order to make a given undecidable
mathematical statement decidable?
In this way, could it be possible to work backwards from the statement of the continuum hypothesis, and see what enhancement of the current axioms of mathematics is necessary to make the
continuum hypothesis decidable?
8. Hmm this doesn’t make any sense at all to me . You can’t have different sizes of
infinity because infinities have no size . Infinty to me means WITHOUT BOUNDS.
and if something has no boundary it cannot be sized .
Heres the mistake i see in the logic posted above .
Quote :- “There are infinities objectively bigger than the infinities so far. All of the infinities so far have been “countably infinite”, because they’re the same “size” as the counting numbers.
Larger infinities can’t be paired, term by term, with smaller infinities.”
Why do you say the set of counting numbers have a size. ? Just because you
can count a small section of them (no matter how long you count for you have only counted a small section of them)doesnt mean the set has a size.They arent all countable of course becuz you never
reach the end . So then if the set of positive integers has no size then all that follows above about “size” is also wrong.
9. You’re absolutely right, none of the infinities have a “size” the way we’re used to talking about it.
We define sets to be the same size if their elements can be paired up, and we define one set to be larger than another if its elements cannot all be paired up with the elements of the "smaller" set.
To show that a set is “countably infinite” you don’t actually have to do the counting, just show that it could be (given infinite time).
So, when you talk about the size of infinite sets it’s necessary to come up with a new idea of what size means.
10. Im still non the wiser sorry .I,m not an expert mathmatician but I have tried
to understand it since i saw a docu about it .The problem is the use of terms like
“size” “bigger and “smaller” .AS you clarify these words dont mean the same when used in this context as our normal understanding of them .IF so why then use these words ? wouldnt it make more
sense to use or invent new words , rather than use ones that everyone has a good understanding off but are missused in this context.
Quote :- “To show that a set is “countably infinite” you don’t actually have to do the counting, just show that it could be (given infinite time).”
I have to take issue with any infinite set being countable, it isn’t , at least not the WHOLE set.Two infinties dont cancel each other out .Even if you had infinite time you still couldn’t count
the WHOLE set of counting numbers . IT IS IMPOSSIBLE
to reach the end and thus count the WHOLE SET thats the very essence of the word “infinite” I believe.
11. Often mathematicians do make up new words, but for the purposes of this blog I try not to use them. It’s just one more thing to learn and keep track of. There’s a reason why “many-words-a-day”
calenders don’t sell well.
Recognizing that the word “size” doesn’t cleanly apply to infinite things, mathematicians use the word “cardinality” instead. In fact, if you were so inclined, you could replace every instance of
the word “size” with “cardinality” in this post.
It is frustrating that mathematicians talk about “counting to infinity”, while knowing that it’s impossible to actually do so. At the same time, it’s not like there are any surprises. You’re not
going to get to a billion and six, and suddenly there’s a new number you weren’t expecting. So, you can talk about infinities without “doing the legwork”.
12. Could you explain why the cardinality of R should be the cardinality of the power set of N? I can’t imagine why that should be so.
Oh, I can explain this one! It’s pretty neat, actually.
Consider an arbitrary element of the power set of N — that is, consider an arbitrary set of natural numbers. Every natural number is either in this set or not in it. You could make a list of all
the natural numbers and next to each number write “yes” if that number is in the set and “no” if it’s not, right? And that list would fully specify the contents of that set. So there is a
one-to-one mapping between the set of such listings and the power set of N.
Okay now: Let’s notate those litsts a little differently. For “yes” write 1, and for “no” write 0. And put a decimal point at the beginning of the whole thing. You have just turned each set into
a real number between 0 and 1, written in binary. So, there’s a mapping from the power set of N to [0, 1]. To turn this into a mapping to all of R, just feed it through a function that maps [0,
1] to the entirety of the reals, such as f(x) = tan((x-.5)*pi).
(Actually, there’s a slight problem with this, in that binary expansions have ambiguous representations just like decimal expansions: just as 0.99999… = 1.0, so too does binary 0.111111… So it
isn’t quite a one-to-one mapping. Resolving this is left as an exercise for the reader.)
13. Basically to what I understand, in the end, there is only two types of infinite sets, the so-called countable set (integers, rationals, primes, etc., etc.) and un-countable sets, which is the set
of real numbers, that’s it. Nothing in between them.
Another way to look at these two types of set is to consider one as “discrete set” and the other as “continuous set”, or “continuum”. Like our physical world, particularly in Electronics, there’s
either Digital or Analogue, nothing in between, therefore, your set either falls under countable or un-countable, nothing in between. Like Quantum Mechanics and General Relativity, one explain
our universe in “discrete form” while the latter in “continuous form”, which I think partly the reason why both can’t be fuse together.
Therefore, real numbers, in essence, are considered “continuous”, you really can’t draw a line between two real numbers, they just don’t exist. Therefore, there’s really no such sets of number
which can be count and at the same time continuous. CH seems to be true in that sense.
This entry was posted in -- By the Mathematician, -- By the Physicist, Logic, Math. Bookmark the permalink.
|
{"url":"http://www.askamathematician.com/2011/03/q-how-do-you-talk-about-the-size-of-infinity-how-can-one-infinity-be-bigger-than-another/","timestamp":"2014-04-20T13:21:13Z","content_type":null,"content_length":"166898","record_id":"<urn:uuid:0fc6ed08-a340-47c6-8cd6-2f75488ba1ad>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum entanglement shows that reality can’t be local
Either that, or faster-than-light communications is a go.
by Matthew Francis - Oct 30, 2012 5:10 pm UTC
Quantum entanglement stands as one of the strangest and hardest concepts to understand in physics. Two or more particles can interact in a specific ways that leave them entangled, such that a later
measurement on one system identifies what the outcome of a similar measurement on the second system—no matter how far they are separated in space.
Repeated experiments have verified that this works even when the measurements are performed more quickly than light could travel between the sites of measurement: there's no slower-than-light
influence that can pass between the entangled particles. However, one possible explanation for entanglement would allow for a faster-than-light exchange from one particle to the other. Odd as it
might seem, this still doesn't violate relativity, since the only thing exchanged is the internal quantum state—no external information is passed.
But a new analysis by J-D. Bancal, S. Pironio, A. Acín, Y-C. Liang, V. Scarani, and N. Gisin shows that any such explanation would inevitably open the door to faster-than-light communication. In
other words, quantum entanglement cannot involve the passage of information—even hidden, internal information, inaccessible to experiment—at any velocity, without also allowing for other types of
interactions that violate relativity.
Experiments have definitively demonstrated entanglement, and ruled out any kind of slower-than-light communication between two separated objects. The standard explanation for this behavior involves
what's called nonlocality: the idea that the two objects are actually still a single quantum system, even though they may be far apart. That idea is uncomfortable to many people (including most
famously Albert Einstein), but it preserves the principle of relativity, which states in part that no information can travel faster than light.
To get around nonlocality, several ideas have been proposed over the decades. Many of these fall into the category of hidden variables, wherein quantum systems have physical properties (beyond the
standard quantities like position, momentum, and spin) that are not directly accessible to experiment. In entangled systems, the hidden variables could be responsible for transferring state
information from one particle to the other, producing measurements that appear coordinated. Since these hidden variables are not accessible to experimenters, they can't be used for communication.
Relativity is preserved.
Hidden variable theories involving slower-than-light transfer of state information are already ruled out by the experiments that exclude more ordinary communication. Some modern variations combine
hidden variables with full nonlocality, allowing for instantaneous transfer of internal state information. But could non-instantaneous, faster-than-light hidden variables theories still work?
To investigate this possibility, the authors of the new study considered the possible experimental consequences. Obviously, one way to test it would be to increase the separation between the parts of
the entangled system to see if we can detect a delay in apparently instantaneous correlation we currently observe. Sufficiently fast rates of transfer, however, would still be indistinguishable from
nonlocality, given that real lab measurements take finite time to perform (this assumes that both experiments happen on Earth).
The researchers took a theoretical approach instead, using something known as the no-signalling conditions. They considered an entangled system with a set of independent physical attributes, some
observable, some hidden variables. Next, they allowed the state of the hidden variables to propagate faster than the speed of light, which let them influence the measurements on the separated pieces
of the experiment.
However, because of the nature of quantum mechanical systems, there was a symmetry between the hidden and measurable attributes of the system—meaning if the hidden variables could transfer
information faster than light, then the properties we can measure would do so as well. This is a violation of the no-signalling condition, and causes serious problems for the ordinary interpretations
of quantum physics.
Of course, one conceivable conclusion would be that faster-than-light communication is possible; this result provided a possible avenue for testing that possibility. By restricting the bounds on the
speed of interaction between entangled systems, future experiments could show whether any actual information is traveling or not.
However, the far more likely option is that relativity is correct. In that case, the strong ban on faster-than-light communication would rule out the possibility of faster-than-light transfer of
information encoded in hidden variables, and force us to deal with nonlocality. Once again, it would seem that local realism and relativity are incompatible notions in the quantum world.
Nature Physics, 2012. DOI: 10.1038/NPHYS2460 (About DOIs).
192 Reader Comments
1. Alhazred (Ars Scholae Palatinae)
Panther Modern wrote:
Crap... I overlooked that part.
Wow, what a dilemma; measurement of the entangled particle causes its state to flip, but to a random, indeterminable value on both the "sender" and "receiver" end. Since the "receiver" has no
idea whether or not you've "measured" your particle yet, they have no way to receive information on the other end without changing the state of the particle on their own, simply by nature of
observing it.
Phenomenally frustrating.
Sure is, an ansible would be pretty awesome (though it is hard to imagine a real good use for it right now today...).
Mother Nature is pretty darn subtle. She gives you that little flash of her lacy nickers, but you never get the good stuff. She knows how to go just exactly to the line and no further!
2. Bengie25 (Ars Tribunus Militum)
I was recently reading some semi-laymen stuff about the more interesting aspects of quantum world by some 20+ year theoretical physicists from Stanford, Cambridge, etc, and the one thing that got
me was when they said Paraphrased "In order for a quantum state to collapse from a probability to an actual state, it must first be observed by a *conscience* observer"
Then they went on to talk about how everything in our universe should be in a quantum state, but aren't because they are being observed by humans. My brain broke.
Anyone have any insight into this whole "conscience observer" thing? This stuff was said by lead world class researchers.
3. Alhazred (Ars Scholae Palatinae)
Bengie25 wrote:
I was recently reading some semi-laymen stuff about the more interesting aspects of quantum world by some 20+ year theoretical physicists from Stanford, Cambridge, etc, and the one thing that got
me was when they said Paraphrased "In order for a quantum state to collapse from a probability to an actual state, it must first be observed by a *conscience* observer"
Then they went on to talk about how everything in our universe should be in a quantum state, but aren't because they are being observed by humans. My brain broke.
Anyone have any insight into this whole "conscience observer" thing? This stuff was said by lead world class researchers.
Are you sure they are really world class physicists or some cranks? Nobody can define in a mathematically rigorous (or any other actually) way what "Conciousness" is. Thus there is no possible
formalism for such a ridiculous statement. They are blowing smoke out their tushes, which is what makes me wonder what sort of fools we're talking about.
Observer is just "another part of the system". Relational QM for instance would state that when a detector determines which slit a photon went through in a 2-slit experiment that the detector has
now become related to the photon in such a way that it definitely went through that slit FROM THE RELATIVE VIEWPOINT OF THE DETECTOR. Now, when you the scientist read the detector you also
participate in that relation. This does not mean that other parts of the Universe necessarily fall into the same relation and in fact they may see a superposition (an interference pattern). The
rules by which the relationships between objects work simply insures that ultimately no 2 observers will ever REMAIN in disagreement when and if they compare notes. This is a lot like how in GR
two observers can disagree on the order of events, but they will always agree on a consistent final state when they are in the same reference frame. Nature does not require that every observer
sees the same HISTORY of the universe, just that they all agree on the current state to the extent that they share a context. There is no need to invoke some fuzzy concept like consciousness
here. All matter, energy, and spacetime participate in this little dance. Some parts just build mental models about it and try to predict the future from them, and others don't.
4. JCS3Smack-Fu Master, in training
Panther Modern wrote:
Alhazred wrote:
Panther Modern wrote:
With regards to FTL communication, here's a thought experiment for you:
Since "information" cannot be transmitted, we're left with the ability to flip a random state change in a particle at distance. This state change can be any one of the hidden variables, but it is
impossible to determine which would be flipped, negating the ability to derive "information" from the flip, right?
This could potentially be solved simply by having two discrete pairs of entangled particles, right? Example:
Pair A1 & A2, and pair B1 & B2
Particles A1 & B1 go with FTL ship A
Particles B2 & B2 go with FTL ship B
FTL Ship A flips the state of particle A1, and the corresponding A2 particle on FTL Ship B flips randomly.
FTL Ship A flips the state of particle B1, and the corresponding B2 particle on FTL Ship B flips randomly.
Particles A1 & A2 act as "0"
Particles B1 & B2 act as "1"
The flipping of states (even random) can therefore transmit information by flipping the states of two separate pairs of particles individually, representing "1" and "0".
...or am I just getting this absolutely and utterly wrong?
I don't understand how having 2 pairs of entangled particles is helping you...
The fundamental problem remains, you can tell what the other guy has measured (or will measure if you go first) when he looks at his half of a pair, but you cannot decide WHAT he will measure.
Equally important you cannot tell IF he has made a measurement (there is no measurable "this is still not decided" state). Thus NO information is exchanged, and it just makes no difference how
many of these pairs you have.
An example: Rocket A jets off into space and goes either right or left. A month later they decide to tell Rocket B which direction they went in. How do they do this? If they measure pair A then
what does that tell Rocket B about which direction they took? The result will either be "0" or "1", but there's no way they can assign one of those values to their 'signal' to Rocket B, it learns
nothing about the course of Rocket A. Adding a measurement of pair B doesn't make this any better.
Crap... I overlooked that part.
Wow, what a dilemma; measurement of the entangled particle causes its state to flip, but to a random, indeterminable value on both the "sender" and "receiver" end. Since the "receiver" has no
idea whether or not you've "measured" your particle yet, they have no way to receive information on the other end without changing the state of the particle on their own, simply by nature of
observing it.
Phenomenally frustrating.
If I can expand on this idea and combine it with the approach I proposed earlier.
Have Ship A combine particles A1 and B1, such that they become entangled in the manner used here (Weird! Quantum Entanglement Can Reach into the Past, http://www.livescience.com/19975-spooky ...
ement.html). This causes Particles A2 and B2 to now be entangled. Ship B can now test its particles, independent of any knowledge of what Ship A has done. If Ship B's particles show entanglement,
then Ship B now knows that Ship A made the decision to entangle its particles.
5. Panther ModernArs Tribunus Militum
Have Ship A combine particles A1 and B1, such that they become entangled in the manner used here (Weird! Quantum Entanglement Can Reach into the Past, http://www.livescience.com/19975-spooky ...
ement.html). This causes Particles A2 and B2 to now be entangled. Ship B can now test its particles, independent of any knowledge of what Ship A has done. If Ship B's particles show entanglement,
then Ship B now knows that Ship A made the decision to entangle its particles.
How are you going to entangle separated particles at distance? As I understand it, particle entanglement only occurs in local spacetime and requires tremendous energy.
6. MujokanArs Scholae Palatinae
Bengie25 wrote:
I was recently reading some semi-laymen stuff about the more interesting aspects of quantum world by some 20+ year theoretical physicists from Stanford, Cambridge, etc, and the one thing that got
me was when they said Paraphrased "In order for a quantum state to collapse from a probability to an actual state, it must first be observed by a *conscience* observer"
Then they went on to talk about how everything in our universe should be in a quantum state, but aren't because they are being observed by humans. My brain broke.
Anyone have any insight into this whole "conscience observer" thing? This stuff was said by lead world class researchers.
Thinking a conscious observer was required for wavefunction collapse was a semi-popular view a couple of decades ago. Even some respectable people did hold it.
This led to a lot of crank quantum physics, e.g. people saying the universe would give you whatever you wished for.
These days this is pretty much a dead point of view. Even the idea of "collapse" per se is not very popular any more. The environment as a whole is considered as the observer, and it doesn't
matter if consciousness is involved.
To take this view, basically you have to think that there is something magical about consciousness. This is a view some physicists will hold, not being neurologists or philosophers.
Getting the universe out of superposition is thanks to symmetry breaking (if you ask me).
Last edited by Mujokan on Wed Oct 31, 2012 12:42 pm
7. AlhazredArs Scholae Palatinae
JCS3 wrote:
Panther Modern wrote:
Alhazred wrote:
Panther Modern wrote:
With regards to FTL communication, here's a thought experiment for you:
Since "information" cannot be transmitted, we're left with the ability to flip a random state change in a particle at distance. This state change can be any one of the hidden variables, but it is
impossible to determine which would be flipped, negating the ability to derive "information" from the flip, right?
This could potentially be solved simply by having two discrete pairs of entangled particles, right? Example:
Pair A1 & A2, and pair B1 & B2
Particles A1 & B1 go with FTL ship A
Particles B2 & B2 go with FTL ship B
FTL Ship A flips the state of particle A1, and the corresponding A2 particle on FTL Ship B flips randomly.
FTL Ship A flips the state of particle B1, and the corresponding B2 particle on FTL Ship B flips randomly.
Particles A1 & A2 act as "0"
Particles B1 & B2 act as "1"
The flipping of states (even random) can therefore transmit information by flipping the states of two separate pairs of particles individually, representing "1" and "0".
...or am I just getting this absolutely and utterly wrong?
I don't understand how having 2 pairs of entangled particles is helping you...
The fundamental problem remains, you can tell what the other guy has measured (or will measure if you go first) when he looks at his half of a pair, but you cannot decide WHAT he will measure.
Equally important you cannot tell IF he has made a measurement (there is no measurable "this is still not decided" state). Thus NO information is exchanged, and it just makes no difference how
many of these pairs you have.
An example: Rocket A jets off into space and goes either right or left. A month later they decide to tell Rocket B which direction they went in. How do they do this? If they measure pair A then
what does that tell Rocket B about which direction they took? The result will either be "0" or "1", but there's no way they can assign one of those values to their 'signal' to Rocket B, it learns
nothing about the course of Rocket A. Adding a measurement of pair B doesn't make this any better.
Crap... I overlooked that part.
Wow, what a dilemma; measurement of the entangled particle causes its state to flip, but to a random, indeterminable value on both the "sender" and "receiver" end. Since the "receiver" has no
idea whether or not you've "measured" your particle yet, they have no way to receive information on the other end without changing the state of the particle on their own, simply by nature of
observing it.
Phenomenally frustrating.
If I can expand on this idea and combine it with the approach I proposed earlier.
Have Ship A combine particles A1 and B1, such that they become entangled in the manner used here (Weird! Quantum Entanglement Can Reach into the Past, http://www.livescience.com/19975-spooky ...
ement.html). This causes Particles A2 and B2 to now be entangled. Ship B can now test its particles, independent of any knowledge of what Ship A has done. If Ship B's particles show entanglement,
then Ship B now knows that Ship A made the decision to entangle its particles.
Yeah, but I seriously question whether that will work (actually I highly suspect it just won't, but that's just a hypothesis). For one thing how do you know that A2 and B2 are entangled? You
would presumably measure both of them, but any measurement related to quantum variables is always statistical in nature. You can't KNOW that Ship A did anything. You can only guess. Also, since
entanglement CAN work in a retrodictive manner isn't it quite possible that you are just determining that AT SOME POINT A1 and B1 became entangled? My suspicion is that by the time you account
for all the 'noise' correlations you are left with no signal. Possibly someone else has a little more insight on this, I really haven't looked at it before.
8. MujokanArs Scholae Palatinae
Voix des Airs wrote:
Alhazred wrote:
You're just adding some new property to say an electron (call it 'chewiness') and then inventing a way for it to work
Akhazred... In the event a new property or observable is in fact ever required, you have my full, complete, total and unreserved support that it be called "chewiness".
I guess it would make about as much sense as stuff like "spin" or "color".
9. JCS3Smack-Fu Master, in training
Panther Modern wrote:
Have Ship A combine particles A1 and B1, such that they become entangled in the manner used here (Weird! Quantum Entanglement Can Reach into the Past, http://www.livescience.com/19975-spooky ...
ement.html). This causes Particles A2 and B2 to now be entangled. Ship B can now test its particles, independent of any knowledge of what Ship A has done. If Ship B's particles show entanglement,
then Ship B now knows that Ship A made the decision to entangle its particles.
How are you going to entangle separated particles at distance? As I understand it, particle entanglement only occurs in local spacetime and requires tremendous energy.
I'm not entangling separate particles at a distance, Ship A has the particles with it and Ship B has its particles with it. The "spooky action" is the fact that the change that A makes to its
particles gets transmitted to B without B having to do anything. B is in fact ignorant of when or if A has done anything. B then measures its two particles to see if they happen to be entangled.
If they are B has gained information from A.
Entanglement doesn't require a lot of energy, many entanglement experiments take place with nothing but lasers, mirrors, and photon detectors.
10. MujokanArs Scholae Palatinae
JCS3 wrote:
The "spooky action" is the fact that the change that A make's to its particles gets transmitted to B without B having to do anything.
It's not transmitted, it's just that both become determined together.
11. Panther ModernArs Tribunus Militum
Mujokan wrote:
JCS3 wrote:
The "spooky action" is the fact that the change that A make's to its particles gets transmitted to B without B having to do anything.
It's not transmitted, it's just that both become determined together.
Yeah, that's what I missed at first too; even though flipping particle A1 causes particle A2 to flip as well, the "receiver" on the other end has no way to know whether the "sender" has flipped
the particle state already or not, and when they "observe" their "A2" particle, THEIR observed state is again reflected on the "A1" particle...
..but the ship that houses the "A1" particle has no idea that the ship with the "A2" particle has even measured theirs.
There's simply no way to know, since observing the state is the same as changing the state.
12. JCS3Smack-Fu Master, in training
Mujokan wrote:
JCS3 wrote:
The "spooky action" is the fact that the change that A make's to its particles gets transmitted to B without B having to do anything.
It's not transmitted, it's just that both become determined together.
Let me back up here and ask a clarifying question, because I really am curious about this. I truly appreciate your patience
If we have two set of entangled particles [A1,A2] [B1,B2].
Then we entangle A1 with B1, does it not follow that A2 and B2 then become entangled with one another?
Isn't that what these articles are saying?
Reliable entanglement transfer between pure quantum states, http://arxiv.org/abs/quant-ph/0607112
Quantum Entanglement transfer between spin-pairs, http://arxiv.org/abs/1011.5352
Weird! Quantum Entanglement Can Reach into the Past, http://www.livescience.com/19975-spooky ... ement.html
13. Panther ModernArs Tribunus Militum
If we have two set of entangled particles [A1,A2] [B1,B2].
Then we entangle A1 with B1, does it not follow that A2 and B2 then become entangled with one another?
Isn't that what these articles are saying?
No. The particles don't just become entangled like that.
Even if they did, since observing the state flips the state, this ensures that no matter what way you observe, you also change, making determination of when (and by whom) a "change" occurred impossible.
Think of it like this:
You have a "two-way radio". You can transmit "noise" by pressing your magic button. The person on the other end has the same model of radio, and they can receive noise by pressing their magic
button. The problem is that since the people who hold the radios cannot see or communicate with each other in any way, they each have no idea when each other are pressing the button. Since you
can only send or receive (not both) at any given time, there is no way to tell whether the "noise" you're listening to is the noise you just generated by holding down your own button, or whether
the guy on the other end is holding down his button or not.
14. JCS3Smack-Fu Master, in training
Panther Modern wrote:
If we have two set of entangled particles [A1,A2] [B1,B2].
Then we entangle A1 with B1, does it not follow that A2 and B2 then become entangled with one another?
Isn't that what these articles are saying?
No. The particles don't just become entangled like that.
Then what is entanglement swapping?
15. I-ku-uWise, Aged Ars Veteran
Mujokan wrote:
I-ku-u wrote:
So why can't a measurement be seen as simply entangling ourselves with the particles being measured?
That would involve combining you and the system into one wavefunction in superposition, which is impossible. Measurement is the environment impinging on the wavefunction and causing it to decohere.
As to your first sentence:
No, it's not impossible, it's just undetectable. Equating the two is a logical mistake.
As to your second sentence:
"…causing it de-cohere" is an interpretation, based on the same logical fallacy as in your first sentence. And entanglement is easily created (as has been oft demonstrated in experiments) when an
"environment" consisting of one particle that impinges on the wavefunction of another, so my question stands unaddressed.
16. MujokanArs Scholae Palatinae
JCS3 wrote:
Let me back up here and ask a clarifying question, because I really am curious about this.
I am not the best person to ask, it is better to look around on the web re. delayed choice. Basically being certain of what happened still requires classical communication. But when you entangle
two halves of two entanglements you do entangle the whole thing.
Try this: http://motls.blogspot.ch/2012/03/has-an ... -time.html
17. AnonymousRichArs Praefectus
Panther Modern wrote:
Mujokan wrote:
JCS3 wrote:
The "spooky action" is the fact that the change that A make's to its particles gets transmitted to B without B having to do anything.
It's not transmitted, it's just that both become determined together.
Yeah, that's what I missed at first too; even though flipping particle A1 causes particle A2 to flip as well, the "receiver" on the other end has no way to know whether the "sender" has flipped
the particle state already or not, and when they "observe" their "A2" particle, THEIR observed state is again reflected on the "A1" particle...
The state of the particles will be the same no matter who does what first or second anyway, so what's the point of this exercise? In fact, if there's a difference between entanglement and the
particles always having said states, I would appreciate someone explaining it to me.
18. MujokanArs Scholae Palatinae
I-ku-u wrote:
Mujokan wrote:
I-ku-u wrote:
So why can't a measurement be seen as simply entangling ourselves with the particles being measured?
That would involve combining you and the system into one wavefunction in superposition, which is impossible. Measurement is the environment impinging on the wavefunction and causing it to decohere.
As to your first sentence:
No, it's not impossible, it's just undetectable. Equating the two is a logical mistake.
As to your second sentence:
"…causing it de-cohere" is an interpretation, based on the same logical fallacy as in your first sentence. And entanglement is easily created (as has been oft demonstrated in experiments) when an
"environment" consisting of one particle that impinges on the wavefunction of another, so my question stands unaddressed.
I am no expert so I don't really see what you are getting at. You can't say anything about the undetectable. For the rest it depends what you mean by "entanglement". In this context it is talking
about getting superposition, which you can't get with a macroscopic object.
19. JCS3Smack-Fu Master, in training
Mujokan wrote:
JCS3 wrote:
Let me back up here and ask a clarifying question, because I really am curious about this.
I am not the best person to ask, it is better to look around on the web re. delayed choice. Basically being certain of what happened still requires classical communication. But when you entangle
two halves of two entanglements you do entangle the whole thing.
Try this: http://motls.blogspot.ch/2012/03/has-an ... -time.html
Very useful article thank you.
For any curious where my logic and understanding failed.
Let us assume that our sets of particles [A1,A2] and [B1,B2] are entangled such that the sum of each set always equals 3.
Being in the state of entanglement, simply means that we know what possible configurations A1 and A2 can have and by knowing one we know the other. "Entangling" A1 and B1, simply means that we
are no longer considering the A's and B's as discrete groups but as a whole. Entangling the particles (A1 and B1) does not mean that the sum of A1 and B1 now equals 3 and A2 and B2 now equal 3.
Measuring the two particles on Ship A will give us a range of values between 0 and 6, and Ship A will now know what the sum of the two particles is on Ship B. And Ship B can derive the same
information through its own experiments. But in this situation neither has any more information than they started with.
Last edited by JCS3 on Wed Oct 31, 2012 3:03 pm
20. Panther ModernArs Tribunus Militum
The state of the particles will be the same no matter who does what first or second anyway
Wrong. The state changes randomly from observation alone. Observation is the same as action.
21. sactomikeSmack-Fu Master, in training
Could the answer be in multiple dimensions? If a three dimensional creature makes contact with a two dimensional world, the people in the two dimensional world will only perceive lines and dots.
If a quantum entanglement is actually a single object in an E-8, 256 dimension world, tickling it in the visible 4 dimensional world that we measure could cause it to giggle where we tickle it
and simultaneously where it has intersected another part of our 4 dimensional world.
BTW, the two box theory proposed in other comments will not work. If the same present is put in each box and they are then separated and opened, the communication occurred when the boxes were
together, not when they were apart. An entangled quantum analogy would be that both boxes start empty and only after you separate them you place the surfing physicist in one of them with the
result that he simultaneously appears in the other as well.
22. I-ku-uWise, Aged Ars Veteran
Mujokan wrote:
I-ku-u wrote:
Mujokan wrote:
I-ku-u wrote:
So why can't a measurement be seen as simply entangling ourselves with the particles being measured?
That would involve combining you and the system into one wavefunction in superposition, which is impossible. Measurement is the environment impinging on the wavefunction and causing it to decohere.
As to your first sentence:
No, it's not impossible, it's just undetectable. Equating the two is a logical mistake.
As to your second sentence:
"…causing it de-cohere" is an interpretation, based on the same logical fallacy as in your first sentence. And entanglement is easily created (as has been oft demonstrated in experiments) when an
"environment" consisting of one particle that impinges on the wavefunction of another, so my question stands unaddressed.
I am no expert so I don't really see what you are getting at. You can't say anything about the undetectable. For the rest it depends what you mean by "entanglement". In this context it is talking
about getting superposition, which you can't get with a macroscopic object.
As you say, "You can't say anything about the undetectable," which means that you can't say it's impossible, yet you did.
I don't claim to be an expert on the physics either. But superposition involving a macroscopic object is allowed. Creating it verifiably in an experiment just requires better experimental
procedures than we currently have, and more carefully constructed states than simply "alive" & "dead". IIRC, a small metal rod, so small that perhaps you likely don't consider it "macroscopic",
has been placed in a superposition of vibration states.
What I am getting at, in relation to your comment, essentially boils down to one question, but first the setup. Basically, replace me for the famous cat, except there's no poison vial broken so I
don't die, I just sit and think about the detector's result. The standard question would then be, "would an outside observer see me in a superposition of states?" Given the aforementioned rod
experiment, I think the answer would be yes, if carefully enough tested for, even though we can't today imagine anyway that might actually occur. But that's not my question, and so my opinion on
the answer isn't relevant.
My question is, without any interaction with anything outside my box, "how can I prove I'm not in a superposition?"
Everything I've seen and understand about quantum mechanics says that the answer is "no", while at the same time, everyone talks as if the answer is intuitively and obviously "yes". But where's
the proof?
Edit: tiny clarification of my question added.
23. EvilsushiSmack-Fu Master, in training
atmartens wrote:
I don't know much about physics. That being said, why can't it just be as follows:
Analogy: take two boxes and put the same message in each. Separate the boxes a million light years. The instant you open each box, you will get the same message at each location, without FTL
These aren't hidden variables necessarily, just variables which are set to the same values. Or maybe I'm misunderstanding, and these are hidden variables...
That was the idea proposed by Einstein but has since been disproven. The better analogy would be you have two exact pair of dice, you put them into bottles and separate them by millions of light
years. You shake up one bottle and then check the state of the dice and see 6. If you checked the other dice it would also show six.
24. HackOfAllTradesSmack-Fu Master, in training
atmartens wrote:
...why can't it just be as follows:
Analogy: take two boxes and put the same message in each. Separate the boxes a million light years. The instant you open each box, you will get the same message at each location, without FTL
Your message in a box corresponds to entanglement where Alice & Bob measure attributes perfectly correlated (=1) or perfectly anti-correlated (=0). Both Local Realism (LR) and Quantum Mechanics
(QM) give identical predictions in those cases.
The conflict occurs when Alice & Bob make measurements where the correlation is expected to be between 0 and 1 (like .5). Then QM predicts higher correlation than LR, and experimentally measured
correlations match the predictions of QM.
It's difficult to think of a simple analogy to measurements, but try this: We bake a pizza that's exactly half cheese only, and half pepperoni. Then slice it in half, and send half to Bob and
half to Alice. The pizza halves are entangled.
To be a good analogy for Quantum Measurement, Bob & Alice cannot simply open the pizza box and look at the entire half. They must agree to take a peek at only one angle (0..180 degrees), and
record the angle and what was there (cheese or pepperoni). After many, many pizzas they compare results to find the measured correlation. What do they expect?
First, consider the cases where Alice & Bob happened to make their measurements 180 degrees apart (opposite sides of the whole pizza.) Wherever Alice sees cheese, Bob must find pepperoni. So
measurements at 180 degrees must be perfectly anti-correlated (=0).
Second, consider measurements made of the corresponding edges. If that cut went through cheese, then both will see cheese. So measurements at 0 degrees must be perfectly correlated (=1). (Our
analogy fails if the pizza were sliced *exactly* between the cheese & pepperoni. Hey, it's just an analogy.)
Finally, consider measurements made at different angles (the difference between where Bob & Alice look). For any Local Realism Pizza(tm) we expect the correlation to vary linearly between 1 and
0. That is, when Alice & Bob looked at places close together on the pizza, they were more likely to see the same thing, and when they looked at places farther apart they more likely saw the opposite.
When they look at all the measurements, Alice & Bob must conclude that the Local Realism Pizza is perfectly precise. Every pizza was exactly half cheese and half pepperoni.
BUT! For Quantum Pizzas(tm) that same correlation will not be linear (in fact it's a Cosine). For measurements where Bob & Alice look 0 or 180 degrees apart, the QM and LR pizzas both give 1 or
0. But when Alice & Bob measure at any other angle, QM predicts higher correlation than LR.
If they looked at only the 0 and 180 degree measurements, Alice & Bob would think Quantum Pizza was just like Local Realism. But looking at all the angles, they find that Quantum Pizzas always
have more of one ingredient than the other.
All right. I admit it's not a great analogy, and it's way too complex. But that's the way it is in QM. Things that can only be True or False (1 or 0) in LR can take on values between 1 and 0 in QM.
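To put rough numbers on the cosine-versus-straight-line point above, here is a small sketch. The normalization (agreement running from 1 at 0 degrees down to 0 at 180 degrees) is chosen only to match the pizza analogy, not any particular experimental convention, and cos^2(theta/2) is used as a representative quantum cosine-law correlation for that normalization; the exact functional form depends on the physical system being measured.

import math

def lr_agreement(angle_deg):
    # Local-realism "pizza": agreement falls off linearly with the angle
    # between the two peek positions (1 at 0 degrees, 0 at 180 degrees).
    return 1.0 - angle_deg / 180.0

def qm_agreement(angle_deg):
    # A quantum-style cosine law with the same endpoints; it is not a
    # straight line in between, which is what Bell-type tests pick up.
    return math.cos(math.radians(angle_deg) / 2.0) ** 2

for angle in (0, 30, 45, 60, 90, 120, 135, 150, 180):
    print(angle, round(lr_agreement(angle), 3), round(qm_agreement(angle), 3))

The two curves agree at 0, 90 and 180 degrees but nowhere else: the cosine curve sits above the straight line for small angle differences and below it for large ones, and that deviation from linearity is the measurable difference.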
25. nekoniaowArs Centurion
LawOfEntropy wrote:
By an astute choice of coordinates, you can get an expected correlation measure that is inconsistent with local variables (quantum mechanics maxes out, as expressed in the Wikipedia section, at
2sqrt{2} while local variables max out at 2 for the measure of correlation used). The idea of x and z axes is that you choose two orthogonal directions on which to measure spin and then do each
one randomly. Your partner measuring x and you measuring z' results in different answers than your partner measuring z and you measuring z'.
This is really difficult to express in layman's terms. In softcore quantum, your partner measuring along the x axis puts your particle into an eigenstate of x spin (eigenstate = has exact known
value). This eigenstate is a linear combination of x' and z' states (linear combination = will randomly get one or the other with weighted probability). If your partner measures z, your particle
will be in a z eigenstate, which is a distinct (in general) linear combination of x' and z' states from the x eigenstate. If you measure x', you will choose from one of the x' components in your
particle's composite state. If you measure z', you'll choose one of the z' components. A local variable can't change the eigenstate its particle is in on the fly to match up with your partner's
choice. It would have to know at the beginning "ok, this other guy is going to measure x, and then the other guy is going to measure z', so I'll be in state a_x and you be in state b_z', ok?"
They can't know that, so they can't do as well as a nonlocal theory that allows the particle measured along x to alert the particle what linear combination of states it needs to be in. I
really hope this helps; I'm not sure I can do better.
It looks like there is still a bit of context I'm missing here.
You're essentially telling me that the measure of (arbitrary) x/z spin components is subject to a degree of uncertainty: if you measure one, you lose precision on the other and hence only obtain
a statistical distribution for it.
Obviously this stays true regardless of the coordinate system since spin is a vector and any coordinate system chosen is by definition arbitrary (the universe doesn't have an x/y/z axis).
So let's say that spin is a "fuzzy" vector in a sense. I guess that as for the uncertainty between speed and position there's a clear mathematical foundation for this (which I'll look up later).
Now, what you're telling me is that after the first measurement of spin z, the measurement of spin x on the peer particle will suffer from an uncertainty which matches the initial measurement and
can't be transmitted by local variables since they have no clue of the coordinate system used (it's our variable, not the particle's).
That essentially means that the x axis is in some way distinguishable from the z one, otherwise it would be easy to do the same measurement but changing the names of axes by simply doing a
circular permutation: we'd still do the exact same measurement but what we'd call x would be the old z and what we'd call z would be the old y which the particle has no way of knowing.
So there is a way to distinguish x and z which makes the whole difference between them and that has nothing to do with entanglement, this seems to be a property of the spin right?
Sorry for dragging my feet on that one but I really want to make sure I understand how the elements we're talking about are behaving before I draw any conclusion.
But then, that means that the hidden variables don't have to encode this information since it's already in the spin itself. What is the logical link between "the correlation is still observed"
and "the hidden variables can't encode it", why would they have to if that property of the spin is independent of the coordinate system?
The hidden variables have to record, on the fly, what should be obtained for every possible axis along which the spin could be projected given that your partner can set the particle to be an
eigenstate along an arbitrary axis. Say that your partner says "spin is up along x" and what that means to you is "spin has components sqrt{3} of x' down and some other stuff". Then you randomly
choose to measure x' and get x' down. But what if your partner measured spin up along z? Then you might have x' being sqrt{3} up in order to make this different linear combination. Now your
particle needs to have been born with x' up and have carried that local variable along with it. It can't do both!
But as I mentioned above, this uncertainty relation between x/z seems already to be non spatial, since it's observed independently of the coordinate system for a single particle whether it's
entangled or not. Ie, these components aren't really spatial ones: we measure them spatially but they have a relation which seems to be independent of the coordinate system, so why would we
assume that anything needs to be transmitted?
The spin vector could simply be formulated as an amplitude and a probability distribution (if that makes sense) and be set as such at creation time, there's no need to exchange anything.
It's clear that I need to look up what that damn spin measure actually means and how it's done
Thanks again for your time though!
Last edited by nekoniaow on Thu Nov 01, 2012 3:24 pm
26. nekoniaowArs Centurion
Sorry for the redundant post, I pressed "reply" instead of "edit" several times...
Last edited by nekoniaow on Thu Nov 01, 2012 3:23 pm
27. nekoniaowArs Centurion
Redundant post. Sorry for that.
Last edited by nekoniaow on Thu Nov 01, 2012 3:22 pm
28. nekoniaowArs Centurion
Redundant post. Sorry.
Last edited by nekoniaow on Thu Nov 01, 2012 3:22 pm
29. nekoniaowArs Centurion
Redundant post. Sorry.
30. Bengie25Ars Tribunus Militum
Mujokan wrote:
Thinking a conscious observer was required for wavefunction collapse was a semi-popular view a couple of decades ago. Even some respectable people did hold it.
This led to a lot of crank quantum physics, e.g. people saying the universe would give you whatever you wished for.
These days this is pretty much a dead point of view. Even the idea of "collapse" per se is not very popular any more. The environment as a whole is considered as the observer, and it doesn't
matter if consciousness is involved.
I figured as such because wild claims that aren't generally recognized aren't typically true. But what I found strange is I did find a few professors from Stanford/Cambridge/etc who have been at
these institutes for decades and they still teach this. Based on what I could quickly find on the web.
I'll have to actually spend some time and dig into this at some point because it's going to bother me.
31. TheWerewolfArs Centurion
People who are asking why it can't simply be that, when you have an entangled pair, the pair's properties are preset and merely unknown (so that no information is actually transferred) should take a
few minutes to read about Bell's Inequality.
It's a bit more than can be explained in a comment, but the short version is that Bell described a relatively simple experiment (in theory) where two properties are measured at the same time on
entangled particles and if there are hidden variables (ie: the properties are preset and are only discovered), you'd get a correlation of a specific value or less. But when they finally figured out
how to do the experiment (Alain Aspect in the early 1980s), they found the correlation was different than this value, which means there can't be hidden variables (preset values would be a kind
of hidden variable).
This means that the properties aren't preset, they get set *at the time of the measurement*.
That being said, there's one other option that wasn't mentioned in the article. It's possible that the communication channel that sends the state change information isn't a normal channel.
Consider that it might be a mechanism that's not actually within the normal physics. One interesting feature of this mechanism is that you cannot put information into it. Quantum mechanical
processes are inherently random, so making a measurement - even though it communicates the decision instantly - you can't force the state of the measurement and thus signal information to the
other particle.
So, that back channel cannot be used for communications by anything in this universe and doesn't violate the spirit of relativity (even if it kind of bends the letter of it).
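To attach numbers to this, here is a small sketch of the CHSH form of Bell's inequality. The correlation function E(a,b) = -cos(a-b) is the quantum-mechanical prediction for a spin-1/2 singlet pair measured along directions a and b, and the angles below are the standard textbook choices that maximize the quantum value; any local hidden-variable model keeps |S| at 2 or below.

import math

def E(a_deg, b_deg):
    # Quantum correlation for a singlet pair measured along directions
    # a and b (angles in degrees): E = -cos(a - b).
    return -math.cos(math.radians(a_deg - b_deg))

# Standard angle choices that maximize the CHSH quantity S.
a, a2 = 0.0, 90.0
b, b2 = 45.0, 135.0

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S), 4), round(2 * math.sqrt(2), 4))  # both ~2.8284; local models give at most 2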
32. TanithSmack-Fu Master, in training
Mad props to the article author for the image used. This is so amazingly fitting to the subject on so many levels and so incredibly cute on top, it's just wonderful.
33. robert_13Smack-Fu Master, in training
barfat, the difference between the experiment in the article and your boxes is you have no way of knowing when the other box will be opened. You cannot communicate fast enough for this experiment
to work in order to coordinate that. For example, when the spin of one of the correlated pair is flipped upside down, the other flips simultaneously, or at least in a time that would require
communication between them at 10,000 times the speed of light. Even that is likely a result of the limits of accuracy in measurement.
The suspicion is that it is absolutely simultaneous. There are experiments that also show that this correlation happens at a level more fundamental than space and time, so there is at that level
neither space nor time between the particles. This implies that the fundamental foundation upon which natural structure of every kind exists generates all locally apparent phenomena, including
space and time. It is a level that transcends space and time. Interestingly, this agrees with very ancient eastern knowledge.
|
{"url":"http://arstechnica.com/science/2012/10/quantum-entanglement-shows-that-reality-cant-be-local/?comments=1&start=160","timestamp":"2014-04-18T04:53:35Z","content_type":null,"content_length":"152036","record_id":"<urn:uuid:56221540-ce0d-43bc-a3cc-74443264a362>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Irving Park, Chicago, IL
Chicago, IL 60625
Reliable and professional tutor
...I am also professional, and understand the importance of tutoring in the learning process. I hold a Bachelor's degree in Math, and have taken math classes up to Calculus 3, differential equation,
and linear algebra. I hold a bachelor's degree in math, and have taken...
Offering 4 subjects including algebra 2
|
{"url":"http://www.wyzant.com/geo_Irving_Park_Chicago_IL_College_Algebra_tutors.aspx?d=20&pagesize=5&pagenum=4","timestamp":"2014-04-19T22:35:47Z","content_type":null,"content_length":"60575","record_id":"<urn:uuid:66a11242-7d85-483d-be0a-682a897238cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ordinary Differential Equations/Applications of Second Order DEQs
There are several uses for second-order differential equations. In this chapter, I will cover the use of second-order differential equations to describe the motion of a mass at the end of a spring.
The chapter is broken up into three sections:
1. Motion with an Outside Force
Chapter NotationEdit
The formulae in this chapter are written with the following notation in mind. If you've learned a different manner of notation, please take note of the differences. I made every attempt to use a
standard set of notation.
Important TermsEdit
Terms that I feel deserve your undivided attention will appear like This. You will see this term referred to often in the text that follows, so it's recommended that you fully understand what it
Derivatives with respect to timeEdit
If a derivative is taken with respect to time (t), then an equivalent symbol is used and is pronounced x double-dot, x triple-dot, etc.
• Example:$\frac{d^2x}{dt^2}\equiv\ddot{x}$
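As a preview of the kind of equation this notation is used for (quoted here only to illustrate the dot notation, not as this chapter's derivation), the standard spring-mass form is:
• Example: a mass $m$ on a spring with stiffness $k$, damping coefficient $c$, and outside force $F(t)$ satisfies $m\ddot{x} + c\dot{x} + kx = F(t)$, which reduces to $m\ddot{x} + kx = 0$ for free, undamped motion.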
Rendering MATH PNG ImagesEdit
This chapter was written using the built-in TeX markup language present in MediaWiki. It's recommended that you view the chapter with your preferences set to render all Math in PNG. Check your
preferences for this setting.
Last modified on 23 January 2013, at 19:30
|
{"url":"http://en.m.wikibooks.org/wiki/Ordinary_Differential_Equations/Applications_of_Second_Order_DEQs","timestamp":"2014-04-21T12:41:56Z","content_type":null,"content_length":"16388","record_id":"<urn:uuid:a0127cf6-9fc0-476d-862a-8486c30febbf>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Baseline Education - Business, Finance & Quantitative Methods
With continuous compounding, present value formulas and future value formulas are different from those with discrete compounding, PV = FV / (1+r)^t or FV = PV * (1+r)^t .
The present value (PV) and future value (FV) of a cash flow with continuous compounding is given by:
PV = FV*e^(-rt) or FV = PV*e^(rt)
where r is the continuously compounded interest or discount rate which is different from the discrete compounded or discount rate.
As the present value formulas are different, so are the formulas for the net present values of annuities, growing annuities, perpetuity and growing perpetuity with continuously compounded rates.
However, we will make use of the geometric progression analysis available within this blog at the link (click link) . The summary results are the sum of the first n terms of a geometric progression
and the sum of the infinite terms of a geometric progression.
Sn = [A/(1-R)]*(1- R^n)
(formula for sum of the first n terms)
S∞ = A / (1-R)
(formula for sum of infinite terms of the GP, valid only when |R| < 1) where A is the first term and R is the constant ratio or constant multiple.
We start with the cash flows and the associated present values, assuming the cash flows grow by a factor g (also a continuously compounded rate). The cash flows follow the following sequence:
C, C*e^g, C*e^(2g), ..., C*e^((n-1)g), ... (note: the nth term has an index of (n-1)g)
The present value of each cash flow, bearing in mind the formula PV = FV*e^(-rt) follow the following sequence:
Ce^(-r), C*e^(g-2r) , C*e^(2g-3r), ..., C*e^[(n-1)g-nr], ...
On closer inspection, these terms are a geometric progression with first term, A = Ce^(-r) and a constant ratio R = e^(g-r). These two terms can be used to find the formulas for the annuity, growing
annuity, perpetuity and growing perpetuity. Let's start with the growing annuity.
Growing Annuity and Annuity with continuous compounding
using Sn = [A/(1-R)]*(1- R^n) (formula for sum of the first n terms) then PV (GA) = [Ce^(-r)/(1-e^(g-r))]*[1 - e^((g-r)n)]
multiplying the numerator and denominator by e^(r-g) gives PV (GA) = [{Ce^(-r)}*e^(r-g)]/[{1-e^(g-r)}*e^(r-g)] * [1 - e^((g-r)n)] or PV (GA) = [Ce^(-g)/(e^(r-g)-1)]*[1 - e^(-(r-g)n)]
the present value of an annuity with no growth (g=o) directly results as:
PV (A) =[C/(e^r-1)]*[1- e^(-rn)]
Growing Perpetuity and Perpetuity with continuous compounding
Using S∞ =A / (1-R) (formula for sum of infinite terms of the GP) then PV (GP) = [Ce^(-r)/(1-e^(g-r))]
multiplying the numerator and denominator by e^(r-g) gives PV (GP) = [{Ce^(-r)}*e^(r-g)]/[{1-e^(g-r)}*e^(r-g)] or PV (GP) =[Ce^(-g)/(e^(r-g)-1)]
the present value of a perpetuity with no growth (g=o) directly results as:
PV (P) =C/({e^r}-1)
Summary results
Continuous compounding formulas
PV (A) =[C/(e^r-1)]*[1- e^(-rn)] - Present value of an annuity PV (GA) =[Ce^(-g)/(e^(r-g)-1)]*[1- e^(-(r-g)n)] - Present value of a growing annuity PV (P) =C/({e^r}-1) - Present value of a perpetuity
PV (GP) =[Ce^(-g)/(e^(r-g)-1)] - Present value of a growing perpetuity
Discrete compounding formulas
PV(A) = [C/r]* [1-{1/(1+r)^n}] - Present value of an annuity PV(GA) = [C/(r-g)]* [1-{(1+g)/(1+r)}^n] - Present value of a growing annuity PV(P) = C/r - Present value of a perpetuity PV(GP) = C/(r -g)
- Present value of a growing perpetuity
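As a quick numerical sanity check on the continuous-compounding growing-annuity formula above (the parameter values below are arbitrary illustrative choices, not figures from the post), the sketch compares the closed form with a brute-force sum of the discounted cash flows C, C*e^g, ..., C*e^((n-1)g):

import math

C, r, g, n = 100.0, 0.05, 0.02, 10   # arbitrary example values

# Closed-form PV of a growing annuity with continuously compounded rates
pv_formula = (C * math.exp(-g) / (math.exp(r - g) - 1.0)) * (1.0 - math.exp(-(r - g) * n))

# Brute force: cash flow C*e^(g*(t-1)) received at time t, discounted by e^(-r*t)
pv_direct = sum(C * math.exp(g * (t - 1)) * math.exp(-r * t)
                for t in range(1, n + 1))

print(round(pv_formula, 8), round(pv_direct, 8))   # the two values agree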
1 comment:
Impaired Life Annuity said...
Hi, Thanks for the very informative post. This post is so useful for me now I can easily calculate my annuity rates.
|
{"url":"http://baselineeducation.blogspot.co.uk/2012/10/annuities-and-perpetuities-with.html","timestamp":"2014-04-16T19:28:51Z","content_type":null,"content_length":"102236","record_id":"<urn:uuid:2ad9fd74-6f35-4db1-ac28-ba5ed1643fd8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why a DFT is usually called an FFT in practice
Practical implementations of the DFT are usually based on one of the Cooley-Tukey ``Fast Fourier Transform'' (FFT) algorithms [17].^8.1 For this reason, the matlab DFT function is called `fft', and
the actual algorithm used depends primarily on the transform length N.^8.2 The fastest FFT algorithms generally occur when N is a power of 2. In signal processing, we routinely zero-pad our FFT input buffers to the next
power of 2 in length (thereby interpolating our spectra somewhat) in order to enjoy the power-of-2 speed advantage. Finer spectral sampling is a typically welcome side benefit of increasing N. Appendix A
provides a short overview of some of the better known FFT algorithms, and some pointers to literature and online resources.
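As a small illustration of the zero-padding practice described above (using NumPy rather than matlab, purely for convenience; the signal is an arbitrary example):

import numpy as np

x = np.random.randn(1000)              # arbitrary example signal, length 1000

# Next power of 2 at or above the signal length: 1000 -> 1024
nfft = 1 << (len(x) - 1).bit_length()

X = np.fft.fft(x, n=nfft)              # numpy zero-pads x to length nfft before transforming
print(len(x), nfft, X.shape)           # 1000 1024 (1024,)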
|
{"url":"https://ccrma.stanford.edu/~jos/mdft/Why_DFT_usually_called.html","timestamp":"2014-04-18T15:47:54Z","content_type":null,"content_length":"9100","record_id":"<urn:uuid:4a4ca945-dac7-485d-82b8-b839acc1e5b8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Studio City Calculus Tutor
...From my own experiences in school and in teaching others, I take a holistic approach with each person. Different strategies work for different people, so I will always take the time to find the
perfect method of instruction. Patience is one of my stronger virtues, so you can be sure that I will never give up on a student.
24 Subjects: including calculus, reading, chemistry, English
...I love learning to overcome internal boundaries, and helping people who have said, "I can't" start saying "I DID!" I strive to elucidate the obscure and make my clients stronger as a person. I
have an extensive background with Math and Science: I've seen a ton of physics problems, and I have an...
44 Subjects: including calculus, reading, chemistry, Spanish
...I also have students struggling with math exams and we spent a short of time easily pass the exams. I have been teaching and tutoring Mandarin for 6 years. I successfully helped a language
school using immersion program to build a mandarin preschool program.
7 Subjects: including calculus, Chinese, algebra 1, algebra 2
...While it is not a necessary precursor to calculus (it generally doesn't involve much/any actual calculus), it will certainly help prepare most students. Even students who don't go on to take
calculus can still gain a greater understanding of mathematics and problem-solving skills from precalc. ...
11 Subjects: including calculus, physics, statistics, SAT math
...To provide some more details on my academic background, I have listed my experiences in the following areas of study: Math: College Level Calculus 1 - Grade = "A"; Calculus II - Grade = "A"; AP
Statistics; High School: Honors Pre-Calculus, Calculus; English: Received an "A" in Boston College's...
43 Subjects: including calculus, English, reading, writing
|
{"url":"http://www.purplemath.com/studio_city_calculus_tutors.php","timestamp":"2014-04-21T02:38:30Z","content_type":null,"content_length":"24141","record_id":"<urn:uuid:490b4b9e-aada-4b5f-806e-7a99d99abf7c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How To Do Trigonometry For This...?
January 10th 2009, 05:31 PM #1
Apr 2008
How To Do Trigonometry For This...?
I know how to do cos(5pi/6) and any question like it.
But I don't know how to do sin, cos, tan, csc, cot, sec, of angles like 0 (0pi/12), 90 (3pi/12), 180 (6pi/12), 270 (9pi/12), 360 (12pi/12), 450 (15pi/12), etc.
Can somebody show me how?
ALso, I don't understand why cos (0pi/12) = 0, too. Or why some are undefined.
there is nothing to "do" here ... this is just a matter of knowing the unit circle.
note that $\cos\left(\frac{5\pi}{6}\right) = -\frac{\sqrt{3}}{2}$
here is how to find a trig value for an angle involving $\frac{\pi}{12}$ that is not on the unit circle ...
$\cos\left(\frac{\pi}{12}\right) = \cos\left(\frac{\pi}{3} - \frac{\pi}{4}\right)$
you need the difference identity for cosine to calculate the exact value ...
$\cos(a-b) = \cos(a)\cos(b) + \sin(a)\sin(b)$
If none of this makes sense to you, then it's obvious that you have some major gaps in your knowledge of basic trigonometry. you should see your teacher to get some face to face help.
finally ... "cos(0pi/12)" = cos(0) = 1 , not 0.
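For reference, carrying that difference identity through for the example above:
$\cos\left(\frac{\pi}{12}\right) = \cos\frac{\pi}{3}\cos\frac{\pi}{4} + \sin\frac{\pi}{3}\sin\frac{\pi}{4} = \frac{1}{2}\cdot\frac{\sqrt{2}}{2} + \frac{\sqrt{3}}{2}\cdot\frac{\sqrt{2}}{2} = \frac{\sqrt{2}+\sqrt{6}}{4} \approx 0.966$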
January 10th 2009, 05:56 PM #2
|
{"url":"http://mathhelpforum.com/math-topics/67616-how-do-trigonometry.html","timestamp":"2014-04-18T12:07:59Z","content_type":null,"content_length":"33856","record_id":"<urn:uuid:c4a43001-14df-4316-8fee-df9947db0303>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Clifton, VA Calculus Tutor
Find a Clifton, VA Calculus Tutor
...I played on my high school varsity team for three years and earned team MVP for my senior season. I encourage a smooth, easy swing in my students in order to lessen stress on the body and
improve accuracy. I also cover short game, etiquette, and how to choose shots well so you can reach your goal out on the course.
13 Subjects: including calculus, writing, algebra 1, GRE
...Please note that lesson time is for a minimum of 90 minutes (30 minutes minimum for online lessons.)I have 15 years' experience in Java programming. I have a Masters degree in pure Mathematics.
I have taught, tutored, graded students homework, and taken several classes in discrete math (several...
37 Subjects: including calculus, physics, geometry, statistics
...In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending extra time to prepare for the session prior to meeting
with the student. My broad background in math, science, and engineering combined with my extensive rese...
16 Subjects: including calculus, physics, statistics, geometry
...I have published a number of research papers about computer science, mathematics and the teaching of children in the most qualified international journals, such as Discrete Math, Applied Math,
etc. I was selected one of the top 200 tutors in the entire country in 2011. I worked as a committee member and chairman of several international conferences, such as IEEE.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
My name is Bekah and I graduated from BYU with a degree in Math Education. While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice
a week, and working one-on-one with students. After graduating, I taught high school math for one year...
10 Subjects: including calculus, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Clifton_VA_calculus_tutors.php","timestamp":"2014-04-21T15:21:49Z","content_type":null,"content_length":"24135","record_id":"<urn:uuid:01693fe7-469a-4189-b96f-57bf24e8b039>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Approximation Schemes for
Scheduling on Parallel Machines
Noga Alon
Yossi Azar
Gerhard J. Woeginger
Tal Yadid §
We discuss scheduling problems with m identical machines and n jobs where each job
has to be assigned to some machine. The goal is to optimize objective functions that
solely depend on the machine completion times.
As a main result, we identify some conditions on the objective function, under which
the resulting scheduling problems possess a polynomial time approximation scheme. Our
result contains, generalizes, improves, simplifies, and unifies many other results in this
area in a natural way.
Keywords: Scheduling theory, approximation algorithm, approximation scheme, worst
case ratio, combinatorial optimization.
1 Introduction
In this paper we consider scheduling problems with m identical machines Mi, 1 ≤ i ≤ m,
and n independent jobs Jj, 1 ≤ j ≤ n, where job Jj has processing time (or length) pj. A
schedule is an assignment of the n jobs to the m machines. For 1 ≤ i ≤ m, the completion
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/174/3027584.html","timestamp":"2014-04-16T05:11:53Z","content_type":null,"content_length":"8097","record_id":"<urn:uuid:7a1c5af8-e6ba-41c9-9e1b-92170bf03293>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proposition 11
If four magnitudes are proportional, and the first is commensurable with the second, then the third also is commensurable with the fourth; but, if the first is incommensurable with the second, then
the third also is incommensurable with the fourth.
Let A, B, C, and D be four magnitudes in proportion, so that A is to B as C is to D, and let A be commensurable with B.
I say that C is also commensurable with D.
Since A is commensurable with B, therefore A has to B the ratio which a number has to a number.
And A is to B as C is to D, therefore C also has to D the ratio which a number has to a number. Therefore C is commensurable with D.
Next, let A be incommensurable with B.
I say that C is also incommensurable with D.
Since A is incommensurable with B, therefore A does not have to B the ratio which a number has to a number.
And A is to B as C is to D, therefore neither has C to D the ratio which a number has to a number. Therefore C is incommensurable with D.
Therefore, if four magnitudes are proportional, and the first is commensurable with the second, then the third also is commensurable with the fourth; but, if the first is incommensurable with the
second, then the third also is incommensurable with the fourth.
|
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX11.html","timestamp":"2014-04-20T00:49:14Z","content_type":null,"content_length":"4272","record_id":"<urn:uuid:74f9fa5f-37c0-499c-bc27-14b146832b26>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Foil Calculator
Step 1 :
Multiply the first term of the first bracket by the first term of the other bracket.
Step 2 :
Then multiply the first term of the first bracket by the last term of the second bracket.
Step 3 :
Now move to the second term of the first bracket and multiply it by the first term of the second bracket.
Step 4 :
Finally, multiply the last term of the first bracket by the last term of the second bracket.
Step 5 :
Adding all the products from these steps gives the final answer.
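For example, applying the steps to the (arbitrary) brackets $(x+2)(x+3)$:
$(x+2)(x+3) = x\cdot x + x\cdot 3 + 2\cdot x + 2\cdot 3 = x^2 + 3x + 2x + 6 = x^2 + 5x + 6$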
|
{"url":"http://calculator.tutorvista.com/foil-calculator.html","timestamp":"2014-04-16T10:40:23Z","content_type":null,"content_length":"28581","record_id":"<urn:uuid:eb10aabb-67e9-44ce-bec1-98d054b9266c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Get homework help at HomeworkMarket.com
Submitted by
on Tue, 2012-03-13 19:35
due date not specified
answered 1 time(s)
Find three positive consecutive odd integers such that the largest decreased by three times the second is 23 less than...
Find three positive consecutive odd integers such that the largest decreased by three times the second is 23 less than the smallest
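One way to set this up (an independent sketch, not the purchased answer shown below): let the integers be $n$, $n+2$ and $n+4$. "The largest decreased by three times the second is 23 less than the smallest" gives $(n+4) - 3(n+2) = n - 23$, so $-2n - 2 = n - 23$, hence $3n = 21$ and $n = 7$. The integers are 7, 9 and 11; check: $11 - 3\cdot 9 = -16 = 7 - 23$.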
Submitted by
on Tue, 2012-03-13 21:17
price: $5.00
Complete answer with step-by-step instructions
body preview (324 words)
Let the xxxxx xxxxxxxxxxx odd numbers xx a, x and c. We will have the xxxxxxxxx xxxxxxxxxx
x x b x x
x = x x 2
xx xx replace x xx the xxxxx xxxxxxxx with xxx value in xxx xxxxxx xx xxxx xxxxx
x = xx + 2) x x xx
x = c + 4
From xxx hypothesis xx xxxx x
a x 3 * b = x x xx
xx xxx xxx
- - - more text follows - - -
|
{"url":"http://www.homeworkmarket.com/content/find-three-positive-consecutive-odd-integers-such-largest-decreased-three-times-second-23-le","timestamp":"2014-04-18T00:58:34Z","content_type":null,"content_length":"49294","record_id":"<urn:uuid:121604a9-6a07-48b7-be8d-179fdfd035c2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathML interactions with the Wide World
Because MathML is, typically, embedded in a wider context, it is important to describe the conditions that processors should acknowledge in order to recognize XML fragments as MathML. This chapter
describes the fundamental mechanisms to recognize and transfer MathML markup fragments within a larger environment such as an XML document or a desktop file-system, it raises the issues of combining
external markup within MathML, then indicates how cascading style sheets can be used within MathML.
This chapter applies to both content and presentation MathML and indicates a particular processing model to the semantics, annotation and annotation-xml elements defined in Section 5.1 Semantic
7.1 Invoking MathML Processors: namespace, extensions, and mime-types
7.1.1 Recognizing MathML in an XML Model
Within an XML document supporting namespaces [XML], [Namespaces], the preferred method to recognize MathML markup is by the identification of the math element in the appropriate namespace, i.e. that
of URI http://www.w3.org/1998/Math/MathML.
This is the recommended method to embed MathML within [XHTML] documents. Some user-agents' setup may require supplementary information to be available.
Markup-language specifications that wish to embed MathML may provide special conditions independent of this recommendation. The conditions should be equivalent and the elements' local-names should
remain the same.
7.1.2 Resource Types for MathML Documents
Although rendering MathML expressions often occurs in place in a Web browser, other MathML processing functions take place more naturally in other applications. Particularly common tasks include
opening a MathML expression in an equation editor or computer algebra system. It is important therefore to specify the encoding-names that MathML fragments should be called with:
MIME types [RFC2045], [RFC2046] offer a strategy that can be used in current user agents to invoke a MathML processor. This is primarily useful when referencing separate files containing MathML
markup from an embed or object element, or within a desktop environment.
[RFC3023] assigns MathML the MIME type application/mathml+xml which is the official mime-type. The W3C Math Working Group recommends the standard file extension .mml within a registry associating
file formats to file extensions. In MathML 1.0, text/mathml was given as the suggested MIME type. This has been superseded by RFC3023. In the next section, alternate encoding names are provided for
the purposes of desktop transfers.
7.1.3 Names of MathML Encodings
MathML contains two distinct vocabularies: one for encoding mathematical semantics called Chapter 4 Content Markup and one for encoding visual presentation called Chapter 3 Presentation Markup. Some
MathML-aware applications import and export only one of these vocabularies, while other may be capable of producing and consuming both. Consequently, we propose three distinct MathML encoding names:
Flavor Name Description Deprecated
MathML Content Instance contains content MathML markup only MathML-Content, Content MathML, cMathML
MathML Presentation Instance contains presentation MathML markup only MathML-Presentation, Presentation MathML, pMathML
MathML Any well-formed MathML instance presentation markup, content markup, or a mixture of the two is allowed
Any application producing one of the encodings above should ensure to output the values of the first column but should accept encoding names of the deprecated column.
7.2 Transferring MathML in Desktop Environments
MathML expressions are often exchanged between applications using the familiar copy-and-paste or drag-and-drop paradigms. This section provides recommended ways to process MathML while applying these paradigms.
Applying them will transfer MathML fragments between the contexts of two applications by making them available in several flavors, often called clipboard formats or data flavors. The copy-and-paste
paradigm lets an application place content in a central clipboard, one data stream per clipboard format; consuming applications negotiate by choosing to read the data of the format they elect. The
drag-and-drop paradigm lets an application offer content by declaring the available formats; potential recipients accept or reject a drop based on this list, and the drop action then lets the receiving
application request delivery of the content in the indicated format. The list of flavors is generally ordered, going from the most desirable to the least desirable flavor.
Current desktop platforms offer both of these transfer paradigms using similar transfer architectures. In this section we specify what applications should provide as transfer-flavors, how they should
be named, and how they should handle the special semantics, annotation, and annotation-xml elements.
To summarize the two negotiation mechanisms, we shall, here, be talking of flavors, each having a name (a character string) and a content (a stream of binary data), which are exported.
7.2.1 Basic Transfer Flavors' Names and Contents
Note that MathML Content, MathML Presentation and MathML are the exact strings that should be used to describe the flavors corresponding to the encodings in Section 7.1.3 Names of MathML Encodings.
On operating systems that allow such, applications should register such names (e.g. Windows' RegisterClipboardFormat).
When transferring MathML, for example when placing it within a clipboard, an application MUST ensure the content is a well-formed XML instance of a MathML schema. Specifically:
1. The instance MUST begin with an XML processing instruction, e.g. <?xml version="1.0"?>
2. The instance MUST contain exactly one root math element.
3. Since MathML is frequently embedded within other XML document types, the instance MUST declare the MathML namespace on the root math element. In addition, the instance SHOULD use a schemaLocation
attribute on the math element to indicate the location of MathML schema documents against which the instance is valid. Note that the presence of the schemaLocation attribute does not require a
consumer of the MathML instance to obtain or use the cited schema documents.
4. The instance MUST use numeric character references (e.g. α) rather than character entity names (e.g. α) for greater interoperability.
5. The character encoding for the instance MUST be either specified in the XML header, UTF-16, or UTF-8. UTF-16-encoded data MUST begin with a byte-order mark (BOM). If no BOM or encoding is given,
the character encoding will be assumed to be UTF-8.
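As a non-normative illustration of requirements 1-5, a producer written in Python might assemble a clipboard payload along the following lines; the helper name mathml_clipboard_payload is invented for this sketch, and the optional schemaLocation attribute is omitted for brevity.

MATHML_NS = "http://www.w3.org/1998/Math/MathML"

def mathml_clipboard_payload(inner_markup):
    # Requirement 1: start with an XML processing instruction.
    # Requirements 2 and 3: exactly one root math element, declaring the MathML namespace.
    instance = ('<?xml version="1.0"?>'
                '<math xmlns="%s">%s</math>' % (MATHML_NS, inner_markup))
    # Requirement 5: UTF-8 needs no byte-order mark.
    return instance.encode("utf-8")

# Requirement 4: numeric character references (here U+03B1) instead of entity names.
payload = mathml_clipboard_payload("<mi>&#x3B1;</mi>")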
7.2.2 Recommended Behaviors when Transferring
Applications that transfer MathML SHOULD adhere to the following conventions:
1. Applications that have pure presentation markup and/or pure content markup versions of an expression SHOULD offer as many of these two flavors as are available.
2. Applications that only export one MathML flavor should name it "MathML" independent of the nature of the fragments they export. Applications that export the two flavors should export the
"MathML Content" and "MathML Presentation" flavors as well as the "MathML" flavor which combines the two others using a top-level MathML semantics element (see Section 5.4.1 Top-level Parallel
3. When an application exports a MathML fragment whose root element is a semantics element, it SHOULD offer, after the flavors above, a flavor for each annotation or annotation-xml element that has
a clipboardFlavor attribute: the flavor name should be given by the clipboardFlavor attribute value of the annotation or annotation-xml element, and the content should be the child text in the
surrounding encoding (if the annotation element contains only textual data), a valid XML fragment (if the annotation-xml element contains children), or the data resulting from requesting the URL
given by the href attribute.
User-agent implementors should be aware that some clipboard flavors, when put in the platform's clipboard or transferred through a gesture such as drag-and-drop, may be used in a way that
executes the programs contained in the transferred content, and this without the traditional security restrictions applied to web content; they should, thus, only allow transfer of safe content.
4. As a final fallback applications MAY export a version of the data in plain-text flavor (such as CF_UNICODETEXT, UnicodeText, NSStringPboardType, text/plain, ...). When an application has multiple
versions of an expression available, it may choose the version to export as text at its discretion. Since some older MathML-aware programs expect MathML instances transferred as text to begin
with a math element, the text version should generally omit the XML processing instruction, DOCTYPE declaration and other XML prolog material before the math element. Similarly, the BOM should be
omitted for Unicode text encoded as UTF-16. Note, the Unicode text version of the data should always be the last flavor exported, following the principle that exported flavors should be ordered
with the most specific flavor first and the least specific flavor last.
7.2.3 Discussion
For purposes of determining whether a MathML instance is pure content markup or pure presentation markup, the math element and the semantics, annotation and annotation-xml elements should be regarded
as belonging to both the presentation and content markup vocabularies. This is obvious for the root math element which is required for all MathML expressions. However, the semantics element and its
child annotation elements comprise an arbitrary annotation mechanism within MathML, and are not tied to either presentation or content markup. Consequently, applications consuming MathML should
always process these four elements even if the application only implements one of the two vocabularies.
It is worth noting that the above recommendations allow agents producing MathML to provide binary data for the clipboard, for example as an image or an application-specific format. The sole method to
do so is to reference the binary data by the href attribute since XML character data does not allow arbitrary byte-streams.
While the above recommendations are intended to improve interoperability between MathML-aware applications utilizing the transfer flavors, it should be noted that they do not guarantee
interoperability. For example, references to external resources (e.g. stylesheets, etc.) in MathML data can also cause interoperability problems if the consumer of the data is unable to locate them,
just as can happen when cutting and pasting HTML or many other data types. Applications that make use of references to external resources are encouraged to make users aware of potential problems and
provide alternate ways for obtaining the referenced resources. In general, consumers of MathML data containing references they cannot resolve or do not understand should ignore them.
7.2.4 Examples
7.2.4.1 Example 1
An e-Learning application has a database of quiz questions, some of which contain MathML. The MathML comes from multiple sources, and the e-Learning application merely passes the data on for display,
but does not have sophisticated MathML analysis capabilities. Consequently, the application is not aware whether a given MathML instance is pure presentation or pure content markup, nor does it know
whether the instance is valid with respect to a particular version of the MathML schema. It therefore places the following data formats on the clipboard:
│Flavor Name │ Flavor Content │
│MathML │<?xml version="1.0"?> │
│ │<math xmlns="http://www.w3.org/1998/Math/MathML">...</math> │
│Unicode Text│<math xmlns="http://www.w3.org/1998/Math/MathML">...</math> │
7.2.4.2 Example 2
An equation editor is able to generate pure presentation markup, valid with respect to MathML 2.0, 2nd Edition. Consequently, it exports the following flavors:
│ Flavor Name │ Flavor Content │
│MathML Presentation│<?xml version="1.0"?> │
│ │<math xmlns="http://www.w3.org/1998/Math/MathML">...</math> │
│Tiff │(a rendering sample) │
│Unicode Text │<math xmlns="http://www.w3.org/1998/Math/MathML">...</math> │
7.2.4.3 Example 3
A schema-based content management system contains multiple MathML representations of a collection of mathematical expressions, including mixed markup from authors, pure content markup for interfacing
to symbolic computation engines, and pure presentation markup for print publication. Due to the system's use of schemas, markup is stored with a namespace prefix. The system therefore can transfer
the following data:
│ Flavor Name │ Flavor Content │
│ │<?xml version="1.0"?> │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│MathML Presentation│ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│ │ <mml:mrow> │
│ │ ... │
│ │ <mml:mrow> │
│ │</mml:math> │
│ │<?xml version="1.0"?> │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│MathML Content │ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│ │ <mml:apply> │
│ │ ... │
│ │ <mml:apply> │
│ │</mml:math> │
│ │<?xml version="1.0"?> │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│ │ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│MathML │ <mml:mrow> │
│ │ <mml:apply> ... content markup within presentation markup ... </mml:apply> │
│ │ ... │
│ │ </mml:mrow> │
│ │</mml:math> │
│TeX │{x \over x-1} │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│ │ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│Unicode Text │ <mml:mrow> │
│ │ ... │
│ │ <mml:mrow> │
│ │</mml:math> │
7.2.4.4 Example 4
A similar content management system is web-based and delivers MathML representations of mathematical expressions. The system is able to produce presentation MathML, content MathML, TeX and pictures
in PNG format. In web-pages being browsed, it could produce a MathML fragment such as the following:
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML">
<mml:annotation-xml encoding="MathML Content">...</mml:annotation-xml>
<mml:annotation clipboardFlavor="TeX">{1 \over x}</mml:annotation>
<mml:annotation clipboardFlavor="image/png" href="formula3848.png"/>
A web-browser that receives such a fragment and tries to export it as part of a drag-and-drop action, can offer the following flavors:
│ Flavor Name │ Flavor Content │
│ │<?xml version="1.0"?> │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│MathML Presentation│ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│ │ <mml:mrow> │
│ │ ... │
│ │ <mml:mrow> │
│ │</mml:math> │
│ │<?xml version="1.0"?> │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│MathML Content │ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│ │ <mml:apply> │
│ │ ... │
│ │ <mml:apply> │
│ │</mml:math> │
│ │<?xml version="1.0"?> │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│ │ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│MathML │ <mml:mrow> │
│ │ <mml:apply> ... content markup within presentation markup ... </mml:apply> │
│ │ ... │
│ │ </mml:mrow> │
│ │</mml:math> │
│TeX │{x \over x-1} │
│image/png │(the content of the picture file, requested from formula3848.png │
│ │<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" │
│ │ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" │
│ │ xsi:schemaLocation="http://www.w3.org/Math/XMLSchema/mathml2/mathml2.xsd"> │
│Unicode Text │ <mml:mrow> │
│ │ ... │
│ │ <mml:mrow> │
│ │</mml:math> │
7.3 Combining MathML and Other Formats
Since MathML is most often generated by authoring tools, it is particularly important that opening a MathML expression in an editor should be easy to do and to implement. In many cases, it will be
desirable for an authoring tool to record some information about its internal state along with a MathML expression, so that an author can pick up editing where he or she left off. The following
markup is proposed:
1. For any extra information that is encoded in significantly more than an attribute value, MathML-3 proposes the usage of the semantics element presented in Section 5.1 Semantic Annotations.
2. For any extra information that cannot be declared as such and is, expectedly, private to the application, MathML-3 suggests using the maction element; see Section 3.6.1 Bind Action to Sub-Expression
7.3.1 Mixing MathML and HTML
│ Issue allow-well-specified-embedding │wiki (member only) │
│ Allow well specified foreign markup │
│This section should not fully prohibit children of MathML markup containing foreign markup as it does currently. We should leave it possible for specifications to define how embedded foreign markup│
│in MathML token elements can work (expectedly XSL:FO and HTML5) while suggesting processors that cannot do anything with such markup to ignore it. │
│ │
│Moreover, the schema should exist in strict versions, prohibiting foreign markup and in lax or parametrized version to open support for external formats. (type parametrization in XML-schema, entity│
│redifinition in DTD, something in RelaxNG) │
│ Resolution │None recorded │
In order to fully integrate MathML into XHTML, it should be possible not only to embed MathML in XHTML, as described in Section 7.1.1 Recognizing MathML in an XML Model, but also to embed XHTML in
MathML. However, the problem of supporting XHTML in MathML presents many difficulties. Therefore, at present, the MathML specification does not permit any XHTML elements within a MathML expression,
although this may be subject to change in a future revision of MathML.
In most cases, XHTML elements (headings, paragraphs, lists, etc.) either do not apply in mathematical contexts, or MathML already provides equivalent or better functionality specifically tailored to
mathematical content (tables, mathematics style changes, etc.). However, there are two notable exceptions, the XHTML anchor and image elements. For this functionality, MathML relies on the general
XML linking and graphics mechanisms being developed by other W3C Activities.
7.3.2 Linking
│ Issue Linking-and-marking-ids │wiki (member only) │
│ Linking and Marking IDs │
│We wish to stop using xlink for links since it seems unimplemented and add the necessary attributes at presentation elements. │
│ Resolution │None recorded │
MathML has no element that corresponds to the XHTML anchor element a. In XHTML, anchors are used both to make links, and to provide locations to which a link can be made. MathML, as an XML
application, defines links by the use of the mechanism described in the W3C Recommendation "XML Linking Language" [XLink].
A MathML element is designated as a link by the presence of the attribute xlink:href. To use the attribute xlink:href, it is also necessary to declare the appropriate namespace. Thus, a typical
MathML link might look like:
<mrow xmlns:xlink="http://www.w3.org/1999/xlink"
MathML designates that almost all elements can be used as XML linking elements. The only elements that cannot serve as linking elements are those which exist primarily to disambiguate other MathML
constructs and in general do not correspond to any part of a typical visual rendering. The full list of exceptional elements that cannot be used as linking elements is given in the table below.
MathML elements that cannot be linking elements
mprescripts, none, malignmark, maligngroup
Note that the XML Linking [XLink] and XML Pointer Language [XPointer] specifications also define how to link into a MathML expressions. Be aware, however, that such links may or may not be properly
interpreted in current software.
7.3.3 Images
The img element has no MathML equivalent. The decision to omit a general mechanism for image inclusion from MathML was based on several factors. However, the main reason for not providing an image
facility is that MathML takes great pains to make the notational structure and mathematical content it encodes easily available to processors, whereas information contained in images is only
available to a human reader looking at a visual representation. Thus, for example, in the MathML paradigm, it would be preferable to introduce new glyphs via the mglyph element which at a minimum
identifies them as glyphs, rather than simply including them as images.
7.3.4 MathML and Graphical Markup
Apart from the introduction of new glyphs, many of the situations where one might be inclined to use an image amount to displaying labeled diagrams. For example, knot diagrams, Venn diagrams, Dynkin
diagrams, Feynman diagrams and commutative diagrams all fall into this category. As such, their content would be better encoded via some combination of structured graphics and MathML markup. However,
at the time of this writing, it is beyond the scope of the W3C Math Activity to define a markup language to encode such a general concept as "labeled diagrams." (See http://www.w3.org/Math for
current W3C activity in mathematics and http://www.w3.org/Graphics for the W3C graphics activity.)
One mechanism for embedding additional graphical content is via the semantics element, as in the following example:
<annotation-xml encoding="SVG1.1">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 290 180">
<clipPath id="a">
<circle cy="90" cx="100" r="60"/>
<circle fill="#AAAAAA" cy="90" cx="190"
r="60" style="clip-path:url(#a)"/>
<circle stroke="black" fill="none" cy="90" cx="100" r="60"/>
<circle stroke="black" fill="none" cy="90" cx="190" r="60"/>
<annotation-xml encoding="application/xhtml+xml">
<img xmlns="http://www.w3.org/1999/xhtml" src="intersect.gif" alt="A intersect B"/>
Here, the annotation-xml elements are used to indicate alternative representations of the Content MathML depiction of the intersection of two sets. The first one is in the "Scalable Vector Graphics"
format [SVG1.1] (see [XHTML-MathML-SVG] for the definition of an XHTML profile integrating MathML and SVG), the second one uses the XHTML img element embedded as an XHTML fragment. In this situation,
a MathML processor can use any of these representations for display, perhaps producing a graphical format such as the image below.
Note that the semantics representation of this example is given in Content MathML markup, as the first child of the semantics element. In this regard, it is the representation most analogous to the
alt attribute of the img element in XHTML, and would likely be the best choice for non-visual rendering.
7.4 Using CSS with MathML
When MathML is rendered in an environment that supports [CSS2], controlling mathematics style properties with a CSS stylesheet is obviously desirable. MathML 2.0 has significantly redesigned the way
presentation element style properties are organized to facilitate better interaction between MathML renderers and CSS style mechanisms. It introduces four new mathematics style attributes with
logical values. Roughly speaking, these attributes can be viewed as the proper selectors for CSS rules that affect MathML.
Controlling mathematics styling is not as simple as it might first appear because mathematics styling and text styling are quite different in character. In text, meaning is primarily carried by the
relative positioning of characters next to one another to form words. Thus, although the font used to render text may impart nuances to the meaning, transforming the typographic properties of the
individual characters leaves the meaning of text basically intact. By contrast, in mathematical expressions, individual characters in specific typefaces tend to function as atomic symbols. Thus, in
the same equation, a bold italic 'x' and a normal italic 'x' are almost always intended to be two distinct symbols that mean different things. In traditional usage, there are eight basic
typographical categories of symbols. These categories are described by mathematics style attributes, primarily the mathvariant attribute.
Text and mathematics layout also obviously differ in that mathematics uses 2-dimensional layout. As a result, many of the style parameters that affect mathematics layout have no textual analogs. Even
in cases where there are analogous properties, the sensible values for these properties may not correspond. For example, traditional mathematical typography usually uses italic fonts for single
character identifiers, and upright fonts for multicharacter identifier. In text, italicization does not usually depend on the number of letters in a word. Thus although a font-slant property makes
sense for both mathematics and text, the natural default values are quite different.
Because of the difference between text and mathematics styling, only the styling aspects that do not affect layout are good candidates for CSS control. MathML 3.0 captures the most important
properties with the new mathematics style attributes, and users should try to use them whenever possible over more direct, but less robust, approaches. A sample CSS stylesheet illustrating the use of
the mathematical style attributes is available in Appendix C Sample CSS Style Sheet for MathML. Users should not count on MathML implementations to implement any other properties than those in the
Font, Colors, and Outlines families of properties described in [CSS2] and implementations should only implement these properties within MathML elements. Note that these prohibitions do not apply to
CSS stylesheets that implement the MathML for CSS profile [MathMLforCSS].
Generally speaking, the model for CSS interaction with the math style attributes runs as follows. A CSS style sheet might provide a style rule such as:
math *[mathsize="small"] {
font-size: 80%
This rule sets the CSS font-size properties for all children of the math element that have the mathsize attribute set to small. A MathML renderer would then query the style engine for the CSS
environment, and use the values returned as input to its own layout algorithms. MathML does not specify the mechanism by which style information is inherited from the environment. However, some
suggested rendering rules for the interaction between properties of the ambient style environment and MathML-specific rendering rules are discussed in Section 3.2.2 Mathematics style attributes
common to token elements, and more generally throughout Chapter 3 Presentation Markup.
It should be stressed, however, that some caution is required in writing CSS stylesheets for MathML. Because changing typographic properties of mathematics symbols can change the meaning of an
equation, stylesheet should be written in a way such that changes to document-wide typographic styles do not affect embedded MathML expressions. By using the MathML mathematics style attributes as
selectors for CSS rules, this danger is minimized.
Another pitfall to be avoided is using CSS to provide typographic style information necessary to the proper understanding of an expression. Expressions dependent on CSS for meaning will not be
portable to non-CSS environments such as computer algebra systems. By using the logical values of the new MathML 3.0 mathematics style attributes as selectors for CSS rules, it can be assured that
style information necessary to the sense of an expression is encoded directly in the MathML.
MathML 3.0 does not specify how a user agent should process style information, because there are many non-CSS MathML environments, and because different users agents and renderers have widely varying
degrees of access to CSS information. In general, however, developers are urged to provide as much CSS support for MathML as possible.
|
{"url":"http://www.w3.org/TR/2008/WD-MathML3-20081117/chapter7.html","timestamp":"2014-04-17T13:16:08Z","content_type":null,"content_length":"58434","record_id":"<urn:uuid:ca8be6dc-ae29-46a3-93d7-96d78c1144f5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Self-Similarity ( Read ) | Geometry
What if you were given an object, like a triangle or a snowflake, in which a part of it could be enlarged (or shrunk) to look like the whole object? What would each successive iteration of that
object look like? After completing this Concept, you'll be able to use the idea of self-similarity to answer questions like this one.
Watch This
CK-12 Foundation: Self-Similarity
When one part of an object can be enlarged (or shrunk) to look like the whole object it is self-similar.
To explore self-similarity, we will go through some examples. Typically, each step of a process is called an iteration. The first level is called Stage 0.
Example A (Sierpinski Triangle)
The Sierpinski triangle iterates a triangle by connecting the midpoints of the sides and shading the central triangle (Stage 1). Repeat this process for the unshaded triangles in Stage 1 to get Stage
Example B (Fractals)
Like the Sierpinski triangle, a fractal is another self-similar object that is repeated at smaller scales. Below are the first three stages of the Koch snowflake.
Example C (The Cantor Set)
The Cantor set is another example of a fractal. It consists of dividing a segment into thirds and then erasing the middle third.
CK-12 Foundation: Self-Similarity
Guided Practice
1. Determine the number of edges and the perimeter of each snowflake shown in Example B. Assume that the length of one side of the original (stage 0) equilateral triangle is 1.
2. Determine the number of shaded and unshaded triangles in each stage of the Sierpinkski triangle. Determine if there is a pattern.
3. Determine the number of segments in each stage of the Cantor Set. Is there a pattern?
Stage 0 Stage 1 Stage 2
Number of Edges 3 12 48
Edge Length 1 $\frac{1}{3}$ $\frac{1}{9}$
Perimeter 3 4 $\frac{48}{9} = \frac{16}{3}$
Stage 0 Stage 1 Stage 2 Stage 3
Unshaded 1 3 9 27
Shaded 0 1 4 13
The number of unshaded triangles seems to be powers of $3: 3^0, 3^1, 3^2, 3^3, \ldots$
3. Starting from Stage 0, the number of segments is $1, 2, 4, 8, 16, \ldots$, i.e. powers of two: $2^0, 2^1, 2^2, \ldots$
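To double-check the counts above (and to fill in the practice tables below), here is a short Python sketch; it simply encodes the stage-by-stage rules derived in the answers, and the function names are invented for this example.

def koch_stage(n):
    edges = 3 * 4 ** n            # every edge is replaced by 4 shorter edges
    edge_length = (1 / 3) ** n    # each new edge is one third as long
    return edges, edge_length, edges * edge_length   # perimeter = edges * length

def cantor_segments(n):
    return 2 ** n                 # each remaining segment splits into two

for stage in range(4):
    print(stage, koch_stage(stage), cantor_segments(stage))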
1. Draw Stage 4 of the Cantor set.
2. Use the Cantor Set to fill in the table below.
Number of Segments Length of each Segment Total Length of the Segments
Stage 0 1 1 1
Stage 1 2 $\frac{1}{3}$ $\frac{2}{3}$
Stage 2 4 $\frac{1}{9}$ $\frac{4}{9}$
Stage 3
Stage 4
Stage 5
3. How many segments are in Stage $n$
4. Draw Stage 3 of the Koch snowflake.
5. A variation on the Sierpinski triangle is the Sierpinski carpet, which splits a square into 9 equal squares, coloring the middle one only. Then, split the uncolored squares to get the next stage.
Draw the first 3 stages of this fractal.
6. How many colored vs. uncolored square are in each stage?
7. Fractals are very common in nature. For example, a fern leaf is a fractal. As the leaves get closer to the end, they get smaller and smaller. Find three other examples of fractals in nature.
|
{"url":"http://www.ck12.org/geometry/Self-Similarity/lesson/Self-Similarity/","timestamp":"2014-04-20T13:23:26Z","content_type":null,"content_length":"107639","record_id":"<urn:uuid:b9ddb9e6-61ca-4960-892f-7ffe98c7ca81>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistical Formulas For Programmers
By Evan Miller
DRAFT: May 19, 2013
Being able to apply statistics is like having a secret superpower.
Where most people see averages, you see confidence intervals.
When someone says “7 is greater than 5,” you declare that they're really the same.
In a cacophony of noise, you hear a cry for help.
Unfortunately, not enough programmers have this superpower. That's a shame, because the application of statistics can almost always enhance the display and interpretation of data.
As my modest contribution to developer-kind, I've collected together the statistical formulas that I find to be most useful; this page presents them all in one place, a sort of statistical
cheat-sheet for the practicing programmer.
Most of these formulas can be found in Wikipedia, but others are buried in journal articles or in professors' web pages. They are all classical (not Bayesian), and to motivate them I have added
concise commentary. I've also added links and references, so that even if you're unfamiliar with the underlying concepts, you can go out and learn more. Wearing a red cape is optional.
Send suggestions and corrections to emmiller@gmail.com
1. Formulas For Reporting Averages
One of the first programming lessons in any language is to compute an average. But rarely does anyone stop to ask: what does the average actually tell us about the underlying data?
1.1 Corrected Standard Deviation
The standard deviation is a single number that reflects how spread out the data actually is. It should be reported alongside the average (unless the user will be confused).
\[ s = \sqrt{\frac{1}{N-1} \sum_{i=1}^N(x_i - \bar{x})^2} \]
• \(N\) is the number of observations
• \(x_i\) is the value of the \(i\)^th observation
• \(\bar{x}\) is the average value of \(x_i\)
Reference: Standard deviation (Wikipedia)
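A minimal Python sketch of this formula (the function name is ours; Python's standard statistics.stdev computes the same quantity):

import math

def corrected_std(xs):
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

print(corrected_std([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))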
1.2 Standard Error of the Mean
From a statistical point of view, the "average" is really just an estimate of an underlying population mean. That estimate has uncertainty that is summarized by the standard error.
\[ SE = \frac{s}{\sqrt{N}} \]
Reference: Standard error (Wikipedia)
1.3 Confidence Interval Around the Mean
A confidence interval reflects the set of statistical hypotheses that won't be rejected at a given significance level. So the confidence interval around the mean reflects all possible values of the
mean that can't be rejected by the data. It is a multiple of the standard error added to and subtracted from the mean.
\[ CI = \bar{x} \pm t_{\alpha/2} SE \]
• \(\alpha\) is the significance level, typically 5% (one minus the confidence level)
• \(t_{\alpha/2}\) is the \(1-\alpha/2\) quantile of a t-distribution with \(N-1\) degrees of freedom
Reference: Confidence interval (Wikipedia)
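A sketch of the standard error and the confidence interval around the mean, assuming SciPy is available for the t quantile (statistics.stdev is the corrected standard deviation from 1.1):

import math
import statistics
from scipy.stats import t as t_dist

def mean_confidence_interval(xs, alpha=0.05):
    n = len(xs)
    mean = sum(xs) / n
    se = statistics.stdev(xs) / math.sqrt(n)          # standard error (1.2)
    t_crit = t_dist.ppf(1 - alpha / 2, n - 1)         # t quantile, N-1 degrees of freedom
    return mean - t_crit * se, mean + t_crit * se     # CI around the mean (1.3)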
1.4 Two-Sample T-Test
A two-sample t-test can tell whether two groups of observations differ in their mean.
The test statistic is given by:
\[ t = \frac{\bar{x_1} - \bar{x_2}}{\sqrt{s^2_1/n_1 + s^2_2/n_2}} \]
The hypothesis of equal means is rejected if \(|t|\) exceeds the \((1-\alpha/2)\) quantile of a t distribution with degrees of freedom equal to:
\[ {\rm df} = \frac{(s_1^2/n_1+s_2^2/n_2)^2}{(s_1^2/n_1)^2/(n_1-1)+(s_2^2/n_2)^2/(n_2-1)} \]
You can see a demonstration of these concepts in Evan's Awesome Two-Sample T-Test.
Reference: Student's t-test (Wikipedia)
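A sketch of the Welch-style test above in Python; scipy.stats.ttest_ind(x1, x2, equal_var=False) should give the same statistic, so this version mainly shows how the formulas fit together.

import math
from scipy.stats import t as t_dist

def welch_t_test(x1, x2):
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)    # sample variances
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    p = 2 * t_dist.sf(abs(t), df)                     # two-sided p-value
    return t, df, p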
2. Formulas For Reporting Proportions
It's common to report the relative proportions of binary outcomes or categorical data, but in general these are meaningless without confidence intervals and tests of independence.
2.1 Confidence Interval of a Bernoulli Parameter
A Bernoulli parameter is the proportion underlying a binary-outcome event (for example, the percent of the time a coin comes up heads). The confidence interval is given by:
\[ CI = \left(p + \frac{z^2_{\alpha/2}}{2N} \pm z_{\alpha/2} \sqrt{[p(1-p) + z^2_{\alpha/2}/4N]/N}\right)/(1+z^2_{\alpha/2}/N) \]
• \(p\) is the observed proportion of interest
• \(z_{\alpha/2}\) is the \((1-\alpha/2)\) quantile of a normal distribution
This formula can also be used as a sorting criterion.
Reference: Binomial proportion confidence interval (Wikipedia)
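This is the Wilson score interval; a possible Python version (function name ours, with the normal quantile taken from SciPy):

import math
from scipy.stats import norm

def wilson_interval(successes, n, alpha=0.05):
    p = successes / n
    z = norm.ppf(1 - alpha / 2)
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    denom = 1 + z * z / n
    return (centre - spread) / denom, (centre + spread) / denom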
2.2 Multinomial Confidence Intervals
If you have more than two categories, a multinomial confidence interval supplies upper and lower confidence limits on all of the category proportions at once. The formula is nearly identical to the
preceding one.
\[ CI = \left(p_j + \frac{z^2_{\alpha/2}}{2N} \pm z_{\alpha/2} \sqrt{[p_j(1-p_j) + z^2_{\alpha/2}/4N]/N}\right)/(1+z^2_{\alpha/2}/N) \]
• \(p_j\) is the observed proportion of the \(j\)th category
Reference: Confidence Intervals for Multinomial Proportions
2.3 Chi-Squared Test
Pearson's chi-squared test can detect whether the distribution of row counts seem to differ across columns (or vice versa). It is useful when comparing two or more sets of category proportions.
The test statistic, called \(X^2\), is computed as:
\[ X^2 = \sum_{i=1}^{n}\sum_{j=1}^m \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}} \]
• \(n\) is the number of rows
• \(m\) is the number of columns
• \(O_{i,j}\) is the observed count in row \(i\) and column \(j\)
• \(E_{i,j}\) is the expected count in row \(i\) and column \(j\)
The expected count is given by:
\[ E_{i,j} = \frac{\sum_{k=1}^nO_{k,j} \sum_{l=1}^mO_{i,l} }{N} \]
• \(N\) is the sum of all the cells, i.e., the total number of observations
A statistical dependence exists if \(X^2\) is greater than the (\(1-\alpha\)) quantile of a \(\chi^2\) distribution with \((m-1)\times(n-1)\) degrees of freedom.
You can see a 2x2 demonstration of these concepts in Evan's Awesome Chi-Squared Test.
Reference: Pearson's chi-squared test (Wikipedia)
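A sketch of the test for an n-by-m table of counts, written directly from the formulas above (scipy.stats.chi2_contingency offers a ready-made alternative, though it applies a continuity correction to 2x2 tables by default):

import numpy as np
from scipy.stats import chi2

def chi_squared_test(table, alpha=0.05):
    O = np.asarray(table, dtype=float)
    N = O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / N     # expected counts
    x2 = ((O - E) ** 2 / E).sum()
    dof = (O.shape[0] - 1) * (O.shape[1] - 1)
    p_value = chi2.sf(x2, dof)
    return x2, p_value, x2 > chi2.ppf(1 - alpha, dof)  # True means dependence detected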
3. Formulas For Reporting Count Data
If the incoming events are independent, their counts are well-described by a Poisson distribution. A Poisson distribution takes a parameter \(\lambda\), which is the distribution's mean — that is,
the average arrival rate of events per unit time.
3.1. Standard Deviation of a Poisson Distribution
The standard deviation of Poisson data usually doesn't need to be explicitly calculated. Instead it can be inferred from the Poisson parameter:
\[ \sigma = \sqrt{\lambda} \]
This fact can be used to read an unlabeled sales chart, for example.
Reference: Poisson distribution (Wikipedia)
3.2. Confidence Interval Around the Poisson Parameter
The confidence interval around the Poisson parameter represents the set of arrival rates that can't be rejected by the data. It can be inferred from a single data point of \(c\) events observed over
\(t\) time periods with the following formula:
\[ CI = \left(\frac{\gamma^{-1}(\alpha/2, c)}{t}, \frac{\gamma^{-1}(1-\alpha/2, c+1)}{t}\right) \]
• \(\gamma^{-1}(p, c)\) is the inverse of the lower incomplete gamma function
Reference: Confidence Intervals for the Mean of a Poisson Distribution
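A sketch in Python, reading the gamma inverse as the inverse of the regularized lower incomplete gamma function (scipy.special.gammaincinv); with that reading the interval matches the usual exact Poisson interval, and the lower bound is taken as 0 when no events were observed:

from scipy.special import gammaincinv

def poisson_rate_ci(c, t, alpha=0.05):
    lower = 0.0 if c == 0 else gammaincinv(c, alpha / 2) / t
    upper = gammaincinv(c + 1, 1 - alpha / 2) / t
    return lower, upper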
3.3. Conditional Test of Two Poisson Parameters
Please never do this:
From a statistical point of view, 5 events is indistinguishable from 7 events. Before reporting in bright red text that one count is greater than another, it's best to perform a test of the two
Poisson means.
The p-value is given by:
\[ p = 2\times\frac{c!}{t^c}\times\min\left\{ \sum_{i=0}^{c_1} \frac{t_1^i t_2^{c-i}}{i!(c-i)!}, \sum_{i=c_1}^{c} \frac{t_1^i t_2^{c-i}}{i!(c-i)!} \right\} \]
• Observation 1 consists of \(c_1\) events over \(t_1\) time periods
• Observation 2 consists of \(c_2\) events over \(t_2\) time periods
• \(c = c_1 + c_2\) and \(t = t_1 + t_2\)
You can see a demonstration of these concepts in Evan's Awesome Poisson Means Test.
Reference: A more powerful test for comparing two Poisson means (PDF)
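The double sum above can be read as the two tails of a binomial: conditional on the total count c, the first count c1 is Binomial(c, t1/t) under the null of equal rates. A Python sketch on that reading (the function name is ours; scipy.stats.binom supplies the tail probabilities):

from scipy.stats import binom

def poisson_means_test(c1, t1, c2, t2):
    c = c1 + c2
    ratio = t1 / (t1 + t2)
    lower_tail = binom.cdf(c1, c, ratio)        # sum over i = 0..c1
    upper_tail = binom.sf(c1 - 1, c, ratio)     # sum over i = c1..c
    return min(1.0, 2 * min(lower_tail, upper_tail))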
4. Formulas For Comparing Distributions
If you want to test whether groups of observations come from the same (unknown) distribution, or if a single group of observations comes from a known distribution, you'll need a Kolmogorov-Smirnov
test. A K-S test will test the entire distribution for equality, not just the distribution mean.
4.1. Comparing An Empirical Distribution to a Known Distribution
The simplest version is a one-sample K-S test, which compares a sample of \(n\) points having an observed cumulative distribution function \(F\) to a known distribution function having a c.d.f. of \
(G\). The test statistic is:
\[ D_n = \sup_x|F(x) - G(x)| \]
In plain English, \(D_n\) is the absolute value of the largest difference in the two c.d.f.s for any value of \(x\).
The critical value of \(D_n\) at significance level \(\alpha\) is given by \(K_\alpha/\sqrt{n}\), where \(K_\alpha\) is the value of \(x\) that solves:
\[ 1 - \alpha = \frac{\sqrt{2\pi}}{x}\sum_{k=1}^\infty \exp{\left(-(2k-1)^2\pi^2/(8x^2)\right)} \]
The critical value must be solved for iteratively, e.g. by Newton's method. If only the p-value is needed, it can be computed directly by solving the above for \(\alpha\).
Reference: Kolmogorov-Smirnov Test (Wikipedia)
4.2. Comparing Two Empirical Distributions
The two-sample version is similar, except the test statistic is given by:
\[ D_{n_1,n_2} = \sup_x|F_1(x) - F_2(x)| \]
Where \(F_1\) and \(F_2\) are the empirical c.d.f.s of the two samples, having \(n_1\) and \(n_2\) observations, respectively. The critical value of the test statistic is \(K_\alpha/\sqrt{n_1 n_2/
(n_1 + n_2)}\) with the same value of \(K_\alpha\) above.
Reference: Kolmogorov-Smirnov Test (Wikipedia)
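In practice both the one-sample test (4.1) and the two-sample test (4.2) are easiest to run through SciPy, which computes the D statistic and its p-value; a rough sketch on simulated data (sample names and parameters are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample_a = rng.normal(size=200)
sample_b = rng.normal(loc=0.3, size=150)

d_one, p_one = stats.kstest(sample_a, "norm")        # one-sample test vs. a standard normal
d_two, p_two = stats.ks_2samp(sample_a, sample_b)    # two-sample test
print(d_one, p_one, d_two, p_two)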
4.3. Comparing Three or More Empirical Distributions
A \(k\)-sample extension of Kolmogorov-Smirnov was described by J. Kiefer in a 1959 paper. The test statistic is:
\[ T = \sup_x \sum_{j=1}^k n_j |F_j(x) - \bar{F}(x)| \]
Where \(\bar{F}\) is the c.d.f. of the combined samples. The critical value of \(T\) is \(a^2\) where \(a\) solves:
\[ 1 - \alpha = \frac{4}{\Gamma\left(\frac{h}{2}\right)2^{h/2}a^h} \sum_{n=1}^\infty \frac{(\gamma_{(h-2)/2,n})^{h-2}\exp[-(\gamma_{(h-2)/2,n})^2/2a^2]}{[J_{h/2}(\gamma_{(h-2)/2,n})]^2} \]
• \(h=k-1\)
• \(J_{h/2}\) is a Bessel function of the first kind with order \(h/2\)
• \(\gamma_{(h-2)/2,n}\) is the \(n\)^th zero of \(J_{(h-2)/2}\)
To compute the critical value, this equation must also be solved iteratively. When \(k=2\), the equation reduces to a two-sample Kolmogorov-Smirnov test. The case of \(k=4\) can also be reduced to a
simpler form, but for other values of \(k\), the equation cannot be reduced.
Reference: K-sample analogues of the Kolmogorov-Smirnov and Cramer-v. Mises tests (JSTOR)
5. Formulas For Drawing a Trend Line
Trend lines (or best-fit lines) can be used to establish a relationship between two variables and predict future values.
5.1. Slope of a Best-Fit Line
The slope of a best-fit (least squares) line is:
\[ m = \frac{\sum_{i=1}^N(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^N(x_i - \bar{x})^2} \]
• \(\{x_1, \ldots, x_N\}\) is the independent variable with sample mean \(\bar{x}\)
• \(\{y_1, \ldots, y_N\}\) is the dependent variable with sample mean \(\bar{y}\)
5.2. Standard Error of the Slope
The standard error around the estimated slope is:
\[ SE = \frac{\sqrt{\sum_{i=1}^N(y_i - \bar{y} - m (x_i - \bar{x}))^2/(N-2)}}{\sqrt{\sum_{i=1}^N(x_i - \bar{x})^2}} \]
5.3. Confidence Interval Around the Slope
The confidence interval is constructed as:
\[ CI = m \pm t_{\alpha/2} SE \]
• \(\alpha\) is the significance level, typically 5% (one minus the confidence level)
• \(t_{\alpha/2}\) is the \(1-\alpha/2\) quantile of a t-distribution with \(N-2\) degrees of freedom
Reference: Simple linear regression (Wikipedia)
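A sketch combining 5.1-5.3 (scipy.stats.linregress reports the same slope and standard error, so this version is mainly to mirror the formulas):

import math
from scipy.stats import t as t_dist

def trend_line(xs, ys, alpha=0.05):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx   # slope (5.1)
    resid = sum((y - ybar - m * (x - xbar)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(resid / (n - 2)) / math.sqrt(sxx)                 # standard error (5.2)
    t_crit = t_dist.ppf(1 - alpha / 2, n - 2)
    return m, se, (m - t_crit * se, m + t_crit * se)                 # confidence interval (5.3)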
If you own a Mac, my desktop statistics software Wizard can help you analyze more data in less time and communicate discoveries visually without spending days struggling with pointless command
syntax. Check it out!
|
{"url":"http://www.evanmiller.org/statistical-formulas-for-programmers.html","timestamp":"2014-04-17T07:35:18Z","content_type":null,"content_length":"19105","record_id":"<urn:uuid:2458cff4-2e87-42d8-918a-a622f2576f9f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Deji Chen, Aloysius K. Mok, Tei-Wei Kuo, "Utilization Bound Revisited," IEEE Transactions on Computers, vol. 52, no. 3, pp. 351-361, March, 2003.
BibTex
@article{ 10.1109/TC.2003.1183949,
author = {Deji Chen and Aloysius K. Mok and Tei-Wei Kuo},
title = {Utilization Bound Revisited},
journal ={IEEE Transactions on Computers},
volume = {52},
number = {3},
issn = {0018-9340},
year = {2003},
pages = {351-361},
doi = {http://doi.ieeecomputersociety.org/10.1109/TC.2003.1183949},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote
TY - JOUR
JO - IEEE Transactions on Computers
TI - Utilization Bound Revisited
IS - 3
SN - 0018-9340
EPD - 351-361
A1 - Deji Chen,
A1 - Aloysius K. Mok,
A1 - Tei-Wei Kuo,
PY - 2003
KW - Preemptive fixed-priority scheduling
KW - rate-monotonic priority assignment
KW - utilization bound.
VL - 52
JA - IEEE Transactions on Computers
ER -
Abstract—Utilization bound is a well-known concept introduced in the seminal paper of Liu and Layland, which provides a simple and practical way to test the schedulability of a real-time task set.
The original utilization bound for the fixed-priority scheduler was given as a function of the number of tasks in the periodic task set. In this paper, we define the utilization bound as a function
of the information about the task set. By making use of more than just the number of tasks, better utilization bound over the Liu and Layland bound can be achieved. We investigate in particular the
bound given a set of periods for which it is still unknown if there is a polynomial algorithm for the exact bound. By investigating the relationships among the periods, we derive algorithms that
yield better bounds than the Liu and Layland bound and the harmonic chain bound. Randomly generated task sets are tested against different bound algorithms. We also give a more intuitive proof of the
harmonic chain bound and derive a computationally simpler algorithm.
[1] N.C. Audsley, A. Burns, R.I. Davis, K.W. Tindell, and A.J. Wellings, “Fixed Priority Pre-Emptive Scheduling: A Historical Perspective,” Real-Time Systems, vol. 8, pp. 173-198, 1995.
[2] N.C. Audsley, A. Burns, M. Richardson, K. Tindell, and A. Wellings, "Applying New Scheduling Theory to Static Priority Preemptive Scheduling," Software Eng. J. vol. 8, no. 5, pp. 284-292, Sept.
[3] A. Burchard, J. Liebeherr, Y. Oh, and S.H. Son, “Assigning Real-Time Tasks to Homogeneous Multiprocessor Systems,” IEEE Trans. Computers, vol. 44, no. 12, pp. 1429-1442, Dec. 1995.
[4] A. Burns, K. Tindell, and A. Wellings, “Effective Analysis for Engineering Real-Time Fixed Priority Schedulers,” IEEE Trans. Software Eng., vol. 21, no. 5, pp. 475-480, May 1995.
[5] D. Chen, “Real-Time Data Management in the Distributed Environment,” PhD thesis, Univ. of Texas at Austin, 1999.
[6] D. Chen, A.K. Mok, and T.-W. Kuo, “Utilization Bound Re-Visited,” Proc. Sixth Int'l Conf. Real-Time Computing Systems and Applications, 1999.
[7] R. Devillers and J. Goossens, “Liu and Layland's Schedulability Test Revisited,” Information Processing Letters, vol. 73, nos. 5-6, pp. 157-161, Mar. 2000.
[8] C.-C. Han, “A Better Polynomial-Time Scheduleability Test for Real-Time Multiframe Tasks,” Proc. IEEE Real-Time Systems Symp., Dec. 1998.
[9] C.-C. Han, H.y. Tyan, “A Better Polynomial-Time Schedulability Test for Real-Time Fixed-Priority Scheduling Algorithms,” Proc. IEEE Real-Time Systems Symp., pp. 36-45, Dec. 1997.
[10] M. Joseph and P. Pandya, “Finding Response Times in a Real-Time System,” The Computer J., vol. 29, no. 5, pp. 390-395, Oct. 1986.
[11] M. Klein, T. Ralya, B. Pollak, R. Obenza, and M.G. Harbour, A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems. Boston: Kluwer Academic,
[12] T.-W. Kuo and A.K. Mok, “Load Adjustment in Adaptive Real-Time Systems,” Proc. IEEE Real-Time Systems Symp., Dec. 1991.
[13] S. Lauzac, R. Melhem, and D. Mosse, “An Efficient RMS Admission Control and Its Application to Multiprocessor Scheduling,” Proc. Int'l Parallel Processing Symp., pp. 511-518, 1998.
[14] J.P. Lehoczky, “Fixed Priority Scheduling of Periodic Task Sets with Arbitrary Deadlines,” Proc. IEEE Real-Time Systems Symp., Dec. 1990.
[15] J.Y.-T. Leung and J. Whitehead, “On the Complexity of Fixed-Priority Scheduling of Periodic, Real-Time Tasks,” Performance Evaluation, vol. 2, pp. 237-250, 1982.
[16] C.L. Liu and J.W. Layland, “Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment,” J. ACM, vol. 20, no. 1, Jan. 1973.
[17] A.K. Mok and D. Chen, "A Multiframe Model for Real-Time Tasks," Proc. IEEE Real-Time System Symp., pp. 22-29,Washington DC, Dec. 1996.
[18] D.-W. Park, “A Generalized Utilization Bound Test for Fixed-Priority Real-Time Scheduling,” PhD thesis, Texas A&M Univ.
[19] D.-W. Park, S. Natarajan, A. Kanevsky, and M.J. Kim, “A Generalized Utilization Bound Test for Fixed-Priority Real-Time Scheduling,” Proc. Second Int'l Workshop Real-Time Computing Systems and
Applications, pp. 73-77, 1995.
[20] D.-T. Peng and K.G. Shin, “A New Performance Measure for Scheduling Independent Real-Time Tasks,” J. Parallel and Distributed Computing, vol. 19, pp. 11-26, 1993.
[21] O. Serlin, “Scheduling of Time Critical Processes,” Spring Joint Computer Conf., vol. 41, pp. 925-932, 1972.
[22] D. Shuzhen, X. Qiwen, and Z. Naijun, “A Formal Proof of the Rate Monotonic Scheduler,” Real-Time Computing Systems and Applications, pp. 500-503, 1998.
[23] M. Sjodin and H. Hansson, “Improved Response-Time Analysis Calculations,” Proc. IEEE Real-Time Systems Symp., pp. 36-45, Dec. 1998.
Index Terms:
Preemptive fixed-priority scheduling, rate-monotonic priority assignment, utilization bound.
Deji Chen, Aloysius K. Mok, Tei-Wei Kuo, "Utilization Bound Revisited," IEEE Transactions on Computers, vol. 52, no. 3, pp. 351-361, March 2003, doi:10.1109/TC.2003.1183949
|
{"url":"http://www.computer.org/csdl/trans/tc/2003/03/t0351-abs.html","timestamp":"2014-04-18T11:19:31Z","content_type":null,"content_length":"57109","record_id":"<urn:uuid:d257a65d-7447-45c9-8ce4-a9ad62c44806>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
314 threads found:
221. Terms in an algebraic expressions
Teaching analysis of terms in an algebraic expression
- Date of thread's last activity: 29 January 2003
222. Software simulation of dynamic systems
Dynamic systems analysis using software
- Date of thread's last activity: 3 December 2007
223. Integer chips
Using integer chips to teach the concept of integer operations.
- Date of thread's last activity: 30 March 2007
224. Discrete Mathematics Text
Suggestions for a Discrete Math book aimed at high school juniors and seniors.
- Date of thread's last activity: 4 January 2007
225. Lesson plan
Some effective lesson plans for proficiency math class
- Date of thread's last activity: 21 October 2008
226. Algebra Contests
Algebra contests for a home-schooler about to begin tenth grade.
- Date of thread's last activity: 3 December 2007
227. Textbook recommendations for honors analysis class
Selecting a text-book for honors analysis course
- Date of thread's last activity: 22 October 1998
228. Math Tricks
Tricks for remembering math concepts.
- Date of thread's last activity: 6 April 2012
229. Making math fun for children
Ideas to make learning mathematics fun for students from all grades up to the high school.
- Date of thread's last activity: 29 September 2006
230. Applications of high school mathematics curriculum
Real world applications of high school mathematics curriculum from 1st year Algebra through Pre-Calculus.
- Date of thread's last activity: 28 October 1998
|
{"url":"http://mathforum.org/t2t/browse/branch.taco?level_child=high&start_at=221","timestamp":"2014-04-20T16:27:28Z","content_type":null,"content_length":"7961","record_id":"<urn:uuid:b02c218b-7977-4bab-8b72-b15ab2e7358e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Make an Octagon
Edited by Andrew, Flickety, Samuel Wirajaya, Jupiter and 6 others
An octagon is an eight-sided shape. Have you ever wondered how to make a perfectly regular octagon? Well, here's how.
Method 1 of 2: Protractor and ruler method
1. 1
Determine the side length of the octagon you're going to draw.
2. 2
On a paper, draw a line with that length. This will be one of the sides of the octagon.
3. 3
Using a protractor and a ruler, create a line of the same length, angled at 135 degrees to the first line. This is another side of the octagon.
4. 4
Create a line of the same length, angled at 135 degrees to the newly created line. Repeat these steps until you have created a complete octagon.
Method 2 of 2: Compass and straightedge method
1. 1
Using a compass, draw a circle with its diameter. The diameter of the circle will be the longest diagonal of the octagon.
2. 2
Increase the radius setting of the compass a bit. For instance, if you set (in step 1) the radius to 2in, you probably want to add half an inch. From now on, make sure the compass setting is left unchanged.
3. 3
Place the needle tip of the compass to one of the intersections between the diameter and the circle, and trace out an arc, passing near the center of the circle.
4. 4
Repeat step 3 for another intersection. Now you have an 'eye' in the middle of the circle.
5. 5
With a ruler or a straightedge, create a line that passes through the corners of the eye. This line must be long enough to intersect the circle twice. This line is perpendicular to the diameter.
6. 6
Repeat step 3 for the intersections between the line and the circle. Now you have two overlapping 'eyes', with 4 intersections between them.
7. 7
Using a straightedge, create a cross that passes through four intersections of our two 'overlapping eyes'. The lines should be long enough to intersect the circle.
8. 8
Now there are eight intersections between lines (not including arcs) and the circle. These intersections are corners of a regular octagon. Connect them to complete an octagon.
9. 9
Erase the circle, lines, and arcs we have drawn, leaving the octagon alone.
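If you want to double-check a drawing numerically, the following small Python sketch (not part of the original article; the function name is invented here) lists the eight corner coordinates of a regular octagon with a chosen side length, using the fact that the side equals 2R·sin(22.5°) for circumradius R.

import math

def regular_octagon(side):
    r = side / (2 * math.sin(math.pi / 8))   # circumradius from the side length
    return [(r * math.cos(k * math.pi / 4), r * math.sin(k * math.pi / 4))
            for k in range(8)]

for x, y in regular_octagon(2.0):
    print(round(x, 3), round(y, 3))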
• It is easier to fold the paper or material and make one from a square to get more even edges.
• Be precise if you want to draw a perfectly regular octagon.
• Do not cut yourself with scissors, or poke yourself with a compass. This hurts.
|
{"url":"http://www.wikihow.com/Make-an-Octagon","timestamp":"2014-04-16T05:05:23Z","content_type":null,"content_length":"71358","record_id":"<urn:uuid:c1a4d3fe-ff54-41fc-8ecf-d3bbdde72b10>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
I figured everything out thanks though
• one year ago
unethical, can you just study instead?
okay im not in health but do you take Geometry A?
exchanging answers is against website's user guidelines, i reported you, sorry.
I am not looking to exchange answers @aaronq I simply need help understanding a few things.
then why don't you ask what you don't understand instead of inquiring on a specific unit/lesson?
because it would be a waste of time to write out the few questions I have. The questions for this are really long. So, I thought it would be easier to see who else has it and go from there. I do
not copy work, and I enjoy learning; however, there are a few things that I don't understand and need help to get the answer and understand it.
you know, you don't have to write the whole question out, just the part you don't understand
thats the thing, it is the whole question. here is an example: (this is one of the questions) Question: Irene is a high school sophomore who lives with her mom and her little brother. Irene's mom
works two jobs that require her to be away seven days a week. Irene is left to care for her little brother, prepare meals, and tidy the home. Irene is a straight-A student who not only excels in
the classroom, but also excels in softball, basketball and soccer. What would be the best way for Irene to minimize her stress levels? Answer Choices: Talk about her problems with a trusted
adult. Self-talk about how tough life is right now. Take quick breaths to relieve her stress. Get a job to keep her mind off of her problems. Do you see what I mean? The questions are really long
and I can't just give the actual question without the supporting detail. I have a guess what the answer is, but I am not 100% sure, which is why I would ask someone.
i see what you mean. social sciences are ambiguous and many answers can be correct, such as the question you posted. It's not a bad idea to post the whole question and then attempt to answer it,
and ask for input, rather than just simply ask "did anyone do the exam"? Talk about her problems with a trusted adult. YES Self-talk about how tough life is right now. NO Take quick breaths to
relieve her stress. YES Get a job to keep her mind off of her problems. NO the most logical answer his the first.
I can understand where you are coming from, but that was simply my way of asking and I will not apologize for it. I am a little mad/sad/ that you reported me without even getting my imput on why
I asked it like I did. In responce to the question i posted; those were the two answers that I thought were best, but I wanted to find out was why is A better then C?
Because talking bout problems to trusted individuals minimizes stress since it brings issues out into the open and then you are able to construct a plan which will help deal with factors which
surround it in the safest manner. A lot of this stuff is really common sense. and i didn't report you, i reported the other person
Ok. Well, thanks for your input and helping me with that question. Again, I figured it was A, but it could be C. Therefore, I wanted to see another persons opinion.
|
{"url":"http://openstudy.com/updates/50f46997e4b0abb3d8706f4a","timestamp":"2014-04-21T07:52:44Z","content_type":null,"content_length":"56072","record_id":"<urn:uuid:94dfe25d-98df-49a0-b6a0-f77310c836ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quasi-Weak Cost Automata: A New Variant of Weakness
When quoting this document, please refer to the following URL:
http://drops.dagstuhl.de/opus/volltexte/2011/3351/
Kuperberg, Denis
Vanden Boom, Michael
Quasi-Weak Cost Automata: A New Variant of Weakness
Cost automata have a finite set of counters which can be manipulated on each transition but do not affect control flow. Based on the evolution of the counter values, these automata define functions
from a domain like words or trees to ℕ ∪ {∞}, modulo an equivalence relation which ignores exact values but preserves boundedness properties. These automata have been studied by
Colcombet et al. as part of a "theory of regular cost functions", an extension of the theory of regular languages which retains robust equivalences, closure properties, and decidability like the
classical theory. We extend this theory by introducing quasi-weak cost automata. Unlike traditional weak automata which have a hard-coded bound on the number of alternations between accepting and
rejecting states, quasi-weak automata bound the alternations using the counter values (which can vary across runs). We show that these automata are strictly more expressive than weak cost automata
over infinite trees. The main result is a Rabin-style characterization theorem: a function is quasi-weak definable if and only if it is definable using two dual forms of non-deterministic Büchi cost
automata. This yields a new decidability result for cost functions over infinite trees.
BibTeX - Entry
author = {Denis Kuperberg and Michael Vanden Boom},
title = {{Quasi-Weak Cost Automata: A New Variant of Weakness }},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
pages = {66--77},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-34-7},
ISSN = {1868-8969},
year = {2011},
volume = {13},
editor = {Supratik Chakraborty and Amit Kumar},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2011/3351},
URN = {urn:nbn:de:0030-drops-33517},
doi = {http://dx.doi.org/10.4230/LIPIcs.FSTTCS.2011.66},
annote = {Keywords: Automata, infinite trees, cost functions, weak}
Keywords: Automata, infinite trees, cost functions, weak
Seminar: IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)
Issue date: 2011
Date of publication: 2011
|
{"url":"http://drops.dagstuhl.de/opus/volltexte/2011/3351/","timestamp":"2014-04-21T00:35:51Z","content_type":null,"content_length":"9183","record_id":"<urn:uuid:1cbbddfc-2c92-43ab-8d42-1dc30f9b62fa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
|
0.999.../The limit of a sequence
From Wikibooks, open books for an open world
In calculus, sequences such as a[1]=1/1, a[2]=1/2, a[3]=1/3, ... are discussed. However, most mathematicians and the majority of the mathematical laity prefer the notation a[n]=1/n (n≥1) for the sequence above. Often, the limit of a sequence is discussed. A sequence a[n] is said to converge to a limit L if its terms become arbitrarily close to L and stay there. For the case of 0.9999..., the sequence would be a[n]=1-10^-n, where n is the number of 9's after the decimal point. As n→∞, the term 10^-n tends to 0, so a[n] tends to 1; in this sense the infinite decimal 0.999... equals 1.
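Written out as a formula (an editorial addition to the Wikibooks text above), the argument is:

\[
  a_n = \underbrace{0.99\ldots9}_{n\ \text{nines}} = 1 - 10^{-n},
  \qquad
  \lim_{n\to\infty} a_n = 1 - \lim_{n\to\infty} 10^{-n} = 1 - 0 = 1 .
\]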
|
{"url":"https://en.wikibooks.org/wiki/0.999.../The_limit_of_a_sequence","timestamp":"2014-04-20T06:33:06Z","content_type":null,"content_length":"23890","record_id":"<urn:uuid:18fb5f66-4ed8-4f1f-98e4-2f7977877ad7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Validity vs. soundness
From Iron Chariots Wiki
In logic there is an important distinction between validity and soundness. A logical argument or syllogism is valid if true premises always lead to a true conclusion. An argument is sound if and only
if the argument is valid and all of the premises are true. Thus validity refers to the structure or form of the argument and not to its contents, while soundness considers the structure and content.
Consider this logical syllogism:
P1: All G are S
P2: All S are D
C1: Therefore, all G are D
(The form is valid.) This particular syllogistic form is named "Barbara": if P1 and P2 are both true, C1 must be true. One might be tempted to test this with a propositional truth table, treating the three statements as independent variables (1: "All G are S", 2: "All S are D", 3: "All G are D") and listing all 2^3 = 8 combinations of true and false; the row in which both premises are true and the conclusion is false would then look like a counterexample. But the three statements are not independent: any interpretation of the terms G, S and D that makes "All G are S" and "All S are D" true also makes "All G are D" true, so that row can never actually occur. The truth-table objection therefore fails, and the Barbara form is valid. If we insert some "common knowledge" content into the argument, we can demonstrate an argument which is both valid and sound:
P1: I (G) am a man (S)
P2: All men (S) are mortal (D)
C1: Therefore, I (G) am mortal (D)
What happens when the premises are untrue? Consider the following example:
P1: All toothpicks (G) are made of metal (S)
P2: All metal objects (S) are toasters (D)
C1: Therefore, all toothpicks (G) are toasters (D)
We can prove that P1 and P2 are false by finding either a toothpick which isn't made of metal, or a metal object that isn't a toaster. In this particular case, P1 and P2 are not only false, they
directly contradict each other (if all metal objects are toasters, clearly toothpicks can't be made of metal) and no external verification is required - the argument is valid, but the conclusion is
Let's look at an example where only one of the premises is untrue:
P1: All mammals (G) have backbones (S)
P2: All creatures with backbones(S) have scales (D)
C1: Therefore, all mammals(G) have scales (D)
In this example, P1 is true, but P2 is not. This one false premise renders the argument unsound. Let's modify this latest argument just a bit to demonstrate an important point:
P1: All mammals (G) have backbones (S)
P2: All creatures with backbones(S) have three bones in each ear (D)
C1: Therefore, all mammals(G) have three bones in each ear (D)
P1 is still true and P2 is still false (there are vertebrates with only one bone, the stapes, in each ear) however, the conclusion (C1) in this example happens to be true. If an argument is unsound,
the conclusion may be either true or false - there's simply no way to tell from the argument alone. This issue is seen in many common logical fallacies and can be confusing to those who aren't
skilled in assessing logical arguments.
It's possible to reach the correct conclusion by accident, but in order to actually demonstrate that the conclusion is true, the argument must be both valid and sound.
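To make the point about validity concrete, here is a small Python sketch (an editorial addition, not part of the wiki article) that checks the Barbara form semantically. It enumerates every way of interpreting the terms G, S and D as subsets of a small universe and confirms that no interpretation makes both premises true and the conclusion false.

from itertools import product

UNIVERSE = list(range(3))  # a tiny universe is enough to illustrate the idea

def subsets(universe):
    # Every subset of the universe, generated from membership bit patterns.
    for bits in product([False, True], repeat=len(universe)):
        yield frozenset(x for x, keep in zip(universe, bits) if keep)

def all_are(a, b):
    # Interpretation of the categorical statement "All A are B".
    return a <= b  # subset relation

counterexamples = 0
for g, s, d in product(list(subsets(UNIVERSE)), repeat=3):
    if all_are(g, s) and all_are(s, d) and not all_are(g, d):
        counterexamples += 1

print("Counterexamples to Barbara:", counterexamples)  # 0 -- the form is valid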
|
{"url":"http://wiki.ironchariots.org/index.php?title=Validity_vs._soundness&oldid=14060","timestamp":"2014-04-21T15:39:58Z","content_type":null,"content_length":"18748","record_id":"<urn:uuid:01f127a4-ac0b-4cd4-a6b3-9296d612c67a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
\(\LaTeX\) practice - no questions being asked.
\(\begin{array}{r,c,l} ax^2 + bx + c & =& 0 \\ \end{array}\)
\[\begin{array}{r,c,l} 2x+3 & = & 4x + 7 \\ -3 & = &-3\\ 2x & = &4x+4\\ -4x & = & -4x\\ -2x&=&4\\ x&=&-2 \end{array}\]
\[\begin{array}{c,c,c,c,c,c,c,c,c,c} &2x&+&3 & = && 4x& +& 7 \\ &&-&3 & =&&& -&3\\ &&&2x & = &&4x&+&4\\ &&-&4x & = & -&4x\\ &&-&2x&=&&&&4\\ &&&x&=&&&-&2 \end{array}\] Now, if I can figure out how
to reduce the spacing between columns...
there are some codes for this, and you cold post another question on that (since i don't know those; you must remeber imma newb)
I'm going to look it up on LaTeXWiki. Thanks for trying. :-)
This is much easier than you are making it. Dont use a table just space them apart. \(2x+3=4x+7\) \(~~~~-3=~~~~-3\) \(~~~~~~2x=4x+4\) \(~~-4x=-4x\) \(~~~~~~2x=4\) \(~~~~~~~~x=-2\)
Your right, there is an easier way to do this example, but I still want to know how to reduce the space between the columns. My point is to learn \(\LaTeX\) and which of the many \(\LaTeX\)
commands are available on this forum. Since different commands are available in different forums, I'm making a \(\Huge{huge}\) spreadsheet to list all the commands I have learned, what packages
they are in and what forums they can be use in. I will also include equivalent HTML commands for those that have them.
@SnuggieLad spacing does not always come out cleanly. The longer some things are, the harder it gets to align them with spaces, but an array will do it nicely. However, for something as simple as
that one, the first array is not that bad to make. The second is overkill for that and I would save it for things like this: \(\begin{array}{rrrrc} 12x_1 & +5x_2 & & -x_4 & = 7\\ & x_2 & +3x_3 &
&= 4\\ -3x_1 & -3x_2& &+2x_4 &= 2 \end{array}\) And its cousin: \(\left[ \begin{array}{cccc|c} 12 & 5 &0 & -1 & 7\\ 0 & 1 & 3 & 0&4\\ -3 & -3&0 &2 & 2 \end{array} \right]\) It also helps if you
do the edits in a tool like this: http://www.codecogs.com/eqneditor Which can do a lot of the grunt work, leaving you to just put in the numbers.
As for reducing: \(\begin{array}{c,c,c,c,c,c,c,c,c,c} &2x&+&3 & = && 4x& +& 7 \\ &&-&3 & =&&& -&3\\ &&&2x & = &&4x&+&4\\ &&-&4x & = & -&4x\\ &&-&2x&=&&&&4\\ &&&x&=&&&-&2 \end{array}\) Simple.
Don't make a separate column for signs and leep things to the right. \(\begin{array}{rrcrr} 2x & +3 & = & 4x & +7 \\ & -3 & = & & -3\\ & 2x & = & 4x &+4\\ & -4x & = & -4x &\\ & -2x & =& &4 \\ & x
& = & & -2 \end{array}\)
And matrix works well too.
As for LaTeXWiki, please realize this is MathJax and not \(\LaTeX\). Sure, it uses the \(\TeX\) markups, but it is the poor cousin who can't do as much.
@e.mccormick - that is exactly why I am creating my own spreadsheet. I work with multiple versions of \(\LaTeX\) so I need to know which markups will work with each platform. Keeping the signs
with the terms is a great idea(thanks), but I really want to condense the columns as well. I know it can be done in \(\LaTeX\) (maybe not on here) but I haven't found the appropriate markups.
@gypsy1274 It can not be done. There are not markups for everything like that that. Unless you over lap stuff. Sometimes we want to think of LaTeX as a code as in depth as HTML or JAVASCRIPT but
its not. It is a simple code used for little odd jobs that other codes make harder than they should be. If you need help with anything email me at OpenStudy.Intern.SnuggieLad@gmail.com If your
into tables and stuff listen to E.mccormic he is right. Otherwise just space it out if you want a sleek clean look.
It is apparent you have worked with LaTeX elsewhere but it is not a full coding system. It is a simple type of code. Lastly, we do not have the full version. We can not support it as there are
multiple columns on this site that do not create full pages so stuff would run over. Its not magic, you can not do everything. It is not a REAL code like HTML.
\[Z = \frac{x-\mu}{\sigma}\]
\(\begin{array}{ccc} 1&2&3\\4&5&6 \end{array}\) \(\setlength{\arraycolsep}{3pt} \begin{array}{ccc} 1&2&3\\4&5&6 \end{array}\) Seems the standard method does not work here. If you really needed
that for an extensive thing, you could do it on codecogs and link the URL: http://latex.codecogs.com/gif.latex?
%5Cbg_white%20%5Csetlength%7B%5Carraycolsep%7D%7B3pt%7D%20%5Cbegin%7Barray%7D%7Bccc%7D%201%262%263%5C%5C4%265%266%20%5Cend%7Barray%7D Or, you can use one of the online \(\LaTeX\) document
creators that lets you link documents. Or, use any of the toops to make an image or PDF and attach that. This really comes in handy for things where an accuate diagram is useful. See attached
@e.mccormick Did you create that with \(\LaTeX\)? If so, I would really like to see your source document. I could learn a lot from that. And thanks for the \setlength, even if it doesn't work
here it will work in other places. @SnuggieLad You hit on exactly what I am trying to figure out, what pieces of \(\LaTeX\) will work here, what will work on each of the other platforms I use,
and what things are just not possible. Thanks for all your helpful information. And please realize, that I ask questions to find out if it can be done or not - I don't expect that everything that
pops into my head can be done easily. :-)
Yes, I made it. It was when I was just starting with TikZ and PGF, so it is not as clean as it could be. You can do math inside the image declarations to get things to come out properly as well.
I want to be ablee to graph on here so badly
No kidding. Or at least have the drawings created someplace else to show up as a picture and not a file to be opened separately. :-)
@e.mccormick Thanks so much. Just from a quick glance, I can already see how to do some of the things I've been trying to learn.
For sharing graphs I use: https://www.desmos.com/calculator/vgnihkr8sn Cause it does piecewise, multiple lines, points, and so on. All pretty easy.
68-95-99.7 Percent Rule \[\begin{array}{|c|c} \hline >3 & 0.15& \\ \hline 3 & 2.1 \\ \hline 2 & 13.6 \\ \hline 1 & 34.1 \\ \hline -1 & 34.1 \\ \hline -2 & 13.6 \\ \hline -3 & 2.1 \\ \hline <-3&
0.15 \\ \hline \end{array}\]
Hmmm... that table looks a little deviated... but that is standard in statistics. \(\Large \ddot \smile\)
Cute! Thanks. I needed that laugh.
\(\dfrac{x}{y} = 3\) Or \(\frac{x}{y}=3\)
There is also \tfrac to force the text size one inside a `\[ \]` code block.
And \cfrac for contunued fractions. \[\cfrac{2}{1+\cfrac{2}{1+\cfrac{2}{1+\cfrac{2}{1}}}} \]
An Identity of Ramanujan with frac: \[\frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} = 1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}} {1+\frac{e^{-8\pi}} {1+\ldots} }
} }\] And with \cfrac \[\frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} = 1+\cfrac{e^{-2\pi}} {1+\cfrac{e^{-4\pi}} {1+\cfrac{e^{-6\pi}} {1+\cfrac{e^{-8\pi}} {1+\ldots} } } }\]
Good to know. Thanks @e.mccormick. \[\Huge \text{(^◡^ )}\]
Is it possible to draw a bell curve here using \(\LaTeX\)?
That would require TikZ and PGF. See my post in the feedback section on how they could add that. =) For use elsewhere, those are what you want to look into.
For not having questions, you sure have gotten some answers!
And those answers have been much appreciated. Thanks.
np. It is all good information when someone has a use for it. Otherwise, it is just unused information. So I would rather share it and get some use!
I feel the same way. Information should be shared. :-)
\(\Huge 4x − \cancel{6} + 3x + \cancel{6}=?\) `\(\Huge 4x − \cancel{6} + 3x + \cancel{6}=?\)` It is helpful to be able to see the code others have written. I have learned new things today.
\(\not{a}\) \(\cancel{a}\) Ahh, a little different angle than \not Never looked to see if they were the same or not. Hehe.
\(\cancel{2x+4}\) \(\not{2x+4}\) Apparently the \not command only works on one character. The cancel command seems to work on more than one character. Much better for my purposes. Both have their
Oh wow... I SOLVED IT! YES!!! An answe to your firstg question! \(\begin{array}{ccc} 1& 2 & 3 \\ 4 & 5 & 6 \end{array}\\ \begin{array}{ccc} \! 1\!&\! 2\!&\!3\!\\\!4\!&\!5\!&\!6\! \end{array}\)
And yah, cancel is sweet because of the ability to cancel a compound term.
\! negates the kerning spaces. \(AB\) \(A\!B\) `\(AB\) \(A\!B\)`. Well, kill all the kerning in a table and guess what happens!
\[\begin{array}{ccccc} \! 2x \! & \! +3 \! & \! =\! & \! 4x \! & \! +7 \\ & -3 & = & & -3\\ \hline & 2x & = & 4x &+4\\ & -4x & = & -4x &\\ \hline & -2x & =& &4 \\ & x & = & & -2 \end{array}\] It
works well on your example, but doesn't seem to have much of an effect on my table. \[\begin{array}{ccc} 1& 2 & 3 \\ 4 & 5 & 6 \end{array}\\ \begin{array}{ccc} \! 1\!&\! 2\!&\!3\!\\\!4\!&\!5\!&\!
6\! \end{array}\]
\[\begin{array}{ccccc} \! 2x \! & \! +3 \! & \! =\! & \! 4x \! & \! +7 \\ & -3 & = & & -3\\ \hline & 2x & = & 4x &+4\\ & -4x & = & -4x &\\ \hline & -2x & =& &4 \\ & x & = & & -2 \end{array}\] \[\
begin{array}{ccccc} \! 2x \! & \! +3 \! & \! =\! & \! 4x \! & \! +7 \\ \! & \!-3\! & \!=\! &\! &\! -3\!\\ \hline \! & \!2x \!&\! =\! & \!4x\! &\!+4\!\\ \! & \!-4x \!& \!= \!& \!-4x\! &\!\\ \hline
\! & \!-2x \!&\! =\!&\! &4\! \\ \! &\! x\! &\! = \!&\! &\! -2\! \end{array}\]
Certainly better, but I've seen equation looking better than that. Must be a different solution. Although, that \! will be handy for other things.
Well, the column separation command is better elsewhere. So this is just a way to cheat a little with MathJax.
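As an editorial footnote to the thread: the column-spacing control that MathJax rejects above does work in a full LaTeX document. A minimal example (the equations are just the worked example already used in the thread):

\documentclass{article}
\begin{document}
% \arraycolsep controls the space between array columns; MathJax ignores it,
% but a real LaTeX run honours it.
\setlength{\arraycolsep}{2pt}
\[
\begin{array}{rrcrr}
2x & +3  & = & 4x  & +7 \\
   & -3  & = &     & -3 \\
   & 2x  & = & 4x  & +4 \\
   & -4x & = & -4x &    \\
   & -2x & = &     & 4  \\
   & x   & = &     & -2
\end{array}
\]
\end{document}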
|
{"url":"http://openstudy.com/updates/51ed5575e4b00daf471999bb","timestamp":"2014-04-18T18:34:28Z","content_type":null,"content_length":"149039","record_id":"<urn:uuid:14590e51-2d51-4f53-8e85-3f21375af56b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EconPort - The Individual's Public Good Choice Problem in the Market
The following section is intended for intermediate/upper intermediate students who are familiar with optimization techniques. It summarizes the differences between the market and Pareto efficient
outcomes in mathematical form.
The individual i chooses how much of the public good to buy on his own (g_i) to maximize his utility u^i(x, y_i) from consuming the public good x and private consumption y_i, taking the contributions of others (x_{-i}) as given. The consumer's problem can then be written as follows:
max u^i(x, y_i) = u^i(x_{-i} + g_i, y_i)   subject to the constraints   p·g_i + y_i = m_i,   g_i ≥ 0,   y_i ≥ 0,
where p denotes the price of one unit of the public good and m_i denotes the value of the i-th person's initial endowment or income.
First order conditions: MRS^i ≤ p, with MRS^i = p if g_i > 0.
Suppose a unit of the public good costs p and preferences are
u^i(x, y_i) = y_i + α_i·log x   for all i = 1, ..., n.
Then MRS^i = α_i / x.
Let A = Σ_i α_i and α* = max {α_i | i = 1, ..., n}.
Pareto Efficiency:
Σ_i MRS^i = p,
i.e., Σ_i (α_i / x) = p,
(1 / x)·A = p,
x' = A / p.
Market Outcome:
MRS^i ≤ p for every i,
i.e., α_i / x ≤ p,
i.e., x ≥ α_i / p, for all i.
Let's examine when an individual purchases a positive amount of the public good:
MRS^i = p if g_i > 0, i.e., x = α_i / p.
From this it follows that g_i = 0 for every individual with α_i < α* = max {α_i | i = 1, ..., n}; only the individual who values the public good most contributes, so x^m = α* / p, where x^m denotes the market outcome.
Note that α* << A and therefore x^m << x', meaning that the market outcome is severely inefficient.
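As a quick numerical illustration of the comparison above (an editorial addition; the taste parameters and price are made-up example values), in Python:

# Quasi-linear example: u_i(x, y_i) = y_i + alpha_i * log(x)
# Efficient provision:        x' = (sum of alpha_i) / p
# Voluntary (market) outcome: x^m = (max alpha_i) / p
alphas = [2.0, 3.0, 5.0]   # illustrative taste parameters alpha_i
p = 1.0                    # price of one unit of the public good

x_eff = sum(alphas) / p
x_market = max(alphas) / p

print("Efficient provision x' =", x_eff)       # 10.0
print("Market provision  x^m =", x_market)     # 5.0  (only the highest-alpha person contributes)
print("Underprovision factor =", x_eff / x_market)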
|
{"url":"http://www.econport.org/econport/request?page=man_pg_individualchoice","timestamp":"2014-04-19T19:35:08Z","content_type":null,"content_length":"26709","record_id":"<urn:uuid:84dc6076-46f2-4fc6-8d8b-88523ea6e5ba>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NAG Library
NAG Library Routine Document
1 Purpose
D02NMF is a reverse communication routine for integrating stiff systems of explicit ordinary differential equations.
2 Specification
SUBROUTINE D02NMF ( NEQ, LDYSAV, T, TOUT, Y, YDOT, RWORK, RTOL, ATOL, ITOL, INFORM, YSAV, SDYSAV, WKJAC, NWKJAC, JACPVT, NJCPVT, IMON, INLN, IRES, IREVCM, ITASK, ITRACE, IFAIL)
INTEGER NEQ, LDYSAV, ITOL, INFORM(23), SDYSAV, NWKJAC, JACPVT(NJCPVT), NJCPVT, IMON, INLN, IRES, IREVCM, ITASK, ITRACE, IFAIL
REAL (KIND=nag_wp) T, TOUT, Y(NEQ), YDOT(NEQ), RWORK(50+4*NEQ), RTOL(*), ATOL(*), YSAV(LDYSAV,SDYSAV), WKJAC(NWKJAC)
3 Description
D02NMF is a general purpose routine for integrating the initial value problem for a stiff system of explicit ordinary differential equations, ${y}^{\prime }=g\left(t,y\right)$.
An outline of a typical calling program is given below:
! Declarations
call linear algebra setup routine
call integrator setup routine
1000 CALL D02NMF(NEQ, LDYSAV, T, TOUT, Y, YDOT, RWORK, RTOL, &
ATOL, ITOL, INFORM, YSAVE, SDYSAV, WKJAC, NWKJAC, &
JACPVT, NJCPVT, IMON, INLN, IRES, IREVCM, ITASK, &
ITRACE, IFAIL)
IF (IREVCM.GT.0) THEN
IF (IREVCM.EQ.8) THEN
supply the Jacobian matrix (i)
ELSE IF (IREVCM.EQ.9) THEN
perform monitoring tasks requested by the user (ii)
ELSE IF (IREVCM.EQ.1 .OR. IREVCM.GE.3 .AND. IREVCM.LE.5) THEN
evaluate the derivative (iii)
ELSE IF (IREVCM.EQ.10) THEN
indicates an unsuccessful step
END IF
GO TO 1000
END IF
! post processing (optional linear algebra diagnostic call
! (sparse case only), optional integrator diagnostic call)
There are three major operations that may be required of the calling (sub)program on an intermediate return (${\mathbf{IREVCM}}\ne 0$) from D02NMF; these are denoted (i), (ii) and (iii) above.
The following sections describe in greater detail exactly what is required of each of these operations.
(i) Supply the Jacobian Matrix
You need only provide this facility if the parameter
if using sparse matrix linear algebra) in a call to the linear algebra setup routine (see
). If the Jacobian matrix is to be evaluated numerically by the integrator, then the remainder of section (i) can be ignored.
We must define the system of nonlinear equations which is solved internally by the integrator. The time derivative, ${y}^{\prime }$, has the form
$y^{\prime} = \left(y-z\right)/\left(hd\right)$,
where $h$ is the current step size and $d$ is a parameter that depends on the integration method in use. The vector $y$ is the current solution and the vector $z$ depends on information from previous time steps. This means that
$\frac{d}{d{y}^{\prime }}\left(\text{ }\right)=\left(hd\right)\frac{d}{dy}\left(\text{ }\right)$.
The system of nonlinear equations that is solved has the form
$y^{\prime} - g\left(t,y\right) = 0$,
but is solved in the form
$r\left(y\right) = 0$,
where the function $r$ is defined by
$r\left(y\right) = \left(y-z\right) - \left(hd\right)\,g\left(t,y\right)$.
It is the Jacobian matrix
$\frac{\partial r}{\partial y}$
that you must supply as follows:
$\frac{\partial r_i}{\partial y_j} = 1 - hd\,\frac{\partial g_i}{\partial y_j}$ if $i=j$,
$\frac{\partial r_i}{\partial y_j} = -hd\,\frac{\partial g_i}{\partial y_j}$ otherwise,
are located in
respectively and the array
contains the current values of the dependent variables. Only the nonzero elements of the Jacobian need be set, since the locations where it is to be stored are preset to zero.
Hereafter in this document this operation will be referred to as JAC.
(ii) Perform Tasks Requested by You
This operation is essentially a monitoring function and additionally provides the opportunity of changing the current values of
, HNEXT (the step size that the integrator proposes to take on the next step), HMIN (the minimum step size to be taken on the next step), and HMAX (the maximum step size to be taken on the next
step). The scaled local error at the end of a timestep may be obtained by calling real function
as follows:
IFAIL = 1
ERRLOC = D02ZAF(NEQ,RWORK(51+NEQ),RWORK(51),IFAIL)
! CHECK IFAIL BEFORE PROCEEDING
The following gives details of the location within the array
of variables that may be of interest to you:
Variable Specification Location
TCURR the current value of the independent variable ${\mathbf{RWORK}}\left(19\right)$
HLAST last step size successfully used by the integrator ${\mathbf{RWORK}}\left(15\right)$
HNEXT step size that the integrator proposes to take on the next step ${\mathbf{RWORK}}\left(16\right)$
HMIN minimum step size to be taken on the next step ${\mathbf{RWORK}}\left(17\right)$
HMAX maximum step size to be taken on the next step ${\mathbf{RWORK}}\left(18\right)$
NQU the order of the integrator used on the last step ${\mathbf{RWORK}}\left(10\right)$
You are advised to consult the description of
for details on what optional input can be made.
is changed, then
must be set to
before return to D02NMF. If either of the values of HMIN or HMAX are changed, then
must be set
$\text{}\ge 3$
before return to D02NMF. If HNEXT is changed, then
must be set to
before return to D02NMF.
In addition you can force D02NMF to evaluate the residual vector
by setting
and then returning to D02NMF; on return to this monitoring operation the residual vector will be stored in
, for
$\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$
Hereafter in this document this operation will be referred to as MONITR.
(iii) Evaluate the Derivative
This operation must evaluate the derivative vector for the explicit ordinary differential equation system defined by
is located in
Hereafter in this document this operation will be referred to as FCN.
4 References
5 Parameters
this routine uses
reverse communication.
Its use involves an initial entry, intermediate exits and re-entries, and a final exit, as indicated by the parameter
. Between intermediate exits and re-entries,
all parameters other than YDOT, RWORK, WKJAC, IMON, INLN and IRES must remain unchanged
1: NEQ – INTEGERInput
On initial entry: the number of differential equations to be solved.
Constraint: ${\mathbf{NEQ}}\ge 1$.
2: LDYSAV – INTEGERInput
On initial entry: an upper bound on the maximum number of differential equations to be solved during the integration.
Constraint: ${\mathbf{LDYSAV}}\ge {\mathbf{NEQ}}$.
3: T – REAL (KIND=nag_wp)Input/Output
On initial entry
, the value of the independent variable. The input value of
is used only on the first call as the initial point of the integration.
On final exit
: the value at which the computed solution
is returned (usually at
4: TOUT – REAL (KIND=nag_wp)Input
On initial entry
: the next value of
at which a computed solution is desired. For the initial
, the input value of
is used to determine the direction of integration. Integration is permitted in either direction (see also
Constraint: ${\mathbf{TOUT}}\ne {\mathbf{T}}$.
5: Y(NEQ) – REAL (KIND=nag_wp) arrayInput/Output
On initial entry
: the values of the dependent variables (solution). On the first call the first
elements of
must contain the vector of initial values.
On final exit
: the computed solution vector evaluated at
6: YDOT(NEQ) – REAL (KIND=nag_wp) arrayInput/Output
On intermediate re-entry
: must be set to the derivatives as defined under the description of
On final exit: the time derivatives ${y}^{\prime }$ of the vector $y$ at the last integration point.
7: RWORK($50+4×{\mathbf{NEQ}}$) – REAL (KIND=nag_wp) arrayCommunication Array
On initial entry
: must be the same array as used by one of the method setup routines
, and by one of the storage setup routines
. The contents of
must not be changed between any call to a setup routine and the first call to D02NMF.
On intermediate re-entry
: elements of
must be set to quantities as defined under the description of
On intermediate exit
: contains information for JAC, FCN and MONITR operations as described in
Section 3
and the parameter
8: RTOL($*$) – REAL (KIND=nag_wp) arrayInput
the dimension of the array
must be at least
, and at least
On initial entry: the relative local error tolerance.
${\mathbf{RTOL}}\left(i\right)\ge 0.0$
for all relevant
9: ATOL($*$) – REAL (KIND=nag_wp) arrayInput
the dimension of the array
must be at least
, and at least
On initial entry: the absolute local error tolerance.
${\mathbf{ATOL}}\left(i\right)\ge 0.0$
for all relevant
10: ITOL – INTEGERInput
On initial entry
: a value to indicate the form of the local error test.
indicates to D02NMF whether to interpret either or both of
as a vector or a scalar. The error test to be satisfied is
, where
is defined as follows:
ITOL RTOL ATOL ${w}_{i}$
1 scalar scalar ${\mathbf{RTOL}}\left(1\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(1\right)$
2 scalar vector ${\mathbf{RTOL}}\left(1\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(i\right)$
3 vector scalar ${\mathbf{RTOL}}\left(i\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(1\right)$
4 vector vector ${\mathbf{RTOL}}\left(i\right)×\left|{y}_{i}\right|+{\mathbf{ATOL}}\left(i\right)$
${e}_{i}$ is an estimate of the local error in ${y}_{i}$, computed internally, and the choice of norm to be used is defined by a previous call to an integrator setup routine.
Constraint: ${\mathbf{ITOL}}=1$, $2$, $3$ or $4$.
11: INFORM($23$) – INTEGER arrayCommunication Array
12: YSAV(LDYSAV,SDYSAV) – REAL (KIND=nag_wp) arrayCommunication Array
13: SDYSAV – INTEGERInput
On initial entry
: the second dimension of the array
as declared in the (sub)program from which D02NMF is called. An appropriate value for
is described in the specifications of the integrator setup routines
. This value must be the same as that supplied to the integrator setup routine.
14: WKJAC(NWKJAC) – REAL (KIND=nag_wp) arrayInput/Output
On intermediate re-entry
: elements of the Jacobian as defined under the description of
. If a numerical Jacobian was requested then
is used for workspace.
On intermediate exit: the Jacobian is overwritten.
15: NWKJAC – INTEGERInput
On initial entry
: the dimension of the array
as declared in the (sub)program from which D02NMF is called. The actual size depends on the linear algebra method used. An appropriate value for
is described in the specifications of the linear algebra setup routines
for full, banded and sparse matrix linear algebra respectively. This value must be the same as that supplied to the linear algebra setup routine.
16: JACPVT(NJCPVT) – INTEGER arrayCommunication Array
17: NJCPVT – INTEGERInput
On initial entry
: the dimension of the array
as declared in the (sub)program from which D02NMF is called. The actual size depends on the linear algebra method used. An appropriate value for
is described in the specifications of the linear algebra setup routines
for banded and sparse matrix linear algebra respectively. This value must be the same as that supplied to the linear algebra setup routine. When full matrix linear algebra is chosen, the array
is not used and hence
should be set to
18: IMON – INTEGERInput/Output
On intermediate exit
: used to pass information between D02NMF and the MONITR operation (see
Section 3
). With
contains a flag indicating under what circumstances the return from D02NMF occurred:
Exit from D02NMF after ${\mathbf{IRES}}=4$ caused an early termination (this facility could be used to locate discontinuities).
The current step failed repeatedly.
Exit from D02NMF after a call to the internal nonlinear equation solver.
The current step was successful.
On intermediate re-entry
: may be reset to determine subsequent action in D02NMF.
Integration is to be halted. A return will be made from D02NMF to the calling (sub)program with ${\mathbf{IFAIL}}={\mathbf{12}}$.
Allow D02NMF to continue with its own internal strategy. The integrator will try up to three restarts unless ${\mathbf{IMON}}\ne -1$.
Return to the internal nonlinear equation solver, where the action taken is determined by the value of INLN.
Normal exit to D02NMF to continue integration.
Restart the integration at the current time point. The integrator will restart from order $1$ when this option is used. The solution Y, provided by the MONITR operation (see Section 3), will
be used for the initial conditions.
Try to continue with the same step size and order as was to be used before entering the MONITR operation (see Section 3). HMIN and HMAX may be altered if desired.
Continue the integration but using a new value of HNEXT and possibly new values of HMIN and HMAX.
19: INLN – INTEGERInput/Output
On intermediate re-entry
: with
specifies the action to be taken by the internal nonlinear equation solver. By setting
and returning to D02NMF, the residual vector is evaluated and placed in
, for
$\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$
and then the MONITR operation (see
Section 3
) is invoked again. At present this is the only option available:
must not be set to any other value.
On intermediate exit: contains a flag indicating the action to be taken, if any, by the internal nonlinear equation solver.
20: IRES – INTEGERInput/Output
On intermediate exit
: with
contains the value
On intermediate re-entry
: should be unchanged unless one of the following actions is required of D02NMF in which case
should be set accordingly.
Indicates to D02NMF that control should be passed back immediately to the calling (sub)program with the error indicator set to ${\mathbf{IFAIL}}={\mathbf{11}}$.
Indicates to D02NMF that an error condition has occurred in the solution vector, its time derivative or in the value of $t$. The integrator will use a smaller time step to try to avoid this
condition. If this is not possible D02NMF returns to the calling (sub)program with the error indicator set to ${\mathbf{IFAIL}}={\mathbf{7}}$.
Indicates to D02NMF to stop its current operation and to enter the MONITR operation (see Section 3) immediately.
21: IREVCM – INTEGERInput/Output
On initial entry: must contain $0$.
On intermediate re-entry: should remain unchanged.
On intermediate exit
: indicates what action you must take before re-entering. The possible exit values of
, which should be interpreted as follows:
${\mathbf{IREVCM}}=1$, $3$, $4$ and $5$
Indicates that an FCN operation (see Section 3) is required: ${y}^{\prime }=g\left(t,y\right)$ must be supplied, where ${y}_{\mathit{i}}$ is located in ${\mathbf{Y}}\left(\mathit{i}\right)$,
for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
For ${\mathbf{IREVCM}}=1$ or $3$, ${y}_{\mathit{i}}^{\prime }$ should be placed in location ${\mathbf{RWORK}}\left(50+2×{\mathbf{NEQ}}+\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf
For ${\mathbf{IREVCM}}=4$, ${y}_{\mathit{i}}^{\prime }$ should be placed in location ${\mathbf{RWORK}}\left(50+{\mathbf{NEQ}}+\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
For ${\mathbf{IREVCM}}=5$, ${y}_{\mathit{i}}^{\prime }$ should be placed in location ${\mathbf{YDOT}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{NEQ}}$.
Indicates that a JAC operation (see Section 3) is required: the Jacobian matrix must be supplied.
If full matrix linear algebra is being used, then the $\left(i,j\right)$th element of the Jacobian must be stored in ${\mathbf{WKJAC}}\left(\left(j-1\right)×{\mathbf{NEQ}}+i\right)$.
If banded matrix linear algebra is being used then the $\left(i,j\right)$th element of the Jacobian must be stored in ${\mathbf{WKJAC}}\left(\left(i-1\right)×{m}_{B}+k\right)$, where ${m}_{B}
={m}_{L}+{m}_{U}+1$ and $k=\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({m}_{L}-i+1,0\right)+j$; here ${m}_{L}$ and ${m}_{U}$ are the number of subdiagonals and superdiagonals,
respectively, in the band.
If sparse matrix linear algebra is being used then
must be called to determine which column of the Jacobian is required and where it should be stored.
CALL D02NRF(J, IPLACE, INFORM)
will return in
the number of the column of the Jacobian that is required and will set
. If
, then the
th element of the Jacobian must be stored in
; otherwise it must be stored in
Indicates that a MONITR operation (see Section 3) can be performed.
Indicates that the current step was not successful, due to error test failure or convergence test failure. The only information supplied to you on this return is the current value of the
independent variable $t$, located in ${\mathbf{RWORK}}\left(19\right)$. No values must be changed before re-entering D02NMF; this facility enables you to determine the number of unsuccessful
On final exit
indicated the user-specified task has been completed or an error has been encountered (see the descriptions for
Constraint: ${\mathbf{IREVCM}}=0$, $1$, $3$, $4$, $5$, $8$, $9$ or $10$.
22: ITASK – INTEGERInput
On initial entry
: the task to be performed by the integrator.
Normal computation of output values of $y\left(t\right)$ at $t={\mathbf{TOUT}}$ (by overshooting and interpolating).
Take one step only and return.
Stop at the first internal integration point at or beyond $t={\mathbf{TOUT}}$ and return.
Normal computation of output values of $y\left(t\right)$ at $t={\mathbf{TOUT}}$ but without overshooting $t={\mathbf{TCRIT}}$. TCRIT must be specified as an option in one of the integrator
setup routines before the first call to the integrator, or specified in the optional input routine before a continuation call. TCRIT (e.g., see D02NVF) may be equal to or beyond TOUT, but not
before it in the direction of integration.
Take one step only and return, without passing TCRIT (e.g., see D02NVF). TCRIT must be specified under ${\mathbf{ITASK}}=4$.
Constraint: $1\le {\mathbf{ITASK}}\le 5$.
23: ITRACE – INTEGERInput
On initial entry
: the level of output that is printed by the integrator.
may take the value
$-1$ is assumed and similarly if ${\mathbf{ITRACE}}>3$, then $3$ is assumed.
No output is generated.
Only warning messages are printed on the current error message unit (see X04AAF).
Warning messages are printed as above, and on the current advisory message unit (see X04ABF) output is generated which details Jacobian entries, the nonlinear iteration and the time
integration. The advisory messages are given in greater detail the larger the value of ITRACE.
24: IFAIL – INTEGERInput/Output
On initial entry
must be set to
$-1\text{ or }1$
. If you are unfamiliar with this parameter you should refer to
Section 3.3
in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1\text{ or }1$
is recommended. If the output of error messages is undesirable, then the value
is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if
${\mathbf{IFAIL}}\ne {\mathbf{0}}$
on exit, the recommended value is
When the value $-\mathbf{1}\text{ or }1$ is used it is essential to test the value of IFAIL on exit.
On final exit
unless the routine detects an error or a warning has been flagged (see
Section 6
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
On entry, the integrator detected an illegal input, or that a linear algebra and/or integrator setup routine has not been called prior to the call to the integrator. If
${\mathbf{ITRACE}}\ge 0$
, the form of the error will be detailed on the current error message unit (see
The maximum number of steps specified has been taken (see the description of optional inputs in the integrator setup routines and the optional input continuation routine,
With the given values of
no further progress can be made across the integration range from the current point
. The components
${\mathbf{Y}}\left(1\right),{\mathbf{Y}}\left(2\right),\dots ,{\mathbf{Y}}\left({\mathbf{NEQ}}\right)$
contain the computed values of the solution at the current point
There were repeated error test failures on an attempted step, before completing the requested task, but the integration was successful as far as
. The problem may have a singularity, or the local error requirements may be inappropriate.
There were repeated convergence test failures on an attempted step, before completing the requested task, but the integration was successful as far as
. This may be caused by an inaccurate Jacobian matrix or one which is incorrectly computed.
Some error weight
became zero during the integration (see the description of
). Pure relative error control
was requested on a variable (the
th) which has now vanished. The integration was successful as far as
The FCN operation (see
Section 3
) set the error flag
continually despite repeated attempts by the integrator to avoid this.
Not used for this integrator.
A singular Jacobian $\frac{\partial r}{\partial y}$ has been encountered. This error exit is unlikely to be taken when solving explicit ordinary differential equations. You should check the
problem formulation and Jacobian calculation.
An error occurred during Jacobian formulation or back-substitution (a more detailed error description may be directed to the current error message unit, see
The FCN operation (see
Section 3
) signalled the integrator to halt the integration and return by setting
. Integration was successful as far as
The MONITR operation (see
Section 3
) set
and so forced a return but the integration was successful as far as
The requested task has been completed, but it is estimated that a small change in
is unlikely to produce any change in the computed solution. (Only applies when you are not operating in one step mode, that is when
${\mathbf{ITASK}}\ne 2$
The values of
are so small that D02NMF is unable to start the integration.
7 Accuracy
The accuracy of the numerical solution may be controlled by a careful choice of the parameters
, and to a much lesser extent by the choice of norm. You are advised to use scalar error control unless the components of the solution are expected to be poorly scaled. For the type of decaying
solution typical of many stiff problems, relative error control with a small absolute error threshold will be most appropriate (that is, you are advised to choose
small but positive).
The cost of computing a solution depends critically on the size of the differential system and to a lesser extent on the degree of stiffness of the problem; also on the type of linear algebra being
used. For further details see
Section 8
of the documents for
(full matrix),
(banded matrix) or
(sparse matrix).
In general, you are advised to choose the backward differentiation formula option (setup routine
) but if efficiency is of great importance and especially if it is suspected that
$\frac{\partial g}{\partial y}$
has complex eigenvalues near the imaginary axis for some part of the integration, you should try the BLEND option (setup routine
9 Example
This example solves the well-known stiff Robertson problem
$a′ = -0.04a + 1.0\mathrm{E}4\,bc$, $\quad b′ = 0.04a - 1.0\mathrm{E}4\,bc - 3.0\mathrm{E}7\,b^2$, $\quad c′ = 3.0\mathrm{E}7\,b^2$
over the range
with initial conditions
and with scalar error control (
). The integration proceeds until
is passed, providing
interpolation at intervals of
through a MONITR operation. The integration method used is the BDF method (setup routine
) with a modified Newton method. The Jacobian is a full matrix, which is specified using the setup routine
; this Jacobian is to be calculated numerically.
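The NAG program text, data and results are not reproduced here. As a rough cross-check (an editorial illustration, not the D02NMF calling program), the same Robertson system can be integrated with a BDF method in Python via SciPy; the range, initial conditions, tolerances and output points below are the conventional choices for this problem and are assumptions rather than values taken from the NAG example.

import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    a, b, c = y
    return [-0.04 * a + 1.0e4 * b * c,
             0.04 * a - 1.0e4 * b * c - 3.0e7 * b * b,
             3.0e7 * b * b]

sol = solve_ivp(robertson, (0.0, 10.0), [1.0, 0.0, 0.0],
                method="BDF", rtol=1.0e-4, atol=1.0e-7,
                t_eval=np.linspace(0.0, 10.0, 6))

for t, (a, b, c) in zip(sol.t, sol.y.T):
    print(f"t = {t:5.2f}   a = {a:.6f}   b = {b:.6e}   c = {c:.6f}")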
9.1 Program Text
9.2 Program Data
9.3 Program Results
|
{"url":"http://www.nag.com/numeric/FL/nagdoc_fl24/html/D02/d02nmf.html","timestamp":"2014-04-23T08:25:34Z","content_type":null,"content_length":"110363","record_id":"<urn:uuid:98a1da76-49ec-48d7-8aa5-f048f299bb1b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Braingle: 'Seven Jack O'Lanterns' Brain Teaser
Seven Jack O'Lanterns
Logic puzzles require you to think. You will have to be logical in your reasoning.
Puzzle ID: #20639
Category: Logic
Submitted By: fishmed
Corrected By: boodler
It appears that you have angered the spirit of Halloween by failing to revere the Great Pumpkin, and now a curse has befallen you. On the walkway to your house is a Ward of Seven Jack O'Lanterns
arranged in a circle. If midnight comes and any of the seven are still lit, a dark reaper and seven dark horses with seven dark riders shall visit thy abode. They shall surround thy domicile and,
while circling it, they will proceed to pelt thy dwelling with eggs and cream of shaving. And come morn there will be a great mess to be reckoned with. Verily. So you better get those lanterns out.
You quickly discover something odd about these lanterns. When you blow out the first one, the lanterns on either side extinguish as well! But there is more. If you blow out a lantern adjacent to one
that is extinguished, the extinguished one(s) will relight. It seems that blowing on any lantern will change the state of three - the one you blew on and its two neighbors. Finally, you can blow on
an extinguished lantern and it will relight, and its neighbors will light/extinguish as applicable. After trying once and finding all seven lit again, you decide, being the excellent puzzler, you
sit down and examine this closer. But hurry, I hear the beating of many hooves...
If you examine the setup carefully, you'll note a number of facts which make the puzzle easier to solve by deduction. First, blowing on a lantern is a commutative property; blowing on lanterns 1, 5,
then 3 is the same as blowing on 3, then 1, and then 5. No matter what order the lanterns are blown on, if the same lanterns are blown on the same number of times, the result won't change. For that
reason, blowing on a lantern twice is as good as not blowing on it at all. And three times is as good as one time. So, it seems that it should be able to be done in seven steps or less.
What else can we tell about the solution? Since each operation changes the state of three lanterns, and there are 7 lanterns, and each lantern must change its state an odd number of times, it's a
safe bet that there will need to be an odd number of steps. We can easily see it can't be done in 1 or 3 steps, so it must be 5 or 7. Trying 5 steps comes up with 3 different patterns that are not
symmetrical and fail to leave all lanterns extinguished. So that leaves 7 steps and to your surprise, based on the commutative property, the easiest solution is to blow on each one in order! So
doing this, the Great Pumpkin has decided to give you a treat for figuring this out and you find all seven lanterns full of candy the next morning! Congratulations!
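For anyone who wants to verify the claim by brute force, the following Python sketch (an editorial addition, not part of the original answer) tries every subset of lanterns to blow on once; since blowing on the same lantern twice cancels out, this covers all essentially different strategies.

from itertools import combinations

N = 7  # lanterns in a circle, all initially lit

def blow(state, i):
    # Blowing on lantern i toggles i and both neighbours (indices wrap around).
    state = list(state)
    for j in (i - 1, i, i + 1):
        state[j % N] ^= 1
    return tuple(state)

solutions = []
for size in range(N + 1):
    for subset in combinations(range(N), size):
        state = (1,) * N  # 1 = lit, 0 = out
        for i in subset:
            state = blow(state, i)
        if not any(state):
            solutions.append(subset)

print(solutions)  # [(0, 1, 2, 3, 4, 5, 6)] -- the only solution is to blow on all seven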
rugbys_girl Very nice teaser. I have seen something similar done with a wall of bricks where the four bricks surrounding the first brick become recessed.....FUN!
Jan 26, 2005
kri_kri This was hard to me because i have a short attention span and ummm what was i talking about?
Jan 26, 2005
alex_jane nice one but is a bit long
Jan 27, 2005
suzygirl This is one of my favorites. I didn't really look at it the way you did though, I just figured out what pattern they had to be in to extinguish them all, and then I got to that
Jan 28, 2005 point. It is interesting though because it took me exactly seven steps. Nice one!
waker Actually it can be done in seven steps
Jan 29, 2005
Imagine these are the seven (B=burning, O=Out)
Start: BBBBBBB
#1: OOOBBBB
#2: OOBOBBB
#3: OOBBOBB
#4: OOBBBOO
#5: OOOOOOO
waker whups, I meant 5 steps
Jan 29, 2005
waker again whups i did it wrong
Jan 29, 2005
fishmed I just requested the last 3 were removed for you. I am glad you guys liked it.
Jan 29, 2005
2sexy4wrds u add way to much information... just tell me wat i need to know and get it over with... i get lost when there is too much info
Jan 31, 2005
kellgo Very well written and entertaining.
Feb 01, 2005 Fabulous Job!!!
fishmed Thanks. Glad you liked it.
Feb 01, 2005
cnmne Nothing wrong with a good storyline.
Feb 02, 2005
theketchupwins loved this one! i guess i'm one of few who likes the funny story line?
Feb 09, 2005
took me a little while to figure out. nice work.
rose_rox Like Peanuts?
Feb 11, 2005
fishmed Of course I do!
Feb 11, 2005
short_stuff780 great!
Feb 13, 2005
fishmed Thanks.
Feb 13, 2005
juggleboy502 too long... i got lost on the first paragraph
Feb 28, 2005
rotwyla98 heh. I blew out the two corner ones and then the middle one
May 10, 2005
sueintexas I loved the story and the challenge
May 15, 2005
fishmed rotwyla, they are in a circle, so there are no 'corner' ones. Nice thought, though.
May 16, 2005
JCDuncan I like this teaser, but I loved the answer. Explaining it in with the commutative power is wonderful. I did answer it, but with trial and error and I did not come close to
May 31, 2005 understanding the mathmatical proof for solving. (actually the answer was the first attempt I made)
Vigo95 WHAT ???
Mar 25, 2006 yah ... thanks alot !
paul726 Nicely done, Fish! Kudos!
May 21, 2006
nu_rob_roy I'm really lost, who or what is Jack O'Lanterns? But seems like a well liked teasers.
Jul 29, 2006
fishmed Jack O' Lanterns are hollowed-out pumpkins that have faces carved into them and then are lit from inside with a candle. Usually used for decorations during Halloween.
Aug 11, 2006
geekgirljess Nice ripoff, this was originally posted at www.greylabyrinth.com back in 1998.
Feb 07, 2007
fishmed I actually got it from my father. I am not sure where he got it from.
Feb 08, 2007
LeafFan4life this one was cleverly nice
Jun 05, 2007
HarryPutter nice one to the good teaser
Jun 10, 2007
4demo Fun teaser but quite difficult!
Jun 18, 2007
scallio This was funny, entertaining and unique... I loved it!
Jan 18, 2008
I cut 7 little squares from paper, drew a little lit candle on the one side and left the other blank then arranged them in a circle, lit side up. When I altered one, I flipped the
two adjacent squares. Wasn't too difficult. Trick was to "blow out" all but one then blow out the two middles of the lit ones.
Anyway, a lot of fun!
javaguru Excellent teaser! I don't generally like too much extraneous story, but I thought it was amusing enough and not too long, so it was fine.
Feb 28, 2009
I missed the commutive property, but came up with another line of reasoning to determine that seven steps were the minimum.
Each step toggles three candles. You need an odd multiple of seven toggles in order to toggle the seven candles off. The least common multiple between three and seven is 3 x 7 = 21,
so seven steps is the minimum. Then I just guessed that you could blow them out in order and was mildly surprised when is worked!
(user deleted) It is easier than you describe. All lanterns need to be switched an odd number of times, so the total number of switches must fall in the sequence 7, 21, 35, 49 etc. but since 3 are
Feb 20, 2010 switched with each blow the total must also be divisible by 3. The first such number in the sequence is 21, meaning 7 blows are required (7x3=21). By blowing a lantern once and the
lanterns either side once as well the lantern in question ends up being blown out. It is clear from this that each must be blown once because all are blown once, and for each lantern
both either side are blown once - meaning they all end up being out.
princess2007 I didn't really analyze very much. Just tried it out and got it on the first try with 7 blows. Reading the answer and finding out that the order of the blows didn't really matter
Feb 22, 2013 made me laugh...
|
{"url":"http://www.braingle.com/brainteasers/teaser.php?op=2&id=20639&comm=1","timestamp":"2014-04-20T06:34:58Z","content_type":null,"content_length":"55216","record_id":"<urn:uuid:cb3402f3-36c6-4c52-a7b9-b040ce2890cd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hypothesis Testing: Statistics as Pseudoscience
Douglas H. Johnson
Northern Prairie Wildlife Research Center
U.S. Geological Survey, Biological Resources Division
Jamestown, North Dakota 58401
Presented at the Fifth Annual Conference of the Wildlife Society, Buffalo, New York, 26 September 1998. Part of the symposium, Evaluating the Role of Hypothesis Testing/Power Analysis in Wildlife
Science, sponsored by the Biometrics Working Group.
Abstract: Wildlife biologists recently have been subjected to the credo that if you're not testing hypotheses, you're not doing real science. To protect themselves against rejection by journal
editors, authors cloak their findings in an armor of P values. I contend that much statistical hypothesis testing is misguided. Virtually all null hypotheses tested are, in fact, false; the only
issue is whether or not the sample size is sufficiently large to show it. No matter if it is or not, one then gets led into the quagmire of deciding biological significance versus statistical
significance. Most often, parameter estimation is a more appropriate tool than statistical hypothesis testing. Statistical hypothesis testing should be distinguished from scientific hypothesis
testing, in which truly viable alternative hypotheses are evaluated in a real attempt to falsify them. The latter method is part of the deductive logic of strong inference, which is better-suited to
simple systems. Ecological systems are complex, with components typically influenced by many factors, whose influences often vary in place and time. Competing hypotheses in ecology rarely can be
falsified and eliminated. Wildlife biologists perhaps adopt hypothesis tests in order to make what are really descriptive studies appear as scientific as those in the "hard" sciences. Rather than
attempting to falsify hypotheses, it may be more productive to understand the relative importance of multiple factors.
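To make the abstract's point concrete, here is a small simulation sketch (my own illustration, not part of the paper; the effect size, standard deviation, and sample sizes are invented) showing how a biologically trivial difference is eventually declared "statistically significant" once the sample is large enough, while the estimate and its confidence interval carry the information that actually matters:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_difference = 0.02      # an assumed, biologically negligible effect

for n in (20, 200, 2000, 200000):
    a = rng.normal(0.0, 1.0, n)                  # "control" observations
    b = rng.normal(true_difference, 1.0, n)      # "treatment" observations
    t_stat, p = stats.ttest_ind(a, b)
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    print(f"n = {n:6d}   estimate = {diff:+.3f} +/- {1.96 * se:.3f}   P = {p:.3g}")

The P value shrinks as n grows even though the underlying effect never changes; the estimate and its interval, by contrast, simply become more precise.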
Literature: The following is a compilation of references cited in the presented paper, as well as citations provided by Marks R. Nester for his A Myopic View and History of Hypothesis Testing, which
follows the Literature. Marks Nester's email address is nesterm@qfri1.se2.dpi.qld.gov.au. Additional comments are available at David Parkhurst's Quotes Criticizing Significance Testing.
Altman, D. G. 1985. Discussion of Dr Chatfield's paper. Journal of the
Royal Statistical Society A 148 : 242.
Anscombe, F. J. 1956. Discussion on Dr. David's and Dr. Johnson's Paper.
Journal of the Royal Statistical Society B 18 : 24-27.
Arbuthnott, J. 1710. An argument for Divine Providence, taken from the
constant regularity observ'd in the births of both sexes.
Philosophical Transactions of the Royal Society 23 : 186-190.
Bakan, D. 1967. The test of significance in psychological research. From
Chapter 1 of On Method, Jossey-Bass, Inc. (San Francisco). Reprinted
in The Significance Test Controversy - A Reader, D. E. Morrison and
R. E. Henkel, eds., 1970, Aldine Publishing Company (Butterworth Group).
Barnard, G. 1998. Letter. New Scientist 157: 47.
Barndorff-Nielsen, O. 1977. Discussion of D. R. Cox's paper. Scandinavian
Journal of Statistics 4 : 67-69.
Beaven, E. S. 1935. Discussion on Dr. Neyman's Paper. Journal of the Royal
Statistical Society, Supplement 2 : 159-161.
Berger, J. O., and D. A. Berry. 1988. Statistical Analysis and the
Illusion of Objectivity. American Scientist 76:159-165.
Berger, J. O. and Sellke, T. 1987. Testing a point null hypothesis: the
irreconcilability of P values and evidence. Journal of the
American Statistical Association 82 : 112-122.
Berkson, J. 1938. Some difficulties of interpretation encountered in the
application of the chi-square test. Journal of the American
Statistical Association 33 : 526-536.
Berkson, J. 1942. Tests of significance considered as evidence. Journal of
the American Statistical Association 37 : 325-335.
Binder, A. 1963. Further considerations on testing the null hypothesis and
the strategy and tactics of investigating theoretical models.
Psychological Review 70 : 107-115. Reprinted in Statistical Issues,
A Reader for the Behavioural Sciences, R. E. Kirk, ed., 1972,
Wadsworth Publishing Company : 118-126.
Boardman, T. J. 1994. The statistician who changed the world: W. Edwards
Deming, 1900-1993. The American Statistician 48(3) : 179-187.
Box, G. E. P. 1976. Science and statistics. Journal of the American
Statistical Association 71 : 791-799.
Box, G. E. P. 1983. An apology for ecumenism in statistics. In Scientific
Inference, Data Analysis, and Robustness, G. E. P. Box, T. Leonard and
C. F. Wu, eds., Academic Press, Inc. : 51-84.
Braithwaite, R. B. 1953. Scientific Explanation. A Study of the Function
of Theory, Probability and Law in Science. Cambridge University Press.
Bryan-Jones, J. and Finney, D. J. 1983. On an error in "Instructions to
Authors". HortScience 18(3) : 279-282.
Buchanan-Wollaston, H. J. 1935. The philosophic basis of statistical
analysis. Journal of the International Council for the Exploration
of the Sea 10 : 249-263.
Camilleri, S. F. 1962. Theory, probability, and induction in social
research. American Sociological Review 27 : 170-178. Reprinted in
The Significance Test Controversy - A Reader, D. E. Morrison and
R. E. Henkel, eds., 1970, Aldine Publishing Company (Butterworth Group).
Campbell, M. 1992. Letter. Royal Statistical Society News & Notes
Carver, R. P. 1978. The case against statistical significance testing.
Harvard Educational Review 48 : 378-399.
Casella, G. and Berger, R. L. 1987. Rejoinder. Journal of the American
Statistical Association 82 : 133-135.
Chatfield, C. 1985. The initial examination of data (with discussion).
Journal of the Royal Statistical Society A 148: 214-253.
Chatfield, C. 1989. Comments on the paper by McPherson. Journal of the
Royal Statistical Society A 152 : 234-238.
Chernoff , H. 1986. Comment. The American Statistician 40(1) : 5-6.
Chew, V. 1976. Comparing treatment means: a compendium. HortScience
11(4) : 348-357.
Chew, V. 1977. Statistical hypothesis testing: an academic exercise in
futility. Proceedings of the Florida State Horticultural Society
90 : 214-215.
Chew, V. 1980. Testing differences among means: correct interpretation
and some alternatives. HortScience 15(4) : 467-470.
Cochran, W. G. and Cox, G. M. 1957. Experimental Designs. 2nd ed. John
Wiley & Sons, Inc.
Cohen, J. 1990. Things I have learned (so far). American Psychologist
45 : 1304-1312.
Cohen, J. 1994. The earth is round (p < .05). American Psychologist
49 : 997-1003.
Cormack, R. M. 1985. Discussion of Dr Chatfield's paper. Journal of the
Royal Statistical Society A 148 : 231-233.
Cox, D. R. 1958. Some problems connected with statistical inference.
Annals of Mathematical Statistics 29 : 357-372.
Cox, D. R. 1977. The role of significance tests. (with discussion).
Scandinavian Journal of Statistics 4 : 49-70.
Cox, D. R. 1982. Statistical significance tests. British Journal of
Clinical Pharmacology 14 : 325-331.
Cox, D. R. and Snell, E. J. 1981. Applied Statistics Principles and
Examples. Chapman and Hall.
Deming, W. E. 1975. On probability as a basis for action. The
American Statistician 29:146.
Edwards, W., Lindman, H. and Savage, L. J. 1963. Bayesian statistical
inference for psychological research. Psychological Review 70 :
Finney, D. J. 1988. Was this in your statistics textbook? III. Design
and analysis. Experimental Agriculture 24 : 421-432.
Finney, D. J. 1989a. Was this in your statistics textbook? VI. Regression
and covariance. Experimental Agriculture. 25 : 291-311.
Finney, D. J. 1989b. Is the statistician still necessary? Biometrie
Praximetrie 29 : 135-146.
Fisher, R. A. 1925. Statistical Methods for Research Workers. Oliver
and Boyd (London).
Fisher, R. A. 1935. The Design of Experiments. Oliver and Boyd (Edinburgh).
Gauch Jr., H. G. 1988. Model selection and validation for yield trials
with interaction. Biometrics 44 : 705-715.
Gavarret, J. 1840. Principes Généraux de Statistique Médicale.
[No publisher given] (Paris). (Not cited).
Geary, R. C. 1947. Testing for normality. Biometrika 34 : 209-242.
Gerard, P. D., D. R. Smith, and G. Weerakkody. 1998. Limits of
retrospective power analysis. Journal of Wildlife Management
Gigerenzer, G., Swijtink, Z., Porter, T., Daston, L., Beatty, J. and
Krüger, L. 1989. The Empire of Chance. Cambridge University
Press, Cambridge, England.
Gold, D. 1958. Comment on "A critique of tests of significance". American
Sociological Review 23 : 85-86.
Good, I. J. 1983. Good Thinking. The Foundations of Probability and Its
Applications. University of Minnesota Press (Minneapolis).
Grant, D. A. 1962. Testing the null hypothesis and the strategy and
tactics of investigating theoretical models. Psychological Review
69 : 54-61.
Graybill, F. A. 1976. Theory and Application of the Linear Model.
Duxbury Press (Massachusetts).
Guttman, L. 1977. What is not what in statistics. The Statistician
26 : 81-107.
Guttman, L. 1985. The illogic of statistical inference for cumulative
science. Applied Stochastic Models and Data Analysis 1 : 3-10.
Hacking, I. 1965. Logic of Statistical Inference. Cambridge University
Hahn, G. J. 1990. Commentary. Technometrics 32: 257-258.
Hays, W. L. 1973. Statistics for the Social Sciences. Second edition.
Holt, Rinehart and Winston.
Healy, M. J. R. 1978. Is statistics a science? Journal of the Royal
Statistical Society A 141 : 385-393.
Healy, M. J. R. 1989. Comments on the paper by McPherson. Journal of the
Royal Statistical Society A 152 : 232-234.
Hinkley, D. V. 1987. Comment. Journal of the American Statistical Association
82 : 128-129.
Hodges Jr., J. L. and Lehmann, E. L. 1954. Testing the approximate validity
of statistical hypotheses. Journal of the Royal Statistical Society
B 16 : 261-268.
Hogben, L. 1957a. The contemporary crisis or the uncertainties of
uncertain inference. Statistical Theory, W. W. Norton &
Co., Inc., Reprinted in The Significance Test Controversy -
A Reader, D. E. Morrison and R. E. Henkel, eds., 1970, Aldine
Publishing Company (Butterworth Group).
Hogben, L. 1957b. Statistical prudence and statistical inference.
Statistical Theory, W. W. Norton & Co., Inc. Reprinted
in The Significance Test Controversy - A Reader, D. E.
Morrison and R. E. Henkel, eds., 1970, Aldine Publishing
Company (Butterworth Group).
Hunter, J. S. 1990. Commentary. Technometrics 32 : 261.
Inman, H. F. 1994. Karl Pearson and R. A. Fisher on statistical tests: A
1935 exchange from Nature. The American Statistician 48(1) : 2-11.
Johnson, D. H. 1995. Statistical sirens: the allure of nonparametrics.
Ecology 76:1998-2000.
Jones, D. 1984. Use, misuse, and role of multiple-comparison procedures in
ecological and agricultural entomology. Environmental Entomology
13 : 635-649.
Jones, D. and Matloff, N. 1986. Statistical hypothesis testing in biology:
a contradiction in terms. Journal of Economic Entomology 79 :
Kempthorne, O. 1966. Some aspects of experimental inference. Journal of
the American Statistical Association 61 : 11-34.
Kempthorne, O. 1976. Of what use are tests of significance and tests of
hypotheses. Communications in Statistics: Theory and Methods A5 :
Kish, L. 1959. Some statistical problems in research design. American
Sociological Review, 24 : 328-338. Reprinted in The Significance
Test Controversy - A Reader, D. E. Morrison and R. E. Henkel,
eds., 1970, Aldine Publishing Company (Butterworth Group).
Kruskal, W. H. 1978. Significance, Tests of. In International Encyclopedia
of Statistics , W. H. Kruskal and J. M. Tanur, eds., Free Press
(New York) : 944-958.
Kruskal, W. 1980. The significance of Fisher: a review of R. A. Fisher: The
Life of a Scientist. Journal of the American Statistical Association
75 : 1019-1030.
Kruskal, W. and Majors, R. 1989. Concepts of relative importance in recent
scientific literature. The American Statistician 43(1) : 2-6.
LaForge, R. 1967. Confidence intervals or tests of significance in scientific
research. Psychological Bulletin 68 : 446-447.
Lindley, D. V. 1986. Discussion. The Statistician 35 : 502-504.
Little, T. M. 1981. Interpretation and presentation of results.
HortScience 16 : 637-640.
Luce, R. D. 1988. The tools-to-theory hypothesis. Review of G. Gigerenzer
and D. J. Murray, "Cognition as intuitive statistics." Contemporary
Psychology 33 : 582-583.
Lykken, D. T. 1968. Statistical significance in psychological research.
Psychological Bulletin 70 : 151-159. Reprinted in The Significance
Test Controversy - A Reader, D. E. Morrison and R. E. Henkel, eds.,
1970, Aldine Publishing Company (Butterworth Group).
Matloff, N. S. 1991. Statistical hypothesis testing: problems and
alternatives. Environmental Entomology 20 : 1246-1250.
Matthews, R. 1997. Faith, hope and statistics. New Scientist
McCloskey, D. N. 1995. The insignificance of statistical significance.
Scientific American 272(4) : 104-105.
McNemar, Q. 1960. At random: sense and nonsense. American Psychologist
15 : 295-300.
Meehl, P. E. 1967. Theory testing in psychology and physics: A
methodological paradox. Philosophy of Science 34 : 103-115.
Reprinted in The Significance Test Controversy - A Reader,
D. E. Morrison and R. E. Henkel, eds., 1970, Aldine
Publishing Company (Butterworth Group).
Meehl, P. E. 1978. Theoretical risks and tabular asterisks: Sir Karl,
Sir Ronald, and the slow progress of soft psychology. Journal of
Consulting and Clinical Psychology 46 : 806-834.
Meehl, P. E. 1990. Why summaries of research on psychological theories
are often uninterpretable. Psychological Reports 66 (Monograph
Supplement 1-V66) : 195-244.
Moore, D. S. and McCabe, G. P. 1989. Introduction to the Practice of
Statistics. W. H. Freeman and Company (New York).
Morrison, D. E. and Henkel, R. E. 1969. Significance tests reconsidered.
The American Sociologist 4 : 131-140. Reprinted in The Significance
Test Controversy - A Reader, D. E. Morrison and R. E. Henkel, eds.,
1970, Aldine Publishing Company (Butterworth Group).
Morrison, D. E. and Henkel, R. E. (Eds.) 1970. The Significance Test
Controversy - A Reader. Aldine Publishing Company (Butterworth Group).
Natrella, M. G. 1960. The relation between confidence intervals and
tests of significance. American Statistician 14 : 20-22, 33.
Reprinted in Statistical Issues, A Reader for the Behavioural
Sciences, R. E. Kirk, ed., 1972, Wadsworth Publishing Company
: 113-117.
Nelder, J. A. 1971. Discussion on papers by Wynn, Bloomfield, O'Neill
and Wetherill. Journal of the Royal Statistical Society, B 33 :
Nelder, J. A. 1985. Discussion of Dr Chatfield's paper. Journal of the
Royal Statistical Society A 148 : 238.
Nester, M. R. 1996. An applied statistician's creed. Applied
Statistics 45:401-410.
Neyman, J. 1958. The use of the concept of power in agricultural
experimentation. Journal of the Indian Society of
Agricultural Statistics 9 : 9-17.
Neyman, J. and Pearson, E. S. 1933. On the problem of the most efficient
tests of statistical hypotheses. Philosophical Transactions of the
Royal Society A 231 : 289-337.
Nunnally, J. 1960. The place of statistics in psychology. Educational and
Psychological Measurement 20 : 641-650.
O'Brien, T. C. and Shapiro, B. J. 1968. Statistical significance--what?
Mathematics Teacher 61 : 673-676. Reprinted in Statistical Issues,
A Reader for the Behavioural Sciences, R. E. Kirk, ed., 1972,
Wadsworth Publishing Company : 109-112.
Pearce, S. C. 1992. Data analysis in agricultural experimentation. II. Some
standard contrasts. Experimental Agriculture 28 : 375-383.
Pearson, K. 1900. On the criterion that a given system of deviations from
the probable in the case of a correlated system of variables is
such that it can be reasonably supposed to have arisen from random
sampling. Philosophical Magazine, Series 5, 50 : 157-175.
Pearson, K. 1935a. Statistical tests. Nature 136 : 296-297. (reproduced
in H. F. Inman (1994). Karl Pearson and R. A. Fisher on statistical
tests: A 1935 exchange from Nature. The American Statistician
48(1) : 2-11.
Pearson, K. 1935b. Statistical tests. Nature 136 : 550. (reproduced in
H. F. Inman (1994). Karl Pearson and R. A. Fisher on statistical
tests: A 1935 exchange from Nature. The American Statistician
48(1) : 2-11.
Perry, J. N. 1986. Multiple-comparison procedures: a dissenting view.
Journal of Economic Entomology 79 : 1149-1155.
Peterman, R. M. 1990. The importance of reporting statistical power:
the forest decline and acidic deposition example. Ecology 71:
Petranka, J. W. 1990. Caught between a rock and a hard place.
Herpetologica 46: 346-350.
Platt, J. R. 1964. Strong inference. Science 146:347-353.
Pratt, J. W. 1976. A discussion of the question: for what use are tests
of hypotheses and tests of significance. Communications in
Statistics: Theory and Methods A5 : 779-787.
Preece, D. A. 1982. The design and analysis of experiments: what has gone
wrong? Utilitas Mathematica 21A : 201-244.
Preece, D. A. 1984. Biometry in the Third World: science not ritual.
Biometrics 40 : 519-523.
Preece, D. A. 1990. R. A. Fisher and experimental design: a review.
Biometrics 46 : 925-935.
Quinn, J. F., and A. E. Dunham. 1983. On hypothesis testing in ecology and
evolution. American Naturalist 122: 602-617.
Ranstam, J. 1996. A common misconception about p-value and its consequences.
Acta Orthopaedica Scandinavica 67 : 505-507.
Rosnow, R. L. and Rosenthal, R. 1989. Statistical procedures and the
justification of knowledge in psychological science. American
Psychologist 44 : 1276-1284.
Rothman, K. 1978. A show of confidence. New England Journal of Medicine
299 : 1362-1363.
Rozeboom, W. W. 1960. The fallacy of the null hypothesis significance test.
Psychological Bulletin 57 : 416-428. Reprinted in The Significance
Test Controversy - A Reader, D. E. Morrison and R. E. Henkel, eds.,
1970, Aldine Publishing Company (Butterworth Group).
Savage, I. R. 1957. Nonparametric statistics. Journal of the American
Statistical Association 52 : 331-344.
Sayn-Wittgenstein, L. 1965. Statistics - salvation or slavery? Forestry
Chronicle 41 : 103-105.
Selvin, H. C. 1957. A critique of tests of significance in survey research.
American Sociological Review 22 : 519-527.
Simberloff, D. 1990. Hypotheses, errors, and statistical assumptions.
Herpetologica 46: 351-357.
Skipper Jr., J. K., Guenther, A. L. and Nass, G. 1967. The sacredness
of .05: A note concerning the uses of statistical levels of
significance in social science. The American Sociologist 2 :
16-18. Reprinted in The Significance Test Controversy - A
Reader, D. E. Morrison and R. E. Henkel, eds., 1970, Aldine
Publishing Company (Butterworth Group).
Smith, C. A. B. 1960. Book review of Norman T. J. Bailey: Statistical
Methods in Biology. Applied Statistics 9 : 64-66.
Stevens, S. S. 1968. Measurement, statistics, and the schemapiric view.
Science 161 : 849-856. Abridged in Statistical Issues, A Reader
for the Behavioural Sciences, R. E. Kirk, ed., 1972, Wadsworth
Publishing Company : 66-78.
Street, D. J. 1990. Fisher's contributions to agricultural statistics.
Biometrics 46 : 937-945.
"Student" 1908. The probable error of a mean. Biometrika 6 : 1-25.
Tamhane, A. C. 1996. Review of R. E. Bechhofer, T. J. Santner and D. M.
Goldsman, Design and Analysis of Experiments for Statistical
Selection, Screening and Multiple Comparisons, John Wiley
(New York), 1995. Technometrics 38 : 289-290.
Tukey, J. W. 1973. The problem of multiple comparisons. Unpublished
manuscript, Dept. of Statistics, Princeton University.
Tukey, J. W. 1991. The philosophy of multiple comparisons. Statistical
Science 6 : 100-116.
Tversky, A. and Kahneman, D. 1971. Belief in the law of small numbers.
Psychological Bulletin 76 : 105-110.
Upton, G. J. G. 1992. Fisher's exact test. Journal of the Royal Statistical
Society A 155 : 395-402.
Vardeman, S. B. 1987. Comment. Journal of the American Statistical
Association 82 : 130-131.
Venn, J. 1888. Cambridge anthropometry. Journal of the Anthropological
Institute 18 : 140-154.
Wang, C. 1993. Sense and Nonsense of Statistical Inference. Marcel
Dekker, Inc.
Warren, W. G. 1986. On the presentation of statistical analysis: reason or
ritual. Canadian Journal of Forest Research 16 : 1185-1191.
Yates, F. 1951. The influence of Statistical Methods for Research Workers
on the development of the science of statistics. Journal of the
American Statistical Association 46 : 19-34.
Yates, F. 1964. Sir Ronald Fisher and the design of experiments. Biometrics
20 : 307-321.
Yoccoz, N. G. 1991. Use, overuse, and misuse of significance tests in
evolutionary biology and ecology. Bulletin of the Ecological
Society of America 72:106-111.
Zeisel, H. 1955. The significance of insignificant differences. Public
Opinion Quarterly 17 : 319-321. Reprinted in The Significance
Test Controversy - A Reader, D. E. Morrison and R. E. Henkel,
eds., 1970, Aldine Publishing Company (Butterworth Group).
Marks R. Nester, author of "An Applied Statistician's Creed." Applied Statistics 45:401-410.
1. A Myopic View and History of Hypothesis Testing
According to Hacking (1965), John Arbuthnott (1710) was the first to publish a test of a statistical hypothesis. Hogben (1957b) attributes to Jules Gavarret (1840) the earliest use of the probable
error as a form of significance test in the biological arena. Hogben (1957b) also states that Venn (1888) was one of the earliest users of the terms "test" and "significant". The form of the
chi-squared goodness-of-fit distribution was published by K. Pearson in 1900. W. S. Gosset, using the pseudonym "Student", developed the t-distribution in 1908. According to E. S. Beaven (1935), T.
B. Wood and Professor Stratton were the first to determine probable errors in the context of replicated agricultural experiments. Apparently Wood and Stratton wrote their paper in 1910, but Beaven
does not give a reference. The foundations of modern hypothesis testing were laid by Fisher (1925), although the modifications propounded by Neyman and Pearson (1933) are the generally accepted norm.
I contend that the general acceptance of statistical hypothesis testing is one of the most unfortunate aspects of 20^th century applied science. Tests for the identity of population distributions,
for equality of treatment means, for presence of interactions, for the nullity of a correlation coefficient, and so on, have been responsible for much bad science, much lazy science, and much silly
science. A good scientist can manage with, and will not be misled by, parameter estimates and their associated standard errors or confidence limits. A theory dealing with the statistical behaviour of
populations should be supported by rational argument as well as data. In such cases, accurate statistical evaluation of the data is hindered by null hypothesis testing. The scientist must always give
due thought to the statistical analysis, but must never let statistical analysis be a substitute for thinking! If instead of developing theories, a researcher is involved in such practical issues as
selecting the best treatment(s), then the researcher is probably confronting a complex decision problem involving inter alia economic considerations. Once again, analyses such as null hypothesis
testing and multiple comparison procedures are of no benefit.
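As a minimal sketch of the alternative Nester advocates (my own example with hypothetical yield data, not drawn from any real trial), one can simply report each treatment mean with its standard error and a confidence interval rather than an ANOVA P value or multiple-comparison letters:

import math
import statistics as st

# Hypothetical plot yields for three treatments (invented numbers).
yields = {
    "control":      [4.1, 3.8, 4.4, 4.0, 4.2],
    "fertiliser A": [4.6, 4.9, 4.5, 4.8, 4.7],
    "fertiliser B": [4.7, 5.1, 4.6, 5.0, 4.9],
}

for name, obs in yields.items():
    n = len(obs)
    mean = st.mean(obs)
    se = st.stdev(obs) / math.sqrt(n)
    half = 2.776 * se        # t(0.975, df = 4) for n - 1 = 4 degrees of freedom
    print(f"{name:13s} mean = {mean:.2f}   SE = {se:.2f}   "
          f"95% CI = ({mean - half:.2f}, {mean + half:.2f})")

The reader sees both the size of each treatment effect and the precision with which it is estimated, which is the decision-relevant information.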
Although some of the following passages have been included for their historical interest, most of the quotations are offered in partial support of my views.
Arbuthnott - "This Equality of Males and Females is not the Effect of Chance but Divine Providence ... which I thus demonstrate :
Let there be a Die of Two sides, M and F ...But it is very improbable (if mere Chance govern'd) that ... To repair that Loss, provident Nature ... brings forth more Males than Females ; and that in
almost a constant proportion."
Venn - "When a sufficient number of results had been obtained ... I was requested ... to undertake an analysis of them, and a comparison of their general outcome with that of those obtained by almost
identical instruments at South Kensington. ... When we are dealing with statistics, we ought to be able not merely to say vaguely that the difference does or does not seem significant to us, but we
ought to have some test as to what difference would be significant. ... The above remarks ... inform us which of the differences in the above tables are permanent and significant, in the sense that
we may be tolerably confident that if we took another similar batch we should find a similar difference; and which of them are merely transient and insignificant, in the sense that another similar
batch is about as likely as not to reverse the conclusion we have obtained."
Pearson - "A theoretical probability curve without limited range will never at the extreme tails exactly fit observation. The difficulty is obvious where the observations go by units and the theory
by fractions."
Pearson - "if the earlier writers on probability had not proceeded so entirely from the mathematical standpoint, but had endeavoured first to classify experience in deviations from the average, and
then to obtain some measure of the actual goodness of fit provided by the normal curve, that curve would never have obtained its present position in the theory of errors"
Pearson - "We can only conclude from the investigations here considered that the normal curve possesses no special fitness for describing errors or deviations such as arise either in observing
practice or in nature"
Neyman and Pearson - "if x is a continuous variable ... then any value of x is a singularity of relative probability equal to zero. We are inclined to think that as far as a particular hypothesis is
concerned, no test based upon the theory of probability can by itself provide any valuable evidence of the truth or falsehood of that hypothesis"
Buchanan-Wollaston - "The [null] hypothesis should be such that it is acceptable on a priori grounds if the data do not show it to be unlikely to be true"
Fisher - "Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis"
Pearson (a)- "the [^2 goodness-of-fit] tests are used to ascertain whether a reasonable graduation curve has been achieved, not to assert whether one or another hypothesis is true or false"
Pearson (a)- "I have never found a normal curve fit anything if there are enough observations!"
Pearson (b)- "There is only one case in which an hypothesis can be definitely rejected, namely when its probability is zero."
Berkson - "we may assume that it is practically certain that any series of real observations does not actually follow a normal curve with absolute exactitude ... and ... the chi-square
[goodness-of-fit] P will be small if the sample has a sufficiently large number of observations in it"
Berkson - "null hypothesis procedure ... It says 'If A is true, B will happen sometimes; therefore if B has been found to happen, A can be considered disproved' "
Berkson - "I do not say anything has been 'proved' or 'disproved.' I leave to others the use of these words, which I think are quite inadmissible as applying to anything that can be accomplished by
Geary - "Normality is a myth; there never was, and never will be, a normal distribution"
Yates - "the emphasis given to formal tests of significance ... has resulted in ... an undue concentration of effort by mathematical statisticians on investigations of tests of significance
applicable to problems which are of little or no practical importance ... and ... it has caused scientific research workers to pay undue attention to the results of the tests of significance ... and
too little to the estimates of the magnitude of the effects they are investigating"
Yates - "the occasions ... in which quantitative data are collected solely with the object of proving or disproving a given hypothesis are relatively rare"
Yates - "... the unfortunate consequence that scientific workers have often regarded the execution of a test of significance on an experiment as the ultimate objective"
Yates - "Results are significant or not significant and that is the end of it"
Braithwaite - "The peculiarity of ... statistical hypotheses is that they are not conclusively refutable by any experience"
Braithwaite - "no batch of observations, however large, either definitively rejects or definitively fails to reject the hypothesis H[0]"
Braithwaite - "what John Dewey called 'the quest for certainty' is, in the case of empirical knowledge, a snare and a delusion"
Braithwaite - "The ultimate justification for any scientific belief will depend upon the main purpose for which we think scientifically--that of predicting and thereby controlling the future"
Hodges, Jr. and Lehmann - "we may formulate the hypothesis that a population is normally distributed, but we realize that no natural population is ever exactly normal"
Hodges, Jr. and Lehmann - "when we formulate the hypothesis that the sex ratio is the same in two populations, we do not really believe that it could be exactly the same"
Zeisel - "the researchers who follow the statistical way of life often distinguish themselves by a certain aridity of theoretical insights"
Anscombe - "Tests of the null hypothesis that there is no difference between certain treatments are often made in the analysis of agricultural or industrial experiments in which alternative methods
or processes are compared. Such tests are ... totally irrelevant. What are needed are estimates of magnitudes of effects, with standard errors"
Cochran and Cox - "In many experiments it seems obvious that the different treatments must have produced some difference, however small, in effect. Thus the hypothesis that there is no difference is
unrealistic: the real problem is to obtain estimates of the sizes of the differences"
Hogben (a) - "Acceptability of a statistically significant result ... promotes a high output of publication. Hence the argument that the techniques work has a tempting appeal to young biologists, if
harassed by their seniors to produce results, or if admonished by editors to conform to a prescribed ritual of analysis before publication. ... the plea for justification by works ... is therefore
likely to fall on deaf ears, unless we reinstate reflective thinking in the university curriculum"
Hogben (a) - "we can already detect signs of such deterioration in the growing volume of published papers ... recording so-called significant conclusions which an earlier vintage would have regarded
merely as private clues for further exploration"
Savage - "to make measurements and then ignore their magnitude would ordinarily be pointless. Exclusive reliance on tests of significance obscures the fact that statistical significance does not
imply substantive significance"
Savage - "Null hypotheses of no difference are usually known to be false before the data are collected ... when they are, their rejection or acceptance simply reflects the size of the sample and the
power of the test, and is not a contribution to science"
Selvin - "High levels of predictability, explanation, and association are legitimate goals for social scientists; they are not the same as a high level of significance, nor is statistical
significance a substitute for them."
Cox - "Exact truth of a null hypothesis is very unlikely except in a genuine uniformity trial"
Cox - "Assumptions that we make, such as those concerning the form of the population sampled, are always untrue"
Gold - "An important weakness of much analysis in current social research is the failure of the analyst to consider the distinction between statistical significance and substantive importance."
Neyman - "If we go to the trouble of setting up an experiment this is because we want to establish the presence of some possible effect of a treatment." Comment: This is sadly reminiscent of Fisher
Neyman - "if experimenters realized how little is the chance of their experiments discovering what they are intended to discover, then a very substantial proportion of the experiments that are now in
progress would have been abandoned in favour of an increase in size of the remaining experiments, judged more important"
Neyman - "What was the probability (power) of detecting interactions ... in the experiment performed? ... The probability in question is frequently relatively low ... in cases of this kind the fact
that the test failed to detect the existence of interactions does not mean very much. In fact, they may exist and have gone undetected."
Kish - "Significance should stand for meaning and refer to substantive matter. ... I would recommend that statisticians discard the phrase 'test of significance' "
Kish - "the tests of null hypotheses of zero differences, of no relationships, are frequently weak, perhaps trivial statements of the researcher's aims ... in many cases, instead of the tests of
significance it would be more to the point to measure the magnitudes of the relationships, attaching proper statements of their sampling variation. The magnitudes of relationships cannot be measured
in terms of levels of significance"
McNemar - "too many users of the analysis of variance seem to regard the reaching of a mediocre level of significance as more important than any descriptive specification of the underlying averages"
McNemar - "so much of what should be regarded as preliminary gets published, then quoted as the last word, which it usually is because the investigator is too willing to rest on the laurels that come
from finding a significant difference. Why should he worry about the degree of relationship or its possible lack of linearity"
Natrella - "One reason for preferring to present a confidence interval statement (where possible) is that the confidence interval, by its width, tells more about the reliance that can be placed on
the results of the experiment than does a YES-NO test of significance."
Natrella - "the significance test without its OC [Operating Characteristic] curve has distorted the thinking in some experimental problems"
Natrella - "Confidence intervals give a feeling of the uncertainty of experimental evidence, and (very important) give it in the same units ... as the original observations."
Nunnally - "Few ... of the criticisms which will be made were originated by the author ... However, it is hoped that when the criticisms are brought together they will argue persuasively for a change
in viewpoint about statistical logic"
Nunnally - "the null-hypothesis models ... share a crippling flaw: in the real world the null hypothesis is almost never true, and it is usually nonsensical to perform an experiment with the sole aim
of rejecting the null hypothesis"
Nunnally - "when large numbers of subjects are used in studies, nearly all comparisons of means are 'significantly' different and all correlations are 'significantly' different from zero'
Nunnally - "If rejection of the null hypothesis were the real intention in psychological experiments, there usually would be no need to gather data"
Nunnally - "the mere rejection of a null hypothesis provides only meager information"
Nunnally - "Closely related to the null hypothesis is the notion that only enough subjects need be used in psychological experiments to obtain 'significant' results. This often encourages
experimenters to be content with very imprecise estimates of effects"
Nunnally - "analysis of variance should be considered primarily an estimation device"
Nunnally - "psychological research is often difficult and frustrating, and the frustration can lead to a 'flight into statistics.' With some, this takes the form of a preoccupation with statistics to
the point of divorcement from the headaches of empirical study. With others, the hypothesis-testing models provide a quick and easy way of finding 'significant differences' and an attendant sense of
Nunnally - "We should not feel proud when we see the psychologist smile and say 'the correlation is significant beyond the .01 level.' Perhaps that is the most that he can say, but he has no reason
to smile"
Rozeboom - "one can hardly avoid polemics when butchering sacred cows"
Rozeboom - "Whenever possible, the basic statistical report should be in the form of a confidence interval"
Rozeboom - "the stranglehold that conventional null hypothesis significance testing has clamped on publication standards must be broken"
Rozeboom - "The traditional null hypothesis significance-test method ... of statistical analysis is here vigorously excoriated for its inappropriateness as a method of inference"
Smith - "One feature ... which requires much more justification than is usually given, is the setting up of unplausible null hypotheses. For example, a statistician may set out a test to see whether
two drugs have exactly the same effect, or whether a regression line is exactly straight. These hypotheses can scarcely be taken literally"
Camilleri - "another problem associated with the test of significance. The particular level of significance chosen for an investigation is not a logical consequence of the theory of statistical
Camilleri - "The precision and empirical concreteness often associated with the test of significance are illusory and it would be a serious error to predicate our actions towards hypotheses on the
test of significance as if it were a reliable arbiter of truth"
Grant - "In view of our long-term strategy of improving our theories, our statistical tactics can be greatly improved by shifting emphasis away from over-all hypothesis testing in the direction of
statistical estimation. This always holds true when we are concerned with the actual size of one or more differences rather than simply in the existence of differences."
Binder - [With regard to Fisher's 1935 quote about experiments and null hypotheses] "This is not very edifying since one does not expect to prove any hypothesis by the methods of probabilistic
Binder - "when one tests a point prediction he usually knows before the first sample element is drawn that his empirical hypothesis is not precisely true"
Binder - "It is surely apparent that anyone who wants to obtain a significant difference badly enough can obtain one ... choose a sample size large enough"
Edwards et al. - "in typical applications, one of the hypotheses--the null hypothesis--is known by all concerned to be false from the outset"
Edwards et al. - "classical procedures quite typically are, from a Bayesian point of view, far too ready to reject the null hypotheses" Comment: Then this is a most convincing argument against the
use of Bayesian methods.
Edwards et al. - "Estimation is best when it is stable. Rejection of a null hypothesis is best when it is interocular"
Yates - "The most commonly occurring weakness ... is ... undue emphasis on tests of significance, and failure to recognise that in many types of experimental work estimates of treatment effects,
together with estimates of the errors to which they are subject, are the quantities of primary interest"
Yates - "In many experiments ... it is known that the null hypothesis ... is certainly untrue"
Sayn-Wittgenstein - "There is nothing wrong with the t-test; it has merely been used to give an answer that was never asked for. The Student t-test answers the question: 'Is there any real difference
between the means of the measurement by the old and the new method, or could the apparent difference have arisen from random variation?' We already know that there is a real difference, so the
question is pointless. The question we should have answered is: 'How big is the difference between the two sets of measurements, and how precisely have we determined it?'"
Kempthorne - "a continuously distributed random variable ... one never actually observes such random variables, ... all observations are ... discrete"
Bakan - "Little of what is contained in this paper is not already available in the literature"
Bakan - "the test of significance has been carrying too much of the burden of scientific inference. It may well be the case that wise and ingenious investigators can find their way to reasonable
conclusions from data because and in spite of their procedures. Too often, however, even wise and ingenious investigators ... tend to credit the test of significance with properties it does not have"
Bakan - "a priori reasons for believing that the null hypothesis is generally false anyway. One of the common experiences of research workers is the very high frequency with which significant results
are obtained with large samples"
Bakan - "there is really no good reason to expect the null hypothesis to be true in any population ... Why should any correlation coefficient be exactly .00 in the population? ... why should
different drugs have exactly the same effect on any population parameter"
Bakan - "if the test of significance is really of such limited appropriateness ... we would be much better off if we were to attempt to estimate the magnitude of the parameters in the populations"
Bakan - "When we reach a point where our statistical procedures are substitutes instead of aids to thought, and we are led to absurdities, then we must return to common sense"
Bakan - "we need to get on with the business of generating ... hypotheses and proceed to do investigations and make inferences which bear on them, instead of ... testing the statistical null
hypothesis in any number of contexts in which we have every reason to suppose that it is false in the first place"
LaForge - "Confidence regions ... for estimation of unknown parameters ... are appropriate for most scientific research and reporting"
Meehl - "in psychological and sociological investigations involving very large numbers of subjects, it is regularly found that almost all correlations or differences between means are statistically
Meehl - "it is highly unlikely that any psychologically discriminable stimulation which we apply to an experimental subject would exert literally zero effect upon any aspect of his performance"
Meehl - "a fairly widespread tendency to report experimental findings with a liberal use of ad hoc explanations for those that didn't 'pan out' "
Meehl - "our eager-beaver researcher, undismayed by logic-of-science considerations and relying blissfully on the 'exactitude' of modern statistical hypothesis-testing, has produced a long
publication list and been promoted to a full professorship. In terms of his contribution to the enduring body of psychological knowledge, he has done hardly anything. His true position is that of a
potent-but-sterile intellectual rake, who leaves in his merry path a long train of ravished maidens but no viable scientific offspring"
Skipper Jr., Guenther and Nass - "The current obsession with .05 ... has the consequence of differentiating significant research findings and those best forgotten, published studies from unpublished
ones, and renewal of grants from termination. It would not be difficult to document the joy experienced by a social scientist when his F ratio or t value yields significance at .05, nor his horror
when the table reads 'only' .10 or .06. One comes to internalize the difference between .05 and .06 as 'right' vs. 'wrong,' 'creditable' vs. 'embarrassing,' 'success' vs. 'failure' "
Skipper Jr., Guenther and Nass - "blind adherence to the .05 level denies any consideration of alternative strategies, and it is a serious impediment to the interpretation of data"
Lykken - "Unless one of the variables is wholly unreliable so that the values obtained are strictly random, it would be foolish to suppose that the correlation between any two variables is
identically equal to 0.0000... (or that the effect of some treatment or the difference between two groups is exactly zero)"
Lykken - "the finding of statistical significance is perhaps the least important attribute of a good experiment
Lykken - "The value of any research can be determined, not from the statistical results, but only by skilled, subjective evaluation of the coherence and reasonableness of the theory, the degree of
experimental control employed, the sophistication of the measuring techniques, the scientific or practical importance of the phenomena studied"
Lykken - "Editors must be bold enough to take responsibility for deciding which studies are good and which are not, without resorting to letting the p value of the significance tests determine this
O'Brien and Shapiro - "It is this distinction between statistical significance and practical importance that seems often to be overlooked by many researchers."
Stevens - "What does it mean? Can no one recognize a decisive result without a significance test? How much can the burgeoning of computation be blamed on fad? How often does inferential computation
serve as a premature excuse for going to press? Whether the scholar has discovered something or not, he can sometimes subject his data to an analysis of variance, a t test, or some other device that
will produce a so-called objective measure of 'significance.' The illusion of objectivity seems to preserve itself despite the admitted necessity for the investigator to make improbable assumptions,
and to pluck off the top of his head a figure for the level of probability that he will consider significant."
Stevens - "The extreme stochastophobe is likely to ask: What scientific discoveries owe their existence to the techniques of statistical analysis or inference?"
Stevens - "The aspersions voiced by stochastophobes fall mainly on those scientists who seem, by the surfeit of their statistical chants, to turn data treatment into hierurgy. These are not the
statisticians themselves, for they see statistics for what it is, a straightforward discipline designed to amplify the power of common sense in the discernment of order amid complexity."
Morrison and Henkel - "In addition to important technical errors, fundamental errors in the philosophy of science are frequently involved in this indiscriminate use of the tests [of significance]"
Morrison and Henkel - "What we say is frankly polemical, though not original"
Morrison and Henkel - "we usually know in advance of testing that the null hypothesis is false"
Morrison and Henkel - "To say we want to be conservative, to guard against accepting more than 5 percent of our false alternative hypotheses as true ... is nonsense in scientific research"
Morrison and Henkel - "Researchers have long recognized the unfortunate connotations and consequences of the term 'significance,' and we propose it is time for a change"
Morrison and Henkel - "there is evidence that significance tests have been a genuine block to achieving ... knowledge"
Morrison and Henkel - "we are convinced that the diversion of energy away from the rituals of significance testing in basic scientific research will be a worthy first step toward this goal [solving
the problems of scientific inference] and will ... be one difference in behavioral science that is significant"
Morrison and Henkel - "scientists by and large adjust their beliefs about a hypothesis in informal ways on the basis of evidence, regardless of the formal decisions to reject or accept hypotheses
made by individual researchers"
Morrison and Henkel - "significance testing in behavioral research is deeply implicated in our false search for empirical association, rather than a search for hypotheses that explain"
Morrison and Henkel - "many researchers ... will regard the abandonment of the tests a threat to the very foundations of empirical behavioural research. In fact, our experience (among sociologists)
has been that many researchers accept all or most of our arguments on rational grounds, but keep using significance tests as before simply because use is a strong norm in the discipline"
Nelder - "multiple comparison methods have no place at all in the interpretation of data"
Tversky and Kahneman - "the statistical power of many psychological studies is ridiculously low. This is a self-defeating practice: it makes for frustrated scientists and inefficient research. The
investigator who tests a valid hypothesis but fails to obtain significant results cannot help but regard nature as untrustworthy or even hostile"
Tversky and Kahneman - "Significance levels are usually computed and reported, but power and confidence limits are not. Perhaps they should be."
Tversky and Kahneman - "The emphasis on significance levels tends to obscure a fundamental distinction between the size of an effect and its statistical significance."
Hays - "There is surely nothing on earth that is completely independent of anything else. The strength of an association may approach zero, but it should seldom or never be exactly zero."
Tukey - "The twin assumptions of normality of distribution and homogeneity of variance are not ever exactly fulfilled in practice, and often they do not even hold to a good approximation."
Box - "all models are wrong"
Box - "in nature there never was a normal distribution, there never was a straight line"
Box - "experiments where errors cannot be expected to be independent are very common"
Chew - "the research worker has been oversold on hypothesis testing. Just as no two peas in a pod are identical, no two treatment means will be exactly equal. ... It seems ridiculous ... to test a
hypothesis that we a priori know is almost certain to be false"
Graybill - "when making inferences about parameters ... hypothesis tests should seldom be used if confidence intervals are available ... the confidence intervals could lead to opposite practical
conclusions when a test suggests rejection of H[0] ... even though H[0] is not rejected, the confidence interval gives more useful information"
Kempthorne - "one will not ever have a random sample from a normal distribution"
Kempthorne - "no one, I think, really believes in the possibility of sharp null hypotheses -- that two means are absolutely equal in noisy sciences"
Pratt - "tests [of hypotheses] provide a poor model of most real problems, usually so poor that their objectivity is tangential and often too poor to be useful"
Pratt - "And when, as so often, the test is of a hypothesis known to be false ... the relevance of the conventional testing approach remains to be explicated"
Pratt - "This reduces the role of tests essentially to convention. Convention is useful in daily life, law, religion, and politics, but it impedes philosophy"
Barndorff-Nielsen - "Most of the models considered in statistics are but rough approximations to reality"
Chew - "Testing the equality of 2 true treatment means is ridiculous. They will always be different, at least beyond the hundredth decimal place."
Cox - "Overemphasis on tests of significance at the expense especially of interval estimation has long been condemned"
Cox - "Admittedly all real measurements are discrete"
Cox - "there are considerable dangers in overemphasizing the role of significance tests in the interpretation of data"
Cox - "statistical significance is quite different from scientific significance and ... therefore estimation ... of the magnitude of effects is in general essential regardless of whether
statistically significant departure from the null hypothesis is achieved"
Guttman - "lack of interaction in analysis of variance and ... lack of correlation in bivariate distributions--such nullities would be quite surprising phenomena in the usual interactive complexities
of social life"
Guttman - "Estimation and approximation may be more fruitful than significance in developing science, never forgetting replication."
Guttman - "It [the normal distribution] is seldom, if ever, observed in nature."
Carver - "Statistical significance testing has involved more fantasy than fact. The emphasis on statistical significance over scientific significance in educational research represents a corrupt form
of the scientific method. Educational research would be better off if it stopped testing its results for statistical significance."
Carver - "Statistical significance ordinarily depends upon how many subjects are used in the research. the more subjects the researcher uses, the more likely the researcher will be to get
statistically significant results."
Healy - "it is widely agreed among statisticians ... that significance testing is not the be-all and end-all of the subject"
Healy - "The commonest agricultural experiments ... are fertilizer and variety trials. In neither of these is there any question of the population treatment means being identical ... the objective is
to measure how big the differences are"
Kruskal - "statistical significance of a sample bears no necessary relationship to possible subject-matter significance"
Kruskal - "it is easy to ... throw out an interesting baby with the nonsignificant bath water. Lack of statistical significance at a conventional level does not mean that no real effect is present;
it means only that no real effect is clearly seen from the data. That is why it is of the highest importance to look at power and to compute confidence intervals"
Kruskal - "Another criticism of standard significance tests is that in most applications it is known beforehand that the null hypothesis cannot be exactly true"
Kruskal - "Because of the relative simplicity of its structure, significance testing has been overemphasized in some presentations of statistics, and as a result some students come mistakenly to feel
that statistics is little else than significance testing"
Meehl - "I suggest to you that Sir Ronald has befuddled us, mesmerized us, and led us down the primrose path. I believe that the almost universal reliance on merely refuting the null hypothesis as
the standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in
the history of psychology."
Meehl - "probably all theories are false in the eyes of God"
Meehl - "as I believe is generally recognized by statisticians today and by thoughtful social scientists, the null hypothesis, taken literally, is always false"
Rothman - "The P value ... conveys no information about the extent to which two groups differ or two variables are associated. ... P vales serve poorly as descriptive statistics".
Rothman - "By choosing a measure that quantifies the degree of association or effect in the data and then calculating a confidence interval, researchers can summarize the strength of association in
their data and allow for random variation in a simple and unambiguous way."
Chew - "... means are significantly different ... This is a very unfortunate choice of terminology, because the significant difference in the statistical sense is often taken, incorrectly, as being
significant in the practical or economic sense"
Chew - "Experimenters are often unhappy if the decision from the analysis of variance is to accept H[0]. ... The correct interpretation in this case is that all true differences are 'small' and/or
the number of replicates is insufficient"
Chew - "I have tried to steer them [agricultural researchers] away from testing H[0]. I maintain that on a priori physical, chemical and biological grounds, H[0] is always false in all realistic
experiments, and H[0] will always be rejected given enough replication"
Chew - "As Confucius might have said, if the difference isn't different enough to make a difference, what's the difference?"
Kruskal - "the traditional table [analysis of variance table] with its terminology and seductive additivities has in fact often led to superficiality of analysis"
Cox and Snell - "Models are always to some extent tentative"
Little - "The idea that one should proceed no further with an analysis, once a non-significant F-value for treatments is found, has led many experimenters to overlook important information in the
interpretation of their data"
Cox - "It is very bad practice to summarise an important investigation solely by a value of P".
Cox - "The criterion for publication should be the achievement of reasonable precision and not whether a significant effect has been found"
Preece - "over-emphasis on significance-testing continues"
Preece - "the norm should be that only a standard error is quoted for comparing means from an experiment"
Preece - "experimenters having difficulty in interpreting their results, after the results have been converted into an analysis of variance, must often be urged to think as if they had never heard of
statistics; only then is fettered rote-thinking abandoned in favour of common-sense and intelligence"
Box - "The resultant magnification of the importance of formal hypothesis tests has inadvertently led to underestimation by scientists of the area in which statistical methods can be of value and to
a wide misunderstanding of their purpose"
Bryan-Jones and Finney - "Of central importance to clear presentation is the standard error of a mean"
Bryan-Jones and Finney - "In interpreting and in presenting experimental results there is no adequate substitute for thought - thought about the questions to be asked, thought about the nature and
weight of evidence the data provide on these questions, and thought about how the story can be told with clarity and full honesty to a reader. Statistical techniques must be chosen and used to aid,
but not to replace, relevant thought"
Bryan-Jones and Finney - "Our message is not new"
Good - "with general principles ... it is usually possible to find something in the past that to some extent foreshadows it"
Good - "A large enough sample will usually lead to the rejection of almost any null hypothesis ... Why bother to carry out a statistical experiment to test a null hypothesis if it is known in advance
that the hypothesis cannot be exactly true"
Jones - "There is a rising feeling among statisticians that hypothesis tests ... are not the most meaningful analyses"
Jones - "preoccupation with testing 'is there an interaction"' in factorial experiments, ... emphasis should be on 'how strong is the interaction?' "
Jones - "The difference between 'statistically significant' and 'biologically significant' needs to be appreciated much more than it is now"
Jones - "Reporting of results in terms of confidence intervals instead of hypothesis tests should be strongly encouraged"
Preece - "Statistical 'recipes' are followed blindly, and ritual has taken over from scientific thinking"
Preece - "The ritualistic use of multiple-range tests--often when the null hypothesis is a priori untenable ...- is a disease"
Altman - "Somehow there has developed a widespread belief that statistical analysis is legitimate only if it includes significance testing. This belief leads to, and is fostered by, numerous
introductory statistics texts that are little more than catalogues of techniques for performing significance tests"
Chatfield - "differences are 'significant' ... nearly always ... in large samples"
Chatfield - "Within the last decade or so, practising statisticians have begun to question the relevance of some Statistics courses ... However ... Statistics teaching is still often dominated by
formal mathematics"
Chatfield - "tests on outliers are less important than advice from 'people in the field' "
Chatfield - "significance tests ... are also widely overused and misused"
Chatfield - "an ANOVA will not tell us how a null hypothesis is rejected"
Chatfield - "Rather than ask if these differences are statistically significant, it seems more important to ask if they are of educational importance"
Chatfield - "All statistical techniques, however sophisticated, should be subordinate to subjective judgement"
Chatfield - "it has ... become impossible to get results published in some medical, psychological and biological journals without reporting significance values even when of doubtful validity"
Cormack - "Estimates and measures of variability are more valuable than hypothesis tests"
Guttman - "Since a point hypothesis is not to be expected in practice to be exactly true, but only approximate, a proper test of significance should almost always show significance for large enough
samples. So the whole game of testing point hypotheses, power analysis notwithstanding, is but a mathematical game without empirical importance."
Nelder - "the grotesque emphasis on significance tests in statistics courses of all kinds ... is taught to people, who if they come away with no other notion, will remember that statistics is about
tests for significant differences. ... The apparatus on which their statistics course has been constructed is often worse than irrelevant, it is misleading about what is important in examining data
and making inferences"
Chernoff - "Analysis of variance ... stems from a hypothesis-testing formulation that is difficult to take seriously and would be of limited value for making final conclusions."
Gardner and Altman - "In this approach [hypothesis testing] data are examined in relation to a statistical 'null' hypothesis, and the practice has led to the mistaken belief that studies should aim
at obtaining 'statistical significance.' On the contrary, the purpose of most research investigations in medicine is to determine the magnitude of some factor(s) of interest."
Gardner and Altman - "there is a tendency to equate statistical significance with medical importance or biological relevance"
Gardner and Altman - "Confidence intervals ... should become the standard method for presenting the statistical results of major findings."
Jones and Matloff - "We recommend that authors display the estimate of the difference and the confidence limit for this difference"
Jones and Matloff - "at its worst, the results of statistical hypothesis testing can be seriously misleading, and at its best, it offers no informational advantage over its alternatives"
Jones and Matloff - "the ubiquitous problem of synonymizing statistical significance with biological significance"
Jones and Matloff - "all populations are different, a priori"
Jones and Matloff - "The only remedy ... is for journal editors to be keenly aware of the problems associated with hypothesis tests, and to be sympathetic, if not strongly encouraging, toward
individuals who are taking the initial lead in phasing them out"
Lindley - "estimation procedures provide more information [than significance tests]: they tell one about reasonable alternatives and not just about the reasonableness of one value"
Perry - "significance tests have a limited role in biological experiments because 1) significance refers merely to plausibility, not to biological importance ... 2) theories may be proved to be
strictly untrue but still of practical use ... 3) a null hypothesis is often known to be false before experimentation 4) the outcome of a test often depends merely on the size of the experiment ...
the more replicates, the greater the chance of achieving significance; 5) in agricultural and ecological entomology, the really critical, single experiment is rare; 6) results may indicate merely
that a hypothesis is rejected, but not give the magnitude of departures from the hypothesis ... 7) the exact nature of tests is often exaggerated and ignores the fact that all tests are based on
assumptions that rarely hold in practice"
Perry - "A confidence interval certainly gives more information than the result of a significance test alone ... I ... recommend its use [standard error of each mean]"
Warren - "the word 'significant' could be abolished ... Based on a dictionary definition, one might expect that results that are declared significant would be important, meaningful, or consequential.
Being 'significant at an arbitrary probability level,' ... ensures none of these"
Warren - "the researcher has the right to make inferences that may seem contrary to the objective analysis [statistical analysis], provided that is what he or she really believes and that the
objective results have been given due consideration"
Warren - "I have seen authors declare that means were not different, but with less than a 50% chance of detecting a difference the magnitude of which would be important; if such a difference existed
they would have been better off tossing a coin and not doing the experiment"
Berger and Sellke - "even if testing of a point null hypothesis were disreputable, the reality is that people do it all the time ... and we should do our best to see that it is done well". Comment:
On the contrary, if we assist others to perform disreputable tests then we ourselves also become disreputable.
Casella and Berger - "In a large majority of problems (especially location problems) hypothesis testing is inappropriate: Set up the confidence interval and be done with it!"
Hinkley - "for problems where the usual null hypothesis defines a special value for a parameter, surely it would be more informative to give a confidence range for that parameter"
Vardeman - "Competent scientists do not believe their own models or theories, but rather treat them as convenient fictions. ... The issue to a scientist is not whether a model is true, but rather
whether there is another whose predictive power is enough better to justify movement from today's fiction to a new one"
Vardeman - "Too much of what all statisticians do ... is blatantly subjective for any of us to kid ourselves or the users of our technology into believing that we have operated 'impartially' in any
true sense. ... We can do what seems to us most appropriate, but we can not be objective and would do well to avoid language that hints to the contrary"
Finney - "rigid dependence upon significance tests in single experiments is to be deplored"
Finney - "The primary purpose of analysis of variance is to produce estimates of one or more error mean squares, and not (as is often believed) to provide significance tests"
Finney - "A null hypothesis that yields under two different treatments have identical expectations is scarcely very plausible, and its rejection by a significance test is more dependent upon the size
of an experiment than upon its untruth"
Finney - "I have failed to find a single instance in which the Duncan test was helpful, and I doubt whether any of the alternative tests [multiple range significance tests] would please me better"
Finney - "Is it ever worth basing analysis and interpretation of an experiment on the inherently implausible null hypothesis that two (or more) recognizably distinct cultivars have identical yield
Gauch - "the mere declaration that the interaction is or is not significant is far too coarse a result to give agronomists or plant breeders effective insight into their research material"
Luce - "I could only wish for every psychologist to read this chapter as an antidote to mindless hypothesis testing in lieu of doing good science: measuring effects, constructing substantive theories
of some depth, and developing probability models and statistical procedures suited to these theories."
Chatfield - "We all know ... that the misuse of statistics and an overemphasis on p values is endemic in many scientific journals"
Finney (a) - "The analysis of data ... requires assumptions ... The assumptions are never correct"
Finney (b)- "I confidently assert that yields of potatoes from plots of a well-conducted field experiments [sic] can be assumed independently and Normally distributed with constant variance; I do not
believe this"
Finney (b)- "the Blind need frequent warnings and help in avoiding the multiple comparison test procedures that some editors demand but that to me appear completely devoid of practical utility"
Gigerenzer et al. - "In some fields, a strikingly narrow understanding of statistical significance made a significant result seem to be the ultimate purpose of research, and non-significance the sign
of a badly conducted experiment - hence with almost no chance of publication."
Healy - "it is a travesty to describe a p value ... as 'simple, objective and easily interpreted' ... To use it as a measure of closeness between model and data is to invite confusion"
Kruskal and Majors - "We are also concerned about the use of statistical significance--P values--to measure importance; this is like the old confusion of substantive with statistical significance"
Moore and McCabe - "Some hesitation about the unthinking use of significance tests is a sign of statistical maturity"
Moore and McCabe - "It is usually wise to give a confidence interval for the parameter in which you are interested"
Moore and McCabe - "A null hypothesis that is ... false can become widely believed if repeated attempts to find evidence against it fail because of low power"
Moore and McCabe - "Other eminent statisticians have argued that if 'decision' is given a broad meaning, almost all problems of statistical inference can be posed as problems of making decisions in
the presence of uncertainty"
Rosnow and Rosenthal - "A result that is statistically significant is not necessarily practically significant as judged by the magnitude of the effect."
Cohen - "The null hypothesis ... is always false in the real world. ... If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and
lead to its rejection."
Cohen - "I believe ... that hypothesis testing has been greatly overemphasized in psychology and in other disciplines that use it."
Cohen - "The prevailing yes-no decision at the magic .05 level from a single research is a far cry from the use of informed judgment. Science simply doesn't work that way. A successful piece of
research doesn't conclusively settle an issue, it just makes some theoretical proposition to some degree more likely. ... There is no ontological basis for dichotomous decision making in
psychological inquiry."
Hahn - "hypothesis tests (irrelevant for most practical applications)"
Hunter - "How about 'alpha and beta risks' and 'testing the null hypothesis'? ... The very beginning language employed by the statistician describes phenomena in which engineers/physical scientists
have little practical interest! They want to know how many, how much, and how well ... Required are interval estimates. We offer instead hypothesis tests and power curves"
Meehl - "All statistical tables should be required to include means and standard deviations, rather than merely a t, F or χ², or even worse only statistical significance."
Meehl - "Confidence intervals for parameters ought regularly to be provided."
Meehl - "Since the null hypothesis refutation racket is 'steady work' and has the merits of an automated research grinding device, scholars who are pardonably devoted to making more money and keeping
their jobs ... are unlikely to contemplate with equanimity a criticism that says that their whole procedure is scientifically feckless and that they should quit doing it and do something else. ...
that might ... mean that they should quit the academy and make an honest living selling shoes"
Preece - "I cannot see how anyone could now agree with this [Fisher's 1935 quote about experiments and null hypotheses]"
Street - "Fisher ... appears to have placed an undue emphasis on the significance test"
Street - "in many experiments it is well known ... that there are differences among the treatments. The point of the experiment is to estimate ... and provide ... standard errors. One of the
consequences of this emphasis on significance tests is that some scientists ... have come to see a significant result as an end in itself"
Matloff - "statistical significance is not the same as scientific significance"
Matloff - "the test is asking whether a certain condition holds exactly, and this exactness is almost never of scientific interest"
Matloff - With regard to a goodness-of-fit test to answer whether certain ratios have given exact values, "we know a priori this is not true; no model can completely capture all possible genetical
Matloff - "the number of stars by itself is relevant only to the question of whether H[0] is exactly true--a question which is almost always not of interest to us, especially because we usually know
a priori that H[0] cannot be exactly true."
Matloff - "problems stemming from the fact that hypothesis tests do not address questions of scientific interest"
Matloff - "the 'star system' includes neither an E part [estimate] nor an A part [accuracy] and thus excludes vital information ... There is no such danger in basing our analysis on CIs [confidence
Matloff - "no population has an exact normal distribution, nor are variances exactly homogeneous, and independence assumptions are often violated to at least some degree"
Tukey - "Statisticians classically asked the wrong question-and were willing to answer with a lie, one that was often a downright lie. They asked 'Are the effects of A and B different?' and they were
willing to answer 'no.' All we know about the world teaches us that the effects of A and B are always different-in some decimal place-for any A and B. Thus asking 'Are the effects different?' is foolish."
Tukey - "Empirical knowledge is always fuzzy! And theoretical knowledge, like all the laws of physics, as of today's date, is always wrong-in detail, though possibly providing some very good
approximations indeed."
Pearce - "In a biological context interactions are common, so it is better to play safe and regard any appreciable interaction as real whether it is significant or not"
Upton - "The experimenter must keep in mind that significance at the 5% level will only coincide with practical significance by chance!"
Wang - "Testing of statistical hypotheses ... are often irrelevant, wrong-headed, or both"
Wang - "the tyranny of the N-P [Neyman-Pearson] theory in many branches of empirical science is detrimental, not advantageous, to the course of science"
Boardman - "He [W. E. Deming] went on to suggest that the problem lay in teaching 'what is wrong.' The list of evils taught in courses on statistics ... is a long one. One of the topics included
hypothesis testing. Personally I have found few, if any, occasions where such tests are appropriate."
Cohen - "I make no pretense of the originality of my remarks in this article."
Cohen - "I argue herein that NHST [null hypothesis significance testing] has not only failed to support the advance of psychology as a science but also has seriously impeded it."
Cohen - "my ... recommendation is that ... we routinely report effect sizes in the form of confidence limits."
Cohen - "they [confidence limits] are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large!"
Inman - "Like many working scientists since, Buchanan-Wollaston professed a belief that commonly used statistical tests were either obvious or irrelevant to the scientific problem of interest"
McCloskey - "scientists care about whether a result is statistically significant, but they should care much more about whether it is meaningful"
McCloskey - "the scale for measuring ... effects ... or ... changes ... is not so clear: you may get statistically impeccable answers that make little difference to anyone or 'insignificant' ones
that are absolutely crucial"
Ranstam - "A common misconception is that an effect exists only if it is statistically significant and that it does not exist if it is not [statistically significant]"
Ranstam - "When using confidence intervals, clinical rather than statistical significance is emphasized. Moreover, confidence intervals, by their width, disclose the statistical precision of the
Tamhane - "The point of departure in ranking-and-selection methodology is the recognition that the treatments being compared are in fact different, and a sufficiently large sample size will
demonstrate this fact with any preassigned confidence level. Therefore, it is futile to test the null hypothesis of homogeneity."
|
{"url":"http://www.npwrc.usgs.gov/resource/methods/hypotest/?C=M%3BO=A","timestamp":"2014-04-18T23:17:06Z","content_type":null,"content_length":"89351","record_id":"<urn:uuid:ebc03250-9c31-480b-9286-528ca0304ae6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Anomalous Readings
I'm pretty sure I've talked a lot more about my math class than my physics class this semester. With the semester winding to a close, I don't have much time left to even the score. But here's an
attempt. The reason for the relative silence on the subject of physics is, however, math-related.
As I mentioned in an earlier post, the third semester of intro physics is usually referred to as modern physics. At my community college, it’s “Waves, Optics, and Modern Physics.” The course covers a
lot of disparate material. While the first half of the semester was pretty much all optics, the second half has been the modern physics component.
What does “modern physics” mean? Well, looking at the syllabus, it means a 7-week span in which we talked about relativity, quantum mechanics, atomic physics, and nuclear physics. All of these are
entire fields unto themselves, but we spent no more than a week or two on each topic.
I predicted during the summer that I wouldn’t mind the abbreviated nature of the course, but that prediction turned out to be wrong. Here’s why.
The first two semesters of physics at my community college were, while not perfect by any stretch of the imagination, revelatory in comparison to the third semester. I enjoyed them a great deal
because physical insight arose from mathematical foundations. With calculus, much of introductory physics becomes clear.
You can sit down and derive the equations of kinematics that govern how objects move in space. You can write integrals that tell you how charges behave next to particular surfaces. Rather than being
told to plug and chug through a series of equations, you’re asked to use your knowledge of calculus to come up with ways to solve problems.
This is in stark contrast to what I remember of high school physics. There, we were given formulas plucked from textbooks and told to use them in a variety of word problems. Kinetic energy was 1/2mv^
2, because science. There was no physical insight to be gained, because there was no deeper understanding of the math behind the physics.
And so it is in modern physics as well. The mantra of my physics textbook has become, “We won’t go into the details.” Where before the textbook might say, “We leave the details as an exercise for the
reader," now there is no expectation that we could possibly comprehend the details. The math is "fairly complex," we are told, but here are some formulas we can use in carefully circumscribed circumstances.
It happened during the optics unit, too. Light, when acting as a wave, reflects and refracts and diffracts. Why? Well, if you use a principle with no physical basis, you can derive some of the
behaviors that light exhibits. But why would you use such a principle? Because you can derive some of the behaviors that light exhibits, of course.
But it’s much worse in modern physics. The foundation of quantum mechanics is the Schrödinger equation, which is a partial differential equation that treats particles as waves. Solutions to this
equation are functions called Ψ (psi). What is Ψ? Well, it’s a function that, with some inputs, produces a complex number. Complex numbers have no physical meaning, however. For example, what would
it mean to be the square root of negative one meters away from someone? Exactly.
So to get something useful out of Ψ, you have to take the square of its absolute value, |Ψ|^2. Doing so gives you the probability of finding a particle in some particular place or state. Why? Because you can't be the square root of
negative one meters away from someone, that’s why. The textbook draws a parallel between Ψ and the photon picture of diffraction, in which the square of something also represents a probability, but
gives us no mathematical reason to believe this. Our professor didn’t even try and was in fact quite flippant about the hand-waving nature of the whole operation.
If you stick a particle (like an electron) inside of a box (like an atom), quantum mechanics and the Schrödinger equation tell you that the electron can only exist at specific energy levels. How do
we find those energy levels? (This is the essence of atomic physics and chemistry, by the way.) Well, it involves “solving a transcendental equation by numerical approximation.” Great, let’s get
started! “We won’t go into the details,” the textbook continues. Oh, I see.
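Since the textbook won't go into the details, here's a rough sketch of what that "numerical approximation" can look like for a finite square well. To be clear, the well width, the well depth, and the use of SciPy's brentq root-finder are my own illustrative choices, not anything from the course:

```python
# Sketch: bound-state energies of a finite square well, found by solving the
# transcendental equations numerically. L, V0, and the particle mass are
# illustrative assumptions (an electron in a 1 nm, 10 eV well).
import numpy as np
from scipy.optimize import brentq

hbar = 1.054571817e-34      # J*s
m    = 9.1093837015e-31     # electron mass, kg
eV   = 1.602176634e-19      # J per eV

L, V0 = 1e-9, 10 * eV
z0 = (L / 2) * np.sqrt(2 * m * V0) / hbar     # dimensionless well "strength"

# The usual conditions are z*tan(z) = sqrt(z0^2 - z^2) (even states) and
# -z*cot(z) = sqrt(z0^2 - z^2) (odd states); rewritten here so they have no poles.
def even(z):
    return z * np.sin(z) - np.sqrt(z0**2 - z**2) * np.cos(z)

def odd(z):
    return -z * np.cos(z) - np.sqrt(z0**2 - z**2) * np.sin(z)

def energies(f):
    zs = np.linspace(1e-6, z0 - 1e-6, 20000)
    vals = f(zs)
    roots = [brentq(f, zs[i], zs[i + 1])
             for i in range(len(zs) - 1) if vals[i] * vals[i + 1] < 0]
    # convert each root z = kL/2 back to an energy E = (hbar*k)^2 / (2m)
    return [(2 * hbar * z / L) ** 2 / (2 * m) / eV for z in roots]

print("even-state energies (eV):", energies(even))
print("odd-state energies (eV): ", energies(odd))
```

The point isn't the particular numbers; it's that the skipped details often amount to a root-finding loop like this one.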
Later, the textbook talks about quantum tunneling, the strange phenomenon by which particles on one side of a barrier can suddenly appear on the other side. How does this work? Well, it turns out the
math is “fairly involved.” Oh, I see.
This kind of treatment goes on for much of the text.
Modern physics treats us as if we are high school students again. Explanations are either entirely absent or sketchy at best. Math is handed down on high in the form of equations to be used when
needed. Insight is nowhere to be found.
Unfortunately, there might not be a great solution to this frustrating conundrum. While the basics of kinematics and electromagnetism can be understood with a couple semesters of calculus, modern
physics seems to require a stronger mathematical foundation. But you can’t very well tell students to get back to the physics after a couple more years of math. That’s a surefire way to lose your
students’ interest.
So we’re left with a primer course, where our appetites are whetted to the extent that our rudimentary tools allow. My interest in physics has not been stimulated, however. I’m no less interested
than I was before, but what’s really on my mind is the math. More than the physics, I want to know the math behind it. No, I’m not saying I want to be a mathematician now. I’m just saying that I
can’t be a physicist without being a little bit a mathematician.
This post may seem a little out there, but that might be the point.
Last week in differential equations we learned about a process our textbook called complexification. (You can go ahead and google that, but near as I can tell what you’ll find is only vaguely related
to what my textbook is talking about.) Complexification is a way to take a differential equation that looks like it’s about sines and cosines and instead make it about complex exponentials. What does
that mean?
Well, I think most people know a little bit about sine and cosine functions. At the very least, I think most people know what a sine wave looks like.
[Image: a sine wave. Shout out to Wikipedia.]
Such a wave is produced by a function that looks something like f(x) = sin(x). Sine and cosine come from relationships between triangles and circles, but they can be used to model periodic,
fluctuating motion. For example, the way in which alternating current goes back and forth between positive and negative is sinusoidal.
On the other hand, exponential functions don’t seem at all related. Exponential functions look something like f(x) = e^x, and their graphs have shapes such as this:
[Image: an exponential growth curve. Thanks again, Wikipedia.]
Exponential functions are used to model systems such as population growth or the spread of a disease. These are systems where growth starts out small, but as the quantity being measured grows larger,
so too does the rate of growth.
Now, at first blush there doesn’t appear to be a lot of common ground between sine functions and exponential functions. But it turns out there is, if you throw in complex numbers. What’s a complex
number? It’s a number that includes i, the imaginary unit, which is defined to be the square root of -1. You may have heard of this before, or you may have only heard that you can’t take the square
root of a negative number. Well, you can: you just call it i.
So what’s the connection? The connection is Euler’s formula, which looks like this:
e^(ix) = cos(x) + i sin(x).
Explaining why this formula is true turns out to be very complicated and a bit beyond what I can do. So just trust me on this one. (Or look it up yourself and try to figure it out.) Regardless, by
complexifying, you have found a connection between exponentials and sinusoids.
How does that help with differential equations? The answer is that complexifying your differential equation can often make it simpler to solve.
Take the following differential equation:
d^2y/dt^2 + ky = cos(t).
This could be a model of an undamped harmonic oscillator with a sinusoidal forcing function. It’s not really important what that means, except to say you would guess (guessing happens a lot in
differential equations) that the solution to this equation involves sinusoidal functions. The problem is, you don’t know if it will involve sine, cosine, or some combination of the two. You can
figure it out, but it takes a lot of messy algebra.
A simpler way to do it is by complexifying. You can guess instead that the solution will involve complex exponentials, and you can justify this guess through Euler’s formula. After all, there is a
plain old cosine just sitting around in Euler's formula, implying that the solution to your equation could involve a term such as e^(it).
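Just to make the bookkeeping concrete, here is a small sketch of that complexified guess carried out in SymPy. The use of SymPy, the symbol names, and the assumption that k is a positive constant different from 1 are mine, not the textbook's:

```python
# Sketch: solving y'' + k*y = cos(t) by complexification.
# Swap cos(t) for exp(i*t), guess y = A*exp(i*t), then keep the real part.
import sympy as sp

t = sp.symbols('t', real=True)
k = sp.symbols('k', real=True, positive=True)   # assume k != 1 below
A = sp.symbols('A')

guess = A * sp.exp(sp.I * t)
A_val = sp.solve(sp.Eq(sp.diff(guess, t, 2) + k * guess, sp.exp(sp.I * t)), A)[0]
# A_val simplifies to 1/(k - 1), so the complex solution is exp(i*t)/(k - 1)

y_p = sp.re(sp.expand_complex(A_val * sp.exp(sp.I * t)))   # cos(t)/(k - 1)
print(sp.simplify(y_p))

# sanity check: plug it back into the original, real equation
print(sp.simplify(sp.diff(y_p, t, 2) + k * y_p - sp.cos(t)))   # prints 0
```

No juggling of separate sine and cosine terms; the real part of the complex answer hands you the particular solution directly.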
This idea of complexification got me thinking about the topic of explaining things to people. You see, I think I tend to do a bit of complexifying myself a lot of the time. Now, I don’t mean I throw
complex numbers into the mix when I don’t technically have to; rather, I think I complexify by adding more than is necessary to my explanations of things. I do this instead of simplifying.
Why would I do this? After all, simplifying your explanation is going to make it easier for people to understand. Complexifying, by comparison, should make things harder to understand. But
complexifying can also show connections that weren’t immediately obvious beforehand. I mean, we just saw that complexifying shows a connection between exponential functions and sinusoidal functions.
Another example is Euler’s identity, which can be arrived at by performing some algebra on Euler’s formula. It looks like this:
e^(iπ) + 1 = 0
This is considered by some to be one of the most astounding equations in all of mathematics. It elegantly connects five of the most important numbers we’ve discovered. Stare at it for awhile and take
it in. Can that identity really be true? Can those numbers really be connected like that? Yup.
That, I think, is the benefit of complexifying: letting us see what is not immediately obvious.
It turns out last week was also Carl Sagan’s birthday. This generated some hubbub, with some praising the man and others wishing we would just stop talking about him already. Carl Sagan was
admittedly before my time, but he has had an impact on me nonetheless. No, he didn’t inspire me to study science or pick up the telescope or anything like that. But I am rather fond of his pale blue
dot speech, to the extent that there’s even a minor plot point about it in one of my half-finished novels.
Now, I read some rather interesting criticism of Sagan and his pale blue dot stuff on a blog I frequent. A commenter was of the opinion that Sagan always made science seem grandiose and inaccessible.
That’s an interesting take, but I happen to disagree. Instead, I think we might be able to conclude that Sagan engaged in a bit of complexifying. No, he certainly didn’t make his material more
difficult to understand than it had to be; he was a very gifted communicator. What he did do, however, and this is especially apparent with the pale blue dot, is make his material seem very big, very
out there. You might say he added more than was necessary.
In doing so, he showed connections that were not immediately obvious. The whole point of his pale blue dot speech is that we are very small fish in a very big pond, and that this connects us to each
other. The distances and differences between people are, relatively speaking, absolutely miniscule. From the outer reaches of the solar system, all of humanity is just a pixel.
But there are more connections to be made. Not only are all us connected to each other; we’re also connected to the universe itself. Because, you see, from the outer reaches of the solar system,
we’re just a pixel next to other pixels, and those other pixels are planets, stars, and interstellar gases. We’re all stardust, as has been said.
This idea that seeing the world as a tiny speck is transformative has been called by some (or maybe just Frank White) the overview effect. Many astronauts have reported experiencing euphoria and awe
as a result of this effect. But going to space is expensive, especially compared to listening to Carl Sagan.
So yeah, maybe Sagan was a bit grandiose in the way he doled out his science. But I don't think that's a bad thing. I just think it shows the connection between Sagan and my differential equations class.
I will calculate the distance from the Earth to the Sun using nothing but the Earth’s temperature, the Sun’s temperature, the radius of the Sun, and the number 2. How will I perform such an amazing
feat of mathematical manipulation? Magic (physics), of course. And as a magician (physics student), I am forbidden from revealing the secrets of my craft (except on tests and this blog).
During last night’s physics lecture, the professor discussed black-body radiation in the context of quantum mechanics. In physics, a black body is an idealized object that absorbs all electromagnetic
radiation that hits it. Furthermore, if a black body exists at a constant temperature, then the radiation it emits is dependent on that temperature alone and no other characteristics.
According to classical physics, at smaller and smaller wavelengths of light, more and more radiation should be emitted from a black body. But it turns out this isn’t the case, and that at smaller
wavelengths, the electromagnetic intensity drops off sharply. This discrepancy, called the ultraviolet catastrophe (because UV light is a short wavelength), remained a mystery for some time, until
Planck came along and fixed things by introducing his eponymous constant.
[Image: black-body radiation curves. Thanks, Wikipedia.]
The fix was to say that light is only emitted in discrete, quantized chunks with energy proportional to frequency. Explaining why this works is a little tricky, but the gist is that there are fewer
electrons at higher energies, which means fewer photons get released, which means a lower intensity than predicted by classical electromagnetism. Planck didn’t know most of those details, but his
correction worked anyway and kind of began the quantum revolution.
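If you want to see the catastrophe and Planck's fix side by side, a few lines of Python are enough. The temperature and the handful of wavelengths below are just illustrative choices:

```python
# Sketch: spectral radiance from the classical Rayleigh-Jeans law vs. Planck's law.
# At short wavelengths the classical value blows up; Planck's does not.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
T = 5778.0                                    # roughly the Sun's surface, K

wavelengths = np.array([100e-9, 300e-9, 500e-9, 1e-6, 3e-6])   # meters
rayleigh_jeans = 2 * c * kB * T / wavelengths**4
planck = (2 * h * c**2 / wavelengths**5) / np.expm1(h * c / (wavelengths * kB * T))

for lam, rj, pl in zip(wavelengths, rayleigh_jeans, planck):
    print(f"{lam * 1e9:7.1f} nm   classical: {rj:10.3e}   Planck: {pl:10.3e}")
```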
But all of that is beside the point. If black bodies are idealized, then you may be wondering how predictions about black bodies came to be so different from the observational data. How do you
observe an idealized object? It turns out that the Sun is a near perfect real-world analog of a black body, and by studying its electromagnetic radiation scientists were able to study black-body
Anywho, my professor drew some diagrams of the Sun up on the board during this discussion and then proposed to us the following question: Can you use the equations for black-body radiation to predict
the distance from the Earth to the Sun? As it turns out, the answer is yes.
You see, a consequence of Planck’s law is the Stefan-Boltzmann law, which says that the intensity of light emitted by a black body is proportional to the 4th power of the object’s temperature. That
is, if you know the temperature of a black body, you know how energetic it is. How does that help us?
Well, the Sun emits a relatively static amount of light across its surface. A small fraction of that light eventually hits the Earth. What fraction of light hits the Earth is related to how far
away the Earth is from the Sun. The farther away the Sun is, the less light reaches the Earth. This is pretty obvious. It’s why Mercury is so hot and Pluto so cold. (But it’s not why summer is hot or
winter cold.) So if we know the temperature of the Sun and the temperature of the Earth, we should be able to figure out how far one is from the other.
To do so, we have to construct a ratio. That is, we have to figure out what fraction of the Sun’s energy reaches the Earth. The Sun emits a sphere of energy that expands radially outward at the speed
of light. By the time this sphere reaches the Earth, it’s very big. Now, a circle with the diameter of the Earth intercepts this energy, and the rest passes us by. So the fraction of energy we get is
the area of the Earth’s disc divided by the surface area of the Sun’s sphere of radiation at the point that it hits the Earth. Here’s a picture:
[Diagram: the Sun's expanding sphere of radiation intercepted by the Earth's disc. I made this!]
So our ratio is this: P[e]/P[s] = A[e]/A[s], where P is the power (energy per second) emitted by the body, A[e] is the area of the Earth’s disc, and A[s] is the surface area of the Sun’s energy when
it reaches the Earth. One piece we’re missing from this is the Earth’s power. But we can get that just by approximating the Earth as a blackbody, too. This is less true than it is for the Sun, but it
will serve our purposes nonetheless.
Okay, all we need now is the Stefan-Boltzmann law, which is I = σT^4, where σ is a constant of proportionality that doesn’t actually matter here. What matters is that I, intensity, is power/area, and
we’re looking for power. That means intensity times area equals power. So our ratio looks like this:
(σT[e]^4 · 4πr[e]^2) / (σT[s]^4 · 4πr[s]^2) = πr[e]^2 / (4πd^2)
This is messy, but if you look closely, you’ll notice that a lot of those terms cancel out. When they do, we’re left with:
T[e]^4 / (T[s]^4 · r[s]^2) = 1 / (4d^2)
Finally, d is our target variable. Solving for it, we get:
d = r[s]T[s]^2 / (2T[e]^2)
Those variables are the radius of the Sun, the temperatures of the Sun and the Earth, and the number 2 (not a variable). Some googling tells me that the Sun’s surface temperature is 5778 K, the
Earth’s surface temperature is 288 K, and the Sun’s radius is 696,342 km. If we plug those numbers into the above equation, out spits the answer: 1.40x10^11 meters. As some of you may remember, the
actual mean distance from the Earth to the Sun is 1.496x10^11 meters, giving us an error of just 6.32%.
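For anyone who wants to check the arithmetic, here's the same plug-and-chug in a few lines of Python, using the googled values quoted above:

```python
# Sketch: d = r_s * T_s^2 / (2 * T_e^2) with the values from the post.
r_sun   = 696_342e3    # solar radius, m
T_sun   = 5778.0       # solar surface temperature, K
T_earth = 288.0        # mean Earth surface temperature, K

d  = r_sun * T_sun**2 / (2 * T_earth**2)
au = 1.496e11          # actual mean Earth-Sun distance, m

print(f"estimated distance: {d:.3e} m")                # ~1.40e11 m
print(f"error vs. actual:   {abs(d - au) / au:.1%}")   # ~6.3%
```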
I’d say that’s pretty damn close. Why an error of 6%? Well, we approximated the Earth as a black body, but it’s actually warmer than it would be if it were a black body. So the average surface
temperature we used is too high, thus making our answer too low. (There are other sources of error, too, but that’s probably the biggest one.)
There is one caveat to all this, however, which is that the calculation depends on the radius of the Sun. If you read the link above (which I recommend), you know, however, that we calculate the
radius of the Sun based on the distance from the Earth to the Sun. But you can imagine that we know the radius of the Sun (to far less exact measurements) based solely on its observational
characteristics. And in that case, we can still make the calculation.
Anywho, there’s your magic trick (physics problem) for the day. Enjoy.
Okay, it doesn’t have quite the same ring to it as National Novel Writing Month, but I’m saving my good words for my, well, novel writing. As some of you may know, November is NaNoWriMo, a worldwide
event during which a bunch of people get together to (individually) write 50,000 words in 30 days. I’ve done it the last several years and I’m doing it this year, too. It’s hard, it’s fun, and it’s
As some of you may also know, Laura Miller, a writer for Salon, published a piece decrying NaNoWriMo. (Turns out she published that piece 3 years ago, but it's making the rounds now because NaNo is
upon us. Bah, I'm still posting.) This made a lot of wrimos pretty upset, and I’ve seen some rather vitriolic criticism in response. Miller’s main point seems to be that there’s already enough crap
out there and we don’t need to saturate the world with more of it. Moreover, she thinks we could all do a little more reading and a little less writing.
Well, as a NaNoWriMo participant and self-important blogger, I think I’m going to respond to Miller’s criticism. Of course, maybe that’s exactly what she wants. By writing this now, I’m not writing
my NaNo novel. Dastardly plan, Laura Miller.
Now, I understand the angry response to Miller’s piece. I really do. It has a very “get off my lawn” feel to it that seems to miss the point that, for a lot of people, NaNo is just plain fun. But her
two points aren’t terrible points, and I think they’re worth responding to in a civil, constructive way. So here goes.
As is obvious to anyone who’s read this blog, I quite like science. That’s what the blog is about, after all. In fact, I’ve been interested in science ever since I was a child. I read books about
science, I had toy science kits, and I loved science fiction as a genre.
Yet this blog about science is not even a year old, and I’m writing this post as a freshly minted 28 year old. Why is that? Because up until about 2 years, I didn’t do anything with my interest in
science. I took plenty of science and math classes in high school, but I mostly dithered around in them and didn’t, you guessed it, practice.
It wasn’t until 2 years ago that I sat down and decided it was time to reteach myself calculus. And how did I teach myself calculus? By giving myself homework. By doing that homework. By checking my
answers and redoing problems until I got them right. And now I can do calculus. Now I can do linear algebra, differential equations, and physics. I’m no expert in these subjects, but I understand
them to a degree because I’ve done them. I’ve practiced, just like you practice a sport.
The analogy here should be clear. You have to practice your sport, you have to practice your math, you have to practice your writing. Where some may disagree with this analogy is the idea that
writing 50,000 words worth of drivel counts as practice. The answer is that it’s practicing one skill of writing, but not all writing skills. This follows from the analogy, too. Sometimes you
practice free throws; other times you practice taking integrals. Each is a specific skill within a broad field, and each takes practice.
And as any writer knows, sometimes the most difficult part of writing is staring at a blank white page and trying to find some way to put some black on it. We all have ideas. We all have stories and
characters in our heads. But exorcising those thoughts onto paper is a skill wholly unto itself, apart from the skills of grammar, narrative, and prose.
So it needs practice, and NaNoWriMo is that practice. If you’re a dedicated writer, however, then it follows that NaNo should not be your only practice. You have to practice the other skills, too.
You have to write during the rest of the year, and you have to pay attention to grammar, narrative, and prose. But taking one month to practice one skill hardly seems a waste.
I’ve less to say about Miller’s second point, that we should read more and write less. This is a matter of opinion, I suppose. But I do have one comment about it. America is often criticized as being
a nation of consumers who voraciously eat up every product put before them. We are asked only to choose between different brand names and to give no more thought to our decisions than which product
to purchase.
Writing is a break from that. Rather than being a lazy, passive consumer of other people’s ideas, writing forces you to formulate and express your own ideas. Writing can be a tool of discovery, a way
to expand the thinking space we all inhabit. Rather than selecting an imperfect match from a limited set of options, writing lets you make a choice that is precisely what you want it to be. You get
to declare where you stand, or that you’re not taking a stand at all. You get to have a voice beyond simply punching a hole in a ballot.
You shouldn’t write instead of read, but you should write (or find some other way to creatively express your identity).
My weekend activities have provided me with ample blogging fodder of late. This past weekend I went to a local Renaissance Festival and, among other things, watched some real life jousting. That is,
actual people got on actual horses and actually rammed lances into each other, sometimes with spectacular results.
[Image: jousting knights. I didn't take this picture. It's from the Renn Fest website. I just think my blog needs more visuals.]
At one point a lance tip broke off on someone’s armor and went flying about 50 feet into the air. A friend wondered aloud what kind of force it would take to achieve that result, and here I am to do
the math. This involves some physics from last year as well as much more complicated physics that I can’t do. You see, if a horse glided along the ground without intrinsic motive power, and were
spherical, and of uniform density… but alas, horses are not cows.
Anywho, as to the flying lance tip, the physics is pretty easy. Now, I can’t say what force was acting on the lance. The difficulty is that, from a physics standpoint, the impact between the lance
and the armor imparted momentum into the lance tip. Newton’s second law (in differential form) tells us that force is equal to the change in momentum over time. Thus, in order to calculate the force
of the impact, I have to know how long the impact took. I could say it was a split second or an instant, but I’m looking for a little more precision than that.
Instead, however, I can tell you how much energy the lance tip had. It takes a certain amount of kinetic energy to fly 50 feet into the air. We’re gonna say the lance tip weighs 1 kg (probably an
overestimate) and that it climbed 15 meters before falling down. In that case, our formula is e = mgh, where g is 9.8 m/s^2 of gravitational acceleration, and we’re at about 150 joules of energy.
This is roughly as much energy as a rifle bullet just exiting the muzzle. It also means the lance tip had an initial speed of about 17 m/s. I’m ignoring here, because I don’t have enough data, that
the lance tip spun through the air—adding rotational energy to the mix—and that there was a sharp crack from the lance breaking—adding energy from sound.
But this doesn’t conclude our analysis. For starters, where did the 150 joules of energy come from? And is that all the energy of the impact? Let’s answer the second question first. Another pretty
spectacular result of the jousting we witnessed was that one rider was unhorsed. We can model being unhorsed as moving at a certain speed and then having your speed brought to 0. Some googling tells
me that a good estimate for the galloping speed of a horse is 10 m/s.
So the question is, how much work does it take to unhorse a knight? With armor, a knight probably weighs 100 kg. Traveling at 10 m/s, our kinetic energy formula tells us this knight possesses 5000
joules of energy, which means the impact must deliver 5000 joules of energy to stop the knight. This means there’s certainly enough energy to send a lance tip flying, and it also means that not all
of the energy goes into the lance tip.
We can apply the same kinetic energy formula to our two horses, which each weigh about 1000 kg, and see that there’s something like 100 kj of energy between the two. Not all of that goes into the
impact, however, because both horses keep going. This is where the horses not being idealized point masses hurts the analysis. Were that so, we might be able to tell how much energy is "absorbed" by the
armor and lance.
There is one final piece of data we can look at. I estimate the list was 50 meters long. The knights met at the middle and, if they timed things properly, reached their maximum speeds at that point.
Let’s also say that horses are mathematically simple and accelerate at a constant rate. One of the 4 basic kinematic equations tells us that v[f]^2 = v[i]^2 + 2ad. So this is 100 = 0 + 2*a*25, and
solving for a gets us an acceleration of 2 m/s^2. Newton’s second law, f=ma, means each horse was applying 2000 newtons of force to accelerate at that rate. 2000 N across 25 meters is 50,000 joules
of work. It takes 5 seconds to accelerate to 10 m/s at 2 m/s^2, so 50,000 joules / 5 seconds = 10,000 watts of power. What’s 10,000 watts? Well, let’s convert that to a more recognizable unit of
measure. 10 kW comes out to about 13 horsepower, which is about 13 times as much power as a horse is supposed to have. Methinks James Watt has some explaining to do.
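Here's the whole back-of-the-envelope in one place, in case anyone wants to fiddle with the numbers; every mass, speed, and distance is the rough estimate from above, not a measurement:

```python
# Sketch: the jousting estimates, all together.
g = 9.8                                  # m/s^2

# flying lance tip: assumed 1 kg, rising 15 m
m_tip, h = 1.0, 15.0
E_tip = m_tip * g * h                    # ~147 J
v_tip = (2 * g * h) ** 0.5               # ~17 m/s

# unhorsed knight: assumed 100 kg moving at 10 m/s, brought to rest
E_knight = 0.5 * 100 * 10**2             # 5000 J

# horse: assumed 1000 kg, accelerating from rest to 10 m/s over 25 m
a     = 10**2 / (2 * 25)                 # 2 m/s^2, from v_f^2 = v_i^2 + 2ad
F     = 1000 * a                         # 2000 N
work  = F * 25                           # 50,000 J
power = work / (10 / a)                  # 10,000 W over 5 seconds
print(E_tip, v_tip, E_knight, a, F, work, power, power / 745.7)   # last: ~13.4 hp
```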
One other thought occurred to me during this analysis. Some googling tells me there are roughly 60 million horses in the world. If a horse can pump out 10 kW of power, then we have roughly 600 GW of
power available from horses alone. Wikipedia says our average power consumption is 15 TW, which means the world's horses running on treadmills could provide 4% of the energy requirements of the
modern world. This isn’t strictly speaking true, because there will be losses due to entropy (and you can’t run a horse nonstop), but it’s in the right ballpark. Moral of the story? Don’t let anyone
tell you that energy is scarce. The problem isn’t that there isn’t enough energy in the world; it’s that we don’t have the industry and infrastructure necessary to use all the energy at our disposal.
Last weekend I went to an SF writing convention at which the esteemed George R. R. Martin himself was guest of honor. I had a good time, managed to snag an autograph, and even got conclusive proof
that he is, in fact, working on The Winds of Winter. (He read 2 chapters.) So my blog post for today is inspired by the convention visit, but has nothing to do with science fiction, writing, or
The most interesting panel at the convention, personally, wasn’t even a panel at all. It was a talk given by computer scientist Dr. Alice Armstrong on artificial intelligence and how to incorporate
AI into stories without pissing off people like Dr. Alice Armstrong. It was an amusing and informative talk, although not as many people laughed at her jokes as should have. A couple points she made
resonated particularly well with me, not because of my fiction, but because of my math courses the past two semesters.
Specifically, about a quarter of her talk was given over to the concept of genetic algorithms, in which a whole bunch of possible solutions to a problem are tested, mutated, combined, and tested
again until a suitable solution is found. This is supposed to mirror the concept of biological evolution, however Dr. Armstrong pointed out numerous times that the similarities are, at this point,
rather superficial.
But one thing she said is that genetic algorithms are essentially search engines. They go through an infinite landscape of possible solutions—the solution space—and come out with one that will work.
This reminded me of a topic covered in linear algebra, and one we’re covering again from a different perspective in differential equations. The solution space in differential equations is a set of
solutions arrived at by finding the eigenvectors of a system of linear differential equations.
Bluh, what? First, let’s talk about differential equations. Physics is rife with differential equations—equations that describe how systems change—because physics is all about motion. What kind of
motion? Motion like, for example, a cat falling from up high. As we all know, cats tend to land on their feet. They start up high and maybe upside down, they twist around in the air, and by the time
they reach the ground they’re feet down (unless buttered toast is involved).
If you could describe this mathematically, you’d need an equation that deals with lots of change, like, say, a differential equation. In fact, you might even need a system of differential equations,
because you have to keep track of the cat’s shape, position, speed, and probably a few other variables.
Going back to linear algebra for a moment, you can perform some matrix operations on this system of differential equations to produce what are called eigenvectors. It’s not really important what
eigenvectors are, except to know that they form the basis for the solution space.
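To make "basis for the solution space" concrete, here's a small NumPy sketch: the eigenvectors of a matrix A are the building blocks out of which every solution of the linear system dx/dt = Ax can be assembled. The particular matrix and initial condition are arbitrary, made up just for illustration:

```python
# Sketch: eigenvectors as a basis for the solutions of dx/dt = A x.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                  # an arbitrary example system

eigenvalues, eigenvectors = np.linalg.eig(A)

def solution(t, c):
    # general solution: a linear combination of exp(lambda_i * t) * v_i
    return sum(c[i] * np.exp(eigenvalues[i] * t) * eigenvectors[:, i]
               for i in range(len(eigenvalues)))

# choose the coefficients that match the initial condition x(0) = (1, 0)
c = np.linalg.solve(eigenvectors, np.array([1.0, 0.0]))

# sanity check: x(t) really satisfies dx/dt = A x
t, dt = 0.7, 1e-6
lhs = (solution(t + dt, c) - solution(t, c)) / dt    # numerical derivative
print(np.allclose(lhs, A @ solution(t, c), atol=1e-4))   # True
```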
To explain what a basis is, I’m going to steal from Dr. Armstrong for a moment. The example she gave of a genetic algorithm at work was one in which a “population” of cookie recipes are baked, and
the more successful recipes pass on their "genes" to the next generation. A gene, in this sense, is an individual component of the recipe: sugar, flour, chocolate chips, etc.
You have some number of cups of sugar, add that onto some number of cups of flour, add that onto some number of cups of chocolate chips, and so on, and you have a cookie recipe. These linear
combinations of ingredients—different numbers of cups—can be used to form a vast set of cookie recipes, a solution space of cookie recipes, if you will. So a basis is the core ingredients with which
you can build an infinite variety of a particular item.
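Here's a toy version of the cookie-recipe idea in Python. To be clear, the ingredient list, the "ideal" recipe, and the fitness function are all invented for illustration; Dr. Armstrong's actual example surely differed:

```python
# Sketch: a toy genetic algorithm over "cookie recipes" (cups of each ingredient).
import random

INGREDIENTS = ["sugar", "flour", "chips", "butter"]    # hypothetical genes
TARGET = [1.0, 2.5, 1.5, 0.5]                          # made-up "perfect" recipe

def fitness(recipe):
    # higher is better: negative squared distance from the made-up ideal
    return -sum((r - t) ** 2 for r, t in zip(recipe, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(recipe, rate=0.3):
    return [r + random.gauss(0, 0.2) if random.random() < rate else r for r in recipe]

population = [[random.uniform(0, 4) for _ in INGREDIENTS] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                          # keep the tastiest recipes
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print({name: round(cups, 2) for name, cups in zip(INGREDIENTS, best)})
```

Run it a few times and the "population" drifts toward the target recipe, which is the whole trick: search by breeding the better candidates.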
Let’s get back to our cat example. If you take a system of linear differential equations about falling cats and find their eigenvectors, then you will have a basis for the solution space of cat
falling. That is, you will know an infinite number of ways that a cat can fall, given some initial conditions. But what ways of falling are better than others? That’s where genetic algorithms come
In our case, a genetic algorithm is what produced the cat righting reflex. You see, as we discovered, there are an infinite number of ways for a cat to fall. (While there are an infinite number of
solutions, infinity is not everything. For example, there are an infinite number of numbers between 0 and 1, but the numbers between 0 and 1 are not all the numbers. Similarly, there are ways in
which you can arrange the equations of cat falling that don’t produce meaningful results, like a cat falling up, so these aren’t a part of the infinite solution space.)
It would take a very long time to search through an infinite number of cat falling techniques to find the best one. Genetic algorithms, then, take a population of cats, have a lot of them fall, and
see which ones fall better than others. This is, of course, natural selection. Now, you may think of evolution as a slow process, but it’s important to remember that this genetic algorithm is not
just testing the fitness of cat falling, but of every other way in which a cat could possibly die. From that standpoint, evolution has done an absolutely remarkable job of creating an organism that
can survive in a great many situations.
If you don’t believe me, consider that there is a Wikipedia page dedicated to the “falling cat problem” which, among other things, compares the mathematics of falling cats to the mathematics of
quantum chromodynamics (which I don’t understand a lick of, btw).
Some of you may be wondering how what is essentially a search algorithm can be considered a form of “artificial intelligence.” Well, to answer that question, you have to give a good definition of
what intelligence really is. But this blog post is probably long enough already, so I’m not going down that road. Consider for a moment, however, that there is a certain segment of the population
that is absolutely convinced life originated via intelligent design. While their opinion on this matter is almost always bound up in belief, it’s not hard to look at nature and see something
intelligent. If nothing else, nature produced humans, and it’s difficult to imagine any definition of intelligence that doesn’t include us (despite our occasional idiocy).
One final note: I lied in the beginning. I think Solution Space would make an excellent title for an SF story, so there's your connection to science fiction and writing. And don't steal my title, Mr. Martin.
Due to fortuitous timing, my life is in somewhat of a holding pattern at the moment. There are three upcoming events that I can do nothing more than wait for. I will list them now in order of my
increasing impotence to influence.
A week ago, I submitted a short story to a magazine. They say their average response time is five weeks, which means I have to wait another four weeks until they send me my rejection notice. With any
luck, it will be a personal rejection.
This is the first story I’ve submitted for publication in several years. I think it’s probably the best thing I’ve ever written, and I know it’s decent enough to be published, but I shouldn’t fool
myself into thinking that the first (or fifth, or tenth) publication I send it to will agree with me.
I was spurred into finally submitting a short story because a very good friend of mine just made her first sale. Unlike me, she’s been submitting non-stop for most of this year. Also unlike me, she’s
been sending her stories to the myriad online magazines that have sprung up in recent years. I’ve sent my story, an 8,000-word behemoth, to the magazine for science fiction, because I have delusions
of grandeur, apparently.
Moving on to the second item on my list, October is when I am supposed to hear back from the 4-year school I’m hoping to attend this coming spring. Their answer, unlike the magazine I’m submitting
to, should be a positive one. Theoretically, I'm enrolled in a transfer program between my community college and the university that guarantees my admission so long as I keep my grades up and yada yada.
I’ve done all that, but I was still required to submit an application along with everyone else that wants to attend the school. And I’ve still been required to wait until now to receive word on my
admission. All this waiting has me doubting how guaranteed my admission really is, but I’m still optimistic that the wait amounts to nothing more than a slow-moving bureaucracy. We’ll see.
Additionally, assuming I am admitted to the university, I then have to figure out how I’m paying for my schooling (community college is much cheaper) and how well my community college transcript
transfers to my 4-year school. The hassle over figuring out what classes count as what could make for a whole other post. I haven't decided yet whether I want to bore my three readers with the details.
And finally, as I hinted at in my last post, I’m a government contractor currently experiencing the joys of a government shutdown. So I’m waiting for our duly elected leaders to do their jobs and let
me do my job. This is decidedly not a political blog, and I don’t want to get mired in partisan debates, but I have to say that I would much rather a system that doesn’t grind to a halt whenever
opposing sides fail to reach an agreement.
There are a lot of theoretical alternatives to the system of representative democracy that we have, but I honestly don’t know enough about the subject to know which one would be better. Each system
has pros and cons, and it is my limited understanding that no form of democracy is capable of perfectly representing the will of the people. If that’s the case, what hope is there for the future of
civilization? Well, we can hope for an increasingly less imperfect future, I suppose. Or, to return to the SF side of things, we could just ask Hari Seldon to plan out the future for us.
The one political statement I’ll make here is that I never got over the wonder of Asimov’s psychohistory. I am a firm proponent of technocracy and the idea that, sometimes, it’s better to let experts
make decisions about complex topics. Where I think democracy has its place is in ensuring that people are allowed to choose the type of society they want to live in. But if they really do want to
live in society X, then they should let capable experts create society X first.
Okay, I think that’s enough pontificating for now. Is there some deeper connection between the three things I’m waiting for? Some thread that ties it all together? A concept from physics or
mathematics that I can clumsily wield as an analogy? Nope. Sorry. Not this time.
|
{"url":"http://anomalous-readings.blogspot.com/","timestamp":"2014-04-18T18:10:21Z","content_type":null,"content_length":"172497","record_id":"<urn:uuid:9160eeb6-b759-4e50-9f98-3c9594d5af0a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Differential forms on abelian schemes
Let $A$ be an abelian scheme over a base scheme $S$ and $\omega$ a global section of the differential module $\Omega^1_{A\times_S A/S}$.
Suppose that $\omega$ is zero when restricted to $A\times S$ and $S\times A$, both times via the zero section and the identity.
Then why can one conclude that $\omega$ itself is already zero?
1 Answer
It has nothing to do with abelian schemes. Just use the ``product rule'' of differentiation. This is a natural isomorphism:
$$ p_1^*\Omega^1_{A/S}\oplus p_2^*\Omega^1_{A/S}\to \Omega^1_{A\times_S A/S} $$
The zero section of $A$ affords an inverse to this natural map: if $s_1 : A\to A\times_S A$ is the inclusion in the first coordinate and $s_2$ the analogous inclusion in the second coordinate, then since $p_is_i = id$ we see that $s_1\oplus s_2 : \Omega^1_{A\times_S A}\to p_1^*\Omega^1_{A/S}\oplus p_2^*\Omega^1_{A/S}$ is the inverse of the above isomorphism.
|
{"url":"http://mathoverflow.net/questions/79918/differential-forms-on-abelian-schemes","timestamp":"2014-04-19T17:21:28Z","content_type":null,"content_length":"49239","record_id":"<urn:uuid:83b032a6-9388-4b2f-b02a-6a49d8b672f8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Precalculus : Functions and Graphs
ISBN: 9780201611366 | 0201611368
Edition: 4th
Format: Hardcover
Publisher: Addison Wesley
Pub. Date: 1/1/2001
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/precalculus-functions-graphs-4th-demana/bk/9780201611366","timestamp":"2014-04-20T13:33:46Z","content_type":null,"content_length":"46092","record_id":"<urn:uuid:a816f5f0-9c8b-4213-b92d-8379c830d5fe>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Miami Beach, WA Trigonometry Tutor
Find a Miami Beach, WA Trigonometry Tutor
...It was here I developed a life-long love for helping children succeed. I went on to medical school and graduated with honors from Meharry Medical College. I have thus far completed 1 year of
residency training in Pediatrics at Jackson Memorial Hospital/Holtz Children's Hospital.
26 Subjects: including trigonometry, English, reading, physics
...The first year I taught there, 75% of my students had learning gains greater than 1 year of expected growth (despite coming in significantly below grade level). Our school was rated an F
school the year prior and went up to a C school in that one year. I scored an AFQT of 95 (meaning I scored be...
23 Subjects: including trigonometry, calculus, GRE, ASVAB
...Likewise, I have experience in tutoring students with disabilities. My teaching philosophy is that with a positive attitude and willingness to learn, math can be easy and fun. The success rate
of my tutoring is positive and many students have improved their grades by a letter grade or more.
18 Subjects: including trigonometry, chemistry, calculus, geometry
...More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (sets that have the same cardinality as subsets of the integers, including
rational numbers but not real numbers). However, there is no exact, universally agreed, definition of the ...
23 Subjects: including trigonometry, chemistry, physics, geometry
...Prerequisites include solving 1 and 2 step linear equations; adding, subtracting, multiplying, and dividing integers; combining like terms; the Order of Operations. Prerequisites include
factoring; solving 1 and 2 step linear equations; adding, subtracting, multiplying, and dividing integers; co...
30 Subjects: including trigonometry, calculus, geometry, GRE
Related Miami Beach, WA Tutors
Miami Beach, WA Accounting Tutors
Miami Beach, WA ACT Tutors
Miami Beach, WA Algebra Tutors
Miami Beach, WA Algebra 2 Tutors
Miami Beach, WA Calculus Tutors
Miami Beach, WA Geometry Tutors
Miami Beach, WA Math Tutors
Miami Beach, WA Prealgebra Tutors
Miami Beach, WA Precalculus Tutors
Miami Beach, WA SAT Tutors
Miami Beach, WA SAT Math Tutors
Miami Beach, WA Science Tutors
Miami Beach, WA Statistics Tutors
Miami Beach, WA Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Bal Harbour, FL trigonometry Tutors
Bay Harbor Islands, FL trigonometry Tutors
Brickell, FL trigonometry Tutors
Carl Fisher, FL trigonometry Tutors
El Portal, FL trigonometry Tutors
Florida City, FL trigonometry Tutors
Indian Creek Village, FL trigonometry Tutors
Medley, FL trigonometry Tutors
Miami Beach trigonometry Tutors
North Bay Village, FL trigonometry Tutors
North Miami Bch, FL trigonometry Tutors
Sunset Island, FL trigonometry Tutors
Surfside, FL trigonometry Tutors
Venetian Islands, FL trigonometry Tutors
Virginia Gardens, FL trigonometry Tutors
|
{"url":"http://www.purplemath.com/Miami_Beach_WA_trigonometry_tutors.php","timestamp":"2014-04-17T13:40:05Z","content_type":null,"content_length":"24643","record_id":"<urn:uuid:ec02c0c5-432e-4702-a2d6-641913ab907d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Golf, IL Math Tutor
Find a Golf, IL Math Tutor
...I have additional tutoring experience in the subjects of Accounting, Algebra, Biology, Computer Skills, Geometry, History, and Humanities. I also have experience working as both a GED and ESL
tutor.Microsoft Access is a relational database management program, which allows users to create and man...
39 Subjects: including calculus, grammar, trigonometry, web design
...During the four season with the team, I played both singles and doubles at every match. By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. As
the oldest member of the team, other girls looked to me as their leader and my coaches expected me to lead practices and team warm-ups.
13 Subjects: including precalculus, prealgebra, algebra 1, algebra 2
...Before that, I was a maternity substitute in both a 1st grade and 2nd grade classroom, teaching all subjects and communicating with parents to give the students the best education they could
have. For several summers, I have also taught literature 3rd to 5th graders by doing fun reading and writ...
17 Subjects: including trigonometry, reading, ACT Math, algebra 1
...I think one of the best ways to study literature is through discussion. By asking open-ended questions to my students, this forces them to recall the story and form an opinion or thought which
might trigger another. Students often recognize themes that surface in discussion are themes related to the literature.
20 Subjects: including prealgebra, English, algebra 1, reading
...There, I learned how to instruct by being introduced to mini-tennis and dozens of class drills. I also watch my son's tennis lessons at the junior level. During the summer, I have taken
intermediate lessons with the park district, and learned under different instructors.
37 Subjects: including algebra 2, precalculus, GED, SAT math
Related Golf, IL Tutors
Golf, IL Accounting Tutors
Golf, IL ACT Tutors
Golf, IL Algebra Tutors
Golf, IL Algebra 2 Tutors
Golf, IL Calculus Tutors
Golf, IL Geometry Tutors
Golf, IL Math Tutors
Golf, IL Prealgebra Tutors
Golf, IL Precalculus Tutors
Golf, IL SAT Tutors
Golf, IL SAT Math Tutors
Golf, IL Science Tutors
Golf, IL Statistics Tutors
Golf, IL Trigonometry Tutors
Nearby Cities With Math Tutor
Bannockburn, IL Math Tutors
Fort Sheridan Math Tutors
Fox Valley Math Tutors
Fox Valley Facility, IL Math Tutors
Glenview, IL Math Tutors
Hines, IL Math Tutors
Indian Creek, IL Math Tutors
Indianhead Park, IL Math Tutors
Kenilworth, IL Math Tutors
Morton Grove Math Tutors
Niles, IL Math Tutors
Northfield, IL Math Tutors
Skokie Math Tutors
Western, IL Math Tutors
Winnetka, IL Math Tutors
|
{"url":"http://www.purplemath.com/golf_il_math_tutors.php","timestamp":"2014-04-19T04:56:36Z","content_type":null,"content_length":"23790","record_id":"<urn:uuid:ee5ef910-934c-472e-8238-5b6bc8eabd90>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I want to make a circle [Archive] - OpenGL Discussion and Help Forums
View Full Version : I want to make a circle
I need to approximate a circle with OpenGL, and I need to draw it at any position on the screen without using the glTranslatef command.
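A minimal sketch of one common approach (assuming legacy fixed-function OpenGL; the center, radius, and segment count below are placeholder values): compute each vertex of the circle directly and add the desired center coordinates to every point, so no glTranslatef call is needed.

#include <math.h>
#include <GL/gl.h>

/* Approximate a circle with a line loop of `segments` straight edges.
   cx, cy: desired center in the current coordinate system; the offset is
   added to every vertex, so no translation matrix is required. */
void drawCircle(float cx, float cy, float radius, int segments)
{
    glBegin(GL_LINE_LOOP);
    for (int i = 0; i < segments; ++i) {
        float theta = 2.0f * 3.14159265f * (float)i / (float)segments;
        glVertex2f(cx + radius * cosf(theta),
                   cy + radius * sinf(theta));
    }
    glEnd();
}

Called as, e.g., drawCircle(120.0f, 80.0f, 25.0f, 64); more segments give a smoother approximation, and GL_TRIANGLE_FAN with the center as the first vertex gives a filled circle the same way.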
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-133957.html","timestamp":"2014-04-20T11:01:36Z","content_type":null,"content_length":"5696","record_id":"<urn:uuid:d9f8c7a2-6815-4d16-b719-d17706380992>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lonetree, CO SAT Math Tutor
Find a Lonetree, CO SAT Math Tutor
...Here is the brief information about how I tutor: I help to create the self-confidence of study. I help to build a unique way to learn math for every student that is easy, understandable and
efficient. I help students understand the key theories and formulas and get the ideas to solve problems.
27 Subjects: including SAT math, calculus, physics, algebra 1
...I can help students with more than just math! I am very good with the reading/writing portions of standardized tests, such as the SAT and GRE. I can help students with reading comprehension
and essay writing.
27 Subjects: including SAT math, reading, writing, geometry
...I have passed the math portion of the GRE exam with a perfect 800 score, also! My graduate work is in architecture and design. I especially love working with students who have some fear of the
subject or who have previously had an uncomfortable experience with it.I have taught Algebra 1 for many years to middle and high school students.
7 Subjects: including SAT math, geometry, GRE, algebra 1
...I am a licensed Electrocardiogram Technician since 2007, and I can type 102 words per minute.As a high school student, I was part of a program that helped the children of broken families. We
meet up with our children every Wednesday and either helped them with their homework, or if they finished...
31 Subjects: including SAT math, reading, writing, English
...I have taught Sunday School for more than 30 years. Having read through the bible several times, I have written religious literature which has been used as a curriculum both in the U.S. and
internationally. I am a spirit-filled Christian and I pray and seek God daily for an understanding of his Word.
43 Subjects: including SAT math, Spanish, English, chemistry
Related Lonetree, CO Tutors
Lonetree, CO Accounting Tutors
Lonetree, CO ACT Tutors
Lonetree, CO Algebra Tutors
Lonetree, CO Algebra 2 Tutors
Lonetree, CO Calculus Tutors
Lonetree, CO Geometry Tutors
Lonetree, CO Math Tutors
Lonetree, CO Prealgebra Tutors
Lonetree, CO Precalculus Tutors
Lonetree, CO SAT Tutors
Lonetree, CO SAT Math Tutors
Lonetree, CO Science Tutors
Lonetree, CO Statistics Tutors
Lonetree, CO Trigonometry Tutors
Nearby Cities With SAT math Tutor
Bow Mar, CO SAT math Tutors
Centennial, CO SAT math Tutors
Cherry Hills Village, CO SAT math Tutors
Columbine Valley, CO SAT math Tutors
Edgewater, CO SAT math Tutors
Foxfield, CO SAT math Tutors
Glendale, CO SAT math Tutors
Greenwood Village, CO SAT math Tutors
Highlands Ranch, CO SAT math Tutors
Littleton City Offices, CO SAT math Tutors
Lone Tree, CO SAT math Tutors
Louviers SAT math Tutors
Parker, CO SAT math Tutors
Sedalia, CO SAT math Tutors
Sheridan, CO SAT math Tutors
|
{"url":"http://www.purplemath.com/Lonetree_CO_SAT_Math_tutors.php","timestamp":"2014-04-19T07:06:05Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:8408663b-7d8a-40f1-a9d7-56c04a54feec>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wonder Lake Calculus Tutor
Find a Wonder Lake Calculus Tutor
...I have had a great success rate. I ask students to complete practice tests to analyze their strengths and weaknesses. By using their strengths and focusing on weaknesses I find scores quickly
24 Subjects: including calculus, geometry, algebra 1, GRE
...I have background in peer-tutoring when I was in school, helping in both Physics and Math.This was my major in college. This is a field I am passionate in and have much background in on a
personal, out-of-classroom basis. I received a 26 on this section when I took the exam.
16 Subjects: including calculus, chemistry, physics, geometry
...I have experience tutoring students for ACT and SAT with a national company. I use the Barron's book for the national tests. In my opinion it is the most difficult and if a student does well
with Barron's then the regular test should be much easier.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...I tutored math through college to stay fresh. Finally, trigonometry always finds its way into my day-to-day work, from teaching college-level physics concepts to building courses for
professional auditors. I was an advanced math student, completing calculus in high school and then taking statistics as part of the engineering curriculum in college.
13 Subjects: including calculus, statistics, algebra 2, geometry
...I have taught AutoCAD at ITT Technical Institute, Mt Prospect IL in the past. I believe I could help you learn AutoCAD professionally as I have direct experience in using this software as well
as teaching it to students for several years. I have a MS/PhD in Mechanical Engineering so I feel that I am qualified to teach Mechanical Engineering students.
22 Subjects: including calculus, physics, geometry, statistics
Nearby Cities With calculus Tutor
Alden, IL calculus Tutors
Genoa City calculus Tutors
Hebron, IL calculus Tutors
Holiday Hills, IL calculus Tutors
Island Lake calculus Tutors
Lakewood, IL calculus Tutors
Mchenry, IL calculus Tutors
Oakwood Hills, IL calculus Tutors
Prairie Grove, IL calculus Tutors
Richmond, IL calculus Tutors
Ringwood, IL calculus Tutors
Salem, WI calculus Tutors
Spring Grove, IL calculus Tutors
Wauconda, IL calculus Tutors
Woodstock, IL calculus Tutors
|
{"url":"http://www.purplemath.com/Wonder_Lake_Calculus_tutors.php","timestamp":"2014-04-20T16:35:58Z","content_type":null,"content_length":"24062","record_id":"<urn:uuid:b0236f79-c1e8-47ae-b4cd-bc86b68e06b4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thermodynamic Integration
Next: Advanced Topics Up: Free Energy Methods Previous: Excess Chemical Potential via
Thermodynamic integration is a conceptually simple, albeit expensive, way to calculate free energy differences from MC or MD simulations. In this example, we will consider the calculation (again) of
chemical potential in a Lennard-Jones fluid at a given temperature and density, a task performed very well already by the Widom method (so long as the densities are not too high.) More details of the
method can be found in Reference [15].
We begin with the relation derived in the book for a free energy difference, except that they obey two different potentials. System I obeys
Let us consider the canonical partition function for a system obeying a general potential
Recalling that the free energy is given by
The free energy difference between I and II is given by:
To compute
For large values of
So, the total potential is given by
where the prime denotes that we ignore particle
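For orientation only: the standard linear-coupling relations of thermodynamic integration, which the surrounding text appears to be describing (the parameterization used in the original notes may differ, and system I/II here denote the reference and target potentials), are

$$U(\lambda) = (1-\lambda)\,U_{\mathrm{I}} + \lambda\,U_{\mathrm{II}}, \qquad
\Delta F = F_{\mathrm{II}} - F_{\mathrm{I}}
= \int_0^1 \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda
= \int_0^1 \left\langle U_{\mathrm{II}} - U_{\mathrm{I}} \right\rangle_{\lambda} d\lambda .$$

For the excess chemical potential, a natural choice is to take system II as the fluid with the test particle fully coupled and system I with it decoupled, but this identification is an assumption here rather than a statement of the original text.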
Next, we conduct many independent MC simulations at various values of the coupling parameter $\lambda$ (6.1), for at least low to moderate densities.
I have done a rough comparison of the thermodynamic integration method described above to the grand canonical MC simulation technique described in Sec. 5.1. Below is a plot of mclj_ti.c. The
temperature was
However, integration is not too sensitive to these fluctuations. As we see below, integrating each of these curves to produce a single value of
The grand canonical simulations are of course much cheaper, as it requires only a single MC simulation to give a value of
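As an illustration of the integration step only (this is not taken from the course code such as mclj_ti.c, and the sample numbers are placeholders, not simulation data), a trapezoidal-rule sketch in C:

#include <stdio.h>

/* Trapezoidal-rule estimate of the free energy difference from
   (lambda, <dU/dlambda>) pairs produced by separate MC runs. */
double thermo_integrate(const double *lam, const double *dudl, int n)
{
    double dF = 0.0;
    for (int i = 1; i < n; ++i)
        dF += 0.5 * (dudl[i] + dudl[i - 1]) * (lam[i] - lam[i - 1]);
    return dF;
}

int main(void)
{
    double lam[]  = {0.0, 0.25, 0.5, 0.75, 1.0};   /* coupling values      */
    double dudl[] = {-1.2, -0.8, -0.3, 0.4, 1.5};  /* placeholder averages */
    printf("Delta F estimate: %g\n", thermo_integrate(lam, dudl, 5));
    return 0;
}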
1. Can we make the agreement better by running more MC cycles? How about by using more values of
2. How does this compare to the Widom method?
Next: Advanced Topics Up: Free Energy Methods Previous: Excess Chemical Potential via cfa22@drexel.edu
|
{"url":"http://www.pages.drexel.edu/~cfa22/msim/node39.html","timestamp":"2014-04-20T18:23:51Z","content_type":null,"content_length":"19131","record_id":"<urn:uuid:a3a5bfc6-8f28-4d84-9e15-65307c47b03d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
|
prove by induction algebra help!
a sequence of integers x1, x2, ..., xk, ... is defined recursively as follows:
x1 = 1 and xk+1 = xk/xk+2 for k greater than or equal to 1
calculate x2,x3 and x4...i got
k=1: x1+1 = x1/x1+2 = 1/1+2 = 3 = x2
k=2: x2+1 = x2/x2+2 = 3/3+2 = 3 = x3
k=3: x3+1 = x3/x3+2 = 3/3+3 = 4 = x4
Is this right? And another question: using the info in the first part, how do I find and prove by induction a formula for the nth term xn in terms of n, for all n greater than or equal to 1, and calculate x10?
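A worked sketch, assuming the recurrence means $x_{k+1} = x_k/(x_k + 2)$ (if the intended grouping is different, the numbers change):

$$x_1 = 1,\quad x_2 = \frac{1}{3},\quad x_3 = \frac{1/3}{1/3 + 2} = \frac{1}{7},\quad x_4 = \frac{1/7}{1/7 + 2} = \frac{1}{15},$$

which suggests $x_n = \dfrac{1}{2^n - 1}$. The base case $n = 1$ holds, and if $x_k = \dfrac{1}{2^k - 1}$ then

$$x_{k+1} = \frac{1/(2^k - 1)}{1/(2^k - 1) + 2} = \frac{1}{1 + 2(2^k - 1)} = \frac{1}{2^{k+1} - 1},$$

so the formula holds for all $n \ge 1$ by induction, giving $x_{10} = 1/(2^{10} - 1) = 1/1023$.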
|
{"url":"http://mathhelpforum.com/algebra/185849-prove-induction-algebra-help.html","timestamp":"2014-04-19T10:03:02Z","content_type":null,"content_length":"43299","record_id":"<urn:uuid:d63e4ede-2d2e-42d3-a6bd-a0daf8840845>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When does the direct image functor nicely push past the power/exists functor?
Let $D$ and $E$ be toposes and let $f_{\ast}\colon D\to E$ be the direct image part of a geometric morphism $(f^{\ast},f_{\ast})$ between them. Considered as categories, we have (covariant)
power-object endofunctors on each: $$P_D\colon D\to D \hspace{.5in} P_E\colon E\to E$$ where, for a morphism $\phi$ in $D$ we have $P_D(\phi)=\exists_\phi$, sending a sub-object of the domain to its
image under $\phi$.
I'm trying to construct a natural transformation $$A_f\colon\ f_{\ast}\circ P_D\to P_E\circ f_{\ast}$$ of functors $D\to E\ $.
Question: For what geometric morphisms $(f^\ast,f_\ast)$ is such a natural transformation $A_f$ guaranteed to exist?
For example, such a thing exists in the case of change-of-base morphisms between slice toposes of ${\bf Set}$. If $q\colon X\to Y$ is a function, it induces a logical morphism $\Pi_q\colon {\bf Set}/
X\to {\bf Set}/Y$. In this case the natural transformation $$A^~_{\Pi^~_q}\colon\Pi_q\circ P_{{\bf Set}/X}\to P_{{\bf Set}/Y}\circ \Pi_q$$ exists. It acts fiberwise on $Y$; for each $y\in Y$ it sends
a $q^{-1}(y)\ $-indexed collection of subsets to their product.
How to construct this map $A_f$ in general? I wanted to use what would generalize to the morphism $f_{\ast}f^{\ast}\Omega_E\to\Omega_E\ $ induced by the mono-part of the epi-mono factorization for $\
Omega_E\to f_{\ast}f^{\ast}\Omega_E$, where $\Omega_E$ is the subobject classifier in $E$. But while such a map does exist for all geometric morphisms, and can be used to construct the components of
my desired $A_f$, I couldn't see how to prove that the naturality squares for $A_f$ commute. I showed it in the ${\bf Set}$ case using a basic set-theoretic argument.
So what made it work for slice toposes of ${\bf Set}$? Was it very specific to that case? Was it that these change-of-base functors are logical morphisms, or that they're essential geometric, or does
such an $A_f$ always exist?
2 Answers
Since every power object is an internal Heyting algebra, and $f_*$ preserves the structure of internal Heyting algebras, there are trivial examples of such natural transformations
corresponding to the constants $\top$ and $\bot$. Of course, this is uninteresting.
Let me write $P^{\mathcal{D}}$ and $P^{\mathcal{E}}$ for the respective contravariant power object functors. Since $f_*$ preserves monomorphisms, there is a canonical comparison morphism
$f_* \Omega_{\mathcal{D}} \to \Omega_{\mathcal{E}}$; since $f_*$ preserves products, there is a canonical natural morphism $f_* (Y^X) \to (f_* Y)^{f_* X}$; and so there is a canonical
natural morphism $f_* P^{\mathcal{D}} X \to (f_* \Omega_{\mathcal{D}})^{f_* X} \to P^{\mathcal{E}} f_* X$. So there is an interesting canonical natural transformation $\theta : f_* P^{\
mathcal{D}} \Rightarrow P^{\mathcal{E}} f_*$.
Now allow me to argue using generalised elements. Let $T$ be an arbitrary object of $\mathcal{E}$, and let $p : X \to Y$ be a morphism in $\mathcal{D}$. Given a generalised element $t : T \
to f_* P_\mathcal{D} X$, what is $\theta_X \circ t : T \to P_{\mathcal{E}} f_{\ast} X$, and what is $f_* \exists_p \circ t : T \to f_* P_{\mathcal{D}} Y$? Let $t' : f^* T \to P_\mathcal{D}
X$ be the left adjoint transpose of $t$, and let $A' \rightarrowtail X \times f^* T$ be the subobject classified by $t'$. It is clear that $\theta_X \circ t$ is just the classifying
up vote 5 morphism for the pullback of $f_* A' \rightarrowtail f_* X \times f_* f^* T$ along $f_* X \times T \to f_* X \times f_* f^* T$. Also, by naturality, $f_* \exists_p \circ t$ must be the
down vote right adjoint transpose of $\exists_p \circ t' : f^* T \to P_\mathcal{D} Y$, which is none other than the classifying morphism for the image of the composite $A' \rightarrowtail X \times f^
accepted * T \to Y \times f^*T$.
This suggests the crucial criterion is that $f_*$ preserve epimorphisms (and hence, epi–mono factorisations) – and this automatic for all base change morphisms for slices over $\textbf{Set}
$ because $\textbf{Set}$ and its slices have the axiom of choice. So assume $f_*$ preserves epimorphisms. If we write $A \rightarrowtail f_* X \times T$ for the subobject classified by $\
theta_X \circ t$, $B' \rightarrowtail Y \times f^* T$ for the image of $A' \rightarrowtail X \times f^* T \to Y \times f^* T$, and $B \rightarrowtail f_* Y \times T$ for the subobject
classified by $\theta_Y \circ f_* \exists_p \circ t$, then the preservation of epi–mono factorisations implies that $f_* B'$ remains the image of $f_* A'$ under $f_* X \times f_* f^* T \to
f_* Y \times f_* f^* T\ $; but epi–mono factorisations are stable under pullback in a topos, hence $B$ is the image of $A$ under $f_* X \times T \to f_* Y \times T$. Thus, we have $$\
theta_Y \circ f_* \exists_p \circ t = \exists_{f_{\ast} p} \circ \theta_X \circ t$$ for all generalised elements $t : T \to f_* P_{\mathcal{D}} X$, and thus $\theta$ is also a natural
transformation $f_* P_{\mathcal{D}} \Rightarrow f_* P_{\mathcal{E}}$.
That $f_\ast$ should preserve epis was the conclusion of my analysis as well (but when I came here to write it out, your answer was here already (-:). – Todd Trimble♦ Mar 17 '13 at 1:38
For posterity, let $D$ be the topos of cospans in ${\bf Set}$ and let $E$ be the topos ${\bf Set}$ of sets. The unique geometric morphism $f_{\ast}\colon D\to E\ $ sends each cospan to
its fiber product. It does not preserve epis, indeed let $X$ be the cospan $\{1\}\to\{1,2\}\leftarrow\{2\}\ $ and let $Y$ be the terminal cospan. The unique morphism $p\colon X\to Y\ $
is epi but $f_{\ast}(X)=\emptyset$ whereas $f_{\ast}(Y)=1\ $. One can check that the components given by $\theta$ constructed above do not form a naturality square for $p$. – David
Spivak Mar 18 '13 at 4:07
A different way to describe the same answer that Zhen and Todd arrived at is to work in the internal logic of $E$. That way we may pretend that $f_*$ is the global sections functor $\mathrm
{Hom}(1,-) : D \to \mathrm{Set}$, as long as we treat $\mathrm{Set}$ constructively. Then we have the components of a putative natural transformation
$$ \mathrm{Hom}(1,P A) \to P(\mathrm{Hom}(1,A)) $$
which, under the universal property of power objects $ \mathrm{Hom}(1,P A) \cong \mathrm{Sub}(A)$, sends a subobject $S\rightarrowtail A$ in $D$ to the set of all global sections $1 \to A$ which factor through it. The naturality square for $p:A\to B$ requires that if we take the direct image subobject $p_!(S)$, then a global section of $B$ factors through $p_!(S)$ just when it
lifts to some global section of $A$ factoring through $S$. It's easy to see that this is the same as asking that $1\in D$ be projective, which is equivalent to saying that the global
sections functor $f_* = \mathrm{Hom}(1,-)$ preserves epimorphisms.
Looks neat, but how do we formalize our ability to "pretend that $f_{\ast}$ is the global sections functor..."? Maybe a reference for understanding a morphism $f_{\ast}\colon D\to E$ of
topoi in terms of the internal logic of $E$ would help me understand this. – David Spivak Mar 17 '13 at 14:39
If we have a geometric morphism $f : \mathcal{D} \to \mathcal{E}$, then $f^*$ makes $\mathcal{D}$ into an $\mathcal{E}$-indexed topos $\mathbb{D}$, whose fibre over an object $E$ is the
slice $\mathcal{D}_{/ f^* E}$. Johnstone explains this in detail in Chapter B3 of Sketches of an elephant. A less high-tech version of this is to simply consider $\mathcal{D}$ as an $\mathcal{E}$-enriched category, with $\underline{\mathcal{D}}(X, Y) = f_*(Y^X)$; then $f_* : \mathcal{D} \to \mathcal{E}$ becomes an enriched-representable functor in an obvious way. – Zhen
Lin Mar 17 '13 at 16:00
Now that I finally understand Zhen's answer, I realize how cool the perspective of Mike's answer is. I don't think I could have easily started there, but it looks like a very powerful
approach, and I'm about to sink my teeth into Johnstone B.3 to get a piece of it. Thanks guys! – David Spivak Mar 18 '13 at 17:43
|
{"url":"https://mathoverflow.net/questions/124726/when-does-the-direct-image-functor-nicely-push-past-the-power-exists-functor/124741","timestamp":"2014-04-21T07:22:58Z","content_type":null,"content_length":"66538","record_id":"<urn:uuid:5c153d5f-7bd4-4577-8a38-bf98ecdc7674>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can't start
Given that $sin(A+B) = 2sin(A-B)$, express $tan A$ in terms of $tan B$
Dear Punch, $Sin(A-B)=SinACosB-SinBCosA$ $Sin(A+B)=SinACosB+SinBCosA$ Given the above relations I think you should be able to express TanA interms of TanB.
$\sin{(\alpha \pm \beta)} = \sin{\alpha}\cos{\beta} \pm \cos{\alpha}\sin{\beta}$. Given that $\sin{(A + B)} = 2\sin{(A - B)}$: $\sin{A}\cos{B} + \cos{A}\sin{B} = 2(\sin{A}\cos{B} - \cos{A}\sin{B})$ $\sin{A}\cos{B} + \cos{A}\sin{B} = 2\sin{A}\cos{B} - 2\cos{A}\sin{B}$ $3\cos{A}\sin{B} = \sin{A}\cos{B}$ $\frac{3\sin{B}}{\cos{B}} = \frac{\sin{A}}{\cos{A}}$ $3\tan{B} = \tan{A}$.
Thanks Prove it.. i feel so grateful to you I have another question, Given that cos x = p, express sin4x in terms of p
Dear Punch, $Sin4x=2Sin2xCos2x$ $Sin4x=2(2SinxCosx)Cos2x$ $Sin4x=4\sqrt{(1-Cos^2x)}Cosx(2Cos^2x-1)$ By substituting Cosx=p and further simplification will give you the answer. Hope this helps.
Dear Punch, Do you know that, $Sin2A=2SinACosA$ $SinA=\sqrt{1-Cos^2A}$ These are the two equations I used. In case you do not know them I think you should refer List of trigonometric identities -
Wikipedia, the free encyclopedia When going from 3 to 4 I used the second identity. Hope this helps.
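Completing the substitution suggested above (taking the case $\sin x = +\sqrt{1-p^2}$; the overall sign flips if $\sin x < 0$):

$$\sin 4x = 4\sin x\cos x\,(2\cos^2 x - 1) = 4p\,(2p^2 - 1)\sqrt{1 - p^2}.$$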
|
{"url":"http://mathhelpforum.com/trigonometry/123090-cant-start-print.html","timestamp":"2014-04-17T03:10:04Z","content_type":null,"content_length":"15968","record_id":"<urn:uuid:4c4df798-4b79-4119-8e80-4988c33e1887>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need Help Scaling Recipe
November 26, 2013 - 11:41am
I'm new to the forum. I did a search on this but didn't find what I needed so I hope you can help. I hope this is the right forum.
For a while, I've been a fan of a local bakery's cornmeal rye bread. It uses a sourdough starter. I was such a big fan I started making a clone of it at home and it turned out great but I still felt
it never replicated their recipe. One day I was chatting to the manager and told him how much I love the bread and showed him photos of my clone he just photocopied this daily check recipe for me.
Awesome, right?! Just use bakers percent and scale it down, right? I thought to myself excitedly when driving home, so thrilled I was practically hopping. When I started the process, that's when I
ran into trouble. It turns out they soak the cornmeal first to make a slurry. Do I count the cornmeal as a 'flour' in the bakers percent?
It seemed like too much trouble to do it that way, so I simply scaled the big batch recipe down, dividing everything by the same number to get the amount of dough I needed. The results were awful.
The dough was too wet and even when I attempted to bake it, it was a too salty, and didn't have a nice big crumb.
My questions are: 1) how can I scale this using bakers percent since they use 2 flours and a cornmeal slurry.
2) Why doesn't simple scaling work?
I really would like to bake this at home, especially since the manager was nice enough to share the recipe.
Here's the recipe:
81.88 LBS of bread flour
115.6 LBS of corn meal slurry ( Cornmeal 25 LBS, Water 58.25LBS)
34.40 LBS of Rye Flour
2.5 LBS Salt
35.46 LBS of starter dough (levain)
20.87 LBS of water
7.49 LBS of Honey
Any help greatly appreciated
Nov 26 2013 - 12:08pm
I think simple scaling should work
Perhaps you could 'show your math'?
Nov 26 2013 - 12:14pm
Yeah... Why not just change
Yeah... Why not just change the units to grams and multiply by 10? So 81.88 pounds becomes 818.8 grams, 25 LB of Cornmeal becomes 250 grams, etc? That'd make two or three nice sized loaves.
Once that was done if wanted to go further and come up with the baker's percentages, I'm sure you could. Um... I don't think I'd include the corn meal in the baker's percentage, but others may
Nov 26 2013 - 3:09pm
How does this look?
The hydration and salt percent look ok:
Cornmeal Rye Bread
Baker's Percent
48.53% .....81.88 LBS of Bread Flour
20.39%..... 34.40 LBS of Rye Flour
20.57% ......34.7 LBS Cornmeal for slurry* *
10.51%..... 17.73 LBS Starter Flour (Levain) * * *
100% .........168.71 LBS Total Flour
47.95%...... 80.9 LBS Cornmeal Slurry Water * *
12.37% ......20.87 LBS of Water
10.51% .....17.73 LBS Starter Water (Levain) * * *
70.83% .....119.5 LBS Total Water
4.44%......... 7.49 LBS of Honey
1.48% .........2.5 LBS Salt
115.6 LBS of Cornmeal Slurry ( Cornmeal 25 LBS, Water 58.25 LBS - [slurry formula 30% / 70%]) * *
Amount of slurry used - 115.6 lbs - 100% (Cornmeal 34.7 LBS - 30%, Water 80.9 LBS Water - 70%, )* *
35.46 LBS of Starter Dough (Levain) (Starter Flour 17.73 LBS, Starter Water 17.73 LBS) 100% hydration? * * *
Cornmeal Rye Bread
2 lb loaf = 907 g
260g Bread Flour
109g Rye Flour
110g Cornmeal for slurry* *
56g Starter Flour (Levain) * * *
535g Total Flour
257g Cornmeal Slurry Water * *
66g Water
56g Starter Water (Levain) * * *
372g Total Water
24g Honey
8g Salt
367g cornmeal slurry - 110g Cornmeal for slurry - 257g Cornmeal Slurry Water
112g Starter (Levain) - 56g Starter Flour - 56g Starter Water
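Not part of the original thread: a small C sketch of the same arithmetic, scaling the quoted full-batch weights down to an arbitrary target dough weight. Every ingredient is multiplied by the same ratio, so the formula's proportions are preserved; the target weight below is just an example.

#include <stdio.h>

/* Linear scaling of the quoted bakery formula: each ingredient is
   multiplied by (target dough weight) / (total batch weight). */
int main(void)
{
    const char  *name[] = {"bread flour", "cornmeal slurry", "rye flour",
                           "salt", "starter (levain)", "water", "honey"};
    const double lbs[]  = {81.88, 115.6, 34.40, 2.5, 35.46, 20.87, 7.49};
    const int n = 7;
    const double target_g = 907.0;   /* one 2 lb loaf, as in the post above */

    double total_lbs = 0.0;
    for (int i = 0; i < n; ++i)
        total_lbs += lbs[i];

    for (int i = 0; i < n; ++i)
        printf("%-18s %6.1f g\n", name[i], lbs[i] / total_lbs * target_g);

    return 0;
}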
Nov 26 2013 - 5:26pm
I'm not sure of the rules
but if you want to figure salt for the flour, rye, cornmeal and starter, I don't see how else you would do it. 1.5% to 2.0% is the range you want salt to be. Hydration of 70% is a wet dough, but I
would think it is in the proper range. 20% starter looks good as a portion of the entire recipe. A starter hydration of 100% is a pancake batter-like pourable consistency.
Everything in the Baker's percentage looks good to me. I'm tempted to try it out and make a 2 lb loaf.
Look at the Recipe Table in this recipe to see how ingredients are accounted for and come together: (scroll down a little to see recipe table )
Nov 26 2013 - 5:05pm
Couple of things
typecase, your scaling approach makes sense, and it looks like the original formula should not produce a really wet dough. So I think you have a error in math, transcription, or measurement. I do
this all the time. So I think you need to double check the math and transcription and try the loaf one more time.
Baker's math is extremely useful, but there are different ways of calculating it. For example in this thread FloydM thinks he would leave the corn meal out of the 'flour' but Antilope puts it into
the calculation. Both of these are valid approaches, as long as one understands which system is being used. So right now I'd leave it out and stick to plain scaling to get your bread going. Then you
can put in in baker's percentages according to the system you use for reference, and for better understanding of the characteristics of the dough.
Here a a few more things to think about:
The original formula says '115.6 LBS of corn meal slurry ( Cornmeal 25 LBS, Water 58.25LBS)'. Is the second part of that line supposed to specify the cornmeal/water ratio? Or is it the actual
Is the slurry soaked overnight or cooked?
Is the hydration of your starter the same as that of the bakery? (It looks like yours is 50% and theirs is 30%)?
Anyway, check your math, measure carefully and make it again. It looks like a pretty good bread.
Nov 26 2013 - 6:35pm
That slurry
Seems kinda wet, no? At least compared to the cornmeal soaker for the Anadama bread below...
Nov 26 2013 - 7:13pm
Nov 26 2013 - 6:46pm
If you take the 58.25 LB of the soaker water and subtract 25 LB of cornmeal, you get 33.25, which just happens to be 133% of the cornmeal. Maybe it should be 100 to 133 like the Reinhart formula for
the Cornmeal soaker.
Nov 26 2013 - 7:44pm
Okay, let's try it this way
Peter Reinhart's Anadama Bread
from Bread Baker's Apprentice
Anadama Bread..............%
Bread flour..................100
Instant yeast.................1.1
In the style of Peter Reinhart in the Bread Baker's Apprentice
Cornmeal Rye Bread ....%
Water...........................233 (possibly should be 133?)
Total.............................333 (possibly should be 233?)
SOURDOUGH STARTER (Levain)
Bread Flour................70.4
Rye Flour....................29.6
Cornmeal Soaker......99.4
Sourdough Starter....30.5
Nov 26 2013 - 7:55pm
Percentages both ways
Hi typecase. No doubt, you came to the right forum for this question! :-)
Do I count the cornmeal as a 'flour' in the bakers percent?
Some bakers do, some don't. It's personal preference. We can calculate the baker's percentages for both cases--that is, counting cornmeal as flour, versus not counting cornmeal as flour--and compare
the formulas. Thanks for clarifying that 25 lbs cornmeal to 58.25 lbs water is the slurry ratio; in other words, 100% cornmeal to 233% water.
Starter hydration aside (since that's an unknown), here are the baker's percentages for your formula, counting cornmeal as flour:
And the same formula, not counting cornmeal as flour:
In both formulas, hydration and salt are in reasonable ranges for bread that uses a cornmeal soaker. My personal preference would be the latter formula, as it better reflects my thinking when making
a bread such as this: In a basic lean dough, I would use 1.8% salt. With the addition of soaked grain (such as corn), I would up the salt to 2-2.25%. Also, you suggested that the loaf from your local
bakery has a "nice big crumb," which tells me that we're after a high-hydration formula, and it's nice to see that expectation reflected in the formula.
If you'd like to bake from this formula, here it is scaled to a home-baking weight of 750 grams:
To your question:
Is there a universal agreement on bakers percent?
In 2009, professional bread bakers Abram Faber, Craig Ponsford and Jeffrey Yankellow from the Bread Bakers Guild of America set out to develop a standard format for bread formulas, and they published
their work in this article. BreadStorm takes much inspiration from this work and allows bakers to create and use formulas in a standard format without doing any math.
Come back with more questions, if you like. And please let us know how the next loaf turns out.
Nov 27 2013 - 5:20am
Nice treatment
That's a very good explanation of the different ways to approach baker's percentages. It's also worth noting that typecase's 'simple scaling' should have yielded an equivalent result. Here's my math
for a simple linear scale of the recipe using the method typecase described. The 'scale to 750' yields agrees with Jaqueline's; I've also included a scale to 3.5 Lbs. Typecase, do these numbers agree
with yours?
Ingredient        Original (lb)   Original (g)   Scaled to 750 g (g)   Scaled to 3.5 lb (lb)   Scaled to 3.5 lb (oz)
bread flour       81.88           37173.52       205.9                 0.961                   15.38
cornmeal          34.71           15760.48       87.3                  0.407                   6.52
water             80.89           36723.55       203.4                 0.949                   15.19
rye               34.4            15617.6        86.5                  0.404                   6.46
salt              2.5             1135           6.3                   0.029                   0.47
levain            35.46           16098.84       89.2                  0.416                   6.66
water             20.87           9474.98        52.5                  0.245                   3.92
honey             7.49            3400.46        18.8                  0.088                   1.41
Total             298.20          135384.43      750.0                 3.5                     56
Nov 27 2013 - 5:37am
Hydration percent shouldn't matter for scaling
In the case of scaling a known good formula, you shouldn't even need to consider hydration percent. It's already figured into the formula, and known to work. Simple math would work best, since the
ingredients are already listed by weight. I would recommend you turn everything into smaller units, such as ounces, or even grams, as Floyd suggested. The smaller the unit, the easier it will be to
scale, because there will be fewer partial units (decimal places) to worry about. If you only want to scale it once to a specific size, your way should work well enough.
But, to have the most freedom, and really see how the ingredients work together, I would scale it down to the smallest possible amount, even if it is less than a loaf. That becomes your overall
formula "unit". Then it can be scaled back up by multiplying that number times the number of "units" you need. For instance, if your "unit" ends up being 3oz of dough, that is the smallest amount of
dough you can make easily. Then, you can see that making a pound of dough would be risky, because dividing a "unit" would introduce slop and errors. You could make 15oz or 18oz easily by multiplying
by 5 or 6, respectively. If you needed 16oz, you would make 48oz and divide it three ways after mixing.
P.S. The amount of cornmeal slurry you listed is greater than the amount of cornmeal and water going into it?
Nov 28 2013 - 4:59am
What temperature water
do you use for the cornmeal soak? Many recipes will use boiling water, that may help the cornmeal to be more absorbent.
Nov 30 2013 - 2:13pm
So I tried making this bread
Curiosity got the better of me, so I made a single 750g loaf using the scaled proportions above. I used Bob's Red Mill coarse corn meal (polenta), KA bread flour and Hodgson's mill rye. I used a 100%
hydration white bread flour levain.
I started the levain build and poured boiling water over the corn meal to make the slurry. I let both sit, the levain at 78 degrees, the slurry at room temperature (60's) for 14 hours.
I combined all the ingredients except the salt to a rough mix, then let it autolyse 40 minutes.
The dough was pretty wet, confirming typecast's experience, and I let it knead with the dough hook in my Kitchenaid, speed 3 for 10 minutes. There was not much, if any gluten development.
I let it bulk ferment at 78 degrees for 3 hours, folding three times, then shaped into a boule. The dough was still quite wet, but shapable.
I let the dough retard for another 14 hours in the fridge, then baked at 460 with steam, then 400 convection without steam, leaving it in the oven for 7 minutes with the door open at the end.
It looks okay:
I got pretty fair oven spring; the loaf was pretty flat when it went in. It would have been better if I slashed properly. The crumb is open and a little spongy; it reminds me of Portuguese Pão de
While it's possible to make this bread I wonder if the dough has to be that wet. In particular I don't understand why it uses a slurry, which is so wet even after 14 hours that it poured out of its
container like sand at the beach. Usually one tries to make a soaker hydration-neutral, so it neither adds nor removes water from the dough. Putting all that water into the slurry reduces the ability
to manage the hydration.
The last thing I added to the dough was the water; before that it looked pretty good, but IMHO adding the water sent the dough into the twilight zone.
Next time I make this bread (and I will - it's good) I think I will use maybe 120% rather than 233% water in the slurry (which should turn it into more of a soaker) and manage the hydration by
adjusting the amount of water. It might be that simply changing the slurry hydration is enough.
So typecast, it ain't just you. Thanks for sharing this with us.
Dec 3 2013 - 12:37pm
Still looking at this recipe
Concerning the Cornmeal slurry:
Just thinking out loud.
Some of the ingredients and weights listed in original recipe:
81.88 LBS of bread flour
34.40 LBS of Rye Flour
116.28 LBS - let's call that 100% for the Baker's Percent.
99.4% - 115.6 LBS Cornmeal Slurry, that' almost 100% of the combined flour weight.
If you scale the slurry at 1.33 to 1, as in the Reinhart recipe:
25.00 LBS (100) Cornmeal (probably comes in that size bag purchased by the bakery)
33.25 LBS (133) Boiling Water
Add the two numbers above and you get 58.25 LBS (233)
58.25 LBS (233) - That number appers in the original recipe. Probably the combined weight of Cornmeal and Water for the slurry when mixed from one bag of cornmeal.
Double the Cornmeal slurry (using two 25-LB bags of Cornmeal and 66.5 LBS water) and you get very close to the 115.6 LBS of Cornmeal slurry used in the original recipe. (Could the 115.6 LBS be a typo
and they really meant 116.5 LBS which agrees with the above figures?)
The Cornmeal slurry numbers seem to make sense to me when figured this way.
Dec 3 2013 - 1:55pm
That makes sense to me too, antilope.
Dec 1 2013 - 7:37am
Good stuff
That was fun, typecase,
It's a pretty good bread, especially the day after baking, when the flavors are more developed.
Feb 18 2014 - 4:21am
Coincidentally I was making this bread again when I saw your update. It's really good.
i wonder what's different about the grind. Coarser or finer than what you were using? Did you see the label?
|
{"url":"http://www.thefreshloaf.com/comment/275667","timestamp":"2014-04-18T11:09:28Z","content_type":null,"content_length":"81347","record_id":"<urn:uuid:f9725836-4eef-4bee-b9ee-90597dfb3cd3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In what sense do statistical methods provide scientific evidence?
Bill Thompson's The Nature of Statistical Evidence addresses this intriguing question. Along the way, he discusses whether statistics meets the predictive and experimental verification criteria of
the scientific method; critiques Bayesian inference (e.g., "Randomness Needs Explaining"); investigates various interpretations of probability and "attitudes toward chance;" offers a framework for
statistical evidence as an alternative to the "true value" model; and more. Many thought-provoking points in very few pages--a practical slant on the philosophy of statistics, from a Professor
Emeritus of Statistics (University of Missour-Columbia) who has also consulted for the National Bureau of Standards and the U.S. Army Air Defense Board, among others. And more diplomatically phrased
than the initial statement that "the purpose of this book is to discuss whether statistical methods make sense."
The Nature of Statistical Evidence by W. A. Thompson. Springer, 2007. Mathematics Library QA276.16 .T488 2007 Link to MnCat Record
|
{"url":"http://blog.lib.umn.edu/fowle013/mathematicslibrary/2007/04/in_what_sense_do_statistical_m.html","timestamp":"2014-04-20T19:45:55Z","content_type":null,"content_length":"5066","record_id":"<urn:uuid:7efff5e2-b377-4f1a-8b1f-07f86ce7344c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topological Correlators in Landau-Ginzburg Models with Boundaries
Advances in Theoretical and Mathematical Physics
Topological Correlators in Landau-Ginzburg Models with Boundaries
Anton Kapustin and Yi Li
We compute topological correlators in Landau-Ginzburg models on a Riemann surface with arbitrary number of handles and boundaries. The boundaries may correspond to arbitrary topological D-branes of
type B. We also allow arbitrary operator insertions on the boundary and in the bulk. The answer is given by an explicit formula which can be regarded as an open-string generalization of C. Vafa's
formula for closed-string topological correlators. We discuss how to extend our results to the case of Landau-Ginzburg orbifolds.
Article information
Adv. Theor. Math. Phys. Volume 7, Number 4 (2003), 727-749.
First available: 4 April 2005
Permanent link to this document
Mathematical Reviews number (MathSciNet)
Zentralblatt MATH identifier
Kapustin, Anton; Li, Yi. Topological Correlators in Landau-Ginzburg Models with Boundaries. Advances in Theoretical and Mathematical Physics 7 (2003), no. 4, 727--749. http://projecteuclid.org/
|
{"url":"http://projecteuclid.org/euclid.atmp/1112627039","timestamp":"2014-04-18T08:03:36Z","content_type":null,"content_length":"28787","record_id":"<urn:uuid:a2d0186a-9eb5-4a62-acf7-b86b12abb444>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
After some long hours of research and designing, I think I am finally ready to start building the enclosure for my alpine 12" type R sub. I took into account the factory recommendations when
designing this box.
The outer dimensions are 24" wide x 13.5" deep x 13.5" long. Built with 3/4" MDF. It will be L ported, with first length being 10" and the second being 8". The width of the port will be 1.25".
The total volume of the box will be 1.679 cubic feet (this is after subtracting the woofer and L porting displacement.) The box will be screwed, then glued together, and carpeted.
I will be using a Rockford Fostgate 1000 watt monoblock amp with the sub wired at 4 ohms, which will put out around 500 watts rms. Tuned for 34Hz. It is being put in the cargo area of my 2001
Ford Explorer Sport.
This will be my first box build, so any feedback or suggestions would be much appreciated.
Attachment 26532857
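As an aside (not from the thread): a rough net-volume check in C. The internal dimensions are the outer dimensions minus twice the stated 3/4" wall thickness; the driver and port displacement figures below are placeholder assumptions and should be replaced with the actual spec-sheet and port-geometry numbers.

#include <stdio.h>

int main(void)
{
    /* Outer dimensions and wall thickness from the post (inches). */
    double W = 24.0, H = 13.5, D = 13.5, t = 0.75;

    /* Assumed displacements in cubic feet -- adjust to the real driver
       spec sheet and the actual port air space plus port walls. */
    double driver_disp = 0.10;
    double port_disp   = 0.25;

    double gross_in3 = (W - 2*t) * (H - 2*t) * (D - 2*t);
    double gross_ft3 = gross_in3 / 1728.0;
    double net_ft3   = gross_ft3 - driver_disp - port_disp;

    printf("Gross internal volume: %.3f ft^3\n", gross_ft3);
    printf("Net volume after displacements: %.3f ft^3\n", net_ft3);
    return 0;
}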
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
Hey, how did this box turn out if you built it? I'm doing the same sub but the newer model; I don't know if I should do it bigger or not.
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
Definately go bigger. Sweet spot is about 2.5 for those
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
Bigger. I know Alpine specs say something like 1.75 but the Type R's (at least the older models) like 2.25-2.5.
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
Do you have the older one or newer one? I have the first gens and plan to make a ported box after I start work, and get rid of this sealed one. Let us know things work out!
Camren Trevor
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
OK i have a type r 15 that I was wondering if getting a bigger box would help hit the lows harder i guess, I have a bassworx box that sounds good, well better than the one it was in before, but I
want to know if i got a 2.3 or even a 2.9 cubic feet box that it would hit harder. the one it is in now is a 1.94 cubic foot box and i am limited in space. This is going in a extended cab s-10.
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
iirc those like tuning around 27hz also.
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
i got my new gen type r in one of them bbox boxes.. read on their website its the recommended size an they work together. im pretty sure it'll sound better in a bigger box mines just got a punch
to it. no rumble if that makes sense
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
What's the software you used to model your box?
Re: Single Alpine Type R 12" Ported Box Design. Feedback Pleease?
I was going to run a Type R 15 in my daughters car. The box I was going to use was 3.5 cubic ft. (after displacement) tuned around 32. So a 2.3 or 2.9 would be waaaay too small imo.
|
{"url":"http://www.caraudio.com/forums/enclosure-design-construction-help/547101-single-alpine-type-r-12-ported-box-design-feedback-pleease-print.html","timestamp":"2014-04-16T14:42:35Z","content_type":null,"content_length":"12475","record_id":"<urn:uuid:4e697a45-eb35-4a68-93db-2872245c76b0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relationship between Hardness and Elastic modulus?
Submitted by CAEengineer on Sun, 2007-11-11 23:20.
What is the relationship between hardness and elastic modulus? The higher hardness, the higher elastic modulus? My understanding is that hardness is a local mechanical property, and
elastic modulus is an averaged global mechanical property. Am I right about this?
Great question and a topic that is commonly misunderstood!
Elastic modulus is an intrinsic material property and fundamentally related to atomic bonding. Hardness is an engineering property and for some materials it can be related to yield strength. Hardness
has strong usefulness in characterization of different types of microstructures in metals and is frequently used in the context of comparing things like work-hardened and tempered metals. The classic
experiment in this regard is the Jominy end-quench test . There are no apparent changes in elastic modulus in metals that have undergone different hardening treatments so the hardness is a good
indication of the underlying microstructure. In metals undergoing indentation deformation, the majority of deformation is plastic and the hardness gives a good metric of plastic deformation
differences between materials.
In general, if you plot the indentation (i.e. Vickers) hardness against the elastic modulus for a large range of materials (using software like CES from Granta makes this really easy since both
properties are listed in the database) you will find that the two do increase together. In non-metals, a large fraction of the indentation deformation is elastic, so the two properties are not truly
You can take a simplified model such as that forwarded by Sakai (and examined in additional detail in my own paper on the subject ) where the elastic and plastic deformation components are assumed to
act in series, with two fundamental material parameters: an elastic modulus and a "resistance to plastic deformation". In this approach, the indentation hardness is actually related to both of these
parameters, a function of both the elastic and plastic parts. The limiting behaviour then for metals is easy to understand, where the resistance to plastic deformation is relatively small such that
the elastic deformation contributions to the indentation hardness are minimal and hardness is an approximate measure of plastic deformation resistance.
In most other materials, including ceramics and mineralized tissues (organic-inorganic composites) the contributions to the total deformation from elastic and plastic deformation can be similar and
so the results from, for example, a series of nanoindentation tests the hardness is directly dependent on elastic modulus.
Further complicating the picture is the case of polymers, where the hardness is a time-dependent function (where the total deformation can be considered as a series sum of viscous, elastic and
plastic deformation components ). In this type of case the measurement of "hardness" under different loading rates or load-holding times can actually be used to examine creep response of the material
Submitted by
Atish Ray
on Wed, 2008-03-05 23:17.
Dear All,
I guess the discussion has been closed now, but couldn't contain my eagerness to throw some student thoughts :)
I'm only one day old at iMech, but have spent almost half a day browsing through various delighting discussions.
As Dr. Oyen says or known otherwise, Moduli (compliance / stiffness) are material's intrinsic properties, while "hardness" in other hand should be treated as an extrinsic materials response (/
property). Let's stick to Elastic stiffness (C) for the sake of discussion, as we know C is a function of the bond strength, in turn, it depends on the potential energy (E) well. Stiffer the
material, narrower is the potential well (C is dependent on the double derivative of E).
Now lets look at hardness. Going by the undergrad definition, Hardness is a measure (or response) of a material's flow resistance. Meaning, how easy (or hard) to move the dislocations in a periodic
structure (like metals). This is influenced by the local structure of the material at the elasto-plastic transition and during subsequent plastic straining.
I think of a quantitative way to link the above two by using the Peierls-Nabarro stress resistance (τ_PN), which is to be overcome in order to move a group of dislocations resting on parallel slip
planes. This thought was running in back of my mind while going through this thread. Someone must have done this, which I'm not aware of as my thesis deals with a very different subject on large
plastic strains, far far away from the elasto-plastic regime, but nevertheless I've been always interested in dislocations and their dynamics :)
A question just cropped into my mind while writing this: In an indentation test (for example Vickers) is there any stress triaxiality involved?
This is important because the hydrostatic component (σ_ii) of stress can alter the dislocation core diameter, and in turn, can change the P-N resistance (τ_PN). Again, my thesis doesn't encompass
indentation, rather I'm looking at plane strain compression of metals.
I might be way off ;) Cheers - Atish
Dr. Oyen,
Thanks a lot for the explanation! I will study it in details to learn more.
Another question about this topic is how the differences are reflected in Finite element analysis. Of course,
there is no single material parameter to reflect hardness in FEA, but hardness does play important role in
material behaviors in reality.
Thanks for the reply again,
Submitted by
Yujie Wei
on Mon, 2007-11-12 10:37.
I would like to follow up the discussion on the relationship between elastic moduli and yield strength in materials. As pointed out by Dr. Oyen, elastic modulus is an intrinsic material property and
fundamentally related to atomic bonding. The strength of materials is associated with plastic deformation mechanisms in a material and is hence structural and deformation-mechanism dependent.
In metals, we know in general that grain size, dislocation density, precipitates, etc. have a strong influence on the strength of materials with almost no change in elasticity. If the plasticity
in a material is by creep, then the strength will show high strain-rate sensitivity.
In a polymeric material, internal chain structures decide both elastic and plastic properties of the material, since chains control both elastic and plastic deformation behavior.
In a very broad viewpoint, there is a rough relationship between moduli and strengths in materials: large moduli correspond to higher strengths. I think that there is a chart in Ashby’s book on this
regard (can’t remember clearly which book but I do have the impression in mind). In metallic glasses, since there are no apparent internal structures, this relationship is indeed quite significant,
see http://www.nanonet.go.jp/english/mailmag/2004/014a.html (the Figure at the RHS). It may be due to the fact that the onset of plasticity in metallic glasses is due to breakage of metallic bonds.
I hope that the information here, together with that by Dr. Oyen, make more sense for the discussion.
Yujie, thanks for the response!
Submitted by
Gang Feng
on Mon, 2007-11-19 17:41.
Michelle gave a great summary on the qualitative relationship between hardness (H), elastic modulus (E), and yield strength (Y). In fact, it has been a long history to find this relation, and it is
still an on-going topic. K. L Johnson (“Contact Mechanics”, page 175) proposed one of the most famous models for explaining the physical process of indentation, which is based on the rough
equivalence between an expanding cavity and an indentation for the same elastoplastic material. According to Johnson’s expanding cavity model, there is the following quantitative relation between H,
E, and Y,
H = (2Y/3){2 + ln[E/(3Y tan a)]} (1),
assuming that the Poisson's ratio is equal to 0.5, where a is the semi-angle of a conical indenter. Equation (1) shows that H is closely related to Y and also related to E through the ratio E/(Y tan a).
According to experiments and finite element analysis (FEA), Equation (1) is satisfied for E/(Y tan a) < ~30; when E/(Y tan a) > ~30, H ≈ 3Y, which is commonly called Tabor's relation. To know more, you might
want to check Johnson’s Book and Cheng and Cheng's paper. For some FEA results, you could also see our article (Fig. 5a). Then, in the relationship of H-Y-E, another important issue is the relation
of Y-E, which was well discussed by Yujie as in the previous post.
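As a quick numerical reading of Eq. (1), the sketch below simply evaluates the expanding-cavity expression over a range of E/(Y tan a). The 70.3-degree equivalent-cone semi-angle and the function name are my own assumptions for a Vickers-like indenter, not part of the original post.

```python
import math

def cavity_hardness(E, Y, a_deg=70.3):
    """Eq. (1) as quoted above: H = (2Y/3)*{2 + ln[E/(3 Y tan a)]}."""
    a = math.radians(a_deg)
    return (2.0 * Y / 3.0) * (2.0 + math.log(E / (3.0 * Y * math.tan(a))))

Y = 1.0                                  # work in units of the yield strength
tan_a = math.tan(math.radians(70.3))
for ratio in (5, 10, 20, 30, 100, 300):  # ratio = E / (Y tan a)
    H = cavity_hardness(ratio * Y * tan_a, Y)
    # beyond ratio ~30 the contact is fully plastic and H saturates near Tabor's 3Y
    print(f"E/(Y tan a) = {ratio:4d} -> cavity-model H/Y = {H:4.2f}")
```

On this reading the cavity-model value reaches about 3Y near E/(Y tan a) ≈ 30, which is where the Tabor regime quoted above takes over.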
Gang, thanks for the input! I think it is very helpful.
Now I have a clearer picture of this topic. Will study the information to learn more.
|
{"url":"http://www.imechanica.org/node/2285","timestamp":"2014-04-19T12:30:47Z","content_type":null,"content_length":"38004","record_id":"<urn:uuid:521e0f62-b87b-42de-821c-ef96ccd3d66e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding k of a Tangent Line
Maybe we can walk through this? At first glance, the first thing I would do is find the derivative of $y=5x+45$, but that's not right.
If you haven't learned "the whole dy/dx thing", why did you refer to finding the derivative in your first post? The formula you give is for the slope of a straight line. If f(x) is not linear, it
is the slope of a "secant" line, a line that crosses the graph at x and x+y, not the tangent line. Since you don't yet know the derivative (I suspect this problem is preliminary to introducing
the derivative), here's a method that predates the calculus:
If y= 5x+ 45 and $y= k\sqrt{x}$ meet at all, we must have $y= 5x+ 45= k\sqrt{x}$ or $5x- k\sqrt{x}+ 45= 0$. Let u= $\sqrt{x}$ so that $x= u^2$. The equation becomes $5u^2- ku+ 45= 0$. If x= a at
the point of intersection, then $u= \sqrt{a}$ is a root, so $u- \sqrt{a}$ must be a factor of that polynomial. In fact, to be tangent it must be a double factor- we must have $5u^2- ku+ 45= 5(u- \sqrt{a})^2$
Multiplying the right side, we get $5u^2- ku+ 45= 5u^2- 10\sqrt{a} u+ 5a$ for all u. Comparing coefficients, $-k= -10\sqrt{a}$ and $5a= 45$. Solve the second equation for a and use that to solve
the first equation for k.
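For readers who want to check this symbolically, here is a small SymPy sketch of the same coefficient-matching argument (assuming SymPy is available; the variable names are mine):

```python
import sympy as sp

k, a = sp.symbols('k a', positive=True)

# Tangency means 5u^2 - k*u + 45 is the perfect square 5*(u - sqrt(a))^2,
# i.e. -k = -10*sqrt(a) and 5*a = 45.
print(sp.solve([sp.Eq(-k, -10*sp.sqrt(a)), sp.Eq(5*a, 45)], [k, a], dict=True))
# -> [{a: 9, k: 30}]

# Sanity check: y = 5x + 45 and y = 30*sqrt(x) meet at x = 9 with equal slopes.
x = sp.symbols('x', positive=True)
f, g = 5*x + 45, 30*sp.sqrt(x)
print(f.subs(x, 9), g.subs(x, 9))                          # 90 90
print(sp.diff(f, x).subs(x, 9), sp.diff(g, x).subs(x, 9))  # 5 5
```

So k = 30, with the line touching the curve at x = 9.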
|
{"url":"http://mathhelpforum.com/calculus/127217-finding-k-tangent-line.html","timestamp":"2014-04-18T16:57:46Z","content_type":null,"content_length":"51753","record_id":"<urn:uuid:8227d824-888c-4a01-8c85-05ab7ba4b45a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The last two sections have pulled a fast one. We began by discussing polarization as a two component tensor quantity, but then started discussing the production of polarization as if only its
amplitude were relevant. A more complete formalism for describing the polarization field has been worked out and will be presented in this section (see Kamionkowski, Kosowsky, and Stebbins (1997) for
a more extensive discussion). An equivalent formalism employing spin-weighted spherical harmonics has been used extensively by Zaldarriaga and Seljak (1997). Note that the normalizations employed by
Seljak and Zaldarriaga are slightly different than those adopted here and by Kamionkowski, Kosowsky, and Stebbins (1997).
The microwave background temperature pattern on the sky can be expanded in spherical harmonics as T/T[0] = 1 + sum over (lm) of a[(lm)]^T Y[(lm)],
where the a[(lm)]^T are the temperature multipole coefficients and T[0] is the mean CMB temperature. Similarly, we can expand the polarization tensor for linear polarization,
(compare with Eq. 8; the extra factors are convenient because the usual spherical coordinate basis is orthogonal but not orthonormal) in terms of tensor spherical harmonics, a complete set of
orthonormal basis functions for symmetric trace-free 2 × 2 tensors on the sky,
where the expansion coefficients are given by
which follow from the orthonormality properties
These tensor spherical harmonics have been used primarily in the literature of gravitational radiation, where the metric perturbation can be expanded in these tensors. Explicit forms can be derived
via various algebraic and group theoretic methods; see Thorne (1980) for a complete discussion. A particularly elegant and useful derivation of the tensor spherical harmonics (along with the vector
spherical harmonics as well) is provided by differential geometry (Stebbins, 1996). Given a scalar function on a manifold, the only related vector quantity at a given point of the manifold is the
covariant derivative of the scalar function. The tensor basis functions can be derived by taking the scalar basis functions Y[lm] and applying to them two covariant derivative operators on the
manifold of the two-sphere (the sky):
where ε[ab] is the completely antisymmetric tensor, the ``:'' denotes covariant differentiation on the 2-sphere, and
is a normalization factor. Note that the somewhat more familiar vector spherical harmonics used to describe electromagnetic multipole radiation can likewise be derived as a single covariant
derivative of the scalar spherical harmonics.
While the formalism of differential geometry may look imposing at first glance, the expansion of the polarization field has been cast into exactly the same form as for the familiar temperature case,
with only the extra complication of evaluating covariant derivatives. Explicit forms for the tensor harmonics are given in Kamionkowski, Kosowsky, and Stebbins (1997). Note that the underlying
manifold, the two-sphere, is the simplest non-trivial manifold, with a constant Ricci curvature R = 2, so the differential geometry is easy. One particularly useful property for doing calculations is
that the covariant derivatives are subject to integration by parts:
with no surface term if the integral is over the entire sky. Also, the scalar spherical harmonics are eigenfunctions of the Laplacian operator:
The existence of two sets of basis functions, labeled here by ``G'' and ``C'', is due to the fact that the symmetric traceless 2 × 2 tensor describing linear polarization is specified by two
independent parameters. In two dimensions, any symmetric traceless tensor can be uniquely decomposed into a part of the form A[: ab] - (1/2)g[ab]A[: c]^c and another part of the form B[: ac] ^c[b] +
B[: bc] ^c[a] where A and B are two scalar functions. This decomposition is quite similar to the decomposition of a vector field into a part which is the gradient of a scalar field and a part which
is the curl of a vector field; hence we use the notation G for ``gradient'' and C for ``curl''. In fact, this correspondence is more than just cosmetic: if a linear polarization field is visualized
in the usual way with headless ``vectors'' representing the amplitude and orientation of the polarization, then the G harmonics describe the portion of the polarization field which has no handedness
associated with it, while the C harmonics describe the other portion of the field which does have a handedness (just as with the gradient and curl of a vector field).
This geometric interpretation leads to an important physical conclusion. Consider a universe containing only scalar perturbations, and imagine a single Fourier mode of the perturbations. The mode has
only one direction associated with it, defined by the Fourier vector k; since the perturbation is scalar, it must be rotationally symmetric around this axis. (If it were not, the gradient of the
perturbation would define an independent physical direction, which would violate the assumption of a scalar perturbation.) Such a mode can have no physical handedness associated with it, and as a
result, the polarization pattern it induces in the microwave background couples only to the G harmonics. Another way of stating this conclusion is that primordial density perturbations produce no
C-type polarization as long as the perturbations evolve linearly. This property is very useful for constraining or measuring other physical effects, several of which are considered below.
Finally, just as temperature fluctuations are commonly characterized by their power spectrum C[l], polarization fluctuations possess analogous power spectra. We now have three sets of multipole
moments, a[(lm)]^T, a[(lm)]^G, and a[(lm)]^C, which fully describe the temperature/polarization map of the sky. Statistical isotropy implies that < a[(l'm')]^{X *} a[(lm)]^{X'} > = C[l]^{XX'} delta[l'l] delta[m'm], with X and X' running over {T, G, C},
where the angle brackets are an average over all realizations of the probability distribution for the cosmological initial conditions. Simple statistical estimators of the various C[l]'s can be
constructed from maps of the microwave background temperature and polarization.
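To make the last sentence concrete, here is a minimal NumPy sketch of the standard estimator C[l]^{XX'} = 1/(2l+1) times the sum over m of a[(lm)]^X a[(lm)]^{X' *}, applied to toy coefficients; the array layout, function name, and toy spectra are my own choices, not from the text.

```python
import numpy as np

def estimate_cl(alm_x, alm_y):
    """C_l^{XY} estimate: average a^X_{lm} * conj(a^Y_{lm}) over the 2l+1 values of m."""
    cls = []
    for a_x, a_y in zip(alm_x, alm_y):
        a_x, a_y = np.asarray(a_x), np.asarray(a_y)
        ell = (len(a_x) - 1) // 2
        cls.append(float(np.real(np.sum(a_x * np.conj(a_y)))) / (2 * ell + 1))
    return np.array(cls)

rng = np.random.default_rng(0)
alm_T = [rng.normal(size=2*l+1) + 1j*rng.normal(size=2*l+1) for l in range(2, 11)]
alm_G = [rng.normal(size=2*l+1) + 1j*rng.normal(size=2*l+1) for l in range(2, 11)]
print(estimate_cl(alm_T, alm_T))  # auto-spectrum estimate (about 2 per l for this toy)
print(estimate_cl(alm_T, alm_G))  # cross-spectrum estimate (scatters around 0)
```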
For Gaussian theories, the statistical properties of a temperature/polarization map are specified fully by these six sets of multipole moments. In addition, the scalar spherical harmonics Y[(lm)] and
the G tensor harmonics Y[(lm)ab]^G have parity (- 1)^l, but the C harmonics Y[(lm)ab]^C have parity (- 1)^l + 1. If the large-scale perturbations in the early universe were invariant under parity
inversion, then C[l]^TC = C[l]^GC = 0. The arguments in the previous paragraph about handedness further imply that for scalar perturbations, C[l]^C = 0. A question of substantial theoretical and
experimental interest is what kinds of physics produce measurable nonzero C[l]^C, C[l]^TC, and C[l]^GC. This question is addressed in the following section.
The power spectra can be computed for a given cosmological model through well-known numerical techniques. A set of power spectra for scalar and tensor perturbations in a typical inflation-like
cosmological model, generated with the CMBFAST code (Seljak and Zaldarriaga, 1996) are displayed in Fig. 2.
Figure 2. Theoretical predictions for the four nonzero CMB temperature-polarization spectra as a function of multipole moment l. The solid curves are the predictions for a COBE-normalized scalar
perturbations, while the dotted curves are COBE-normalized tensor perturbations. Note that the panel for C[l]^C contains no dotted curve since scalar perturbations produce no ``C'' polarization
component; instead, the dashed line in the lower right panel shows a reionized model with optical depth
|
{"url":"http://ned.ipac.caltech.edu/level5/Kosowsky/Kosowsky6.html","timestamp":"2014-04-20T11:43:09Z","content_type":null,"content_length":"12454","record_id":"<urn:uuid:c6114ebe-90f3-4512-ba3a-8e1d44f6e11f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Friendswood Calculus Tutor
...If interested students or parents want to share my passion and interest in math and science and other subjects you can contact me anytime. Regards, EricI have an extensive background in Math,
Science, English, and Social Studies that is well suited to tutor elementary (K-6th) students. These ty...
36 Subjects: including calculus, English, chemistry, reading
...Finally, it is important to inspire passion and a desire to learn in each student. I do this by transferring my own passion for learning and by applying the material to real life problems.
Although I am relatively young for the average tutor, my age allows me to connect with students along avenues that are often closed to teachers and parents.
22 Subjects: including calculus, chemistry, physics, geometry
...They range from Differential Geometry to Ordinary differential Equations. I am well versed in this topic and can pull from a wide variety of real world examples that can help ease the
complicated problems. Through my years of experience teaching and tutoring all levels of math, from 6th grade ...
16 Subjects: including calculus, geometry, algebra 1, algebra 2
...I also have experience as a tutor with a large private learning center. I've helped countless students of all ages with math of all levels. I've also taught SAT and ACT prep.
34 Subjects: including calculus, chemistry, reading, English
...I’m friendly and easy for students and parents to get to know and trust. It’s difficult to teach someone unless there is a foundation of trust between the two parties and my disposition makes
it easy to achieve that relationship. I have many hobbies as well.
38 Subjects: including calculus, chemistry, reading, physics
|
{"url":"http://www.purplemath.com/friendswood_tx_calculus_tutors.php","timestamp":"2014-04-20T16:11:47Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:7a5bec3f-5c3c-438e-bdef-42fedfe1eeda>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Direct reconstruction from 1-D wavelet coefficients
Y = upcoef(O,X,'wname',N)
Y = upcoef(O,X,'wname',N,L)
Y = upcoef(O,X,Lo_R,Hi_R,N)
Y = upcoef(O,X,Lo_R,Hi_R,N,L)
Y = upcoef(O,X,'wname')
Y = upcoef(O,X,'wname',1)
Y = upcoef(O,X,Lo_R,Hi_R)
Y = upcoef(O,X,Lo_R,Hi_R,1)
upcoef is a one-dimensional wavelet analysis function.
Y = upcoef(O,X,'wname',N) computes the N-step reconstructed coefficients of vector X.
'wname' is a string containing the wavelet name. See wfilters for more information.
N must be a strictly positive integer.
If O = 'a', approximation coefficients are reconstructed.
If O = 'd', detail coefficients are reconstructed.
Y = upcoef(O,X,'wname',N,L) computes the N-step reconstructed coefficients of vector X and takes the length-L central portion of the result.
Instead of giving the wavelet name, you can give the filters.
For Y = upcoef(O,X,Lo_R,Hi_R,N) or Y = upcoef(O,X,Lo_R,Hi_R,N,L), Lo_R is the reconstruction low-pass filter and Hi_R is the reconstruction high-pass filter.
Y = upcoef(O,X,'wname') is equivalent to Y = upcoef(O,X,'wname',1).
Y = upcoef(O,X,Lo_R,Hi_R) is equivalent to Y = upcoef(O,X,Lo_R,Hi_R,1).
% The current extension mode is zero-padding (see dwtmode).
% Approximation signals, obtained from a single coefficient
% at levels 1 to 6.
cfs = [1]; % Decomposition reduced to a single coefficient.
essup = 10; % Essential support of the scaling filter db6.
for i=1:6
% Reconstruct at the top level an approximation
% which is equal to zero except at level i where only
% one coefficient is equal to 1.
rec = upcoef('a',cfs,'db6',i);
% essup is the essential support of the
% reconstructed signal.
% rec(j) is very small when j is ≥ essup.
ax = subplot(6,1,i),h = plot(rec(1:essup));
set(ax,'xlim',[1 325]);
essup = essup*2;
end
title(['Approximation signals, obtained from a single ' ...
'coefficient at levels 1 to 6'])
% Editing some graphical properties,
% the following figure is generated.
% The same can be done for details.
% Details signals, obtained from a single coefficient
% at levels 1 to 6.
cfs = [1];
mi = 12; ma = 30; % Essential support of
% the wavelet filter db6.
rec = upcoef('d',cfs,'db6',1);
subplot(611), plot(rec(3:12))
for i=2:6
% Reconstruct at top level a single detail
% coefficient at level i.
rec = upcoef('d',cfs,'db6',i);
subplot(6,1,i), plot(rec(mi*2^(i-2):ma*2^(i-2)))
end
title(['Detail signals obtained from a single ' ...
'coefficient at levels 1 to 6'])
% Editing some graphical properties,
% the following figure is generated.
More About
upcoef is equivalent to an N time repeated use of the inverse wavelet transform.
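For readers working outside MATLAB, PyWavelets exposes a function of the same name; the snippet below is a rough analogue of the first example above (this assumes the pywt package and its documented upcoef signature, and boundary handling differs from MATLAB's zero-padding mode, so values need not match exactly):

```python
import pywt

# Reconstruct, at the top level, the approximation generated by a single
# coefficient equal to 1 at levels 1 to 6 (analogue of the MATLAB loop above).
for level in range(1, 7):
    rec = pywt.upcoef('a', [1.0], 'db6', level=level)
    print(level, len(rec))
```

As in the MATLAB example, the essential support of the reconstructed signal roughly doubles with each additional level.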
See Also
|
{"url":"http://www.mathworks.nl/help/wavelet/ref/upcoef.html?nocookie=true","timestamp":"2014-04-23T09:47:04Z","content_type":null,"content_length":"44715","record_id":"<urn:uuid:f947c9d2-8117-40c1-b205-5bc197e3d35b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
trigonometry (1)
Without using mathematical tables or calculator , show that $\theta=\frac{\pi}{10}$ satisfies the equation $\sin2\theta=\cos3\theta$. Thanks for ur help .
Hello,

Consider: $\cos^2 3\theta-\sin^2 2\theta$

Remember these identities:
$\cos(a+b)=\cos(a)\cos(b)-\sin(a)\sin(b)$
$\cos(a-b)=\cos(a)\cos(b)+\sin(a)\sin(b)$

Multiply them:
$\cos(a-b)\cos(a+b) =\cos^2(a)\cos^2(b)-\sin^2(a)\sin^2(b)$
$=\cos^2(b)(1-\sin^2(a))-\sin^2(a)\sin^2(b)$
$=\cos^2(b)-\sin^2(a)\cos^2(b)-\sin^2(a)\sin^2(b)$
$=\cos^2(b)-\sin^2(a)(\cos^2(b)+\sin^2(b))$
$=\cos^2(b)-\sin^2(a)$

Hence $\cos^2 3\theta-\sin^2 2\theta=\cos \theta \cos 5\theta$.

And if $\theta=\frac{\pi}{10}$, then $\cos 5\theta=0$. So $0=\cos^2 3\theta-\sin^2 2\theta=(\cos 3\theta-\sin 2\theta)(\cos 3\theta+\sin 2\theta)$ (1)

Now notice that $3\theta=\frac{3\pi}{10}$ and $2\theta=\frac{\pi}{5}$ are both in the first quadrant (because they are $<\frac{\pi}{2}$), so $\cos 3\theta>0$ and $\sin 2\theta>0$. Hence $\cos 3\theta+\sin 2\theta>0$, and it follows from (1) that $\boxed{\cos 3\theta-\sin 2\theta=0}$ if $\theta=\frac{\pi}{10}$.

Looks clear to you?
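A quick numerical sanity check, outside the spirit of the no-calculator requirement but useful for convincing yourself:

```python
import math

theta = math.pi / 10
print(math.sin(2 * theta))   # 0.5877852522924731
print(math.cos(3 * theta))   # same value up to rounding
print(math.isclose(math.sin(2 * theta), math.cos(3 * theta)))  # True
```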
|
{"url":"http://mathhelpforum.com/trigonometry/97421-trigonometry-1-a.html","timestamp":"2014-04-17T20:59:33Z","content_type":null,"content_length":"49725","record_id":"<urn:uuid:e32f45e7-c9c9-4b1c-9286-fe2448bde405>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Series 215 problem 01.html
A series is a sequence of partial sums, hence a series converges if and only if its sequence of partial sums converges. We explore this more here. Consider the geometric series with terms a(n) = 1/exp(2*n). First the terms of
the sequence to be added.
> a := n-> 1/exp(2*n);
Now we build the sequence of partial sums associated to the series in Exercise 14.
> S := n-> sum(a(k),k=1..n);
And for example we can obtain the first 30 terms of both sequences:
> seq(a(n),n=1..20);
> seq(evalf(S(n)),n=1..30);
You should be sure to understand where both these sequences come from. From this work, it looks like the series converges to .1565176426 (at least to 10 significant digits). In fact, we see that this
series is a geometric series with a = 1/exp(2) and r = 1/exp(2). Since |r| < 1, we have by the geometric series formula that this series will sum to a/(1-r) = 1/(exp(2)-1), which is to 10 significant digits equal to .1565176427.
> S(n);
> Limit(Sum(a(k),k=1..n),n=infinity) = limit(S(n),n=infinity);
Now Maple can do infinite series - but you should know that an infinite series is always a limit of partial sums as discussed here.
> limit(S(n),n=infinity);
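The same computation is easy to reproduce outside Maple; a short Python sketch comparing the partial sums with the closed form 1/(exp(2)-1) (the variable names are mine):

```python
import math

a = lambda n: math.exp(-2 * n)          # terms of the series
partial = sum(a(n) for n in range(1, 31))
print(partial)                          # ~0.1565176427
print(1 / (math.e**2 - 1))              # closed form a/(1-r), also ~0.1565176427
```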
Consider the series .
(a) Argue from the test for divergence , why this series is POSSIBLY convergent.
(b) Use Maple's partial fraction command to rewrite the terms of the series.
(c) Use your result in (b) to find an expression for the n th term in the sequence of partial sums.
(d) Take the limit of this expression as n goes to infinity, and confirm your answer with Maple.
(e) Consider the series . Determine whether this series converges. Justify your answer.
2. Plotting the n 'th partial sum of a series.
If we know the terms a(k) of a series, then we can plot the sequence of partial sums in order to get an impression of whether the series converges. For example, consider the series sum(1/sqrt(k)-1/sqrt(k+1), k=1..infinity). This time, let us define the k 'th term and n 'th
partial sum as functions, and then plot the first few partial sums.
> a:=k->1/sqrt(k)-1/sqrt(k+1);
> s:=n->sum(a(k),k=1..n);
> plot([seq([n,s(n)],n=1..1000)],style=point);
The plot does support the idea that the sequence of partial sums may be converging to 1. But pictures can be very deceiving, so be careful. For example, consider the series represented below.
> a:=k->1/k;
> s:=n->sum(a(k),k=1..n);
> plot([seq([n,s(n)],n=1..1000)],style=point);
The sum of the first 1000 terms is not very large. But we know the series diverges, because this is just the harmonic series. Even Maple knows this series diverges.
> sum(a(k),k=1..infinity);
Consider the series given by .
a) Have Maple try to compute . If that does not work, ask Maple for a floating point estimate of the sum. What does Maple's response tell you?
b) Plot the first 100 points in the sequence of partial sums. Use this to give an estimate of the sum of the series.
c) Plot the first 200 points in the sequence of partial sums. Use this to give a new estimate of the sum of the series.
d) Plot the first 400 points in the sequence of partial sums. What do you observe now?
e) You probably have observed that there is some interesting behaviour in the sequence of partial sums that happens when . Find a floating point estimate of the number 355/113, and explain why that
estimate has something to do with what happens.
|
{"url":"http://www.uwec.edu/math/Calculus/labs/215/html%20labs/%5BIndex%20215%5D/Series%20215%20problem%2001.html","timestamp":"2014-04-21T05:16:32Z","content_type":null,"content_length":"13418","record_id":"<urn:uuid:eea348b5-d1fc-4117-b4a1-8073d893fc41>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HELP!!! C programming Calculations
10-20-2012 #1
Registered User
Join Date
Oct 2012
Hi Everyone,
I'm doing an assignment for uni which is about a small shopping cart.
Could some please help me with these couple of maths calculation issues with c programming please:
1. If the user inputs a weight that is greater than 2500grams, the charge would be $5.50 + $0.90 for every 400grams over 2500grams, i am able to do the calculate on an separate source code to
see it works, but i am unsure how to include it to the original work that i am working on (if that makes any sense....) I have attached the source code for your reference and guidance!!
Thanking for your help !!!
> i am able to do the calculate on an separate source code to see it works,
So why didn't you post this code as well?
> else if(weight >= 25000)
> input1 = 5.50;
You put the code you have here.
weight = setValidWeight();
type = setValidType();
age = setValidAge();
printf("\n\nThe type is %.2f\n",type);
printf("The age is %.2f\n",age);
printf("The Weight is %.2f Grams\n",weight);
calculateCharge ();
But calculateCharge() calls these functions again, prompting you for the same responses.
Consider say
calculateCharge( weight, type, age );
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
If you want to integrate the code into one file, make the above file, into a function, and call it from your new program. Make sure your new program has any missing include files that the
AssigShell file requires.
EZ way to do it. If you need help with that, you'll need to post BOTH programs, so we can see what needs to be done.
i just want the setValidWeight(); in one of the IF statement to do this; if the input weight is greater than 2500grams, it should cost it has $5.50 + $0.90c for every 400grams over 2500grams
i did an rough testing program of that in another source file; have attached it for your references....
Ugh, your program is so short. Just post the code within [code][/code] bbcode tags instead of attaching a source file. People are less likely to want to wade through a wall of code to help you,
and by attaching a source file, you give the impression that you have a wall of code that was too long to post, hence you are less likely to get help.
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
What's so hard about putting some braces in your code?
else if(weight >= 25000)
input1 = 5.50;
else if(weight >= 25000) {
input1 = 5.50;
// now you can put your 4-line calculation here instead.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
you could ??
so u can perform multiple calculations under the IF & ELSE IF statements ??......
else if(weight >= 25000) {
input1 = weight - 25000;
input2 = input1 / 400;
input3 = input2 * 0.90;
return input3;
is the code above correct ??
10-20-2012 #2
10-20-2012 #3
Registered User
Join Date
Sep 2006
10-21-2012 #4
Registered User
Join Date
Oct 2012
10-21-2012 #5
10-21-2012 #6
10-21-2012 #7
Registered User
Join Date
Oct 2012
|
{"url":"http://cboard.cprogramming.com/c-programming/151577-help-c-programming-calculations.html","timestamp":"2014-04-19T20:38:42Z","content_type":null,"content_length":"66216","record_id":"<urn:uuid:4578b1a5-f2b8-4ac7-868b-f41257ae22d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exciting card game teaches probability, addition, subtraction & more!
Exciting card game teaches probability, addition, subtraction & more!
This is a wonderful site. I wish I had this resource when I was in school.
I wasn't sure where to post this, but this thought the forum might be interested in a new card game called Battlez, a game designed by a Stanford graduate whose goal is to teach complex math concepts
in a fun manner to kids of all ages. The beta version of the game was released as the limited edition, and can be found on the website: [Cough]. The character artwork is very nice, and the game
itself is so entertaining that kids will play it for hours and days, and learn complex math concepts without even realizing it. I would love to hear your thoughts about this highly educational game
if you are a game player, and its usefulness as a teaching tool if you are an instructor.
Re: Exciting card game teaches probability, addition, subtraction & more!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Exciting card game teaches probability, addition, subtraction & more!
Complex math concepts. Hmph.
Re: Exciting card game teaches probability, addition, subtraction & more!
As I said " *cough*-*cough* "
Counting down ... 10 .. 9 .. 8 ..
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Exciting card game teaches probability, addition, subtraction & more!
Everyone loves free advertising, don't they?
Why did the vector cross the road?
It wanted to be normal.
Super Member
Re: Exciting card game teaches probability, addition, subtraction & more!
And lo, did I see upon this section this thread and I did say; "Doeth thine eyes decieve thine, o'great purveryor of justice, or doeth thou seeth spam?"
And with awesome rebuke, I did smite it with my mighty hammer of truth, crossed with the fair scepter of trial and the swift rubke of retribution.
Boy let me tell you what:
I bet you didn't know it, but I'm a fiddle player too.
And if you'd care to take a dare, I'll make a bet with you.
Re: Exciting card game teaches probability, addition, subtraction & more!
Yo, verily!
(7 .. 6. .. 5 .. ?)
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Legendary Member
Re: Exciting card game teaches probability, addition, subtraction & more!
People don't notice whether it's winter or summer when they're happy.
~ Anton Chekhov
Cheer up, emo kid.
Re: Exciting card game teaches probability, addition, subtraction & more!
OK, times up ... should this whole topic be deleted?
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Exciting card game teaches probability, addition, subtraction & more!
Re: Exciting card game teaches probability, addition, subtraction & more!
Is someone going to delete this? Or can I?
Legendary Member
Re: Exciting card game teaches probability, addition, subtraction & more!
you can if you like
People don't notice whether it's winter or summer when they're happy.
~ Anton Chekhov
Cheer up, emo kid.
Re: Exciting card game teaches probability, addition, subtraction & more!
I'd leave it as an example to all who try to freely advertise on our boards. Then again, anyone who tries probably wouldn't look through previous topics. Oh well. In that case, I'd leave it because
the topic has turned into an interesting conversation about how spamming is bad, instead of a topic about spam, meaning that it's not bad anymore. Or something.
Why did the vector cross the road?
It wanted to be normal.
Re: Exciting card game teaches probability, addition, subtraction & more!
If you want another site which is connected to the subject with many interactive and entertaining free online math games for kids you can find it here: Probability Games
and to the main site: Math Games
Re: Exciting card game teaches probability, addition, subtraction & more!
Hi BrookDrew;
The shark rounding game was cool, so was the sequence games. I was a little disappointed with the Venn diagram maker. It didn't allow for filling in the intersecting areas or for really adding new
circles. No save also is a negative feature
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=26243","timestamp":"2014-04-20T11:42:08Z","content_type":null,"content_length":"24851","record_id":"<urn:uuid:0650da81-4318-4fa0-989b-cb59092d8425>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Contemporary Mathematics
2012; 144 pp; softcover
Volume: 577
ISBN-10: 0-8218-6929-9
ISBN-13: 978-0-8218-6929-1
List Price: US$62
Member Price: US$49.60
Order Code: CONM/577
This volume contains the proceedings of the conference "Multi-Scale and High-Contrast PDE: From Modelling, to Mathematical Analysis, to Inversion", held June 28-July 1, 2011, at the University of
The mathematical analysis of PDE modelling materials, or tissues, presenting multiple scales has been an active area of research for more than 40 years. The study of the corresponding imaging, or
reconstruction, problem is a more recent one. If the material parameters of the PDE present high contrast ratio, then the solution to the PDE becomes particularly challenging to analyze, or compute.
Similar difficulties occur in time dependent equations in high frequency regimes. Over the last decade the analysis of the inversion problem at moderate frequencies, the rigorous derivation of
asymptotics at high frequencies, and the regularity properties of solutions of elliptic PDE in highly heterogeneous media have received a lot of attention.
The focus of this volume is on recent progress towards a complete understanding of the direct problem with high contrast or high frequencies, and unified approaches to the inverse and imaging
problems for both small and large contrast or frequencies. The volume also includes contributions on the inverse problem, both on its analysis and on numerical reconstructions. It offers the reader a
good overview of current research and direction for further pursuit on multiscale problems, both in PDE and in signal processing, and in the analysis of the equations or the computation of their
solutions. Special attention is devoted to new models and problems coming from physics leading to innovative imaging methods.
Graduate students and research mathematicians interested in applications of PDE to material sciences.
|
{"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-577","timestamp":"2014-04-24T20:57:46Z","content_type":null,"content_length":"16096","record_id":"<urn:uuid:3b6a6125-e325-4ea4-8646-3098d9898d96>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Optimal list order under partial memory constraints
- Journal of the ACM , 1992
"... Abstract. In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of
sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an ..."
Cited by 186 (9 self)
Abstract. In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all on-line algorithms.
Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T1, T2, ..., Tk of tasks is a sequence s1, s2, ..., sk of states where si is the state in which Ti is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred. An on-line scheduling algorithm is one that chooses si only knowing T1 T2 ... Ti. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum w for which there is a w-competitive on-line scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| - 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system. Finally, randomized on-line scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i, j), the expected competitive ratio w(S, d) =
- ACM Computing Surveys , 1985
"... this article. Two examples of simple permutation algorithms are move-to-front, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one
position; and transpose, which merely exchanges the accessed record with the one immediately ahead of it in th ..."
Cited by 29 (3 self)
this article. Two examples of simple permutation algorithms are move-to-front, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one position;
and transpose, which merely exchanges the accessed record with the one immediately ahead of it in the list. These will be described in more detail later. Knuth [1973] describes several search methods
that are usually more efficient than linear search. Bentley and McGeoch [1985] justify the use of self-organizing linear search in the following three contexts:
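As a concrete illustration of the two permutation rules just described, here is a minimal Python sketch (the list representation and function names are mine):

```python
def access_move_to_front(items, key):
    """Linear search; on a hit, move the accessed record to the front."""
    i = items.index(key)               # the access costs i + 1 comparisons
    items.insert(0, items.pop(i))
    return i + 1

def access_transpose(items, key):
    """Linear search; on a hit, exchange the record with the one just ahead of it."""
    i = items.index(key)
    if i > 0:
        items[i - 1], items[i] = items[i], items[i - 1]
    return i + 1

lst = list("abcde")
print(access_move_to_front(lst, "d"), lst)  # 4 ['d', 'a', 'b', 'c', 'e']
lst = list("abcde")
print(access_transpose(lst, "d"), lst)      # 4 ['a', 'b', 'd', 'c', 'e']
```

Both are memory-free heuristics of the kind analyzed in the surveys cited here.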
- In , 1998
"... . We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the
problem of maintaining unsorted lists, also known as the list update problem, we present results on the competit ..."
Cited by 18 (0 self)
. We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of
maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized on-line algorithms. For binary search trees, we
present results for both on-line and off-line algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental
results. 1 Introduction This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be
phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and
, 2004
"... ABSTRACT: Caching is widely recognized as an effective mechanism for improving the performance of the World Wide Web. One of the key components in engineering the Web caching systems is
designing document placement/replacement algorithms for updating the collection of cached documents. The main desi ..."
Cited by 6 (3 self)
ABSTRACT: Caching is widely recognized as an effective mechanism for improving the performance of the World Wide Web. One of the key components in engineering the Web caching systems is designing
document placement/replacement algorithms for updating the collection of cached documents. The main design objectives of such a policy are the high cache hit ratio, ease of implementation, low
complexity and adaptability to the fluctuations in access patterns. These objectives are essentially satisfied by the widely used heuristic called the least-recently-used (LRU) cache replacement
rule. However, in the context of the independent reference model, the LRU policy can significantly underperform the optimal least-frequently-used (LFU) algorithm that, on the other hand, has higher
implementation complexity and lower adaptability to changes in access frequencies. To alleviate this problem, we introduce a new LRU-based rule, termed the persistent-accesscaching (PAC), which
essentially preserves all of the desirable attributes of the LRU scheme. For this new heuristic, under the independent reference model and generalized Zipf’s law request probabilities, we prove that,
for large cache sizes, its performance is arbitrarily close to the optimal LFU algorithm. Furthermore, this near-optimality of the PAC algorithm is achieved at the expense of a negligible additional
complexity for large cache sizes when compared to the ordinary LRU policy, since the PAC
- ICALP'96, LNCS 1099 , 1995
"... We consider self-organizing data structures in the case where the sequence of accesses can be modeled by a first order Markov chain. For the simple-k- and batched-k--move-to-front schemes,
explicit formulae for the expected search costs are derived and compared. We use a new approach that employs th ..."
Cited by 5 (1 self)
We consider self-organizing data structures in the case where the sequence of accesses can be modeled by a first order Markov chain. For the simple-k- and batched-k--move-to-front schemes, explicit
formulae for the expected search costs are derived and compared. We use a new approach that employs the technique of expanding a Markov chain. This approach generalizes the results of Gonnet/Munro/
Suwanda. In order to analyze arbitrary memory-free move-forward heuristics for linear lists, we restrict our attention to a special access sequence, thereby reducing the state space of the chain
governing the behaviour of the data structure. In the case of accesses with locality (inert transition behaviour), we find that the hierarchies of self-organizing data structures with respect to the
expected search time are reversed, compared with independent accesses. Finally we look at self-organizing binary trees with the move-to-root rule and compare the expected search cost with the entropy
of the Markov chain of accesses.
, 1993
"... this paper we assume that the sequence of required keys is a Markov chain with transition kernel P, and we consider the class f* of stochastic matrices P such that move-to-front is optimal among
on-line rules, with respect to the stationary search cost. We give properties of f* that bear out the usu ..."
Cited by 1 (1 self)
this paper we assume that the sequence of required keys is a Markov chain with transition kernel P, and we consider the class f* of stochastic matrices P such that move-to-front is optimal among
on-line rules, with respect to the stationary search cost. We give properties of f* that bear out the usual explanation of optimality of move-to-front by a locality phenomenon exhibited by the
sequence of required keys. We produce explicitly a large subclass of f*. We also show that in some cases move-to-front is optimal with respect to the speed of convergence toward stationary search
cost. 1. Introduction. Let us describe a simple example of a self-organizing sequential search data structure. Let S = {1,2, ... ,M} be a set of items ; assume that these items are stored in places,
and that the set p of places is {1,2, ... ,M}. When an item is required, it is searched for in place 1, then, if not found, in place 2, and so on, and a cost p is incurred if the item is finally
found in place p. Once the item has been found, a control is taken on the search process by replacing the item in a wisely chosen place : for instance, closer to American Mathematical Society 1980
subject classification. Primary 68P05, 90C40 ; secondary 60J10. Key words and phrases. Controlled Markov chain, Bellman optimality condition, self organizing data structure, sequential search,
locality. Abbreviated title (running head). Optimality of move-to-front rule. 2 place 1, in such a way that the most frequently accessed items spend most of their time near place 1. When doing this,
we must free the new position h of the accessed item by pushing the items remaining between the old position k and the new position h, the notaccessed items retaining their relative order, as in
figure 1. Let F = (F n ) n1 be the s...
, 2005
"... Renewed interest in caching techniques stems from their application in improving the performance of the World Wide Web, where storing popular documents in proxy caches closer to end users can
significantly reduce the document download latency and overall network congestion. Rules used to update the ..."
Cited by 1 (1 self)
Renewed interest in caching techniques stems from their application in improving the performance of the World Wide Web, where storing popular documents in proxy caches closer to end users can
significantly reduce the document download latency and overall network congestion. Rules used to update the collection of frequently accessed documents inside a cache are referred to as cache
replacement algorithms (policies). Due to many different factors that influence the Web performance, one of the key attributes of a cache replacement rule are low complexity and high adaptability to
variability in Web access patterns. These properties are primarily the reason why most of the practical Web caching algorithms are based on the easily implemented Least-Recently-Used (LRU) cache
replacement heuristic. In our recent paper [7], we introduce a new algorithm, termed Persistent Access Caching (PAC), that, in addition to desirable low complexity and adaptability, somewhat
surprisingly achieves nearly optimal performance for the independent reference model and generalized Zipf’s law request probabilities. However, the main drawbacks of the PAC algorithm are its
dependence on the request arrival times and variable storage requirements. In this paper, we resolve these problems by introducing a discrete version of the PAC policy that, after a cache miss,
places the requested document in the cache only if
, 2008
"... The transposition rule is an algorithm for self-organizing linear lists. Upon a request for a given item, the item is transposed with the preceding one. The cost of a request is the distance of
the requested item from the beginning of the list. An asymptotic optimality of the rule with the respect t ..."
The transposition rule is an algorithm for self-organizing linear lists. Upon a request for a given item, the item is transposed with the preceding one. The cost of a request is the distance of the
requested item from the beginning of the list. An asymptotic optimality of the rule with the respect to the optimal static arrangement is demonstrated for two families of request distributions. The
result is established by considering an associated constrained asymmetric exclusion process.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=750260","timestamp":"2014-04-19T06:02:04Z","content_type":null,"content_length":"34978","record_id":"<urn:uuid:dc655fe4-abe8-43ee-8afe-f84b8cf36f73>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Physics
1208 Submissions
[4] viXra:1208.0237 [pdf] submitted on 2012-08-30 04:32:08
Can Differentiable Description of Physical Reality be Considered Complete? :toward a Complete Theory of Relativity
Authors: Xiong Wang
Comments: 15 Pages.
How to relate the physical \emph{real} reality with the logical \emph{true} abstract mathematics concepts is nothing but pure postulate. The most basic postulates of physics are by using what kind of
mathematics to describe the most fundamental concepts of physics. Main point of relativity theories is to remove incorrect and simplify the assumptions about the nature of space-time. There are
plentiful bonus of doing so, for example gravity emerges as natural consequence of curvature of spacetime. We argue that the Einstein version of general relativity is not complete, since it can't
explain quantum phenomenon. If we want to reconcile quantum, we should give up one implicit assumption we tend to forget: the differentiability. What would be the benefits of these changes? It has
many surprising consequences. We show that the weird uncertainty principle and non-commutativity become straightforward in the circumstances of non-differentiable functions. It's just the result of
the divergence of usual definition of \emph{velocity}. All weirdness of quantum mechanics are due to we are trying to making sense of nonsense. Finally, we proposed a complete relativity theory in
which the spacetime are non-differentiable manifold, and physical law takes the same mathematical form in all coordinate systems, under arbitrary differentiable or non-differentiable coordinate
transformations. Quantum phenomenon emerges as natural consequence of non-differentiability of spacetime.
Category: Mathematical Physics
[3] viXra:1208.0071 [pdf] submitted on 2012-08-16 15:43:12
The Mass of the Electron-Neutrino Expressed by Known Physical Constants
Authors: Laszlo I. Orban
Comments: 5 Pages.
Many trials attempted to understand the neutrinos ever since Pauli theoretically concluded its existence from the conservation of energy calculations. The present paper demonstrates that commencing
from two appropriately chosen measurement systems, the mass of the electron-neutrino can be calculated from the mass of the electron and the fine-structure constant. The mass of the neutrino can be
determined by the theoretically derived expression (m_k=\alpha^3 m_e) (m_k is the mass of the neutrino, m_e is the mass of electron, alpha is the fine-structure constant).
Category: Mathematical Physics
[2] viXra:1208.0048 [pdf] submitted on 2012-08-12 02:37:48
What is Mass? Chapter One: Mass in Newtonian Mechanics and Lagrangian Mechanics
Authors: Xiong Wang
Comments: 13 Pages. author name: Xiong WANG Email:wangxiong8686@gmail.com
``To see a World in a Grain of Sand, And a Heaven in a Wild Flower'' We will try to see the development and the whole picture of theoretical physics through the evolution of the very fundamental concept of mass. 1 The inertial mass in Newtonian mechanics; 2 The Newtonian gravitational mass; 3 Mass in Lagrangian formalism; 4 Mass in the special theory of relativity; 5 $E = MC^2$; 6 Mass in quantum mechanics; 7 Principle of equivalence and general relativity; 8 The energy-momentum tensor in general relativity; 9 Mass in the standard model of particle physics; 10 The Higgs mechanism.
Category: Mathematical Physics
[1] viXra:1208.0036 [pdf] submitted on 2012-08-08 17:19:13
Mathematical Follow-up for Dark Energy and Dark Matter in the Double Torus Universe.
Authors: Dan Visser
Comments: 7 Pages.
The main issue in this paper is my mathematics to be presented about the maximum of dark energy depending on the information-differences on the wall of any volume in the Double Torus. Secondly the
expressions must be worked out further by invitation to them how are triggered by my ideas the universe has a Double Torus geometry. Thirdly I go deeper into details with dark matter, not only
stating dark matter is a spatial particle that spins and gets its energy from its acceleration into a dark matter torus, but also pretending dark matter gets its mass from the vacuum energy. I lay
out the conditions for understanding why the Big Bang dynamics is therefore a part of the Double Torus and how the dark flow in the universe emerge from the Double Torus dark energy equation.
Fourthly I refer to the pretention neutrinos should be sensitive for the flow of dark matter particles expressed in the set of equations in a former paper. But extensively this paper amplifies this
theoretical neutrino-evidence, despite all the confusion around the truth of neutrinos-faster-than-light. Fifthly I observe some dark energy and dark matter issues from some of my former papers.
Category: Mathematical Physics
|
{"url":"http://vixra.org/mathph/1208","timestamp":"2014-04-24T01:19:54Z","content_type":null,"content_length":"9022","record_id":"<urn:uuid:1c699f35-ace2-4f15-8106-36c5123cf3f1>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In probability theory, the cumulative distribution function (CDF), also called the probability distribution function or just distribution function, completely describes the probability distribution of a real-valued random variable X. For every real number x, the CDF of X is given by

$x \mapsto F_X(x) = \operatorname{P}(X \leq x),$
where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x. The probability that X lies in the interval (a, b] is therefore $F_X(b) - F_X(a)$ if a < b.
If treating several random variables X, Y, ... etc. the corresponding letters are used as subscripts while, if treating only one, the subscript is omitted. It is conventional to use a capital F for a
cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. This applies when discussing general distributions: some
specific distributions have their own conventional notation, for example the normal distribution.
The CDF of X can be defined in terms of the probability density function ƒ as follows:
$F(x) = \int_{-\infty}^x f(t)\,dt$
Note that in the definition above, the "less or equal" sign, '≤' is a convention, but it is a universally used one, and is important for discrete distributions. The proper use of tables of the
binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Levy's inversion formula for the characteristic function also rely on the "less or equal" formulation.
Every cumulative distribution function F is (not necessarily strictly) monotone increasing and right-continuous. Furthermore, we have
$\lim_{x\to -\infty} F(x) = 0, \quad \lim_{x\to +\infty} F(x) = 1.$
Every function with these four properties is a cdf. The properties imply that all CDFs are càdlàg functions.
If X is a discrete random variable, then it attains values x[1], x[2], ... with probability p[i] = P(x[i]), and the cdf of X will be discontinuous at the points x[i] and constant in between:
$F(x) = \operatorname{P}(X \leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p(x_i)$
If the CDF F of X is continuous, then X is a continuous random variable; if furthermore F is absolutely continuous, then there exists a Lebesgue-integrable function f(x) such that
$F(b) - F(a) = \operatorname{P}(a \leq X \leq b) = \int_a^b f(x)\,dx$
for all real numbers a and b. (The first of the two equalities displayed above would not be correct in general if we had not said that the distribution is continuous. Continuity of the distribution
implies that P (X = a) = P (X = b) = 0, so the difference between "<" and "≤" ceases to be important in this context.) The function f is equal to the derivative of F almost everywhere, and it is
called the probability density function of the distribution of X.
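A minimal Python sketch (assuming SciPy) of this relationship: the numerical derivative of the normal CDF approximately recovers the normal density, and interval probabilities follow from CDF differences.

import numpy as np
from scipy.stats import norm

x = np.linspace(-4, 4, 801)
h = x[1] - x[0]
dF = np.gradient(norm.cdf(x), h)          # numerical derivative of the CDF
print(np.max(np.abs(dF - norm.pdf(x))))   # small; only finite-difference error remains

a, b = -1.0, 2.0
print(norm.cdf(b) - norm.cdf(a))          # P(a <= X <= b) via the CDF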
Point probability
The "point probability" that
is exactly
can be found as
$operatorname\left\{P\right\}\left(X=b\right) = F\left(b\right) - lim_\left\{x to b^\left\{-\right\}\right\} F\left(x\right)$
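For an integer-valued variable such as the binomial, the left limit of F at b is F(b - 1), so the point probability equals the jump of the CDF there. A short sketch (assuming SciPy):

from scipy.stats import binom

n, p, b = 10, 0.3, 4
jump = binom.cdf(b, n, p) - binom.cdf(b - 1, n, p)
print(jump, binom.pmf(b, n, p))  # the two values agree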
Kolmogorov-Smirnov and Kuiper's tests
The Kolmogorov-Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic, as in day of the week. For instance, we might use Kuiper's test to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
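A short sketch (assuming SciPy) of a one-sample test against an ideal distribution and a two-sample test between two empirical samples:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample1 = rng.normal(size=500)
sample2 = rng.normal(loc=0.3, size=500)
print(stats.kstest(sample1, 'norm'))     # sample1 versus the ideal N(0, 1) CDF
print(stats.ks_2samp(sample1, sample2))  # two empirical distributions against each other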
Complementary cumulative distribution function
Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf), defined as
$F_c(x) = \operatorname{P}(X > x) = 1 - F(x).$
In survival analysis, $F_c(x)$ is called the survival function and denoted $S(x)$.
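In SciPy, for example, the survival function is exposed directly as .sf(); a short sketch showing why this can be preferable to computing 1 - F(x) in the far tail:

from scipy.stats import norm

print(norm.sf(3.0), 1 - norm.cdf(3.0))    # essentially equal
print(norm.sf(10.0), 1 - norm.cdf(10.0))  # sf keeps the tiny tail value; 1 - cdf rounds to 0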
Folded cumulative distribution
While the plot of a cumulative distribution often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over,
thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median and dispersion of the distribution or of the empirical results.
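One common way to draw such a plot (a sketch assuming NumPy, SciPy and Matplotlib; the min(F, 1 - F) formulation is one reading of "folding the top half over"):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-4, 4, 801)
F = norm.cdf(x)
plt.plot(x, np.minimum(F, 1 - F))  # peak sits at the median
plt.xlabel('x')
plt.ylabel('folded CDF')
plt.show()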
As an example, suppose X is uniformly distributed on the unit interval [0, 1]. Then the CDF of X is given by
$F(x) = \begin{cases} 0 & : x < 0 \\ x & : 0 \leq x \leq 1 \\ 1 & : 1 < x \end{cases}$
Take another example: suppose X takes only the discrete values 0 and 1, with equal probability. Then the CDF of X is given by
$F(x) = \begin{cases} 0 & : x < 0 \\ 1/2 & : 0 \leq x < 1 \\ 1 & : 1 \leq x \end{cases}$
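Written out as plain Python functions, these two example CDFs are:

def cdf_uniform01(x):
    # X uniform on [0, 1]
    if x < 0:
        return 0.0
    if x <= 1:
        return float(x)
    return 1.0

def cdf_fair_coin(x):
    # X takes the values 0 and 1 with equal probability
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

print(cdf_uniform01(0.25), cdf_fair_coin(0.25))  # 0.25 0.5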
If the cdf F is strictly increasing and continuous, then $F^{-1}(y)$, for $y \in [0, 1]$, is the unique real number x such that $F(x) = y$.
Unfortunately, the distribution does not, in general, have an inverse. One may define, for $y \in [0, 1]$,
$F^{-1}(y) = \inf \{ x \in \mathbb{R} : F(x) \geq y \}.$
Example 1: The median is $F^{-1}(0.5)$.
Example 2: Put $\tau = F^{-1}(0.95)$. Then we call $\tau$ the 95th percentile.
The inverse of the cdf is called the quantile function.
The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions. Some useful properties of the inverse cdf are:
1. $F^{-1}$ is nondecreasing
2. $F^{-1}(F(x)) \leq x$
3. $F(F^{-1}(y)) \geq y$
4. $F^{-1}(y) \leq x$ if and only if $y \leq F(x)$
5. If $Y$ has a $U[0, 1]$ distribution then $F^{-1}(Y)$ is distributed as $F$; this is the basis of inverse transform sampling (see the sketch below)
6. If $\{X_\alpha\}$ is a collection of independent $F$-distributed random variables defined on the same sample space, then there exist random variables $Y_\alpha$ such that $Y_\alpha$ is distributed as $U[0, 1]$ and $F^{-1}(Y_\alpha) = X_\alpha$ with probability 1 for all $\alpha$.
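Property 5 underlies inverse transform sampling. A minimal sketch (assuming NumPy): for the exponential distribution with rate lam, F(x) = 1 - exp(-lam*x), so F^{-1}(y) = -ln(1 - y)/lam, and feeding uniform draws through F^{-1} produces exponential samples.

import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
u = rng.uniform(size=100_000)
samples = -np.log(1 - u) / lam   # F^{-1} applied to U[0, 1] draws
print(samples.mean())            # close to the theoretical mean 1/lam = 0.5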
Multivariate Case
When dealing simultaneously with more than one random variable, the joint cumulative distribution function can also be defined. For example, for a pair of random variables X, Y, the joint CDF is given by
$(x, y) \mapsto F(x, y) = \operatorname{P}(X \leq x, Y \leq y),$
where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.
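An empirical joint CDF evaluated at a point (x, y) is simply the fraction of observed pairs with X <= x and Y <= y; a minimal sketch (assuming NumPy):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=10_000)
Y = rng.normal(size=10_000)

def empirical_joint_cdf(x, y):
    return np.mean((X <= x) & (Y <= y))

print(empirical_joint_cdf(0.0, 0.0))  # close to 0.25 for independent N(0, 1) pairs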
|
{"url":"http://www.reference.com/browse/cumulative","timestamp":"2014-04-24T02:49:17Z","content_type":null,"content_length":"84744","record_id":"<urn:uuid:a32486aa-c3e7-45dd-9d8f-a623ee441c85>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mosaicking Tutorial
The original tutorial has been modified with the author's permission.
Keywords: Matlab, Image Mosaicking, Mosaicking, Image Stitching
These 2 photos depict examples of a scene that can be stitched together in Matlab. The big picture problem is to mosaic them together correctly. Solving this partially or completely is important
because it will allow command and control (C2) teams to assess a disaster situation more accurately and quickly. This tutorial shows you how to stitch images together in Matlab and takes
approximately 1 hour to complete.
Motivation and Audience
This tutorial has been created to help you mosaic images using Matlab. It assumes the reader has the following background and interests:
□ Know how to use Matlab and modify m files
□ Also know how to use the help command in Matlab
□ Knowledge of Photoshop, Paint, Microsoft Photo Editor or similar programs is a plus
□ Desire to stitch together images to gain full view of a scene
□ In addition, would like to learn more about Matlab commands and scripts
The rest of the tutorial is presented as follows:
Parts List and Sources
US-based vendors from which to obtain the material needed to complete this tutorial are listed in the table below.
To complete this tutorial, you'll need the following items
┃ Software │ Vendor    │ Version │ Price │ Qty ┃
┃ Matlab   │ Mathworks │ 5.X     │ $500  │ 1   ┃
This section gives step-by-step instructions along with photos on how to mosaic images in Matlab. The photos at the beginning of this page can be used as examples.
1. Create a folder on your desktop called Mosaic.
8. Use the mouse again to click on the second image. Be sure to select the points on the same order you did the first ones.
9. Review the mosaicked image in Figure No. 2. If the image is distorted, at odd angles, or it is not satisfactory in general, run the script one more time.
10. Go to File > Save As, and save the image as *.jpg
11. Open the image with the software of your choice (Photoshop, Paint, or any other) and make any changes if desired.
12. Repeat steps 4 through 11 to mosaic two more images. You could use the image you just created to add one more sequence to it.
The source code to image mosaicking in Matlab is provided below:
To be compiled with Matlab Editor/Debugger
Note: download the MosaicKit rather than cutting and pasting from below.
Use unzipping software to unzip the file. All the necessary files are included.
Example of the mosaic.m script for mosaicking images in Matlab:
I1 = double(imread('left.jpg')); % loads image 1
[h1 w1 d1] = size(I1);
I2 = double(imread('right.jpg')); % load images 2
[h2 w2 d2] = size(I2);
figure; % creates a new figure window
subplot(1,2,1); % divides current figure into 1 (row) by 2 (columns) and focuses
                % on the 1st subfigure
image(I1/255); % displays the image
axis image; % displays the axis on the image
hold on;
title('first input image');
[X1 Y1] = ginput2(2); % gets two points from the user
subplot(1,2,2); % takes the previous figure window and focuses
                % on the 2nd subfigure
image(I2/255); axis image; hold on;
title('second input image');
[X2 Y2] = ginput2(2); % get two points from the user
% estimate parameter vector (t)
Z = [ X2' Y2' ; Y2' -X2' ; 1 1 0 0 ; 0 0 1 1 ]';
xp = [ X1 ; Y1 ];
t = Z \ xp; % solve the linear system
a = t(1); % = s cos(alpha)
b = t(2); % = s sin(alpha)
tx = t(3);
ty = t(4);
% construct transformation matrix (T)
T = [a b tx ; -b a ty ; 0 0 1];
% warps incoming corners to determine the size of the output image (in to out)
cp = T*[ 1 1 w2 w2 ; 1 h2 1 h2 ; 1 1 1 1 ];
Xpr = min( [ cp(1,:) 0 ] ) : max( [cp(1,:) w1] ); % min x : max x
Ypr = min( [ cp(2,:) 0 ] ) : max( [cp(2,:) h1] ); % min y : max y
[Xp,Yp] = ndgrid(Xpr,Ypr);
[wp hp] = size(Xp); % = size(Yp)
% do backwards transform (from out to in)
X = T \ [ Xp(:) Yp(:) ones(wp*hp,1) ]'; % warp
% re-sample pixel values with bilinear interpolation
clear Ip;
xI = reshape( X(1,:),wp,hp)';
yI = reshape( X(2,:),wp,hp)';
Ip(:,:,1) = interp2(I2(:,:,1), xI, yI, '*bilinear'); % red
Ip(:,:,2) = interp2(I2(:,:,2), xI, yI, '*bilinear'); % green
Ip(:,:,3) = interp2(I2(:,:,3), xI, yI, '*bilinear'); % blue
% offset and copy original image into the warped image
offset = -round( [ min( [ cp(1,:) 0 ] ) min( [ cp(2,:) 0 ] ) ] );
Ip(1+offset(2):h1+offset(2),1+offset(1):w1+offset(1),:) = I1;
% show the result
figure; image(Ip/255); axis image;
title('mosaic image');
mosaic.m Code Description
The mosaic.m script operates as follows:
First, it converts the values of the images from unsigned int to double. This is done because most of the MATLAB operations use double as default type. Then, it asks for the points to be matched.
Once it has this information it performs a transformation based on an estimate. The value of each pixel is then chosen using a bilinear interpolation. Finally, the mosaicked image is displayed.
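To spell out the algebra the script implements (using the same symbols as the code, where a = s cos(alpha) and b = s sin(alpha)): each clicked point (x2, y2) in the second image is assumed to map to the matching point (x1, y1) in the first image by a similarity transform,

$x_1 = a\,x_2 + b\,y_2 + t_x, \qquad y_1 = -b\,x_2 + a\,y_2 + t_y,$

or, in homogeneous coordinates, by the matrix built in the code,

$T = \begin{pmatrix} a & b & t_x \\ -b & a & t_y \\ 0 & 0 & 1 \end{pmatrix}.$

The two point pairs supply four linear equations in the four unknowns (a, b, tx, ty), which is exactly the system Z t = xp that the script solves with the backslash operator.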
Final Words
The objective of this tutorial was to teach the reader how to stitch together images by mosaicking them in Matlab. We have created some more examples to give you a better idea of what you can do
with it. By now you should be able to mosaic any two images.
To give you an idea about the applications of the system, it could be utilized for disaster mitigation, by providing just one image of all the affected zones. It could also be used to create road maps, or to monitor changes in a given area.
|
{"url":"http://www.pages.drexel.edu/~sis26/MosaickingTutorial.htm","timestamp":"2014-04-17T15:40:43Z","content_type":null,"content_length":"17985","record_id":"<urn:uuid:d1d2bbfe-5253-4bed-9961-a66a963eca07>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
|