Patent US5966262 - Method and apparatus for high data rate detection for three dimensional 110 channels The present invention is related to an application entitled "METHOD AND APPARATUS FOR THREE DIMENSIONAL SEQUENCE ESTIMATION IN PARTIALLY CONSTRAINED BINARY CHANNELS", which is assigned to a common assignee and filed on a date even herewith. The related application is incorporated by reference in this application. The present invention relates in general to information storage systems and in particular to a method and apparatus for implementing three-dimensional detection of binary signals in a constrained response magnetic recording system. Computers often include information storage systems having media on which data can be written and from which data can be read for later use. One form of information storage system is a disk drive. Disk drives use various types of media. One common form is a drive unit incorporating one or more disks having a magnetic coating on the surface of the disk. Data is recorded in tracks, which are subsections of the magnetic coating on the surface of the disk. Transducers are used to magnetize portions of the magnetic surface in a write operation. In digital magnetic recording systems, data are stored by writing a sequence of magnets with alternating polarity onto a medium using a write head (transducer). The binary ones and zeros of the stored data are represented either as the two possible magnetic polarities or as the change or absence of change between them. Information is retrieved using a read head (transducer). The read head and the write head are part of what is known as a data channel. The data channel handles data from a source, such as the computer, and writes it to a disk on a disk drive where it is stored for later retrieval. The data channel also handles reading individual magnetic transitions from the disk to retrieve the data previously stored. 
The data channel manipulates the readback signal produced by the read head (transducer) to produce a representation of the data previously stored. It should be noted that the read head and write head may be two separate transducers or may be a single transducer used for both reading and writing data. The read head is configured such that a change in the polarity of the magnetization pattern results in a non-zero amplitude value in the transducer output. Because the read heads used in storage devices have a limited bandwidth, the response of the read transducer to a change in magnetization is a pulse with a non-zero width. Linear density is the number of bits of data that can be stored in a unit length of a track on the media. At linear densities of interest in present and future storage devices, the transition response pulse is sufficiently wide to add non-zero amplitude components to the signal in adjacent bit periods. This mechanism is known as intersymbol interference (ISI). For reasons of convenience and practicality, the channel noise is assumed to be additive white Gaussian noise (AWGN). Gaussian or normal distributions are well known statistical models that accurately describe the thermal and electronics noise produced by the resistive component of the read transducer and the electronics of the preamplifier required to amplify the transducer output to usable levels. In terms of random variables, the term white indicates that observations of the random variable are uncorrelated; observations of a correlated random variable are said to be colored. Additive noise indicates that summing the noiseless signal s(t) and noise n(t) produces the received signal r(t)=s(t)+n(t), which implies that the signal and noise are uncorrelated. A detector is also part of the data channel. 
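The additive white Gaussian noise model described above, r(t)=s(t)+n(t), can be sketched in discrete time as follows. This is a minimal illustration; the function name `received_signal` and the noise level are assumptions for the example, not part of the invention.

```python
import random

def received_signal(s, sigma=0.1, seed=0):
    """Additive white Gaussian noise model: r_k = s_k + n_k.

    The noise samples are independent (white) and Gaussian, and are
    simply summed with the noiseless signal samples, as described above.
    """
    rng = random.Random(seed)
    return [sk + rng.gauss(0.0, sigma) for sk in s]

s = [1.0, -1.0, 1.0, 1.0]          # noiseless signal samples
r = received_signal(s, sigma=0.05)  # noisy read-signal samples
```

With `sigma=0` the received samples equal the noiseless samples, which makes the purely additive nature of the model explicit.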
The detector is the portion of the read channel that determines if a particular bit has a value of "1", which indicates a magnetic polarity in a first direction, or a "0", which indicates a magnetic polarity in a second direction. In earlier recording systems, data bits were detected by making sample-by-sample decisions with a peak detection circuit, which made necessary the use of runlength limited (RLL) codes. This type of detector is called a peak detector. In terms of ISI, the function of the RLL code was to guarantee a minimum spacing between transitions so that the ISI components from adjacent transitions did not reduce the amplitude of the present transition below the detection threshold. Unfortunately, this technique is inefficient because it ignores the information content in the adjacent samples. A second general type of detector is known as a sequence detector. Sequence detectors take advantage of the ISI terms by examining adjacent samples before making a decision. The Viterbi algorithm (VA) is an efficient means for implementing the maximum likelihood sequence detector (MLSD), which chooses the sequence that most likely produced the received or read signal. The complexity of the VA detector increases exponentially as the number of ISI terms increases. One type of magnetic storage device in use today overcomes this limitation by shaping the channel to produce partial response signals that have a predetermined and limited number of ISI terms. After shaping the channel response, a Viterbi algorithm is used to perform maximum likelihood sequence detection. This type of sequence detector is known as a Partial Response Maximum Likelihood (PRML) detector. PRML detectors are commonly used in current information storage systems, such as disk drives. Another sequence detector is fixed delay tree search with decision feedback (FDTS/DF). 
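As an illustration of maximum likelihood sequence detection, the following is a minimal sketch of the Viterbi algorithm for a hypothetical two-tap ISI channel with response f = (1, 1). The function name and channel choice are assumptions for illustration only, not the detector of the invention.

```python
def viterbi_mlsd(y, f=(1.0, 1.0)):
    """Maximum likelihood sequence detection (Viterbi algorithm) for a
    hypothetical two-tap ISI channel z_k = f[0]*a_k + f[1]*a_{k-1},
    with inputs a_k in {-1, +1}.  The trellis state is the previous bit.
    """
    metric = {-1: 0.0, +1: 0.0}   # accumulated squared-error path metrics
    path = {-1: [], +1: []}       # survivor sequence per state
    for yk in y:
        new_metric, new_path = {}, {}
        for cur in (-1, +1):      # candidate current input (new state)
            best, best_prev = float('inf'), None
            for prev in (-1, +1):  # previous state
                z = f[0] * cur + f[1] * prev       # noiseless channel output
                m = metric[prev] + (yk - z) ** 2   # path + branch metric
                if m < best:
                    best, best_prev = m, prev
            new_metric[cur] = best
            new_path[cur] = path[best_prev] + [cur]
        metric, path = new_metric, new_path
    return path[min(metric, key=metric.get)]       # best final survivor
```

For the noiseless observation sequence produced by inputs {+1,-1,-1,+1} (with an initial -1), the detector recovers the input sequence exactly. The exponential complexity noted above appears here as the number of trellis states, which doubles with each additional ISI term.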
The FDTS/DF sequence detector performs sequence detection over a finite number of samples, generally less than the number of ISI terms. The unused ISI terms are subtracted by means of a nonlinear filter structure known as a decision feedback equalizer, which matches the shape of the unused tail of the channel response. Although FDTS/DF does not perform as well as the VA detector, it can outperform PRML schemes at high linear densities because the channel does not need to be shaped into a partial response form. An additional benefit of FDTS/DF is that the delay between the detector input and the decision is fixed and equal to the number of additional samples used in the decision process, i.e., if a total of three observation samples are used, the detector delay is τ=2. Current channel detectors have shortcomings. One of the shortcomings is the inability to handle the higher data rates which are demanded by computer manufacturers. In response to the demand for increased data rate, there have been attempts to provide higher data rate detectors. For example, one attempt at a higher data rate detector is the use of a PRML detector with multiple inputs. The problem with the resulting higher data rate channel is that its performance suffers when compared to an optimum detector; the result is a complex detector with limited ability to perform at higher data rates. There is, therefore, a need for a high data rate detector for use in the read portion of the data channel. The present invention pertains to a data channel for detecting binary symbols from a received signal occurring at high data rates. A method and apparatus are provided for a high data rate detector which has a first portion and a second portion. In a first preferred embodiment, the data channel includes a detector that has two inputs. The two inputs are pipelined into the detector. The detector has a first portion which determines a first estimate of a binary input. 
The second portion, operating in parallel with the first portion, determines two conditional estimates for a second binary input. The estimate for the second binary input is selected after the first estimate is determined. The first and second estimates for the first and second binary inputs are then output from the detector. Advantageously, the high data rate detector operates at half the clock speed of the data read channel in which the detector is used. Each of the first and second portions of the high data rate detector uses a three dimensional observation space with orthogonal coordinate axes. Each of three consecutive synchronous observation samples of the received signal corresponds unambiguously to an axis in the observation space. A decision feedback equalizer removes intersymbol interference terms associated with prior detector outputs. Each detector portion uses a plurality of linear classifiers to partition the observation space. Boolean logic functions in the first portion of the high data rate detector decide into which decision region of the observation space a first sample maps. Once this is determined, the first estimate is used to select one of two conditional estimates for the second binary input that have been determined in parallel with the first estimate. The first and second estimates are then output from the detector. In a first embodiment, there are two inputs to the detector. In a second embodiment, there are four inputs to the detector. Advantageously, the detectors described herein have increased performance at higher user bit densities when compared to other sequence detectors. These and various other features as well as advantages which characterize the present invention will be apparent upon reading of the following detailed description and review of the associated drawings. 
In the following detailed description of the preferred embodiment, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. FIG. 1 is a schematic and block diagram of an information handling system embodying the present invention. An information handling system 100 includes a data storage medium 112 and an interface control unit 114. In the preferred embodiments of this invention the data storage medium 112 comprises a rigid magnetic disk drive 115, although it should be readily understood that any other mechanically moving memory configuration may be used. The magnetic disk drive 115 is illustrated in simplified form sufficient for an understanding of the present invention. The utility of the present invention is not limited to the details of a particular information handling system or to the specific disk drive shown. Now referring to both FIGS. 1 and 2, the disk drive 115 portion of the information handling system 100 will be detailed. FIGS. 1 and 2 show the principal electrical and mechanical components of a disk drive 115 constructed in accordance with a preferred embodiment of this invention. The disk drive 115 includes a head/disk assembly, which includes a base 122 and a cover 124. Attached to the base 122 is a spindle with an attached hub 126. Attached to the spindle with an attached hub 126 is a stack 116 of disks 118. It should be noted that some disk drives have a single disk and that this invention is equally applicable to a single disk version of a disk drive. The stack 116 of disks 118 includes at least one magnetic surface 120. Also attached to the base is a spindle motor (not shown) which rotates the spindle with an attached hub 126 and the disks 118 mounted to the hub 126. 
An actuator 134 includes arms 132 that carry transducers 128 in transducing relation to one of the disks. A portion of an actuator motor 139 is attached to the actuator 134 and positions one or more transducers 128 to different radial positions relative to one or more magnetic surfaces 120 of the disk 118. The disk drive 115 also includes a data channel 300. A schematic or block diagram of the data channel 300 is shown in FIG. 3. The data channel 300 includes a write channel 310, a physical channel 320 and a read channel 330. The write channel 310 includes an encoder 312, a precoder 314, and write precompensation circuitry 316. The encoder 312 and the precoder 314 implement a Maximum Transition Run ("MTR") code. The particular MTR code used limits the maximum number of consecutive transitions to 2. It should be noted that the code used is not limited to an MTR code. The precompensation circuitry 316 changes the signal to be written to the disk so that it can be retrieved more easily from the disk by the read head and read channel 330. The physical channel 320 includes a write head 322, a disk 118, and a read head 324. The read head 324 and write head 322 are also known as transducers. In some instances, the read head and the write head may be the same transducer. The write head 322 includes a coil. When a write current is passed through the write head 322 in one direction, the magnetic surface of a disk 118 is magnetized in a first direction. When a write current is passed through the write head 322 in the opposite direction, the magnetic surface of a disk 118 is magnetized in a second direction. Normally, the disk 118 maintains its magnetized state until the area of the disk is rewritten or remagnetized. The read head 324 produces a signal based on the magnetized state of the disk 118 below the read head 324. The signal from the read head 324 is passed into the read channel 330. 
The read channel 330 includes a filter 332, a sampler 334, an equalizer 336, a detector 338 and a decoder 339. The filter 332 filters out unwanted portions of the signal from the read head. Samples are taken at the sampler 334. The sampled signal is equalized at the equalizer 336. The output of the equalizer 336 is input to a detector 338 which estimates the value of various sampled portions. The output of the detector 338 is passed through a decoder 339 to produce the read back user data. The input to the channel is a coded binary signal representing data to be stored or transmitted. In addition to compression and error correction codes often employed in communications and storage channels, a modulation code is typically employed to suppress unwanted patterns or to introduce certain desirable characteristics in the channel input data. For more information on general coding techniques used in data storage, see P. H. Siegel et al., "Modulation and coding for information storage," IEEE Transactions on Communications, December 1991, pp. 68-86. A specific code constraint that will be employed with the invention to yield a reduction in error rate is the maximum transition run (MTR) code described in J. Moon et al., "Maximum transition run coding for data storage systems," IEEE Transactions on Magnetics, September 1996, pp. 3992-3994. The MTR code with constraint j=2 limits the maximum number of consecutive transitions in the recorded data to two. FIG. 4 is a schematic diagram of a read signal and some of the operations performed thereon according to the present invention. Referring to FIG. 4, the modulated input signal a.sub.k is convolved with the channel response h(t), depicted by box 402, and combined with additive noise n(t), shown as adder 404, to produce a continuous time read signal r(t). Typically, the read signal is processed by a continuous time filter g(t), such as filter 332, to produce a bandlimited signal s(t) prior to sampling. 
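The MTR j=2 constraint described above can be checked with a short routine. The helper name `satisfies_mtr` is hypothetical; the input is the sequence of ±1 recorded levels, where a transition is any change of level between adjacent bit cells.

```python
def satisfies_mtr(levels, j=2):
    """True if the +/-1 level sequence contains no run of more than j
    consecutive transitions (the maximum transition run constraint)."""
    run = 0
    for prev, cur in zip(levels, levels[1:]):
        if cur != prev:       # a magnetic transition between bit cells
            run += 1
            if run > j:
                return False
        else:
            run = 0           # a non-transition resets the run
    return True
```

For example, the level sequence +1, -1, +1, +1 contains two consecutive transitions and is legal under j=2, while +1, -1, +1, -1 contains three and is prohibited.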
The signal is sampled at constant or near constant time intervals T that are synchronous with the input signal a.sub.k. The sampling is done at sampler 334. Although timing recovery is necessary for the proper operation of the detector, it is not an explicit component of the invention. Discussions on timing recovery may be found in K. H. Mueller et al., "Timing recovery in digital synchronous systems," IEEE Transactions on Communications, pp. 516-531, May 1976. The effective channel response, which is the combined response of the channel and the receiver filters, is specified by some shaping constraint subject to a minimization of noise. Although this operation may be combined with g(t), it is generally performed by a separate, discrete time filter C(D), such as the equalizer 336 shown in FIG. 3. The result of filtering the synchronous samples with C(D) is z.sub.k, which can be written as z.sub.k =.SIGMA..sub.i=0.sup.l-1 f.sub.i a.sub.k-i +v.sub.k (1) where {f.sub.i } is the effective channel response and v.sub.k is the channel noise. The length of the effective response is l, which depends on the actual channel response h(t) and the filters. In a sampled data system, the component of the equalized channel response which multiplies the present input is referred to as the cursor term, and the samples which are multiplied by past inputs are referred to as the postcursor terms. Typically, the cursor term is the first non-zero component of the sampled equalized response. Although the precursor (future) terms may take non-zero values under some equalization design criteria, their magnitude is assumed to be negligibly small relative to the cursor term for practical systems of interest. When l>τ, a feedback filter B(D), depicted by reference number 410, is used to cancel the known postcursor ISI terms, yielding an observation y.sub.k that is a function of the unknown channel inputs and the filtered noise as indicated in (1). The postcursor ISI terms are removed at a summer 412 which is positioned before the detector 338. 
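A minimal sketch of the equalized channel model of equation (1) and the postcursor cancellation just described, assuming correct past decisions (the high-SNR assumption noted below) and using hypothetical helper names:

```python
def channel_output(a, f):
    """Noiseless equalized channel: z_k = sum_i f[i] * a[k-i], as in (1)."""
    return [sum(f[i] * a[k - i] for i in range(len(f)) if k - i >= 0)
            for k in range(len(a))]

def cancel_postcursor(z, f, decisions, tau=2):
    """Decision feedback: subtract the known postcursor terms f[i]*a[k-i]
    for i > tau, using past detector decisions, to form observation y_k."""
    return [zk - sum(f[i] * decisions[k - i]
                     for i in range(tau + 1, len(f)) if k - i >= 0)
            for k, zk in enumerate(z)]

# Hypothetical effective response with one tail tap beyond the three
# cursor/near-cursor terms used by the detector (l = 4 > tau + 1).
f = [1.0, 1.0, 0.0, 0.5]
a = [1, -1, 1, 1]
y = cancel_postcursor(channel_output(a, f), f, a)
```

After cancellation, each observation y_k depends only on the first three response terms, which is exactly the condition the detector of the invention relies upon.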
Conversely, when l≤τ, a feedback filter B(D) is not required. Note that the use of decision feedback implies the detector 338 is operating at high SNR (low error rate) so that the detector outputs are assumed to be correct. The invention pertains to the detector block 338 and includes constraining of the channel response, which requires adjustment of the forward filters g(t), depicted by reference number 332, and C(D), depicted by reference number 336, and the feedback filter B(D), depicted by reference number 410. The detector block includes a signal space detector adjusted to suit the chosen channel constraint along with any applicable modulation code constraint. Signal space detection employs an array of linear classifiers combined with a Boolean logic rule to form decision regions in an m-dimensional space. For the m=3 dimensional case relevant to the invention, the classifiers indicate on which side of a plane in the observation space the input lies. Under the assumption of additive white Gaussian noise, the plane is the locus of points equidistant from the two symbols to be differentiated. For two symbols p=(p.sub.0, p.sub.1, p.sub.2) and q=(q.sub.0, q.sub.1, q.sub.2) in the three dimensional space, the closest symbol to the set of observations y.sub.k =(y.sub.k, y.sub.k-1, y.sub.k-2) is determined using γ.sub.k =sgn{w.sub.0 y.sub.k +w.sub.1 (y.sub.k-1 -f.sub.2 a.sub.k-3)+w.sub.2 (y.sub.k-2 -f.sub.1 a.sub.k-3 -f.sub.2 a.sub.k-4)-θ} (3) where w.sub.i =2κ(p.sub.i -q.sub.i) (4) θ=κ(.SIGMA..sub.i p.sub.i.sup.2 -.SIGMA..sub.i q.sub.i.sup.2) (5) and κ is any real number greater than zero. The sign function sgn{.cndot.} produces a positive logic value for an argument greater than or equal to zero and a negative logic value otherwise. Because this operation is only concerned with the argument's sign, any κ>0 is permissible. In the preferred embodiments, κ is adjusted to reduce the number of required gain terms by scaling one or more to unity. A number of linear classifiers sufficient to partition the observation space into appropriate decision regions are operated in parallel. 
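For a pair of symbols whose past ISI has already been removed, the classifier of equations (3)-(5) reduces to the hyperplane test sketched below. The function name is illustrative; the weight and threshold formulas follow (4) and (5), so the test agrees with a nearest-symbol (minimum Euclidean distance) decision under AWGN.

```python
def linear_classifier(y, p, q, kappa=0.5):
    """Return +1 if observation y is closer to symbol p than to symbol q.

    Implements gamma = sgn{w . y - theta} with w_i = 2*kappa*(p_i - q_i)
    and theta = kappa*(|p|^2 - |q|^2): the separating hyperplane is the
    locus of points equidistant from p and q.
    """
    w = [2 * kappa * (pi - qi) for pi, qi in zip(p, q)]
    theta = kappa * (sum(pi * pi for pi in p) - sum(qi * qi for qi in q))
    return 1 if sum(wi * yi for wi, yi in zip(w, y)) - theta >= 0 else -1
```

Expanding w·y − θ gives κ(|y−q|² − |y−p|²), so the sign test is exactly the distance comparison, and any κ>0 leaves the decision unchanged, as stated above.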
The outputs from the classifiers are combined by means of a Boolean logic circuit to form an output term a.sub.k-2 that is a maximum likelihood estimate of a.sub.k-2 given y.sub.k, the previous outputs, and the delay constraint that the decision be made prior to knowledge of subsequent observations. The invention incorporates an equalization constraint that specifies f.sub.1, f.sub.2, or both relative to f.sub.0. Although these terms could be fixed at any value, the preferred embodiments employ constraints that simplify the apparatus of the detector according to some criterion. Specific examples of useful criteria include the reduction of summing nodes, multipliers, sample storage elements, number of inputs to a circuit node, and number of outputs from a circuit node. In accordance with the present invention, a signal space detector is formulated for the channel constraints f.sub.0 =f.sub.1 and f.sub.2 =0. This constraint is described by the label DFE-110, which indicates that the first three samples of the normalized effective channel response are 1, 1, and 0, and any remaining terms are eliminated via decision feedback equalization. If there are no terms beyond 1, 1, 0, then no decision feedback is required. The apparatus of a three dimensional signal space detector incorporating this constraint is referred to as a 3D-110 detector. Now referring to FIGS. 5-9, the partitioning of the observation space into appropriate decision regions by a number of linear classifiers operating in parallel will be further detailed. Considering all possible combinations of {a.sub.k, a.sub.k-1, a.sub.k-2 }, the symbol coordinates in the observation space where past ISI terms have been removed are given in the table shown in FIG. 5. The resulting signal space is shown in FIG. 6, where the `X` symbols correspond to a.sub.k-2 =1 and the `O` symbols to a.sub.k-2 =-1, and where the symbol labels correspond to the values of Ψ in FIG. 5. 
In any case where f.sub.2 =0, removal of known ISI terms will not affect y.sub.k-1, whereas y'.sub.k-2 =y.sub.k-2 -f.sub.1 a.sub.k-3. In this constellation, the closest pair of symbols corresponding to different decisions are the symbols labeled 2 and 5, which are produced by the inputs {+1,-1,+1} and {-1,+1,-1}, respectively. Because the detector performance will improve with an increase in the minimum distance between the two symbol sets, an MTR code is employed to give a distance gain. The maximum transition run parameter is selected to be j=2, which prohibits input sequences containing three or more consecutive transitions. Examination of symbols 2 and 5 indicates that symbol 2 is invalid for an input of -1, and symbol 5 is invalid if the input is +1. With the MTR j=2 code, the constellation assuming a previous decision of +1 is shown in FIG. 7. For the other input, symbol 2 would be absent with symbol 5 present. The quotient of the new minimum distance to the old is √2, which yields a significant distance gain of 3 dB. The linear classifiers are chosen by considering all possible pairs of symbols, one of each from the two symbol sets corresponding to the binary input. This process will yield redundant boundaries, which are removed. The signal space and relevant boundaries for the two possible inputs can be conveniently illustrated by rotating the three dimensional space so that the y'.sub.k-2 axis is out of the plane of the paper toward the reader, as shown in FIG. 8 for a.sub.k-3 =-1 and in FIG. 9 for a.sub.k-3 =+1. Assuming that κ is chosen to be 0.5 for the normalized channel response, the classifiers for the boundaries in FIGS. 
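The 3 dB figure quoted above follows directly from the distance ratio, since detection gain in dB is 20·log10 of the ratio of minimum distances; the helper name below is illustrative:

```python
def distance_gain_db(d_new, d_old):
    """Detection SNR gain in dB from an increase in minimum distance:
    gain = 20 * log10(d_new / d_old)."""
    import math
    return 20 * math.log10(d_new / d_old)

# The sqrt(2) distance ratio obtained with the MTR j=2 code above:
gain = distance_gain_db(2 ** 0.5, 1.0)   # approximately 3.01 dB
```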
8 and 9 are given by ##EQU3## Because boundary C is used only when a.sub.k-3 =-1 and D is valid only when a.sub.k-3 =+1, these two classifiers can be merged to give E=sgn{-y.sub.k +y.sub.k-2 } (7) The decision region is formed with the Boolean logic expression ##EQU4## where the traditional binary Boolean logic set {0,1} is equivalent to the set {-1,+1} used for a.sub.k-2. FIGS. 8 and 9 show the boundaries A, B, C and D in signal space. The boundaries are depicted as lines labeled with the letter of the boundary in a circle referencing the boundary. The individual inputs from the table shown in FIG. 5 are labeled with the number corresponding to the Ψ value. As mentioned above, boundaries C and D can be merged to form boundary E. FIG. 10 shows one preferred embodiment of a detector apparatus 1000 for implementation of this method of sequence detection. In the detector apparatus 1000, the multiplexer inputs are generalized for any κ>0. The detector apparatus 1000 includes two delay elements 1002 and 1004 which produce the terms y.sub.k-1 and y.sub.k-2. The detector apparatus 1000 also has three summing nodes, 1010, 1012, and 1014. The detector apparatus 1000 also has a first multiplexer 1040 and a second multiplexer 1042. The first multiplexer 1040 and the second multiplexer 1042 serve to eliminate one of the possible outputs for a.sub.k-2 and also remove an ISI term that is beyond the three used by the detector 1000. The possible output for a.sub.k-2 which can be eliminated is dependent on the value of a.sub.k-3. A third delay element 1006 is used to capture a.sub.k-3. The value of term a.sub.k-3 is used to eliminate one of the potential points in signal space, as can be seen in FIGS. 8 and 9. The blocks to the right of the summing nodes 1010, 1012, and 1014 are single bit quantizers 1020, 1022, and 1024 that produce a positive logic value for an input greater than or equal to zero and a negative logic value for an input less than zero. 
The outputs of the single bit quantizers 1020, 1022, and 1024 correspond to the linear classifiers E, B, and A. The classifier outputs are then placed through an AND logic gate 1030 and through an OR gate to determine the estimated value of a.sub.k-2, which is denoted a.sub.k-2. In some particular circuits, it may be advantageous to replace the addition of the negative threshold before the slicer with a comparator. That is, the operation γ.sub.k =sgn{x.sub.k -θ}, where x.sub.k is the appropriate combination of observations, is replaced with the operation γ.sub.k =+1 if x.sub.k ≥θ and γ.sub.k =-1 otherwise, which describes a comparator with threshold θ. High Data Rate Detection for 3D-110 Channel A series of speed-enhancing modifications to the sample rate 3D-110 detector are now presented. Because of the simplicity of the 3D-110 detector, it is possible to construct a high speed version that, although more complicated than the basic 3D-110 detector, is not excessively complex. The high data rate detector for a three dimensional 110 channel has two portions which operate in parallel. A first portion of the high data rate detector operates very nearly the same as the three dimensional 110 detector discussed above. A second portion of the high data rate detector formulates two conditional decisions, one for each possible value of a.sub.k-3. The output of the second portion of the high data rate detector is dependent on the value of a.sub.k-3 from the first portion of the high data rate detector. The correct one of the two conditional decisions in the second portion of the high data rate detector is selected near the end of the decision period, just after the decision for a.sub.k-3 is finalized by the first portion of the detector. Two decisions can therefore be output from the detector in each decision period. The detector, therefore, only has to operate at half the clock speed of the channel. 
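The two-portion operation just described resembles a speculative, select-by-condition scheme: both conditional estimates are formed in parallel, and a multiplexer picks one once the conditioning decision is known. The sketch below is a behavioral illustration only, with hypothetical names and a stand-in classifier, not the circuit of the invention.

```python
def two_bit_wide_step(decide, y_first, y_second, prev):
    """One decision period of the two-portion detector.

    The first portion decides a_{k-3} from its observation; the second
    portion has already formed both conditional estimates of a_{k-2}
    (one per possible a_{k-3}), and a multiplexer selects between them
    once a_{k-3} is known.  `decide` is a stand-in classifier with
    signature decide(y, prior) -> +1 or -1.
    """
    a_k3 = decide(y_first, prev)               # first portion's decision
    conditional = {+1: decide(y_second, +1),   # second portion, a_{k-3}=+1
                   -1: decide(y_second, -1)}   # second portion, a_{k-3}=-1
    return a_k3, conditional[a_k3]             # multiplexer select

# Illustrative stand-in classifier biased slightly by the prior decision.
decide = lambda y, prior: 1 if y + 0.1 * prior >= 0 else -1
```

Because both conditional estimates are ready before the select signal arrives, two decisions leave the detector per period, which is why the detector can run at half the channel clock speed.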
A parallel version suitable for an analog implementation allows the clock rate to be nearly doubled relative to the original detector. By factoring a 1+D term from the detector boundaries and including it in the equalization, the adders required for a digital VLSI approach could be eliminated. Regardless of the approach, noise correlation resulting from shaping the channel response was shown to be a dominant factor at low densities but a nonfactor for user densities of 2.5 bits/PW50 and higher. So while these methods do not offer an appreciable advantage at low densities, they are well suited to high density storage systems where a detector capable of FDTS/DF performance and high data rates is required. High Data Rate Detection: Two-bit-wide detection FIG. 15 shows a two input high data rate detector 1500. The simplicity of the basic sample-rate detector makes it feasible to construct a high speed version by duplicating the detector and feedback loop. Such a structure would release two decisions (a.sub.k-2 and a.sub.k-3) for every decision period and operate at half the system baud rate. The two input high data rate detector 1500 has a first portion 1502 and a second portion 1504. The first portion 1502 of the detector, which determines a.sub.k-3, has the same form as the sample-rate detector 1000 (shown in FIG. 10); the other decision is a function of a.sub.k-3, which requires additional boundaries. The other decision is made in the second portion 1504 of the detector. The objective is to have two sets of boundaries for the second portion 1504 of the detector 1500, each conditioned on one of the two possible choices for a.sub.k-3, which is estimated in the first portion 1502 of the detector 1500. 
Using subscript 0 to denote the conditional value a.sub.k-3 =+1, and using subscript 1 to denote the conditional value a.sub.k-3 =-1, the required boundaries are A.sub.0 =sgn{y.sub.k-1 +y.sub.k-2 }, A.sub.1 =sgn{y.sub.k-1 +y.sub.k-2 -2f.sub.0 }, B.sub.0 =sgn{y.sub.k-1 +y.sub.k-2 +2f.sub.0 }, (10) B.sub.1 =sgn{y.sub.k-1 +y.sub.k-2 }, E.sub.0 =sgn{-y.sub.k +y.sub.k-2 -f.sub.3 }, and E.sub.1 =sgn{-y.sub.k +y.sub.k-2 +f.sub.3 }, where y.sub.1 =y.sub.k +f.sub.3 a.sub.k-3. The boundaries or linear classifiers are associated with certain lines in the second portion 1504 of the detector 1500. Note that A.sub.0 and B.sub.1 are the same, so they can be replaced by a single equivalent boundary F=A.sub.0 =B.sub.1. The term y.sub.k exists because the top feedback filter, B'(D)=B(D)-f.sub.3, does not have access to a.sub.k-1 until the next period. The clock signal 1506, depicted as φ.sub.2 in FIG. 15, operates synchronously but at half the rate of the system clock. Although this circuit is more complex than the sample rate detector, more effort can be directed toward reducing power consumption at the expense of speed because the two-bit wide architecture will have nearly twice the data rate capability of the sample rate detector. The detector 1500 has two equalized inputs for samples. The first input 1510 is the input for z.sub.k. The second input 1512 is the input for z.sub.k-1. A first feedback filter 1514 removes any additional ISI terms beyond those selected for the sample size. In this case, the first feedback filter 1514 removes any additional samples beyond 3 to form an observation y.sub.k. Similarly, a second feedback filter 1516 removes any additional samples beyond 3 to form observation y.sub.k-1. If ISI terms beyond the observation sample size are not present, the feedback filters 1514 and 1516 are not needed. 
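The conditional boundaries of equations (10) can be evaluated directly as sign tests. The sketch below uses the normalized value f.sub.0 =1 and an illustrative f.sub.3, and makes explicit that A.sub.0 and B.sub.1 coincide (the merged boundary F); the function names are assumptions for illustration.

```python
def sgn(x):
    """Positive logic (+1) for x >= 0, negative logic (-1) otherwise."""
    return 1 if x >= 0 else -1

def conditional_boundaries(y_k, y_k1, y_k2, f0=1.0, f3=0.0):
    """Evaluate the conditional classifiers of equations (10).

    Subscript 0 assumes a_{k-3} = +1; subscript 1 assumes a_{k-3} = -1.
    """
    return {
        'A0': sgn(y_k1 + y_k2),
        'A1': sgn(y_k1 + y_k2 - 2 * f0),
        'B0': sgn(y_k1 + y_k2 + 2 * f0),
        'B1': sgn(y_k1 + y_k2),        # identical to A0: merged boundary F
        'E0': sgn(-y_k + y_k2 - f3),
        'E1': sgn(-y_k + y_k2 + f3),
    }
```

Because A.sub.0 and B.sub.1 share the same argument, only five distinct sign tests are needed per period, matching the five summing-node/quantizer pairs of the second portion described below.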
A first sample and hold 1520 is used to form observation y.sub.k-2, and a second sample and hold 1522 is used to form observation y.sub.k-3. Multiplexer 1524 serves to eliminate one of the possible outputs for a.sub.k-2 and also removes an ISI term that is beyond the three used by the detector 1500. The first portion 1502 of the detector 1500 also has three summing nodes, 1530, 1532, and 1534. The detector apparatus 1500 also has a second multiplexer 1540 and a third multiplexer 1542. The second multiplexer 1540 and the third multiplexer 1542 serve to eliminate one of the possible outputs for a.sub.k-3 and also remove an ISI term that is beyond the three used by the detector 1500. The blocks to the right of the summing nodes 1530, 1532, and 1534 are single bit quantizers that produce a positive logic value for an input greater than or equal to zero and a negative logic value for an input less than zero. The outputs of the single bit quantizers correspond to the linear classifiers E, B, and A. The classifier outputs are then placed through an AND logic gate 1536 and through an OR gate 1538 to determine the estimated value of a.sub.k-3, which is denoted a.sub.k-3. The output a.sub.k-3 from the first portion 1502 of the detector 1500 is input to a fourth multiplexer 1544 so that the proper value for a.sub.k-2 is selected. The second portion 1504 of the detector 1500 includes summing nodes 1550, 1552, 1554, 1556 and 1558. Each of these summing nodes is a three input adder. Additional inputs for the value 2 and the value f.sub.3 are added to the summing nodes to form the boundary conditions set forth in equations (10) above. The value of f.sub.0 is assumed to be one in the equations for boundaries A.sub.1 and B.sub.0. Inverters are used to produce the subtracted values for A.sub.1 and E.sub.0. 
The blocks to the right of the summing nodes 1550, 1552, 1554, 1556 and 1558 are single bit quantizers 1560, 1562, 1564, 1566 and 1568 that produce a positive logic value for an input greater than or equal to zero and a negative logic value for an input less than zero. The outputs of the single bit quantizers 1560, 1562, 1564, 1566 and 1568 correspond to the linear classifiers B.sub.0, E.sub.0, F, E.sub.1, and A.sub.1. Classifier outputs B.sub.0, E.sub.0, and F are then placed through an AND logic gate 1570 and through an OR gate 1572 to determine the first conditional estimated value of a.sub.k-2. Classifier outputs F, E.sub.1, and A.sub.1 are then placed through an AND logic gate 1580 and through an OR gate 1582 to determine the second conditional estimated value of a.sub.k-2. The actual estimate, denoted a.sub.k-2, is selected by multiplexer 1544, which receives the output a.sub.k-3 from the first portion 1502 of the detector 1500. The input a.sub.k-3 is the condition used to pick the proper value of a.sub.k-2. High Data Rate Detection: Four-input, two-output detection Cascading sample-and-hold stages as in the previous structures is unattractive because the analog signal degrades as it moves along the delay line. This shortcoming can be eliminated if a single set of sampling stages is clocked in succession and the outputs directed to the appropriate detector inputs via a rotating switch matrix. With this assumption, the detector can be viewed as a black box receiving four consecutive equalized samples z.sub.k, . . . ,z.sub.k-3 and implementing a function that yields a pair of decisions a.sub.k-2 and a.sub.k-3. There are several possibilities for implementing the feedback filters. Two possible methods would be to: (1) use four RAMs to compute the feedback value and drive separate D/A converters, or (2) use D/A converters for each feedback filter tap and route these terms, via multiplexers, to each of the four summing nodes. FIG.
16 shows a detector 1600 which has two portions 1602 and 1604. FIG. 16 differs from detector 1500 in that four equalized inputs 1610, 1612, 1614 and 1616, which correspond to z.sub.k, z.sub.k-1, z.sub.k-2, and z.sub.k-3, are used rather than two as shown in FIG. 15. Thus, the delayed signals come from the equalizers (not shown) rather than from sample and hold circuits in the detector. A set of feedback filters 1620, 1622, 1624 and 1626 remove unwanted ISI terms from the samples z.sub.k, z.sub.k-1, z.sub.k-2, and z.sub.k-3 to form observations y.sub.k, y.sub.k-1, y.sub.k-2, and y.sub.k-3. These observations are then put through summers, single bit quantizers and logic in the same way as shown in FIG. 15 to yield the two estimates of a.sub.k-3 and a.sub.k-2. The hardware implementations of the first portion 1602 and the second portion 1604 are essentially the same as before and therefore will not be repeated here. The various elements of FIG. 16 also have not been renumbered since reference to FIG. 15 can easily be made. Other High Data Rate Detectors: 1+D Factorization Various delay factors could be added to the equalizer to produce equalized samples that would not require three input adders. The boundary conditions would change. Comparators could be used in lieu of the three input summing nodes and the single bit quantizers shown in FIGS. 15 and 16. This would produce a somewhat simpler and faster detector. Shown and described below are some alterations for simplified basic detectors. It is felt that these same types of modifications could be applied to modify the high data rate detectors 1500 and 1600. Turning now to FIGS. 11-13, another preferred embodiment of the invention will be discussed. Advantageously, in this other preferred embodiment of the present invention, the number of observation samples can be reduced with a corresponding reduction in the classifier complexity by projecting the three dimensional space onto two dimensions.
The end result is a simpler detector apparatus 1100 for the read data channel 330. In the previous embodiment of the 3D-110 detector, boundaries A and B both include the sum y.sub.k-1 +y.sub.k-2. This operation can be performed with the equalizers by placing a 1+D filter in both the forward and feedback filters, which changes the observation to y.sub.k =f.sub.0 a.sub.k +2f.sub.1 a.sub.k-1 +f.sub.2 a.sub.k-2 +v.sub.k (11) where v.sub.k is the earlier noise term colored by the 1+D filter. Design of the filters is performed subject to the constraint that the first three samples of the equivalent channel response normalized by the first non-zero sample are {1, 2, 1}. Rewriting the classifiers for the new observation produces A=sgn{y.sub.k-1 -a.sub.k-3 -1} B=sgn{y.sub.k-1 -a.sub.k-3 +1} (12) E=sgn{y.sub.k-1 -y.sub.k } Because the transformation simply involves moving a portion of the classifier computation from the explicit classifier expression to the equalization blocks, the outputs from the modified classifiers are equivalent to those in the first embodiment and the Boolean logic of (8) may be used without modification. A block diagram of an apparatus suitable for the implementation of the detection method outlined is illustrated in FIG. 11. The detector apparatus 1100 includes a delay element 1102 which holds the sample y.sub.k-1, which is a necessary input to some of the elements of the detector apparatus. This preferred embodiment has advantages compared with the previous detector 1000 shown in FIG. 10 for both analog and digital circuit implementations. For the analog circuit, the number of internal delay (storage) elements required is reduced by one and the number of inputs to the summing nodes is reduced from a maximum of three to two. Since the number of inputs to the summing nodes is two, the summing node and slicer of the detector 1000 can be replaced with comparators 1110, 1112, and 1114.
The outputs from the comparators 1110, 1112, and 1114 are the classifier outputs E, B, and A, respectively. The output of each comparator 1110, 1112, and 1114 is labeled with the linear classifier and corresponding boundary E, B, or A it represents. The classifier outputs are then placed through an AND logic gate 1130 and through an OR gate 1132 to determine the estimate of a.sub.k-2, which is denoted a.sub.k-2. For a digital circuit, a three input adder would be implemented with a cascade of two two-input adders. By moving the 1+D factor to the equalizers, this embodiment eliminates the need for one of these two adders. The equivalent comparator is substituted for the single remaining adder and slicer combination. The detector apparatus 1100 also has a first multiplexer 1140 and a second multiplexer 1142. The first multiplexer 1140 and the second multiplexer 1142 serve to add in the classifier constant and also remove an ISI term that is beyond the three used by the detector 1100. A second delay element 1106 is used to capture a.sub.k-3. The symbols and boundaries on the two dimensional subspace are illustrated in FIG. 12 for a.sub.k-3 =-1 and in FIG. 13 for a.sub.k-3 =+1. Although this embodiment can be derived from the two dimensional subspace projection formed by the 1+D operation, the formulation is three dimensional in the context of the invention. Such equivalence results from the fact that the delay through the detector relative to the current observation sample remains three, with the detector using a three dimensional subspace formulation where the multipliers on the observation sample y.sub.k-2 are selected to be zero. A true two dimensional formulation would produce an output a.sub.k-1 in response to y.sub.k and y.sub.k-1. Therein lies the distinction between a two dimensional detector and the three dimensional detector embodied as a projection onto a lower dimensional subspace. Another preferred embodiment of the present invention is shown in FIG.
14 as detector 1400. In detector 1400, the constraint is relaxed to {1, 1, α} so that the third value, α=f.sub.2 /f.sub.0, is unconstrained. Because the projection of the {1, 1, 0} constraint onto the two dimensional subspace yielded a simpler apparatus, the embodiment of this constraint is considered under the same transformation, i.e., the observation is y.sub.k =f.sub.0 (a.sub.k +2a.sub.k-1 +(1+α)a.sub.k-2)+v.sub.k. (13) The block diagram of an apparatus for performing detection on the MTR j=2 constrained sequence equalized to {1, 1, α} is illustrated in FIG. 14. This embodiment is derived from the previous one simply by subtracting the term αa.sub.k-3 from the observation y.sub.k-1. Note that the two are not identical; in the form considered here, the observation y.sub.k has an additional term αa.sub.k-2. However, this only affects classifier E because A and B use only y.sub.k-1. For α<0, the additional term moves symbols away from boundary E, increasing distance and reducing the probability of error. If the apparatus were to be implemented as a digital circuit, the three input addition for classifier E would increase the number of serial operations and reduce the maximum operating speed. For α<0, the removal of αa.sub.k-3 from classifier E can be ignored, because the resulting loss of distance along the y.sub.k axis in signal space for certain combinations of symbols and a.sub.k-3 is offset by the increase in distance along the y.sub.k-1 axis. To accommodate situations where α would naturally come out positive, the filter design algorithm can be monitored, and in the case where α>0, the design is directed by the constraint α=0, thus constraining the channel to the DFE-110 response. The detector apparatus 1400 includes two delay elements 1402 which produce the term y.sub.k-1. The detector apparatus 1400 also has three summing nodes, 1410, 1412, and 1414.
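The rule described for the {1, 1, α} design, namely leaving α=f.sub.2 /f.sub.0 unconstrained when it is non-positive but redirecting the design to α=0 (the DFE-110 response) when it comes out positive, can be sketched as follows; the function name is illustrative, not from the patent.

```python
def choose_alpha(f0, f2):
    """Pick the third-sample ratio for the {1, 1, alpha} target.

    alpha = f2 / f0 is left unconstrained when it is non-positive
    (a negative alpha only moves symbols away from boundary E),
    but a positive alpha would cost distance, so the design falls
    back to alpha = 0 -- the DFE-110 response.
    """
    alpha = f2 / f0
    return alpha if alpha <= 0 else 0.0
```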
The detector apparatus 1400 also has a first multiplexer 1440, a second multiplexer 1442, and a third multiplexer 1444. The first multiplexer 1440, the second multiplexer 1442, and the third multiplexer 1444 remove ISI terms corresponding to past decisions. The first multiplexer 1440 and the second multiplexer 1442 also add in the classifier constant. The possible output for a.sub.k-2 which can be eliminated is dependent on the value of a.sub.k-3. A second delay element 1406 is used to capture a.sub.k-3. The value of term a.sub.k-3 is used to eliminate one of the potential points in signal space. The blocks to the right of the summing nodes 1410, 1412, and 1414 are single bit quantizers 1420, 1422, and 1424 that produce a positive logic value for an input greater than or equal to zero and a negative logic value for an input less than zero. The outputs of the single bit quantizers 1420, 1422, and 1424 correspond to the boundaries E, B, and A. The boundary conditions are then placed through an AND logic gate 1430 and through an OR gate 1432 to determine the estimated value of a.sub.k-2, which is denoted a.sub.k-2. Although the present invention has been described with reference to preferred embodiments involving MTR j=2 constrained channels equalized to give equivalent channel responses where the first three samples relative to the magnitude of the first are {1, 1, 0} and {1, 1, α}, and where the observation space resulting from these constraints is projected onto a two dimensional subspace by factoring a 1+D term, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention. In summary, this invention relates to a data channel apparatus for estimating binary input symbols received as a read signal from a data channel in an information storage system. The apparatus includes a read channel 330 and a detector 338.
The detector 338 further comprises a first sample input, a second sample input, and a first detector portion 1502 or 1602. The first detector portion 1502 or 1602 determines a first estimate of the first binary input based on the first sample input 1510, 1610. The apparatus also includes a second detector portion 1504 or 1604. The second detector portion 1504 or 1604 determines two conditional estimates of the second binary input 1512, 1612 based on the first sample input and on the second sample input, said second sample input occurring after the first sample input. The first detector portion 1502, 1602 and the second detector portion 1504, 1604 work in parallel: the first determines the first estimate of the first binary input based on a first set of m consecutive samples, and the second determines the two conditional estimates of the second binary input based on a second set of m consecutive samples, one of said two conditional estimates being selected on the basis of the determination of the first estimate. The apparatus has two outputs. One of said outputs, from element 1538, is for the first estimate of the first binary input. The other of said outputs, from element 1544, is for the second estimate of the second binary input. The clock of the detector 1500, 1600 runs at a clock 1506 speed which is half the clock speed of the read channel 330. The samples used by the first portion and the second portion have m-1 samples of said first set of m samples and said second set of m samples in common. The apparatus includes an equalizer 336 for shaping the read signal samples subject to a constraint. A plurality of linear classifiers 1552, 1554, 1556, 1558, 1530, 1532, and 1534 partition an observation space into two decision regions corresponding to one of two binary input symbols. Logic elements 1570, 1572, 1580, 1582, 1536, and 1538 estimate into which of two decision regions a particular sample maps.
The apparatus also includes a feedback filter 1514, 1516 for removing intersymbol interference terms due to past estimates of the binary input symbols from the equalized read signal samples. Another embodiment includes an information storage device 115, comprising a housing 122, 124, disks 118 rotatably attached to said housing 122, said disks having a surface capable of storing binary information, and a read channel 330 for reading stored binary information on the disks 118. The read channel includes a detector 1500, 1600 for estimating the binary input value of a selected sample. The detector further includes a first sample input, a second sample input, a first detector portion and a second detector portion. The first detector portion 1502, 1602 determines a first estimate of the first binary input 1510, 1610 based on the first sample input. The second detector portion 1504, 1604 determines two conditional estimates of the second binary input 1512, 1612 based on the first sample input and on the second sample input, said second sample input occurring after the first sample input. The first detector portion 1502, 1602 and the second detector portion 1504, 1604 work in parallel to determine the first estimate of the first binary input based on a first set of m consecutive samples, and the two conditional estimates of the second binary input based on a second set of m consecutive samples. One of said two conditional estimates is selected on the basis of the determination of the first estimate. In the information storage device, m-1 samples of said first set of m samples and said second set of m samples are identical. The information storage device further comprises two outputs, one of said outputs (from element 1538) for the first estimate of the first binary input, and the other of said outputs (from element 1544) for the second estimate of the second binary input.
The detector 1500, 1600 runs at a clock 1506 speed which is half the clock speed of the read channel 330. The information storage device further comprises a sampler 334 for sampling the read signal, an equalizer 336 for shaping the read signal samples subject to a constraint, a plurality of linear classifiers 1552, 1554, 1556, 1558, 1530, 1532, and 1534 for partitioning an observation space into two decision regions corresponding to one of two binary input symbols, and logic elements 1570, 1572, 1580, 1582, 1536, and 1538 to estimate into which of two decision regions a particular sample maps. The information storage device also includes a feedback filter 1514, 1516, 1524 for removing intersymbol interference terms due to past estimates of the binary input symbols from the equalized read signal samples. The information storage device further comprises a write channel 310 for writing binary information onto the disks 118. The write channel includes an encoder (see FIG. 3) for implementing a maximum transition run code with a constraint limiting the maximum number of consecutive transitions in the recorded data to two. Also described is a method for estimating binary input symbols received as a read signal from a data channel in an information storage system. The method comprises the steps of sampling 334 the read signal, equalizing 336 the read signal samples subject to a shaping constraint, inputting a first sample to a first detector portion 1502, 1602, inputting a first and second sample to a second detector portion 1504, 1604, determining a first estimate of the first binary input in the first detector portion, and determining, in parallel with the step of determining a first estimate, two conditional estimates (from elements 1572 and 1582) of the second binary input. The first estimate of the first binary input is based on a first set of m consecutive samples.
The two conditional estimates (from elements 1572 and 1582) of the second binary input are based on a second set of m consecutive samples in the second detector portion. The second sample input occurs after the first sample input. One of said two conditional estimates is selected based on the determination of the first estimate (element 1544). In the method, m-1 samples of said first set of m samples and said second set of m samples are identical. The method further includes the steps of forming an observation space using a fixed number, m, of consecutive equalized read signal samples, and partitioning the observation space with a plurality of boundaries 1552, 1554, 1556, 1558, 1530, 1532, and 1534 to form decision regions which map to one of two binary input symbols. The step of partitioning the observation space also includes the step of eliminating linear classifiers that are redundant. The method further includes removing intersymbol interference terms 1514, 1516, and 1524 due to past estimates of the binary input symbols from the equalized read signal samples. The step of forming an observation space may result in a three dimensional observation space. The step of equalizing 336 the read signal samples uses a shaping constraint in which at least one of the postcursor samples of the equalized channel response has a selected value relative to the cursor sample used to estimate the binary input. The equalizing 336 step may also include adding a factor of 1+D into the equalizing step.
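The decision-feedback ISI removal that the method attributes to feedback filters 1514, 1516, and 1524 can be sketched generically; the tap values and window length here are placeholders, not values from the patent.

```python
def remove_past_isi(z_k, past_decisions, taps):
    """Form an observation by subtracting decision-feedback ISI.

    z_k            -- current equalized sample
    past_decisions -- estimates of old symbols that fall beyond
                      the m-sample observation window
    taps           -- the equivalent-channel samples with which
                      those old symbols contribute to z_k
    Returns the observation with the old decisions' contribution
    removed, as the feedback filters do for terms beyond three.
    """
    return z_k - sum(f * a for f, a in zip(taps, past_decisions))
```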
It is to be understood that even though numerous characteristics and advantages of various embodiments of the present invention have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts within the principles of the present invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. For example, the particular elements may vary depending on the particular application for the detector while maintaining substantially the same functionality without departing from the scope and spirit of the present invention. In addition, although the preferred embodiment described herein is directed to a detector for an information storage system, it will be appreciated by those skilled in the art that the teachings of the present invention can be applied to other systems, such as communications or other systems, without departing from the scope and spirit of the present invention. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. FIG. 1 is a schematic and block diagram of an information handling system embodying the present invention. FIG. 2 is a top view of an information handling system embodying the present invention. FIG. 3 is a schematic diagram of a data channel for use with the information handling system of FIGS. 1 and 2. FIG. 4 is a schematic diagram of a read signal and some of the operations performed thereon according to the present invention. FIG.
5 is a table showing all possible combinations of {a.sub.k, a.sub.k-1, a.sub.k-2 }, the symbol coordinates, in the observation space where past ISI terms have been removed, and all possible combinations of {y.sub.k, y.sub.k-1, y.sub.k-2 }, the set of observations. FIG. 6 is a graph of signal space showing all possible combinations of {a.sub.k, a.sub.k-1, a.sub.k-2 }, the symbol coordinates, where the `X` symbols correspond to a.sub.k-2 =1 and the `O` symbols to a.sub.k-2 =-1, and where the symbol labels correspond to the values of Ψ in FIG. 5. FIG. 7 is a graph of signal space showing all possible combinations of {a.sub.k, a.sub.k-1, a.sub.k-2 }, the symbol coordinates, assuming a previous decision of +1. FIG. 8 is a graphical view of the relevant boundaries where a.sub.k-3 =-1. FIG. 9 is a graphical view of the relevant boundaries where a.sub.k-3 =+1. FIG. 10 is one preferred embodiment of a detector apparatus for implementation of this invention. FIG. 11 is another preferred embodiment of a detector apparatus for implementation of this invention. FIG. 12 is a graphical view of the relevant boundaries where a.sub.k-3 =-1 for the preferred embodiment of the invention shown in FIG. 11. FIG. 13 is a graphical view of the relevant boundaries where a.sub.k-3 =+1 for the preferred embodiment of the invention shown in FIG. 11. FIG. 14 is another preferred embodiment of a detector apparatus for implementation of this invention. FIG. 15 is one preferred embodiment of a high data rate detector apparatus. FIG. 16 is another preferred embodiment of a high data rate detector apparatus.
Futility Closet 1. Find an expression for the number 1 that uses each of the digits 0-9 once. 2. Do the same for the number 100. 3. Write 31 using only the digit 3 five times. 4. Express 11 with three 2s. 5. Express 10 with two 2s. 6. Express 1 with three 8s. 7. Express 5 with two 2s. A logic exercise by Lewis Carroll. What conclusion can be drawn from these premises? 1. Animals are always mortally offended if I fail to notice them. 2. The only animals that belong to me are in that field. 3. No animal can guess a conundrum unless it has been properly trained in a Board-School. 4. None of the animals in that field are badgers. 5. When an animal is mortally offended, it rushes about wildly and howls. 6. I never notice any animal unless it belongs to me. 7. No animal that has been properly trained in a Board-School ever rushes about wildly and howls. How are the following numbers arranged? 0, 2, 3, 6, 7, 1, 9, 4, 5, 8 This puzzle was devised by W.T. Williams. The goal is to discover the age of Father Dunk’s mother-in-law (2 down), but the clues contain so much cross-reference and the grid so many interlocking solutions that practically the whole puzzle must be completed to find it. The year is 1939. There are 20 shillings in a pound; 4840 square yards in an acre; a quarter of an acre in a rood; and 1760 yards in a mile. 1. Area in square yards of Dog’s Mead 5. Age of Martha, Father Dunk’s aunt 6. Difference in yards between length and breadth of Dog’s Mead 7. Number of roods in Dog’s Mead times 8 down 8. The year the Dunks acquired Dog’s Mead 10. Father Dunk’s age 11. Year of Mary’s birth 14. Perimeter in yards of Dog’s Mead 15. Cube of Father Dunk’s walking speed in miles per hour 16. 15 across minus 9 down 1. Value in shillings per rood of Dog’s Mead 2. Square of the age of father Dunk’s mother-in-law 3. Age of Mary, father Dunk’s other daughter 4. Value in pounds of Dog’s Mead 6. Age of Ted, father Dunk’s son, who will be twice the age of his sister Mary in 1945 7. 
Square of the breadth of Dog’s Mead 8. Time in minutes it takes Father Dunk to walk 4/3 times round Dog’s Mead 9. The number which, when multiplied by 10 across, gives 10 down 10. See 9 down 12. Addition of the digits of 10 down plus 1 13. Number of years Dog’s Mead has been in the Dunk family Hint: Start with 15 across, and keep your wits about you — most of the clues require some sort of insight or intelligent narrowing of the possible solutions; there’s very little mechanical plugging of numbers. With enough careful, dogged reasoning, it’s possible to complete the entire grid, but it’s stupendously hard. By Paud D. Carpenter. White to mate in two moves. By Friedrich Martin Palitzsch. White to mate in two moves. Suppose we have a clock whose hour and minute hands are identical. How many times per day will we find it impossible to tell the time, provided we always know whether it’s a.m. or p.m.? A boat floats in a swimming pool. In the boat is a block of ice. If the block is dumped into the water and melts, does the water level rise or fall? When eight certain cards are removed from a standard poker deck, it becomes impossible to deal a straight flush. What are the cards? (Assume the deck contains no jokers.) By Konrad Erlinger. White to play and mate in two moves. On a 12 × 12 grid, some squares are infected and some are healthy. On each turn, a healthy square becomes infected if it has two or more infected orthogonal neighbors. (In the example above, the black squares are infected, the white squares are healthy, and the gray squares will be infected next turn.) What’s the smallest number of initially infected squares that can spread an infection over the whole board? A more challenging version of the Counterfeit Coin puzzle from 2011: You have 12 coins, one of which has been replaced with a counterfeit. The false coin differs in weight from the true ones, but you don’t know whether it’s heavier or lighter. How can you find it using three weighings in a pan balance?
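The spreading rule in the grid-infection puzzle above is easy to simulate. The sketch below only iterates the stated rule to a fixed point; it does not search for the minimal seed set. For example, seeding the main diagonal of the 12 × 12 board turns out to infect everything, which bounds the answer from above by 12.

```python
def spread(infected, n=12):
    """Iterate the puzzle's infection rule to a fixed point.

    infected -- set of (row, col) squares infected at the start.
    A healthy square becomes infected when two or more of its
    orthogonal neighbors are infected.  Returns the final set.
    """
    infected = set(infected)
    while True:
        new = set()
        for r in range(n):
            for c in range(n):
                if (r, c) in infected:
                    continue
                neighbors = ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if sum(nb in infected for nb in neighbors) >= 2:
                    new.add((r, c))
        if not new:
            return infected
        infected |= new
```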
This, now, is straightforward and business-like: A. applied to B. for a loan of $100. B. replied, ‘My dear A., nothing would please me more than to oblige you, and I’ll do it. I haven’t $100 by me, but make a note and I’ll indorse it, and you can get the money from the bank.’ A. proceeded to write the note. ‘Stay,’ said B., ‘make it $200. I want $100 myself.’ A. did so, B. indorsed the paper, the bank discounted it, and the money was divided. When the note became due, B. was in California, and A. had to meet the payment. What he is unable to cipher out is whether he borrowed $100 of B., or B. borrowed $100 of him. – Henry C. Percy, Our Cashier’s Scrap-Book, 1879 By Jan Hartong. White to mate in two moves. In his 1936 collection Brush Up Your Wits, British puzzle maven Hubert Phillips relates that his brother-in-law felt himself cursed with an unintelligent maid. “I have just overheard her taking a ‘phone call,” he told Phillips, “and this is what I heard: “‘Is Mr. Smith at home?’ “‘I will ask him, sir. What name shall I give him?’ “‘What’s that, sir?’ “‘Would you mind spelling it?’ “‘Q for quagga, U for umbrella, O for omnibus, I for idiot –’ “‘I for what, sir?’ “‘I for idiot, T for telephone. Q, U, O, I, T, Quoit.’ “‘Thank you, sir.’” Why did he accuse her of unintelligence? A newlywed couple are planning their family. They’d like to have four children, a mix of girls and boys. Which is more likely: (1) two girls and two boys or (2) three children of one sex and one of the other? (Assume that each birth has an equal chance of being a boy or a girl.) By William Meredith. White to mate in two moves. A problem from the 1999 St. Petersburg City Mathematical Olympiad: Fifty cards are arranged on a table so that only the uppermost side of each card is visible. Each card bears two numbers, one on each side. The numbers range from 1 to 100, and each number appears exactly once. 
Vasya must choose any number of cards and flip them over, and then add up the 50 numbers now on top. What’s the highest sum he can be sure to reach? By Peder Andreas Larsen. White to mate in two moves. (I’ve added a guide to chess notation.) By Francis Healey. White to mate in two moves. A puzzle by Sam Loyd. The red strips are twice as long as the yellow strips. The eight can be assembled to form two squares of different sizes. How can they be rearranged (in the plane) to form three squares of equal size? By Jørgen Thorvald Møller. White to mate in two moves. A problem from Dick Hess’ All-Star Mathlete Puzzles (2009): A man points to a woman and says, “That woman’s mother-in-law and my mother-in-law are mother and daughter (in some order).” Name three ways in which the two can be related.
From Gold Coins to Cadmium Light The second is defined as the time taken by 9,192,631,770 oscillations of the microwave radio frequency produced by an atom of Cesium-133 when the electrons in that atom are in the ground state, except for one that has emitted this radiation by making the transition from the upper hyperfine level of this state to the lower one. This definition was chosen to make the length of the SI second the same as that of a second of Ephemeris Time, based on the length of the day in 1900, and so when this second began to be used in civil timekeeping (the changeover to "atomic time"), the use of "leap-seconds" became necessary immediately. At one time, the meter was defined as 1,650,763.73 wavelengths of the orange red line in the spectrum of Krypton-86, which corresponded to the radiation emitted by an electron moving from the orbital 2p to the orbital 5d in an unperturbed fashion. However, a scientist proceeded to measure the speed of light by performing an accurate measurement of the ratio between the wavelengths (and/or frequencies) of these two types of radiation. In Zen-like fashion, this convinced those responsible for the standards of the fundamental absurdity of the situation, and so now the definition of the second stands, but the definition of the meter has been replaced; now, the meter derives from the second, through the speed of light, which is, by definition, 2.99792458 * 10^8 meters per second. This leads to the length of a metre being approximately 30.6633189885 wavelengths of the Cesium-133 microwave radiation noted above: the fact that the wavelength is not a tiny fraction of a metre is why it had previously been considered more suitable to a standard of time, for which a radio frequency which can be manipulated by electronics is more accessible than an optical frequency, than one of distance.
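Both defining constants are exact, so the quoted figure of about 30.6633189885 wavelengths per metre is just their ratio, as a quick check shows:

```python
C = 299_792_458        # speed of light in m/s, exact by definition
F_CS = 9_192_631_770   # Cs-133 hyperfine frequency in Hz, exact by definition

wavelength = C / F_CS          # about 3.26 cm -- a microwave, not visible light
waves_per_meter = F_CS / C     # about 30.6633189885, as quoted above
```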
The speed of light in a vacuum can also be given in units of the uniform US/British inch of 2.54 centimeters, which leads to light travelling 186,282 miles, 698 yards, 2 feet, and 5 21/127 inches every second in a vacuum. Note that it is more convenient to measure the length of waves of light through interference fringes, and the time between oscillations of radio waves through electrical circuitry, however. So the dual definitions allowed both time and distance to be more accurately defined. Incidentally, as far back as 1927, a definition of the meter in terms of a light wavelength existed, but that definition was based on a line in the spectrum of cadmium: the length of the meter was defined as 1,553,164.13 times the wavelength of the 6438.4696 Ångström cadmium red line. A. A. Michelson had found the cadmium red line to be particularly monochromatic; he had made a measurement of the International Prototype Metre in 1892 which indicated the length of the meter as 1,553,163.5 times the wavelength of the cadmium red line, the wavelength of which was taken to be 6438.472 Ångström units. The figure used in the 1927 standard of 1,553,164.13 wavelengths per meter corresponds closely to a wavelength of 6438.4696 Ångströms for the cadmium red line; that the cadmium red line is exactly 6438.4696 Ångström units long was adopted as a standard for the definition of the international Ångström unit in 1907, based on measurements in 1906 by Benoit, Fabry, and Perot of the length of the metre in terms of the cadmium red line. Cadmium, however, has eight stable isotopes, the most common of which, Cadmium-114, has a natural abundance of 28.73%, so the accuracy of a standard based on natural cadmium would be limited by the small variations in the wavelengths of the same spectral line between different isotopes. Prior to the use of Krypton-86 for the standard length of the meter, another possibility that was considered was to use Mercury-198.
This isotope of mercury was created in 1946 by bombarding gold with neutrons, as gold, like aluminum, has only one stable isotope, and this method of creating isotopically-pure mercury was simpler than attempting to separate the isotopes of mercury by their minuscule difference in weight. Its 5460.753 Ångström spectral line was the one considered for use as a standard. Also, a current secondary standard for the meter is an iodine-stabilized helium-neon laser, the light from which has a wavelength of 6329.9139822 Ångströms. While Cesium-133 is used in national time standards, Rubidium-87 is often used by television stations - and now in cell phone towers - to calibrate radio frequencies because a Rubidium frequency standard has better long-term stability. While commercial Rubidium frequency calibrators (often available for less than $80 used; new ones run from $1,500) typically output a 10 MHz signal, the actual atomic frequency of the hyperfine transition providing the accuracy in timekeeping is 6,834,682,610.904324 Hertz, according to Wikipedia; 6,834,682,612 Hertz according to the site of one supplier of Rubidium standards. 6.8 GHz is still faster than ordinary computers can be clocked, but half that would work as a clock rate - 3.41734 GHz. It is necessary at this point to add that 3515.3502 wavelengths of the cadmium red line would, by the 1927 definition, subtend some 2.2633475317 millimeters (as opposed to 2.2633485174 millimeters), which seems to indicate a discrepancy in the definition of the Potrzebie. As a little arithmetic shows that this definition of the Potrzebie would lead to a meter of 1,553,163.45 wavelengths of the red line of cadmium in length, it seems apparent that Donald Knuth, then a 19-year-old high school student, had converted from millimeters to wavelengths of cadmium light using a reference giving the 1892 figure.
In my efforts to sort out the mystery of that discrepancy, which led me to finding out about the 1892 figure, I had encountered a biographical essay on A. Michelson by Robert A. Millikan giving 6438.472 Ångströms as the wavelength of the red line of cadmium as determined by Michelson, which would indeed lead to a standard of about 1,553,163.5 wavelengths per meter. In the essay, the resulting standard was given as 1,555,165.5, which I took to be a misprint. Thus, the mystery of the difference between the length of the Potrzebie as defined in terms of the meter and that as defined in terms of the cadmium red line appeared to be solved, and further searching led me both to confirmation that 1,553,163.5 was the figure arrived at by Michelson in 1892 and to find that the later standard derived from the 1906 measurements of Benoit, Fabry, and Perot as noted above. It may also be noted that in 1964, an agreement was reached between the U.S. and Britain to define the inch as 2.54 centimeters. Prior to 1964, the inch was defined in the U.S. on the basis that a meter was exactly 39.37 inches long, which led to the inch being about 2.540005 centimeters long, and in Britain the inch was 2.539997 centimeters in length. (One older reference gives the meter being 39.37079 inches, and the inch therefore being 2.539954 centimeters in length.) It may be noted that the Pyramid Inch was claimed to be 1.00106 English inches, so that would make it about 2.5426894 centimeters long. The Pyramid inch was said to be 1/25th of a royal cubit. In fact, an ordinary cubit, about 18 inches long (so they were at least right that cubits related better to Imperial measure than to the metric system) was divided into six spans (each three inches long), which were in turn divided into four digits (each 3/4 of an inch; and, indeed, the keys on our typewriter keyboards have 3/4 of an inch spacing even today).
A royal cubit is seven spans instead of six, and so, nominally, it should be 21 inches long, but then standards of measure were less accurate in those days. The royal cubit was used in the construction of the Great Pyramid; thus, its sides had a rise of one royal cubit for a run of five and one-half spans, which, multiplied by four, gives 3 1/7, giving the appearance that pi is involved in the construction of the Great Pyramid. In fact, though, serious archaeologists and historians now know that the Egyptian royal cubit was about 52.6 centimetres (give or take 3mm), or 20.7 inches - so it was indeed near to 21 inches, and actually somewhat smaller, and thus not 25.0265 inches long. To the extent, therefore, that such a thing as an ancient Egyptian inch has any meaning, it would be about 98 4/7 percent of an inch, not 1.00106 inches, in length. While the Egyptian measure relates to Imperial measure by a factor of about .986, the currently accepted value for the Roman foot makes it .971 feet long; thus, while the foot grew on its way to Britain, in the middle, as it passed through Rome, it shrank. And the Romans did divide the foot into 16 digits as well as 12 inches, and so linking the cubit to the inch as I have done is not unreasonable. I think it is unfortunate that they missed their chance to define the inch as being about 2.540002 centimeters in length, so that the diagonal of a square 152 inches on a side would be exactly 546 centimeters, or the diagonal of a square 273 centimeters on a side would be exactly 152 inches. After all, supporters of the metric system have always criticized the Imperial system as irrational; and it would be convenient if having two systems of measurement allowed one, by using both of them, to measure exactly both the sides of a square and its diagonal. Of course, lengths of 152 inches and 273 centimeters are somewhat unwieldy. However, as rulers measuring inches are often divided in tenths of an inch, one could relate 15.2 inches to 273 millimeters.
But inch rulers are more often divided into sixteenths of an inch. If one were to use the same method to relate the sixteenth of an inch to a millimeter, defining the inch as about 2.5399946 centimeters would lead to a square 284 millimeters on a side having a diagonal of 15 and 13/16 of an inch. However, failing changes in our systems of measurement, one can always simply make use of the fact that 20 squared is 400, 21 squared is 441, and 29 squared is 841 to come reasonably close to a 45 degree angle and still use exact distances. Also, even if redefining the inch is excluded, if only a single unit of measurement is used, comparable ratios to approximate the square root of two would be 239:169 and 577:408, which are in error by 0.000875 percent and 0.000150 percent respectively, while, using the inch of 2.54 centimeters, the ratio 152 inches to 273 centimeters approximates the square root of two with an error of 0.000078 percent. The diagonal of a square 273 centimeters on a side is 386.0803025278548... centimeters, while 152 inches of 2.54 centimeters each are 386.08 centimeters, so there is an excess of 0.0003025278548... centimeters. The older U.S. inch, such that 39.37 inches equal one metre, is still in use for survey purposes, and this inch is equal to 2.54000508001016... centimeters. If, of the 152 inches of the diagonal, 59 and 9/16 of those inches were measured using the older U.S. inch, and the other 92 and 7/16 of those inches were measured using the current inch of 2.54 cm, an even closer approximation to the square root of two would be obtained. A more approximate measurement of the diagonal of the square can be obtained using much simpler numbers. The diagonal of a square 9 centimeters on a side is 5.0109929... inches in length. For comparison, the diagonal of a square 7 centimeters on a side is 9.8994949... centimeters in length. The discrepancy, in addition to being in the opposite direction, is very nearly 3.6 times as large.
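The error percentages quoted for these approximations to the square root of two are easy to verify; a small sketch:

```python
import math

def pct_error(approx: float) -> float:
    """Percent error of approx relative to sqrt(2)."""
    return abs(approx - math.sqrt(2)) / math.sqrt(2) * 100

print(f"{pct_error(239 / 169):.6f} %")           # about 0.000875 %
print(f"{pct_error(577 / 408):.6f} %")           # about 0.000150 %
print(f"{pct_error(152 * 2.54 / 273):.6f} %")    # about 0.000078 %
```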
So, the diagonal of a square 39.4 centimeters on a side, which is 55.7200143574999... centimeters, is very close to 10 centimeters (about the diagonal of 7 centimeters) plus 18 inches (about the diagonal of the other 32.4 centimeters) since 18 inches is 45.72 centimeters. Since these are all even numbers, we can note that 9 inches plus 5 centimeters is approximately the diagonal of 19.7 centimeters. On one page on this site, I note that the Egyptians had a royal cubit of seven spans as opposed to the regular cubit of six spans. But at least one page claims that the original form of the royal cubit in Egypt was a unit for measuring diagonals. The U.S. pound was redefined as 453.59237 grams in 1959, having previously been defined on the basis of 2.20462234 pounds equalling one kilogram exactly, leading to a pound of about 453.5924277 grams, nearly identical to the British pound. A pound, that is, the normal 453.59-gram pound used to weigh food, is 7000 grains in weight. Thus, if you have peas to weigh, you use this pound, which is called the avoirdupois pound. The troy ounce, which is used to weigh gold, however, is 480 grains in weight, and there are twelve troy ounces in a troy pound. Thus, the relevant conversions are:
                    Grains       Grams         Grams (pre-1959 U.S.)
Avoirdupois Pound   7000         453.59237     453.59242798927639
Troy Pound          5760         373.24172     373.241769316890283
Troy Ounce          480          31.103477     31.1034807764075236
Avoirdupois Ounce   437.5        28.349523     28.3495267493297741
Grain               1            0.06479891    0.064798918284182341

And so we can make this chart of ounces versus grams for some common chocolate bar sizes:

Ounces     Grams
1          28.35
1 1/2      42.52
1.58733    45
1 3/5      45.36
1 3/4      49.61
2          56.7
2 1/2      70.87
2.64555    75
3          85.05
3 1/2      99.22
3.5274     100
4          113.4

Here is a rough table of some values for gold, and the relative value of a dollar in terms of gold under those values:

Gold price                   -1834     -1933     -1971     2011
Up to 1834: $19.38           $1.00     $1.07     $1.81     $81.80
Up to 1933: $20.67           94¢       $1.00     $1.69     $76.69
Up to 1971: $35.00           55¢       59¢       $1.00     $45.29
January 21, 1980: $850.00    2.28¢     2.43¢     4.12¢     $1.86
April 2, 2001: $255.95       7.57¢     8.08¢     13.67¢    $6.19
July 13, 2011: $1,585.20     1.22¢     1.3¢      2.2¢      $1.00

The peak high in 1980 (New York), the peak low in 2001 (London), and the new record-setting price (New York) are noted. People writing mediaeval role-playing games should note that due to the difference in density between gold and silver, if one uses the historic ratio that gold is 16 times more valuable than silver by weight, then it is also (approximately) 25 times more valuable than silver by volume. This will make it possible to determine more accurately how much treasure your characters can carry in their backpacks, while still using handy round numbers. Prior to the devaluation of the U.S. dollar in 1933, which also led to the abolishment of gold coinage, the U.S. dollar was defined by defining the Gold Eagle, a coin with a denomination of $10, as being composed of 258 grains of 9/10 fine gold. In U.S. coins, the remaining tenth was made up of half silver and half copper, which were not without value themselves. The Coinage Act of 1792 originally defined the dollar as 371.25 grains of silver, and that is 90% of 412.5.
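The grains-to-grams column of the table above follows from the single defined constant of the 1959 agreement, the grain of 0.06479891 grams; a sketch:

```python
GRAIN_IN_GRAMS = 0.06479891  # exact, from the 1959 definition of the pound

units_in_grains = {
    "avoirdupois pound": 7000,
    "troy pound": 5760,
    "troy ounce": 480,
    "avoirdupois ounce": 437.5,
}

for name, grains in units_in_grains.items():
    print(f"{name}: {grains * GRAIN_IN_GRAMS:.8f} g")
```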
In 1872, the weight of the silver dollar was reduced to 384 grains of 9/10 fine silver, because the price of silver had gone up again relative to that of gold, and gold was generally accepted as the international standard of monetary value. However, unlike the previous silver dollar, while this coin was still intended to be worth nearly a dollar, it was not expected to be worth exactly a full dollar; it was not token coinage, but it was subsidiary coinage. If the situation between 1834 and 1872 is used as the basis, then, an attempt can be made to determine the exact nominal value of pure gold in U.S. dollars. A gold dollar of 25.8 grains of 9/10 fine gold consists of 23.22 grains of pure gold, 1.29 grains of pure silver, and 1.29 grains of copper. 371.25 grains of pure silver and 41.25 grains of copper equal a dollar. Thus, 1.29 grains of pure silver and 0.14333... grains of copper equal 0.3474747... cents. So, 23.22 grains of pure gold and the remaining 1.15333... grains of copper equal 99.6525252... cents; thus, if we neglect the value of the copper, that works out to one troy ounce of gold (480 grains) being worth almost exactly $20.60 instead of $20.67. If silver were worth 1/16th as much as gold, this would make 371.25 grains of silver worth about 99.58 cents instead of a dollar. In 1805, the price of copper was considered high at 138 pounds a ton, or about $690 a ton. A ton being 2000 avoirdupois pounds of 7000 grains, this would be about 2.366 cents per troy ounce, compared to about $1.29 per troy ounce for silver and $20.60 per troy ounce for gold. Thus, the ratio of value by weight, historically, had been in the neighborhood of 50 to 1 between silver and copper, or perhaps higher. Given the composition of the Gold Eagle above, a half-eagle, with a $5 denomination, would be composed of 116.1 grains of pure gold, 6.45 grains of pure silver, and 6.45 grains of pure copper. 
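The $20.60 figure can be reproduced from the coin compositions given above; a sketch of the arithmetic:

```python
# Grains of metal in a gold dollar (1/10 of the 258-grain Gold Eagle):
pure_gold_grains = 23.22     # 9/10 of 25.8 grains
alloy_silver_grains = 1.29   # half of the remaining tenth

# A silver dollar contains 371.25 grains of pure silver.
silver_value_cents = alloy_silver_grains / 371.25 * 100   # about 0.3475 cents

# Neglecting the copper, the gold carries the rest of the dollar's value.
gold_value_cents = 100 - silver_value_cents               # about 99.65 cents

# Value of a troy ounce (480 grains) of pure gold, in dollars:
dollars_per_troy_ounce = gold_value_cents / pure_gold_grains * 480 / 100
print(f"${dollars_per_troy_ounce:.2f}")   # about $20.60, versus the nominal $20.67
```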
A Gold Sovereign from Britain, on the other hand, is composed of 113.0015 grains of pure gold, and 10.2729 grains of copper, for a total weight of 123.2744 grains, or 7.9881 grams. Another source gives the weight of a British Gold Sovereign as 5 pennyweights, 3 171/623 grains; one pennyweight is 24 grains, so this is 123 171/623 grains, or approximately 123.2744783306581 grains. This works out to about 113.0016 grains of pure gold at 22 carats. If the halfpenny were exactly equal to the cent, making the British pound equal to $4.80 in value, 113 grains of gold to the pound would imply about 117.7 grains of gold to five dollars; thus, while a pound was not worth quite as much as five U.S. dollars, it was actually worth about $4.85 at the time of the gold standard. (Actually, the value of the mint par of exchange used at the time was $4.8665.) While the Canadian dollar was made to be equal to the U.S. dollar, the Newfoundland dollar was about 1 1/3 cents more than a U.S. dollar, as one would expect from a dollar equal to 100 halfpence. In that era, other countries also had currencies which were fixed in value in terms of gold, and thus to one another. Thus, 19.2952 cents, at 4.48036 grains of gold, was the value of a French Franc, a Swiss Franc, a Belgian Franc, an Italian Lira, a Bulgarian Lev, or a Greek Drachma in those days. Here is a rough table of coin diameters and values, where the values are in terms of the value of U. S.
currency in precious metals between 1892 and 1933, and the upper value is for silver coins, the lower for gold coins:

Diameter   13mm      15mm      16mm         17mm
Silver     4¢        5¢        6 1/4¢       7 13/16¢
Gold       $1.00     $1.25     $1.56 1/4    $1.95 5/16

Diameter   17mm      18mm      19mm         20mm
Silver     8¢        10¢       12 1/2¢      15 5/8¢
Gold       $2.00     $2.50     $3.12 1/2    $3.90 5/8

Diameter   20mm      22mm      24mm         26mm
Silver     16¢       20¢       25¢          31 1/4¢
Gold       $4.00     $5.00     $6.25        $7.81 1/4

Diameter   26mm      28mm      31mm         33mm
Silver     32¢       40¢       50¢          62 1/2¢
Gold       $8.00     $10.00    $12.50       $15.62 1/2

Diameter   33mm      35mm      38mm         41mm
Silver     64¢       80¢       $1.00        $1.25
Gold       $16.00    $20.00    $25.00       $31.25

Moving one column to the right increases the weight and value of the coin by a factor of 1.25; moving one row down doubles the weight and value of the coin. The sizes have been formed on the basis of the currencies of several different countries, and thus they tend to be smaller than the sizes of U.S. coins, and/or larger than the sizes of British coins of corresponding value (1 shilling = 25 cents, 1 florin = 50 cents, 1 crown = $1.25, 1 pound = $5.00) up to the year 1919. Because copper or bronze coins tend to be token coinage, their size is quite variable; the value ratio of 25:1 for gold and silver at the same diameter is consistent, but that between silver and copper can vary from 25:1 to 50:1. The density of gold is 19.32 grams per cubic centimeter, that of silver 10.5 grams/cc, that of copper 8.96 grams/cc. A ratio of 64:1 in value by weight would lead to a ratio of 75:1 in value by volume for silver and copper as metals. Copper coins tend, however, to be a token currency, that is, a metal form of paper money as it were, and so the ratio in weight of copper coin to silver coin would be more like 16:1 or 32:1. The atomic weight of gold is 196.9665, and its one stable isotope is Gold-197. An atomic mass unit is 1.6605655 * 10^-24 grams. Thus, one grain of gold is .064798918... grams of gold, and would contain some 1.9811592 * 10^20 atoms of gold. Using a 16:1 ratio of value between gold and silver, and a 64:1 ratio of value between silver and copper for a 1024:1 ratio of value between gold and copper, one finds that U.S.
gold coins have a metal content whose value is that of 25.8 * (.9 + .05/16 + .05/1024) = 23.3019 grains of gold per dollar. One could, if one wished, define a dollar as the pecuniary value of 4.7624784 * 10^21 atoms of Gold-197, or, as they say, contained in a good delivery gold bar, which is a bar of gold that is at least 99.5% fine, and which has a mass of approximately 400 troy ounces, ranging from 350 troy ounces to 430 troy ounces (the actual fineness and the weight being marked on the bar). 400 troy ounces is about 12,441.4 grams, and the range would be from a low of 10,886.22 grams to a high of 13,374.49 grams. (Whether or not the avoirdupois pound is a unit of force instead of a unit of mass, the troy ounce is definitely a unit of mass.)
[Numpy-discussion] very simple iteration question.
Anne Archibald peridot.faceted@gmail....
Wed Apr 30 12:24:43 CDT 2008

2008/4/30 Christopher Barker <Chris.Barker@noaa.gov>:
> a g wrote:
> > OK: how do i iterate over an axis other than 0?
> This ties in nicely with some of the discussion about iterating over
> matrices. It has been suggested that it would be nice to have iterators
> for matrices, so you could do:
> for row in M.rows:
> ...
> and
> for column in M.cols:
> ...
> If so, then wouldn't it make sense to have built in iterators for
> nd-arrays as well? something like:
> for subarray in A.iter(axis=i):
> ...
> where axis would default to 0.
> This is certainly cleaner than:
> for j in range(A.shape[i]):
> subarray = A[:,:,j,:]
> Wait! I have no idea how to spell that generically -- i.e. the ith
> index, where i is a variable. There must be a way to build a slice
> object dynamically, but I don't know it.

Slices can be built without too much trouble using slice(), but it's much easier to just write

for subarray in np.rollaxis(A,i):

rollaxis() just pulls the specified axis to the front. (It doesn't do what I thought it would, which is a cyclic permutation of the axes).

More information about the Numpy-discussion mailing list
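Anne's rollaxis() suggestion in action (a quick sketch; np.rollaxis still works, though newer NumPy code would typically reach for np.moveaxis(A, i, 0) for the same effect):

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)
i = 1  # iterate over axis 1

# rollaxis pulls axis i to the front, so plain iteration then walks
# along that axis, yielding A[:, j, :] for j = 0, 1, 2.
for j, subarray in enumerate(np.rollaxis(A, i)):
    assert subarray.shape == (2, 4)
    assert np.array_equal(subarray, A[:, j, :])
```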
Calculus Tutors San Mateo, CA 94403
Enthusiastic Tutor of Mathematics and Physics
...We then work together to build on it, with lots of practice, to expand knowledge and mastery. Pre-algebra, algebra, geometry, analytic (e.g., Cartesian) geometry, simple differential equations, matrices; classical mechanics, electricity and magnetism,...
Offering 10+ subjects including calculus
Help with rounding a number

Hi, I have a math project in which I need to round numbers correctly (i.e. not just rounding up, but proper rounding, so if it was 4.4 it would become 4, and if it was 4.5 it would be 5). Now it's easy enough to convert a float to an int, but then it always rounds up; is it possible to somehow round it correctly? Thanks, Nick

How many decimal places do you want to round? I won't give you any code, because that would simply give you the answer to your homework. We don't do that. Just think about the problem: 1) Read the decimal place. 2) Round up if higher than N, otherwise round down. It's really not that hard.

I agree, it's not hard. It can be done in one line of code, but I'm here on quzah's side because I want "to make them think" - by Quzah. Yeah, I like that phrase in your signature. Now maybe to help with a few hints: you can use the math header <math.h>, possibly using abs, floor, ceil or even separately modf, etc... Anyway, here is a link to all the codes given in the header file math.h. Hope this helps, - Stack Overflow

There is one function in math.h that exactly answers your requirements. It is called "round" itself. I am not sure, but as far as I know it is a standard function. Why don't you just check it out.
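The floor-plus-one-half idea the replies are hinting at looks like this (sketched in Python rather than C for brevity; math.floor plays the role of floor() from C's <math.h>):

```python
import math

def round_half_up(x: float) -> int:
    """Round to the nearest integer, with .5 cases going up.

    Note: "up" here means toward +infinity, so round_half_up(-4.5)
    is -4, whereas C99's round() would give -5 (halves away from zero).
    """
    return math.floor(x + 0.5)

assert round_half_up(4.4) == 4
assert round_half_up(4.5) == 5
```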
Cohomology of twisted holomorphic forms on Fano threefolds

Given a Fano threefold $X$, its index $ind(X)$ is the largest integer $r$ such that there exists a divisor $H$ such that $rH \cong -K_X$. Let $\mathcal{L}$ be the associated (ample) line bundle and define the twisted forms $\Omega^q(k) = \Omega^q \otimes \mathcal{L}^k$. When do the twisted cohomology groups $H^p(X, \Omega^q(k))$ vanish? The Kodaira-Nakano vanishing theorem states that the cohomology groups vanish for $p+q > 3$ when $k > 0$. For simple examples such as $\mathbb{CP}^3$ and the quadric hypersurface, many more of the twisted cohomology groups vanish. What results are known? Can anything stronger be said if $X$ has a Kahler-Einstein metric?

Added: There are a few scattered results in the literature. For example: The only two threefolds with non-vanishing $H^0(X, \Omega^1_X (1))$ are the Mukai–Umemura threefold $V_{22}$ and $V_{18}$ [math.AG/0310390].

Tags: ag.algebraic-geometry, fano-varieties

Answer:

The Hodge groups $H^1(X,\Omega^2_X)$ and $H^2(X,\Omega^1_X)$ are usually not zero. These are the groups used to construct the Griffiths intermediate Jacobian of $X$. For many types of Fano threefolds, e.g., smooth cubic hypersurfaces in $\mathbb{C}P^4$, the Torelli theorem holds -- one can uniquely reconstruct $X$ from its polarized intermediate Jacobian (or, equivalently, polarized weight 3 Hodge structure).

$\textbf{Update}$. The OP asks for examples where $k$ is positive. Let $X$ be a smooth, cubic hypersurface in $\mathbb{P}^4$. I claim that $h^1(X,\Omega^2_X(1))$ is nonzero. Let $S$ be a general hyperplane section of $X$ so that $\mathcal{O}_X(S)$ equals $\mathcal{O}_{\mathbb{P}^4}(1)|_X$. Then I even claim that the induced map of cohomology groups, $$ H^1(X,\Omega^2_X) \to H^1(X,\Omega^2_X(S)), $$ is injective. Using the long exact sequence of cohomology, to prove this, it suffices to prove that $h^0(S,\Omega^2_X(S)|_S)$ equals $0$.
Let $C\cong \mathbb{P}^1$ be a smooth conic in the smooth cubic surface $S$. Then $T_X|_C$ is isomorphic to $T_C\oplus N_{C/X}$, which is $\mathcal{O}_{\mathbb{P}^1}(2)\oplus (\mathcal{O}_{\mathbb{P}^1}(1))^{\oplus 2}$. Thus $\Omega_X|_C$ is isomorphic to $\Omega_C\oplus T_C^{\dagger}$, i.e., $\mathcal{O}_{\mathbb{P}^1}(-2) \oplus (\mathcal{O}_{\mathbb{P}^1}(-1))^{\oplus 2}$, where $T_C^\dagger$ denotes the annihilator of $T_C$. Taking the second exterior power, $\Omega^2_X|_C$ is isomorphic to $(\Omega_C\otimes T_C^\dagger)\oplus \bigwedge^2 T_C^\dagger$, i.e., $(\mathcal{O}_{\mathbb{P}^1}(-3))^{\oplus 2} \oplus \mathcal{O}_{\mathbb{P}^1}(-2)$. Finally, $\Omega^2_X(+1)|_C$ is isomorphic to $(\mathcal{O}_{\mathbb{P}^1}(-1))^{\oplus 2} \oplus \mathcal{O}_{\mathbb{P}^1}$. Therefore, $H^0(C,\Omega^2_X(S)|_C)$ is $1$-dimensional, with the unique global section (up to scaling) in the "direction of" $\bigwedge^2 T_C^\dagger$. One way of thinking of this is, if you contract this $2$-form with a tangent vector in $T_C$ at a point $p$ of $C$, then you get the zero $1$-form. However, for a general point $p$ of $S$, there are $27$ different conics passing through this point with $27$ different tangent spaces that span the entire tangent space of $S$ at $p$. For a global section in $H^0(S,\Omega^2_X(S)|_S)$ and each of these $27$ conics $C$ containing $p$, the contraction of the $2$-form with $T_{C,p}$ gives zero, and the spaces $T_{C,p}$ span $T_{S,p}$. Thus this $2$-form must contract to zero with every element in $T_{S,p}$. Since $T_{S,p}$ is a codimension $1$ linear subspace of $T_{X,p}$, the only $2$-form that contracts to zero with all of $T_{S,p}$ is the zero $2$-form (there are nonzero $1$-forms that contract to zero with all of $T_{S,p}$). Therefore, every global section of $\Omega^2_X(S)|_S$ must vanish at every sufficiently general point $p$ of $S$.
Since $\Omega^2_X(S)|_S$ is a locally free sheaf on an integral scheme $S$, this means that every global section is zero. $\textbf{Second Update}$. I want to add one word about why $N_{C/X}$ is isomorphic to $\mathcal{O}_{\mathbb{P}^1}(1)\oplus \mathcal{O}_{\mathbb{P}^1}(1)$, as opposed to $\mathcal{O}_{\mathbb{P}^1}(2)\oplus \mathcal{O}_{\mathbb{P}^1}$. This is equivalent to saying that the induced map $H^0(C,N_{C/X}) \to H^0(C,N_{C/X}|_{0,\infty})$ is surjective, i.e., the "evaluation map" from the parameter space of $2$-pointed conics in $X$ to $X\times X$ is submersive. By Sard's theorem / generic smoothness, it suffices to prove the map is dominant (since we are in characteristic $0$). Given general points $x_0$, $x_\infty$ of $X$, the line $L$ spanned by $x_0$ and $x_\infty$ intersects $X$ in a third point $x_1$, which is also a general point of $X$ (since $X$ is a cubic hypersurface and the points are general). Thus there are $6$ lines $M$ in $X$ containing $x_1$. For each such line, for the plane $\Pi = \text{span}(L,M)$, the intersection of $\Pi$ with $X$ equals $M\cup C$, where $C$ is a conic containing $x_0$ and $x_\infty$. Also, in all likelihood, this computation of $H^1(X,\Omega^2_X(1))$ appears somewhere in the publications of Markushevich, Tikhomirov and Iliev, since this comes up in their analysis of the Abel-Jacobi maps associated to cubic threefolds.

Comment: The Hodge groups are for $k=0$. Can anything be said about $k > 0$? For the quadric, the groups $H^1(X, \Omega_X^2(k))$ vanish for all $k > 1$. – Richard Eager, May 8 '13
How can I express this with mathematical notation?

August 13th 2011, 07:41 AM #1
I have a pentagon that I have split into 5 triangles and was wanting to say that the sum of the triangles' angles is 900 and the sum of the 5 middle angles formed by the points of the triangles is 360, therefore the pentagon's angle sum is 900 - 360 = 540. How can I express this all in mathematical notation? If I labelled the triangles $(\Delta)$ from 1-5 and the angles $(\angle)$ from a-e in my diagram, could I say: $\sum_{\Delta=1}^5 (\Delta) - \sum_{\angle = a}^e (\angle) = 540$ or something?

August 13th 2011, 08:14 AM #2
Re: How can I express this with mathematical notation?
If you have any convex polygon with $n\ge 3$ vertices then the sum of its angles is $\left(180n-360\right)^\circ$.
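The reply's formula matches the triangulation argument in the question; a quick numeric sketch:

```python
def interior_angle_sum(n: int) -> int:
    """Interior angle sum of an n-gon via the question's argument:
    n triangles drawn from an interior point contribute n*180 degrees,
    minus the full 360-degree turn around the interior point."""
    return 180 * n - 360

assert interior_angle_sum(3) == 180   # triangle
assert interior_angle_sum(4) == 360   # quadrilateral
assert interior_angle_sum(5) == 540   # the pentagon: 900 - 360
```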
South Grafton Trigonometry Tutor

I am an experienced math tutor for students in middle school through college. I have a PhD in Applied Math from UC Berkeley and have been tutoring students part time for the last four years. I enjoy working with students who are motivated but need a little help to understand the subject at hand.
11 Subjects: including trigonometry, calculus, algebra 2, geometry

...I did a postdoctoral fellowship in neuroscience, where that knowledge was crucial in order to do research. For MCAT tutoring, students generally specify which subjects they want tutoring in. I would apply to those positions where my strengths are needed.
47 Subjects: including trigonometry, reading, chemistry, geometry

...My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. My strength is my ability to look at a challenging concept from different angles.
8 Subjects: including trigonometry, calculus, geometry, algebra 1

...I usually explore different problem solving techniques with the student until we find the ones that work best. I give lots of practice problems to ensure the student is confident and excelling in the specific areas. I also help with homework.
16 Subjects: including trigonometry, reading, geometry, SAT math

I have my degree in Mathematics from New Mexico State University, where I tutored in the math center for four years. I also received a degree in Theatre Arts and took a wide variety of courses. Math has always been one of my favorite subjects.
9 Subjects: including trigonometry, calculus, geometry, algebra 1
Homework Help

Posted by Shadow on Monday, May 3, 2010 at 10:25pm.
Are these the right answers? Thanks

• Math - PsyDAG, Tuesday, May 4, 2010 at 8:17am
1. If it is really (xy)^2/xy, then you are right. If not, it is just y.
2. The two p's cancel each other out = mn
3. The two "c^2" cancel each other out = -5/4d^2
4. The c^3 should be in the denominator rather than the numerator = (-1/6)/(c^3).
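The first answer can be spot-checked numerically, assuming the intended expression was (xy)^2/(xy), which simplifies to xy; a sketch:

```python
def original(x: float, y: float) -> float:
    return (x * y) ** 2 / (x * y)

def simplified(x: float, y: float) -> float:
    return x * y

# Agreement at a few sample points (x*y must be nonzero, or the
# original expression is undefined):
for x, y in [(2, 3), (5, -4), (0.5, 8)]:
    assert abs(original(x, y) - simplified(x, y)) < 1e-12
```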
Turning calculations From ROBOTC API Guide Turning calculations Turning calculations Differential Steering For a differential drive system, the "center point" where all the turning calculations are based from is based on the location of all the various driving wheels. To keep the process simple, we recommend that you try to keep the wheels evenly spaced so the center is just half way between all the wheels. If the system has a caster, then you can just ignore it for simplicity. [S:So say you had 6 drive wheels equally spaced, then the "center" would be half way between the two center wheels. However if the wheels are not evenly spaced then you need to do some math. Say the 6 wheels were placed so that the front wheels are 7 in from the center wheels and the back wheels are only 4 in from the center. In this instance you would find the average. you could say that the front wheels are +7, the center are 0, and the back wheels are -4. If you take the average you find that the "center" is at +1 from the center wheels.:S] For the calculations we will say that the distance between the wheels from one side to the other is W. We will also say that the desired swing radius from the center is R, and the desired speed is V. The speed of the left side will be defined as V[L] and the speed of the right side as V[R]. Akerman Steering Akerman steering is the steering system used in just about every automobile. Normally the center point is located between a pair of fixed wheels. However this system can have the "center point" located anywhere you want. We will define the distance from the center point to the wheel length-wise as H. We will also define the turning radius as R. We will define the angle of the left and right wheels as Θ[L] and Θ[R] respectively. We will also define W as the distance between the wheels on each side.
{"url":"http://www.robotc.net/w/index.php?title=Tutorials/Arduino_Projects/Additional_Info/Turning_Calculations&oldid=5740","timestamp":"2014-04-25T04:06:59Z","content_type":null,"content_length":"26024","record_id":"<urn:uuid:4f642176-aa5c-451c-af98-a68a91a8786c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: SYSTEM AND METHOD FOR GENERATING PIXEL VALUES FOR PIXELS IN AN IMAGE USING STRICTLY DETERMINISTIC METHODOLOGIES FOR GENERATING SAMPLE POINTS A computer graphics system generates a pixel value for a pixel in an image, the pixel being representative of a point in a scene as recorded on an image plane of a simulated camera. The computer graphics system comprises a sample point generator and a function evaluator. The sample point generator is configured to generate a set of sample points representing at least one simulated element of the simulated camera, the sample points representing elements of, illustratively, for sample points on the image plane, during time intervals during which the shutter is open, and on the lens, a Hammersley sequence, and, for use in global illumination, a scrambled Halton sequence. The function evaluator is configured to generate at least one value representing an evaluation of a selected function at sample points generated by the sample point generator, those values corresponding to the pixel value. A computer graphics system for generating a pixel value for a pixel in an image, the pixel being representative of a point in a scene as recorded on an image plane of a simulated camera, the computer graphics system comprising: A. a sample point generator configured to generate a set of sample points representing at least one simulated element of the simulated camera, the sample points representing elements of a Hammersley sequence; and B. a function evaluator configured to generate at least one value representing an evaluation of said selected function at one of the sample points generated by said sample point generator, the value generated by the function evaluator corresponding to the pixel value.
A computer graphics method for generating a pixel value for a pixel in an image, the pixel being representative of a point in a scene as recorded on an image plane of a simulated camera, the computer graphics method comprising: A. a sample point generating step in which a set of sample points is generated representing at least one simulated element of the simulated camera, the sample points representing elements of a Hammersley sequence; and B. a function evaluation step in which at least one value is generated representing an evaluation of said selected function at one of the sample points generated in said sample point generating step, the value generated in the function evaluation step corresponding to the pixel value. A computer program product for use in connection with a computer to provide a computer graphics system for generating a pixel value for a pixel in an image, the pixel being representative of a point in a scene as recorded on an image plane of a simulated camera, the computer program product comprising a machine-readable medium having encoded thereon: A. a sample point generator module configured to enable the computer to generate a set of sample points representing at least one simulated element of the simulated camera, the sample points representing elements of a Hammersley sequence; and B. a function evaluator module configured to enable the computer to generate at least one value representing an evaluation of said selected function at one of the sample points generated by said sample point generator module, the value generated by the function evaluator corresponding to the pixel value. This application for patent is a Continuation of U.S. patent application Ser. No. 11/548124 filed Oct. 10, 2006 (Atty. Dkt. MENT-061-CON), which is a Continuation of U.S. patent application Ser. No. 09/884861 filed Jun. 19, 2001 (issued Jun. 5, 2007 as U.S. Pat. No. 7,227,547), which claims the benefit of U.S. Provisional Patent Apps. 60/212286 filed Jun.
19, 2000 and 60/265934 filed Feb. 1, 2001 (both expired). This application also incorporates by reference herein U.S. patent application Ser. No. 08/880418 filed Jun. 23, 1997 (issued Mar. 4, 2003 as U.S. Pat. No. 6,529,193), Grabenstein, et al., entitled "System And Method For Generating Pixel Values For Pixels In An Image Using Strictly Deterministic Methodologies For Generating Sample Points" (hereinafter the "Grabenstein application"). FIELD OF THE INVENTION [0002] The invention relates generally to the field of computer graphics, and more particularly to systems and methods for generating pixel values for pixels in an image. BACKGROUND OF THE INVENTION [0003] In computer graphics, a computer is used to generate digital data that represents the projection of surfaces of objects in, for example, a three-dimensional scene, illuminated by one or more light sources, onto a two-dimensional image plane, to simulate the recording of the scene by, for example, a camera. The camera may include a lens for projecting the image of the scene onto the image plane, or it may comprise a pinhole camera, in which case no lens is used. The two-dimensional image is in the form of an array of picture elements (variously termed "pixels" or "pels"), and the digital data generated for each pixel represents the color and luminance of the scene as projected onto the image plane at the point of the respective pixel in the image plane. The surfaces of the objects may have any of a number of characteristics, including shape, color, specularity, texture, and so forth, which are preferably rendered in the image as closely as possible, to provide a realistic-looking image. Generally, the contributions of the light reflected from the various points in the scene to the pixel value representing the color and intensity of a particular pixel are expressed in the form of one or more integrals of relatively complicated functions.
Since the integrals used in computer graphics generally will not have a closed-form solution, numerical methods must be used to evaluate them and thereby generate the pixel value. Typically, a conventional "Monte Carlo" method has been used in computer graphics to numerically evaluate the integrals. Generally, in the Monte Carlo method, to evaluate an integral

⟨f⟩ = ∫_0^1 f(x) dx,   (1)

where f(x) is a real function on the real numerical interval from zero to one, inclusive, first a number "N" of statistically independent random numbers x_i, i = 1, ..., N, is generated over the interval. The random numbers x_i are used as sample points for which sample values f(x_i) are generated for the function f(x), and an estimate f̄ for the integral is generated as

⟨f⟩ ≈ f̄ = (1/N) Σ_{i=1}^{N} f(x_i).   (2)

As the number of random numbers used in generating the sample values f(x_i) increases, the value of the estimate f̄ will converge toward the actual value of the integral ⟨f⟩. Generally, the distribution of estimate values that will be generated for various values of "N," that is, for various numbers of samples, will be normally distributed around the actual value with a standard deviation σ which can be estimated by

σ = √( ( \overline{f²} − f̄² ) / (N − 1) ),   (3)

where \overline{f²} denotes the mean of the squared sample values, if the values x_i used to generate the sample values f(x_i) are statistically independent, that is, if the values x_i are truly generated at random. Generally, it has been believed that random methodologies like the Monte Carlo method are necessary to ensure that undesirable artifacts, such as Moiré patterns, aliasing, and the like, which are not in the scene, will not be generated in the generated image. However, several problems arise from the use of the Monte Carlo method in computer graphics. First, since the sample points x_i used in the Monte Carlo method are randomly distributed, they may clump in various regions over the interval over which the integral is to be evaluated.
Accordingly, depending on the set of random numbers which are generated, in the Monte Carlo method there may be significant portions of the interval containing no sample points x_i for which sample values f(x_i) are generated. In that case, the error can become quite large. In the context of generating a pixel value in computer graphics, the pixel value that is actually generated using the Monte Carlo method may not reflect some elements which might otherwise be reflected if the sample points x_i were guaranteed to be more evenly distributed over the interval. This problem can be alleviated somewhat by dividing the interval into a plurality of sub-intervals, but it is generally difficult to determine a priori the number of sub-intervals into which the interval should be divided, and, in addition, in a multi-dimensional integration region (which would actually be used in computer graphics, instead of the one-dimensional interval described here) the partitioning of the region into sub-regions of equal size can be quite complicated. In addition, since the method makes use of random numbers, the error |f̄ − ⟨f⟩| (where |x| represents the absolute value of the value "x") between the estimate value f̄ and the actual value ⟨f⟩ is probabilistic, and, since the error values for large values of "N" are close to normally distributed around the actual value ⟨f⟩, only sixty-eight percent of the estimate values f̄ that might be generated are guaranteed to lie within one standard deviation of the actual value ⟨f⟩. Furthermore, as is clear from equation (3), the standard deviation σ decreases with increasing numbers of samples N, proportional to the reciprocal of the square root of "N" (that is, 1/√N).
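The behaviour described in equations (1) through (3) can be illustrated with a short sketch. The function, sample counts, and names below are illustrative choices for exposition, not part of the patent:

```python
import math
import random

def monte_carlo_estimate(f, n, seed=0):
    """Estimate the integral of f over [0, 1] from n random sample points,
    returning the estimate f-bar of equation (2) and the standard-deviation
    estimate of equation (3)."""
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]
    mean = sum(samples) / n                       # equation (2)
    mean_sq = sum(s * s for s in samples) / n
    # equation (3): sigma = sqrt((mean of f^2 - mean^2) / (N - 1))
    sigma = math.sqrt(max(mean_sq - mean * mean, 0.0) / (n - 1))
    return mean, sigma

# The integral of x^2 over [0, 1] is 1/3; quadrupling N roughly halves
# the estimated error, per the 1/sqrt(N) behaviour noted above.
est_small, sigma_small = monte_carlo_estimate(lambda x: x * x, 1000)
est_large, sigma_large = monte_carlo_estimate(lambda x: x * x, 4000)
```

Running this shows the estimate near 1/3 and the error estimate shrinking with the square root of the sample count, matching the discussion above.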
Thus, if it is desired to reduce the statistical error by a factor of two, it will be necessary to increase the number of samples N by a factor of four, which, in turn, increases the computational load that is required to generate the pixel values, for each of the numerous pixels in the image. Additionally, since the Monte Carlo method requires random numbers, an efficient mechanism for generating random numbers is needed. Generally, digital computers are provided with so-called "random number" generators, which are computer programs which can be processed to generate a set of numbers that are approximately random. Since the random number generators use deterministic techniques, the numbers that are generated are not truly random. However, the property that subsequent random numbers from a random number generator are statistically independent should be maintained by deterministic implementations of pseudo-random numbers on a computer. The Grabenstein application describes a computer graphics system and method for generating pixel values for pixels in an image using a strictly deterministic methodology for generating sample points, which avoids the above-described problems with the Monte Carlo method. The strictly deterministic methodology described in the Grabenstein application provides a low-discrepancy sample point sequence which ensures, a priori, that the sample points are generally more evenly distributed throughout the region over which the respective integrals are being evaluated. In one embodiment, the sample points that are used are based on a so-called Halton sequence. See, for example, J. H. Halton, Numerische Mathematik, Vol. 2, pp. 84-90 (1960) and W. H. Press, et al., Numerical Recipes in Fortran (2d Edition) page 300 (Cambridge University Press, 1992).
In a Halton sequence generated for number base "p," where "p" is a selected prime number, the "k-th" value of the sequence, represented by H_k^p, is generated by use of a "radical inverse" operation, that is, by (1) writing the value "k" as a numerical representation of the value in the selected base "p," thereby to provide a representation for the value as D_M D_{M-1} ... D_1, where D_m (m = 1, 2, ..., M) are the digits of the representation, (2) putting a radix point (corresponding to a decimal point for numbers written in base ten) at the least significant end of the representation D_M D_{M-1} ... D_1 written in step (1) above, and (3) reflecting the digits around the radix point to provide 0.D_1 D_2 ... D_M, which corresponds to H_k^p. It will be appreciated that, regardless of the base "p" selected for the representation, for any series of values, one, two, ..., "k," written in base "p," the least significant digits of the representation will change at a faster rate than the most significant digits. As a result, in the Halton sequence H_1^p, H_2^p, ..., H_k^p, the most significant digits will change at the faster rate, so that the early values in the sequence will be generally widely distributed over the interval from zero to one, and later values in the sequence will fill in interstices among the earlier values in the sequence. Unlike the random or pseudo-random numbers used in the Monte Carlo method as described above, the values of the Halton sequence are not statistically independent; on the contrary, the values of the Halton sequence are strictly deterministic, "maximally avoiding" each other over the interval, and so they will not clump, whereas the random or pseudo-random numbers used in the Monte Carlo method may clump. It will be appreciated that the Halton sequence as described above provides a sequence of values over the interval from zero to one, inclusive, along a single dimension.
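The radical inverse operation described above can be sketched in a few lines; the function name is a hypothetical choice for illustration:

```python
def radical_inverse(k, p):
    """Reflect the base-p digits of k about the radix point: the
    "radical inverse" operation used to generate H_k^p."""
    inv, scale = 0.0, 1.0 / p
    while k > 0:
        k, digit = divmod(k, p)   # peel off the least significant digit
        inv += digit * scale      # ...which becomes the most significant
        scale /= p
    return inv

# Early values of the base-2 Halton sequence spread widely over [0, 1);
# later values fill in the interstices, as described above.
halton_base2 = [radical_inverse(k, 2) for k in range(1, 9)]
```

For base two this yields 0.5, 0.25, 0.75, 0.125, 0.625, ..., illustrating the "maximally avoiding" behaviour of the sequence.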
A multi-dimensional Halton sequence can be generated in a similar manner, but using a different base for each dimension. A generalized Halton sequence, of which the Halton sequence described above is a special case, is generated as follows. For each starting point along the numerical interval from zero to one, inclusive, a different Halton sequence is generated. Defining the pseudo-sum x ⊕_p y for any x and y over the interval from zero to one, inclusive, for any integer "p" having a value greater than two, the pseudo-sum is formed by adding the digits representing "x" and "y" in reverse order, from the most significant digit to the least significant digit, and for each addition also adding in the carry generated from the sum of the next more significant digits. Thus, if "x" in base "p" is represented by 0.X_1 X_2 ... X_M, where each X_m is a digit in base "p," and if "y" in base "p" is represented by 0.Y_1 Y_2 ... Y_N, where each Y_n is a digit in base "p" (and where "M," the number of digits in the representation of "x" in base "p," and "N," the number of digits in the representation of "y" in base "p," may differ), then the pseudo-sum "z" is represented by 0.Z_1 Z_2 ... Z_L, where each Z_l is a digit in base "p" given by Z_l = (X_l + Y_l + C_l) mod p, where "mod" represents the modulo function, and

C_l = 1 if X_{l-1} + Y_{l-1} + C_{l-1} ≥ p, and C_l = 0 otherwise,

is a carry value from the "(l−1)-st" digit position, with C_1 being set to zero. Using the pseudo-sum function as described above, the generalized Halton sequence that is used in the system described in the Grabenstein application is generated as follows.
If "p" is an integer, and x_0 is an arbitrary value on the interval from zero to one, inclusive, then the "p"-adic von Neumann-Kakutani transformation T_p(x) is given by

T_p(x) := x ⊕_p (1/p),   (4)

and the generalized Halton sequence x_0, x_1, x_2, ... is defined recursively as

x_{k+1} = T_p(x_k).   (5)

From equations (4) and (5), it is clear that, for any value of "p," the generalized Halton sequence can provide that a different sequence will be generated for each starting value of "x," that is, for each x_0. It will be appreciated that the Halton sequence H_k^p as described above is a special case of the generalized Halton sequence (equations (4) and (5)) for x_0 = 0. The use of a strictly deterministic low-discrepancy sequence can provide a number of advantages over the random or pseudo-random numbers that are used in connection with the Monte Carlo technique. Unlike the random numbers used in connection with the Monte Carlo technique, the low-discrepancy sequences ensure that the sample points are more evenly distributed over a respective region or time interval, thereby reducing error in the image which can result from the clumping of such sample points which can occur in the Monte Carlo technique. That can facilitate the generation of images of improved quality when using the same number of sample points, at the same computational cost, as in the Monte Carlo technique. SUMMARY OF THE INVENTION [0017] The invention provides a new and improved computer graphics system and method for generating pixel values for pixels in an image using a strictly deterministic methodology for generating sample points for use in evaluating integrals defining aspects of the image. In brief summary, the invention provides, in one aspect, a computer graphics system for generating a pixel value for a pixel in an image, the pixel being representative of a point in a scene as recorded on an image plane of a simulated camera. The computer graphics system comprises a sample point generator and a function evaluator.
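The pseudo-sum and the transformation of equations (4) and (5) can be sketched as follows; the digit-extraction depth and function names are illustrative assumptions, and the sketch works with a fixed number of base-p digits rather than exact p-adic arithmetic:

```python
def pseudo_sum(x, y, p, digits=16):
    """Digit-wise base-p addition of x and y in [0, 1), with the carry
    propagating from the more significant toward the less significant
    digits, per the pseudo-sum defined above."""
    def to_digits(v):
        # First `digits` base-p digits of v after the radix point.
        out = []
        for _ in range(digits):
            v *= p
            d = int(v)
            out.append(d)
            v -= d
        return out
    xs, ys = to_digits(x), to_digits(y)
    zs, carry = [0] * digits, 0   # C_1 = 0
    for l in range(digits):
        s = xs[l] + ys[l] + carry
        zs[l] = s % p
        carry = 1 if s >= p else 0
    return sum(d * p ** -(l + 1) for l, d in enumerate(zs))

def kakutani(x, p):
    """Equation (4): the p-adic von Neumann-Kakutani transformation."""
    return pseudo_sum(x, 1.0 / p, p)

# Equation (5) with x_0 = 0 reproduces the ordinary base-2 Halton sequence.
seq, x = [], 0.0
for _ in range(4):
    x = kakutani(x, 2)
    seq.append(x)
```

Iterating from x_0 = 0 yields 0.5, 0.25, 0.75, 0.125, confirming the special-case relationship to the Halton sequence noted above.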
The sample point generator is configured to generate a set of sample points representing at least one simulated element of the simulated camera, the sample points representing elements of, illustratively, for sample points on the image plane, during time intervals during which the shutter is open, and on the lens, a Hammersley sequence, and, for use in global illumination, a scrambled Halton sequence. The function evaluator is configured to generate at least one value representing an evaluation of said selected function at one of the sample points generated by said sample point generator, the value generated by the function evaluator corresponding to the pixel value. BRIEF DESCRIPTION OF THE DRAWINGS [0019] This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which: FIG. 1 depicts an illustrative computer graphics system constructed in accordance with the invention. DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT [0021] The invention provides a computer graphics system and method for generating pixel values for pixels in an image of a scene, which makes use of a strictly deterministic methodology for generating sample points for use in generating sample values for evaluating the integral or integrals whose function(s) represent the contributions of the light reflected from the various points in the scene to the respective pixel value, rather than the random or pseudo-random Monte Carlo methodology which has been used in the past. The strictly deterministic methodology ensures a priori that the sample points will be generally more evenly distributed over the interval or region over which the integral(s) is (are) to be evaluated in a low-discrepancy manner. FIG.
1 attached hereto depicts an illustrative computer system 10 that makes use of such a strictly deterministic methodology. With reference to FIG. 1, the computer system 10 in one embodiment includes a processor module 11 and operator interface elements comprising operator input components such as a keyboard 12A and/or a mouse 12B (generally identified as operator input element(s) 12) and an operator output element such as a video display device 13. The illustrative computer system 10 is of the conventional stored-program computer architecture. The processor module 11 includes, for example, one or more processors, memory and mass storage devices, such as disk and/or tape storage elements (not separately shown), which perform processing and storage operations in connection with digital data provided thereto. The operator input element(s) 12 are provided to permit an operator to input information for processing. The video display device 13 is provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing. The processor module 11 generates information for display by the video display device 13 using a so-called "graphical user interface" ("GUI"), in which information for various applications programs is displayed using various "windows." Although the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.
In addition, the processor module 11 includes one or more network ports, generally identified by reference numeral 14, which are connected to communication links which connect the computer system 10 in a computer network. The network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network. In a typical network organized according to, for example, the client-server paradigm, certain computer systems in the network are designated as servers, which store data and programs (generally, "information") for processing by the other, client computer systems, thereby to enable the client computer systems to conveniently share the information. A client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network. After processing the data, the client computer system may also return the processed data to the server for storage. In addition to computer systems (including the above-described servers and clients), a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network. The communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems. Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message. It will be helpful to initially provide some background on operations performed by the computer graphics system in generating an image. 
Generally, the computer graphics system generates an image that attempts to simulate an image of a scene that would be generated by a camera. The camera includes a shutter that will be open for a predetermined time T starting at a time t_0 to allow light from the scene to be directed to an image plane. The camera may also include a lens that serves to focus light from the scene onto the image plane. The average radiance flux L_{m,n} through a pixel at position (m,n) on an image plane P, which represents the plane of the camera's recording medium, is determined by

L_{m,n} = (1 / (A_P · T · A_L)) ∫_{A_P} ∫_{t_0}^{t_0+T} ∫_{A_L} L(h(x,t,y), −ω(x,t,y)) f_{m,n}(x) dy dt dx,   (6)

where A_P refers to the area of the pixel, A_L refers to the area of the portion of the lens through which rays of light pass from the scene to the pixel, and f_{m,n} represents a filtering kernel associated with the pixel. An examination of the integral in equation (6) will reveal that the variables of integration, "x," "y" and "t," are essentially dummy variables, with the variable "y" referring to integration over the lens area (A_L), the variable "t" referring to integration over time (the time interval from t_0 to t_0+T) and the variable "x" referring to integration over the pixel area (A_P). The value of the integral in equation (6) is approximated by identifying N_P sample points x_i in the pixel area, and, for each sample point, shooting N_T rays at times t_{i,j} in the time interval t_0 to t_0+T through the focus into the scene, with each ray spanning N_L sample points y_{i,j,k} on the lens area A_L. The manner in which the subpixel jitter positions x_i, points in time t_{i,j} and positions on the lens y_{i,j,k} are determined will be described below. These three parameters determine the primary ray hitting the scene geometry in h(x_i, t_{i,j}, y_{i,j,k}) with the ray direction ω(x_i, t_{i,j}, y_{i,j,k}).
In this manner, the value of the integral in equation (6) can be approximated as

L_{m,n} ≈ (1/N_P) Σ_{i=0}^{N_P−1} (1/N_T) Σ_{j=0}^{N_T−1} (1/N_L) Σ_{k=0}^{N_L−1} L(h(x_i, t_{i,j}, y_{i,j,k}), −ω(x_i, t_{i,j}, y_{i,j,k})) f_{m,n}(x_i),   (7)

where N = N_P · N_T · N_L is the total number of rays directed at the pixel. It will be appreciated that rays directed from the scene toward the image can comprise rays directly from one or more light sources in the scene, as well as rays reflected off surfaces of objects in the scene. In addition, it will be appreciated that a ray that is reflected off a surface may have been directed to the surface directly from a light source, or may itself be a ray that was reflected off another surface. For a surface that reflects light rays, a reflection operator T_f is defined that includes a diffuse portion T_{f_d}, a glossy portion T_{f_g} and a specular portion T_{f_s}, or

T_f = T_{f_d} + T_{f_g} + T_{f_s}.   (8)

In that case, the Fredholm integral equation L = L_e + T_f L governing light transport can be represented as

L = L_e + T_{f_{d+g}} L_e + T_{f_g} (L − L_e) + T_{f_s} L + T_{f_d} T_{f_{g+s}} L + T_{f_d} T_{f_d} L,   (9)

where transparency has been ignored for the sake of simplicity; transparency is treated in an analogous manner.
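The triple-sum structure of equation (7) can be sketched as a nested loop; every name and argument below is a hypothetical stand-in for exposition (a real renderer would trace rays to evaluate the radiance term):

```python
def estimate_pixel_radiance(radiance, pixel_samples, time_samples,
                            lens_samples, filter_kernel):
    """Approximate equation (7): average the filtered radiance over N_P
    pixel sample points, N_T shutter times per pixel point, and N_L lens
    points per time, dividing by N = N_P * N_T * N_L."""
    n_p, n_t, n_l = len(pixel_samples), len(time_samples), len(lens_samples)
    total = 0.0
    for x in pixel_samples:          # x_i over the pixel area A_P
        for t in time_samples:       # t_{i,j} over [t_0, t_0 + T]
            for y in lens_samples:   # y_{i,j,k} over the lens area A_L
                total += radiance(x, t, y) * filter_kernel(x)
    return total / (n_p * n_t * n_l)

# Sanity check: a constant unit radiance under a box filter averages to 1.
flux = estimate_pixel_radiance(lambda x, t, y: 1.0,
                               [0.25, 0.75], [0.0, 0.5], [0.1],
                               lambda x: 1.0)
```

The deterministic choice of the sample points x_i, t_{i,j} and y_{i,j,k} is exactly what the Hammersley construction described below supplies.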
The individual terms in equation (9) are: (i) L_e represents flux due to a light source; (ii) T_{f_{d+g}} L_e (where T_{f_{d+g}} = T_{f_d} + T_{f_g}) represents direct illumination, that is, flux reflected off a surface that was provided thereto directly by a light source; the specular component, associated with the specular portion T_{f_s} of the reflection operator, will be treated separately since it is modeled as a δ-distribution; (iii) T_{f_g}(L − L_e) represents glossy illumination, which is handled by recursive distribution ray tracing, where, in the recursion, the source illumination has already been accounted for by the direct illumination (item (ii) above); (iv) T_{f_s} L represents a specular component, which is handled by recursively using "L" for the reflected ray; (v) T_{f_d} T_{f_{g+s}} L (where T_{f_{g+s}} = T_{f_g} + T_{f_s}) represents a caustic component, which is a ray that has been reflected off a glossy or specular surface (reference the T_{f_{g+s}} operator) before hitting a diffuse surface (reference the T_{f_d} operator); this contribution can be approximated by a high resolution caustic photon map; and (vi) T_{f_d} T_{f_d} L represents ambient light, which is very smooth and is therefore approximated using a low resolution global photon map. As noted above, the value of the integral (equation (6)) is approximated by solving equation (7) making use of sample points x_i, t_{i,j} and y_{i,j,k}, where "x_i" refers to sample points within the area A_P of the respective pixel at location (m,n) in the image plane, "t_{i,j}" refers to sample points within the time interval t_0 to t_0+T during which the shutter is open, and "y_{i,j,k}" refers to sample points on the lens A_L. In accordance with one aspect of the invention, the sample points x_i comprise two-dimensional Hammersley points, which are defined as (i/N, Φ_2(i)), where N = 2^{2n} for a selected integer parameter "n," and Φ_2(i) refers to the radical inverse of "i" in base "two."
Generally, the "s"-dimensional Hammersley point set is defined as

U_{N,s}^{Hammersley}: {0, ..., N−1} → I^s, i ↦ x_i := (i/N, Φ_{b_1}(i), ..., Φ_{b_{s−1}}(i)),   (10)

where I^s is the s-dimensional unit cube [0,1)^s (that is, an s-dimensional cube that includes "zero" and excludes "one"), the number of points "N" in the set is fixed, and b_1, ..., b_{s−1} are bases. The bases do not need to be prime numbers, but they are preferably relatively prime to provide a relatively uniform distribution. The radical inverse function Φ_b, in turn, is generally defined as

Φ_b: N_0 → I, i = Σ_{j=0}^{∞} a_j(i) b^j ↦ Σ_{j=0}^{∞} a_j(i) b^{−j−1},   (11)

where (a_j(i))_{j=0}^{∞} is the representation of "i" in integer base "b." The two-dimensional Hammersley points form a stratified pattern on a 2^n by 2^n grid. Considering the grid as subpixels, the complete subpixel grid underlying the image plane can be filled by simply abutting copies of the grid to each other. Given integer subpixel coordinates (s_x, s_y), the instance "i" and coordinates (x,y) for the sample point x_i in the image plane can be determined as follows. Preliminarily, examining the point set (i/N, Φ_2(i)) for i = 0, ..., N−1, one observes that [0035] (a) each line in the stratified pattern is a shifted copy of another, and (b) the pattern is symmetric to the line y = x, that is, each column is a shifted copy of another column. Accordingly, given the integer permutation σ(k) := 2^n Φ_2(k) for 0 ≤ k < 2^n, subpixel coordinates (s_x, s_y) are mapped onto strata coordinates (j,k) := (s_x mod 2^n, s_y mod 2^n), an instance number "i" is computed as

i = j · 2^n + σ(k)   (12)

and the positions of the jittered subpixel sample points are determined according to

x_i = (s_x + Φ_2(k), s_y + Φ_2(j)) = (s_x + σ(k)/2^n, s_y + σ(j)/2^n).   (13)

An efficient algorithm for generating the positions of the jittered subpixel sample points x_i will be provided below in connection with Code Segment 1.
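Code Segment 1 itself is not reproduced in this excerpt; a minimal Python sketch of equations (12) and (13), with hypothetical function names, could look like this:

```python
def phi2(k, n):
    """Radical inverse of an n-bit integer k in base 2, so that
    sigma(k) = 2^n * phi2(k, n) is the bit-reversal permutation."""
    result = 0
    for _ in range(n):
        result = (result << 1) | (k & 1)   # reflect bits about the radix point
        k >>= 1
    return result / (1 << n)

def jittered_subpixel(sx, sy, n):
    """Instance number i (equation (12)) and jittered sample position
    (equation (13)) for integer subpixel coordinates (sx, sy)."""
    j, k = sx % (1 << n), sy % (1 << n)            # strata coordinates
    sigma = lambda m: int(phi2(m, n) * (1 << n))   # sigma(m) = 2^n * Phi_2(m)
    i = j * (1 << n) + sigma(k)                    # equation (12)
    x = sx + sigma(k) / (1 << n)                   # equation (13)
    y = sy + sigma(j) / (1 << n)
    return i, (x, y)
```

For n = 2, subpixel (0, 1) maps to instance i = 2 with jittered position (0.5, 1.0), illustrating how neighbouring subpixels receive distinct strata offsets.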
A pattern of sample points whose positions are determined as described above in connection with equations (12) and (13) has an advantage of having much reduced discrepancy over a pattern determined using a Halton sequence or windowed Halton sequence, as described in the aforementioned Grabenstein application, and therefore the approximation described above in connection with equation (7) gives in general a better estimation to the value of the integral described above in connection with equation (6). In addition, if "n" is sufficiently large, sample points in adjacent pixels will have different patterns, reducing the likelihood that undesirable artifacts will be generated in the image. A ray tree is a collection of paths of light rays that are traced from a point on the simulated camera's image plane into the scene. The computer graphics system 10 generates a ray tree by recursively following transmission, subsequent reflection and shadow rays using trajectory splitting. In accordance with another aspect of the invention, a path is determined by the components of one vector. of a global generalized scrambled Hammersley point set. Generally, a scrambled Hammersley point set reduces or eliminates a problem that can arise in connection with higher-dimensioned low-discrepancy sequences since the radical inverse function Φ typically has subsequences of b-1 equidistant values spaced by 1/b. Although these correlation patterns are merely noticeable in the full s-dimensional space, they are undesirable since they are prone to aliasing. The computer graphics system 10 attenuates this effect by scrambling, which corresponds to application of a permutation to the digits of the b-ary representation used in the radical inversion. For a permutation a from a symmetric group S over integers 0, . . . , b-1, the scrambled radical inverse is defined as Φ b : N 0 × S b → I ( i , σ ) j = 0 ∞ σ ( a j ( i ) ) b - j - 1 ⇄ i = j = 0 ∞ a j ( i ) b j . 
If the permutation σ is the identity, the scrambled radical inverse corresponds to the unscrambled radical inverse. In one embodiment, the computer graphics system generates the permutation σ recursively as follows. Starting from the permutation $\sigma_2 = (0, 1)$ for base b = 2, the sequence of permutations is defined as follows: (i) if the base b is even, the permutation $\sigma_b$ is generated by first taking the values of $2\sigma_{b/2}$ and appending the values of $2\sigma_{b/2} + 1$, and (ii) if the base b is odd, the permutation $\sigma_b$ is generated by taking the values of $\sigma_{b-1}$, incrementing each value that is greater than or equal to $\frac{b-1}{2}$ by one, and inserting the value $\frac{b-1}{2}$ in the middle. This recursive procedure results, for example, in $\sigma_8 = (0, 4, 2, 6, 1, 5, 3, 7)$. The computer graphics system 10 can generate a generalized low-discrepancy point set as follows. It is possible to obtain a low-discrepancy sequence by taking any rational s-dimensional point x as a starting point and determining a successor by applying the corresponding incremental radical inverse function to the components of x. The result is referred to as the generalized low-discrepancy point set. This can be applied to both the Halton sequence and the Hammersley sequence. In the case of the generalized Halton sequence, this can be formalized as

$$x_i = \left( \Phi_{b_1}(i + i_1), \Phi_{b_2}(i + i_2), \dots, \Phi_{b_s}(i + i_s) \right), \qquad (15)$$

where the integer vector $(i_1, i_2, \dots, i_s)$ represents the offsets per component and is fixed in advance for a generalized sequence. The integer vector can be determined by applying the inverse of the radical inversion to the starting point x. A generalized Hammersley sequence can be generated in an analogous manner. Returning to trajectory splitting, generally trajectory splitting is the evaluation of a local integral, which is of small dimension and which makes the actual integrand smoother, improving overall convergence.
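The recursion for the scrambling permutations, and the scrambled radical inverse of equation (14) built on them, can be sketched in C++ as follows. This is an illustrative fragment, not part of the Code Segments of this description; it assumes $\sigma_2 = (0,1)$ and the recursion described above, which reproduces $\sigma_8 = (0,4,2,6,1,5,3,7)$:

```cpp
#include <cassert>
#include <vector>

// Recursive construction of the scrambling permutations sigma_b used in
// the scrambled radical inverse of equation (14).
std::vector<int> Sigma(int b) {
    if (b == 2) return {0, 1};
    std::vector<int> result;
    if (b % 2 == 0) {
        // even base: values of 2*sigma_{b/2} followed by 2*sigma_{b/2} + 1
        std::vector<int> half = Sigma(b / 2);
        for (int v : half) result.push_back(2 * v);
        for (int v : half) result.push_back(2 * v + 1);
    } else {
        // odd base: take sigma_{b-1}, increment every value >= (b-1)/2,
        // and insert the value (b-1)/2 in the middle
        std::vector<int> prev = Sigma(b - 1);
        int mid = (b - 1) / 2;
        for (int v : prev) result.push_back(v >= mid ? v + 1 : v);
        result.insert(result.begin() + mid, mid);
    }
    return result;
}

// Scrambled radical inverse of equation (14): permute each base-b digit
// by sigma before mirroring.  Since sigma(0) = 0 for the permutations
// built above, truncating at the highest nonzero digit of i is exact.
double ScrambledRadicalInverse(unsigned int i, unsigned int b,
                               const std::vector<int> &sigma) {
    double inverse = 0.0;
    double digitValue = 1.0 / (double) b;
    while (i > 0) {
        inverse += (double) sigma[i % b] * digitValue;
        i /= b;
        digitValue /= (double) b;
    }
    return inverse;
}
```

For the identity permutation the scrambled radical inverse reduces to the unscrambled one, which is easily checked for base 2.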
Applying replication, positions of low-discrepancy sample points are determined that can be used in evaluating the local integral. The low-discrepancy sample points are shifted by the corresponding elements of the global scrambled Hammersley point set. Since trajectory splitting can occur multiple times on the same level in the ray tree, branches of the ray tree are decorrelated in order to avoid artifacts, the decorrelation being accomplished by generalizing the global scrambled Hammersley point set. An efficient algorithm for generating a ray tree will be provided below in connection with Code Segment 2. Generally, in that algorithm, the instance number i of the low-discrepancy vector, as determined above in connection with equation (12), and the number d of used components, which corresponds to the current integral dimension, are added to the data structure that is maintained for the respective ray in the ray tree. The ray tree of a subpixel sample is completely specified by the instance number i. After the dimension has been set to "two," which determines the component of the global Hammersley point set that is to be used next, the primary ray is cast into the scene to span its ray tree. In determining the deterministic splitting by the components of low-discrepancy sample points, the computer graphics system 10 initially allocates the required number of dimensions Δd. For example, in simulating glossy scattering, the required number of dimensions will correspond to "two." Thereafter, the computer graphics system 10 generates scattering directions from the offset given by the scrambled radical inverses $\Phi_{b_d}(i, \sigma_{b_d}), \dots, \Phi_{b_{d+\Delta d-1}}(i, \sigma_{b_{d+\Delta d-1}})$, yielding the instances

$$(y_{i,j})_{j=0}^{M-1} = \left( \Phi_{b_d}(i, \sigma_{b_d}) \oplus \frac{j}{M},\; \dots,\; \Phi_{b_{d+\Delta d-1}}(i, \sigma_{b_{d+\Delta d-1}}) \oplus \Phi_{b_{d+\Delta d-2}}(i, \sigma_{b_{d+\Delta d-2}}) \right), \qquad (16)$$

where "⊕" refers to addition modulo "one."
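The "⊕" replication underlying equation (16) amounts to shifting M equidistant points on the unit torus by a low-discrepancy offset. A minimal C++ sketch, with an illustrative function name:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Replication by "addition modulo one" (the operator (+) in equation (16)):
// M equidistant points on the unit torus, shifted by a low-discrepancy
// offset u taken from the global scrambled Hammersley point set.
std::vector<double> Replicate(double u, int M) {
    std::vector<double> samples(M);
    for (int j = 0; j < M; j++) {
        double v = u + (double) j / (double) M;
        samples[j] = v - std::floor(v);   // wrap around into [0,1)
    }
    return samples;
}
```

Each component of equation (16) is one such one-dimensional replication, with the second component shifted by $\Phi$ values rather than by the equidistant offsets j/M.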
Each direction of the M replicated rays is determined by $y_{i,j}$ and enters the next level of the ray tree with d' := d + Δd as the new integral dimension in order to use the next elements of the low-discrepancy vector, and i' = i + j in order to decorrelate subsequent trajectories. Using an infinite sequence of low-discrepancy sample points, the replication heuristic is turned into an adaptive consistent sampling arrangement. That is, the computer graphics system 10 can fix the sampling rate ΔM, compare current and previous estimates every ΔM samples, and, if the estimates differ by less than a predetermined threshold value, terminate sampling. The computer graphics system 10 can, in turn, determine the threshold value using importance information, that is, how much the local integral contributes to the global integral. As noted above, the integral described above in connection with equation (6) is over a finite time period T from $t_0$ to $t_0 + T$, during which time the shutter of the simulated camera is open. During the time period, if an object in the scene moves, the moving object may preferably be depicted in the image as blurred, with the extent of blurring being a function of the object's motion and the length of the time interval T. Generally, motion during the time an image is recorded is linearly approximated by motion vectors, in which case the integrand (equation (6)) is relatively smooth over the time the shutter is open and is well suited for correlated sampling. For a ray instance i, started at the subpixel position $x_i$, the offset $\Phi_3(i)$ into the time interval is generated, and the $N_T - 1$ subsequent samples $\Phi_3(i) + \frac{j}{N_T} \bmod 1$ are generated, that is,

$$t_{i,j} := t_0 + \left( \Phi_3(i) \oplus \frac{j}{N_T} \right) T. \qquad (17)$$

Determining sample points in this manner fills the sampling space, resulting in a more rapid convergence to the value of the integral (equation (6)). For subsequent trajectory splitting, rays are decorrelated by setting the instance i' = i + j.
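The time samples of equation (17) can be sketched as follows; parameter names are illustrative, and the radical inverse is repeated so the fragment is self-contained:

```cpp
#include <cassert>
#include <cmath>

// Radical inverse of equation (11), repeated so this sketch is self-contained.
static double RadicalInverse(unsigned int i, unsigned int b) {
    double inverse = 0.0, digitValue = 1.0 / (double) b;
    for (; i > 0; i /= b, digitValue /= (double) b)
        inverse += (double)(i % b) * digitValue;
    return inverse;
}

// Time sample of equation (17): t_{i,j} = t0 + (Phi_3(i) (+) j/NT) * T,
// with (+) denoting addition modulo one on the unit torus.
double TimeSample(unsigned int i, int j, int NT, double t0, double T) {
    double u = RadicalInverse(i, 3) + (double) j / (double) NT;
    u -= std::floor(u);          // addition modulo one
    return t0 + u * T;           // map [0,1) onto the shutter interval
}
```

For a fixed ray instance i, varying j produces $N_T$ equidistant samples on the shutter interval, all shifted by the same low-discrepancy offset $\Phi_3(i)$.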
In addition to determining the position of the jittered subpixel sample point $x_i$, and adjusting the camera and scene according to the sample point $t_{i,j}$ for the time, the computer graphics system also simulates depth of field. In simulating depth of field, the camera to be simulated is assumed to be provided with a lens having known optical characteristics and, using geometrical optics, the subpixel sample point $x_i$ is mapped through the lens to yield a mapped point $x_i'$. The lens is sampled by mapping the dependent samples

$$y_{i,j,k} = \left( \Phi_5(i+j, \sigma_5) \oplus \frac{k}{N_L},\; \Phi_7(i+j, \sigma_7) \oplus \Phi_2(k) \right) \qquad (18)$$

onto the lens area $A_L$ using a suitable one of a plurality of known transformations. Thereafter, a ray is shot from the sample point on the lens specified by $y_{i,j,k}$ through the point $x_i'$ into the scene. The offset $(\Phi_5(i+j, \sigma_5), \Phi_7(i+j, \sigma_7))$ in equation (18) comprises the next components taken from the generalized scrambled Hammersley point set, which, for trajectory splitting, is displaced by the elements $\left( \frac{k}{N_L}, \Phi_2(k) \right)$ of the two-dimensional Hammersley point set. The instance of the ray originating from sample point $y_{i,j,k}$ is set to i + j + k in order to decorrelate further splitting down the ray tree. In equation (18), the scrambled samples $(\Phi_5(i+j, \sigma_5), \Phi_7(i+j, \sigma_7))$ are used instead of the unscrambled samples $(\Phi_5(i+j), \Phi_7(i+j))$, since in bases "five" and "seven" up to five unscrambled samples will lie on a straight line, which will not be the case for the scrambled samples. In connection with determination of a value for the direct illumination ($T_{f_r - f_s} L_e$ above), direct illumination is represented as an integral over the surface of the scene ∂V, which integral is decomposed into a sum of integrals, each over one of the L single area light sources in the scene.
The individual integrals in turn are evaluated by dependent sampling, that is,

$$(T_{f_r - f_s} L_e)(y, z) = \int_{\partial V} L_e(x, y)\, f_r(x, y, z)\, G(x, y)\, dx = \sum_{k=1}^{L} \int_{\operatorname{supp} L_{e,k}} L_e(x, y)\, f_r(x, y, z)\, G(x, y)\, dx \approx \sum_{k=1}^{L} \frac{1}{M_k} \sum_{j=0}^{M_k - 1} L_e(x_j, y)\, f_r(x_j, y, z)\, G(x_j, y), \qquad (19)$$

where $\operatorname{supp} L_{e,k}$ refers to the surface of the respective k-th light source. In evaluating the estimate of the integral for the k-th light source, for each query ray, $M_k$ shadow rays determine the fraction of visibility of the area light source, since the point visibility varies much more than the smooth shadow effect. For each light source, the emission $L_e$ is attenuated by a geometric term G, which includes the visibility, and the surface properties are given by a bidirectional distribution function $f_r$. These integrals are local integrals in the ray tree yielding the value of one node in the ray tree, and can be efficiently evaluated using dependent sampling. In dependent sampling, the query ray comes with the next free integral dimension d and the instance i, from which the dependent samples are determined in accordance with

$$x_j = \left( \Phi_{b_d}(i, \sigma_{b_d}) \oplus \frac{j}{M_k},\; \Phi_{b_{d+1}}(i, \sigma_{b_{d+1}}) \oplus \Phi_2(j) \right). \qquad (20)$$

The offset $(\Phi_{b_d}(i, \sigma_{b_d}), \Phi_{b_{d+1}}(i, \sigma_{b_{d+1}}))$ again is taken from the corresponding generalized scrambled Hammersley point set, which shifts the two-dimensional Hammersley point set $\left( \frac{j}{M_k}, \Phi_2(j) \right)$ on the light source. Selecting the sample rate $M_k$ as a power of two, a local minimum is obtained for the discrepancy of the Hammersley point set that perfectly stratifies the light source. As an alternative, the light source can be sampled using an arbitrarily-chosen number $M_k$ of sample points using $\left( \frac{j}{M_k}, \Phi_{M_k}(j, \sigma_{M_k}) \right)_{j=0}^{M_k - 1}$ as a replication rule.
Due to the implicit stratification of the positions of the sample points as described above, the local convergence will be very smooth. The glossy contribution $T_{f_g} L$ is determined in a manner similar to that described above in connection with area light sources (equations (19) and (20)), except that a model $f_g$ used to simulate glossy scattering is used instead of the bidirectional distribution function $f_r$. In determining the glossy contribution, two-dimensional Hammersley points are generated for a fixed splitting rate M and shifted modulo "one" by the offset $(\Phi_{b_d}(i, \sigma_{b_d}), \Phi_{b_{d+1}}(i, \sigma_{b_{d+1}}))$ taken from the current ray tree depth, given by the dimension field d of the incoming ray. The ray trees spanned into the scattering direction are decorrelated by assigning the instance fields i' = i + j, in a manner similar to that done for simulation of motion blur and depth of field, as described above. The estimates generated for all rays are averaged by the splitting rate M and propagated up the ray tree. Volumetric effects are typically provided by performing a line integration along respective rays from their origins to the nearest surface point in the scene. In providing for a volumetric effect, the computer graphics system 10 generates from the ray data a corresponding offset $\Phi_{b_d}(i)$, which it then uses to shift the M equidistant samples on the unit interval, seen as a one-dimensional torus. In doing so, the rendering time is reduced in comparison to use of an uncorrelated jittering methodology. Global illumination includes a class of optical effects, such as indirect illumination, diffuse and glossy inter-reflections, caustics and color bleeding, that the computer graphics system 10 simulates in generating an image of objects in a scene. Simulation of global illumination typically involves the evaluation of a rendering equation.
For the general form of an illustrative rendering equation useful in global illumination simulation, namely:

$$L(\vec{x}, \vec{w}) = L_e(\vec{x}, \vec{w}) + \int_{S'} f(\vec{x}, \vec{w}' \to \vec{w})\, G(\vec{x}, \vec{x}')\, V(\vec{x}, \vec{x}')\, L(\vec{x}', \vec{w}')\, dA', \qquad (21)$$

it is recognized that the light radiated at a particular point $\vec{x}$ in a scene is generally the sum of two components, namely, the amount of light (if any) that is emitted from the point and the amount of light (if any) that originates from all other points and which is reflected or otherwise scattered from the point $\vec{x}$. In equation (21), $L(\vec{x}, \vec{w})$ represents the radiance at the point $\vec{x}$ in the direction $\vec{w} = (\theta, \phi)$ (where θ represents the angle of direction $\vec{w}$ relative to a direction orthogonal to the surface of the object in the scene containing the point $\vec{x}$, and φ represents the angle of the component of direction $\vec{w}$ in a plane tangential to the point $\vec{x}$).
Similarly, $L(\vec{x}', \vec{w}')$ in the integral represents the radiance at the point $\vec{x}'$ in the direction $\vec{w}' = (\theta', \phi')$ (where θ' represents the angle of direction $\vec{w}'$ relative to a direction orthogonal to the surface of the object in the scene containing the point $\vec{x}'$, and φ' represents the angle of the component of direction $\vec{w}'$ in a plane tangential to the point $\vec{x}'$), and represents the light, if any, that is emitted from point $\vec{x}'$ which may be reflected or otherwise scattered from point $\vec{x}$. Continuing with equation (21), $L_e(\vec{x}, \vec{w})$ represents the first component of the sum, namely, the radiance due to emission from the point $\vec{x}$ in the direction $\vec{w}$, and the integral over the sphere S' represents the second component, namely, the radiance due to scattering of light at point $\vec{x}$. $f(\vec{x}, \vec{w}' \to \vec{w})$ is a bidirectional scattering distribution function which describes how much of the light coming from direction $\vec{w}'$ is reflected, refracted or otherwise scattered in the direction $\vec{w}$, and is generally the sum of a diffuse component, a glossy component and a specular component. In equation (21), the function $G(\vec{x}, \vec{x}')$ is a geometric term

$$G(\vec{x}, \vec{x}') = \frac{\cos\theta \cos\theta'}{\|\vec{x} - \vec{x}'\|^2}, \qquad (22)$$

where θ and θ' are angles relative to the normals of the respective surfaces at points $\vec{x}$ and $\vec{x}'$, respectively.
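The geometric term of equation (22) can be evaluated directly from the two points and their surface normals. The following C++ fragment is a sketch under the assumptions that the normals are unit length and that the interface (plain 3-element arrays, illustrative function name) is merely for exposition:

```cpp
#include <cassert>
#include <cmath>

// Geometric term of equation (22): G(x, x') = cos(theta) cos(theta') / |x - x'|^2,
// where theta and theta' are measured against the surface normals at x and x'.
double GeometricTerm(const double x[3], const double nx[3],
                     const double xp[3], const double np[3]) {
    double d[3] = { xp[0] - x[0], xp[1] - x[1], xp[2] - x[2] };
    double dist2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
    double len = std::sqrt(dist2);
    double w[3] = { d[0] / len, d[1] / len, d[2] / len };   // unit direction x -> x'
    double cosTheta  =   w[0] * nx[0] + w[1] * nx[1] + w[2] * nx[2];
    double cosThetaP = -(w[0] * np[0] + w[1] * np[1] + w[2] * np[2]);
    return cosTheta * cosThetaP / dist2;
}
```

For two parallel, mutually facing surfaces the cosines are one and G reduces to the inverse squared distance.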
Further in equation (21), $V(\vec{x}, \vec{x}')$ is a visibility function which equals the value one if the point $\vec{x}'$ is visible from the point $\vec{x}$ and zero if the point $\vec{x}'$ is not visible from the point $\vec{x}$. The computer graphics system 10 makes use of global illumination specifically in connection with determination of the diffuse component $T_{f_d} L$ and the caustic component $T_{f_d} T_{f_s} L$ using a photon map technique. Generally, a photon map is constructed by simulating the emission of photons by light source(s) in the scene and tracing the path of each of the photons. For each simulated photon that strikes a surface of an object in the scene, information concerning the simulated photon is stored in a data structure referred to as a photon map, including, for example, the simulated photon's color, position and direction angle. Thereafter a Russian roulette operation is performed to determine the photon's state, that is, whether the simulated photon will be deemed to have been absorbed or reflected by the surface. If a simulated photon is deemed to have been reflected by the surface, the simulated photon's direction is determined using, for example, a bidirectional reflectance distribution function ("BRDF"). If the reflected simulated photon strikes another surface, these operations will be repeated (reference the aforementioned Grabenstein application). The data structure in which information for the various simulated photons is stored may have any convenient form; typically k-dimensional trees, for "k" an integer, are used. After the photon map has been generated, it can be used in rendering the respective components of the image. In generating a photon map, the computer graphics system 10 simulates photon trajectories, thus avoiding the necessity of discretizing the kernel of the underlying integral equation.
The interactions of the photons with the scene, as described above, are stored and used for density estimation. The computer graphics system 10 makes use of a scrambled Halton sequence, which has better discrepancy properties in higher dimensions than does an unscrambled Halton sequence. The scrambled Halton sequence also has the benefit, over a random sequence, that the approximation error decreases more smoothly, which will allow for use of an adaptive termination scheme during generation of the estimate of the integral. In addition, since the scrambled Halton sequence is strictly deterministic, generation of estimates can readily be parallelized by assigning certain segments of the low-discrepancy sequence to ones of a plurality of processors, which can operate on portions of the computation independently and in parallel. Since usually the space in which photons will be shot by selecting directions will be much larger than the area of the light sources from which the photons were initially shot, it is advantageous to make use of components of smaller discrepancy, that is, radical inverse functions $\Phi_b$ to smaller bases b (where, as above, $\Phi_b$ refers to the radical inverse function for base b), for angular scattering, and components of higher discrepancy, that is, radical inverses to larger bases, for area sampling, which will facilitate filling the space more uniformly. The computer graphics system 10 estimates the radiance from the photons in accordance with

$$\bar{L}_r(x, \omega) \approx \frac{1}{A} \sum_{i \in B_k(x)} f_r(\omega_i, x, \omega)\, \Phi_i, \qquad (23)$$

where, in equation (23), $\Phi_i$ represents the energy of the respective i-th photon, $\omega_i$ is the direction of incidence of the i-th photon, $B_k(x)$ represents the set of the k nearest photons around the point x, and A represents an area around point x that includes the photons in the set $B_k(x)$. The computer graphics system 10 makes use of an unbiased but consistent estimator for the area A, which is determined as follows.
Taking the radius r(B (x)) of the query ball (**what is this?**) a tangential disk D of radius r(B (x)) centered on the point x is divided into M equal-sized subdomains D , that is = 0 M - 1 D i = D and D i D j ≠ 0 for i ≠ j , where D i = D M = π r 2 ( B k ( x ) ) M . ( 24 ) ##EQU00029## The set [0060] P = { D i D i { x i D i .di-elect cons. B k ( x ) } ≠ 0 } ( 25 ) ##EQU00030## contains all the subdomains D[i] that contain a point x on the disk, which is the position of the "i-th" photon projected onto the plane defined by the disk D along its direction of incidence ω . Preferably, the number M of subdomains will be on the order of {square root over (k)} and the angular subdivision will be finer than the radial subdivision in order to capture geometry borders. The actual area A is then determined by = π r 2 ( B k ( x ) ) P M . ( 26 ) ##EQU00031## Determining the actual coverage of the disk D by photons significantly improves the radiance estimate (equation (23)) in corners and on borders, where the area obtained by the standard estimate πr (x)) would be too small, which would be the case at corners, or too large, which would be the case at borders. In order to avoid blurring of sharp contours of caustics and shadows, the computer graphics system 10 sets the radiance estimate L to black if all domains D that touch x do not contain any photons. It will be appreciated that, in regions of high photon density, the "k" nearest photons may lead to a radius r(B (x) that is nearly zero, which can cause an over-modulation of the estimate. Over-modulation can be avoided by selecting a minimum radius r , which will be used if r(B (x)) is less than r . In that case, instead of equation (23), the estimate is generated in accordance with _ r ( x , ω ) = N k i .di-elect cons. B k ( x ) Φ i f r ( ω i , x , ω r ) ( 27 ) ##EQU00032## assuming each photon is started with 1/N of the total flux Φ. 
The estimator in equation (27) provides an estimate for the mean flux of the k photons if $r(B_k(x))$ is less than $r_{\min}$. The global photon map is generally rather coarse and, as a result, subpixel samples can result in identical photon map queries. As a result, the direct visualization of the global photon map is blurry, and it is advantageous to perform a smoothing operation in connection therewith. In performing such an operation, the computer graphics system 10 performs a local pass integration that removes artifacts of the direct visualization. Accordingly, the computer graphics system 10 generates an approximation for the diffuse illumination term $T_{f_d} L$ as

$$T_{f_d} L \approx (T_{f_d} \bar{L}_r)(x) = \int_{S^2(x)} f_d(x)\, \bar{L}_r(h(x, \vec{\omega}))\, \cos\theta\, d\omega \approx \frac{f_d(x)}{M} \sum_{i=0}^{M-1} \bar{L}_r\!\left( h\!\left( x, \omega\!\left( \arcsin\sqrt{u_{i,1}},\, 2\pi u_{i,2} \right) \right) \right), \qquad (28)$$

with the integral over the hemisphere $S^2(x)$ of incident directions aligned by the surface normal in x being evaluated using importance sampling. The computer graphics system 10 stratifies the sample points on a two-dimensional grid by applying dependent trajectory splitting with the Hammersley sequence and thereafter applies irradiance interpolation. Instead of storing the incident flux $\Phi_i$ of the respective photons, the computer graphics system 10 stores their reflected diffuse power $f_d \Phi_i$ with the respective photons in the photon map, which allows for a more exact approximation than can be obtained by only sampling the diffuse BRDF in the hit points of the final gather rays. In addition, the BRDF evaluation is needed only once per photon, saving the evaluations during the final gathering. Instead of sampling the full grid, the computer graphics system 10 uses adaptive sampling, in which refinement is triggered by contrast, by distance traveled by the final gather rays in order to more evenly sample the projected solid angle, and by the number of photons that are incident from the portion of the projected hemisphere.
The computer graphics system fills in positions in the grid that are not sampled by interpolation. The resulting image matrix of the projected hemisphere is median-filtered in order to remove weak singularities, after which the approximation is generated. The computer graphics system 10 performs the same operation in connection with hemispherical sky illumination. The computer graphics system 10 processes final gather rays that strike objects that do not cause caustics, such as plane glass windows, by recursive ray tracing. If the hit point of a final gather ray is closer to its origin than a predetermined threshold, the computer graphics system 10 also performs recursive ray tracing. This reduces the likelihood that blurry artifacts will appear in corners, which might otherwise occur since for close hit points the same photons would be collected, which can indirectly expose the blurry structure of the global photon map. Generally, photon maps have been taken as a snapshot at one point in time, and thus were unsuitable in connection with rendering of motion blur. Following the observation that averaging the result of a plurality of photon maps is generally similar to querying one photon map with the total number of photons from all of the plurality of photon maps, the computer graphics system 10 generates $N_T$ photon maps, where $N_T$ is determined as described above, at points in time

$$t_b = t_0 + \frac{b + \frac{1}{2}}{N_T} T, \qquad (29)$$

for $0 \le b < N_T$. During rendering, the computer graphics system 10 uses the photon map with the smallest time difference $|t_b - t_{i,j}|$ in connection with rendering for the time sample point $t_{i,j}$. The invention provides a number of advantages. In particular, the invention provides a computer graphics system that makes use of strictly deterministic distributed ray tracing based on low-discrepancy sampling and dependent trajectory splitting in connection with rendering of an image of a scene.
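The time stamps of equation (29) and the selection of the nearest photon map can be sketched as follows; the function name and the linear scan are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Photon-map time stamps of equation (29): N_T maps at the midpoints
// t_b = t0 + (b + 1/2)/N_T * T of N_T equal subintervals of the shutter
// interval; during rendering the map of smallest |t_b - t| is chosen.
int NearestPhotonMap(double t, double t0, double T, int NT) {
    int best = 0;
    double bestDiff = 1e300;
    for (int b = 0; b < NT; b++) {
        double tb = t0 + ((double) b + 0.5) / (double) NT * T;  // equation (29)
        double diff = std::abs(tb - t);
        if (diff < bestDiff) { bestDiff = diff; best = b; }
    }
    return best;
}
```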
Generally, strictly deterministic distributed ray tracing based on low-discrepancy sampling and dependent trajectory splitting is simpler to implement than an implementation based on random or pseudo-random numbers. Due to the properties of the radical inversion function, stratification of sample points is intrinsic and does not need to be considered independently of the generation of the positions of the sample points. In addition, since the methodology is strictly deterministic, it can be readily parallelized by dividing the image into a plurality of tasks, which can be executed by a plurality of processors in parallel. There is no need to take a step of ensuring that positions of sample points are not correlated, which is generally necessary if a methodology based on random or pseudo-random numbers is to be implemented for processing in parallel. Generally, a computer graphics system that makes use of low-discrepancy sampling in determination of sample points will perform better than a computer graphics system that makes use of random or pseudo-random sampling, but the performance may degrade to that of a system that makes use of random or pseudo-random sampling in higher dimensions. By providing that the computer graphics system performs dependent splitting by replication, the superior convergence of low-dimensional low-discrepancy sampling can be exploited with the effect that the overall integrand becomes smoother, resulting in better convergence than with stratified random or pseudo-random sampling. Since the computer graphics system also makes use of dependent trajectory sampling by means of infinite low discrepancy sequences, consistent adaptive sampling of, for example, light sources, can also be performed. 
In addition, although the computer graphics system has been described as making use of sample points generated using generalized scrambled and/or unscrambled Hammersley and Halton sequences, it will be appreciated that generally any (t,m,s)-net or (t,s)-sequence can be used.

Code Segment 1

The following is a code fragment in the C++ programming language for generating the positions of the jittered subpixel sample points $x_i$:

    unsigned short Period, *Sigma;

    void InitSigma(int n) {
        unsigned short Inverse, Digit, Bits;
        Period = 1 << n;
        Sigma = new unsigned short[Period];
        for (unsigned short i = 0; i < Period; i++) {
            Digit = Period;
            Inverse = 0;
            for (Bits = i; Bits; Bits >>= 1) {
                Digit >>= 1;
                if (Bits & 1)
                    Inverse += Digit;
            }
            Sigma[i] = Inverse;   // Sigma[i] = 2^n * Phi_2(i) (bit reversal)
        }
    }

    void SampleSubpixel(unsigned int *i, double *x, double *y, int sx, int sy) {
        int j = sx & (Period - 1);
        int k = sy & (Period - 1);
        *i = j * Period + Sigma[k];                              // equation (12)
        *x = (double) sx + (double) Sigma[k] / (double) Period;  // equation (13)
        *y = (double) sy + (double) Sigma[j] / (double) Period;
    }

Code Segment 2

The following is a code fragment in the C++ programming language for generating a ray tree:

    class Ray {
        int i;   // current instance of low-discrepancy vector
        int d;   // current integral dimension in ray tree
        . . .
    };

    void Shade(Ray &ray) {
        Ray next_ray;
        int i = ray.i;
        int d = ray.d;
        . . .
        for (int j = 0; j < M; j++) {
            . . .
            // ray set up for recursion:
            // y = (Phi_{b_d}(i, sigma_{b_d}) ⊕ j/M, ...,
            //      Phi_{b_{d+Δd-1}}(i, sigma_{b_{d+Δd-1}}) ⊕ Phi_{b_{d+Δd-2}}(i, sigma_{b_{d+Δd-2}}))
            next_ray = SetUpRay(y);   // determine ray parameters by y
            next_ray.i = i + j;       // decorrelation by generalization
            next_ray.d = d + Δd;      // dimension allocation
            Shade(next_ray);
            . . .
        }
    }
It will be appreciated that a system in accordance with the invention can be constructed in whole or in part from special-purpose hardware or a general-purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program. Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner. In addition, it will be appreciated that the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown), which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner. The foregoing description has been limited to a specific embodiment of this invention. It will be apparent, however, that various variations and modifications may be made to the invention, with the attainment of some or all of the advantages of the invention. It is the object of the appended claims to cover these and such other variations and modifications as come within the true spirit and scope of the invention. Patent applications by Alexander Keller, Ulm DE
Cartesian Equation for a plane
September 27th 2009, 11:02 AM #1
Senior Member
Nov 2008

Determine a Cartesian equation for each plane:
a) the plane through the point (3,-1,3) and perpendicular to the z-axis.
b) the plane through the points (3,1,-2) and (1,3,-1) and parallel to the y-axis.

a) z = 3 will do, since all points on the plane have coordinate z = 3.

b) Let the equation be ax + by + cz = d with b = 0 (a plane parallel to the y-axis has a normal with no y-component). Plugging in each point, we get the conditions:
3a - 2c = d
a - c = d
Subtracting the second from the first: 2a - c = 0, so c = 2a, and then d = a - c = -a.
Generally, all planes ax + 2az = -a work. For example, a = 1 gives c = 2, d = -1, so
x + 2z = -1
is an example. Good luck!!

September 27th 2009, 11:19 AM #2
LAPACK/ScaLAPACK Development

Hello, everyone! I have a question about solving *non-square* linear systems. I know that I can use dgelss to solve Ax = b, and I believe dgelss uses an SVD (singular value decomposition) as its backend. I also know that I can solve a linear system via a QR factorization by 1) doing the QR factorization via dgeqrf or dgeqp3, 2) constructing Q via dorgqr, 3) multiplying Q^t b using dgemm, 4) and finally writing my own backsubstitution to solve Rx = Q^t b.

Given that an SVD decomposition and a QR factorization are both O(mn^2) flops, is there any reason to use the second method if I am only solving one linear system? I believe that if I am solving multiple linear systems (i.e., Ax = b1, Ax = b2, Ax = b3, etc.), then it is useful to use the QR factorization method. I will be solving multiple linear systems.

I am really interested in whether the linear system is feasible. Thus, after I have a solution Ax = b, I should perform the multiplication Ax using dgemm, and test to see if Ax is close to b within some tolerance. Are there any Lapack functions that perform a backsubstitution, or that would help in checking feasibility?

I would greatly appreciate any comments on either the methods proposed here, or the Lapack functions that I am using. Please share if you think I am missing something!

Very best regards,

Re: Solving linear systems?

I have a question about solving *non-square* linear systems. I know that I can use dgelss to solve Ax = b, and I believe dgelss uses an SVD (singular value decomposition) as its backend.

Yes, DGELSS is fine; DGELSD uses SVD as well but computes the SVD with the divide and conquer algorithm. DGELS is another option that gives you the solution of the linear least squares through a QR factorization, and there also is DGELSY. So you have a total of four different options.
I also know that I can solve a linear system via a QR factorization by 1) doing the QR factorization via dgeqrf or dgeqp3, 2) constructing Q via dorgqr, 3) multiplying Q^t b using dgemm, and 4) finally writing my own backsubstitution to solve Rx = Q^t b. This is (almost) what DGELS and DGELSY do. (Steps 2 and 3 are done in one step using DORMQR.) DGELS uses DGEQRF. DGELSY uses DGEQP3. Given that an SVD decomposition and a QR factorization are both O(mn^2) flops, is there any reason to use the second method if I am only solving one linear system? I believe that if I am solving multiple linear systems (i.e., Ax = b1, Ax = b2, Ax = b3, etc.), then it is useful to use the QR factorization method. I will be solving multiple linear systems. (1) SVD decomposition and QR factorization are both O(mn^2) when m >> n. This is correct. If m is of the same order as n, you will see that QR factorization is significantly faster than SVD. (2) You solve multiple linear least squares problems with any of DGELS, DGELSY, DGELSD, DGELSS. I am really interested in whether the linear system is feasible. Thus, after I have a solution Ax = b, I should perform the multiplication Ax using dgemm and test to see whether Ax is close to b within some tolerance. Are there any LAPACK functions that perform a backsubstitution, or that would help in checking feasibility? Yes, that's it. You call DGEMV (one RHS) or DGEMM (several RHS) and then check the norm. Another way to do this is using the feature of the input/output vector B in DGELS:

* B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
* if TRANS = 'N' and m >= n, rows 1 to n of B contain the least
* squares solution vectors; the residual sum of squares for the
* solution in each column is given by the sum of squares of
* elements N+1 to M in that column;

So the input vector B is "too long" for the output vector X.
The first N slots of B are used to store X; the last slots, N+1 to M, are such that their norm, norm( Bout(N+1:M, J) ), is the norm of the J-th residual: || Bin(1:M, J) - A(1:M,1:N) * X(1:N, J) || (where X(1:N, J) is Bout(1:N, J)). So all you need to do after DGELS is something like: NORMRES = DNRM2( M-N, B( N+1, J ), 1 ). (It's not a bad idea to check that this is the same as what comes out of a DGEMM on A, though ... ) The same is true for DGELSD and DGELSS (but not DGELSY). Re: Solving linear systems? susan_ex2_texas wrote: ... Are there any LAPACK functions that perform a backsubstitution, or that would help in checking feasibility? Very late :-) but as I was looking for this myself: (D,..)TRSV (BLAS Level 2) solves a linear equation system with a triangular matrix, i.e., does backsubstitution. - Mathias
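For readers following the thread outside Fortran, the same solve-then-check-the-residual workflow can be sketched in Python. This is a sketch, not LAPACK-level code; numpy.linalg.lstsq is backed by LAPACK's xGELSD (the SVD, divide-and-conquer driver discussed above), and the residual check mirrors the DGEMV/DNRM2 route:

```python
import numpy as np

def solve_and_check(A, b, tol=1e-8):
    """Least-squares solve Ax ~ b, then the feasibility check discussed
    in the thread: multiply back and compare the residual norm to a
    tolerance. numpy.linalg.lstsq calls LAPACK's xGELSD internally."""
    x, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(A @ x - b)  # the DGEMV-then-norm route
    return x, residual, bool(residual <= tol)

# A consistent (feasible) 3x2 system ...
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = A @ np.array([2.0, -1.0])
x, res, feasible = solve_and_check(A, b)

# ... and one perturbed out of the column space (infeasible).
x2, res2, feasible2 = solve_and_check(A, b + np.array([0.0, 0.0, 1.0]))
```

The residual computed here is the same quantity DGELS leaves in rows N+1..M of B, so on the Fortran side the DNRM2 shortcut avoids the extra DGEMM.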
Solving large systems of linear equations on supercomputers Mechanical Engineering Associate Professor Dan Negrut will collaborate with colleagues at Purdue University and the University of Chicago to produce algorithms capable of solving very large linear algebra problems. Negrut, Ahmed Sameh of Purdue, and Matthew Knepley of the University of Chicago received a $600,000 National Science Foundation grant to investigate how emerging parallel computer hardware can be leveraged to solve systems of linear equations with hundreds of millions of unknowns. To this end, the team will use what is called heterogeneous computing, that is, combining computing on classical CPUs and on graphics processing unit (GPU) cards. The two classes of problems targeted under this project are banded dense and sparse general linear systems. Experiences and lessons learned in this project will augment a graduate-level class, High Performance Computing for Engineering Applications, and the team will share findings at the International Conference for High Performance Computing, Networking, Storage and Analysis and at a one-day high performance computing boot camp organized in conjunction with the American Society of Mechanical Engineers conference. In addition, this project will shape the research agendas of two graduate students working on advanced degrees in computational science at Wisconsin and Purdue. Mark Riechers
Help on a matrix problem. 'The matrix

A = (    -1       1   -1 )
    (     0       c    0 )
    ( 3+2c-c^2    b    3 )

can only be diagonalised for some values of the parameters b and c. Construct a table showing the values of b and c for which diagonalisation is possible. You must show your working.' For this I have done the following: $P(\lambda) = -(c-\lambda)\left((-1-\lambda)(3-\lambda)+3+2c-c^2\right)$. From $c-\lambda = 0$ I get $\lambda = c$. Then $\lambda^2 - 2\lambda + 2c - c^2 = 0$, and solving for $\lambda$ gives $\lambda = 1 \pm (1-c)$. Is this right? And what should I do next with this problem? Thank you for your time. Your work so far is correct. At least, those are the eigenvalues I get. You'll notice there are only two eigenvalues. What are the algebraic multiplicities of those two eigenvalues? If by 'k' you mean the integer $k$ such that $\det(A-\lambda I)=(\lambda-2+c)(\lambda-c)^{k}F,$ I would agree with you. You haven't defined k otherwise, so I don't know what it is yet. Well, for the determinant that defines the characteristic equation, I get $\det(A-\lambda I)=-(\lambda-c)^{2}(\lambda-2+c).$ So what is $k$? Thank you. When the integer k is acquired, is there still something that needs to be done for the problem? Yes. Is there a criterion you know of whereby you can say that a matrix is diagonalizable or not? That's in your problem statement. But you haven't answered my question. Independent of this problem, what criterion do you know of that will tell you whether a matrix is diagonalizable or not? Is this when there exists an invertible matrix P such that $P^{-1}AP$ is diagonal? The matrix can be written in the form $A=PDP^{-1}$. That is the definition of diagonalizable, I grant you. However, it's not much use in the present circumstance, I'm afraid. Have you seen this one: A matrix A is diagonalizable if and only if, for every eigenvalue of A, its geometric multiplicity equals its algebraic multiplicity.
You always know that the geometric multiplicity of an eigenvalue is less than or equal to the algebraic multiplicity. Here, we're saying that they have to be equal for all the eigenvalues. If that happens, you can diagonalize the matrix. So, how can you use this idea to find out when your matrix is diagonalizable? Is it then that the eigenvalue c has to equal the value k? c = k? No, no. Try to solve the equation $(A-cI)x=0,$ and see what you get. You're trying to find eigenvectors now, and you want to know when you need two vectors, and when you need one.
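The criterion the thread converges on — geometric multiplicity equal to algebraic multiplicity for every eigenvalue — can also be checked numerically. A sketch in Python/numpy for the matrix in this problem, using the eigenvalues $\lambda = c$ (double, for $c \neq 1$) and $\lambda = 2-c$ derived above; the (b, c) values tested below are illustrative, not from the thread:

```python
import numpy as np

def is_diagonalizable(b, c, tol=1e-8):
    """Check the thread's criterion for A(b, c): diagonalizable iff,
    for every eigenvalue, the geometric multiplicity (the nullity of
    A - lambda*I) equals the algebraic multiplicity."""
    A = np.array([[-1.0, 1.0, -1.0],
                  [0.0, c, 0.0],
                  [3.0 + 2.0 * c - c ** 2, b, 3.0]])
    # From the characteristic polynomial worked out in the thread:
    # det(A - x I) = -(x - c)^2 (x - (2 - c)), so the eigenvalues are
    # c (double) and 2 - c (simple), merging into a triple root at c = 1.
    if abs(c - 1.0) < 1e-12:
        spectrum = [(1.0, 3)]
    else:
        spectrum = [(c, 2), (2.0 - c, 1)]
    for lam, alg in spectrum:
        geo = 3 - np.linalg.matrix_rank(A - lam * np.eye(3), tol=tol)
        if geo < alg:
            return False
    return True
```

For example, with c = 0 the double eigenvalue is 0, and A - 0·I has rank 1 (nullity 2) exactly when its third row (3, b, 3) is -3 times the first row (-1, 1, -1), i.e. b = -3; the check confirms diagonalizability there and rejects other b.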
After 14.1 s, a spinning roulette wheel has slowed down to an angular velocity of 2.91 rad/s. During this time, the wheel has an angular acceleration of -3.34 rad/s^2. Determine the angular displacement of the wheel. Either way I try to work this out, I get the wrong answers. It is NOT -295.7, -291, or -332 rad.
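For the record, the standard constant-angular-acceleration kinematics give a positive displacement for these numbers; a quick sanity check in plain Python (assuming the usual convention that the given velocity is the final one and the wheel spins in the positive direction while slowing):

```python
# Constant-angular-acceleration kinematics for the numbers in the question.
t = 14.1         # s, duration
omega_f = 2.91   # rad/s, final angular velocity
alpha = -3.34    # rad/s^2, angular acceleration (slowing down)

omega_0 = omega_f - alpha * t                 # initial angular velocity
theta = omega_0 * t + 0.5 * alpha * t ** 2    # angular displacement
# Equivalent form that skips omega_0:
theta_alt = omega_f * t - 0.5 * alpha * t ** 2
```

This gives theta ≈ 373 rad. Note that (1/2)|alpha|t^2 ≈ 332 rad, so the listed wrong answer -332 rad is just that term with the wrong sign, and -291 rad ≈ omega_f·t - (1/2)|alpha|t^2, i.e. using the final instead of the initial angular velocity in the displacement formula.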
How can I count integers in a list of Strings? (Haskell)

There is a list of Strings, ["f", "1", "h", "6", "b", "7"]. How can I count the Ints in this list? I have this algorithm now, but it's not so good:

import Data.Char
let listOfStrings = ["f", "1", "h", "6", "b", "7"]
let convertedString = "f1h6b7"
let listOfInt = map (\x -> read [x] :: Int) (filter isDigit convertedString)
length listOfInt
Prelude> 3

Besides, I can't convert listOfStrings to one string, so this algorithm doesn't even work properly. Can you help me with optimization?

You want to count the Strings in the list that are representations of integers (in base 10?), did I understand that correctly? Only non-negative integers or also negative? – Daniel Fischer Dec 2 '12 at 19:59

Yes. I want to count Strings that are representations of integers. They may be negative – SEMA Dec 2 '12 at 20:04

Accepted answer: 1) Use reads :: ReadS Int (this expression is just reads :: String -> [(Int, String)] in disguise) to test whether a string is a representation of an integer value:

isNumber :: String -> Bool
isNumber s = case (reads s) :: [(Int, String)] of
  [(_, "")] -> True
  _         -> False

Why reads? Because it returns additional information about the parsing process, from which we can conclude whether it was successful. read would just throw an exception. 2) Then filter the list of strings with it and take its length:

intsCount :: [String] -> Int
intsCount = length . filter isNumber

The basic principle is: count the items in a list with a given property. That's solved by some Prelude functions quite easily:

countItemsWith :: (a -> Bool) -> [a] -> Int
countItemsWith property list = length $ filter property list

All that remains is to find a good expression to determine whether a String is the representation of an integer. We could write our own test, but we can also re-use a Prelude function for that:

isIntegerRepresentation :: String -> Bool
isIntegerRepresentation s = case reads s :: [(Integer, [Char])] of
  [(_, "")] -> True
  _         -> False

concat concatenates multiple lists, so concat listOfStrings will result in "f1h6b7". If you only want to count positive integers, you could try something along the lines of

countInts []     = 0
countInts (x:xs) = if isDigit x then 1 + countInts xs else countInts xs

where (x:xs) is a pattern for a list with head element x and a tail of xs. (So this will work for convertedString because it is a list of characters, [Char], but not for listOfStrings, because it is actually [String], which can be expanded to [[Char]].) What is the actual input you've got? listOfStrings or convertedString?

Your code can be rewritten as:

import Data.Char
let listOfStrings = ["f", "1", "h", "6", "b", "7"]
let convertedString = concat listOfStrings
let listOfInts = map digitToInt (filter isDigit convertedString)
length listOfInts
Prelude> 3

To go from a list of strings to just a single string, use concat. concat takes a list of lists and returns a single list with all the elements of the lists after each other; since a string is a list of Chars, concat in this case takes a list of lists of Char and returns a single list of Char (aka a string). The filter is simplified from \x -> isDigit x to just isDigit — this is exactly the same function. I read the digits using digitToInt instead of \x -> read [x] :: Int. Notice that if you only want to find the number of digits in convertedString, you can do:

let listOfDigits = filter isDigit convertedString
length listOfDigits
Prelude> 3

I'm sure the best answer for you will be

import Data.List (foldl')
import Data.Char (isNumber)
countNumb l = foldl' (\x y -> x + 1) 0 (filter isNumber l)

Here we check whether each char is a number and count them. PS: this will work for ['f', '1', 'h', '6', 'b', '7'].

+1 for Data.Char (isNumber), -1 for wicked foldl' – EarlGray Dec 2 '12 at 20:32

I find it often useful to convert Bool to 0 or 1 using fromEnum:

import Data.Char
countInts = sum . map (fromEnum . isNumber) . concat
San Bruno Algebra 2 Tutor Find a San Bruno Algebra 2 Tutor ...My infinite desire to learn and my enthusiasm toward teaching are the sources of positive energy that I share with my students. Wish you good luck in your education, since the above quote by Benjamin Franklin is true for all times. Best, Marine. I took 2 Econometrics courses as an undergraduate student at UCI, and received my bachelor's degree upon completing the courses. 29 Subjects: including algebra 2, reading, calculus, statistics ...Simplifying expressions, binomials, powers, factoring, linear equations, and graphing don't have to be difficult topics, but if a student misses a key idea on a certain class day, it can quickly snowball. Many times, all it takes to get back on track is some focused review work. The second year ... 11 Subjects: including algebra 2, calculus, geometry, algebra 1 With my unique background and education, I can work successfully with a wide variety of students who have a wide variety of "issues" with learning math. I understand where the problems are, and how best to get past them and onto a confident path to math success. My undergraduate degree is in mathematics, and I have worked as a computer professional, as well as a math tutor. 20 Subjects: including algebra 2, calculus, geometry, biology ...I've been a professional tutor since September of 2011, teaching English and mathematics to children from third to eighth grade, and test prep, composition, comprehension, mathematics (through pre-calculus), and organization and study skills to high school and college students. I specialize in... 29 Subjects: including algebra 2, reading, English, algebra 1 ...In areas where I fall short, I am always inclined to learn further. My interest in zoology has been most centered on the arthropods (insects, arachnids, crustaceans, etc...) as well as birds (mainly raptors, corvines, and marine birds), although my knowledge of other animal classes is not lacking. ...
22 Subjects: including algebra 2, English, physics, chemistry
Abstract and Applied Analysis, Volume 2014 (2014), Article ID 291614, 13 pages. Research Article: Numerical Analysis for a Fractional Differential Time-Delay Model of HIV Infection of CD4^+ T-Cell Proliferation under Antiretroviral Therapy ^1College of Sciences, Guangxi University for Nationalities, Nanning, Guangxi 530006, China ^2Departamento de Matematica, Universidad Tecnica Federico Santa Maria, Casilla 110-V, Valparaiso, Chile Received 2 October 2013; Accepted 11 December 2013; Published 12 February 2014 Academic Editor: Abdon Atangana Copyright © 2014 Yiliang Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We study a fractional differential model of HIV infection of CD4^+ T-cells, in which CD4^+ T-cell proliferation plays an important role in HIV infection under antiretroviral therapy. An appropriate method is given to ensure that both equilibria are asymptotically stable. We calculate the basic reproduction number, the IFE (infection free equilibrium), the two IPEs (infection persistent equilibria), and so on, and judge the stability of the equilibria. In addition, we describe the dynamic behaviors of the fractional HIV model by using the Adams-type predictor-corrector algorithm. Finally, we extend the model to incorporate a term accounting for the loss of virions and a bilinear term for virions entering the target cells. 1. Introduction Human immunodeficiency virus (HIV) is a lentivirus (a member of the retrovirus family) that causes acquired immunodeficiency syndrome (AIDS) [1], a condition in humans in which the immune system begins to fail, leading to life-threatening opportunistic infections. HIV infects primarily vital cells in the human immune system such as helper T-cells (to be specific, CD4^+ T-cells), macrophages, and dendritic cells.
When CD4^+ T-cell numbers decline below a critical level, cell-mediated immunity is lost and the body becomes progressively more susceptible to opportunistic infections (see [2]). There are only a few results on the dynamics of HIV infection of CD4^+ T-cells. In 1992, Perelson et al. [3] examined a model for the interaction of HIV with CD4^+ T-cells which considered four populations: uninfected T-cells, latently infected T-cells, actively infected T-cells, and free virus; they also considered the effects of AZT on viral growth and T-cell population dynamics. In 2000, Culshaw and Ruan [4] first simplified their model into one consisting of only three components: the healthy T-cells, infected T-cells, and free virus, and discussed the existence and stability of the infected steady state; they also studied the effect of the time delay on the stability of the endemically infected equilibrium, and criteria were given to ensure that the infected equilibrium is asymptotically stable for all delay. For backward bifurcations in other disease models, we refer the reader to [5–12]. In [12], the sources of backward bifurcation and their application in infectious disease models are analyzed. The HIV/AIDS infection model is a special case of an infectious disease model. For recent work on global analysis and persistence of HIV models, we refer the reader to [13–18] and references therein. A discussion of HIV infection and CD4^+ T-cell depletion is given in the review paper [19]. In 2012, Shu and Wang [12] considered a new model frame that included full logistic growth terms of both healthy and infected T-cells: Fractional differential equations arise in many engineering and scientific disciplines as the mathematical modeling of systems and processes in various fields, such as physics, mechanics, chemical technology, population dynamics, biotechnology, and economics (see, e.g., [20–26]).
As one of the important topics in the research of differential equations, the boundary value problem has attracted a great deal of attention (see [27–41] and the references therein). Many mathematicians and researchers in applied fields are trying to model with fractional-order differential equations. In biology, researchers found that biological membranes have fractional-order electrical conductance, so they can be classified as fractional-order models. Because of the memory property of fractional calculus, we introduce fractional calculus into the HIV model. Both in mathematics and in biology, fractional calculus corresponds with objective reality better than ODEs, so it is particularly important to study the fractional HIV model. Furthermore, delay plays an important role in the process of spreading infectious diseases; it can be used to model the incubation period of an infectious disease, the period during which patients are infectious, the period during which patients are immune, and so on. The basic fact reflected by a mathematical model with time delay is that the change of the trajectory in time depends not only on the current moment but is also affected by conditions at earlier times. This kind of circumstance is abundant in the objective world. Recently, Yan and Kou [2] introduced fractional-order derivatives into a model of HIV infection of CD4^+ T-cells with time delay: with the initial conditions: Motivated by the works mentioned above, we consider this model where CD4^+ T-cell proliferation plays an important role in HIV infection under antiretroviral therapy; a more appropriate method is given to ensure that both equilibria are asymptotically stable. We calculate the basic reproduction number, the IFE, the two IPEs, and so on, under certain conditions, and judge the stability of the equilibria.
In addition, we describe the dynamic behaviors of the fractional HIV model by using the Adams-type predictor-corrector algorithm. Finally, we extend the model to incorporate a term accounting for the loss of virions and a bilinear term for virions entering the target cells. In this paper, we establish the mathematical model as follows: with the initial conditions: where denotes Caputo's fractional derivative of order with the lower limit zero. The state variables represent the concentration of healthy T-cells at time , infected T-cells at time , and free HIV virus particles in the blood at time , respectively. The positive constant represents the length of the delay in days. A complete list of the parameter values for the model is given in Table 1. Furthermore, we assume that , and for all . This paper is organized in the following way. In the next section, some necessary definitions and lemmas are presented. In Section 3, the stability of the equilibria is given. In Section 4, we calculate some of the data and judge the stability of the equilibria. In Section 5, we give the numerical simulation for the fractional HIV model. Finally, the conclusions are given. 2. Preliminaries In this section, we introduce definitions and lemmas which will be used later. Definition 1 (see [20, 25]). The fractional (arbitrary) order integral of a function of order is defined by Definition 2 (see [20]). Let , , where denotes the integer part of number . If , the Caputo fractional derivative of order of is defined by Lemma 3 (see [44]). The equilibrium point of the fractional differential system is locally asymptotically stable if all the eigenvalues of the Jacobian matrix evaluated at the equilibrium point satisfy the following condition: The stable and unstable regions are shown in Figure 1 [45, 46]. Proposition 4 (see [12]).
Consider model (4). (i) Assume that (H1) is satisfied. (a) If , then the IFE is the only equilibrium (Table 3). (b) If , then there are two equilibria: the IFE and a unique IPE. (ii) Assume that (H2) is satisfied. Let ; then ( follows from the assumption that ). (a) If , then the IFE is the only equilibrium. (b) If , then there are three equilibria: the IFE and two IPEs, denoted by and such that . (c) If or , then there are two equilibria: the IFE and a unique IPE. Remark 5. The existence of solutions of the fractional delay equations can be proved similarly to [47, 48], and the case without delay is parallel to [29, 30]. By the next generation matrix method [11], we obtain the basic reproduction number of model (4): 3. The Stability of the Equilibria In this section, we investigate the existence of equilibria of system (4). In order to find the equilibria of system (4), we put Following the analysis in [12], we get that system (13) always has the infection free equilibrium (IFE) and the infection persistent equilibrium (IPE) , where Next, we discuss the local asymptotic stability of the viral free equilibrium and the infected equilibrium . For the local asymptotic stability of the viral free equilibrium , we have the following result. Lemma 6. If , then is locally asymptotically stable for . If , then is locally stable for . If , then is a saddle point with a two-dimensional stable manifold and a one-dimensional unstable manifold. Proof. The associated transcendental characteristic equation at is given by Obviously, the above equation has the characteristic root Next, we consider the transcendental polynomial For , we get We have If , the characteristic equation has a positive eigenvalue and two negative eigenvalues; is thus unstable with a two-dimensional stable manifold and a one-dimensional unstable manifold. If , , then is locally stable.
If , the other two eigenvalues have negative real parts if and only if , and then is locally asymptotically stable. For , we get Assume that the above equation has roots for and ; we get Separating the real and imaginary parts gives From the second equation of (22), we have that is , . For , , substituting into the first equation of (22), we have For the parameter values given in Table 1, we take any ; then for the infected equilibrium we get that the above equation does not hold for . Therefore, . According to Lemma 3, the uninfected equilibrium is locally asymptotically stable. The proof is completed. Next, for the sake of convenience, at , we give the following symbols: Then the characteristic equation of the linear system is Using the results in [49], we get Denote Lemma 7 (see [49]). The infected equilibrium is asymptotically stable for any time delay if either (i) ; (ii) , , , , ; (iii) , , , , and all roots of satisfy ; or (iv) for all . 4. Comparison with Some of the Data In this section, we calculate the basic reproduction number, the IFE, the two IPEs, and the associated quantities. On the basis of these data, we apply the conditions in Lemma 7 to judge the stability of the equilibria (Table 2). Remark 8. (1) If , there are two equilibria: the IFE and a unique IPE. In addition, the system at satisfies the second condition in Lemma 7, so the IPE is locally asymptotically stable (Table 5). (2) If , there are three equilibria: the IFE and two IPEs. The system at does not satisfy all the conditions in Lemma 7, so the IPE is unstable. The system at satisfies the second condition in Lemma 7, so the IPE is locally asymptotically stable. (3) If , the IFE is the only equilibrium. Remark 9. If , there are two equilibria: the IFE and a unique IPE. In addition, the system at satisfies the second condition in Lemma 7, so the IPE is locally asymptotically stable. Remark 10. If , there are two equilibria: the IFE and a unique IPE.
In addition, the system at satisfies the second condition in Lemma 7, so the IPE is locally asymptotically stable. Remark 11. (1) If , the IFE is the only equilibrium. (2) If , there are three equilibria: the IFE and two IPEs. The system at and does not satisfy all the conditions in Lemma 7, so the IPEs and are unstable. (3) If , there are two equilibria: the IFE and a unique IPE. In addition, the system at does not satisfy all the conditions in Lemma 7, so the IPE is unstable. The system at satisfies the second condition in Lemma 7, so the IPE is locally asymptotically stable. (4) If , there are two equilibria: the IFE and a unique IPE. The system at satisfies the second condition in Lemma 7, so the IPE is locally asymptotically stable. (5) If , there are two equilibria: the IFE and a unique IPE. The system at satisfies the first condition in Lemma 7, so the IPE is locally asymptotically stable. 5. Numerical Simulations In this section, we use the Adams-type predictor-corrector method for the numerical solution of nonlinear system (4) with time delay. First, we replace system (4) by the following equivalent fractional integral equations: Next, we apply the predict, evaluate, correct, evaluate (PECE) method. The approximate solution is displayed in Figures 2(a)–2(d), 3(a)–3(d), 4(a)–4(d), 5(a)–5(d), 6(a)–6(d), and 7(a)–7(d). When , system (4) is the classical integer-order ODE. Remark 12. Figures 2 and 3 show that, if and , the system at is locally stable. Remark 13. Figures 4 and 5 show that, if and , the system at is locally stable. Remark 14. Figures 6 and 7 show that, if and , the system at is locally stable. As increases, the trajectory of the system approaches the steady state faster and gets close to the integer-order ODE. 6.
Extending the Model In this section, we add the term in the third equation of model (4) which accounts for the loss of virions due to all causes, and we also add the bilinear term which accounts for free infectious virions entering the target cells. We extend model (4) to the following system of differential equations: Following the analysis in [12], we get that system (30) always has the uninfected equilibrium , and the infected equilibrium , where By the next generation matrix method [11], we obtain the basic reproduction number of model (30) 7. Conclusion In this paper, we modified the ODE model proposed by Shu and Wang [12] and the fractional model proposed by Yan and Kou [2] into a system of fractional order. We study a fractional differential model of HIV infection of CD4^+ T-cells, where CD4^+ T-cell proliferation plays an important role in HIV infection under antiretroviral therapy. A more appropriate method is given to ensure that both equilibria are asymptotically stable under some conditions. We calculate the basic reproduction number, the IFE, the two IPEs, and so on, under certain conditions, and judge the stability of the equilibria. According to Tables 1 and 4, we get that, if and , there is only the IFE for . In addition, if under some conditions, there is only the IFE for . We describe the dynamic behaviors of the fractional HIV model by using the Adams-type predictor-corrector algorithm. Finally, we extend the model to incorporate a term accounting for the loss of virions and a bilinear term for virions entering the target cells. Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper. Acknowledgments This project was supported by NNSF of China Grant nos. 11271087 and 61263006. References 1. R. A. Weiss, “How does HIV cause AIDS?” Science, vol. 260, no. 5112, pp. 1273–1279, 1993. 2. Y. Yan and C.
Kou, “Stability analysis for a fractional differential model of HIV infection of ${\text{C}\text{D}4}^{+}$ T-cells with time delay,” Mathematics and Computers in Simulation, vol. 82, no. 9, pp. 1572–1585, 2012. 3. A. S. Perelson, D. E. Kirschner, and R. De Boer, “Dynamics of HIV infection of ${\text{C}\text{D}4}^{+}$ T cells,” Mathematical Biosciences, vol. 114, no. 1, pp. 81–125, 1993. 4. R. V. Culshaw and S. Ruan, “A delay-differential equation model of HIV infection of ${\text{C}\text{D}4}^{+}$ T-cells,” Mathematical Biosciences, vol. 165, no. 1, pp. 27–39, 2000. 5. J. Arino, C. C. McCluskey, and P. van den Driessche, “Global results for an epidemic model with vaccination that exhibits backward bifurcation,” SIAM Journal on Applied Mathematics, vol. 64, no. 1, pp. 260–276, 2003. 6. A. Atangana and N. Bildik, “Approximate solution of tuberculosis disease population dynamics model,” Abstract and Applied Analysis, vol. 2013, Article ID 759801, 8 pages, 2013. 7. J. Dushoff, W. Huang, and C. Castillo-Chavez, “Backwards bifurcations and catastrophe in simple models of fatal diseases,” Journal of Mathematical Biology, vol. 36, no. 3, pp. 227–248, 1998. 8. H. Gómez-Acevedo and M. Y. Li, “Backward bifurcation in a model for HTLV-I infection of ${\text{C}\text{D}4}^{+}$ T cells,” Bulletin of Mathematical Biology, vol. 67, no. 1, pp. 101–114, 2005. 9. R. Qesmi, J. Wu, J. Wu, and J. M.
Heffernan, “Influence of backward bifurcation in a model of hepatitis B and C viruses,” Mathematical Biosciences, vol. 224, no. 2, pp. 118–125, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 10. O. Sharomi, C. N. Podder, A. B. Gumel, E. H. Elbasha, and J. Watmough, “Role of incidence function in vaccine-induced backward bifurcation in some HIV models,” Mathematical Biosciences, vol. 210, no. 2, pp. 436–463, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 11. P. van den Driessche and J. Watmough, “A simple SIS epidemic model with a backward bifurcation,” Journal of Mathematical Biology, vol. 40, no. 6, pp. 525–540, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 12. H. Shu and L. Wang, “Role of ${\text{C}\text{D}4}^{+}$ T-cell proliferation in HIV infection under antiretroviral therapy,” Journal of Mathematical Analysis and Applications, vol. 394, no. 2, pp. 529–544, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 13. B. Buonomo and C. Vargas-De-León, “Global stability for an HIV-1 infection model including an eclipse stage of infected cells,” Journal of Mathematical Analysis and Applications, vol. 385, no. 2, pp. 709–720, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 14. M. Y. Li and H. Shu, “Global dynamics of a mathematical model for HTLV-I infection of ${\text{C}\text{D}4}^{+}$T cells with delayed CTL response,” Nonlinear Analysis: Real World Applications, vol. 13, no. 3, pp. 1080–1092, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 15. S. Liu and L. Wang, “Global stability of an HIV-1 model with distributed intracellular delays and a combination therapy,” Mathematical Biosciences and Engineering, vol. 7, no. 3, pp. 675–685, 2010. 
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 16. X. Liu, H. Wang, Z. Hu, and W. Ma, “Global stability of an HIV pathogenesis model with cure rate,” Nonlinear Analysis: Real World Applications, vol. 12, no. 6, pp. 2947–2961, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 17. G. P. Samanta, “Permanence and extinction of a nonautonomous HIV/AIDS epidemic model with distributed time delay,” Nonlinear Analysis: Real World Applications, vol. 12, no. 2, pp. 1163–1177, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 18. R. Xu, “Global stability of an HIV-1 infection model with saturation infection and intracellular delay,” Journal of Mathematical Analysis and Applications, vol. 375, no. 1, pp. 75–81, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 19. M. Février, K. Dorgham, and A. Rebollo, “${\text{C}\text{D}4}^{+}$T-cell depletion in human immunodeficiency virus (HIV) infection: role of apoptosis,” Viruses, vol. 3, no. 5, pp. 586–612, 2011. View at Publisher · View at Google Scholar · View at Scopus 20. A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204 of North-Holland Mathematics Studies, Elsevier Science, Amsterdam, The Netherlands, 2006. View at MathSciNet 21. Z. Liu and X. Li, “Existence and uniqueness of solutions for the nonlinear impulsive fractional differential equations,” Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 6, pp. 1362–1373, 2013. View at Publisher · View at Google Scholar · View at MathSciNet 22. Z. Liu and X. Li, “On the controllability of impulsive fractional evolution inclusions in banach spaces,” Journal of Optimization Theory and Applications, vol. 156, no. 1, pp. 167–182, 2013. 
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 23. Z. Liu and J. Sun, “Nonlinear boundary value problems of fractional functional integro-differential equations,” Computers and Mathematics with Applications, vol. 64, no. 10, pp. 3228–3234, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 24. Z. Liu and J. Sun, “Nonlinear boundary value problems of fractional differential systems,” Computers and Mathematics with Applications, vol. 64, no. 4, pp. 463–475, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 25. I. Podlubny, Fractional Differential Equations, vol. 198 of Mathematics in Science and Engineering, Academic Press, San Diego, Calif, USA, 1999. View at MathSciNet 26. B. Ross, Fractional Calculus and Its Applications, vol. 457 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1975. View at Publisher · View at Google Scholar · View at MathSciNet 27. Z. Bai, “On positive solutions of a nonlocal fractional boundary value problem,” Nonlinear Analysis: Theory, Methods and Applications, vol. 72, no. 2, pp. 916–924, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 28. M. Benchohra, J. R. Graef, and S. Hamani, “Existence results for boundary value problems with non-linear fractional differential equations,” Applicable Analysis, vol. 87, no. 7, pp. 851–863, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 29. Z. Bai and H. Lü, “Positive solutions for boundary value problem of nonlinear fractional differential equation,” Journal of Mathematical Analysis and Applications, vol. 311, no. 2, pp. 495–505, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 30. V. D. 
Gejji, “Positive solutions of a system of non-autonomous fractional differential equations,” Journal of Mathematical Analysis and Applications, vol. 302, no. 1, pp. 56–64, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 31. D. Jiang and C. Yuan, “The positive properties of the Green function for Dirichlet-type boundary value problems of nonlinear fractional differential equations and its application,” Nonlinear Analysis: Theory, Methods and Applications, vol. 72, no. 2, pp. 710–719, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 32. E. R. Kaufmann and E. Mboumi, “Positive solutions of a boundary value problem for a nonlinear fractional differential equation,” Electronic Journal of Qualitative Theory of Differential Equations , no. 3, pp. 1–11, 2008. View at Zentralblatt MATH · View at MathSciNet · View at Scopus 33. C. F. Li, X. N. Luo, and Y. Zhou, “Existence of positive solutions of the boundary value problem for nonlinear fractional differential equations,” Computers and Mathematics with Applications, vol. 59, no. 3, pp. 1363–1375, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 34. Z. Liu and D. Motreanu, “A class of variational-hemivariational inequalities of elliptic type,” Nonlinearity, vol. 23, no. 7, pp. 1741–1752, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 35. Z. Liu, “Anti-periodic solutions to nonlinear evolution equations,” Journal of Functional Analysis, vol. 258, no. 6, pp. 2026–2033, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 36. Z. Liu, “Existence results for quasilinear parabolic hemivariational inequalities,” Journal of Differential Equations, vol. 244, no. 6, pp. 1395–1409, 2008. 
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 37. Z. Liu, “Browder-Tikhonov regularization of non-coercive evolution hemivariational inequalities,” Inverse Problems, vol. 21, no. 1, pp. 13–20, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 38. Z. Peng, Z. Liu, and X. Liu, “Boundary hemivariational inequality problems with doubly nonlinear operators,” Mathematische Annalen, vol. 356, no. 4, pp. 1339–1358, 2013. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 39. S. Zhang, “Positive solutions for boundary-value problems of nonlinear fractional differential equations,” Electronic Journal of Differential Equations, vol. 2006, no. 36, pp. 1–12, 2006. View at 40. Z.-Y. Zhang, Z.-H. Liu, X.-J. Miao, and Y.-Z. Chen, “Stability analysis of heat flow with boundary time-varying delay effect,” Nonlinear Analysis: Theory, Methods and Applications, vol. 73, no. 6, pp. 1878–1889, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 41. Z.-Y. Zhang, Z.-H. Liu, X.-J. Miao, and Y.-Z. Chen, “New exact solutions to the perturbed nonlinear Schrödinger's equation with Kerr law nonlinearity,” Applied Mathematics and Computation, vol. 216, no. 10, pp. 3064–3072, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 42. A. S. Perelson and P. W. Nelson, “Mathematical analysis of HIV-1 dynamics in vivo,” SIAM Review, vol. 41, no. 1, pp. 3–44, 1999. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus 43. D. Kirschner, “Using mathematics to understand HIV immune dynamics,” Notices of the American Mathematical Society, vol. 43, no. 2, pp. 191–202, 1996. View at Zentralblatt MATH · View at 44. C. Kou, Y. Yan, and J. 
Liu, “Stability analysis for fractional differential equations and their applications in the models of HIV-1 infection,” Computer Modeling in Engineering and Sciences, vol. 39, no. 3, pp. 301–317, 2009. View at Zentralblatt MATH · View at MathSciNet 45. D. Matignon, “Stability results for fractional differential equations with applications to control processing,” in Computational Engineering in Systems Applications, pp. 963–968, 1996. 46. K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, A Wiley-Interscience, New York, NY, USA, 1993. View at MathSciNet 47. Y.-K. Chang, M. M. Arjunan, G. M. N'Guérékata, and V. Kavitha, “On global solutions to fractional functional differential equations with infinite delay in Fréchet spaces,” Computers and Mathematics with Applications, vol. 62, no. 3, pp. 1228–1237, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 48. Z. Ouyang, Y. Chen, and S. Zou, “Existence of positive solutions to a boundary value problem for a delayed nonlinear fractional differential system,” Boundary Value Problems, vol. 2011, Article ID 475126, 17 pages, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 49. E. Ahmed and A. S. Elgazzar, “On fractional order differential equations model for nonlocal epidemics,” Physica A, vol. 379, no. 2, pp. 607–614, 2007. View at Publisher · View at Google Scholar · View at Scopus
Clowning Around

In this lesson, students are encouraged to discover all of the combinations for a given situation. Students apply problem-solving skills (including elimination and collection of organized data) to draw their conclusions. The use of higher-level thinking skills (synthesis, analysis, and evaluation) is the overall goal.

Distribute the Clowns activity sheet to each student. Students will be using the activity sheet to determine possible combinations by coloring. Each student needs a red, green, yellow, and blue crayon.

Review the problem by reading it to the students. Each clown needs a mouth. Draw either a smile or frown for each one. Each clown needs a colored hat: red, green, yellow, or blue. How many different clowns can we create? (No two should be the same.)

Guide students to predict how many different clowns can be created. Students should record their predictions on the activity sheet. Next, allow enough time for students to color and to count the total number of combinations. Students should count a total of eight different clowns. The combinations are:

Smile, Red hat
Frown, Red hat
Smile, Green hat
Frown, Green hat
Smile, Yellow hat
Frown, Yellow hat
Smile, Blue hat
Frown, Blue hat

Students should compare their predictions to the actual total number of combinations. Ask students who predicted the correct number of combinations to share with the class how they made their predictions.

In addition to the organized list, as shown previously, students may make a table to solve this problem. For example:

│       │ Red Hat        │ Green Hat        │ Yellow Hat        │ Blue Hat        │
│ Smile │ Red Hat, Smile │ Green Hat, Smile │ Yellow Hat, Smile │ Blue Hat, Smile │
│ Frown │ Red Hat, Frown │ Green Hat, Frown │ Yellow Hat, Frown │ Blue Hat, Frown │

Materials
• Colored crayons (red, green, yellow, blue)

Extension
1. What if six colors could be used for the hats? (Add orange and brown.) [There would be twelve different clowns.]
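For the teacher's reference, the enumeration above can be checked with a short script (not part of the original lesson); the mouth and hat lists mirror the activity sheet:

```python
from itertools import product

mouths = ["smile", "frown"]
hats = ["red", "green", "yellow", "blue"]

# Every clown is one (mouth, hat) pair; the counting principle
# gives len(mouths) * len(hats) distinct clowns.
clowns = list(product(mouths, hats))
print(len(clowns))  # 8

# The extension question adds two more hat colors:
print(len(mouths) * len(hats + ["orange", "brown"]))  # 12
```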
Learning Objectives

Students will:
• determine the number of clown faces that can be made with two mouth possibilities and four colored-hat possibilities
• generalize the number of combinations that can be made from a given number of possibilities

Common Core State Standards – Mathematics

Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.3 Decompose numbers less than or equal to 10 into pairs in more than one way, e.g., by using objects or drawings, and record each decomposition by a drawing or equation (e.g., 5 = 2 + 3 and 5 = 4 + 1).
• CCSS.Math.Content.K.OA.A.4 For any number from 1 to 9, find the number that makes 10 when added to the given number, e.g., by using objects or drawings, and record the answer with a drawing or equation.

Kindergarten, Number & Operations
• CCSS.Math.Content.K.NBT.A.1 Compose and decompose numbers from 11 to 19 into ten ones and some further ones, e.g., by using objects or drawings, and record each composition or decomposition by a drawing or equation (such as 18 = 10 + 8); understand that these numbers are composed of ten ones and one, two, three, four, five, six, seven, eight, or nine ones.

Common Core State Standards – Practice
• CCSS.Math.Practice.MP1 Make sense of problems and persevere in solving them.
• CCSS.Math.Practice.MP4 Model with mathematics.
• CCSS.Math.Practice.MP5 Use appropriate tools strategically.
• CCSS.Math.Practice.MP7 Look for and make use of structure.
bcParserX - Math Expression Parser COM Component 3.3

File size: 613K
Platform: Windows 9X/ME/NT/2K/2003/XP/Vista
License: Shareware
Price: $19.95
Downloads: 1732
Date added: 2005-10-22

bcParserX - Math Expression Parser COM Component 3.3 description

This is a small COM component and C DLL to parse mathematical expressions. Features are:
*Easy to use, simple class API.
*Comes with predefined functions.
*Users can create custom functions/variables.
*Optimization: constant expression elimination for repeated tasks.
*Operators: +, -, /, *, ^
*Logical operators: <, >, =, <>, >=, <=, &, |, ! [less, greater, equal, and, or, not] [IF(a,b,c) is also supported]
*Parentheses: (, {, [
*Scientific notation, e.g.: 4.27E-3
*Functions in the form of: f(x), f(x,y), f(x,y,z)
*List of predefined functions is available in the documentation.
*It does not require any other DLLs.

An example of a simple expression is: LN(X)+SIN(10/2-5). When parsed, this expression will be represented as LN(X)+0, since SIN(10/2-5) is in fact SIN(0), which is a constant and is 0. Thus, in a loop, if you change the value of X and ask for the value of the expression, it will be evaluated quite fast since SIN(10/2-5) is not dependent on X.

X and Y are predefined variables. You can create your own variables as needed. There are many predefined mathematical functions; they are listed in the documentation. You can create your own functions as needed.

Related Software

Math Parser Java class parses and evaluates mathematical expressions at runtime. Supports user defined variables and functions. Source code is included. Free Download
Mathematical Expression Parser Component for Delphi parses and evaluates math expressions at runtime. Delphi source code is included. Free Download
bcParser.NET component parses and evaluates mathematical expressions at runtime. Supports user defined variables and functions. Source code is included. Free Download
This is a license to allow the use of the component by any number of developers on any number of development machines in a site of your organization. Free Download
Powerful COM port wrapper ActiveX. Can be used with Visual Basic, Delphi, MS Visual C++, JavaScript, and any other applications capable of hosting ActiveX components. Free Download
#Calculation component is a powerful calculation engine for your applications. This component integrates expression parsing and evaluating. It is suitable for heavy-duty number crunching. Free Download
Math Expression Calculator is an easy-to-use tool to quickly evaluate mathematical expressions involving multiple variables. Free Download
NuGenSciMath.NET is a fully managed set of two .NET components with extensive mathematical calculation capabilities. Free Download
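The constant-expression elimination described above can be sketched in a few lines. This is a toy illustration, not the component's actual implementation; the tuple-based expression-tree format is invented here:

```python
import math

# An expression is a number, a variable name (string),
# or a tuple: (operator, operand, ...).
def fold(expr):
    """Recursively replace constant subtrees with their numeric value."""
    if isinstance(expr, (int, float, str)):
        return expr
    folded = (expr[0],) + tuple(fold(e) for e in expr[1:])
    if all(isinstance(e, (int, float)) for e in folded[1:]):
        return evaluate(folded)  # whole subtree is constant: fold it now
    return folded

def evaluate(expr):
    op = expr[0]
    if op == "+": return expr[1] + expr[2]
    if op == "-": return expr[1] - expr[2]
    if op == "/": return expr[1] / expr[2]
    if op == "SIN": return math.sin(expr[1])
    if op == "LN": return math.log(expr[1])
    raise ValueError(op)

# LN(X)+SIN(10/2-5): the SIN(...) subtree folds to SIN(0) = 0,
# so only LN(X) still depends on X.
tree = ("+", ("LN", "X"), ("SIN", ("-", ("/", 10, 2), 5)))
print(fold(tree))  # ('+', ('LN', 'X'), 0.0)
```

Repeated evaluations with different values of X then only have to recompute the non-constant part, which is the speedup the feature list claims.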
Help with public variables

02-04-2012, 08:20 AM
Help with public variables
Hey, I thought I could make a public variable and then have multiple lower-level functions modify this variable. Is that not correct? What am I doing wrong? Any help would be greatly appreciated.

public class testapp extends Applet implements MouseListener {
    *declare a bunch of public arrays*
    public void math() {
        *do a bunch of math*
    }
    public void map(Graphics g) {
    }
}

At the end it just outputs 0 for all those variables. Are those functions even working?

02-04-2012, 08:28 AM
Re: Help with public variables
It's kinda hard to say, but I don't see the functions math() or map() called from anywhere, so the integer variables could simply be outputting their default 0 value. ...Try to use code tags when you post code.

02-04-2012, 08:34 AM
Re: Help with public variables
Ya, I was playing around with that.

public void main(Graphics g){
}

I am not sure that is working though.

02-04-2012, 08:45 AM
Re: Help with public variables
The mapping function is certainly working... it's outputting the right variables in the right places, all the variables are simply 0. Either the math function isn't running or the variables are not being passed back and forth properly. Help!

02-04-2012, 08:49 AM
Re: Help with public variables
Try doing this:

public class testapp extends Applet implements MouseListener {
    *declare a bunch of public arrays*
    public void start() {
        math();
    }
    public void math() {
        *do a bunch of math*
    }
    public void map(Graphics g) {
    }
}

02-04-2012, 08:52 AM
Re: Help with public variables
It worked! Map runs without the start thing, but math does not.

02-04-2012, 08:53 AM
Re: Help with public variables
No problem. You're the first I've helped on this forum!
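The diagnosis in this thread — the painting code reads fields that math() never filled in, because nothing ever called math() — is an initialization-ordering bug. The same shape of fix can be sketched outside the applet framework, here in Python rather than the thread's Java (the class and field names are invented; the thread's actual arrays and math are elided):

```python
class App:
    def __init__(self):
        self.values = []  # stands in for "a bunch of public arrays"

    def start(self):
        # The fix from the thread: run the computation once,
        # before anything tries to draw the results.
        self.math()

    def math(self):
        self.values = [n * n for n in range(5)]

    def map(self):
        # Without start() having run first, this sees the empty
        # initial state (the Java version printed default zeros).
        return self.values

app = App()
print(app.map())  # [] -- math() never ran, mirroring the bug
app.start()
print(app.map())  # [0, 1, 4, 9, 16]
```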
Isaac Newton Institute

The Fields Medal is officially known as the International Medal for Outstanding Discoveries in Mathematics. It is awarded every four years at the International Congress of Mathematicians, organised by the International Mathematical Union, to two, three or four mathematicians aged under 40. Since the Institute opened, 24 winners of the Fields Medal have attended our programmes.
Manhattan Beach Algebra 1 Tutor ...Sometimes, all it takes is to redirect the student to practice better time-management and study habits and then practice their subject diligently. Other students need greater attention paid to their particular learning style for them to improve in their studies. I know that many students are vi... 22 Subjects: including algebra 1, English, reading, writing ...I am also very friendly and believe in every students' abilities to learn and achieve. I am a hard worker and will make sure I give my best to tutoring my students. I received my Master's in Education last year and am currently a chemistry teacher in Van Nuys. 35 Subjects: including algebra 1, reading, English, chemistry Hello students and parents! My name is John and I am an experienced math tutor who strives to help you succeed. I graduated from CSUN with a BA in mathematics in 2009. 9 Subjects: including algebra 1, calculus, geometry, algebra 2 In 2008 I came to the USA, after receiving a Permanent Resident card from the Greencard Lottery. I came from Indonesia where I have worked for more than 5 years as a science teacher at the high school level. In Los Angeles I have worked for two years in a special education school, where I have lea... 8 Subjects: including algebra 1, calculus, geometry, algebra 2 I am a person of the world ! I was born in Cameroon, raised in France, and now I live in the U.S. I perfectly understand three languages: French, English, and Spanish. I have been a two-year tutor in maths and French at a college, where everybody appreciated my work. 6 Subjects: including algebra 1, French, algebra 2, geometry
Fibonacci number

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377 ... etc. Apparently, the series appears all over the place in nature.

Double Generator Algorithm:

int t1 = 0, t2 = 1;
for (int counter = 0; counter < 50; counter++) {
    cout << t1 << endl << t2 << endl;
    t1 += t2;
    t2 += t1;
}

Single Generator Algorithm:

int t = 0, temp1 = 1, temp2 = 1;
for (int counter = 0; counter < 50; counter++) {
    cout << t << endl;
    temp1 = temp2;
    temp2 = t;
    t = temp1 + temp2;
}

Recursive Algorithm: This function returns the nth Fibonacci number.

int f(int n) {
    if ((n == 2) || (n == 1))
        return 1;
    else
        return f(n - 1) + f(n - 2);
}

The n-th Fibonacci number can be calculated without recursion with a simple assembly language routine (nasm syntax), callable by C programs. It works both with C compilers that prepend each function name with an underscore, and those that do not. Like the rest of the algorithms presented here, it does not check for overflow. Here it is:

global _fibonacci, fibonacci
_fibonacci:
fibonacci:
        sub eax, eax        ; eax = 0
        mov ecx, [esp+4]    ; ecx = n
        sub edx, edx
        jecxz .done         ; return 0 if n == 0
        inc dl              ; edx = 1
.loop:
        push eax
        add eax, edx
        pop edx
        loop .loop
.done:
        ret

x = int(raw_input("Go until?"))
a, b = 0, 1
while b < x:
    print b
    a, b = b, a+b

(thing) by artemis entreri Sat Feb 10 2001 at 3:19:38

F(n) = round( ((1 + sqrt(5)) / 2)^n / sqrt(5) )

In code (BASIC, at least), that would be:

Fn = int( ( ( ( 1 + sqrt(5) ) / 2 ) ^ n ) / sqrt(5) + 0.5 )

For more of this kind of thing, point your browser at http://www.maa.org/devlin/devlin_3_99.html

SpudTater's formula for approximating the nth Fibonacci number is actually related to a larger formula which calculates the nth number exactly. The formula is:

F(n) = ( ((1 + sqrt(5))/2)^n - ((1 - sqrt(5))/2)^n ) / sqrt(5)

Now the fun part. Why does this work? The reason is that this is the solution to the difference equation defined by the Fibonacci sequence.
The Fibonacci sequence is defined by the recurrence a(n) = a(n-1) + a(n-2). We can form this into a polynomial in Z, where the power of Z is the term in the sequence represented by a, like this:

Z^n = Z^(n-1) + Z^(n-2), i.e. Z^(n-2) (Z^2 - Z - 1) = 0

The roots of this equation are 0, (1 + sqrt(5))/2 and (1 - sqrt(5))/2. (We can obtain the nonzero roots by using the quadratic formula.) We ignore the trivial root 0, and using the remaining roots we can create the general solution. Finally we solve for the constants c1 and c2 by using known values of a(n) (in particular, when n=0 and n=1). We find c1=1 and c2=-1. From this, we get the formula above. Pretty nifty, eh?

The formula JyZude mentions is referred to as Binet's Formula for calculating the nth Fibonacci number. Of the two references I've read which skimmed across this subject, both have noted that Binet probably wasn't the first to discover this formula, having found it on his own about 100 years later. They differ in that one claims de Moivre discovered it first, and the other claims Euler did. Binet, however, eventually wound up getting the credit for it.

Fibonacci numbers may be used as an approximation to convert between miles and kilometers. Like the log tables of old, this is a great method if your multiplication is poor, but you want a more accurate value than you would reach by the super-simple method of multiplication or division by a factor of 1.5.

Now to convert miles to km, take each of your numbers and convert it to the subsequent number in the sequence, and add these up. To convert km to miles, add up the preceding set of numbers. For example, 13 miles ≈ 21 km (the actual value is about 20.9 km). Handle the 1's however you like, it is only an approximation after all. See logarithmic spiral for a bit of an explanation for this.

$m=1; $k=1;
print "Find the first how many #s: ";
$x=<stdin>;
$y=3;
print "0, ", "1, 1, ";
until ($y>$x) {
    $n=($k+$m);
    print $n, ", ";
    $k=$m;
    $m=$n;
    $y++;
}

Most algorithms I have seen that calculate the nth Fibonacci number do so in O(n) time. There is however a way to do it in O(lg n) time.
The algorithm makes use of the fact that it is possible to calculate 'i to the power of n' in O(lg n) time this way:

int Pow(i, n) {
    if (n == 0)
        return 1;
    else {
        int r = Pow(i, n / 2); // assumes integer division
        r = r * r;
        if (n % 2 == 1)        // if n is odd we need to multiply
            r = r * i;         // r with i to get the correct result
        return r;
    }
}

The Fibonacci numbers then come out of the powers of a 2x2 matrix:

[0 1]^n   [fib(n-1)  fib(n) ]
[1 1]   = [ fib(n)  fib(n+1)]

int Fib(int n) {
    Matrix F = {{0, 1}, {1, 1}};
    F = Pow(F, n);
    return F[0][1]; // row 1, column 2: fib(n)
}

Matrix Pow(Matrix M, int n) {
    if (n == 0)
        return {{1, 0}, {0, 1}}; // the identity matrix
    else {
        Matrix R = Pow(M, n / 2);
        R = R * R;
        if (n % 2 == 1)
            R = R * M;
        return R;
    }
}

Fibonacci numbers are also found in a branch of stock market analysis called technical analysis. Fibonacci fans, arcs and retracements are often used to try to predict future stock movements. For example, after a stock price has moved upwards (rallied) and starts to move in the opposite direction, a Fibonacci retracement can be used to try to determine the point at which the stock price will start to turn upwards again.

The Haskell Fibonacci program is even shorter than the Python program above.

fibonacci = 0 : 1 : [ this + next | (this, next) <- zip fibonacci (tail fibonacci) ]

Note that the list fibonacci actually contains every Fibonacci number, in order. Haskell's lazy evaluation means that it won't actually generate the list until you ask for one of its elements. jrn points out that calculating Fibonacci numbers this way is slow and uses a bunch of memory. If you want to calculate Fibonacci numbers in Haskell FAST, this is how you do it:

golden_ratio = (1 + (sqrt 5))/2

fibonacci_at :: Integer -> Integer
fibonacci_at n = round (golden_ratio^n/(sqrt 5))

But that's just not as cool as having an infinitely long list, is it? :)

(idea) by Little Suspicious Sun Oct 23 2005 at 8:06:13

Fibonacci sequences (or similar ratios) are found often in nature.
Pine cones, seed heads (e.g. the sunflower), the way leaves grow on a plant, rams' horns, seashells. It should be noted that the Fibonacci sequence is not the only sequence in nature that dictates spirals, but is one of many phyllotaxies (principles governing leaf arrangement).

Fibonacci numbers also play a role in deriving the golden number (or the golden mean). By taking any number in the sequence and dividing it by the one before it, you get a number near Phi, the golden number, 1.618034 (the larger the numbers you are working with, the closer it will be).

To artists, the golden mean is better known as the golden rectangle (a good picture of it here: http://www.vashti.net/mceinc/GoldSqre.gif). Each square in the rectangle has a Fibonacci number as its side length: the first square is 1x1, the second is 1x1, the third is 2x2, the fourth is 3x3, the fifth is 5x5, and so on. Luca Pacioli, in his Divina proportione (On Divine Proportion), wrote about the golden mean, and his work influenced such artists as Da Vinci (see the Annunciation). Many art books and art professors will tell students that it is better to position the focus of a picture slightly to one side (on the one-third line).

The Fibonacci sequence has occasionally been tied to Buddhism and spirituality, being used to make such symbols as the Heart of Buddha (the golden heart) and the Grail Cup. The arms of the pentacle also conform to Fibonacci ratios. Some believe that the recurrence of the same pattern in many parts of nature is manifest of a divine plan.
(see sacred geometry)

Some photos/pictures of the Fibonacci spiral as seen in nature:
Sea shell: http://www.world-mysteries.com/fib_nautilus2.jpg
Daisy: http://www.math.unl.edu/~sdunbar1/Teaching/ExperimentationCR/Lessons/Fibonacci/daisy2.jpg
Rams' horns: http://incider.byu.edu/vwb/000.%20Example%20Sym/Bighorn%20Sheep72.jpg
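The O(lg n) matrix method sketched earlier translates directly into runnable form; this version (in Python, for brevity) reads fib(n) from an off-diagonal entry of [[0,1],[1,1]]^n:

```python
def fib(n):
    """Return the nth Fibonacci number (fib(0) = 0) in O(log n) multiplications."""
    def mat_mul(A, B):
        # 2x2 matrix product, written out explicitly.
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    def mat_pow(M, k):
        # Square-and-multiply, the same shape as the integer Pow above.
        if k == 0:
            return [[1, 0], [0, 1]]  # identity matrix
        R = mat_pow(M, k // 2)
        R = mat_mul(R, R)
        if k % 2 == 1:
            R = mat_mul(R, M)
        return R

    # [[0,1],[1,1]]^n == [[fib(n-1), fib(n)], [fib(n), fib(n+1)]]
    return mat_pow([[0, 1], [1, 1]], n)[0][1]

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Unlike the closed-form versions, this stays exact for large n, since it only ever does integer arithmetic.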
Complex variables proof

April 16th 2007, 07:48 PM, #1, Junior Member (joined Feb 2007):
This particular problem is giving me some trouble: Suppose that f is an entire function and Re[f(z)] is less than or equal to c for all z. Show that f is constant.

April 16th 2007, 07:55 PM, #2, Global Moderator (joined Nov 2005, New York City):
This is a famous theorem. In fact it gives a super short proof of the fundamental theorem of algebra. Here. The actual theorem is stated and proven here.
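The standard argument behind the theorem (a sketch of my own; the thread itself only links to a proof) applies Liouville's theorem to the exponential of f:

```latex
\text{Let } g(z) = e^{f(z)}. \text{ Then } |g(z)| = e^{\operatorname{Re} f(z)} \le e^{c},
\text{ so } g \text{ is a bounded entire function, hence constant by Liouville's theorem.}\\
\text{Then } 0 = g'(z) = f'(z)\, e^{f(z)} \text{ with } e^{f(z)} \neq 0
\text{ gives } f' \equiv 0, \text{ i.e. } f \text{ is constant.}
```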
Some More Results on IF Soft Rough Approximation Space
International Journal of Combinatorics, Volume 2011 (2011), Article ID 893061, 10 pages. Research Article. ^1Department of Mathematics, Tripura University, Suryamaninagar, Tripura 799130, India ^2Department of Mathematics, Yazd University, Yazd 89195-741, Iran. Received 6 September 2011; Accepted 20 November 2011. Academic Editor: Toufik Mansour. Copyright © 2011 Sharmistha Bhattacharya (Halder) and Bijan Davvaz. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Fuzzy sets, rough sets, and later on IF sets became useful mathematical tools for solving various decision making problems and data mining problems. Molodtsov introduced another concept, soft set theory, as a general framework for reasoning about vague concepts. Since most collected data are either linguistic variables or consist of vague concepts, IF sets and soft sets help a lot in data mining problems. The aim of this paper is to introduce the concepts of IF soft lower rough approximation and IF soft upper rough approximation. Also, some properties of this set are studied, and some problems of decision making are cited where this concept may help. Further research will be needed to apply this concept fully in decision making and data mining problems. 1. Introduction Data mining is a technique of extracting meaningful information from large and mostly unorganized data banks. Data mining is one of the areas in which rough sets are widely used. Data mining is the process of automatically searching large volumes of data for patterns using tools such as classification, association rule mining, and clustering.
The rough set theory is a well understood formal framework for building data mining models in the form of logic rules, on the basis of which it is possible to issue predictions that allow classifying new cases. In general, whenever data are collected they are linguistic variables. Not only this, the answers are not always in Yes/No form. So, in this case, to deal with such types of data the IF set is a very important tool. Data are in most cases a relation between objects and attributes. The soft set is an important tool to deal with such types of data. So, throughout this paper a combined approach of soft sets, IF sets, and rough sets is studied. Further study is required to find the application of this concept in the field of data mining. Zadeh in 1965 [1] introduced the concept of fuzzy set. This set contains only a membership function lying between 0 and 1. But while collecting data there may be many cases where data are missing, so IF sets are required, which consist of both a membership value and a nonmembership value. Atanassov [2] introduced the concept of IF set. Atanassov named it the intuitionistic fuzzy set, but a problem later arose due to the already introduced concept of intuitionistic logic. Hence, instead of intuitionistic fuzzy set, throughout this paper we use the nomenclature IF set. Rough sets, introduced by Pawlak [3], are also a very useful tool for data mining problems where vagueness is the key factor. Molodtsov [4] introduced the concept of soft set, and in 2009 Feng et al. [5] introduced a combined notion of fuzzy set, rough set, and soft set to deal with the complex data which arise in most social science problems. In this paper, our aim is to introduce the concepts of IF soft lower and IF soft upper rough approximations, which help a lot in sorting vague data and tending towards a decision. 2. Basic Definitions In this section, some of the important concepts necessary to go further through this paper are shown.
Let be a nonempty set, and let be the unit interval . According to [2], an intuitionistic fuzzy set (IFS for short) is an object having the form where the functions and denote, respectively, the degree of membership and the degree of nonmembership of each element to the set , and for each . An intuitionistic fuzzy topology (IFT for short) on a nonempty set is a family of IFS’s in containing and closed under arbitrary infimum and finite supremum [6]. In this case, the pair is called an intuitionistic fuzzy topological space (IFTS for short) and each IFS in is known as an intuitionistic fuzzy open set (IFOS for short). The complement of an IFOS is called an intuitionistic fuzzy closed set (IFCS for short). Let be a nonempty set and let IFS’s and be in the following forms: Then,(1),(2),(3),(4),(5). Let be a finite nonempty set, called the universe, and an equivalence relation on , called the indiscernibility relation. The pair is called an approximation space. By we mean the set of all such that , that is, is containing the element . Let be a subset of . We want to characterize the set with respect to . According to Pawlak’s paper [3], the lower approximation of a set with respect to is the set of all objects, which surely belong to , that is, , and the upper approximation of with respect to is the set of all objects, which partially belong to , that is, . For an approximation space , by a rough approximation in we mean a mapping defined by for every , Given an approximation space , a pair is called a rough set in if for some . A fuzzy set is defined by employing the fuzzy membership function, whereas a rough set is defined by approximations. The difference of the upper and the lower approximation is a boundary region. Any rough set has a nonempty boundary region whereas any crisp set has an empty boundary region. The lower approximation is called the interior, and the upper approximation is called the closure of the set.
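Pawlak's lower and upper approximations, as just described, are straightforward to compute when the indiscernibility relation is given as a partition of a finite universe. The following is an illustrative sketch only; the function name and example data are invented, not taken from the paper:

```python
def rough_approximations(blocks, X):
    """Pawlak lower/upper approximations of the set X, where the
    equivalence relation is given as a partition (blocks) of the universe."""
    lower, upper = set(), set()
    for b in blocks:
        if b <= X:      # block lies entirely inside X: its elements surely belong
            lower |= b
        if b & X:       # block meets X: its elements possibly belong
            upper |= b
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5, 6}]
lo, up = rough_approximations(blocks, {1, 2, 3})
# lo == {1, 2}, up == {1, 2, 3, 4}; the nonempty boundary region up - lo
# marks {1, 2, 3} as a rough set with respect to this partition
```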
By using these concepts, we can make a topological space. A set is said to be a topological space if with every there is an associated set such that the following conditions are satisfied: for any , , , , and . The operation is called an interior operation. This topological space is written by . Let be a universal set and let be a set of parameters. According to [4], a pair is called a soft set over , where and , the power set of , is a set-valued mapping. Let be a Pawlak approximation space. For a fuzzy set , the lower and upper rough approximations of in are denoted by and , respectively, which are fuzzy sets defined by for all . The operators and are called the lower and upper rough approximation operators on fuzzy sets. If the fuzzy set is said to be definable, otherwise is called a rough fuzzy set. A soft set over is called a full soft set if . Let be a full soft set over , and let be a soft approximation space. For a fuzzy set the lower and upper soft rough approximations of with respect to are denoted by and , respectively, which are fuzzy sets in given by for all . The operators and are called the lower and upper soft rough approximation operators on fuzzy sets. If both the operators are the same then is said to be soft definable, otherwise is said to be soft rough fuzzy set. 3. On IF Soft Rough Approximations In this section, we introduce the concept of IF soft rough approximation. Some of its properties are studied and examples are presented. The main focus of this paper is to show the scope of this newly introduced concept in the field of data mining and decision making. Definition 3.1. Letbe a full soft set over and a soft approximation space. For an IF set , the IF soft lower rough approximation and IF soft upper rough approximation with respect to the soft approximation space are denoted by and and are defined as follows: for all . Example 3.2. 
Suppose that is the universe of the days of a week and the set of parameters are given by , where stands for hot, medium, cold, heavy rain, medium rainy, and not raining. Let us consider a soft set describing the weather. Let us represent Table 1. Then, , , , , , . Suppose that Then, Remark 3.3. (1) which follows from the above example. But it completely is a part of the same object. (2) If any object is of the form , then , since 0 is the infimum of all members and 1 is the supremum of all non members. (3) If any object is of the form , then need not be , since there may exist many other elements whose membership value is less than 1, but if , then no other object is in the same mapping . Similarly, . (4) Let any object be of the form . Now, if , then also there does not exist any object in the same mapping with membership 0 but if other object exists with membership nonzero its nonmembership must be 0. Remark 3.4. (1) If , then the IF soft rough approximation is said to be simply IF soft approximation. (2) If for some of the object , then the IF soft rough approximation is said to be simply IF soft oscillating approximation. (3) If for none of the object , then the IF soft rough approximation is said to be completely IF soft rough approximation. For this case we may consider two more definitions which are known as IF soft stable lower rough approximation and IF soft stable upper rough approximations and are denoted by and . Definition 3.5. The positive difference between and is denoted by and is said to oscillate in the approximation space, that is, where “” is required since otherwise the membership value of the difference may be negative. Example 3.6. Consider the Example 3.2. Then, we have Now, let us consider another case of Example 3.2. Suppose that Then, Theorem 3.7. If then we obtain object IF soft approximation space. Proof. If , then the following two cases may arise:(1)all the object has the same nonzero value for and . 
Hence, from Remark 3.4, we obtain an IF soft approximation space.(2)If , then the conclusions may be drawn from Remark 3.3. Theorem 3.8. can never be for any object. can never be for any object. Proof. (1) Suppose that , then from the definition we have , that is, Let , that is, , , that is, . Since , so gives and . Since , gives , that is, but if and are members of the same mapping , then , which is a contradiction. Hence, . (2) can be proved similarly. Remark 3.9. (1) If for all object then the approximation space is IF soft approximation space for all object. (2) If for some object then the approximation space is If soft oscillating space. (3) If for all objects then the approximation space is IF soft rough approximation space. In such cases we need to define an IFSLR set approximation space which is stable; else decisions cannot be drawn for any particular object. Definition 3.10. An IF soft stable lower rough approximation (IFSSLRA) of with respect to is denoted by and an IF soft stable upper rough approximation by Example 3.11. Let us consider Example 3.2. Then, , that is, Now, if we consider Example 3.6 then . Therefore, Theorem 3.12. Let be a full soft set over , and let be a soft approximation space. Then, we have: (1) and ,(2), , ,(3) for any object in ,(4) for any object in ,(5),(6),(7),(8), ,(9). Proof. It is straightforward. Remark 3.13. If , then is an IF soft open set. In Example 3.11, are soft open objects, and their memberships are soft open members. Also, if , then is a closed set. Remark 3.14. (1) Here is the IFSR boundary region. (a) If then the data is IF soft set. If , then the data are IF soft rough set. In Example 3.11, the first is IF soft set and is not IF soft rough set but the second one is IFSR set. (b) if and only if . (2) Here , where denotes that the value for every object is the IFSR negative region. (a) If , then if and only if , by Remark 3.3. (b) if if and only if , where is the value for every object. 
Now, let us take an example from [7]. Example 3.15. Suppose that is the universe consisting of eight persons and the set of parameters is given by , where implies short height, implies tall height, implies blond hair, implies red hair, implies dark hair, implies blue eyes, and implies brown eyes. Let us consider a soft set describing the “attractive person”. Let us represent Table 2. Let , , , , , and , . Let us now consider the IF set of an attractive person as per our choice as Here which implies that the approximation space is stable and the IF set taken for the persons is correct and of less Finally, we consider another example from [3]. Example 3.16. Suppose that is the universe consisting of six persons and the set of parameters is given by , where implies headache, implies muscle pain, and implies temperature. Let us consider a soft set describing the “flu infected person”. Let us represent Table 3. Let , , and . Let us now consider the IF set of a flu infected person as per our choice as Here which implies that the approximation space may not be stable and the IF set taken for the persons is not perfectly correct. 4. Conclusion The concepts of IF soft lower rough approximation and IF soft upper rough approximation space are introduced. In most cases IFSLRA is not stable; for that reason we introduced the new concept of the IFSSLRA space. In some sense, almost all concepts we meet in everyday life are vague rather than precise. This gap between the real world and traditional mathematics has become smaller in recent years. In order to remove this gap, rough sets, IF sets, and soft sets help a lot. The data mining and decision making processes may cross a new milestone after the introduction of this new hybridized model. Further study will be needed to establish the utilities of the notions indicated in this paper. 1. L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, pp. 338–353, 1965. 2. K. T.
Atanassov, “Intuitionistic fuzzy sets,” Fuzzy Sets and Systems, vol. 20, no. 1, pp. 87–96, 1986. 3. Z. Pawlak, “Rough sets,” International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982. 4. D. Molodtsov, “Soft set theory—first results,” Computers & Mathematics with Applications, vol. 37, no. 4-5, pp. 19–31, 1999. 5. F. Feng, C. Li, B. Davvaz, and M. I. Ali, “Soft sets combined with fuzzy sets and rough sets: a tentative approach,” Soft Computing, vol. 14, no. 9, pp. 899–911, 2010. 6. D. Çoker, “An introduction to intuitionistic fuzzy topological spaces,” Fuzzy Sets and Systems, vol. 88, no. 1, pp. 81–89, 1997. 7. Y. Y. Yao, A Review of Rough Set Models, in Rough Sets and Data Mining: Analysis of Imprecise Data, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
PageRank Leakage Whether PageRank leakage exists or not is a question of semantics. The PageRank for a given page is solely determined by the inbound links.(1) However, as the spreadsheet will show, an outgoing link can drain the entire site of PageRank. We have just set up two separate sites - with one page (named Page5) linking to a page in the other site. (A picture of the spreadsheet can be found here.) Then we studied how the linked-to site was able to increase its PageRank. We will now look at how the linking site fares. We computed earlier that the PageRank transferred from one site to the other would be 0.85 * (1/5) = 0.17. If we look at the first iteration (picture to the right), we see that Page5 has retained all of its PageRank - since a page's PageRank isn't influenced by its outbound links. This is what makes many people declare that PageRank leakage doesn't exist. The PageRanks of the 4 other pages have dropped, however, since they are now getting a smaller share of Page5's PageRank. The loss equals 0.0425 per page, for a total of 0.17 - exactly as we predicted. Those 4 pages now have less PageRank to pass back to Page5, so in the next iteration all 5 pages get a lower PageRank. The total PageRank of this mini-site is no longer 5 or 4.83, but 4.6855. At the third iteration, the total PageRank is 4.568816. As you have guessed by now, it's the same mechanism that we just saw in action when we studied PageRank increase, but when it's shown - graphically - like this, it doesn't feel like a leak - more like a haemorrhage! Let's skip forward a few hundred iterations and behold: the total PageRank is now a mere 4.054473 - the total loss from our site was much higher than we expected. And notice another curious thing: for every iteration the leaking page (i.e. Page5) has the highest PageRank in this mini-site. The other 4 pages have even lower PageRank because Page5 isn't doing its part.
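The spreadsheet numbers quoted above can be reproduced with a few lines of code. This sketch assumes the setup described: five pages each linking to the other four, Page5 adding one external link (so its vote is split five ways), and the PageRank formula PR(p) = 0.15 + 0.85 * (sum of inbound shares):

```python
def simulate(iters, d=0.85):
    """Iterate PageRank for the leaking 5-page mini-site."""
    pr = [1.0] * 5                     # pr[0..3] = Pages 1-4, pr[4] = Page5
    for _ in range(iters):
        new = [0.0] * 5
        for i in range(4):             # inner pages get 1/4 from each of the
            others = sum(pr[j] for j in range(4) if j != i)  # other three...
            new[i] = (1 - d) + d * (others / 4 + pr[4] / 5)  # ...plus 1/5 of Page5
        new[4] = (1 - d) + d * sum(pr[j] / 4 for j in range(4))
        pr = new
    return pr

print(sum(simulate(2)))    # ~4.6855, the second-iteration total from the text
print(sum(simulate(300)))  # ~4.0545 once the iteration has settled
```

The converged Page5 value also stays above the other four, matching the observation that the leaking page keeps the highest PageRank.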
This slight difference in PageRank will probably not have an effect in the real world - but if anybody tells you their pages perform better when they link out - give it a thought. A final note: this does not mean that you shouldn't link out.(2) PageRank is only one part of Google's algorithm. I'm personally convinced Google has lots of other ingredients in their secret sauce, which will reward sites that link out. And now for The Big Picture
Gyroscope Measurement and Angular Rates Here are a couple of recently published review/survey style articles: H. Christiansen, H.Z. Munthe-Kaas, and B. Owren, Topics in structure-preserving discretization, Acta Numerica 20 (2011), 1-119 [You can find the PDF on the internet. Search for the title at scholar.google.com.] E. Celledoni, H. Marthinsen, and B. Owren, An introduction to Lie group integrators -- basics, new developments and applications, J. Comput. Phys. 257 B (2014), 1040-1061 arxiv preprint: Both give nice, but perhaps mathematically dense, overviews of the concept of Lie group integrators. There are three primary approaches: the Crouch-Grossman techniques, the Runge-Kutta-Munthe-Kaas techniques, and Celledoni's commutator-free techniques. Note that the papers cited include the developers of two of those three techniques, and both papers discuss all three approaches. One thing neither discusses is how to adapt these concepts to unit quaternions for rotations in three-dimensional space. Nobody apparently has discussed that. It's too specific a topic for general mathematical consumption. My technical monitor wants me to write a paper on what I've done, but aimed at a more utilitarian audience (e.g., physics-based simulations and graphics).
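For what it's worth, the simplest Lie group integrator on the unit quaternions is the Lie-Euler update q <- q * exp(omega * dt), which keeps the quaternion exactly on the unit sphere with no renormalization drift. This is only a sketch of the idea under my own conventions (Hamilton product, scalar-first quaternions, body-frame angular rate held constant over the step), not anything from the cited papers:

```python
import math

def qmul(a, b):
    """Hamilton product of (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qexp(v):
    """Exponential map: rotation vector v -> unit quaternion."""
    theta = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    if theta < 1e-12:                       # small-angle limit
        return (1.0, 0.5*v[0], 0.5*v[1], 0.5*v[2])
    s = math.sin(0.5*theta) / theta
    return (math.cos(0.5*theta), s*v[0], s*v[1], s*v[2])

def integrate(q, omega, dt, steps):
    """Lie-Euler steps; exact for constant omega, unit norm preserved."""
    for _ in range(steps):
        q = qmul(q, qexp((omega[0]*dt, omega[1]*dt, omega[2]*dt)))
    return q
```

Higher-order RKMK or commutator-free schemes replace the single exponential per step with a short composition of exponentials of combined stage vectors, but the update stays on the group in the same way.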
gmat prep ds standard deviation

12 Jun 2007, 09:10, #1, hsk (Intern, joined 03 Apr 2006, 46 posts):
List S and list T each contain 5 positive integers, and for each list the avg (mean) of the integers in the list is 40. If the integers 30, 40 and 50 are in both lists, is the standard deviation of the integers in list S greater than the standard deviation of the integers in list T?
(1) the integer 25 is in list S
(2) the integer 45 is in the list
My choice "B"

Reply (joined 31 May 2007, 6 posts):
From (1) we get no info about list T. From (2), assuming you meant to say that 45 is in both lists, there is only one choice left for the 5th number and it is 35 (to get the avg of 40). The std. dev. would be the same.

hsk (Intern, joined 03 Apr 2006, 46 posts):
I'm sorry, I meant to say:
List S and list T each contain 5 positive integers, and for each list the avg (mean) of the integers in the list is 40. If the integers 30, 40 and 50 are in both lists, is the standard deviation of the integers in list S greater than the standard deviation of the integers in list T?
(1) the integer 25 is in list S
(2) the integer 45 is in list T

X & Y (Manager, joined 25 May, 227 posts):
I believe it is C.
Who is John Galt?
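For the record, when both statements are combined each list is forced (the five numbers must sum to 5 * 40 = 200), and a quick computation confirms the "yes" behind answer C. This check is illustrative only, not from the thread:

```python
from statistics import pstdev

S = [25, 30, 40, 50, 55]   # statement (1): fifth number = 200 - (30+40+50+25)
T = [30, 35, 40, 45, 50]   # statement (2): fifth number = 200 - (30+40+50+45)

sd_S, sd_T = pstdev(S), pstdev(T)   # population SDs about the common mean 40
print(sd_S > sd_T)                  # True: S is spread more widely around 40
```

Either statement alone leaves the other list undetermined, which is why neither suffices by itself.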
spectacular applications of functional analysis in resolutions of apparently unrelated problems

What are some of the spectacular applications of functional analysis to apparently unrelated problems? One that comes to my mind is Per Enflo's resolution of Hilbert's 5th problem. There are also reformulations of RH in Hilbert space. I would be happy to hear about its nice applications in geometry, number theory, topology, etc., especially in the context of solving conjectures. fa.functional-analysis big-picture

I don't know what you are looking for. For example, the Peter-Weyl theorem is an application of compact operator theory and yields the consequence that every compact Lie group is linear. The proof of Hodge's theorem on harmonic forms representing cohomology also uses functional analysis. At various points, the fact that analytic vectors exist in profusion in a representation space of semi-simple Lie groups (Harish-Chandra's theorem) uses elliptic operators. – Aakumadula Feb 24 '13 at 13:50

I should have added: a large part of the work of Margulis is on structure and algebraic properties of lattices (arithmeticity, normal subgroup theorem, etc.). However, the proofs use ergodic theory, which you may think of as part of functional analysis. – Aakumadula Feb 24 '13 at 13:58

Hodge's theorem is in geometry (see Warner's book on differential geometry) and not in harmonic analysis. It is basic to algebraic geometry, for example. – Aakumadula Feb 24 '13 at

Should be a CW question.
– Benjamin Steinberg Feb 24 '13 at 16:01

Yes, Cleft, I could write a book on transference of knowledge, results, and techniques from modern Banach space theory to other areas, but just as striking is merely listing people who made big reputations through research in other fields based to a large extent on what they learned and did in Banach space theory, including, among many others, Bourgain, Casazza, Gowers, Ghoussoub, Grothendieck, Milman, Pisier, Rudelson, Talagrand, Vershynin, ... – Bill Johnson Feb 24 '13 at 16:50

3 Answers

Here is one application, which may not seem spectacular to the modern mathematician, but it has many profound applications. Suppose that $X$ is a reflexive Banach space and $E: X\to (-\infty, \infty]$ a convex function such that $$ \lim_{\Vert x\Vert\to\infty} E(x)=\infty, $$ $$ E(x)\leq \liminf_{y\to x} E(y), \;\;\forall x\in X. $$ Then there exists $x_0\in X$ such that $$ E(x_0)\leq E(x),\;\;\forall x\in X. $$ For example, one can use this to settle the so-called Dirichlet principle, which generated many debates in the 19th century.

Can you please provide a link for the proof? – Koushik Feb 24 '13 at 22:32
Look at Proposition 10.3.20 in www3.nd.edu/~lnicolae/Lectures.pdf or Corollary 3.23 in H. Brezis' book "Functional Analysis, Sobolev Spaces and Partial Differential Equations". – Liviu Nicolaescu Feb 25 '13 at 9:53

One should probably mention Gelfand's proof of Wiener's theorem that, if a nowhere zero periodic $f$ has absolutely convergent Fourier series, then so does $1/f$.
There is also Kantorovich's famous note "On a problem of Monge": Monge, in his memoirs of 1781, considering the problem of the most rational ways of transporting earth from an embankment to an excavation, proposed the following problem: divide two equal volumes into infinitesimal particles and associate them one to another so that the sum of the path lengths multiplied by the volumes of the particles be the minimum possible. In connection with this problem, Monge created the geometrical theory of congruences. As to the problem itself, he conjectured, but did not prove rigorously, that the paths of the mass translocation form a family of normals to a certain family of surfaces. The same problem was studied later by Dupin, but a rigorous proof of the Monge theorem was given only a century later, in 1884, in a 200-page memoir by Appell. (...) Meanwhile, this assertion follows immediately from the abstract theorem mentioned above. (...)

For every smooth function $g$ the linear partial differential equation with constant coefficients $P(D)f=g$ is solvable in convex sets. Although the statement has nothing to do with functional analysis, Malgrange's proof heavily relied on Fréchet space theory (and, of course, the Fourier transformation). The same holds if $g$ is a distribution (but one may object that distribution theory is part of functional analysis).
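The minimization theorem quoted in the first answer is an instance of the direct method of the calculus of variations; here is a sketch of the usual argument (my paraphrase, not part of the thread):

```latex
\text{Take a minimizing sequence } (x_n) \text{ with } E(x_n) \to \inf_X E.
\text{ Coercivity bounds } (x_n); \text{ reflexivity yields a subsequence }
x_{n_k} \rightharpoonup x_0.\\
\text{Convexity together with lower semicontinuity makes } E \text{ weakly lower
semicontinuous, so } E(x_0) \le \liminf_k E(x_{n_k}) = \inf_X E.
```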
Re: Looking for a rooted DAG isomorphism algorithm
From: Carl Sturtivant <carl@bitstream.net> Newsgroups: comp.compilers Date: 3 Mar 1998 10:39:58 -0500 Organization: Bitstream Underground References: 98-02-073 Keywords: analysis, question

Charlie Burns wrote: > I am looking for references to a rooted DAG isomorphism algorithm. The > DAGs I want to compare are like trees but nodes can have more than one > parent.

Could you be more precise? Do you mean that each DAG has only one root, i.e. all other nodes have non-zero indegree? And that this root is already distinguished in each of the two DAGs whose isomorphism problem we are to solve? Are the internal nodes (i.e. those nodes with non-zero outdegree) labeled? Are the leaves (i.e. those nodes with zero outdegree) unlabeled?
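One common approach for the expression DAGs compilers build (not something suggested in the thread itself) is to compute a canonical signature bottom-up over shared subterms, assuming a single distinguished root, unlabeled nodes, and unordered children. Equal root signatures mean the two DAGs unfold to the same tree, which is the usual notion of equivalence for common-subexpression DAGs; strict graph isomorphism would additionally have to compare how subterms are shared:

```python
def dag_signature(root, children):
    """Canonical signature of a rooted DAG; `children` maps each node
    to its child list. Memoization visits every shared node once."""
    memo = {}
    def sig(v):
        if v not in memo:
            # sort child signatures so sibling order does not matter
            memo[v] = "(" + "".join(sorted(sig(c) for c in children[v])) + ")"
        return memo[v]
    return sig(root)

diamond = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}   # d is shared
renamed = {"z": ["y", "x"], "y": ["w"], "x": ["w"], "w": []}   # same shape
print(dag_signature("a", diamond) == dag_signature("z", renamed))  # True
```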
Alan Turing (The Great)... a short history lesson...

July 4th, 2002, 05:16 AM, #1 (Join Date Mar 2002):
Hey All, While I was writing my tutorial on the Turing programming language, many asked if it had any relation to Alan Turing, AI or Enigma encryption. The answer is no. The only relation is that the language might have been named in his honor. However, while I was searching for info on the Turing language, I came across loads of information on Alan. Since many people here showed interest in the man who made computers famous and now in homes (one of the people responsible, anyways...) I figured I'd write this short history lesson on the guy. He had much to do with computers, and for that I honor him with this short essay.

Alan Turing (1912-1954, 42 yrs)

The movie A Beautiful Mind was based on real-life events, and guess whose life events they were based on? (between 1938-1943 (approx) of Alan Turing's life) This movie was a shortened version of his life, enhanced with some Hollywood dramatics and effects to make it more "interesting and eye catching". Alan Turing is well known and documented as a pioneer of computer science, encryption, enigmas, and AI. Turing went to school and became an impressive (and renowned) mathematician. But what set the wheels of history in motion, and placed him within the sands of time, is the whole idea behind the Turing Machine (which is basically the foundation of computer science). The incompleteness of mathematics had been proven: there exist true statements about numbers that cannot be proved by the formal application of set rules of deduction. Because of this, Turing thought to himself: Could there exist, at least in principle, a definite method or process by which it could be decided whether any given mathematical assertion was provable? Turing came up with a great solution.
He wrote a paper, On Computable Numbers with an application to the Entscheidungsproblem, which is the concept of the Turing Machine. (Computation and computability, ie: mathematical calculations done by computer to determine whether a number is prime or not) As most people know from the movie (ABM), Turing was used in the war to decipher codes encrypted by the Germans, with the Cryptanalytic Headquarters. The way to "break the code" was to know how the Germans were using their Enigma. One idea to break the code was the Bomba. Turing turned the Polish Bombe into a more powerful device, with logical deductions of plaintext. (In particular, U-boat codes.... see movie U-571 for example) During this time, Turing took a strong interest in electronics and planned the embodiment of the Universal Turing Machine in electronic form, or in effect, invented the digital computer. Later on came the Turing Test, or what most know as AI (bots so to speak). Turing never got a chance to actually create AI or anything close to it. He did, however, write papers on his theory of how to create AI with a computer, as well as speak to many associations and organizations about his idea (though not all listened). Turing believed that computers could learn and modify or change their behaviour. He wanted to create a "brain" which would have neurons that transmit bits (0 or 1). See link for more info Link Alan Turing was eventually arrested for being a homosexual. Instead of going to jail, he decided to participate in an experiment where they gave him oestrogen shots (for a year) to neutralize his libido. Later on (way later) he was found dead by his mother in his bed. He died of cyanide poisoning. His mother thinks it was accidental (believing he had some on his hands from an amateur chemistry experiment) while the autopsy says suicide. Turing Machine = math, algorithms, enigmas. Turing Test = Artificial Intelligence. Note: I made this as short as possible.
If some info seems garbled or confusing, sorry. The movie A Beautiful Mind was based on real-life events, and guess whose life events they were based on? (between 1938-1943 (approx.) of Alan Turing's life) This movie was a shortened version of his life, enhanced with some Hollywood dramatics and effects to make it more "interesting and eye catching". Huh, A Beautiful Mind was about the life of Nobel Prize winner John Nash??? Nice post though. OpenBSD - The proactively secure operating system. oops... well, it somewhat shadowed the life of Alan Turing as well... my bad... Nice post; few realise that Turing and co. at Bletchley Park had the first working computer (of sorts) but then destroyed it in the interests of national security!! Or Babbage made a 'computer' with wooden wheels in it... A Beautiful Mind was about John Nash, and <Great Indignation> there was NO AMERICAN involvement in the capturing of the Enigma machine by the ROYAL NAVY from a German submarine. Nice to see Turing being remembered for the great man he was though. Who Cares Wins July 4th, 2002, 07:25 AM #2 Senior Member Join Date Oct 2001 July 4th, 2002, 08:02 AM #3 Join Date Mar 2002 July 4th, 2002, 01:53 PM #4 Join Date Aug 2001
Frequently asked questions about the acre 1. What are the dimensions of the acre? “I just wanted to know what are the dimensions of an acre—how many feet wide and long?” “If an acre is a square or rectangular why doesn't anybody on the web know what the measurements are in feet, not sq. feet. I have looked in about 1000 different places and no one has an answer.” “If I am to measure an acre as length x width what would that measure be? I have looked up many sites but cannot find this measurement.” Since medieval times the acre hasn't had any specific dimensions. It is purely an area of 43,560 square feet. The two sides of a 1-acre rectangular lot can be any lengths as long as multiplying one by the other gives 43,560 (if they are measured in feet). For example, imagine a sidewalk 5 feet wide. If it were (43560 ÷ 5 = ) 8712 feet long it would take up an acre, a long skinny acre. On the other hand, if the 1-acre lot were a square, its sides would be only 208.7 feet. 2. What is the perimeter of an acre? “Can you tell me [if the distance around] one acre of land is more than a mile? I want to purchase property, but I want the distance around the property to be at least 1 mile.” Because the acre is a measure of area, it has no specific perimeter. For example, using the examples in question one, the perimeter of the long, skinny sidewalk acre would be 17,434 feet (more than 3 miles), but the perimeter of the square acre would be 835 feet. If you don't know the shape of the lot you can't determine the perimeter. 3. How are acres measured on hillsides? “If I buy a hill, it being sold as 242 acres, how are the acres measured? Are they as if the hill were cut off and the land was level? Or up one side of the hill and down the other as if it were covered with a blanket and then the size of the blanket was measured?” It's as if the hill were cut off and the land was level. To quote from an old authoritative surveying text: Horizontal Lines.
-- In surveying, all measurements of lengths are horizontal or else are subsequently reduced to horizontal distances. As a matter of convenience, measurements are sometimes taken on slopes, but the horizontal projection is afterward computed. The distance between two points as shown on a map then is always this horizontal projection. Charles B. Breed and George L. Hosmer. The Principles and Practice of Surveying, Vol 1. New York: John Wiley & Sons, 1908.
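The arithmetic in questions 1 and 2 can be sketched in a few lines of Python (the helper names here are my own, not from the FAQ):

```python
# An acre is 43,560 sq ft with no fixed shape, so different 1-acre
# rectangles have very different perimeters.
import math

ACRE_SQ_FT = 43_560

def rectangle_sides(width_ft):
    """Given one side of a 1-acre rectangle, return (width, length) in feet."""
    return width_ft, ACRE_SQ_FT / width_ft

def perimeter(width_ft, length_ft):
    return 2 * (width_ft + length_ft)

# The 5-foot-wide "sidewalk acre": 8,712 feet long, perimeter over 3 miles.
w, l = rectangle_sides(5)
print(l)                       # 8712.0
print(perimeter(w, l))         # 17434.0

# The square acre: sides of about 208.7 feet, perimeter about 835 feet.
side = math.sqrt(ACRE_SQ_FT)
print(round(side, 1))          # 208.7
print(round(4 * side))         # 835
```

Both numbers match the FAQ: same area, wildly different perimeters.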
Dowker notation In the mathematical field of knot theory, the Dowker notation, also called the Dowker–Thistlethwaite notation or code, for a knot is a sequence of even integers. The notation is named after Clifford Hugh Dowker and Morwen Thistlethwaite, who refined a notation originally due to Peter Guthrie Tait. To generate the Dowker notation, traverse the knot using an arbitrary starting point and direction. Label each of the n crossings with the numbers 1, ..., 2n in order of traversal, with the following modification: if the label is an even number and the strand followed crosses over at the crossing, then change the sign on the label to be negative. When finished, each crossing will be labelled with a pair of integers, one even and one odd. The Dowker notation is the sequence of even integer labels associated with the labels 1, 3, ..., 2n − 1 in turn. For example, a knot diagram may have crossings labelled with the pairs (1, 6), (3, −12), (5, 2), (7, 8), (9, −4) and (11, −10). The Dowker notation for this labelling is the sequence: 6 −12 2 8 −4 −10. A knot can be recovered from a Dowker sequence, but the recovered knot may differ from the original by being a reflection or by having any connected sum component reflected in the line between its entry/exit points – the Dowker notation is unchanged by these reflections. Knot tabulations typically consider only prime knots and disregard chirality, so this ambiguity does not affect the tabulation. Wikipedia
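The final step above — reading off the even label paired with each odd label 1, 3, ..., 2n − 1 — can be sketched in Python. The function name is my own, and the example pairs are reconstructed from the article's stated sequence, not given explicitly by it:

```python
# Turn a crossing labelling into Dowker notation: pair each odd label
# with its (possibly negated) even partner, then read the even labels
# off in order of odd label 1, 3, ..., 2n-1.
def dowker_sequence(pairs):
    """pairs: iterable of (odd_label, signed_even_label) tuples."""
    by_odd = dict(pairs)
    return [by_odd[odd] for odd in sorted(by_odd)]

# Crossing pairs consistent with the article's example sequence.
pairs = [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]
print(dowker_sequence(pairs))  # [6, -12, 2, 8, -4, -10]
```

Producing the labelling itself from a knot diagram requires a diagram representation, which is beyond this sketch.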
Comparing fractions with different denominators. A fraction consists of two numbers separated by a line. The top number (or numerator) tells how many fractional pieces there are. The fraction 3/8 indicates that there are three pieces. The bottom number (or denominator) tells how many pieces an object was divided into. The fraction 3/8 indicates that the whole object was divided into 8 pieces. If the numerators of two fractions are the same, the fraction with the smaller denominator is the larger fraction. For example, 5/8 is larger than 5/16 because each fraction says there are five pieces, but if an object is divided into 8 pieces, each piece will be larger than if the object were divided into 16 pieces. Therefore, five larger pieces are more than five smaller pieces.
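The rule can be checked in a few lines of Python (the helper name is my own; cross-multiplying assumes positive denominators):

```python
# With equal numerators, the smaller denominator means bigger pieces,
# so the larger fraction.  The standard library's Fraction type agrees.
from fractions import Fraction

def larger_of(n1, d1, n2, d2):
    """Compare n1/d1 and n2/d2 exactly by cross-multiplying (no floats)."""
    return (n1, d1) if n1 * d2 > n2 * d1 else (n2, d2)

print(larger_of(5, 8, 5, 16))            # (5, 8): 5/8 beats 5/16
print(Fraction(5, 8) > Fraction(5, 16))  # True
```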
New Brunswick Calculus Tutor ...My goal is to break down complicated concepts for my students to ensure they have a deep understanding of the material. I look forward to working with each and every student and take pride in mentoring and guiding students through these difficult courses. I am a master MCAT tutor with another tutor... 39 Subjects: including calculus, chemistry, geometry, physics ...The students always surprised me when I threw them curve-balls or problems that were seemingly just out of reach. As an educator, I recognize that the student must come first. This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of explanation. 9 Subjects: including calculus, physics, algebra 1, algebra 2 ...I recently found a job in a global company as an engineer and relocated to the NJ area and hope to continue tutoring. I tailor my teaching style to accommodate the student because each individual learns differently and I think that it's important for the student and tutor to meet prior to their firs... 15 Subjects: including calculus, English, Chinese, GRE ...I also taught calculus in the university and I am tutoring high school students in AP calculus right now. I taught math and physics in high schools and also tutored many students. I have both a BS and a PhD in physics. 9 Subjects: including calculus, physics, algebra 1, algebra 2 ...I get very strong results in Math SAT tutoring. I give the student problems suited to the areas he/she has more of a challenge with, and work through them until the student can solve all of that type of problem. I also work with you on understanding basic mathematical facts included in the SAT. 32 Subjects: including calculus, English, writing, geometry
General problem-solving tips Quick description Many Tricki articles are about methods for solving particular classes of problems. But when one is doing mathematical research, many of the most useful tips are not methods of this kind but are more like general research strategies that can in principle be applied to virtually any mathematical problem. The Tricki contains descriptions of several of these, and they are listed below. If you would like to write an article in this category, try to adhere to the Tricki format: your article will be much more useful if you can give examples of mathematical situations where your advice is helpful. The following dead links and descriptions of articles that have yet to be written give some idea of what we mean by a general research strategy. The articles Don't start from scratch Quick description ( A common mistake people make when trying to answer a mathematical question is to work from first principles: it is almost always easier to modify something you already know. This article illustrates the point with examples that range from simple arithmetic to problems from the forefront of research. ) Hunt for analogies Quick description ( It is surprising how often the following general approach to problem-solving is successful: you have a problem you don't yet know how to solve; you think of a somewhat similar context where you can formulate an analogous problem that you do know how to solve; you then work out what the corresponding solution ought to be in the context you started with. Even quite loose analogies can do a wonderful job of guiding you in the right direction. ) Mathematicians need to be metamathematicians Quick description ( If you want to prove a theorem, then one way of looking at your task is to regard it as a search, amongst the huge space of potential arguments, for one that will actually work.
One can often considerably narrow down this formidable search by thinking hard about properties that a successful argument would have to have. In other words, it is a good idea to focus not just on the mathematical ideas associated with your hoped-for theorem, but also on the properties of different kinds of proofs. This very important principle is best illustrated with some examples. ) Think about the converse Quick description ( If you are trying to prove a mathematical statement, it is often a good idea to think about its converse, especially if the converse is not obvious. This is particularly useful if the statement you are trying to prove is a lemma that you would like to use to prove something else. Some examples will help to explain why. More obviously, it is useful if you are trying to prove a result that would, if true, be best possible. ) Try to prove the opposite Quick description ( If you want to prove a mathematical statement, try proving the negation of that statement. Very often it gives you an insight into why the original statement is true, and sometimes you discover that it is not true. This tip can be iterated. ) Look for related problems Quick description ( When you are trying to solve a problem, it can be very helpful to formulate similar-looking problems and think about those too. Sometimes they turn out to be interesting in themselves, and sometimes they lead you to ideas that are useful for the original problem. ) Work on clusters of problems Quick description ( It is not usually a good research strategy to think about one isolated problem, unless you are already some way to solving it. Solving a problem involves a certain degree of luck, so your chances of success are much greater if you look at a cluster of related problems. ) Look at small cases Quick description ( Can't see how to solve a problem? Then see if you can solve it in special cases. 
Some special cases will be too easy to give you a good idea of how to approach the main problem, and some will be more or less as difficult as the main problem, but if you search for the boundary between these two extremes, you will often discover where the true difficulty lies and what it is. And that is progress. ) Try to prove a stronger result Quick description ( As a student, one is asked to prove many statements that have been carefully designed so that their hypotheses are exactly the appropriate ones for deducing the conclusion. This makes it possible to design "trick" questions with unnecessarily strong hypotheses: these questions can be hard if one tries to use the hypotheses as they stand, since their full strength is irrelevant. The problems that arise when one is doing research are often of the "trick" variety: one has not been set them by a benign professor in the sky. So it is a good idea to investigate whether weaker hypotheses will suffice. As with looking at small cases, this can help one to locate the true point of difficulty of a problem. Similarly, if you are trying to find an example of a mathematical structure $X$ that has a certain property $P$, it may be easier to look for an $X$ that has a stronger property $Q$. And even if you fail, you are likely to understand much better what is required to find an $X$ that satisfies $P$. ) Prove a consequence first Quick description ( This is the flip side of "Try to prove a stronger result"; if one wants to prove some statement, it can be useful to first prove a weaker consequence of that statement, and then use that weaker result as a stepping stone to the full result.) Think axiomatically even about concrete objects Quick description ( If you are trying to prove a fact about the exponential function, it may be easier not to use any of the common definitions of this function, but to use instead a few properties that it has, of which the most important is that $e^{x+y}=e^x e^y$.
In general, it is often possible to turn concrete problems into abstract ones in this way, and doing so can considerably clarify the problems and their solutions. ) Temporarily suspend rigor Quick description ( The final proof of a result should of course be fully rigorous. But this certainly does not prevent one from suspending rigor in order to locate the right proof strategy to pursue. For instance, if one needs to compute some complicated integral expression, one can temporarily suspend concerns about whether operations such as interchange of integrals are actually justified, and go ahead and perform these operations anyway in order to find a plausible answer. One can always go back later and try to make the argument more rigorous.) Turn off all but one of the difficulties Quick description ( A problem may have several independent difficulties plaguing it; for instance one may need to establish an estimate which is uniform both with respect to a large parameter $N$, and an independent small parameter $\varepsilon$. In that case, one can often proceed by passing to a special case in which only one of the difficulties is "active", solving each of these basic special cases, and then try to merge the arguments together. For instance, one could set $\varepsilon$ to a fixed value and get an argument which is uniform as $N \to \infty$, then set $N$ to a fixed value and get an argument which is uniform as $\varepsilon \to 0$, then try to put them together.) Simplify your problem by generalizing it Quick description ( Sometimes if you generalize a statement, the result is easier to prove. There are several reasons for this, discussed in separate articles linked to from this page. ) If you don't know how to make a decision, then don't make it Quick description ( Very often a proof requires one to choose some object that will make the rest of the proof work. And very often it is far from obvious how to make the choice.
Often a good way of getting round the problem is to take an arbitrary object of the given type, give it a name, say $x$, and continue with the proof as if you had chosen $x$. Along the way, you will find that you need to assume certain properties of $x$. Then your original problem is reduced to the question "Does there exist an object of the given type with these properties?" Often, this is a much more straightforward question than the main problem you were trying to solve. ) If an argument looks promising but needs some technical hypothesis, try assuming that hypothesis for now, but aim to remove it later Quick description ( If you have an idea on how to solve a problem, but it requires a hypothesis that one does not actually have, one doesn't necessarily have to abandon the strategy entirely; one may try instead for a conditional result assuming the hypothesis. If that is successful, one can then try to look for ways to remove the extra hypothesis. )
Algorithm question 10-28-2002 #1 Ok, first off I want to state that this is NOT a homework question! I'm wading through an algorithm book that I bought (pleasure reading if you can believe it!) and I'm having difficulty figuring out one of the problems in it, one of those star problems that are supposed to be harder than the rest. This question is only in chapter 3, so it shouldn't take a very high level algorithm. What the question asks for is: given an unsorted array of n integers, and a value x, write an algorithm that tells you whether or not there exist two integers in the array whose sum is EXACTLY x. No problem, just iterate through the array with two nested loops that check every possible configuration, right? Nope, this sucker needs to run in worst case O(n log n) time, which usually means something recursive/divide and conquer needs to be done. Straight iteration runs in O(n^2) time. I can't figure out how to do this. The best I can figure is to first recursively merge sort the array, which is O(n log n) time, then iterate through the array from the beginning, using divide and conquer (binary search) to see if (x-array[i]) exists. This should be faster than the straight iteration, but I feel like there must be a better way since my algorithm has three separate functions, 2 at O(n log n) time and one at O(n) time. Anyone have any ideas? Sorry so confusing, if any of this needs clarification, don't hesitate to ask! What book is this out of? Does the problem say it has to have a runtime of O(n log n)? Mr. C: Author and Instructor The book is Intro to Algorithms by Cormen, and yes, it has to have a run time of O(n log n) 10-28-2002 #2 CS Author and Instructor Join Date Sep 2002 10-28-2002 #3
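One standard O(n log n) answer to the thread's question — a sketch, not the book's own solution — is essentially the poster's idea with the n binary searches replaced by a single two-pointer pass over the sorted array:

```python
# Sort once (O(n log n)), then walk two pointers inward (O(n)).
def has_pair_with_sum(values, x):
    """Return True if two elements (distinct positions) of values sum to x."""
    a = sorted(values)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == x:
            return True
        if s < x:
            lo += 1   # need a larger sum
        else:
            hi -= 1   # need a smaller sum
    return False

print(has_pair_with_sum([8, 1, 5, 3], 9))   # True  (8 + 1)
print(has_pair_with_sum([8, 1, 5, 3], 10))  # False
```

The sort-then-binary-search version described in the post is also O(n log n) overall, so both approaches already meet the book's bound; the two-pointer scan just tidies it into two phases.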
trigonometric form and DeMoivre's Theorem May 19th 2010, 08:14 AM #1 Mar 2010 List the 3 complex roots of -1 by rewriting -1 in trigonometric form and applying DeMoivre's Theorem. You may leave your answers in trigonometric form. $z^3=-1=cis(\pi+2k\pi)=cis(\pi(2k+1))\,,\,\,\text{with}\,\,\,cis(x):=e^{xi}\,,\,\,x\in\mathbb{R}\,,\,\,k\in\mathbb{Z}$ . In order to get all the different elements though we restrict $k=0,1,2$ (why? Check the trigonometric circle). So $z=(-1)^{1/3}=cis(\pi(2k+1))^{1/3}=cis\left(\pi\frac{2k+1}{3}\right)$ (here we used De Moivre's theorem), so now just input $k=0,1,2$ and obtain the three complex roots of $-1$ ... as simple as that! May 19th 2010, 11:29 AM #2 Oct 2009
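The three roots $cis(\pi(2k+1)/3)$ for $k=0,1,2$ can be checked numerically, a quick sketch using Python's cmath (here $cis(t)=e^{it}$ as in the thread):

```python
# The three cube roots of -1 via De Moivre: cis(pi*(2k+1)/3), k = 0, 1, 2.
import cmath

roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]
for z in roots:
    print(z, abs(z**3 - (-1)) < 1e-12)   # each cube is -1 (up to rounding)
```

The k = 1 root is the real one, z = -1; the other two are (1 ± i√3)/2.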
Is any interesting question about a group G decidable from a presentation of G? We say that a group G is in the class F[q] if there is a CW-complex which is a BG (that is, which has fundamental group G and contractible universal cover) and which has finite q-skeleton. Thus F[0] contains all groups, F[1] contains exactly the finitely generated groups, F[2] the finitely presented groups, and so forth. My question: For a fixed q ≥ 3, is it possible to decide, from a finite presentation of a group G, whether G is in F[q] or not? I would assume not, but am not having much luck proving it. [S:One approach would be to prove that, if G is a group in F[q] and H is a finitely presented subgroup, then H ∈ F[q] as well. This would make being in F[q] a Markov property, or at least close enough to make it undecidable.:S] Henry Wilton's comment below makes it clear that being F[q] is not even quasi-Markov, so the above idea won't work. I still suspect that "G ∈ F[q]" is not decidable, but now my intuition is from Rice's theorem: If $\mathcal{B}$ is a nonempty set of computable functions with nonempty complement, then no algorithm accepts an input n and decides whether φ[n] is an element of $\mathcal{B}$. [S:It seems likely to me that something similar is true of finite presentations and the groups they define.:S] John Stillwell notes below that this can't be true for a number of questions involving the abelianization of G. This wouldn't affect the Rips construction/1-2-3 theorem discussion below if the homology-sphere idea works, since those groups are all perfect. Any thoughts? at.algebraic-topology gr.group-theory homotopy-theory lo.logic For my edification, can you give an example of a finitely presented group not in F_3? – Reid Barton Feb 21 '10 at 23:26 Here's an example of an interesting question whose decidability status is unknown. 'Does G have a proper subgroup of finite index?'
– HJRW Feb 23 '10 at 2:28 Your revised question is truly outstanding! I had already voted up your previous question, but now all I want to know is whether Rice's theorem holds for finite group presentations. – Joel David Hamkins Feb 23 '10 at 3:17 (Stillwell's example shot down my earlier comment, so I deleted it.) – François G. Dorais♦ Feb 23 '10 at 5:10 At the risk of sounding churlish, I think the original question, about finiteness properties, was really interesting, and the modified version is really dull. – HJRW Feb 27 '10 at 2:49 5 Answers It seems to me that the analogue of Rice's theorem fails for finitely presented groups $G$ because of questions like: is the abelianization of $G$ of rank 3? The rank of the abelianization of any finitely presented $G$ can be computed by reducing the abelianization to normal form, so this (slightly) interesting question can be decided from the presentation of $G$. (accepted) Yes, you're absolutely right. More generally, any property of G that is computable from the isomorphism type of ab(G) is computable from a finite presentation for G. – Chad Groft Feb 23 '10 at 4:46 +1. Very good. It's too bad we can't have Rice's theorem here... – Joel David Hamkins Feb 23 '10 at 12:05 There's still a reasonable question in that area. For instance, you could restrict yourself to perfect groups. – HJRW Feb 23 '10 at 17:27 Henry, I had had the same thought, and I have asked a question at mathoverflow.net/questions/16532 following up on this. – Joel David Hamkins Feb 26 '10 at 17:38 [This answers Reid's petition for an example in the comments, in answer form so as to be able to preview] Stallings has given in [Stallings, John. A finitely presented group whose 3-dimensional integral homology is not finitely generated. Amer. J. Math. 85 1963 541--543. MR0158917] an example
It follows from this that the 3-skeleton of $BG$ is infinite. The group is $$G=\langle a,b,c,x,y:[x,a], down vote [y,a],[x,b],[y,b],[a^{-1}x,c],[a^{-1}y,c],[b^{-1}a,c]\rangle.$$ Stallings' paper is characteristically short and beautiful, and the proof is a nice application of Mayer-Vietoris (!) 3 Bestvina and Brady gave a beautiful generalization of Stallings's theorem using PL morse theory (they manage to construct groups with all kinds of weird (non)finiteness properties). Their methods become especially simple when restricted to Stallings's example -- an elementary account of this can be found in Bestvina's wonderful "PL Morse theory notes", which are available on his homepage at math.utah.edu/~bestvina/research.html – Andy Putman Feb 22 '10 at 15:39 4 Stallings's example shoots down Chad's approach from the last paragraph. G is equal to the kernel of the map F_2 x F_2 x F_2-> Z. In particular, F_2 x F_2 x F_2 is in F_3 but G is not. – HJRW Feb 23 '10 at 0:14 add comment I don't have a complete answer, but here are some thoughts. The Rips Construction takes an arbitrary finitely presented group Q and produces a 2-dimensional hyperbolic group $\Gamma$ and a short exact sequence $1\to K\to \Gamma\stackrel{q}{\to} Q\to 1$ such that the kernel $K$ is generated by 2 elements. It turns out, by a result of Bieri, that $K$ is finitely presentable if and only if $Q$ is finite. One can improve the finiteness properties of $K$ using a fibre product construction. Let $P=\{(\gamma,\delta)\in\Gamma\times\Gamma\mid q(\gamma)=q(\delta)\}$. By the '1-2-3 Theorem', if $Q$ is of type $F_3$ then $P$ is finitely presentable. I would guess that $P$ has good higher finiteness properties if and only if $Q$ is finite. Perhaps one can use the fact that $P\cong K \rtimes\Gamma$. up vote 3 down vote Even if this is true then it still doesn't quite solve your problem, as we don't have a presentation for $P$. 
To do this, one needs to be given a set of generators for $\pi_2$ of the presentation complex of $Q$, which enable one to apply an effective version of the 1-2-3 Theorem. (In the absence of this data, presentations for $P$ are not computable. Indeed, $H_1(P)$ is not computable.) Question: Does there exist a list of presentations for groups $Q_n$ such that: 1. each group $Q_n$ is of type $F_3$; 2. the set $\{n\in\mathbf{N}\mid Q_n\cong 1\}$ is recursively enumerable but not recursive; 3. but generators for $\pi_2(Q_n)$ (as a $Q_n$-module) are computable? If so, and if I'm right that the higher finiteness properties of $P$ are determined by $Q$, then higher finiteness properties are indeed undecidable. Simply apply the Rips Construction and the effective version of the 1-2-3 Theorem to the list $Q_n$. Gah! Why has my latex not worked? – HJRW Feb 23 '10 at 2:22 I fixed it for you. Put backticks ` around your latex when something goes wrong. – François G. Dorais♦ Feb 23 '10 at 3:24 I'll check these references out in the near future. (I'm not "affiliated with an institution" at the moment, so it'll take a little while.) I know it's possible to get an effective sequence {Sigma_k} of homology n-spheres (for fixed n ≥ 5) for which the set { k : Sigma_k = S^n } is noncomputable. (Since a simply connected homology n-sphere is just the n-sphere, the set is r.e.) Also the fundamental groups would necessarily have zero homology in dimensions 1 and 2. Might it be possible to take Q_k = pi_1(Sigma_k)? What kind of control do we get on pi_2(Q_k)? – Chad Groft Feb 23 '10 at 3:47 Gah. It's gone again. – HJRW Feb 23 '10 at 5:35 Chad, I think the problem with that idea is that we need to know about $\pi_2$ of a presentation complex, not $\pi_2$ of some space with the right $\pi_1$. – HJRW Feb 23 '10 at 5:51 I am fascinated by your question about whether there is a finite group presentation analogue of Rice's theorem.
If true, this would settle your original question and all other similar decidability questions about group presentations, as long as they are questions about the class of groups presented, rather than a question about the presentation itself. Such a theorem would simultaneously answer hundreds of such similar decidability questions. The evidence against an analogue of Rice's theorem would include the fact that there are large classes of groups having very nice decidability features. For example, the class of automatic groups have many decidable properties. They have decidable word problems, and it is decidable whether they are trivial or nontrivial, whether they are infinite or not. I think that the conjugacy problem is decidable, when the presentation of the group and subgroup is automatic. According to the Wikipedia page, the automatic groups include
• Negatively curved groups
• Euclidean groups
• All finitely generated Coxeter groups [1]
• Braid groups
• Geometrically finite groups
Automaticity does not depend on the set of generators. Furthermore, one can sometimes tell from a presentation that one has an automatic group. I recall that the Magnus group theory program is very happy when it finds that a group presentation that you give it is automatic, and it will tell you so (but I don't have so much experience with these programs). But this evidence does not seem to refute the Rice theorem analogue, unless we could tell from a presentation whether it was automatic or not. At least sometimes we can, but I don't think automaticity is decidable in general. So I am holding out for a positive answer to the Rice Theorem analogue. Automaticity is not decidable. The proof that Markov properties are undecidable would apply here. More generally, any condition on a group that applies to the trivial group, and which implies that the group has decidable word problem, cannot be decidable. So hope survives!
Specifically, there is a construction which takes a f.p. group G and a word w in G, and builds a group G_w. This new group is trivial if w is trivial and contains G as a subgroup if w is not trivial. Testing G_w for the condition would test w for triviality, which is clearly a problem. – Chad Groft Feb 23 '10 at 4:40

I think this misses the point somewhat. There are some very general, very well-behaved classes of groups. For instance, a 'randomly chosen' finitely presented group is hyperbolic, which is a very nice sort of group. But the point is that one can't recognise if a given presentation falls into a nice class or not. Indeed, we can't recognise if a given presentation is of the trivial group! That's the point about Markov properties. – HJRW Feb 23 '10 at 5:30

Henry, thanks for your comment. Yes, I understand that point very well. This was why I found the possibility of a Rice's theorem in this context so tantalizing, because if true, it would have explained the phenomenon so completely. But alas, Stillwell has refuted it. – Joel David Hamkins Feb 23 '10 at 13:57

I've been told that the answer to a related question (namely, can one decide if a finitely presented group has a finite classifying space) is no, and that the proof can be extracted from the book ``Computers, rigidity, and moduli''. I would guess that one can also consult the book for a proof of the original question, but the book is currently unavailable to me.

Do you mean this book - books.google.com/… – François G. Dorais♦ Mar 20 '10 at 5:31

It's possible. I have a copy of that book; perhaps I should take a closer look. – Chad Groft Mar 21 '10 at 1:38
Function prove

September 26th 2009, 12:51 PM
Show that the function f defined by f(x)=0 if x is irrational and f(x)=1 if x is rational does not have a limit at any point. Any suggestions?

September 26th 2009, 01:35 PM
Let $x_{n}=\bigg(1+\frac{1}{n}\bigg)^{n}$ for all $n\in\mathbb{N}$; then we know that $x_{n}\in\mathbb{Q}$ for all $n\in\mathbb{N}$ and $x_{n}\to\mathrm{e}\in\mathbb{Q}^{c}$ as $n\to\infty$. Therefore, we have $f(x_{n})=1$ for all $n\in\mathbb{N}$, but $\lim\limits_{n\to\infty}f(x_{n})=1\neq 0=f(\mathrm{e})$. Thus $f$ is not continuous at $\mathrm{e}$. However, you can give a good proof for the general case, since for any $\xi\in\mathbb{R}$ you may find $\{\sigma_{n}\}_{n\in\mathbb{N}}\subset\mathbb{Q}$ and $\{\zeta_{n}\}_{n\in\mathbb{N}}\subset\mathbb{Q}^{c}$ such that $\sigma_{n},\zeta_{n}\to\xi$ as $n\to\infty$. I hope this can help you...

September 26th 2009, 02:03 PM
You can solve this using the facts that:
(a) for any $s,t \in \mathbb{R}, \exists r \in \mathbb{Q} \ : \ s < r < t$
(b) for any $r \in \mathbb{Q}^c$, there exists a sequence $\left\{a_n\right\}_{n=1}^{\infty}$ such that $\forall n \in \mathbb{N}, a_n \in \mathbb{Q}$ and $a_n \rightarrow r$
That is to say, for any irrational number there is a sequence of rationals that converges to it.
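The two-sequence argument the replies sketch can be completed with a standard ε–δ contradiction (filling in the step left to the reader):

```latex
Fix $\xi \in \mathbb{R}$ and suppose $\lim_{x\to\xi} f(x) = L$ for some $L$.
Take $\varepsilon = \tfrac{1}{2}$. Then there is $\delta > 0$ such that
$|f(x) - L| < \tfrac{1}{2}$ whenever $0 < |x - \xi| < \delta$.
By density, choose a rational $\sigma$ and an irrational $\zeta$ in
$(\xi - \delta, \xi + \delta) \setminus \{\xi\}$. Then
\[
  1 = |f(\sigma) - f(\zeta)|
    \le |f(\sigma) - L| + |L - f(\zeta)|
    < \tfrac{1}{2} + \tfrac{1}{2} = 1,
\]
a contradiction. Hence $f$ has no limit at $\xi$.
```

Since $\xi$ was arbitrary, this proves the general case at every point, not just at $\mathrm{e}$.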
Numerical Modeling of Crystalline Silicon Solar Cells

Research Projects for Undergraduate Students.

Modeling of a silicon solar cell is a complex task, which involves using different simulation tools and techniques. Generally speaking, a silicon solar cell is a semiconductor device, the operation of which is governed by the set of drift-diffusion equations. The solution of these equations gives the performance of the cell, i.e. the amount of solar energy it can convert into electricity, as well as lots of other useful information. In fact, to simulate solar cells engineers use the same kind of tools they use to simulate transistors for CPUs and memory. However, there are two important differences. The first is that a silicon solar cell is a large-scale device, with an area of tens of centimeters, while the dimensions of modern transistors are usually described in nanometers. And because the larger the device, the more equations the computer has to solve, it turns out that modern computational power is not enough to simulate the entire solar cell. This is the reason why the cell is divided into unit domains, which are solved in a numerical simulator. The solutions of the unit domains are further combined into a circuit to simulate the entire solar cell. The second difference is that the model of a solar cell has to consider not only electrons and holes interacting with the crystal, external bias and each other, but also photons, which come from the Sun and bring their energy to excite the carriers inside the cell. Because optical excitation is a rich process, engineers usually use a separate tool to model it and then load the results of this simulation into the device simulator. Thus, the overall process can be split into three separate parts: simulation of optical generation, simulation of a unit domain of the solar cell, and circuit simulation, where the circuit mimics the entire solar cell.

Project 1.
Optical simulation of silicon solar cells

The goal of the first project is to learn the optical ray tracer Sunrays in order to simulate the optical generation rate for different silicon solar cell structures. Sunrays is a sophisticated program designed to trace the path of each ray from the solar spectrum in order to figure out the position inside the cell where it will be absorbed and will give birth to an electron-hole pair. The range of structures will include cells textured with upright pyramids, covered with various antireflection coatings, having different thicknesses and different metal and dielectric layers on the rear.

Project 2. Circuit simulations for silicon solar cells

The goal of the second project is to create a Spice model of a solar cell, where the current-voltage characteristics of previously simulated unit domains are combined in one big circuit, taking into account the series resistance of the front and rear metallization and edge effects. Prerequisites: knowledge of circuit simulation tools.

To apply, please send email to Stas Herasimenka (sherasim at asu.edu). Write a couple of words about yourself and why you would like to do research at Solar Power Lab.
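As a rough illustration of the kind of lumped circuit model Project 2 builds toward, here is a minimal single-diode solar cell I-V model in Python. This is a textbook sketch only, not the Spice netlist or any Sunrays output described above; all parameter values (`i_ph`, `i_0`, `n`) are invented for illustration, and series/shunt resistances are neglected so the equation stays explicit.

```python
import numpy as np

def iv_curve(v, i_ph=0.035, i_0=1e-12, n=1.0, t=300.0):
    """Single-diode solar cell model (series/shunt resistance neglected).

    I(V) = I_ph - I_0 * (exp(V / (n*Vt)) - 1), with Vt = kT/q.
    """
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    v_t = k_b * t / q    # thermal voltage, ~25.85 mV at 300 K
    return i_ph - i_0 * (np.exp(v / (n * v_t)) - 1.0)

def v_oc(i_ph=0.035, i_0=1e-12, n=1.0, t=300.0):
    """Open-circuit voltage, from setting I(V) = 0 in the model above."""
    k_b, q = 1.380649e-23, 1.602176634e-19
    return n * (k_b * t / q) * np.log(i_ph / i_0 + 1.0)

# Sweep bias from short circuit to open circuit and locate the
# maximum-power point, as a circuit simulator would for each unit domain.
v = np.linspace(0.0, v_oc(), 200)
i = iv_curve(v)
p = v * i
v_mpp = v[np.argmax(p)]
```

At V = 0 the model returns the short-circuit current (here equal to the photocurrent, since shunt resistance is ignored), and at `v_oc()` the current crosses zero, which is the sanity check a real unit-domain I-V table would also have to pass.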
Applied Psychology, Research Methods, and Statistics

New R offerings, R blog
January 29, 2009

I’ve been a huge fan of the open-source statistics package R for many years: large user base, lots of high-quality modeling functions, flexible graphics, etc. The learning curve is a little steep (especially for things like graphics and avoiding computationally costly loops in code) and there are some things R still doesn’t handle particularly gracefully (e.g., computations on very large data sets, making use of common multicore/SMP capability). Anyway, I saw on some other blogs that a new company, REvolution Computing, is offering a version of R that has some tweaks worth looking at, including optimized math libraries which may speed up certain operations and allow for some degree of multithreading. There is also a non-free “Enterprise” edition that can finally take advantage of 64-bit Windows and makes using the “MP” in SMP systems a bit easier; of course, whether those changes make a difference in your analyses will depend on the kind of code you run. Still, I run into out-of-memory errors often enough that I think it would be a nice boost for my work. Aside from the product, they also have a pretty decent blog, which is worth checking out.

New book on team effectiveness in organizations, etc.
December 6, 2008

I just received my copy of Team Effectiveness in Complex Organizations, which includes a chapter on social network analysis I helped co-author (<- shameless plug). It’s a general overview of issues and concepts in social network analysis (SNA), with some examples of applications to the study of teams. It’s written for people who don’t know a lot about networks, so I think it’s not a bad place to start if you want a gentle introduction to some fundamental concepts.
There are a couple of things I like about the chapter, in no particular order: (1) it emphasizes the use of networks as a way of operationalizing and quantifying social context across multiple levels; (2) I think it does a decent job of reminding people there’s a lot more to social structure and network analysis than just centralization and brokerage; and (3) it gives some decent examples of more recent types of network analysis techniques, such as exponential random graph models (ERGMs, sometimes called p* models) that haven’t yet percolated into widespread use in psychology and management (although with the publication of articles introducing ERGM concepts, like Contractor, Wasserman, & Faust’s 2006 article in the Academy of Management Review, this is likely to change). In fact, I think this last point is one of the most important – there are clearly a lot of theories in psychology and management about social, cognitive, and physical interdependency that involve multiple levels of analysis (individuals, dyads, triads, cliques, teams, organizations, etc.), but I’ve always been a little disappointed that most of the SNA articles I see published in journals like JAP and AMJ tend to focus on only a handful of concepts like centrality, and to a lesser extent, brokerage. It’s not surprising – these are simple, powerful concepts, after all – but as articles like Contractor et al. (2006) make clear, there’s a lot more out there. Probably the biggest impediment to a more widespread interest in these models is accessibility, in terms of (1) ease of use and (2) interpretability. The first issue is being addressed by a number of researchers with statistics software such as the statnet package for R, the StOCNET suite, and PNet; these are making it much easier for researchers to test complicated structural models (StOCNET even provides for estimation of certain network dynamics in longitudinal networks – very cool stuff).
Of course, “easier” is a relative term, and even though I am comfortable with all of these packages, I wouldn’t call them “easy” to use, even for an R and SAS wonk like myself. It definitely takes a lot of time to become comfortable with them, and for people who are interested in a given topic (say, emergent leadership relations – one of my personal areas of interest), it can be hard to justify taking the time to learn the models and the software, especially if you aren’t sure you will be using the technique frequently in the future. The second issue is much more difficult. While it’s possible to take a network and have it spit out a set of results, actually interpreting what the results mean can still be quite hard – if you have a significant level of reciprocity, great, but what does that 1.34 mean, exactly? When there is also a positive alternating k-star parameter? There is information out there to help you figure it out, but it’s not particularly easy to find unless you know the ERGM literature very well, and in the case of interpreting the more recently developed ERGM configurations (like alternating k-stars), there’s simply not that much. The people who work to develop these types of models (my co-author and former advisor, Laura Koehly is one such person) do an incredibly good job of helping to create statistical models that can be used to test a wide variety of important research questions, but speaking as a person whose research involves actually using them, I think that the average researcher interested in testing these models would be more than a little lost when faced with describing to a reviewer what a particular set of numbers actually means, and there’s not enough help out there for people without personal pipelines to the random graph gurus that develop these things (I’ve been very lucky in that regard.)
With the development of random graph models for cognitive social structures, multiple networks, longitudinal network data, and two-mode networks, things are going to get even more complicated before too long… but if you are a researcher thinking about using these sorts of models, don’t let that scare you! There’s a lot of power to be had from these sorts of models, and they are reasonably accessible nowadays.

Valuing social networks
June 23, 2008

Although a good portion of my research revolves around social networks, I don’t usually pay much attention to the “social network” websites (Facebook, MySpace, etc.) – it’s not really something that overlaps with my interests, outside of the name. However, a recent post by Michael Arrington on the financial value of social network sites (which I found linked over at Andrew Gelman’s blog) got my attention. Basically, Arrington suggests that the value of these services should be related to the size of the network and where the users live: networks with more users in higher-value markets (operationalized in terms of per-capita ad spend in the various markets) should be worth more than networks with users in lower-value markets. I don’t know a lot about the business of advertising (as I’m sure this post shows), and this seems like a reasonable analysis, but I have to wonder how much these sorts of valuations would change if the actual structural properties of the network were taken into account. What’s the point of an ad? When an advertiser buys them, are they just trying to reach individuals? Or are they also trying to tap into cliques and subcommunities and reach other targets through word of mouth?
If the latter is important, then there would probably be significant marginal value attached to the level/type of connectedness in the network: a sparse network wouldn’t be worth much more than a normal website with X number of users, but a densely connected network with high levels of inter-user interactions and tie formation might be worth more… depending on the nature of the advertising, I suppose. Moreover, different communities within the online network could tend to have slightly different types of connection styles – for example, differential homophily based on gender, or different levels of centralization based on personal expertise in certain areas. I imagine professional (or topic-based) networks developing centralized tendencies and grouping around a few key leaders in a way that affiliation-based networks of, e.g., college friends might not. To the extent that you buy into the concept of ‘opinion leaders’ and ‘key players’, these sorts of structures are tremendously important. I imagine the data’s all there in a server somewhere – I wonder whether it’s made available to advertisers? I could easily picture network sites offering discounts to advertisers for ads targeted to new users or peripheral members of the network, but charging more for access to members with large numbers of connections to certain segments. I could also see ads on social network sites that are displayed based on the internet viewing habits or click-through habits of your network peers and or local clique, not just your own (this would be sort of like the advertising equivalent of Amazon’s or Netflix’s recommendation systems). Of course, for all I know, people do this sort of stuff already, or it might be stupid for a variety of reasons (off the top of my head, selling statistics about actual network connections – even anonymous ones – might raise privacy issues for some people). 
Still, I find it a little odd that for all the talk about the ‘power of networks’ and the constant news these social networking sites get, I never hear a discussion about tools or applications that really make use of the structural properties of the network.

Social influence modeling using network autoregression
May 17, 2008

Well, I’ve been back from conference for several weeks, but only just got time to come back to post. The conference was nice, though I wasn’t able to see all the presentations and papers I would have liked – par for the course, I guess. The conference organizers do a great job, all things considered, but somehow the topics I want to listen to get scheduled at about the same time, meaning I only get about half the utility out of these conferences that I’d like. I think I may have to shell out for the audio recordings they make of them. To follow up an earlier post, I thought people might like to see a bit more about how one can use network data (social networks, proximity data, etc.) to model social influence in groups. The basic question these models address is pretty straightforward: Given a set of actors and data on their ideas, attitudes, and opinions, do people that share certain types of ties tend to have more similar attributes/preferences/attitudes, etc.? As it turns out, there are actually several different ways we can go about addressing this basic question. For example, we can do it using simulation-based methods (I’ve seen this more often in epidemiology and marketing research), where we take a network with certain structural characteristics, and plot how long it takes for ideas, behaviors, etc. to spread to certain percentages of nodes in the network, conditional on certain assumptions about the parameters of influence/infection and node “recovery” (see this previous post on SI/SIR/SIS-type influence models for a rough overview of some of these parameters.)
Like most simulation-based methods, this is nice because it allows us to see the implications of different sorts of assumptions about the way influence happens. Thus, we can vary our assumptions, change the model, and see if we observe network dynamics that occur in real-world networks: for social influence models, this means we want our models to be able to predict things like convergence (and non-convergence) of opinions over time. Simulation-based techniques like this can be quite powerful, but make strong assumptions about the way that influence must occur, besides being hard to test for most typical hard-working social scientists. (Although that is slowly changing.) Another way of modeling social influence is through statistical models that take into account the interdependence between units, and estimate the extent to which that interdependence results in more similar (or dissimilar) attitudes or opinions. Again, there are several ways of doing this, but one of the more widely used (at least, to my knowledge) models is the network effects autoregression model. I’ve sometimes seen this model and related models under slightly different names. For example, geostatistics researchers have names like “spatial regression model”, “spatial error model”, etc., but it comes down to the same thing – predicting values at a point based on the values of that points’ neighbors. Geostatistics people define neighbors using geographical contiguity (for example, neighboring counties), but there’s nothing stopping us from using alternative definitions of neighbors – for example, based on social networks. 
The basic network effects model is given by the equation $y = \rho Wy + \epsilon$, where y is a vector of individual outcomes (for example, organizational attitudes, turnover, etc.), W is the n x n weight matrix which defines the way in which individual scores are believed to be interdependent, $\rho$ is the autoregressive parameter which defines the extent of social influence, given the matrix W, and $\epsilon$ is a vector of independent disturbances. Similarly, we can define the mixed regressive-autoregressive model $y = \beta x + \rho Wy + \epsilon$, where $\beta$ and x are typical regression parameters and predictors, respectively. (Note that in this model, if there is no social influence effect, e.g., if $\rho = 0$, then the model reduces to a standard regression.) A third type of autoregression involves modeling social influence effects through interdependence in the error term $\tau$ associated with the regression of the dependent variable on the exogenous predictors X; this is sometimes referred to as the network disturbances model, because it provides for an estimate of the interdependence in individual deviations from their predicted score, based on other predictors: $y = \beta X + \tau$, $\tau = \rho W\tau + \epsilon$. We can also mix these models – for example, by creating a mixed regressive-autoregressive model where interdependencies exist both between the absolute values of the y-scores (as in the network effects model) AND between the deviations from the expected y-score, thus: $y = \beta X + \rho_{1} W_{1}y + \tau$, $\tau = \rho_{2} W_{2}\tau + \epsilon$, where $W_{1}$ and $W_{2}$ are different weight matrices. The exact interpretation of the model depends on the way in which you define W; given a non-directional binary social network, for example, you may define the weight matrix to be the adjacency matrix, where the entries $w_{ij} = 1$ if i and j share a tie, otherwise $w_{ij} = 0$.
In a network effects model using this definition, $\rho$ is the extent to which an individual y score is related to the sum of the y scores of that individual’s neighbors in the social network. The most common W matrix I am familiar with is a row-standardized matrix, where the standard adjacency matrix/sociomatrix (e.g., the matrix that defines the connections in your social network, usually binary, but possibly valued) is altered so that each row sums to 1. Using this definition of W, we are essentially estimating the relationship between an individual’s y score and the mean of their friends’ scores; this represents the assumption that influence upon an actor is evenly divided among an individual’s partners in the network. Obviously, the specific definition of W used makes a big difference in the way that the autocorrelation parameter $\rho$ is interpreted. The nice thing about these models is that they are flexible – for example, allowing for many different ways of defining actor interdependencies, like direct ties or shared social positions – and they are pretty easily estimated using maximum likelihood procedures in programs like R, or using MCMC estimation in software like WinBUGS/OpenBUGS. However, some of the statistical properties of these estimators are still a little uncertain; one of the research projects I’m currently finishing up explores what these properties are, and how they are likely to impact research into organizational networks. (Time permitting, I’ll post a draft of the paper – dissertation comes first, though.) As with any regression-based/correlational method, the limitations are pretty obvious, of course – if you don’t have longitudinal data, you can’t really say for certain that you are dealing with a true “social influence” effect, or whether the apparent interdependencies in responses aren’t being driven by some third shared variable. Still, it’s a useful model for exploring possible influence effects in a network.
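As a concrete sketch of the mixed regressive-autoregressive model with a row-standardized W, the following Python snippet simulates data from its reduced form and verifies that the simulated scores satisfy the structural equation. The network, β, and ρ values are invented for illustration, and estimation of ρ is deliberately not shown:

```python
import numpy as np

rng = np.random.default_rng(42)

# A small undirected friendship network as a binary adjacency matrix.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

# Row-standardize: each actor's influence is split evenly among partners,
# so (W @ y)[i] is the mean of actor i's neighbors' y-scores.
W = A / A.sum(axis=1, keepdims=True)

n = A.shape[0]
rho, beta = 0.4, 2.0                  # illustrative parameter values
x = rng.normal(size=n)                # one exogenous predictor
eps = rng.normal(scale=0.1, size=n)   # iid disturbances

# The model y = beta*x + rho*W y + eps has the reduced form
#   y = (I - rho*W)^{-1} (beta*x + eps),
# which exists whenever 1/rho is not an eigenvalue of W.
y = np.linalg.solve(np.eye(n) - rho * W, beta * x + eps)
```

With row-standardization, each actor's score is β times their own predictor plus ρ times the mean of their neighbors' outcomes, which is exactly the "evenly divided influence" assumption described above.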
Finally, it’s worth pointing out (again) that this is only one way of estimating potential social influence effects in social networks; a variety of other procedures are available that provide slightly different ways of conceptualizing and estimating social influence effects, using both cross-sectional and longitudinal data. For example, if you wished to study social influence effects between husbands’ and wives’ purchasing decisions, and had groups of independent husband-wife dyads, you could use something like HLM pretty easily. If you had longitudinal data on both networks and actor attributes/attitudes, you could use something like Tom Snijders’s actor-oriented random graph models to separate out social influence and social selection effects across time. There are also the simulation-based techniques described at the top of the post, and even agent-based modeling for people who want to get really fancy in terms of modeling network dynamics.

Working, drafts, and factors of bloggish non-production
March 24, 2008

Well, the SIOP conference is coming up in just a few weeks. I’m busy preparing, hence the lack of updates. This year, in addition to the usual poster sessions, I’m going to be on a panel to discuss my chapter on network analysis for an upcoming book on team research. I’m looking forward to it – this will be my first time on a panel discussion, and I’m looking forward to answering people’s questions. One of the things I’ve noticed is that the “new resurgence” in network research has really taken hold in management, but it hasn’t become quite as widely used in industrial/organizational psychology research. I’m hoping that the book chapter will be able to convince a few more people why methods for social network analysis are worth learning – and for those I/O people who have run across network analysis before, why there’s more to life than just measuring individual centrality. We’ll see how it goes.
Oh, and I finally decided to bite the bullet and learn $\LaTeX$, so I can finally make all the beautiful formulas I want, like the basic normal distribution function: $P(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\left(x - \mu\right)^2 / \left(2\sigma^2\right)}$.

Video resource roundup
March 2, 2008

Okay, things being as busy as they are, there’s not much time for a long post now. Still, I recently ran across mentions of two interesting video-based sites I thought some people might find interesting. VideoLectures is just what it sounds like – a site where people have made videos of various lectures available. As of right now, there’s not a lot of psychology material available for viewing (not a whole lot for the social sciences in general, unfortunately). Still, there are some lectures that are probably of wider interest there; for example, lectures on Markov Chain Monte Carlo methods by Christian Robert (videos linked here). There are also a decent number of lectures related to statistical modeling and graphical models which may interest a number of people. The biggest section is for computer science, but if you search, there are lectures on a pretty good number of things that psychologists and other social scientists might find useful, like the use of the stats program R for data mining. Another place mentioned to me by several people is TED. It’s a collection of talks by people from all sorts of areas on all sorts of topics, only some of which pertain directly to science. It’s mainly “big picture” kind of talks, but there are certainly some big names, and it’s nice to be able to watch that kind of thing on demand…. As long as I’m covering “broad” not-really-science sites, a third site I watch all the time is C-SPAN’s BookTV.
They archive most of their old programs, and you can find some great interviews and talks on all sorts of different topics, especially history, politics, and public policy. Not a lot on actual science, but lots of examples relevant to (and applications of) social science research. Definitely worth browsing if you find history and politics relevant to your work, or even just interesting.

Random resource round-up
February 20, 2008

From past students in psychology and management (and some of my non-academic friends), I often got the question about where to go to find relevant articles. There are, of course, all the major repositories (Web of Science, Psychinfo, etc.), but I find that more and more I have been using alternative resources for identifying interesting articles, ideas, methods, etc. One of the big ones is Google Scholar (in fact, there have been some neat uses of GS recently, for example, Publish or Perish), which has been a great resource, and probably my single most-used research tool over the last year. Together with Google Books, I think it’s nearly doubled my monthly research-related book budget, something I can ill-afford as a grad student. I’m thinking of sending Google my bill. CiteSeer is another big source for citations and draft papers that’s been quite invaluable. It focuses more on topics like computer science and IS, but I’ve been surprised by the amount of information I’ve gotten that’s been useful to my own research in social network analysis – particularly when it comes to questions of statistical modeling and computation, but also for helping me find theories and points of view on certain topics that go outside the particular theoretical “sandboxes” in which I was raised. There is also the newer site BizSeer for academic business articles, but I have to admit I haven’t really tried it yet. For those types of articles, Google Scholar and library resources seem to work quite well.
Still, the main reason I love CiteSeer is for the fast and easy websurfing from link to link; I go for one article, and half an hour later, I’m reading articles from entirely separate domains by authors I’ve never heard of on theories I didn’t know about. If BizSeer is as good at that particular aspect of searching as CiteSeer, I imagine I’ll be using it much more. There are also two other smaller research portals I’ve run across, which have helped me to find some interesting new research, often in the form of harder-to-find (for me, anyway) research reports: Scientific Commons and the Social Science Research Network. I had heard of Scientific Commons before, but only recently started using it, and I was surprised by the number of articles I got back. SSRN seems harder to navigate, but has helped me browse for research reports that sometimes get lost in my usual Google searches, or which haven’t been indexed at CiteSeer. Frankly, I’ve been pleasantly surprised by the amount of useful material I’ve run across in the kind of research reports that aren’t covered by the usual academic resources, and it’s just getting easier to find those kinds of articles as time goes on. I’m still not entirely certain how well-received the use of such citations would be in my area, however. I see it all the time in statistics-related articles, but hardly ever in the psychology and management articles I read, despite the large number of relevant research reports out there in those domains. I don’t know if that’s because finding the articles is hard, or because they’re somehow deemed to be “poor” citations for academic articles.
Missing square illusion

The missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures. It depicts two arrangements of shapes, each of which apparently forms a 13x5 right-angled triangle, but one of which has a 1x1 "hole" in it. The key to the puzzle is the fact that neither of the 13x5 "triangles" has the same area as its component parts. The four figures (the yellow, red, blue and green shapes) total 32 units of area, but the triangles are 13 wide and 5 tall, which equals 32.5 units. The blue triangle has a ratio of 5:2, while the red triangle has the ratio 8:3, and these are not the same ratio. So the apparent combined hypotenuse in each figure is actually bent. The amount of bending is around 1/28th of a unit, which is very difficult to see on the diagram of this puzzle, though just about possible. According to Martin Gardner, the puzzle was invented in 1953 by Paul Curry, a New York City amateur magician, and it has been known ever since as Curry's paradox. You may have noticed that the integer dimensions of the parts of the puzzle (2, 3, 5, 8, 13) are successive Fibonacci numbers. Many other geometric dissection puzzles are based on a few simple properties of the famous Fibonacci sequence.
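The arithmetic behind the illusion is easy to check directly with exact rational arithmetic. This is a small illustrative script; the areas of the two triangles follow from the 5:2 and 8:3 ratios given above, while the 7/8 split between the two polyomino pieces is an assumption taken from the standard version of the puzzle:

```python
from fractions import Fraction as F

# Areas of the four pieces in the classic 13x5 arrangement.
blue_triangle = F(5 * 2, 2)    # 5 wide, 2 tall -> area 5
red_triangle  = F(8 * 3, 2)    # 8 wide, 3 tall -> area 12
yellow_piece  = F(7)           # the two polyomino pieces
green_piece   = F(8)           # (assumed standard 7 + 8 split)

pieces_area  = blue_triangle + red_triangle + yellow_piece + green_piece
big_triangle = F(13 * 5, 2)    # the apparent 13x5 right triangle

# The pieces cover 32 units, but the "triangle" would be 32.5 units:
gap = big_triangle - pieces_area

# The hypotenuse is bent because the two triangles have different slopes.
slope_blue = F(2, 5)
slope_red  = F(3, 8)
bend = slope_blue - slope_red  # slopes differ by 1/40
```

Each arrangement therefore misses the true triangle's area by half a unit, one from above and one from below, which accounts for the full 1x1 hole between the two figures.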
{"url":"http://www.optical-illusions.info/illusions/Missing_square_illusion.htm","timestamp":"2014-04-19T23:14:37Z","content_type":null,"content_length":"3389","record_id":"<urn:uuid:4aee338e-20a2-470e-96c1-3f2a0a47c9ed>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Financial Math FM/Time Value of Money From Wikibooks, open books for an open world A basic definition of interest is the price paid for obtaining, or price received for providing, money or capital in a credit transaction. For example, today Brian borrows $10,000 from Ivan and agrees to pay him $10,500 in one year. In this example the $10,000 is the principal that Ivan invested over a period of 1 year. After 1 year, the accumulated value is $10,500. $500 is the amount of interest Brian paid Ivan. Interest Rate (Rate of Interest)[edit] An interest rate or the rate of interest is how much is paid by a borrower, expressed as a percent of the borrowed capital. In the previous example $500 of interest accrued at the end of 1 year on a $10,000 loan. $\frac{500}{10,000} = 5\%$ Thus the interest rate for this loan was 5%, and unless otherwise stated in the examination question, rates are expressed as annual rates. ^[1] Loans can be, and often are, over multiple periods, and the exact terms of the loan can depend on the interest type. Simple Interest[edit] Brian and Ivan can agree to use simple interest. The final amount of interest paid is simply the original loan amount times the interest rate times the number of periods. $I_{simp} = k \cdot i \cdot t$ Ivan invests $10,000 into a bank account which pays simple interest with an annual rate of 5%. Find the balance in Ivan's account after 2 years. $I = 10,000 (0.05 \cdot 2) = 1,000$, and $10,000 + 1,000 = 11,000$. The final balance in Ivan's account will be his original investment plus the interest accrued over two years. This comes out to $11,000. Compound Interest[edit] Compound interest arises when interest is added to the principal, so that from that moment on, the interest that has been added also itself earns interest. This addition of interest to the principal is called compounding.
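Before moving on to compound interest, the simple-interest example above can be checked in a couple of lines (the function name is my own):

```python
def simple_interest_balance(principal, rate, years):
    """Accumulated value under simple interest: k + k*i*t."""
    return principal + principal * rate * years

# Ivan's account: $10,000 at 5% simple interest for 2 years.
assert simple_interest_balance(10_000, 0.05, 2) == 11_000
```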
A bank account, for example, may have its interest compounded every year: in this case, an account with $1000 initial principal and 20% interest per year would have a balance of $1200 at the end of the first year, $1440 at the end of the second year, and so on. In the case of compound interest, the accumulated value at the end of t periods is $A(t) = k (1+i)^t$ Ivan invests $10,000 into a bank account which pays compound interest with an annual rate of 5%. Find the balance in Ivan's account after 2 years. $A(t) = k (1+i)^t$, so $A(2) = 10,000 (1+0.05)^2 = 11,025$. The final balance in Ivan's account comes to $11,025. Amount and Accumulation function[edit] Amount function[edit] A(t) is defined as the amount function, where A(t) denotes the value of an investment at time t. For this function there are two general assumptions. • For each $t \geq 0, A(t) > 0$. • A is nondecreasing. It should also be noted that the amount of interest earned over the period $[s,t]$ is $A(t) - A(s)$. The effective rate of interest earned in the period $[s,t]$ is $\frac{A(t) - A(s)}{A(s)}$ Ivan invests $2000 into a bank account with an interest rate of 5% and has an amount function of $A(t) = k e^{0.5 \cdot t \cdot i}$ What is the effective rate of interest in the third year? $i_{[s,t]}=\frac{A(t) - A(s)}{A(s)} = \frac{k e^{0.5 \cdot t \cdot i} - k e^{0.5 \cdot s \cdot i}}{k e^{0.5 \cdot s \cdot i}}$, so $i_{[3,4]}=\frac{e^{0.5 \cdot 4 \cdot 0.05} - e^{0.5 \cdot 3 \cdot 0.05}}{e^{0.5 \cdot 3 \cdot 0.05}} = 0.0253$ The effective interest rate in the third year is 2.53%. Accumulation function[edit] a(t) is defined as the accumulation function, where a(t) denotes the value of an investment at time t whose initial investment is 1. This is simply a special case of the amount function. The amount function and accumulation function share a simple relationship. $A(t) = k \cdot a(t)$ An investment's accumulation function is $a(t) = t^2$.
The value of the investment at time t=4 is 80. What was the value of the initial investment? $A(t) = k \cdot a(t)$, so $A(4) = k \cdot a(4)$, i.e. $80 = k \cdot 4^{2}$ and $k = 5$. The initial investment was 5. Present and Future value[edit] As we've seen, money has a time value, so that $1 today is worth more than $1 a year from now. Given an effective annual interest rate of i compounded annually over t years: $PV(1+i)^t = FV$ where PV is the present value and FV is the future value. $(1+i)$ is called the interest factor. Note that $PV = FV (1+i)^{-t}$. The quantity $v=\frac{1}{1+i}$ is called the discount factor and is denoted as $v$. In exchange for your car John has promised to pay you $5,000 in 1 year and $10,000 in 3 years. Using an annual effective interest rate of 5%, find the present value of these payments. $PV = 5,000v + 10,000v^3$ with $v = 1.05^{-1}$, giving $PV = 13,400.28$. Using accumulation function $a(t)$ notation • $\frac{a(n)}{a(n-1)} =$ the n-th year interest factor. • $i_n = \frac{a(n)-a(n-1)}{a(n-1)} =$ the effective rate of interest in the n-th year. • $v_n = \frac{a(n-1)}{a(n)} =$ the n-th year discount factor. • $d_n = \frac{a(n)-a(n-1)}{a(n)} =$ the effective rate of discount in the n-th year. Note that • $v_n = 1- d_n$ • $d_n = \frac{i_n}{1+i_n}$ • $i_n = \frac{d_n}{1-d_n}$ So far, interest has been defined as the payment required at the end of the period; however, there is also the rate of discount, denoted by d, which is a measure of the interest paid at the beginning of the period. For example, if Brian borrows $100 at a rate of interest of 10% over a period, then he will receive $100 now and he will have an obligation of $100 at the end of the period plus $10 in interest. However, if Brian borrows $100 at a 10% rate of discount, he will only receive $90 now and he'll have an obligation of $100 at the end of the period. $d_n = \frac{A(n)-A(n-1)}{A(n)} = \frac{I_n}{A(n)}$ Brian borrows $45 at a 5% annual rate of interest. Find the annual rate of discount.
$d = \frac{I_n}{A(n)} = \frac{45 \cdot 0.05}{45 \cdot 1.05} = 0.0476$ Example 1 If i = 6.5%, what are d and v? $d = \frac{i}{1+i} = \frac{0.065}{1.065} = 0.06103$ and $v = (1+i)^{-1} = 1.065^{-1} = 0.93896$, OR $v = 1 - d = 1 - 0.06103 = 0.93897$ Example 2 What is the present value of $9,000 to be received in 10 years at an annual effective rate of discount of 8%? $PV = v^n \cdot FV = (1-d)^n \cdot 9,000 = (1-0.08)^{10} \cdot 9,000 = 3,909.5$ Example 3 At time t = 0, Ivan deposits $260 into a fund with an annual discount factor of 0.95. Find the fund's value at time t = 3.75. $PV = v^n \cdot FV$, so $260 = 0.95^{3.75} \cdot FV$ and $FV = 315.15$. Example 4 Thor invests $250 into a bank account. One year later, his account is $260. 1. Find the effective annual interest rate Thor earned that year. 2. Find the effective annual discount rate Thor earned that year. $i = \frac{260-250}{250} = 0.04$ and $d = \frac{260-250}{260} = 0.03846$ Nominal vs Effective rates[edit] So far when dealing with compound interest we've only seen the interest compounded annually. That is to say, the interest is only added to the principal once a year. To illustrate this, Ivan invests $1000 into a bank account that offers 10% interest compounded annually. At the end of the first year, the account is $1000 + $100 in interest. At this point the principal earning interest is $1100, so that at the end of the second year the interest earned is $110 and the balance is $1,210. However, the interest could be compounded more frequently. Suppose that instead of being compounded annually, the interest is compounded semi-annually. At the end of 6 months, the interest is $50 and added to the $1000 principal. The new principal is $1,050. At the end of the first year, the interest earned over the last 6 months is $52.50. After one year, the bank account contains $1,102.50. Notice that the interest rate is still 10% annually, however the effective annual rate of interest is 10.25%.
$i = \frac{1,102.5-1,000}{1,000} = 0.1025$ 10% is the nominal interest rate and 10.25% is the effective rate. A nominal rate of interest compounded m times a year is denoted as $i^{(m)}$. The relationship between the effective rate and nominal rate of interest is: $i_{effective} = {(1+\frac{i^{(m)}}{m})}^{m} - 1$ Example 1 Loki borrows $200 from Thor and repays him $205 in one month. 1. Find the monthly effective interest rate that Loki is charged for the loan. 2. Find the annual nominal interest rate compounded monthly. 3. Find the annual effective interest rate. $i = \frac{205-200}{200} = 0.025$, $i^{(m)} = 12 \cdot 0.025 = 0.3$, and $i = (1.025)^{12} - 1 = 0.3449$ The monthly effective rate was 2.5%. The nominal rate will simply be 12 times the monthly effective rate, which is 30%. Finally, the effective annual rate is the monthly effective rate compounded 12 times, which is about 34.5%. Xiao Yu invests $7,500 into a bank account with a nominal interest rate of 8% convertible quarterly. How much does she have in the account after 6 years? $7,500(1+\frac{0.08}{4})^{6 \cdot 4} = 12,063.28$ Inflation and real rate of interest[edit] Force of interest[edit] So far we've said that the effective rate of interest earned in a period n is $i_n = \frac{a(n)-a(n-1)}{a(n-1)}$. However, this gives the rate earned over a whole period. We can also find the rate of interest over an interval smaller than the period: $i = \frac{a(t)-a(t-h)}{h \cdot a(t-h)}$. If we have a continuous accumulation function we can also measure the interest acting at a given moment, called the force of interest, which is denoted as δ. $\delta_t = \lim_{h \to 0}\frac{a(t)-a(t-h)}{h \cdot a(t-h)}= \frac{a'(t)}{a(t)} = \frac{d}{dt} \ln a(t)$ $e^{\int_0^{t}\delta_r dr } = a(t)$ Ivan invests $10,000 at a force of interest of 3%. Find the accumulated value at the end of 7.25 years. $e^{\int_0^{t}\delta_r dr } = a(t)$, so $10,000 \cdot e^{0.03 \cdot 7.25} = 12,429.65$ Equation of value[edit] 1.
↑ http://www.beanactuary.org/exams/syllabi/Notation_term2FM.pdf
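The worked conversions in the last sections (effective from nominal, and the force-of-interest example) can be replayed numerically; a short sketch, with a function name of my own:

```python
import math

def effective_from_nominal(nominal, m):
    """Annual effective rate from a nominal rate compounded m times a year."""
    return (1 + nominal / m) ** m - 1

# 10% compounded semi-annually has an effective annual rate of 10.25%.
assert abs(effective_from_nominal(0.10, 2) - 0.1025) < 1e-12

# Xiao Yu: $7,500 at 8% convertible quarterly for 6 years.
balance = 7_500 * (1 + 0.08 / 4) ** (6 * 4)
assert round(balance, 2) == 12_063.28

# Constant force of interest delta gives a(t) = exp(delta * t).
accumulated = 10_000 * math.exp(0.03 * 7.25)
assert round(accumulated, 2) == 12_429.65
```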
{"url":"http://en.wikibooks.org/wiki/Financial_Math_FM/Time_Value_of_Money","timestamp":"2014-04-20T00:43:44Z","content_type":null,"content_length":"53793","record_id":"<urn:uuid:9aeba066-f30c-4460-a5a4-189e8392ec6b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
For Each Of Cases, Determine An Appropriate Characteristic... | Chegg.com For each of the cases below, determine an appropriate characteristic length Lc and the corresponding Biot number Bi that is associated with the transient thermal response of the solid object. State whether the lumped capacitance approximation is valid. If temperature information is not provided, evaluate properties at T = 300 K. (a) A toroidal shape of diameter D = 65 mm and cross-sectional area Ac = 7 mm² has thermal conductivity k = 2.3 W/m·K. The surface of the torus is exposed to a coolant corresponding to a convection coefficient of h = 50 W/m²·K. (b) A long, hot AISI 302 stainless steel bar of rectangular cross section has dimensions w = 5 mm, W = 7 mm, and L = 150 mm. The bar is subjected to a coolant that provides a heat transfer coefficient of h = 10 W/m²·K at all exposed surfaces. (c) A long extruded aluminum (Alloy 2024) tube of inner and outer dimensions w = 25 mm and W = 30 mm, respectively, is suddenly submerged in water, resulting in a convection coefficient of h = 40 W/m²·K at the four exterior tube surfaces. The tube is plugged at both ends, trapping stagnant air inside the tube. (d) An L = 300 mm long solid stainless steel rod of diameter D = 13 mm and mass M = 0.328 kg is exposed to a convection coefficient of h = 30 W/m²·K. (e) A solid sphere of diameter D = 13 mm and thermal conductivity k = 130 W/m·K is suspended in a large vacuum oven with internal wall temperatures of T = 18°C. The initial sphere temperature is T = 100°C, and its emissivity is 0.75. (f) A long cylindrical rod of diameter D = 20 mm, density ρ = 2300 kg/m³, specific heat Cp = 1750 J/kg·K, and thermal conductivity k = 16 W/m·K is suddenly exposed to convective conditions with T = 20°C. The rod is initially at a uniform temperature of T = 200°C and reaches a spatially averaged temperature of T = 100°C at t = 225 s. (g) Repeat part (f) but now consider a rod diameter of D = 200 mm.
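The general recipe can be scripted. As an illustration for part (d) only (not an official solution): take Lc = V/As for the finite rod, and assume a stainless-steel conductivity of about k = 15 W/m·K at 300 K, which is a property-table value not given in the problem statement.

```python
import math

# Part (d): solid rod, D = 13 mm, L = 300 mm, h = 30 W/m^2.K.
# Assumed property: k ~ 15 W/m.K for stainless steel near 300 K.
D, L, h, k = 0.013, 0.300, 30.0, 15.0

V = math.pi * D**2 / 4 * L                       # rod volume
A_s = math.pi * D * L + 2 * math.pi * D**2 / 4   # lateral area + two end faces
L_c = V / A_s                                    # characteristic length
Bi = h * L_c / k                                 # Biot number

# Bi << 0.1, so the lumped capacitance approximation is valid here.
assert Bi < 0.1
```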
{"url":"http://www.chegg.com/homework-help/questions-and-answers/cases-determine-appropriate-characteristic-length-lc-corresponding-biot-number-bi-associat-q3782445","timestamp":"2014-04-24T21:04:18Z","content_type":null,"content_length":"26083","record_id":"<urn:uuid:698b51c1-c712-4ae2-8374-ef9b87df00c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
This reference is for Processing 2.0+. If you have a previous version, use the reference included with your software. If you see any errors or have suggestions, please let us know. If you prefer a more technical reference, visit the Processing Javadoc.

Name: quad()

Examples: quad(38, 31, 86, 20, 69, 63, 30, 76);

Description: A quad is a quadrilateral, a four-sided polygon. It is similar to a rectangle, but the angles between its edges are not constrained to ninety degrees. The first pair of parameters (x1,y1) sets the first vertex and the subsequent pairs should proceed clockwise or counter-clockwise around the defined shape.

Syntax: quad(x1, y1, x2, y2, x3, y3, x4, y4)

Parameters:
x1 (float): x-coordinate of the first corner
y1 (float): y-coordinate of the first corner
x2 (float): x-coordinate of the second corner
y2 (float): y-coordinate of the second corner
x3 (float): x-coordinate of the third corner
y3 (float): y-coordinate of the third corner
x4 (float): x-coordinate of the fourth corner
y4 (float): y-coordinate of the fourth corner

Returns: void

Updated on March 27, 2014 07:26:07pm EDT
{"url":"http://processing.org/reference/quad_.html","timestamp":"2014-04-20T04:06:49Z","content_type":null,"content_length":"8500","record_id":"<urn:uuid:a94ee37c-b08a-46f7-8d62-0256a5dcf48f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Googol Factorial Date: 11/09/96 at 10:48:18 From: Chris Myrick Subject: Googol Factorial What is googol factorial? I saw a web page that gave instructions for computing the factorials of very large numbers, but I couldn't figure out how to compute googol!. The web page only gave the first few hundred digits of the factorial. Date: 12/11/96 at 17:28:11 From: Doctor Daniel Subject: Re: Googol Factorial Wow! A googol is a pretty impressively sized number to begin with since it is a 1 followed by 100 zeros. Taking its factorial, which means multiplying it by all the whole numbers greater than zero which are less than it, results in an immense number! Let's think about how big it would be: 10! is about 3.6 million, which is roughly 3 followed by six digits. 30! is roughly 3 followed by 32 zeros (do you know scientific notation? It's 3 x 10^32). 60! is roughly 8 followed by 81 zeros. From this you can begin to see a pattern: the number of digits in the factorial of a number is a bit bigger than the number itself. (Like how 30! is roughly 30 digits long). This is true, in particular, for large numbers. So, since it is true that this pattern continues as the number gets bigger, googol! will be more than a googol digits long. (I guess it'll actually be a few hundred googol, but that's not really a big deal at that point since the numbers are already so big.) That's a really big number. How big is it? Well, suppose you somehow computed it on a computer. How long would it take a computer to print the answer? The answer is, more or less, forever. In fact, the number of digits in googol! is VASTLY greater than the number of particles in the universe, so even if each particle were to represent a digit, we'd STILL be stuck. Another way of thinking about its size is to let each digit represent one year. But that's billions of billions of billions of billions of... of billions of years. That's a while!
I suspect that the method you saw on the Web was some trick formula used to approximate factorials. Since they only needed to print the first couple of hundred digits, they didn't bother to compute their answer fully (because they couldn't). A lot of formulas like this exist: one is called Stirling's approximation. If you don't know some algebra, this might confuse you, but if you do, it might amuse you to see the bounds I usually wind up using, which come from Stirling's approximation: n! > sqrt(2*pi*n) * (n/e)^n n! < sqrt(2*pi*n) * (n/e)^n * e^(1/(12n)) If we set n = googol, you can see just how huge this is. But it might be possible somehow to get the first several digits of googol! this way. There are also a few other functions related to the factorial that we can approximate. So that's possible too. So you can't easily compute googol! after all. But this should give you some idea of why not. -Dr. Daniel, The Math Forum Check out our web site http://mathforum.org/dr.math/ Date: Tue, 8 Dec 1998 21:34:51 From: Jason Rodgers Subject: Re: Googol Factorial A good while back the question was raised on how to compute (10^100)! To figure out the first digits use this method. It's good for x >= 10 and accurate only to about 12 decimal digits: An = [(x+.5)*ln(x) - x + ln(2*pi)/2 + 1/(12x) - 1/(360x^3) + 1/(1260x^5) - 1/(1680x^7)] / ln(10) The power of ten of the answer is int(An). The first few digits of x! are 10^(An-int(An)). The reason I wrote is because the problem just looked interesting. Date: Sun, 10 Jun 2001 01:43:26 From: Robert Munafo Subject: Re: Googol Factorial The factorial of googol is given by a series expansion of the Gamma function, and it is approximately: (10^100) ! = 10 ^ ( 9.9565705518097 x 10 ^ 101 )
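The Stirling-series recipe can be run directly with Python's arbitrary-precision decimal module; this sketch computes An = log10(googol!) at high enough precision that the fractional part of the roughly 102-digit exponent survives (the π constant is hard-coded and truncated):

```python
from decimal import Decimal, getcontext

# An has ~102 digits before the decimal point, so we need well over
# 102 significant digits to recover its fractional part.
getcontext().prec = 130

x = Decimal(10) ** 100                       # a googol
ln10 = Decimal(10).ln()
pi = Decimal("3.141592653589793238462643383279502884197")  # truncated

# Stirling series for ln(x!); the later terms (-1/(360x^3), ...) are
# utterly negligible at x = 10^100.
ln_fact = (x + Decimal("0.5")) * (100 * ln10) - x + (2 * pi).ln() / 2 + 1 / (12 * x)

An = ln_fact / ln10                          # log10(googol!)
exponent = int(An)                           # the power of ten
mantissa = Decimal(10) ** (An - exponent)    # leading digits of googol!

assert len(str(exponent)) == 102             # so googol! has ~10^102 digits
assert str(exponent).startswith("995657055180")  # agrees with Munafo's value
```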
{"url":"http://mathforum.org/library/drmath/view/57903.html","timestamp":"2014-04-20T19:07:37Z","content_type":null,"content_length":"8637","record_id":"<urn:uuid:f2a08a6b-9dfa-4df3-a2da-f691b01cf242>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: A single term formula for calculating the circumference of ellipse
Replies: 1   Last Post: Dec 27, 2011 2:32 AM

Re: A single term formula for calculating the circumference of ellipse
Posted: Dec 27, 2011 2:32 AM

I read your note and can tell you I did come up with some formulas that are not so complicated yet with [subjectively] high precision. I commented on your other thread re: ellipse circumference. If I have time I may play around with your formula more. It's surprising that what you came up with wasn't known long ago - it's an example of experts being so much into theory that they overlook some approaches. In the sciences, this sort of inventive ability can be welcoming, unless there is pressure from jealous people to thwart advancements, as I sadly had to endure. **No paper certificate measures the creative mind** yet human resource managers don't seem to comprehend this so well. I welcome you to view some of my websites such as http://freeideas2society.blogspot.com where I know my efforts are futile unless I have tremendous backing, so I just give everything away for free. Anyway, after Mr. Michon's conversion, your formula has great eye appeal. You should be proud of your ingenuity.

Date    Subject    Author
4/21/09    A single term formula for calculating the circumference of ellipse    Shahram Zafary
12/27/11    Re: A single term formula for calculating the circumference of ellipse    thomasinventions@yahoo.com
{"url":"http://mathforum.org/kb/message.jspa?messageID=7634367","timestamp":"2014-04-18T15:41:34Z","content_type":null,"content_length":"18169","record_id":"<urn:uuid:1f84a83a-3c59-47f2-8b12-77364cc9b765>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordinary Differential Equations/Without x or y From Wikibooks, open books for an open world Equations without y[edit] Consider a differential equation of the form F(x, y')=0. If we can solve for y', then we can simply integrate the equation to get a solution in the form y=f(x). However, sometimes it may be easier to solve for x. In that case, we get $x=f(y')$. Then differentiating with respect to y, ${1 \over y'}={df \over dy'}{dy' \over dy}$ which becomes $y=C+\int y' {df \over dy'} dy'$. The two equations $x=f(y')$ and $y=C+\int y' {df \over dy'} dy'$ form a parametric solution in terms of y'. To obtain an explicit solution, we eliminate y' between the two equations. If it is possible to express F(x, y')=0 parametrically as x=f(t), y'=g(t), then one can differentiate the first equation to get $dx=f'(t)\,dt$, and since $dy = y'\,dx = g(t)f'(t)\,dt$, $y=C+\int g(t)f'(t)dt$ to obtain a parametric solution in terms of t. If it is possible to eliminate t, then one can obtain an integral solution. Equations without x[edit] Similarly, if the equation F(y, y')=0 can be solved for y, write y=f(y'). Then the following solution, which can be obtained by the same process as above, is the parametric solution: $x=C+\int \frac{f'(y')}{y'} dy'$ In addition, if one can express y and y' parametrically as y=f(t), y'=g(t), then the parametric solution is $x=C+\int \frac {f'(t)}{g(t)} dt$ so that if the parameter t can be eliminated, then one can obtain an integral solution.
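As a concrete check of the "equations without x" recipe (my own example, not from the text): take F(y, y') = y - (y')^2 = 0, so f(p) = p^2. The parametric solution is x = C + ∫ f'(p)/p dp = C + 2p with y = p^2, and eliminating p = (x - C)/2 gives y = ((x - C)/2)^2. A short numerical sketch verifies this:

```python
# y(x) = ((x - C)/2)^2 should satisfy y = (y')^2 for every x.
C = 1.0
y = lambda x: ((x - C) / 2) ** 2

# Verify at a few points using a central-difference derivative,
# so no symbolic library is needed.
h = 1e-6
for x in (2.0, 3.5, 7.0):
    yp = (y(x + h) - y(x - h)) / (2 * h)   # numerical y'
    assert abs(y(x) - yp ** 2) < 1e-8
```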
{"url":"http://en.wikibooks.org/wiki/Ordinary_Differential_Equations/Without_x_or_y","timestamp":"2014-04-20T16:45:41Z","content_type":null,"content_length":"27349","record_id":"<urn:uuid:441840a4-663a-4e6b-93e5-309fef603b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Equations To rearrange a formula and make one of the variables the subject, you use the same rules as you use to solve linear equations. The only difference is you are collecting the variable you want on one side and everything else on the other. 1. Get rid of any square roots by squaring both sides. 3. Get rid of any fractions by multiplying all terms by the denominator. 4. Rearrange by collecting the letter you want on one side and everything else on the other. Remember that every term has its own sign in front of it. Negative terms need to be added to both sides to get rid of them. 5. Factorise the side your variable is on so there is only one of it. 6. Divide both sides by whatever the variable is multiplied by. Here is an example: rearranging a(a² + y) = b³ to make y the subject. Multiplying out gives a³ + ay = b³, so ay = b³ - a³ and y = (b³ - a³)/a. Simultaneous equations are two equations both with two different variables in them. You cannot solve either on its own (each has infinitely many solutions!) so we have to merge them to end up with a linear equation which we can solve (see earlier section on 'Linear equations'). You can follow some simple steps which we will show you using this example: y = 2x - 3 5x - 2y = 8 Simultaneous equations will only have one pair of values for x and y that work in both equations. Here's how to find them: 1. First write both equations in the form: ax + by = c -2x + y = -3 5x - 2y = 8 2. Now, you need to match up the numbers in front of either the x's or the y's. To do this you multiply one (or both) of the equations to get a match. If you multiply a whole equation by something you must remember to multiply every term. We are going to multiply the first equation by 2 as this will match the y's. -4x + 2y = -6 5x - 2y = 8 3. Now you can merge the two equations by adding them or subtracting them from each other (whichever gets rid of the y's!). In this case we will add them to get rid of the y's (careful with the negative signs: -4x + 5x just gives x, 2y + -2y disappears, and -6 + 8 gives 2). 4.
Sometimes you will be left with a Linear Equation in x which you have to solve, but in this case we were left with the answer directly. Now substitute your value of x into either of the original equations to find y. Using the first of the original equations, putting x = 2 into y = 2x - 3 gives y = 4 - 3 = 1. 5. Now substitute your answers into the other equation to check that they work! Putting x = 2 and y = 1 into this gives 10 - 2 = 8, which works!!! So our answer is x = 2, y = 1. To see how to solve simultaneous equations using a graph see the graph section. If you are asked to solve a problem by forming an equation it's worth remembering that equations are just shorthand and you should try to write them in words first before using your highly tuned mathematical skill to write them in symbols. The perimeter of a rectangle is 98cm. One side is 7cm longer than the other. Form an equation and hence find the length of each side. If we call the short side x, then adding up the four sides gives: x + (x + 7) + x + (x + 7) = 98, so 4x + 14 = 98, 4x = 84 and x = 21. So the shorter side is 21cm and the longer side is 28cm. Just do a quick check to make sure that 21 + 28 + 21 + 28 = 98.
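The elimination steps above map directly to code; a small sketch that mirrors them (no solver library needed):

```python
# Solve y = 2x - 3 and 5x - 2y = 8 by elimination.
# Standard form: -2x + y = -3  and  5x - 2y = 8.
a1, b1, c1 = -2, 1, -3
a2, b2, c2 = 5, -2, 8

m = 2                                # multiplier chosen so b1*m + b2 == 0
x = (c1 * m + c2) / (a1 * m + a2)    # add the equations: the y terms cancel
y = (c1 - a1 * x) / b1               # back-substitute into the first equation

assert (x, y) == (2.0, 1.0)
assert a2 * x + b2 * y == c2         # check in the other equation: 10 - 2 = 8
```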
{"url":"http://www.s-cool.co.uk/gcse/maths/algebra-equations-and-inequalities/revise-it/solving-equations","timestamp":"2014-04-20T13:28:24Z","content_type":null,"content_length":"39504","record_id":"<urn:uuid:d836ecf7-5ccc-4fb8-ab38-0991cb795754>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Newton point and Newton polygon stratifications Let $k$ be a field of characteristic $p>0$, with absolute Galois group $\Gamma$. Let $Y$ be a Shimura variety of PEL type, defined over $k$, with associated reductive (connected) quasisplit group $G$. We fix a maximal torus $T$ of $G_{\overline k}$ and a Borel subgroup $B \supset T$. We get a root datum $(X^\ast,R^\ast,X_\ast,R_\ast, \Delta)$, and let $\Omega$ be the Weyl group. We have the so-called Newton stratification of $Y$, that is defined in terms of the Newton point of any $y \in Y$. This is defined taking the F-isocrystal associated to $y$ and considering its image in $$ (X_{\ast,\mathbb Q}/\Omega)^\Gamma $$ via the "Newton map". See "On the classification and specialization of F-isocrystals with additional structure", by Rapoport and Richartz. On the other hand, we can consider the stratification given by the Newton polygon of the F-isocrystal (i.e. looking at the classical Newton polygon of the abelian variety given by $y$). This stratification can be obtained as above forgetting the $G$-structure via the natural morphism $G \to \operatorname{GL}(V)$ (where $V$ is part of the PEL datum). In particular, the Newton point stratification is finer than the Newton polygon stratification. Question: are these two stratifications equal? This is the case if $G=\operatorname{GL}$ or $G$ is the symplectic group (I think), so, using the standard terminology of PEL Shimura varieties, in case (A) (linear) or in case (C). It remains the unitary case, where I think the answer is in general "no". Can someone give an example? Thank you very much!
{"url":"http://mathoverflow.net/questions/121491/newton-point-and-newton-polygon-stratifications","timestamp":"2014-04-21T10:14:46Z","content_type":null,"content_length":"46316","record_id":"<urn:uuid:776992f2-a15f-4630-8b52-a24a01907a02>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Nunn Math Tutor Find a Nunn Math Tutor ...The training taught me study skills for all types of subjects, tests, and learning styles. I taught the Biology section of the MCAT for a national test prep agency for over 1 year. I had 30+ hours of training to prepare me for my Biology MCAT classes. 26 Subjects: including precalculus, SAT math, ACT Math, trigonometry ...Let me help you look at math and science differently, and learn techniques that will make a variety of subjects easier to understand. I earned a BS in Electrical Engineering and Computer Science at the University of California, Berkeley, then worked at a major computer company for 10 years befor... 13 Subjects: including geometry, precalculus, trigonometry, differential equations ...I successfully started 2 companies that I sold and am now interested in passing on all my real world technical skills to those that would like to learn. My teaching/mentoring approach consists of learning about the student so that I can match their interests through creative and fun real world l... 19 Subjects: including algebra 2, prealgebra, SAT math, ACT Math ...This is not to say that it isn’t occasionally necessary to drop back and focus on a point of grammar and spend some time practicing syntax or discussing semantics, but for the most part, pragmatic communication is the shortest route to learning the ins and outs of a language. I have had the most... 24 Subjects: including geometry, prealgebra, calculus, statistics ...My own qualifications include A's in the courses I took at my university here in the states, as well as 1sts in 2 courses in advanced linear algebra in Cambridge. Much of my research in the fields of math, economics and astronomy relies heavily on the use of computer simulations. From Excel, do... 
25 Subjects: including discrete math, SPSS, logic, probability
{"url":"http://www.purplemath.com/Nunn_Math_tutors.php","timestamp":"2014-04-20T11:20:04Z","content_type":null,"content_length":"23468","record_id":"<urn:uuid:c8774a3c-627d-4f93-bf91-54858737ef67>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Which categories are the categories of models of a Lawvere theory? Background: a Lawvere theory $T$ is a category with finite products such that each object is a power of a fixed object $x$. Given a Lawvere theory $T$, the category $\text{Mod}_T$ of models of $T$ is the category of product-preserving functors $T \to \text{Set}$. Any such category is locally finitely presentable (if I'm not mistaken). Gabriel-Ulmer duality says, among other things, that locally finitely presentable categories are categories of models of generalizations of Lawvere theories. Do we have something like this result for categories of models of Lawvere theories (in the sense that we can identify some categorical properties that are satisfied precisely by such categories)?
Comments:

Great! Am I correct in guessing that by "Lawvere theory" here you mean a multisorted Lawvere theory, and that if I was only interested in Lawvere theories with one sort the second condition should be "there exists an object in $C$ such that..."? – Qiaochu Yuan Jun 10 '13 at 23:20

Yes, I believe Zhen is talking about the multisorted case, and you will get the single-sorted case by taking a single object (a small projective generator). Note the algebras over a multi/singly-sorted theory correspond to cocomplete algebraic categories with a set of projective generators/a single projective generator in the sense of Quillen. – Justin Noel Jun 11 '13 at 7:05

@Qiaochu: Yes, this is multi-sorted. It will not do to take $\mathcal{G}$ to be a single object: this already fails for concrete examples like $\mathbf{Ab}$. I think instead you need to ask that $\mathcal{G}$ be generated by a single object under finite coproducts – but I have not checked. Certainly it is enough to ask that the full subcategory of strongly finitely presentable objects have this property, because this subcategory becomes the Lawvere theory. – Zhen Lin Jun 11 '13 at 7:39

@Zhen: ah, I see. I would also guess that $\mathcal{G}$ generated by a single object under finite coproducts is the correct condition. – Qiaochu Yuan Jun 11 '13 at 8:02
{"url":"http://mathoverflow.net/questions/133341/which-categories-are-the-categories-of-models-of-a-lawvere-theory?answertab=votes","timestamp":"2014-04-21T10:07:57Z","content_type":null,"content_length":"57147","record_id":"<urn:uuid:bc1f7fc4-858b-4a54-81dd-3dd55296aab5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Package Cl-Graph

add-edge: Adds an existing edge to a graph. As add-edge-between-vertexes is generally more natur...
add-edge-between-vertexes: Adds an edge between two vertexes and returns it. If force-new? is true, the edge is added even i...
add-vertex: Adds a vertex to a graph. If called with a vertex, then this vertex is added. If called with a ...
adjacentp: Return true if vertex-1 and vertex-2 are connected by an edge. [?? compare with vertices-share-...]
any-undirected-cycle-p: Returns true if there are any undirected cycles in graph.
child-vertexes: Returns a list of the vertexes to which vertex is connected by an edge and for which vertex i...
connected-component-count: Returns the number of connected-components of graph.
connected-components: Returns a union-find-container representing the connected-components of graph.
connected-graph-p: Returns true if graph is a connected graph and nil otherwise.
delete-all-edges: Delete all edges from `graph'. Returns the graph.
delete-edge: Delete the `edge' from the `graph' and returns it.
delete-edge-between-vertexes: Finds an edge in the graph between the two specified vertexes. If values (i.e., non-vertexes) a...
delete-vertex: Remove a vertex from a graph. The 'vertex-or-value' argument can be a vertex of the graph or a 'v...
depth: Returns the maximum depth of the vertexes in graph assuming that the roots are of depth 0 and t...
directed-edge-p: Returns true if-and-only-if edge is directed.
edge->dot: Used by graph->dot to output edge formatting for edge onto the stream. The function can ass...
edge-count: Returns the number of edges attached to vertex. Compare with the more flexible `vertex-degree...
edge-lessp-by-direction: Returns true if and only if edge-1 is undirected and edge-2 is directed.
edge-lessp-by-weight: Returns true if the weight of edge-1 is strictly less than the weight of edge-2.
edges: Returns a list of the edges of thing.
find-connected-components: Returns a list of sub-graphs of graph where each sub-graph is a different connected component...
find-edge: Search graph for an edge whose vertexes match edge. This means that vertex-1 of the edge ...
find-edge-between-vertexes: Searches graph for an edge that connects vertex-1 and vertex-2. [?? Ignores error-if-not-fou...]
find-edge-between-vertexes-if: Finds and returns an edge between value-or-vertex-1 and value-or-vertex-2 if one exists. Unless...
find-edge-if: Returns the first edge in thing for which the predicate function returns non-nil. If the `k...
find-edges-if: Returns a list of edges in thing for which the predicate returns non-nil. [?? why no key fu...]
find-vertex: Search 'graph' for a vertex with element 'value'. The search is fast but inflexible because it ...
find-vertex-if: Returns the first vertex in thing for which the predicate function returns non-nil. If the ...
find-vertexes-if: Returns a list of vertexes in thing for which the predicate returns non-nil. [?? why no key f...]
force-undirected: Ensures that the graph is undirected (possibly by calling change-class on the edges).
generate-directed-free-tree: Returns a version of graph which is a directed free tree rooted at root.
graph->dot: Generates a description of graph in DOT file format. The formatting can be altered using `gr...
graph->dot-properties: Unless a different graph-formatter is specified, this method is called by graph->dot to output ...
graph-roots: Returns a list of the roots of graph. A root is defined as a vertex with no source edges (i.e.,...
has-children-p: In a directed graph, returns true if vertex has any edges that point from vertex to some other...
has-parent-p: In a directed graph, returns true if vertex has any edges that point from some other vertex to ...
in-cycle-p: Returns true if start-vertex is in some cycle in graph. This uses child-vertexes to generat...
in-undirected-cycle-p: Return true if-and-only-if an undirected cycle in graph is reachable from start-vertex.
iterate-edges: Calls fn on each edge of graph or vertex.
iterate-neighbors: Calls fn on every vertex adjacent to vertex. See also iterate-children and iterate-parents.
iterate-parents: Calls fn on every vertex that is either connected to vertex by an undirected edge or is at the ...
iterate-source-edges: In a directed graph, calls fn on each edge of a vertex that begins at vertex. In an undirecte...
iterate-target-edges: In a directed graph, calls fn on each edge of a vertex that ends at vertex. In an undirected ...
iterate-vertexes: Calls fn on each of the vertexes of thing.
make-filtered-graph: Takes a GRAPH and a TEST-FN (a single argument function returning NIL or non-NIL), and filters th...
make-graph: Create a new graph of type `graph-type'. Graph type can be a symbol naming a sub-class of basic-g...
make-graph-from-vertexes: Create a new graph given a list of vertexes (which are assumed to be from the same graph). The ...
make-vertex-edges-container: Called during the initialization of a vertex to create the container used to store t...
make-vertex-for-graph: Creates a new vertex for graph graph. The keyword arguments include: * vertex-class : specif...
minimum-spanning-tree: Returns a minimum spanning tree of graph if one exists and nil otherwise.
neighbor-vertexes: Returns a list of the vertexes to which vertex is connected by an edge disregarding edge dire...
number-of-neighbors: Returns the number of neighbors of vertex (cf. neighbor-vertexes). [?? could be a defun]
other-vertex: Assuming that the value-or-vertex corresponds to one of the vertexes for edge, this method re...
out-edge-for-vertex-p: Returns true if the edge is connected to vertex and is either an undirected edge or a directed ...
parent-vertexes: Returns a list of the vertexes to which vertex is connected by an edge and for which vertex...
project-bipartite-graph: Creates the unimodal bipartite projections of existing-graph with vertexes for each vertex of existi...
renumber-edges: Assign a number to each edge in a graph in some unspecified order. [?? internal]
renumber-vertexes: Assign a number to each vertex in a graph in some unspecified order. [?? internal]
replace-vertex: Replace vertex old in graph graph with vertex new. The edge structure of the graph is mai...
rootp: Returns true if vertex is a root vertex (i.e., it has no incoming (source) edges).
search-for-vertex: Search 'graph' for a vertex with element 'value'. The 'key' function is applied to each element...
source-edge-count: Returns the number of source edges of vertex (cf. source-edges). [?? could be a defun]
source-edges: Returns a list of the source edges of vertex, i.e., the edges that begin at vertex.
source-vertex: Returns the source-vertex of a directed edge. Compare with vertex-1.
subgraph-containing: Returns a new graph that is a subset of graph that contains vertex and all of the other verte...
tag-all-edges: Sets the tag of all the edges of thing to true. [?? why does this exist?]
tagged-edge-p: Returns true if-and-only-if edge's tag slot is t.
target-edge-count: Returns the number of target edges of vertex (cf. target-edges). [?? could be a defun]
target-edges: Returns a list of the target edges of vertex, i.e., the edges that end at vertex.
target-vertex: Returns the target-vertex of a directed edge. Compare with vertex-2.
topological-sort: Returns a list of vertexes sorted by the depth from the roots of the graph. See also assign-lev...
undirected-edge-p: Returns true if-and-only-if edge is undirected.
untag-all-edges: Sets the tag of all the edges of thing to nil. [?? why does this exist?]
untagged-edge-p: Returns true if-and-only-if edge's tag slot is nil.
vertex->dot: Unless a different vertex-formatter is specified with a keyword argument, this is used by graph...
vertex-count: Returns the number of vertexes in graph. [?? could be a defun]
vertexes: Returns a list of the vertexes of thing.
vertices-share-edge-p: Return true if vertex-1 and vertex-2 are connected by an edge. [?? Compare adjacentp]
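As a quick orientation to the functions listed above, here is a hypothetical usage sketch. The function names come from this listing, but the graph-type symbol ('graph-container), the package prefix, and the exact argument conventions are assumptions, so treat this as illustrative rather than authoritative:

```lisp
;; Hypothetical cl-graph session; symbols and argument order are assumptions.
(let ((g (cl-graph:make-graph 'cl-graph:graph-container)))
  ;; add-edge-between-vertexes adds the edge (and, presumably, any
  ;; missing vertexes) and returns it.
  (cl-graph:add-edge-between-vertexes g :a :b)
  (cl-graph:add-edge-between-vertexes g :b :c)
  ;; Query the graph using functions from the listing.
  (cl-graph:vertex-count g)
  (cl-graph:neighbor-vertexes (cl-graph:find-vertex g :b)))
```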
{"url":"http://www.common-lisp.net/project/tinaa/documentation/cl-graph-package/index.html","timestamp":"2014-04-17T15:29:52Z","content_type":null,"content_length":"27761","record_id":"<urn:uuid:ba9852cd-0cda-453e-afc7-eaa7b606626d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
Impulsive Boundary Value Problems for Planar Hamiltonian Systems

Abstract and Applied Analysis, Volume 2013 (2013), Article ID 892475, 6 pages. Research Article.

Department of Mathematics, Middle East Technical University, 06800 Ankara, Turkey

Received 3 September 2013; Accepted 24 October 2013. Academic Editor: Josef Diblik.

Copyright © 2013 Zeynep Kayar and Ağacık Zafer. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. We give an existence and uniqueness theorem for solutions of an inhomogeneous impulsive boundary value problem (BVP) for planar Hamiltonian systems. Green's function that is needed for representing the solutions is obtained and its properties are listed. The uniqueness of solutions is connected to a Lyapunov type inequality for the corresponding homogeneous BVP.

1. Introduction

The planar Hamiltonian system of 2 linear first-order equations has the form , where is a symmetric matrix with piecewise continuous real-valued entries, and is the so-called symplectic identity. Setting , the Hamiltonian system can be rewritten in a more convenient way as . Our aim in this work is to prove an existence and uniqueness theorem for solutions of the related BVP for the inhomogeneous Hamiltonian system under impulse effect of the form , where (i) , , , , and are real sequences for with ; (ii) , where and is continuous on each interval , the limits exist and for ; (iii) for and for ; and are given real numbers. We also set and for convenience. By a solution of the impulsive BVP (6a)–(6c), we mean nontrivial functions such that satisfies system (6a)–(6c) for all .
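The displayed equations above were lost in extraction. For orientation only, the generic planar linear Hamiltonian system usually written in this setting is the following (an assumed standard form; the coefficient labels a, b, c are not necessarily the paper's exact notation):

```latex
% Generic planar linear Hamiltonian system (assumed standard form):
% z = (x, y)^T, H(t) symmetric, J the symplectic identity.
\[
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
z' = J H(t)\, z, \qquad
H(t) = \begin{pmatrix} c(t) & a(t) \\ a(t) & b(t) \end{pmatrix},
\]
% which, written out componentwise, reads
\[
x' = a(t)\,x + b(t)\,y, \qquad y' = -c(t)\,x - a(t)\,y .
\]
```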
The corresponding homogeneous BVP takes the form . Note that if we take , then we obtain as a special case of (6a), (6b), and (6c) the impulsive BVP for second-order differential equations of the form .

To the best of our knowledge, although many results have been obtained for linear impulsive boundary value problems by using different techniques, little is known for linear Hamiltonian systems under impulse effect. The existence and uniqueness of linear impulsive boundary value problems for first-order equations are considered in [1–4]. For the second-order case we refer to [5, 6], in which the integral representation of the solution of second-order linear impulsive boundary value problems is given by using Green's function and the existence and uniqueness of the solutions are obtained. The variational technique approach for the existence of solutions of linear and nonlinear impulsive boundary value problems can be found in [7–10]. In [11], the method of upper and lower solutions is employed for the existence of solutions of nonlinear impulsive boundary value problems. For a detailed discussion on boundary value problems for higher-order linear impulsive equations we refer to [12]. The basic theory of impulsive differential equations is contained in the seminal book [13]. Our method of proof is based on Green's function formulation and Lyapunov type inequalities for linear Hamiltonian systems under impulse effect. There are many studies on Lyapunov type inequalities and their applications for linear ordinary differential equations [14] and for systems [15–17], as well as for linear impulsive differential equations and systems [18, 19]. However, the use of a Lyapunov type inequality in connection with BVPs seems to be limited.

2. Preliminaries

2.1. Lyapunov Type Inequality for the Homogeneous Problem

In this section we provide a Lyapunov type inequality to be used for the uniqueness of the inhomogeneous BVP.
The obtained inequality is sharper than the one given by the present authors in [20] in the sense that is replaced by .

Theorem 1. If the homogeneous BVP (8a), (8b), and (8c) has a real solution such that on , then one has the Lyapunov type inequality: , where and .

Proof. Define for and , where we put again , and make a convention that if . It is not difficult to see from (8a), (8b), (8c), and (12) that . Since we assumed that , is continuous on . Moreover, , , and for all . We may assume without loss of generality that on . Using (13) and (14) we obtain . Integrating (16) from to and using (15) and (17) leads to , from which we have . On the other hand, from the first equation in (13), we have . Let . If we integrate (20) from to , we see that , and so . Using the obvious estimate and then applying the Cauchy-Schwarz inequality, we have . Similarly, by integrating (21) from to and following the above procedure, we get . Now we recall the elementary inequality: for and . In view of (26) and (27), setting , we have . Combining (19) and (30) results in . Finally, since for , from (31) we obtain the desired inequality: .

2.2. Green's Function

Here we derive Green's function to be used for the representation of the solutions of the inhomogeneous BVP. Let be a fundamental matrix for (8a), (8b) and set . Define the rectangles and the triangles . Green's function (pair) and its properties are given in the next theorem.

Theorem 2. Suppose that the homogeneous BVP (8a)–(8c) has only the trivial solution. Let . Note that the inverse of matrix exists in view of the assumption (see also the proof of Theorem 4). Then the pair of functions constitutes Green's function for (6a), (6b), and (6c).
Moreover, we have the following properties:

(G1) is continuous and bounded on .
(G2) is continuous and bounded on the rectangles with and on the triangles and .
(G3) satisfies the following jump conditions: (a) , where ; (b) ; (c) .
(G4) , considered as a function of , is left continuous and satisfies , where is any of the intervals or .
(G5) .
(G6) , considered as a function of , is left continuous and satisfies (39).

Proof. (G1) and (G2) are trivial. Let us consider (G3). (a) follows from . To see (b), we write, for , . For (c), let ; then . Next, we consider (G4). By definition, it is easy to see that is a left continuous function at . Let us consider the interval ; the latter is similar. The first equation in (39) is a direct consequence of (c) and the definition of . Clearly, . The proofs of (G5) and (G6) are similar to (a) and (G4), respectively.

Remark 3. One can easily rewrite Green's function (pair) in terms of the solutions of system (8a), (8b). Indeed, , and since , we may write .

3. The Main Result

Our main result is the following theorem.

Theorem 4. Let (i)–(iii) hold. If , then the BVP (6a), (6b), and (6c) has a unique solution. Moreover, is expressible as , where , and Green's function pair is given by (38).

Proof. We first prove the uniqueness. It suffices to show that the homogeneous BVP (8a)–(8c) has only the trivial solution. Let on . By Theorem 1, we see that the Lyapunov type inequality (11) holds, contradicting the inequality (47). Thus for all . Moreover, by (6a), (6b), and (6c) we have , which results in for . Taking the limit we see that . As a result we obtain for all . This completes the uniqueness of the solutions. For the existence, we start with the variation of parameters formula and write the general solution of system (6a), (6b) as . Clearly, the boundary condition is satisfied if , where . Since we have the uniqueness of solutions, the matrix must have an inverse.
Setting , we may solve from (52) uniquely: . Hence, . Therefore the unique solution of the BVP (6a)–(6c) can be expressed as .

Let us now consider the BVP (10a), (10b), (10c), and (10d). In this case it is not difficult to see that the corresponding Green's function (pair) becomes , where is the first row of the (Wronskian) .

Corollary 5. Suppose that and are piecewise continuous on , , and for . If , then the BVP (10a), (10b), (10c), and (10d) has a unique solution which is expressible as , where , and Green's function pair is given by (57).

Remark 6. The results in this work are new even if the impulses are absent. The statements of the corresponding theorems are left to the reader.

References

1. J. J. Nieto, "Basic theory for nonresonance impulsive periodic problems of first order," Journal of Mathematical Analysis and Applications, vol. 205, no. 2, pp. 423–433, 1997.
2. J. Li, J. J. Nieto, and J. Shen, "Impulsive periodic boundary value problems of first-order differential equations," Journal of Mathematical Analysis and Applications, vol. 325, no. 1, pp. 226–236, 2007.
3. J. J. Nieto, "Periodic boundary value problems for first-order impulsive ordinary differential equations," Nonlinear Analysis: Theory, Methods & Applications A, vol. 51, no. 7, pp. 1223–1232, 2002.
4. J. J. Nieto, "Impulsive resonance periodic problems of first order," Applied Mathematics Letters, vol. 15, no. 4, pp. 489–493, 2002.
5. A. Cabada, J. J. Nieto, D. Franco, and S. I. Trofimchuk, "A generalization of the monotone method for second order periodic boundary value problem with impulses at fixed points," Dynamics of Continuous, Discrete and Impulsive Systems, vol. 7, no. 1, pp. 145–158, 2000.
6. J. Li and J. Shen, "Periodic boundary value problems for second order differential equations with impulses," Nonlinear Studies, vol. 12, no. 4, pp. 391–400, 2005.
7. J. J. Nieto and D. O'Regan, "Variational approach to impulsive differential equations," Nonlinear Analysis: Real World Applications, vol. 10, no. 2, pp. 680–690, 2009.
8. J. J. Nieto, "Variational formulation of a damped Dirichlet impulsive problem," Applied Mathematics Letters, vol. 23, no. 8, pp. 940–942, 2010.
9. J. Xiao and J. J. Nieto, "Variational approach to some damped Dirichlet nonlinear impulsive differential equations," Journal of the Franklin Institute, vol. 348, no. 2, pp. 369–377, 2011.
10. M. Galewski, "On variational impulsive boundary value problems," Central European Journal of Mathematics, vol. 10, no. 6, pp. 1969–1980, 2012.
11. P. W. Eloe and J. Henderson, "A boundary value problem for a system of ordinary differential equations with impulse effects," The Rocky Mountain Journal of Mathematics, vol. 27, no. 3, pp. 785–799, 1997.
12. Ö. Uğur and M. U. Akhmet, "Boundary value problems for higher order linear impulsive differential equations," Journal of Mathematical Analysis and Applications, vol. 319, no. 1, pp. 139–156, 2006.
13. A. M. Samoilenko and N. A. Perestyuk, Impulsive Differential Equations, World Scientific, Singapore, 1995.
14. A. Liapounoff, "Problème général de la stabilité du mouvement," Annales de la Faculté des Sciences de Toulouse pour les Sciences Mathématiques et les Sciences Physiques, Série 2, vol. 9, pp. 203–474, 1907; reprinted in Annals of Mathematics Studies, vol. 17, pp. 203–474, 1947.
15. G. Sh. Guseinov and B. Kaymakçalan, "Lyapunov inequalities for discrete linear Hamiltonian systems," Computers & Mathematics with Applications, vol. 45, no. 6–9, pp. 1399–1416, 2003.
16. X. Wang, "Stability criteria for linear periodic Hamiltonian systems," Journal of Mathematical Analysis and Applications, vol. 367, no. 1, pp. 329–336, 2010.
17. X.-H. Tang and M. Zhang, "Lyapunov inequalities and stability for linear Hamiltonian systems," Journal of Differential Equations, vol. 252, no. 1, pp. 358–381, 2012.
18. G. Sh. Guseinov and A. Zafer, "Stability criterion for second order linear impulsive differential equations with periodic coefficients," Mathematische Nachrichten, vol. 281, no. 9, pp. 1273–1282, 2008.
19. G. Sh. Guseinov and A. Zafer, "Stability criteria for linear periodic impulsive Hamiltonian systems," Journal of Mathematical Analysis and Applications, vol. 335, no. 2, pp. 1195–1206, 2007.
20. Z. Kayar and A. Zafer, "Stability criteria for linear Hamiltonian systems under impulsive perturbations," http://arxiv.org/abs/1101.3094.
{"url":"http://www.hindawi.com/journals/aaa/2013/892475/","timestamp":"2014-04-20T06:45:37Z","content_type":null,"content_length":"560718","record_id":"<urn:uuid:f04a6fa3-936b-4a7c-883a-6518efbb3900>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Homothetic property

April 30th 2013, 09:04 PM, #1

Please I need your help. I need to prove or disprove the following. Let a, b be real numbers and x, y both in R^3, with A a convex set in R^3. Suppose a·x is in a·A and b·y is in b·A. Then ax + by is in (a+b)A. (Here a·A, b·A, and (a+b)A are all homothetic transformations of A.)

Last edited by kezman; April 30th 2013 at 11:58 PM.

April 30th 2013, 10:28 PM, #2 (Super Member, joined Jan 2008)

Re: Homothetic property

Consider the unit vectors $v_1=(1,0,0)$ and $v_2=(0,1,0)$ and the set $A=\{a\cdot v_1 \mid a\in \mathbb{R}\}\cup \{b\cdot v_2 \mid b\in \mathbb{R}\}$.

April 30th 2013, 11:56 PM, #3

Re: Homothetic property

I'm sorry, I forgot that A is a convex set.
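For the record, when a and b are nonnegative the claim does hold for convex A, by the standard convex-combination argument (a sketch, assuming a + b > 0):

```latex
% Since x, y are in A and the coefficients below are nonnegative and sum to 1,
% the bracketed point is a convex combination of points of A, hence lies in A:
\[
a x + b y \;=\; (a+b)\Bigl(\tfrac{a}{a+b}\,x + \tfrac{b}{a+b}\,y\Bigr) \;\in\; (a+b)A .
\]
```

For mixed signs the statement can fail even for convex A: with a = 1 and b = -1 one has (a+b)A = 0·A = {0}, while x - y need not be 0.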
{"url":"http://mathhelpforum.com/advanced-math-topics/218417-homothetic-property.html","timestamp":"2014-04-20T02:42:29Z","content_type":null,"content_length":"35207","record_id":"<urn:uuid:4a434203-45c9-4413-b53c-89e955a96c16>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
How to convert a Fraction to Decimal

This little test program has a lot of user input validation. About 10 years ago, while working as a pipefitter, I was called upon to figure out how to fit a riser pipe onto a reducer. Since the reducer is tapered there was no normal way to do it. I had to calculate a best guess as to what my dimensions would be, based on what I already knew about fitting pipe to pipe. It didn't turn out nearly as pretty as I would have hoped, but a grinder does wonders for getting it closer to where it needed to be. While I'm currently laid off, and even though I haven't had to do one of these since then, I finally decided to sit down and figure out all of the math involved to create the 8 points and lengths needed for half of the riser circle, and to be able to add the differences in the taper of the reducer. It took about 3 days working on and off to complete the formula to do all of the calculations required. Due to the massive amount of calculations required to output the final 8 lengths, I wanted to create a program that does all of the math for the user. My first hurdle is the user input validation. I need to validate whether the person is inputting a decimal, a fraction, or some off-the-wall entry. Since the normal user would input a fraction, I searched for a premade function on the internet for the fraction-to-decimal conversion. I found this one Here from a March 14, 2008 blog post. You find very few that go from fraction to decimal; most samples go from decimal to fraction. After removing an extra "End If", the code worked fine for converting a fraction to a decimal.
(see the code below)

    Private Function FractionToDecimal(ByVal frac As String) As String
        Dim decimalVal As String = "0"
        Dim upper As Decimal = 0
        Dim lower As Decimal = 0
        Dim remain As Decimal = 0
        If frac.IndexOf("/") <> -1 Then
            'End If   <---- extra End If
            If frac.IndexOf(" ") <> -1 Then
                remain = CType(frac.Substring(0, frac.IndexOf(" ")), Decimal)
                frac = frac.Substring(frac.IndexOf(" "))
            End If
            upper = CType(frac.Substring(0, frac.IndexOf("/")), Decimal)
            lower = CType(frac.Substring(frac.IndexOf("/") + 1), Decimal)
            decimalVal = (remain + (upper / lower)).ToString
        End If
        Return decimalVal
    End Function

After getting the code to work I then started thinking about user input validation. I added validation for every type of malformed input I could think of, and gave the program the ability to return either an error or the result of the conversion. As you can see, the first screenshot is a normal conversion. The second screenshot shows a fraction with the numerator larger than the denominator. I could have allowed this type of input, but in this case I am catching when people swap the two numbers.

Now the final code:

    Imports System.Text.RegularExpressions

    Public Class Form1

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            Me.Text = "Convert Fraction to Decimal " & My.Application.Info.Version.ToString
        End Sub

        Private Sub btnConvert_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnConvert.Click
            'Starts at the beginning and checks for any upper or lower case letters.
            'Regex class: http://msdn.microsoft.com/en-us/library/system.text.regularexpressions.regex(v=vs.90).aspx
            Dim regx As New Regex("^[a-zA-Z]")
            'The string from the input text box
            Dim inputNfo As String
            inputNfo = tbUserInput.Text
            Dim outputStr As String = Nothing

            'Here we check if it is a recognized decimal number; if true then just return the decimal number
            If IsNumeric(inputNfo) = True Then
                outputStr = inputNfo
                tbOutput.ForeColor = Color.Green
                lblInputValidation.Text = "Input is Valid"
                lblInputValidation.ForeColor = Color.Green
            'Verify the user is not spelling out the number
            ElseIf regx.IsMatch(inputNfo) = True Then
                outputStr = "Please check you are not Spelling your whole number"
                lblInputValidation.Text = "Input is Not Valid"
                lblInputValidation.ForeColor = Color.Red
            'If not a decimal, verify it is not a malformed fraction
            ElseIf inputNfo.Contains("/") And inputNfo.Contains(".") = False Then
                Dim ReturnString As String
                ReturnString = FractionToDecimal(inputNfo)
                'The function reports an error if the numerator is larger than the denominator
                If ReturnString = "Error Please Check Fraction" Then
                    outputStr = ReturnString
                    tbOutput.ForeColor = Color.Red
                    lblInputValidation.Text = "Input is Not Valid"
                    lblInputValidation.ForeColor = Color.Red
                Else
                    'If all is good then return the converted fraction as a decimal
                    outputStr = ReturnString
                    tbOutput.ForeColor = Color.Green
                    lblInputValidation.Text = "Input is Valid"
                    lblInputValidation.ForeColor = Color.Green
                End If
            'This checks that the input is not blank
            ElseIf inputNfo = Nothing Or inputNfo = " " Then
                outputStr = "Nothing was input"
                tbOutput.ForeColor = Color.Red
                lblInputValidation.Text = "Nothing was input"
                lblInputValidation.ForeColor = Color.Red
            Else
                'Returned if the input number is really messed up.
                outputStr = "Input not reconised"
                tbOutput.ForeColor = Color.Red
                lblInputValidation.Text = "Input not reconised "
                lblInputValidation.ForeColor = Color.Red
            End If

            'Returns the final output
            tbOutput.Text = outputStr.ToString
        End Sub

        Private Function FractionToDecimal(ByVal frac As String) As String
            'Code from http://amrelgarhytech.blogspot.com/2008/03/fraction-to-decimal.html, fixed error to get it to work.
            Dim decimalVal As String = "0"
            Dim upper As Decimal = 0
            Dim lower As Decimal = 0
            Dim remain As Decimal = 0
            If frac.IndexOf("/") <> -1 Then
                If frac.IndexOf(" ") <> -1 Then
                    remain = CType(frac.Substring(0, frac.IndexOf(" ")), Decimal)
                    frac = frac.Substring(frac.IndexOf(" "))
                End If
                upper = CType(frac.Substring(0, frac.IndexOf("/")), Decimal)
                lower = CType(frac.Substring(frac.IndexOf("/") + 1), Decimal)
                If upper > lower Then
                    Return "Error Please Check Fraction"
                End If
                decimalVal = (remain + (upper / lower)).ToString
            End If
            Return decimalVal
        End Function

    End Class

Starting from the top, we need "Imports System.Text.RegularExpressions" for checking user input in the second test. Next we declare inputNfo as a string and set it equal to the input text. Then we run inputNfo through the VB.Net built-in "IsNumeric" function to see if it is a recognized decimal number; if it is, then we just return that number to the output textbox, since no conversion needs to be done. If inputNfo is not recognized as a number, we then pass it to the regular expression test to make sure the user is not trying to spell out the number (three instead of 3); if they are, it returns an error, and if not it passes on to the next test. The next test is to find out whether the character "/" is in the string, and also to make sure the character "." is not, because then the user input looks something like "22. 3/4", and with the decimal in place it will return an error.
We could add more code to handle that problem and fix it, but why baby the user? Make them input the information correctly so they don't get used to inputting it incorrectly. If it passes all of those tests, then the string/number gets passed to the actual function that does the conversion. While in the conversion function, we verify that the fraction does not have a larger numerator than denominator. Our users should not need to input a fraction in this way (for the final program that this sample app was built to test). If it fails the test, then it returns an error message (as in the screenshot above); if it passes, then it returns the result of the conversion as a decimal. Again, due to the massive amount of calculations my final program will require, I need to validate every possible way a user could input a number so the end result will be a decimal, and then I will be able to do the calculations on the input parameters. If I catch all possible input errors, then any output errors should be easier to trace and fix. The output of the 8 lengths in my final application will then be converted back from a decimal to a fraction rounded to the nearest 1/16th of an inch.

One Response to How to convert a Fraction to Decimal

1. Thanks for that awesome posting. It saved MUCH time

This entry was posted in CodeProject, VB.net and tagged Fraction To Decimal Conversion, input validation.
{"url":"https://pcsxcetrasupport3.wordpress.com/2012/02/18/how-to-convert-a-fraction-to-decimal/","timestamp":"2014-04-18T21:05:06Z","content_type":null,"content_length":"79343","record_id":"<urn:uuid:19c38a5f-f287-46fa-9d6b-508a4773e5ef>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
question on integration bounds pwsnafu, I'm curious what integral definitions produce different answers in this case? If ##\mu## is a measure on ℝ, then the integral wrt ##\mu## gives ##\int_{[a,b]} f \, d\mu = \int_{(a,b)} f \, d\mu + f(a)\, \mu(\{a\}) + f(b) \, \mu(\{b\})##. So for any atomless measure (including the Lebesgue measure) the integrals are equal. But if ##\mu(\{a\}) \neq 0## then the integrals are different.
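A concrete way to see the difference is with a purely atomic measure. The sketch below is my own toy example (not from the thread): it puts unit atoms at the integer points and shows the closed- and open-interval integrals disagree by exactly f(a)·μ({a}) + f(b)·μ({b}), as in the formula above.

```python
# Toy atomic measure: mu({k}) = 1 for k = 0..5, zero elsewhere.
atoms = {k: 1.0 for k in range(6)}

def integrate(f, points):
    """Integral of f with respect to the atomic measure over the given points."""
    return sum(f(x) * atoms[x] for x in points if x in atoms)

f = lambda x: x + 2
a, b = 0, 3
closed = integrate(f, [x for x in atoms if a <= x <= b])  # over [a, b]
open_ = integrate(f, [x for x in atoms if a < x < b])     # over (a, b)
# The gap is f(a)*mu({a}) + f(b)*mu({b}) = 2 + 5 = 7.
assert closed - open_ == f(a) * 1.0 + f(b) * 1.0
```

For an atomless measure such as Lebesgue measure, the endpoint terms vanish and the two integrals coincide, which is the point made in the reply.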
{"url":"http://www.physicsforums.com/showthread.php?p=4276070","timestamp":"2014-04-20T21:27:09Z","content_type":null,"content_length":"51218","record_id":"<urn:uuid:06212d74-8ff5-40e9-8554-2f196950c639>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Using hexbin to better visualize a dense two-dimensional plot

Anyone working with high dimensional data would have tried to plot two variables at some point. The problem is that even if a small proportion of the data is noisy, this translates to a large number of points which can obscure good visualization. Here is an example:

x <- runif(100000)
y <- -10 + 20*x + rnorm(100000, sd=2)
y[1:20000] <- rnorm(20000, sd=10) # 20% noise
plot(x, y, pch="O")

which produces a plot that I don’t think is very informative, as the very strong linear trend is now obscured by the 20% of noisy data. An alternative for better visualizing such a plot is to use the hexbin package.

library(hexbin)
plot(hexbin(x, y, xbins=50))

The hexbin package estimates the density (number of points in) the neighbourhood of predefined grid centres and uses varying shades of grey to represent the density. You can install the hexbin package in R using the following command:

install.packages("hexbin")

This entry was posted in Statistics. Bookmark the permalink.
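The core idea behind hexbin, counting points near each grid centre and shading by count, can be sketched in a few lines of Python. This uses square bins rather than hexagons purely for illustration; it is not the hexbin package's algorithm, just the binning principle:

```python
from collections import Counter

def bin_counts(xs, ys, nbins, xrange_, yrange_):
    """Count points falling in each cell of an nbins x nbins grid."""
    (x0, x1), (y0, y1) = xrange_, yrange_
    counts = Counter()
    for x, y in zip(xs, ys):
        i = min(int((x - x0) / (x1 - x0) * nbins), nbins - 1)
        j = min(int((y - y0) / (y1 - y0) * nbins), nbins - 1)
        counts[(i, j)] += 1
    return counts

# A dense diagonal cloud: all points land on the diagonal cells.
xs = [i / 100 for i in range(100)]
ys = xs
c = bin_counts(xs, ys, 4, (0.0, 1.0), (0.0, 1.0))
```

A plotting layer would then map each cell's count to a grey level, which is exactly what lets the linear trend reappear even with heavy noise.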
{"url":"http://adai.wordpress.com/2007/11/09/using-hexbin-to-better-visualize-a-dense-two-dimensional-plot/","timestamp":"2014-04-17T09:36:50Z","content_type":null,"content_length":"47606","record_id":"<urn:uuid:c2314a0a-739b-4a9d-87d0-f3e1d63fcae3>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Help from The Economyst

Posted by Ed on Sunday, June 10, 2007 at 12:24pm.

I have put together my set of questions that I posted and answered them with your help and direction. Would you mind looking at the finished product for correctness? I still got lost on some of it, I think. I could send them to you.

Post them. We do not engage in email with students, for security's sake.

Ok, I understand; perhaps I can post? Ok, I understand... Economyst, you helped me get in the right direction; can you see if I am on the right path?

1) The director of marketing at Vanguard Corporation believes the sales of the company's Bright Side Laundry detergent (S) are related to Vanguard's own advertising expenditure (A), as well as the combined advertising expenditures of its three biggest rival detergents (R). The marketing director collects 36 weekly observations on S, A, and R to estimate the following multiple regression equation: S = a + bA + cR, where S, A, and R are measured in dollars per week. Vanguard's marketing director is comfortable using parameter estimates that are statistically significant at the 10 percent level or better.

a) What sign does the marketing director expect a, b, and c to have?
The director would expect his own advertising to have a positive effect and the competitors' advertising to have a negative effect. He should expect some level of brand loyalty, but his advertising should have a positive effect.

b) Interpret the coefficients a, b, and c.
S = a + bA + cR. Here "a" will be the intercept parameter, and b, along with c, will be the slope parameters. Vanguard's own advertising would have a positive effect and the competitors' would be negative.
The regression output from the computer is as follows:

Dependent Variable: S    R-Square: 0.2247    F ratio: 4.781    P-Value on F: 0.0150    Observations: 36

Variable     Parameter Estimate   Standard Error   T ratio   P-Value
Intercept    175086.0             63821.0          2.74      0.0098
A            0.8550               0.3250           2.63      0.0128
R            -0.284               0.164            -1.73     0.0927

c) Does Vanguard's advertising expenditure have a statistically significant effect on the sales of Bright Side detergent? Explain, using the appropriate p-value.
Yes, there is statistical significance at the 5% level. A 0.0128 p-value means the exact level of significance for a T-ratio of 2.63 is about 1%, and the level of confidence is 99%, stating that b is statistically significant.

d) Does the advertising by its three largest rivals affect sales of Bright Side detergent in a statistically significant way? Explain using the appropriate p-value.
The high P-value indicates that the negative T-ratio has a high probability of competitor’s advertising effecting sales of Bright Side negatively.

e) What fraction of the total variation in sales of Bright Side remains unexplained? What can the marketing director do to increase the explanatory power of the sales equation? What other explanatory variables might be added to this equation?
He could look at the prices charged by the competitors and Vanguard and add this to the equation, as well as log variables on advertising expenses. Other variables might include family size and loads of laundry done during the summer vs. the winter.

f) What is the expected level of sales each week when Vanguard spends $40,000 per week and the combined advertising expenditures for the three rivals are $100,000 per week?
S = a + b($40,000) + c($100,000)
S = 175086.0 + 0.8550($40,000) + (-0.284)($100,000)
S = 175086.0 + $34,200 + (-$28,400)
S = $209,286 + (-$28,400)
S = $180,886

4) The manager of Collins Import Autos believes that the number of cars sold in a day (Q) depends on two factors: (1) the number of hours the dealership is open (H) and (2) the number of salespersons working that day (S). After collecting the data for two months (53 days), the manager estimates the following log-linear model: Q = aH^b S^c

a) Explain how to transform this log-linear model into a linear form that can be estimated using multiple regression analysis.
Logarithms must be taken of both sides of the equation to transform the log-linear model into a linear equation. Q = aH^b S^c becomes: ln Q = (ln a) + b(ln H) + c(ln S). We define the following: Q' = ln Q, H' = ln H, S' = ln S. The linear equation is: Q' = a' + bH' + cS'.

The computer output for the multiple regression analysis is shown below:

Dependent Variable: LNQ    R-Square: 0.5452    P-Value on F: 0.0001    Observations: 53

Variable     Parameter Estimate   Standard Error   T-Ratio   P-Value
Intercept    0.9162               0.2413           3.80      0.0004
LNH          0.3517               0.1021           3.44      0.0012
LNS          0.2250               0.0785           3.25      0.0021

b) How do you interpret coefficients b and c? If the dealership increases the number of salespersons by 20 percent, what will be the percentage increase in daily sales?
The parameter estimates of b and c are elastic. A 20% increase in salespeople would result in a decrease in daily sales.

c) Test the overall model for statistical significance at the 5% level.
1 - P-Value = level of confidence with the F-Stat. 53 observations - 3 parameters = 50, so the critical T-value is 2.0. The T-ratios are over this, so the significance is strong at the 5% level.

d) What percent of the total variation in daily auto sales is explained by this equation?
What could you suggest to increase this?
Perhaps pricing of the product could be used as a determinant of sales.

e) Test the intercept for statistical significance at the 5% level of significance. If H and S both equal 0, are sales expected to be 0? Explain why or why not.
Values are significant at the 5% level. If H and S are equal to 0, the explanatory variables of which b is a coefficient are not relative to the dependent variable.

f) Test the estimated coefficient b for statistical significance. If the dealership decreases its hours of operation by 10%, what is the expected impact on daily sales?
The coefficient b is statistically significant at the 5% level. If hours of operation are decreased, the number of sales will decrease.

Thanks for your help!

Ok, EY, for the most part, you are on the right track. That said, I think it would be helpful if you better understood what linear regression analysis is trying to do and what the statistics mean. For a parameter estimate, the T ratio is simply the parameter estimate divided by the standard error. A negative T ratio simply means the parameter estimate is negative. The test is: is the parameter estimate significantly different from zero? You could take the T ratio and find a probability in a cumulative normal distribution table. However, most stat packages do it for you; hence the P-Value. The P-Value gives the probability that the "true" parameter is zero (or worse, the opposite sign). The F statistic tests whether the explanatory power of the overall model is significantly different from zero.

Under d) in the first problem, you answered "The high P-value indicates that the negative T-ratio has a high probability of competitor’s advertising effecting sales of Bright Side negatively." No No No. The high P-value of .09 means there is a high (9%) probability the parameter estimate is zero or worse. It is not significant. Further, the sign has nothing to do with the significance.
The R-squared gives a measure of the total variation in y that is explained by the specified model. An R^2 of .22 means that 22% of the variation is explained, which means that 78% is unexplained. Check your answer e). To increase the explanatory power, you suggested using logged variables. This may be true, but why? Taking logs of variables is one way of converting a non-linear relationship into a linear relationship. As I posted earlier, I suggested lagged variables. That is, advertising this week affects not only sales this week but sales next week and the week after.

Now equation 2). In my earlier post, I said that in a log model, parameter estimates were elasticities (not elastic). (See your basic economics text if you are confused about what an elasticity is and represents.) So, the estimated elasticity of car sales (Q) to sales people (S) is .225. So if S goes up by 20%, we would expect Q to INCREASE by 20%*.225 = 4.5%.

Answer c) For the overall significance of a model, look at the P-value on the F-statistic. Since the P value is so very low, we can say the model is significant; it does explain at least some of the variation in the y variable.

Answer e). Your original specification is Q = aH^b S^c. So, if H and S are zero, then Q must be zero.

Answer f). Again, the parameter estimate is an elasticity. If H goes down by 10%, then Q decreases by 10%*.3517 = 3.517%.

Well, thanks, I think you pointed out I am still confused; this stuff is hard to understand. I meant log-linear variables; I thought that was where you were pointing me because I saw these in the … I will go back and look at these. It appears I only understood about half and got the other half reversed, like p-values. Now on to theory of consumer behavior with marginal utility, market demand, and income.
• Help from The Economyst - Brian, Monday, May 24, 2010 at 11:47pm
I had no idea help existed for this type of problem. Thanks!
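The two calculations the thread keeps circling back to, plugging values into the fitted linear equation and applying an elasticity, are easy to check numerically. This Python sketch uses the parameter estimates from the thread's regression output:

```python
# Linear model: S = a + b*A + c*R, with the thread's parameter estimates.
a, b, c = 175086.0, 0.8550, -0.284
A, R = 40_000, 100_000
S = a + b * A + c * R  # predicted weekly sales: 175086 + 34200 - 28400

# Log-linear model Q = a * H^b * S^c: the coefficients are elasticities,
# so a 20% rise in salespeople changes Q by roughly 20% * 0.225,
# matching the Economyst's correction (an increase, not a decrease).
elasticity_salespeople = 0.2250
pct_change_Q = 0.20 * elasticity_salespeople  # about 0.045, i.e. +4.5%
```

Note this gives a predicted S of $180,886; working the arithmetic by machine is a quick way to catch slips like the ones the thread wrestles with.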
{"url":"http://www.jiskha.com/display.cgi?id=1181492665","timestamp":"2014-04-20T09:40:09Z","content_type":null,"content_length":"18795","record_id":"<urn:uuid:a4031792-a285-4e2a-8aae-b56184770f3f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Question 13.15
comp.lang.c FAQ list · Question 13.15

Q: I need a random number generator.

A: The Standard C library has one: rand. The implementation on your system may not be perfect, but writing a better one isn't necessarily easy, either.

If you do find yourself needing to implement your own random number generator, there is plenty of literature out there; see the References below or the sci.math.num-analysis FAQ list. There are also any number of packages on the net: old standbys are r250, RANLIB, and FSULTRA (see question 18.16), and there is much recent work by Marsaglia, and Matsumoto and Nishimura (the ``Mersenne Twister''), and some code collected by Don Knuth on his web pages.

Here is a portable C implementation of the ``minimal standard'' generator proposed by Park and Miller:

	#define a 16807
	#define m 2147483647
	#define q (m / a)
	#define r (m % a)

	static long int seed = 1;

	long int PMrand()
	{
		long int hi = seed / q;
		long int lo = seed % q;
		long int test = a * lo - r * hi;
		if(test > 0)
			seed = test;
		else	seed = test + m;
		return seed;
	}

(The ``minimal standard'' is adequately good; it is something ``against which all others should be judged''; it is recommended for use ``unless one has access to a random number generator known to be better.'')

This code implements the generator X <- (aX + c) mod m for a = 16807, m = 2147483647 (which is 2**31-1), and c = 0.[footnote] The multiplication is carried out using a technique described by Schrage, ensuring that the intermediate result aX does not overflow.

The implementation above returns long int values in the range [1, 2147483646]; that is, it corresponds to C's rand with a RAND_MAX of 2147483646, except that it never returns 0. To alter it to return floating-point numbers in the range (0, 1) (as in the Park and Miller paper), change the declaration to

	double PMrand()

and the last line to

	return (double)seed / m;

For slightly better statistical properties, Park and Miller now recommend using a = 48271.

References: K&R2 Sec. 2.7 p. 46, Sec. 7.8.7 p. 168; ISO Sec. 7.10.2.1; H&S Sec. 17.7 p. 393; PCS Sec. 11 p. 172; Knuth Vol. 2 Chap. 3 pp. 1-177; Park and Miller, ``Random Number Generators: Good Ones are Hard to Find''
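A direct Python port of the same Schrage-style generator is handy for experimenting. Park and Miller's published sanity check is that, starting from seed = 1, the 10,000th value produced should be 1043618065:

```python
# Park-Miller "minimal standard" generator, ported from the C above.
# Schrage's trick keeps a*seed from overflowing in C; Python integers are
# arbitrary precision, but the port stays faithful to the original.
A, M = 16807, 2147483647          # m = 2**31 - 1
Q, R = M // A, M % A              # 127773 and 2836

class PMRand:
    def __init__(self, seed=1):
        self.seed = seed

    def next(self):
        hi, lo = divmod(self.seed, Q)
        test = A * lo - R * hi
        self.seed = test if test > 0 else test + M
        return self.seed

g = PMRand()
first_three = [g.next() for _ in range(3)]
# From seed 1 the sequence begins 16807, 282475249, 1622650073.
```

In modern Python one would normally just use the `random` module (a Mersenne Twister), but the port is useful when reproducing the FAQ's exact sequence matters.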
{"url":"http://c-faq.com/lib/rand.html","timestamp":"2014-04-16T18:56:44Z","content_type":null,"content_length":"5721","record_id":"<urn:uuid:f88b2d53-a0a0-4e30-89f2-fceb460bae39>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
The algorithm behind "direction" and "speed"? - Programming Q&A

Currently, I'm attempting to emulate Game Maker's movement system in Java. It currently uses the hspeed and vspeed variables to determine the speed and direction. However, I want to be able to access these two variables using direction and speed. Does anyone know how Game Maker converts "direction" and "speed" into "hspeed" and "vspeed"? I'm not asking for any Java code; I'd like it in either GML or plain English, please.

Thanks in advance!
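In plain English: Game Maker's direction is in degrees with 0 pointing right and angles increasing counterclockwise, while the screen's y axis points down, so hspeed = speed·cos(direction) and vspeed = −speed·sin(direction) (the sign convention is my reading of Game Maker's documented behavior; double-check against your version). The asker wanted GML or plain English, but the trigonometry is identical in any language; a Python sketch:

```python
import math

def to_components(speed, direction_deg):
    """direction 0 = right, 90 = up; screen y axis points down."""
    rad = math.radians(direction_deg)
    hspeed = speed * math.cos(rad)
    vspeed = -speed * math.sin(rad)  # minus because screen y grows downward
    return hspeed, vspeed

def to_polar(hspeed, vspeed):
    """Inverse conversion: recover speed and direction from components."""
    speed = math.hypot(hspeed, vspeed)
    direction_deg = math.degrees(math.atan2(-vspeed, hspeed)) % 360
    return speed, direction_deg
```

The inverse uses atan2 rather than plain arctan so all four quadrants come out right, which is what GML's point_direction-style functions do internally.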
{"url":"http://gmc.yoyogames.com/index.php?showtopic=526380","timestamp":"2014-04-17T09:57:00Z","content_type":null,"content_length":"58234","record_id":"<urn:uuid:dd87a27b-818a-4a96-bec3-0807bcab27a5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Now showing results 1-10 of 24 (narrowed to Middle school / High school, Earth and space science)

Students will learn about NASA's Radiation Belt Storm Probes (RBSP), Earth's van Allen Radiation Belts, and space weather through reading a NASA press release and viewing a NASA eClips video segment. Then students will use simple linear functions to examine the scale of the radiation belts and the strength of Earth's magnetic field. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.

This is an online set of information about astronomical alignments of ancient structures and buildings. Learners will read background information about the alignments to the Sun in such structures as the Great Pyramid, Chichen Itza, and others. Next, the site contains 10 short problem sets that involve a variety of math skills, including determining the scale of a photo, measuring and drawing angles, plotting data on a graph, and creating an equation to match a set of data. Each set of problems is contained on one page, and all of the sets utilize real-world problems relating to astronomical alignments of ancient structures. Each problem set is flexible and can be used on its own, together with other sets, or together with related lessons and materials selected by the educator. This was originally included as a folder insert for the 2010 Sun-Earth Day.

This is an activity about detecting elements by using light. Learners will develop and apply methods to identify and interpret patterns to the identification of fingerprints. They look at fingerprints of their classmates, snowflakes, and finally “spectral fingerprints” of elements. They learn to identify each image as unique, yet part of a group containing recognizable similarities. The activity is part of Project Spectra, a science and engineering program for middle-high school students, focusing on how light is used to explore the Solar System.

This is an activity about the Signal-to-Noise Ratio. Learners will engage with a hands-on activity and an online interactive to understand the terms signal and noise as they relate to spacecraft communication; quantify noise using a given dataset; and calculate the signal-to-noise ratio. The activity also includes a pencil-and-paper component that addresses relevant topics, such as proportions and ratios. Includes teacher background information, student data sheets, answer guide, extensions and adaptations.

This is an activity about spacecraft radio communications. Learners will explore spacecraft radio communications concepts, including the speed of light and the time-delay for signals sent to and from spacecraft. Learners measure the time it takes for a radio signal to travel to a spacecraft using the speed of light, demonstrate the delay in radio communication signals to and from a spacecraft, and devise unique solutions to the radio-signal-delay problem. In an extension, learners are asked to calculate the distance the spacecraft traveled. All NASA spacecraft missions have a telecommunications system and use radio waves to transmit signals. The context for this activity is sending a command to the New Horizons spacecraft telling it to take a picture of Pluto. Includes teacher background, adaptations, and student data sheets.

This is a collection of mathematical problems about transits in the solar system. Learners can work problems created to be authentic glimpses of modern science and engineering issues, often involving actual research data.

In this problem set, learners will consider the temperature in Kelvin of various places in the universe and use equations to convert measures from the three temperature scales to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.

This is a booklet containing 37 space science mathematical problems, several of which use authentic science data. The problems involve math skills such as unit conversions, geometry, trigonometry, algebra, graph analysis, vectors, scientific notation, and many others. Learners will use mathematics to explore science topics related to Earth's magnetic field, space weather, the Sun, and other related concepts. This booklet can be found on the Space Math@NASA website.

In this problem set, learners will refer to the tabulated data used to create the Keeling Curve of atmospheric carbon dioxide to create a mathematical function that accounts for both periodic and long-term changes. They will use this function to answer a series of questions, including predictions of atmospheric concentration in the future. A link to the data, which is in an Excel file, as well as the answer key are provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.

In this problem set, learners will practice fractions by working with the ratios of various molecules or atoms in different compounds to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
{"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects=Mathematics%3AAlgebra&educationalLevel%5B%5D=High+school&educationalLevel%5B%5D=Middle+school","timestamp":"2014-04-16T17:10:58Z","content_type":null,"content_length":"75997","record_id":"<urn:uuid:ae55359e-24dc-441e-a26e-f0d14ad669e8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Right Triangle
"In any right triangle, the square described on the hypotenuse is equal to the sum of the squares described on the other two sides. If A B C is a right triangle, right-angled at B, then the square described on the hypotenuse AC is equal to the sum of the squares described on the sides A B and B C." — Hallock, 1905
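In modern notation the quoted statement is simply AC² = AB² + BC², with the right angle at B; the 3-4-5 triangle is the classic numeric check:

```python
import math

AB, BC = 3.0, 4.0        # the two legs, meeting at the right angle B
AC = math.hypot(AB, BC)  # the hypotenuse, per the Pythagorean theorem
assert AC == 5.0
assert AC**2 == AB**2 + BC**2
```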
{"url":"http://etc.usf.edu/clipart/25700/25752/right_triang_25752.htm","timestamp":"2014-04-17T21:48:07Z","content_type":null,"content_length":"11392","record_id":"<urn:uuid:27eef4db-a911-41d4-8e91-3b2ca173fea3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Decide N(F) and V(F) for Linear mapping in basis w. Matrix
October 15th 2012, 04:01 AM #1
Jan 2011

Decide N(F) and V(F) for a linear mapping given its matrix in some basis.

The matrix is
(1 2)
(3 6)
Call it A. It is a 2x2 matrix given in some basis E = (e1, e2). I want to find N(F) and V(F).

1. So I will start by finding N(F). What does that mean? F(u) = zero vector; I want to know which vectors will be mapped to the zero vector. This means solving Au = 0, which leads to the linear equation system
x1 + 2x2 = 0
3x1 + 6x2 = 0
So I suppose I should do Gaussian elimination. If I multiply row 1 by -3 and add it to row 2, all of the second row becomes 0 = 0, which leaves me wondering what to do next.

2. I guess V(F) is gotten by cross multiplying the two column vectors of the matrix A. This also leaves me wondering, because it's a 2x2 matrix.

Could someone show me how to solve it?
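For what it's worth (this is my working, not part of the original post): a row of 0 = 0 means x2 is a free variable, so x1 = -2·x2 and N(F) = span{(-2, 1)}; and since the second column of A is twice the first, the range is V(F) = span{(1, 3)}. A quick numeric check in Python:

```python
def apply(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[1, 2],
     [3, 6]]

# Null space: x1 + 2*x2 = 0 with x2 free gives the direction (-2, 1).
assert apply(A, [-2, 1]) == [0, 0]

# Range: column 2 is 2 * column 1, so every output is a multiple of (1, 3).
for v in ([1, 0], [0, 1], [4, -7]):
    out = apply(A, v)
    assert out[1] == 3 * out[0]
```

The cross-product idea in point 2 is a 3D tool and does not apply here; for a 2x2 matrix the range is just the span of the columns.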
{"url":"http://mathhelpforum.com/advanced-algebra/205349-decide-n-f-v-f-linear-mapping-basis-w-matrix.html","timestamp":"2014-04-17T13:25:24Z","content_type":null,"content_length":"30237","record_id":"<urn:uuid:99d7cd7c-bcbb-4bfc-8f41-4d34cdfd70e4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
“Sometimes” and “Not Never” Revisited: On Branching Versus Linear Time
Results 1 - 10 of 199

- ACM Transactions on Programming Languages and Systems, 1986 · Cited by 1173 (58 self)
We give an efficient procedure for verifying that a finite-state concurrent system meets a specification expressed in a (propositional, branching-time) temporal logic. Our algorithm has complexity linear in both the size of the specification and the size of the global state graph for the concurrent system. We also show how this approach can be adapted to handle fairness. We argue that our technique can provide a practical alternative to manual proof construction or use of a mechanical theorem prover for verifying many finite-state concurrent systems. Experimental results show that state machines with several hundred states can be checked in a matter of seconds.

- HANDBOOK OF THEORETICAL COMPUTER SCIENCE, 1995
We give a comprehensive and unifying survey of the theoretical aspects of Temporal and modal logic.

- Journal of the ACM, 1997 · Cited by 448 (47 self)
Temporal logic comes in two varieties: linear-time temporal logic assumes implicit universal quantification over all paths that are generated by system moves; branching-time temporal logic allows explicit existential and universal quantification over all paths. We introduce a third, more general variety of temporal logic: alternating-time temporal logic offers selective quantification over those paths that are possible outcomes of games, such as the game in which the system and the environment alternate moves. While linear-time and branching-time logics are natural specification languages for closed systems, alternating-time logics are natural specification languages for open systems. For example, by preceding the temporal operator "eventually" with a selective path quantifier, we can specify that in the game between the system and the environment, the system has a strategy to reach a certain state. Also the problems of receptiveness, realizability, and controllability can be formulated as model-checking problems for alternating-time formulas.

- JOURNAL OF THE ACM, 1998 · Cited by 298 (64 self)
Translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model-checking, and for obtaining many extensions and improvements to this verification method. On the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model-checking. Recently, Bernholtz and Grumberg have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. In this paper we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. Not only, as was shown by Muller et al., can they be used to obtain optimal decision procedures, but, as we show here, they also make it possible to derive optimal model-checking algorithms. Moreover, the simple combinatorial structure that emerges from the ...

- Logics for Concurrency: Structure versus Automata, volume 1043 of Lecture Notes in Computer Science, 1996 · Cited by 217 (23 self)
Abstract. The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus, programs and specifications can be viewed as descriptions of languages over some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages. By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.

- 1985
We consider the computation tree logic (CTL) proposed in (Sci. Comput. Programming 2 ...

- FORMAL METHODS IN SYSTEM DESIGN, VOL 6, ISS, 1995 · Cited by 136 (4 self)
We study property preserving transformations for reactive systems. The main idea is the use of simulations parameterized by Galois connections (α, γ), relating the lattices of properties of two systems. We propose and study a notion of preservation of properties expressed by formulas of a logic, by a function mapping sets of states of a system S into sets of states of a system S'. We give results on the preservation of properties expressed in sublanguages of the branching-time μ-calculus when two systems S and S' are related via ⟨α, γ⟩-simulations. They can be used to verify a property for a system by verifying the same property on a simpler system which is an abstraction of it. We show also under which conditions abstraction of concurrent systems can be computed from the abstraction of their components. This allows a compositional application of the proposed verification method. This is a revised version of the papers [2] and [16]; the results are fully developed in ...

- In 25th International Colloquium on Automata, Languages and Programming, ICALP '98, 1998 · Cited by 129 (12 self)
Abstract. The μ-calculus can be viewed as essentially the "ultimate" program logic, as it expressively subsumes all propositional program logics, including dynamic logics, process logics, and temporal logics. It is known that the satisfiability problem for the μ-calculus is EXPTIME-complete. This upper bound, however, is known for a version of the logic that has only forward modalities, which express weakest preconditions, but not backward modalities, which express strongest postconditions. Our main result in this paper is an exponential time upper bound for the satisfiability problem of the μ-calculus with both forward and backward modalities. To get this result we develop a theory of two-way alternating automata on infinite trees.

- Acta Informatica, 1990 · Cited by 91 (8 self)
This paper describes a procedure, based around the construction of tableau proofs, for determining whether finite-state systems enjoy properties formulated in the propositional mu-calculus. It presents a tableau-based proof system for the logic and proves it sound and complete, and it discusses techniques for the efficient construction of proofs that states enjoy properties expressed in the logic. The approach is the basis of an ongoing implementation of a model checker in the Concurrency Workbench, an automated tool for the analysis of concurrent systems.
1 Introduction
One area of program verification that has proven amenable to automation involves the analysis of finite-state processes. While computer systems in general are not finite-state, many interesting ones, including a variety of communication protocols and hardware systems, are, and their finitary nature enables the development and implementation of decision procedures that test for various properties. Model checking has p...

- 1996 · Cited by 79 (11 self)
In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module ...
Summary: INTRODUCTION TO S-PLUS FOR GENERALIZED P.M.E.Altham, Statistical Laboratory, University of Cambridge, CB3 0WB. July 7, 2010 Special thanks must go to Dr R.J.Gibbens for his help in introducing me to S-Plus. Several generations of keen and critical students for the Cambridge University Diploma in Mathematical Statistics (pre 1998) and for the MPhil in Statistical Science have made helpful suggestions which have improved these worksheets. Readers are warmly encouraged to tell the author of any further corrections or suggestions for improvements. Some draft solutions may be seen at The S-Plus worksheets were prepared for the MPhil course. Since I retired in September 2005, I have gradually edited and updated all my worksheets. (This is an ongoing process) I started to use R in about 1998, after the S-Plus worksheets were first constructed, and so many of the examples given here were used as the basis for my undergraduate worksheets in R: see for The R worksheets include many more graphs (with the corresponding R code) than are given in the S-Plus worksheets.
Refactoring probability distributions, part 2: Random sampling

Posted by Eric Kidd Wed, 21 Feb 2007 23:53:00 GMT

In Part 1, we cloned PFP, a library for computing with probability distributions. PFP represents a distribution as a list of possible values, each with an associated probability. But in the real world, things aren't always so easy. What if we wanted to pick a random number between 0 and 1? Our previous implementation would break, because there's an infinite number of values between 0 and 1—they don't exactly fit in a list.

As it turns out, Sungwoo Park and colleagues found an elegant solution to this problem. They represented probability distributions as sampling functions, resulting in something called the λ[◯] calculus. (I have no idea how to pronounce this!) With a little bit of hacking, we can use their sampling functions as a drop-in replacement for PFP.

A common interface

Since we will soon have two ways to represent probability distributions, we need to define a common interface.

```haskell
type Weight = Float

class (Functor d, Monad d) => Dist d where
  weighted :: [(a, Weight)] -> d a

uniform :: Dist d => [a] -> d a
uniform = weighted . map (\x -> (x, 1))
```

The function uniform will create an equally-weighted distribution from a list of values. Using this API, we can represent a two-child family as follows:

```haskell
data Child = Girl | Boy
  deriving (Show, Eq, Ord)

child :: Dist d => d Child
child = uniform [Girl, Boy]

family :: Dist d => d [Child]
family = do
  child1 <- child
  child2 <- child
  return [child1, child2]
```

Now, we need to implement this API two different ways: once with lists, and a second time with sampling functions.

Finite distributions, revisited

The monad from part 1 now becomes:

```haskell
type FDist = PerhapsT ([])

instance Dist FDist where
  weighted []  = error "Empty distribution"
  weighted xws = PerhapsT (map weight xws)
    where weight (x,w) = Perhaps x (P (w / sum))
          sum = foldl' (+) 0 (map snd xws)

exact :: FDist a -> [Perhaps a]
exact = runPerhapsT
```

We can run this as follows:

```
> exact family
[Perhaps [Girl,Girl] 25.0%, Perhaps [Girl,Boy] 25.0%,
 Perhaps [Boy,Girl] 25.0%, Perhaps [Boy,Boy] 25.0%]
```

Sampling functions

This second part is also easy. First, we define a type Rand a, which represents a value of type a computed using random numbers. We implement Rand by reduction to Haskell's IO monad. (There's any number of other ways to do this, of course.)

```haskell
newtype Rand a = Rand { runRand :: IO a }

randomFloat :: Rand Float
randomFloat = Rand (getStdRandom (random))
```

Next, we make Rand a trivial, sequential monad:

```haskell
instance Functor Rand where
  fmap = liftM

instance Monad Rand where
  return x = Rand (return x)
  r >>= f  = Rand (do x <- runRand r
                      runRand (f x))
```

We can turn any FDist into a Rand by picking an element at random:

```haskell
instance Dist Rand where
  weighted = liftF . weighted

liftF :: FDist a -> Rand a
liftF fdist = do
  n <- randomFloat
  pick (P n) (runPerhapsT fdist)

pick :: Monad m => Prob -> [Perhaps a] -> m a
pick _ [] = fail "No values to pick from"
pick n ((Perhaps x p):ps)
  | n <= p    = return x
  | otherwise = pick (n-p) ps
```

Of course, a single random sample won't do us much good. We need functions for repeated sampling and for displaying the results:

```haskell
sample :: Rand a -> Int -> Rand [a]
sample r n = sequence (replicate n r)

sampleIO r n = runRand (sample r n)

histogram :: Ord a => [a] -> [Int]
histogram = map length . group . sort
```

And sure enough, we get a nice, even distribution:

```
> liftM histogram (sampleIO family 1000)
```

That's it! Two monads for the price of one. Part 3: Coming soon.
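For readers who don't speak Haskell, the two representations above can be sketched in Python as well. This is my own rough translation, not part of the original post, and the function names (`exact_family`, `sample_family`, `histogram`) are invented here to mirror the Haskell definitions: exact enumeration of a finite distribution alongside Monte Carlo sampling.

```python
import random
from collections import Counter
from fractions import Fraction

def exact_family():
    """Exact enumeration, mirroring the FDist instance:
    every two-child family paired with its probability."""
    children = ["Girl", "Boy"]
    return {(c1, c2): Fraction(1, 4) for c1 in children for c2 in children}

def sample_family():
    """Monte Carlo sampling, mirroring the Rand instance:
    draw one family at random."""
    return (random.choice(["Girl", "Boy"]), random.choice(["Girl", "Boy"]))

def histogram(samples):
    """Count occurrences of each outcome, like the Haskell histogram."""
    return Counter(samples)

dist = exact_family()                                   # 4 outcomes, each 1/4
counts = histogram(sample_family() for _ in range(1000))  # roughly 250 each
```

As in the post, the exact version gives each of the four outcomes probability 1/4, while a histogram of 1000 samples clusters around 250 per outcome.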
David House said 5 days later:

Couldn't you just use newtype-deriving to make Rand a monad, like you did for making Prob an instance of Num in the first part? Also, sequence (replicate f xs) is just replicateM f xs. Great articles, by the way!

Eric said 5 days later:

I suspect that newtype-deriving will work quite nicely. Thanks! It might also be desirable to use the Rand monad from the Haskell wiki, or perhaps the splittable version (which might make it feasible to run sampling computations in parallel?).
Chance News 38

Revision as of 16:05, 17 July 2008

If the devil exists, he no doubt has a high IQ and an Ivy League degree. It's clear that having an educational pedigree is no prophylactic against greed and bad behavior.

Paul Alper found in Wikipedia the following quotations of well known statisticians relating to types of error.

In 1948, Frederick Mosteller argued that a "third kind of error" was required to describe circumstances he had observed, namely:

Type I error: rejecting the null hypothesis when it is true.
Type II error: accepting the null hypothesis when it is false.
Type III error: correctly rejecting the null hypothesis for the wrong reason.

In 1957, Allyn W. Kimball, a statistician with the Oak Ridge National Laboratory, proposed a different kind of error to stand beside "the first and second types of error in the theory of testing hypotheses". Kimball defined this new "error of the third kind" as being "the error committed by giving the right answer to the wrong problem".

Mathematician Richard Hamming expressed his view that "It is better to solve the right problem the wrong way than to solve the wrong problem the right way." The famous Harvard economist Howard Raiffa describes an occasion when he, too, "fell into the trap of working on the wrong problem".

In 1974, Ian Mitroff and Tom Featheringham extended Kimball's category, arguing that "One of the most important determinants of a problem's solution is how that problem has been represented or formulated in the first place." They defined type III errors as either "the error ... of having solved the wrong problem ... when one should have solved the right problem" or "the error ... [of] choosing the wrong problem representation ... when one should have ... chosen the right problem representation".

In 1969, the Harvard economist Howard Raiffa jokingly suggested "a candidate for the error of the fourth kind: solving the right problem too late". In 1970, Marascuilo and Levin proposed a "fourth kind of error" -- a "Type IV error" -- which they defined in a Mosteller-like manner as being the mistake of "the incorrect interpretation of a correctly rejected hypothesis"; which, they suggested, was the equivalent of "a physician's correct diagnosis of an ailment followed by the prescription of a wrong medicine". See the Wikipedia article for references.

Deborah Alper suggested the following Forsooth:

Roughly one-third of all eligible Americans, 64 million people, are not registered to vote. This percentage is even higher for African-Americans (30 percent) and Hispanics (40 percent).
The Nation, July 21/28, 2008, page 32

Paul Alper suggested the following two Forsooths:

We want to persuade you of one claim: that William Sealy Gosset (1876-1937) -- aka "Student" of Student's t-test -- was right and that his difficult friend, Ronald A. Fisher, though a genius, was

From the preface of Cult of Statistical Significance, Deirdre Nansen McCloskey and Steve Ziliak, Feb 19, 2008

"There's this cluster of interrelated findings", said Richard A. Lippa, a professor of psychology at California State University at Fullerton, who has found evidence that in gay men, the hair on the back of the head is more likely to curl counterclockwise than in straight men. "These are all biological markers that something must have gone on early in development".

Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up. By John Allen Paulos. 158 pp. Hill & Wang. $20.

John suggested that Chance News readers might enjoy some of the arguments that he used in his book that rely on probability concepts. We give a sample below, and you can see more of his probability arguments in a talk he gave at the recent conference "Beyond Belief: Enlightenment 2.0" sponsored by The Science Network.

A common creationist argument goes roughly like the following. A very long sequence of individually improbable mutations must occur in order for a species or a biological process to evolve. If we assume these are independent events, then the probability of all of them occurring and occurring in the right order is the product of their respective probabilities, which is always a tiny number. Thus, for example, the probability of getting a 3, 2, 6, 2, and 5 when rolling a single die five times is 1/6 x 1/6 x 1/6 x 1/6 x 1/6, or 1/7,776 - one chance in 7,776.
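Paulos's arithmetic, and his point that every after-the-fact sequence of rolls is equally improbable, can be checked in a few lines of Python (my own sketch, not from the book):

```python
from fractions import Fraction
from itertools import product

# Probability of one specific sequence of five rolls, e.g. (3, 2, 6, 2, 5).
p_specific = Fraction(1, 6) ** 5
assert p_specific == Fraction(1, 7776)  # Paulos's "one chance in 7,776"

# The flaw he identifies: every one of the 6^5 = 7776 possible sequences
# is exactly this improbable, yet some sequence is certain to occur.
all_sequences = list(product(range(1, 7), repeat=5))
assert len(all_sequences) == 7776
assert sum(Fraction(1, 7776) for _ in all_sequences) == 1
```

The last assertion is the point of his rebuttal: the tiny per-path probability says nothing by itself, because the equally improbable paths exhaust all outcomes and their probabilities sum to one.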
The much longer sequences of fortuitous events necessary for a new species or a new process to evolve lead to the minuscule probabilities that creationists argue prove that evolution is so wildly improbable as to be essentially impossible.

This line of argument, however, is deeply flawed. Leaving aside the issue of independent events, I note that there are always a fantastically huge number of evolutionary paths that might be taken by an organism (or a process), but there is only one that actually will be taken. So if, after the fact, we observe the particular evolutionary path actually taken and then calculate the a priori probability of its being taken, we will get the minuscule probability that creationists mistakenly attach to the process as a whole.

A related creationist argument is supplied by Michael Behe, a key supporter of intelligent design. Behe likens what he terms the "irreducible complexity" of phenomena such as the clotting of blood to the irreducible complexity of a mousetrap. If just one of the trap's pieces is missing -- whether it be the spring, the metal platform, or the board -- the trap is useless. The implicit suggestion is that all the parts of a mousetrap would have had to come into being at once, an impossibility unless there were an intelligent designer. Design proponents argue that what's true for the mousetrap is all the more true for vastly more complex biological phenomena. If any of the 20 or so proteins involved in blood clotting is absent, for example, clotting doesn't occur, and so, the creationist argument goes, these proteins must have all been brought into being at once by a designer.

But the theory of evolution does explain the evolution of complex biological organisms and phenomena, and the Paley argument from design has been decisively refuted. Natural selection acting on the genetic variation created by random mutation and genetic drift results in those organisms with more adaptive traits differentially surviving and reproducing.
(Interestingly, that we and all life have evolved from simpler forms by natural selection disturbs fundamentalists who are completely unfazed by the Biblical claim that we come from dirt.) Further rehashing of defenses of Darwin or refutations of Paley is not my goal, however. Those who reject evolution are usually immune to such arguments anyway. Rather, my intention here is to develop some loose analogies between these biological issues and related economic ones and, secondarily, to show that these analogies point to a surprising crossing of political lines.

Paul Alper suggested that readers might enjoy the following:

Paulos often writes about unlikely events and how quickly the public tends to assume something supernatural is taking place. On page 52 of Irreligion he muses on numerological coincidences involving 9/11. He starts with 9/11 being "the telephone code for emergencies." The digits 9 + 1 + 1 sum to 11, and September 11 is the 254th day of the year, so that 2 + 5 + 4 sum to 11. Further, there are another 111 days to the end of the year. The first plane to crash into the towers was flight number 11. The Pentagon, Afghanistan and New York City each have 11 letters. Moreover, any three-digit number when multiplied by 91 and 11 results in a six-digit number where digits four, five and six repeat digits one, two and three, respectively; in particular, starting with 911 results in 911,911. A few pages later he notes that on September 11, 2002 "the New York State lottery numbers were 911." The day before that, "the closing value of the September S&P 500 futures contracts" was 911. And to cinch it all, Johnny Unitas, the number one quarterback ever, died on September 11 and wore 19 on his jersey.

An improbable event and a coincidence

I have an example of an improbable event and a coincidence; it shows the difference between them. At Forrest's graduation last night, all of the seniors marched, in alphabetical order, to the stage to receive their diplomas.
The women were wearing gray gowns and the men were wearing black gowns. I was careful to note any siblings (as far as I could tell, there were none). GREAT! So now we have a random sequence of coin tosses of length about 310, and the coin is pretty close to fair. The longest sequence of consecutive men I observed was 9; this is somewhat longer than the expected length of the longest run of heads, which is about 7, and somewhat longer than the expected length of the longest run of either heads or tails, which is about 8. So I observed a fairly unusual event. The coincidence is that Forrest was in the longest run of men.

An email from Charles Grinstead to Laurie Snell about his son's graduation.

The Drunkard's Walk: How Randomness Rules Our Lives
Leonard Mlodinow
Pantheon Books, New York, 2008

There are not many writers who can successfully write about mathematics for the general public, but Leonard Mlodinow is one of them. He is a physicist who has written a number of successful books on physics and mathematics for the general public. He has also been an editor for Star Trek.

The Drunkard's Walk is his most recent book. In this book he shows that we all have a hard time understanding probability and yet it plays an important role in our daily lives. To show that we are not wired to understand probability, he has only to show us the birthday problem, the Monty Hall problem, the two sisters problems, the Linda problem, the two-envelope problem, etc.

Of course, to understand how probability affects our lives we have to understand some basic probability. Mlodinow makes this more interesting by explaining probability along with the history of its development. He starts with Cardano introducing the sample space and solving dice problems. At the same time he discusses Cardano's colorful life. He then discusses Pascal and Fermat's solution to the problem of points.
He continues with Bernoulli, de Méré, and Bayes and explains their contributions, including the law of large numbers, the central limit theorem and conditional probability. Of course none of this is new, but what makes this book so interesting is that while Mlodinow discusses probability concepts and applications he also explains how they can affect our lives. For example, the argument that there is no such thing as a hot hand in basketball might also be made about your stock adviser.

To hear this in action, listen to Mlodinow himself here. You can also read a review of this book and other similar books here, by the well known probabilist David J. Aldous of the Berkeley Statistics Department. David teaches an undergraduate seminar, "From Undergraduate Probability Theory to the Real World." You will also find on his website a talk, "The top ten things that math probability says about the real world."

submitted by Laurie Snell

Researchers Fail to Reveal Full Drug Pay
New York Times, June 8, 2008
Gardiner Harris and Benedict Carey

The authors say:

A world-renowned Harvard child psychiatrist whose work has helped fuel an explosion in the use of powerful antipsychotic medicines in children earned at least $1.6 million in consulting fees from drug makers from 2000 to 2007 but for years did not report much of this income to university officials, according to information given Congressional investigators.

By failing to report income, the psychiatrist, Dr. Joseph Biederman, and a colleague in the psychiatry department at Harvard Medical School, Dr. Timothy E. Wilens, may have violated federal and university research rules designed to police potential conflicts of interest, according to Senator Charles E. Grassley, Republican of Iowa. Some of their research is financed by the government.

Like Dr. Biederman, Dr. Wilens belatedly reported earning at least $1.6 million from 2000 to 2007, and another Harvard colleague, Dr. Thomas Spencer, reported earning at least $1 million after being pressed by Mr.
Grassley's investigators. But even these amended disclosures may understate the researchers' outside income because some entries contradict payment information from drug makers, Mr. Grassley found. In one example, Dr. Biederman reported no income from Johnson & Johnson for 2001 in a disclosure report filed with the university. When asked to check again, he said he received $3,500. But Johnson & Johnson told Mr. Grassley that it paid him $58,169 in 2001, Mr. Grassley found.

The Harvard group's consulting arrangements with drug makers were already controversial because of the researchers' advocacy of unapproved uses of psychiatric medicines in children. In addition to money that they get from the drug companies, researchers often get additional support from the National Institutes of Health, which has some responsibility to monitor conflicts of interest. Since neither the universities nor the NIH seem to be doing their duty, Senator Grassley is asking Congress and the NIH to do something about this. You can read his proposal here.

Does the internet help or confuse medical decisions?

I have been diagnosed as having Mild Cognitive Impairment (MCI). This is a transition stage between the cognitive changes of normal aging and the more serious problems caused by Alzheimer's disease (AD). It has been recommended that I take two medications, Aricept (donepezil) and Exelon (rivastigmine), which do not improve memory but are thought to delay the occurrence of Alzheimer's disease. Since the possible side effects are unpleasant, I decided to look on the web for studies on the effectiveness of these drugs. I found that the recommendation for donepezil is often based on the article:

Vitamin E and Donepezil for the Treatment of Mild Cognitive Impairment
New England Journal of Medicine, June 9, 2005
Ronald C. Petersen and others
This is a well-designed experiment, and they describe their study as:

A total of 769 subjects were enrolled, and possible or probable Alzheimer's disease developed in 212. The overall rate of progression from mild cognitive impairment to Alzheimer's disease was 16 percent per year. As compared with the placebo group, there were no significant differences in the probability of progression to Alzheimer's disease in the vitamin E group (hazard ratio, 1.02; 95 percent confidence interval, 0.74 to 1.41; P=0.91) or the donepezil group (hazard ratio, 0.80; 95 percent confidence interval, 0.57 to 1.13; P=0.42) during the three years of treatment. Prespecified analyses of the treatment effects at 6-month intervals showed that, as compared with the placebo group, the donepezil group had a reduced likelihood of progression to Alzheimer's disease during the first 12 months of the study (P=0.04), a finding supported by the secondary outcome measures. Among carriers of one or more apolipoprotein E 4 alleles, the benefit of donepezil was evident throughout the three-year follow-up. There were no significant differences in the rate of progression to Alzheimer's disease between the vitamin E and placebo groups at any point, either among all patients or among apolipoprotein E 4 carriers.

Their conclusion was:

Vitamin E had no benefit in patients with mild cognitive impairment. Although donepezil therapy was associated with a lower rate of progression to Alzheimer's disease during the first 12 months of treatment, the rate of progression to Alzheimer's disease after three years was not lower among patients treated with donepezil than among those given placebo.

The 3-year period was the primary outcome, but the one-year period was an exploratory outcome, so we have to worry about this. Bob Norman suggested another thing we might worry about.
He suggested the following picture: the top line is the rate of progression of Alzheimer's for the placebo group and the bottom line for the treated group. This shows the rate of progression of Alzheimer's for the treated group decreasing in the first year and then increasing until the third year. But after that the treated group increases faster than the placebo group!

The most recent study for rivastigmine seems to be the following:

Effect of rivastigmine on delay to diagnosis of Alzheimer's disease from mild cognitive impairment
Lancet Neurology, June 2007
Howard Feldman and others

The authors describe their study as follows:

Of 1018 study patients enrolled, 508 were randomly assigned to rivastigmine and 510 to placebo; 17·3% of patients on rivastigmine and 21·4% on placebo progressed to AD (hazard ratio 0·85 [95% CI 0·64–1·12]; p=0·225). There was no significant difference between the rivastigmine and placebo groups on the standardized Z score for the cognitive test battery measured as mean change from baseline to endpoint (−0·10 [95% CI −0·63 to 0·44], p=0·726). Serious adverse events were reported by 141 (27·9%) rivastigmine-treated patients and 155 (30·5%) patients on placebo; adverse events of all types were reported by 483 (95·6%) rivastigmine-treated patients and 472 (92·7%) placebo-treated patients. The predominant adverse events were cholinergic: the frequencies of nausea, vomiting, diarrhoea, and dizziness were two to four times higher in the rivastigmine group than in the placebo group.

And their interpretation was:

There was no significant benefit of rivastigmine on the progression rate to AD (Alzheimer's) or on cognitive function over 4 years. The overall rate of progression from MCI to AD in this randomized clinical trial was much lower than predicted. Rivastigmine treatment was not associated with any significant safety concerns.
Ronald C. Petersen (lead author of the New England Journal of Medicine study) wrote a critique of this study:

MCI treatment trials: failure or not?
The Lancet Neurology, Volume 6, Issue 6 (June 2
Ronald C. Petersen

In this critique he writes: "The 3-year duration of anticipated therapeutic effect applied to this study was, in retrospect, overly ambitious: the treatment did not work for this duration in the donepezil and vitamin E trial, and no cholinesterase inhibitor has been shown to work for 3 years, even in AD. Therefore, this duration of effect would not be expected at the MCI stage. This, coupled with the subtherapeutic doses used, contributed to the treatment failure. In spite of all these challenges, there was a glimmer of efficacy—a trend towards a positive rivastigmine effect. Some of the MRI measures suggested a therapeutic response during the 1 year to 2 year window, which is similar to the mild efficacy effect in the donepezil and vitamin E trial."

Note: Ronald C. Petersen is the Director of the Mayo Alzheimer's Disease Research Center.

Well, where does that leave me? Answer: Confused.

(1) What are exploratory outcomes and why do we have to worry about them?
(2) What do doctors know that the Internet doesn't?
(3) How do you think we should use the internet in making a medical decision?

Submitted by Laurie Snell

A Controversy About Doping

Detecting drug cheats is a key issue in the lead-up to the Beijing Olympics this summer. Not a surprise, given the battering of top-flight sports by successive doping scandals. In June, cyclist Floyd Landis officially lost his 2006 Tour de France title, almost two years after his urine gave a positive test for a performance-enhancing drug known as EPO. WADA (the World Anti-Doping Agency) has accredited 33 labs around the world to perform this and other anti-doping tests. Recently, some Danish researchers rained on the parade, claiming that their experiment proved that the EPO test had poor "detection power".
(For a good summary, see “The Validity of EPO Testing for Athletes”, ScienceDaily.com, June 28, 2008.) In their paper, researchers at the Copenhagen Muscle Research Center made two specific claims: (a) that the detection power of the EPO test is poor; and (b) that agreement between results from two WADA-accredited labs is very poor. They concluded that due to a high false negative rate (low sensitivity), this test would fail to catch many drug cheats. They called into question WADA’s ability to detect EPO abuse at the Beijing Olympics.

In the experiment, they recruited eight (8) healthy college students, all non-athletes, to follow a program of EPO injection and exercise over a seven-week period, divided into three phases (boosting/higher dose, maintenance/lower dose, post-treatment/off cycle). Eight, 16 and 24 urine samples were collected in these respective phases, in addition to eight samples taken before the EPO program to serve as a baseline. Each sample was split, and each half-sample was submitted to one of two WADA-accredited labs (known as “Lab A” and “Lab B”) for EPO testing. An excerpt from the table of results from the original paper is given below:

│Phase          │Pre│Boosting│Maintenance1│Maintenance2│Post1│Post2│Post3│
│Samples tested │8  │8       │8           │8           │7*   │6*   │7*   │
│Lab A +        │0  │8       │4           │2           │2    │0    │0    │
│Lab B +        │0  │0       │0           │0           │0    │0    │0    │

*Note: The reduction in samples in the post-treatment period was not explained.

Source: Lundby, et al., “Testing for recombinant human erythropoietin in urine: problems associated with current anti doping testing”, PresS. J. Appl. Physiol., June 26, 2008, online.

--- Discussion ---

(1) Describe a pair of metrics that statisticians use to measure the accuracy (“detection power”) of diagnostic tests. This experiment addressed only one of these metrics. Which one? Why couldn’t this experimental design capture the other metric? Why is it important to know the other metric before passing judgment on test accuracy?
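To make question (1) concrete, here is a small sketch (my own framing, not from the paper) that computes Lab A's per-phase detection rate straight from the table above, naively treating every sample collected during or after EPO administration as a true positive:

```python
# Lab A "positive" counts, copied from the table above: (positives, samples tested).
lab_a = {
    "Pre":          (0, 8),   # baseline samples, before any EPO
    "Boosting":     (8, 8),
    "Maintenance1": (4, 8),
    "Maintenance2": (2, 8),
    "Post1":        (2, 7),
    "Post2":        (0, 6),
    "Post3":        (0, 7),
}

# Naive per-phase detection rate: every on-EPO sample is treated as a true positive.
for phase, (p, t) in lab_a.items():
    print(f"{phase:13s} {p}/{t} = {p / t:.2f}")

# Pooled over all samples taken during or after EPO administration:
pos = sum(p for k, (p, t) in lab_a.items() if k != "Pre")
n = sum(t for k, (p, t) in lab_a.items() if k != "Pre")
print(f"pooled Lab A detection rate: {pos}/{n} = {pos / n:.2f}")  # 16/44 = 0.36
```

By contrast, only the eight "Pre" samples are known negatives, so this design says almost nothing about the false positive rate (specificity), which is the other metric question (1) asks about.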
(2) Inside the paper, the authors revealed that each lab classified each sample into one of three categories: positive, suspicious and negative. Suspicious cases are subject to further confirmatory testing (although we do not know if this was done). We were told that Lab A indicated 2 maintenance samples and 3 post-treatment samples “suspicious”; Lab B indicated 7 boosting samples and 5 maintenance samples “suspicious”; Lab B called 1 boosting sample negative. Do the additional data affect your opinion of the study’s conclusions?

(3) Comment on the sample size and the selection mechanism. How comfortable are you making statistical inference from these data?

(4) In the Discussion, the authors used language like “In the ‘maintenance’ period, laboratory A found six positive results … in a total of 16 samples.” Explain why special testing methodology must be used if you were to treat this data as an n=16 sample. What key assumption is violated when ordinary testing is invoked?

--- Further Discussion ---

It has been widely reported that the EPO test is highly “unreliable” because this study proved that two labs, both WADA certified, could produce divergent analytical results. In response, Olivier Rabin, WADA’s scientific director, doubted that this study “reflected the true state of EPO testing”. (“Study Shows Problems in Olympic-Style Tests”, New York Times, June 26, 2008)

Dr. Rabin raised a core question in statistical inference: given that the true state is unknown, one would like to see if the experimental data provide sufficient statistical evidence to prove or disprove one’s hypothesis.

(5) State null and alternative hypotheses for this problem. Identify the population and the sample in the Danish experiment. What was their sample size?

(6) Under what conditions are you willing to generalize the result of this test of Lab A vs. Lab B? (Or design a different experiment to test your stated hypotheses.)

Submitted by Kaiser Fung
Dr Komil Kuliev
Position: Research Associate
Email: KulievK@cardiff.ac.uk
Telephone: +44(0)29 208 75259
Fax: +44(0)29 208 74199
Extension: 75259
Location: M/2.13

Research Interest
Differential Equations, Inequalities.

Research Group
Analysis and Differential Equations

Recent Significant Publications
Drábek, P.; Kuliev, K. Half-linear Sturm-Liouville problem with weights. Bull. Belg. Math. Soc. Simon Stevin 19(2012), no. 1, 107-119.
Kufner, A.; Kuliev, K.; Persson, L.-E. Some higher order Hardy inequalities. J. Ineq. & Appl. 69(2012), no. 1, 1-14.
Kufner, A.; Kuliev, K. The Hardy inequality with “negative powers”. Adv. Algebra Anal. 1(2006), no. 3, 219-228.
Kufner, A.; Kuliev, K.; Oguntuase, J. A.; Persson, L.-E. Generalized weighted inequality with negative powers. J. Math. Inequal. 1(2007), no. 2, 269-280.
Kuliev, Komil; Persson, Lars-Erik. An extension of Rothe’s method to non-cylindrical domains. Appl. Math. 52(2007), no. 5, 365-389.
Gogatishvilli, A.; Kuliev, K.; Kulieva, G. Some conditions characterizing the “reverse” Hardy inequality. Real Anal. Exchange 33(2008), no. 1, 249-257.
Kufner, A.; Kuliev, K.; Kulieva, G. The Hardy inequality with one negative parameter. Banach J. Math. Anal. 2(2008), no. 2, 76-84.
Kufner, A.; Kuliev, K.; Kulieva, G. Transformation method for parabolic equations and variational inequalities in non-cylindrical domains. Far East J. Math. Sci. (FJMS) 28(2008), no. 1, 17-36.
Kufner, A.; Kuliev, K.; Kulieva, G. Conditions characterizing the Hardy and reverse Hardy inequalities. Math. Inequal. Appl. 12(2009), no. 4, 693-700.
Essel, Emmanuel Kwame; Kuliev, Komil; Kulieva, Gulchehra; Persson, Lars-Erik. Homogenization of quasilinear parabolic problems by the method of Rothe and two scale convergence. Appl. Math. 55(2010), no. 4, 305-327.
Kufner, A.; Kuliev, K. Some characterizing conditions for the Hardy inequality. Eurasian Mathematical Journal (EMJ) 1(2010), no. 2, 86-98.
Kufner, A.; Kuliev, K.; Kulieva, G. Transformation method for parabolic equations and variational inequalities in non-cylindrical domains. Research report 2006-12, Department of Mathematics, Luleå University of Technology, Sweden, ISSN 1400-4003.
Kuliev, Komil; Kulieva, Gulchehra; Essel, Emmanuel Kwame; Persson, Lars-Erik. Linear parabolic problems with singular coefficients in non-cylindrical domains. Research report 2006-14, Department of Mathematics, Luleå University of Technology, Sweden, ISSN 1400-4003.
Kuliev, K.; Kulieva, G.; Persson, L.-E. Parabolic variational inequalities with singularities. Research report 2007-2, Department of Mathematics, Luleå University of Technology, Sweden, ISSN
Essel, Emmanuel Kwame; Kuliev, Komil; Kulieva, Gulchehra; Persson, Lars-Erik. Homogenization of quasilinear parabolic problems by the method of Rothe and two scale convergence. Research report 2008-3, Department of Mathematics, Luleå University of Technology, Sweden, ISSN 1400-4003.

Doctoral Thesis
Parabolic problems on non-cylindrical domains – The method of Rothe. PhD thesis (2007), Department of Mathematics, University of West Bohemia, Czech Republic.

Licentiate Thesis
Parabolic problems on non-cylindrical domains – The method of Rothe. Lic. thesis (2006), Department of Mathematics, Luleå University of Technology, Sweden. Address for the thesis: http://epubl.ltu.se/1402-1757/2006/66/LTU-LIC-0666-SE.pdf

Major Conference Talks
06/2009: “Some characterizing conditions of Hardy inequality”, Analysis, Inequalities and Homogenization Theory, Luleå, June 8-11, 2009.
We need one more tool to solve for intersections of polar functions: solving a system of two equations. We can find where two rectangular functions y = f(x) and y = g(x) intersect by setting them equal to each other and solving the equation f(x) = g(x) for x. This is the place (or places) where both equations are true at the same time.

Like the intersection of two rectangular functions, we can find most of the places two polar equations r = f(θ) and r = g(θ) intersect by setting them equal to each other and solving the equation f(θ) = g(θ) for θ.

It's useful to graph the functions before finding where they intersect, since we want to know how many intersection points we need to find. Also, setting the equations equal to each other and solving might not catch an intersection point at the origin. This sort of thing can happen because with polar coordinates there are infinitely many ways to write any point. Pick your favorite and stop there.

When finding where two polar graphs intersect, graph the functions first. Then look at how many intersection points there are and which quadrants they're in. Then we'll know how many points we need to find and roughly where they are. We can estimate the intersection points from the graph. The graphs of r = cos θ and r = sin θ do look like they hit each other around θ = π/4. However, we still need to set the functions equal to each other and solve for θ.

• Find the points at which r = 1 and r = sin θ intersect.
• Find where r = cos θ and r = sin θ intersect.
• Find all points where r = 1 + cos θ and r = 1 + sin θ intersect.
• Find all points where r = 1 + cos θ and r = -cos θ intersect.
• Find all points in the first quadrant where r = 2sin(3θ) and r = 1 intersect.
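For the r = cos θ vs. r = sin θ example, the equation-solving step can be sketched numerically with plain Python (a sketch of mine, not part of the lesson; it scans for sign changes of cos θ − sin θ and refines each with bisection):

```python
import math

def f(theta):
    # Difference of the two radii; zero exactly where cos(theta) = sin(theta).
    return math.cos(theta) - math.sin(theta)

def bisect(lo, hi, tol=1e-12):
    # Standard bisection, assuming f changes sign on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Scan [0, 2*pi) for sign changes, then refine each one with bisection.
roots = []
n = 1009  # grid size chosen so no grid point lands exactly on a root
for i in range(n):
    a, b = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
    if f(a) * f(b) < 0:
        roots.append(bisect(a, b))

print([round(r, 6) for r in roots])  # pi/4 and 5*pi/4
```

Both roots describe the same Cartesian point: θ = π/4 gives r = √2/2, while θ = 5π/4 gives r = −√2/2, which plots at the same location. And the intersection at the origin never shows up as a root of cos θ = sin θ at all, which is exactly why the passage recommends graphing first.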
Math Forum Discussions - wavelets
Date: Jan 26, 2013 6:15 PM
Author: perseus
Subject: wavelets

This is a sine wave riding a sine wave and the resultant equation. Can anyone tell me why f(x) = sin(sin(2x)) creates a symmetrical sine wave, but f(x) = sin(2pi(sin(2x))) doesn't? When you figure it out, let me know.
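A quick numeric check (my own sketch, not an answer from the thread) suggests both functions are in fact odd, so both graphs are symmetric about the origin; the second one merely looks less sine-like because the amplitude 2π drives the inner value through several full periods of the outer sine:

```python
import math

def f1(x):
    return math.sin(math.sin(2 * x))

def f2(x):
    return math.sin(2 * math.pi * math.sin(2 * x))

# A function is odd (symmetric about the origin) when f(-x) = -f(x).
xs = [i / 100 for i in range(1, 500)]
err1 = max(abs(f1(-x) + f1(x)) for x in xs)
err2 = max(abs(f2(-x) + f2(x)) for x in xs)
print(err1, err2)  # both essentially zero: each function is odd
```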
New Hyde Park Find a New Hyde Park Precalculus Tutor ...I have also been tutoring for the past 5 years Elementary Math, Algebra, Precalculus and Calculus students, amongst others, at Hunter College's Dolciani Math Learning tutoring center. I have taught Elementary Math, Algebra, Finite Mathematics, PreCalculus, Introductory and Intermediate Statistic... 21 Subjects: including precalculus, calculus, physics, statistics ...I am easygoing and get along with children at all ages very well. Well, I am sure it should be even easier to communicate with adults. My time is very flexible, but I do prefer to meet my student at least 5 hours a week in order to have a better result of tutoring. 6 Subjects: including precalculus, calculus, Japanese, Chinese ...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. 11 Subjects: including precalculus, Spanish, calculus, physics I believe that everyone can do well in Mathematics and Science. It does however require effort. I look forward to showing you the way, but the effort has to come from you. 11 Subjects: including precalculus, calculus, GRE, GMAT ...Tutoring students from all different ages poses new challenges. Yes, as you progress in the mathematical world, concepts become more involved and abstract. This is the challenge I face while teaching a students at the high school and college level. 11 Subjects: including precalculus, calculus, physics, geometry
Find the points of inflection and discuss the concavity of the graph of the function

September 16th 2007, 02:20 PM #1

I need to find the points of inflection and discuss the concavity of the graph of the function:

f(x) = (x+1)/sqrt(x)

I know that finding points of inflection requires setting the second derivative equal to zero and finding where it is undefined. Here is my work so far:

f'(x) = (1/2)x^(-1/2) - (1/2)x^(-3/2)
f''(x) = -(1/4)x^(-3/2) + (3/4)x^(-5/2)

My problem is that when I find the zeros of f''(x) I get x = 3. That makes sense. But when I see where it is undefined, it is undefined on (-infinity, 0]. How can I use this information to find points of inflection and where the graph is concave up/down? Thanks so much!!!

September 16th 2007, 02:53 PM #2 (MHF Contributor)

Forget the derivative for a moment. What is the domain of f(x)? Are you really surprised that the second derivative doesn't exist for x <= 0? You can check the sign of the second derivative on both sides of the candidate point of inflection to see the concavity.
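The check suggested in the reply can be sketched numerically (the derivative formula is the poster's, valid only on the domain x > 0; the bisection scaffolding is mine):

```python
# The poster's second derivative: f''(x) = -(1/4)x^(-3/2) + (3/4)x^(-5/2),
# defined only for x > 0 (the domain of f itself).
def f2nd(x):
    return -0.25 * x ** -1.5 + 0.75 * x ** -2.5

# Sign of f'' on either side of x = 3 gives the concavity.
print(f2nd(1.0) > 0)   # True: concave up on (0, 3)
print(f2nd(4.0) < 0)   # True: concave down on (3, infinity)

# Locate the zero of f'' by bisection on [1, 4].
lo, hi = 1.0, 4.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if f2nd(lo) * f2nd(mid) <= 0:
        hi = mid
    else:
        lo = mid
inflection = (lo + hi) / 2
print(inflection)  # 3.0 to machine precision: the only inflection point
```

Since f'' changes sign there and x = 3 is inside the domain, the graph has its single inflection point at x = 3.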
Karen E Free Tutoring: The "Math Lab" is available in the University Learning Center on the first floor of Westside Hall. No appointment is needed during open lab hours. More about the Learning Center ( hours of operation, information on tutors, etc.) can be found on the website: http://www.uncw.edu/ulc . This is a great place to go to do your homework or study for tests. [When you have a question, you can get an answer and continue working rather than "getting stuck" and giving up.] There are excellent tutors in the Learning Center. Use the above website to "meet" them and then pick a regular time to do your homework in the Learning Center. You may use a computer in the Learning Center if you don't have easy access to a computer of your own. Students who visit the Math Lab can also get help with math study skills and math anxiety. Math tutors help students make the transition to college mathematics as well as supporting students in math and statistics courses. Math Services is now offering one-on-one appointments for MAT 112, MAT 115, MAT 141-142, MAT 161-2, and STT 215. Students can set their own appointments through our website: The Learning Center also has this link which gives additional resources to learning math: http://www.uncw.edu/ulc/math/resources.html It is also very helpful if you form a small "study group" with others in your class to meet on a regular basis to do the assigned homework and study for quizzes and tests. Math 112 has online course materials which can be accessed at http://www.coursecompass.com/ and on this website http://people.uncw.edu/spikek/mat112.htm
Infinity and the mind – LewRockwell.com A fascinating book (but probably not for non-math nerds) that I read many moons ago is Rudy Rucker’s Infinity and the Mind (review). There are actually different types and levels of infinity, and whole mathematics of infinities… pretty mind-blowing stuff. Even the simple notion of infinity is hard to wrap your mind around. But here is a neat little trick I learned a long time ago, that I’ve never forgotten. A line segment is packed with an infinite number of points, right? Say, a one-inch line segment. You would think that a line segment twice as long has more points in it than the shorter one, right? Maybe… twice as many points? Well, this is wrong, as can be seen in the following diagram: If you take the two line segments, L1 and L2, and curve them into concentric circles, then you can see that for every single point on the outer, larger line segment L2, it corresponds to a unique point on the inner line segment L1–the same radius line passes through both. If L2 had more points than L1, you could find at least one point on L2, that does not correspond to a unique point on L1… but you can’t. Every point on L2 connects by the radius line, to a point on L1. So, the longer line segment L2 does not have “more” points than the shorter segment L1. But it does not have the “same number” of points, either–that’s the thing about infinity. Each segment has an infinite number of points, but this is not a fixed number, so they can’t be the “same”. That is why 2 x infinity = infinity. If infinity was a certain number, then this equation would not be true–but it is. 3:26 pm on September 23, 2003 Email Stephan Kinsella
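The radius-line picture is just a one-to-one pairing (a bijection): every point p of the short segment L1 = [0, 1] corresponds to the point 2p of the long segment L2 = [0, 2], and every point of L2 comes from exactly one point of L1. A tiny illustrative sketch (the function names are mine, not from the post):

```python
def to_l2(p):
    # Pair the point p of L1 = [0, 1] with the point 2p of L2 = [0, 2].
    return 2 * p

def to_l1(q):
    # The inverse pairing: every point of L2 comes from exactly one point of L1.
    return q / 2

# Round-tripping shows no point of either segment is missed or used twice.
assert all(to_l1(to_l2(p)) == p for p in [0.0, 0.1, 0.25, 1 / 3, 0.5, 1.0])
assert all(to_l2(to_l1(q)) == q for q in [0.0, 0.4, 1.0, 1.7, 2.0])
print("the two segments pair off point for point")
```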
Results 1 - 10 of 43 , 1992 "... We survey the computational geometry relevant to finite element mesh generation. We especially focus on optimal triangulations of geometric domains in two- and three-dimensions. An optimal triangulation is a partition of the domain into triangles or tetrahedra, that is best according to some cri ..." Cited by 180 (8 self) Add to MetaCart We survey the computational geometry relevant to finite element mesh generation. We especially focus on optimal triangulations of geometric domains in two- and three-dimensions. An optimal triangulation is a partition of the domain into triangles or tetrahedra, that is best according to some criterion that measures the size, shape, or number of triangles. We discuss algorithms both for the optimization of triangulations on a fixed set of vertices and for the placement of new vertices (Steiner points). We briefly survey the heuristic algorithms used in some practical mesh , 1987 "... In this paper we study the ways in which the topology of the image of a polyhedron changes with changing viewpoint. We catalog the ways that the topological appearance, or aspect, can change. This enables us to find maximal regions of viewpoints of the same aspect. We use these techniques to constru ..." Cited by 88 (7 self) Add to MetaCart In this paper we study the ways in which the topology of the image of a polyhedron changes with changing viewpoint. We catalog the ways that the topological appearance, or aspect, can change. This enables us to find maximal regions of viewpoints of the same aspect. We use these techniques to construct the viewpoint space partition (VSP), a partition of viewpoint space into maximal regions of constant aspect, and its dual, the aspect graph. In this paper we present tight bounds on the maximum size of the VSP and the aspect graph and give algorithms for their construction, first in the convex case and then in the general case. 
In particular, we give bounds on the maximum size of Θ(n²) and Θ(n⁶) under an orthographic projection viewing model and of Θ(n³) and Θ(n⁹) under a perspective viewing model. The algorithms make use of a new representation of the appearance of polyhedra from all viewpoints, called the aspect representation or asp. We believe that this - ALGORITHMICA , 1992 "... Given a set of nonintersecting polygonal obstacles in the plane, the link distance between two points s and t is the minimum number of edges required to form a polygonal path connecting s to t that avoids all obstacles. We present an algorithm that computes the link distance (and a correspon ..." Cited by 53 (6 self) Add to MetaCart Given a set of nonintersecting polygonal obstacles in the plane, the link distance between two points s and t is the minimum number of edges required to form a polygonal path connecting s to t that avoids all obstacles. We present an algorithm that computes the link distance (and a corresponding minimum-link path) between two points in time O(Eα(n) log² n) (and space O(E)), where n is the total number of edges of the obstacles, E is the size of the visibility graph, and α(n) denotes the extremely slowly growing inverse of Ackermann's function. We show how to extend our method to allow computation of a tree (rooted at s) of minimum-link paths from s to all obstacle vertices. This leads to a method of solving the query version of our problem (for query points t). - Computational Geometry , 1985 "... Spurred by developments in spatial planning in robotics, computer graphics, and VLSI layout, considerable attention has been devoted recently to the problem of moving sets of objects, such as line segments and polygons in the plane to polyhedra in three dimensions, without allowing collisions betwee ..."
Cited by 39 (4 self) Add to MetaCart Spurred by developments in spatial planning in robotics, computer graphics, and VLSI layout, considerable attention has been devoted recently to the problem of moving sets of objects, such as line segments and polygons in the plane to polyhedra in three dimensions, without allowing collisions between the objects. One class of such problems considers the separability of sets of objects under different kinds of motions and various definitions of separation. This paper surveys this new area of research in a tutorial fashion, presents new results, and provides a list of open problems and suggestions for further research. Key Words and Phrases: sofa problem, polygons, polyhedra, movable separability, visibility hulls, hidden lines, hidden surfaces, algorithms, complexity, computational geometry, spatial planning, collision avoidance, robotics, artificial intelligence. CR Categories: 3.36, 3.63, 5.25, 5.32, 5.5. * Research supported by NSERC Grant no. A9293 and FCAR Grant no. EQ1678. - Discrete Comput. Geom , 1993 "... A plane geometric graph C in R² conforms to another such graph G if each edge of G is the union of some edges of C. It is proved that for every G with n vertices and m edges, there is a completion of a Delaunay triangulation of O(m²n) points that conforms to G. The algorithm that construct ..." Cited by 32 (6 self) Add to MetaCart A plane geometric graph C in R² conforms to another such graph G if each edge of G is the union of some edges of C. It is proved that for every G with n vertices and m edges, there is a completion of a Delaunay triangulation of O(m²n) points that conforms to G. The algorithm that constructs the points is also described. Keywords. Discrete and computational geometry, plane geometric graphs, Delaunay triangulations, point placement.
Appear in: Discrete & Computational Geometry, 10 (2), 197--213 (1993). 1 Research of the first author is supported by the National Science Foundation under grant CCR-8921421 and under the Alan T. Waterman award, grant CCR-9118874. Any opinions, findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the National Science Foundation. Work of the second author was conducted while he was on study leave at the University of Illinois. 2 Department of Computer Scienc... - SIAM J. Comput , 1991 "... Abstract. We show that a triangulation of a set of n points in the plane that minimizes the maximum edge length can be computed in time O(n²). The algorithm is reasonably easy to implement and is based on the theorem that there is a triangulation with minmax edge length that contains the relative n ..." Cited by 28 (3 self) Add to MetaCart Abstract. We show that a triangulation of a set of n points in the plane that minimizes the maximum edge length can be computed in time O(n²). The algorithm is reasonably easy to implement and is based on the theorem that there is a triangulation with minmax edge length that contains the relative neighborhood graph of the points as a subgraph. With minor modifications the algorithm works for arbitrary normed metrics. Key words. Computational geometry, point sets, triangulations, two dimensions, minmax edge length, normed metrics. AMS(MOS) subject classifications. 68U05, 68Q25, 65D05. Appear in: SIAM Journal on Computing, 22 (3), 527–551 (1993).
Cited by 24 (2 self) Add to MetaCart We present a method of decomposing a simple polygon that allows the preprocessing of the polygon to efficiently answer visibility queries of various forms in an output sensitive manner. Using O(n3 log n) preprocessing time and O(n3) space, we can, given a query point q inside or outside an n vertex polygon, recover the visibility polygon of q in O(log n + k) time, where k is the size of the visibility polygon, and recover the number of vertices visible from q in O(log n) time. The key notion , 1995 "... We present an algorithm for the two-dimensional translational containment problem: find translations for k polygons (with up to m vertices each) which place them inside a polygonal container (with n vertices) without overlapping. The polygons and container may be nonconvex. The containment algorit ..." Cited by 24 (13 self) Add to MetaCart We present an algorithm for the two-dimensional translational containment problem: find translations for k polygons (with up to m vertices each) which place them inside a polygonal container (with n vertices) without overlapping. The polygons and container may be nonconvex. The containment algorithm consists of new algorithms for restriction, evaluation, and subdivision of two-dimensional configuration spaces. The restriction and evaluation algorithms both depend heavily on linear programming; hence we call our algorithm an LP containment algorithm. Our LP containment algorithm is distinguished from previous containment algorithms by the way in which it applies principles of mathematical programming and also by its tight coupling of the evaluation and subdivision algorithms. Our new evaluation algorithm finds a local overlap minimum. Our distance-based subdivision algorithm eliminates a "false" (local but not global) overlap minimum and all layouts near that overlap minimum, a... , 1995 "... 
Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P!=NP. Such problems appear in industries such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construct ..." Cited by 13 (6 self) Add to MetaCart Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P!=NP. Such problems appear in industries such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construction. At their core, layout and packing problems have the common geometric feasibility problem of containment: find a way of placing a set of items into a container. In this thesis, we focus on containment and its applications to layout and packing problems. We demonstrate that, although containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3) use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical. Our approach to containment is based on a new , 2004 "... This papers introduces the stealth tracking problem, in which a robot, equipped with visual sensors, tries to track a moving target among obstacles and, at the same time, remain hidden from the target. Obstacles impede both the tracker's motion and visibility, but on the other hand, provide hiding p ..." Cited by 12 (2 self) Add to MetaCart This papers introduces the stealth tracking problem, in which a robot, equipped with visual sensors, tries to track a moving target among obstacles and, at the same time, remain hidden from the target. 
Obstacles impede both the tracker's motion and visibility, but on the other hand, provide hiding places for the tracker. Our proposed tracking algorithm uses only local information from the tracker's visual sensors and assumes no prior knowledge of target motion or a global map of the environment. It applies a local greedy strategy, and chooses the best move at each time step by minimizing the target's escape risk, subject to the stealth constraint. The algorithm is efficient, taking O(n) time at each step, where n is the complexity of the tracker's visibility region. Simulation experiments show that the algorithm performs well in difficult environments.
[FOM] Re: Why the definition of "large cardinal axiom" matters Roger Bishop Jones rbj01 at rbjones.com Fri Apr 30 06:15:33 EDT 2004 On Wednesday 21 April 2004 7:16 pm, JoeShipman at aol.com wrote: > The interesting thing about large cardinal axioms, > empirically, is that they fall into a hierarchy. They are all > compatible with each other, and are naturally ordered in 2 > compatible but distinct ways: consistency strength and > cardinality. Do you think the claim that the orderings are compatible can be proven using your definition of "large cardinal axiom"? On the basis of your proposed definition I would have thought that any large cardinal axiom, no matter what size the cardinal it asserts, could be modified to have arbitrarily high consistency strength without increasing the size of the cardinal asserted to exist, by adding a statement of arithmetic (e.g. con(ZFC + stronger large cardinal axiom)). If you agree this is the case, can you think of any way of fixing the definition so as to make your claim above that the two orderings are compatible provable? Roger Jones - rbj01 at rbjones.com plain text email please (non-executable attachments are OK) More information about the FOM mailing list
Placement Test Contents of the Mathematics Placement Test Since 1978, UW system faculty and Wisconsin high school teachers have been collaborating to develop a test for placing incoming students into college Mathematics courses. The current test includes four components: elementary algebra, intermediate algebra, college algebra, and trigonometry. These components are available for each UW System campus to use according to its individual needs and resources. Each campus determines the appropriate scores for entry into specific courses. The purpose of this brochure is to introduce you to the test, describe the rationale behind its creation, and outline future plans for its continued development. Follow this Link for a Practice Mathematics Placement Test Background and Purpose of the Test In 1978, following the publication of the UW System Basic Skills Task Force Report, members of Mathematics Department Faculties from UW System institutions met in Madison to discuss common entry-level curriculum problems. One problem most departments shared was how to effectively place incoming freshmen into an appropriate mathematics course. Placement procedures and tests varied from campus to campus and it seemed that some consistency was desirable. The decision was made to develop a System-wide test for placement into an introductory mathematics curriculum. The committee that would begin this task would consist of representatives from any UW System Mathematics Department that chose to participate. The departments would select their representatives and participation would be strictly voluntary. The first step in the project was to carefully analyze each of the individual curricula in the System and write a detailed set of prerequisite objectives for all courses prior to calculus. After this list was approved by all campuses, the committee began developing test items on the skills identified in their test objectives. 
Through a series of pilot administrations at area high schools and UW campuses, the committee obtained valuable information about how the individual items performed, and gained important insights into item writing strategies and techniques. Many items were refined or corrected, as necessary, and repiloted in an effort to improve their ability to distinguish between students with different levels of mathematical preparedness. After a sufficient number of high quality items had been developed, they were assembled into a complete test. The first operational form of the Mathematics Placement Test was administered in 1984. Placement into college courses is the sole purpose of this test. The experienced teacher will quickly realize that many skills which are taught in the high school mathematics courses are not included in the test. This was by design, as the test is a tool to assist advisors in placing students into the best course in the university-level mathematics sequence. The questions on the test were specifically selected with this single purpose in mind. This means the test is not a measure of everything that is learned in high school Mathematics courses. The test was not designed to measure program success or to compare students from one high school with students from another. It should be viewed only as a tool to be used for placing students at the university level. As a placement instrument, the test has to be easy enough to identify those students needing remedial help, yet it also has to be complex enough to identify those students who are ready for calculus. Scores have to be precise enough to allow placement into many different levels of university coursework. In addition, the test has to be efficient to score, since thousands of students each year need to have their results promptly reported. In order to meet these criteria, the test development committee selected a multiple-choice format. 
The items measure four different areas of mathematical competence: elementary algebra, intermediate algebra, college algebra, and trigonometry. Each skill area has a different set of detailed objectives, carefully developed to best match university mathematics curricula across the University of Wisconsin System. Every year, a new form of the Mathematics Placement Test is published, along with some new pilot items for each component of the test, and administered to all incoming Freshmen in the UW System. All items are subjected to a statistical review to identify which items effectively distinguish the students with the strongest mathematics skills or the weakest mathematics skills from the general population of students. Only the items which are most useful for differentiating among students are ever considered for use on a future form of the test. Although faculty cannot be considered disinterested observers, those who are familiar with the placement test feel that its quality is extremely high. The feeling among faculty at the participating UW institutions is that the test has helped enormously in placing students into appropriate courses. One of the strengths of the Mathematics Placement Test is that it is developed by faculty from throughout the University of Wisconsin System. Therefore, this test represents a UW System perspective with respect to the underlying skills that are necessary for success in our courses. One of the challenges, in this regard, is that the UW System campuses do not have a single Mathematics curriculum. Instead, each campus has its own curriculum and its own courses, which may or may not correspond well with courses on other UW campuses. To ensure that the placement test works well on each UW campus, each UW institution determines its own cutscores so as to optimize placement into its own Mathematics course sequence. 
Consequently, cutscores will vary from campus to campus as a result of curricular differences and student population differences. Also, on many campuses, the placement test is but one of several variables used for placing students, often also including ACT/SAT score, units of high school mathematics, and grades in high school mathematics courses. The ability of this test to appropriately place students into courses rests in the quality of the match between test content and the institutional curricula at each UW campus. To ensure that the test mirrors the curriculum in introductory mathematics courses throughout UW System, the Mathematics Placement Test Development Committee has expanded to include one representative from most of the 14 UW institutions, as well as one Wisconsin high school mathematics teacher. This committee generally convenes twice each year to write and revise test items and discuss issues pertaining to test content and university curricula.

Recent Developments

Over the past few years, the Mathematics Placement Test has changed in several very important ways. Prior to the fall of 2000, the different campuses were combining items in different ways to form placement scores and make placement decisions. The particular set of items used by any one campus was selected to best match that campus' introductory curriculum. However, because campuses combined the test items in different ways, students who transferred from one UW institution to another often found themselves without valid placement scores and needing to retake the test. In 2000, the Mathematics Placement Test Development Committee agreed upon a common method for combining test objectives to create a uniform set of scores for reporting performance on the placement test. The three agreed-upon scores, one corresponding to each new objective, are labeled mathematics basics, algebra, and trigonometry. These three scores are now used by all System institutions.
Historically, the Mathematics Placement Test has always been organized in three different sections, labeled A, B, and C. Students were instructed to take either sections A and B (which contained elementary and intermediate algebra) or sections B and C (which contained intermediate and college algebra and trigonometry), depending upon their mathematics background and desired college placement. However, our experience was that many students found it difficult to accurately assess their level of preparedness and would often take the wrong two sections. On most campuses, students who scored too high on the AB test or too low on the BC test could not be placed accurately and were required to retest with the appropriate two sections of the test. This was very inconvenient, both for the students and for the university's testing office. Beginning in the fall of 2002, the three different sections were shortened and combined, and all students were asked to complete the entire test. This has helped eliminate the confusion over which two sections to take and has reduced the number of students needing to retest.

General Characteristics of the Test

1. All items are to be completed by all students. The items are roughly ordered from elementary to advanced. The expectation is that less prepared students will answer fewer questions correctly than more prepared students.
2. The test consists entirely of multiple choice questions, each with five choices.
3. The test is scored as the number of correct answers, with no penalty for guessing. Each item has only one acceptable answer. This number-correct score is converted to a standard score between 150 and 850 for the purposes of score reporting.
4. The Mathematics Placement Test is designed as a test of skill and not speed. Ample time is allowed for most students to answer all questions. Ninety (90) minutes are allowed to complete the test.
5. The mathematics basics component has a reliability of .85. The algebra component has a reliability of .90. The trigonometry component has a reliability of .85. For all three sections, items are selected of appropriate difficulty to provide useful information within the range of scores used for placement throughout the System campuses.

Test Description

The Mathematics Test Development Committee decided on three broad categories of items: mathematics basics, algebra, and trigonometry. The entire Mathematics Placement Test is designed to be completed in 90 minutes, sufficient time for most students to complete the test. Items for each of the three components are selected to conform to a carefully created set of detailed objectives. The percentages of items selected from each component are shown in Table 1 below.

Table 1. Blueprint for the UW Mathematics Placement Test (percentage of scale in parentheses)

Mathematics Basics (20 items)
  Arithmetic
    1. Integer Arithmetic (5.0)
    2. Rational and Decimal Arithmetic (20.0)
  Algebra
    1. Introductory Skills (15.0)
    2. Factoring and Simplifying Algebraic Expressions (15.0)
    3. Rules Using Positive Integer Exponents and Solving Linear and Literal Equations (20.0)
  Geometry
    1. Introductory One-Dimensional Problems (15.0)
    2. Solids and More Complex One-Dimensional Problems (10.0)

Algebra (35 items)
  Algebra
    1. Graphs and Systems of Linear Equations (10.0)
    2. Simplifying Expressions (15.0)
       A. Operations on Polynomials
       B. Rational Expressions
       C. Negative Exponents
    3. Quadratics (15.0)
       A. Factoring Quadratics
       B. Quadratic/Rational Equations
  Geometry
    1. Geometric Relationships (15.0)
       A. Right Triangle Relationships
       B. Parallel and Perpendicular Lines
       C. Distance
       D. Area and Similar Figures
    2. Circles and Other Conics (5.0)
  Advanced Algebra
    1. Radicals and Fractional Exponents (6.0)
    2. Absolute Value and Inequalities (3.0)
    3. Functions (15.0)
       A. Functional Definition, Notation, and Interpretation
       B. Algebra of Functions
       C. Rational Functions
       D. Inverse Functions
    4. Exponentials and Logarithms (10.0)
       A. Exponential and Logarithmic Functions
       B. Applications
    5. Complex Numbers and Theory of Equations (3.0)
    6. Applications (3.0)
       A. Word Problems

Trigonometry
  Trigonometry
    1. Basic Trigonometry Definitions (30.0)
    2. Identities (20.0)
    3. Triangles (10.0)
    4. Graphs (15.0)
  Geometry
    1. Circles (10.0)
    2. Triangles (10.0)
    3. Parallel/Perpendicular Lines (5.0)

Note: The following sample items are scanned images and as such, do not have the clarity that the items have when printed in test booklets.

Sample Items from the Mathematics Basics Component

Sample Arithmetic Items

1. The greatest common divisor of 20 and 36 is
   a. 180   b. 108   c. 56   *d. 4   e. 2

2. 2¼ yards is
   a. 27 in.   b. 36 in.   c. 72 in.   *d. 81 in.   e. 96 in.

Sample Algebra Items

3. One factor of 3x^2 - 6x + 9 is
   *a. x^2 - 2x + 3   b. x^2 - 6x + 9   c. x^2 - 2x + 9   d. x + 3   e. None of these

4. (p^x)^y =
   a. p^(x+y)   b. pxy   c. yp^x   *d. p^(xy)   e. None of these

The remaining sample items (geometry items for the basics component; algebra, geometry, and advanced algebra items for the algebra component; and trigonometry and geometry items for the trigonometry component) are scanned images and are not reproduced here.

Additional Statements About High School Preparation for College Mathematics Study

The number of high schools offering some version of calculus has increased markedly since the UW System Math Test Committee's first statement of objectives and philosophy, and experience with these courses has shown the validity of the Committee's original position. This position was that a high school calculus program may work either to the advantage or to the disadvantage of students depending on the nature of the students and the program. Today, it seems necessary to mention the negative possibilities first. A high school calculus program not designed to generate college calculus credit is likely to mathematically disadvantage students who go on to college. This is true for all such students whose college program entails use of mathematics skills, and particularly true of students whose college program involves calculus.
High school programs of this type tend to be associated with curtailed or superficial preparation at the precalculus level, and their students tend to have algebra deficiencies which hamper them not only in mathematics courses but in other courses in which mathematics is used.

The positive side is that a well-conceived high school calculus course which generates college calculus credit for its successful students will provide a mathematical advantage to students who go on to college. A study by the Mathematical Association of America identified the following features of successful high school calculus programs:

1. They are open only to interested students who have completed the standard four year college preparatory sequence. A choice of mathematics options is available to students who have completed this sequence at the start of their senior year.
2. They are full year courses taught at the college level in terms of text, syllabus, depth and rigor.
3. Their instructors have had good mathematical preparation (e.g. at least one semester of junior/senior level real analysis) and are provided with additional preparation time.
4. Instructors expect that their successful graduates will not repeat the course in college, but will get college credit for it.

A variety of special arrangements exist whereby successful graduates of a high school calculus course may obtain credit at one or another college. A generally accepted method is for the students to take the Advanced Placement Examinations of the College Board. Success rates of students on this exam can be a good tool for evaluation of the success of a high school calculus course. The range of geometry objectives in this document represents a small portion of the objectives of the traditional high school geometry course. The algebra objectives represent a substantial portion of the objectives of traditional high school algebra courses.
The imbalance of test objectives can be explained in part by the nature of the entry level mathematics courses available at most colleges. The first college mathematics course generally will be either calculus or some level of algebra. A choice is usually based on three factors: (1) high school background; (2) placement test results; (3) curricular objectives. One reason for the emphasis on algebra in this document and on the test is that virtually all college placement decisions involve placement into a course which is more algebraic than geometric in character. Still, there are reasons for maintaining a geometry course as an essential component in a college preparatory program. Since there are no entry level courses in geometry at the college level, it is essential that students master geometry objectives while in high school. High school geometry contributes to a level of mathematical maturity which is important for success in college. Students should have the ability to use logic within a mathematical context, rather than the ability to do symbolic logic. The elements of logic which are particularly important include:

1. Use of the connectives "and" and "or" plus the "negation" of resultant statements, and recognition of the attendant relationship with the set operations "intersection," "union," and "complement."
2. Interpretation of conditional statements of the form "if P then Q," including the recognition of converse and contrapositive.
3. Recognition that a general statement cannot be established by checking specific instances (unless the domain is finite), but that a general statement can be disproved by finding a single counterexample. This should not discourage students from trying specific instances of a general statement to conjecture about its truth value.

Moreover, logical thinking or logical reasoning as a method should permeate the entire curriculum. In this sense, logic cannot be restricted to a single topic or emphasized only in proof-based courses.
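Points 2 and 3 above can be checked exhaustively by truth table: a conditional is always equivalent to its contrapositive, while a single counterexample shows it is not equivalent to its converse. A short Python sketch (our illustration, not part of the brochure):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # "if P then Q" is false only when P is true and Q is false
    return (not p) or q

for p, q in product([True, False], repeat=2):
    conditional = implies(p, q)          # if P then Q
    converse = implies(q, p)             # if Q then P
    contrapositive = implies(not q, not p)  # if not Q then not P
    # A conditional and its contrapositive agree on every row:
    assert conditional == contrapositive

# The converse is NOT equivalent: P = False, Q = True is a counterexample,
# which is itself an instance of point 3 (one counterexample disproves a
# general claim).
assert implies(False, True) and not implies(True, False)
print("conditional ≡ contrapositive; conditional ≢ converse")
```

Note that checking all four truth-value rows is legitimate here precisely because the domain is finite, as point 3 observes.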
Logical reasoning should be explicitly taught and practiced in the context of all topics. From this, students should learn that forgotten formulas can be recovered by reasoning from basic principles, and that unfamiliar or complex problems can be solved in a similar way. Although only two of the objectives explicitly refer to logic, the importance of logical thinking as a curriculum goal is not diminished. This goal, as well as other broad-based goals, is to be pursued despite the fact that it is not readily measured on placement tests. Problem solving involves the definition and analysis of a problem together with the selecting and combining of mathematical ideas leading to a solution. Ideally, a complete set of problem solving skills would appear in the list of objectives. The fact that only a few problem solving objectives appear in the list does not diminish the importance of problem solving in the high school curriculum. The limitations of the multiple choice format preclude the testing of higher level problem solving skills. Mathematics is a basic skill of equal importance with reading, writing, and speaking. If basic skills are to be considered important and mastered by students, they must be encouraged and reinforced throughout the curriculum. Support for mathematics in other subject areas should include:

– a positive attitude toward mathematics
– attention to correct reasoning and the principles of logic
– use of quantitative skills
– application of mathematics across the curriculum

The impact of the computer on daily life is apparent, and consequently many high schools have instituted courses dealing with computer skills. While the learning of computer skills is important, computer courses should not be construed as replacements for mathematics courses.
There are occasions in college math courses when calculators are useful or even necessary (for example, to find values of trig functions), so students should be able to use calculators at a level consistent with the level at which they are studying mathematics (four-function calculators initially, scientific calculators in pre-calculus). A more compelling reason for being able to use calculators is that they will be needed in other courses involving applications of mathematics. The ability to use a calculator is very definitely a part of college preparation. On the other hand, students need to be able to rapidly supply from their heads – whether by calculation or from memory – basic arithmetic, in order to be able to follow mathematical explanations. They should also know the conventional priority of arithmetic operations and be able to deal with grouping symbols in their heads. For example, students should know that (-3)^2 is 9, that -3^2 is -9, and that (-3)^3 is -27 without needing to push buttons on their calculators. Moreover, students should be able to do enough mental estimation to check whether the results obtained via calculator are approximately correct. Beginning in the spring of 1991, the use of scientific calculators has been allowed on the UW Mathematics Placement Test. The test was redesigned to accommodate the use of scientific calculators, so as to minimize the effects on placement due to the use or nonuse of calculators. Exact numbers, such as π, continue to appear in both questions and answers where appropriate. Use of scientific, non-graphing calculators is optional. Each student is advised to use or not use a calculator in a manner consistent with his or her prior classroom experience. Calculators will not be supplied at the test sites. Mathematics curricula and faculty throughout UW are divided on whether or not to permit graphing calculators in classrooms. There remain many college-level courses for which graphing calculators are not allowed.
Therefore, the placement test has not been revised to accommodate the use of graphing calculators. Students may not use graphing calculators for the Mathematics Placement Test. Although university curricula are somewhat in a state of flux, with many basic issues and philosophies being examined, the normal entry level courses in mathematics remain the traditional algebra and calculus courses. Therefore, the placement tests must reflect those skills which are necessary for success in these courses. This is not intended to imply that courses stressing topics other than algebra and geometry are not vital to the high school mathematics curriculum, but rather that those topics do not assist in placing students in the traditional university entry level courses. Probability and statistics are topics of value in the mathematical training of young people today that are not reflected on the placement test. It is the Committee's feeling that these topics are important to the elementary and secondary curriculum. They are gaining significance on university campuses, both within mathematics departments and within those departments not normally thought of as being quantitative in nature. The social sciences are seeking mathematical models to apply, and in general these models tend to be probabilistic or statistical. As a result, the curriculum in these areas is becoming heavily permeated with probability and statistics. Mathematics departments are finding many of their graduates going into jobs utilizing computer science or statistics. Consequently, their curricula are beginning to reflect these trends. The Committee urges the educational community to develop and maintain meaningful instruction in probability and statistics.

How Teachers Can Help Students Prepare for the Test

The best way to prepare students for the placement tests is to offer a solid mathematics curriculum and to encourage students to take four years of college preparatory mathematics.
We do not advise any special test preparation, as we have found that students who are prepared specifically for this test, either by practice sessions or the use of supplementary materials, score artificially high. Often such students are placed into a higher level course than their background dictates, resulting in these students either failing or being forced to drop the course. Due to enrollment difficulties on many campuses, students are unable to transfer into a more appropriate course after the semester has begun. Significant factors in the placement level of a student are the high school courses taken as well as whether or not mathematics was taken in the senior year. Data indicate that four years of college preparatory mathematics in high school not only raises the entry level mathematics course, but predicts success in other areas as well, including the ability to graduate from college in four years. Teachers should certainly feel free to encourage students to be well-rested and try to remain as relaxed as possible during the test. We intend that the experience be an enjoyable, yet challenging one. Remember that the test is designed to measure students at many different levels of mathematical preparedness; all students are not expected to answer all items correctly. There is no penalty for guessing, and intelligent guessing will most likely help students achieve a higher score.

Use of the Tests

When the UW System Mathematics Placement Tests were developed, they were written to be used strictly as a tool to aid in the most appropriate placement of students. They were not designed to compare students, to evaluate high schools or to dictate curriculum. The way an institution chooses to use the test to place students is a decision made by each institution. The Center for Placement Testing can and does help institutions with these decisions.
Each campus will continue to analyze and modify its curriculum and hence will continue to modify the way in which it uses the placement tests to place students. Cutoff scores might need to be changed over time to reflect the prerequisites for a campus’ curriculum. It is also important for follow-up studies to be made to determine the effectiveness of the placement procedures. Contact must be maintained with the high schools so that modifications in the curriculum in both the high schools and in the UW System can be discussed.
measurement to all angles

December 8th 2010, 08:12 AM

Attachment 20025

In the figure at right, if DC || FG || AB, EA ≅ EB, and m∠1 = 28º, find the measures of all numbered angles.

Help please?

December 8th 2010, 08:49 AM

> In the figure at right, if DC || FG || AB, EA ≅ EB, and m∠1 = 28º, find the measures of all numbered angles. Help please?

Hi tysonrss,

What have you tried so far? This is pretty straightforward if you are familiar with a few geometric postulates and/or theorems.

[1a] When two parallel lines are cut by a transversal, alternate interior angles are congruent.
[1b] When two parallel lines are cut by a transversal, corresponding angles are congruent.
[2] Base angles of an isosceles triangle are congruent.
[3] Vertical angles are congruent.
[4] Two angles that form a linear pair are supplementary.

To get you started, angles 1 and 8 are congruent by [1a]. Angles 7 and 8 are congruent by [2]. Angles 7 and 2 are congruent by [1a].

Can you determine the rest?
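The chain of congruences stated in the reply pins down several angles immediately; a small Python sketch (ours, encoding only the steps given in the reply — the remaining angles depend on the attached figure, which is not reproduced here):

```python
# Given m∠1 = 28°, follow the congruences from the reply:
#   ∠8 ≅ ∠1 by [1a] (alternate interior angles)
#   ∠7 ≅ ∠8 by [2]  (base angles of an isosceles triangle, since EA ≅ EB)
#   ∠2 ≅ ∠7 by [1a] (alternate interior angles)
angle = {1: 28}
angle[8] = angle[1]
angle[7] = angle[8]
angle[2] = angle[7]
print(angle)  # -> {1: 28, 8: 28, 7: 28, 2: 28}
```

So angles 1, 2, 7, and 8 all measure 28°; the rest follow from the figure using [3] and [4].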
Patent application title: TRANSMITTER APPARATUS, RECEIVER APPARATUS, COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND INTEGRATED CIRCUIT

A transmitter apparatus to transmit to a plurality of receiver apparatuses a data signal at the same timing and on the same frequency includes a transmitter unit that transmits a demodulation reference signal addressed to a first receiver apparatus and a demodulation reference signal addressed to a second receiver apparatus, different from the first receiver apparatus, at the same timing and on the same frequency.

A transmitter apparatus to transmit to a plurality of receiver apparatuses a data signal at the same timing and on the same frequency, comprising: a transmitter unit that transmits a demodulation reference signal addressed to a first receiver apparatus and a demodulation reference signal addressed to a second receiver apparatus different from the first receiver apparatus at the same timing and on the same frequency; and a non-linear precoder that performs a non-linear precoding operation on the demodulation reference signal, wherein the transmitter unit transmits a plurality of data signals at the same timing and on the same frequency, and wherein the non-linear precoder performs the same non-linear precoding operation on the demodulation reference signal and the data signal and normalizes the demodulation reference signal and the data signal with the same power.

The transmitter apparatus according to claim 19, comprising a DMRS corrector to correct the demodulation reference signal.

The transmitter apparatus according to claim 20, wherein the DMRS corrector comprises a two-dimensional Euclidean algorithm unit that performs a two-dimensional Euclidean algorithm operation on the demodulation reference signal.
The transmitter apparatus according to claim 21, wherein the two-dimensional Euclidean algorithm unit comprises a difference vector calculator that subtracts from the first demodulation reference signal the second demodulation reference signal, a signal that results from rotating the second demodulation reference signal in phase by 90 degrees, a signal that results from rotating the second demodulation reference signal in phase by 180 degrees, and a signal that results from rotating the second demodulation reference signal in phase by 270 degrees.

A receiver apparatus comprising: a perturbation vector adder that adds to a demodulation reference signal a signal that is an integer multiple of a predetermined width; and an interim channel estimator that estimates a channel in accordance with the demodulation reference signal with the signal added thereto.

The receiver apparatus according to claim 23, comprising a perturbation vector candidate selector that selects a plurality of different pieces of the signal and a perturbation vector estimator that selects one of the signals in accordance with a channel estimation result estimated by the interim channel estimator that estimates the channel using each of the signals.

The receiver apparatus according to claim 23, comprising: a demodulator that calculates a logarithmic likelihood ratio by soft-estimating a data signal of each of the channel estimate results corresponding to the plurality of different pieces of the signal, and a perturbation vector evaluation value calculator that calculates a variance of each logarithmic likelihood ratio, wherein the perturbation vector estimator selects the signal corresponding to the largest one of the variances.

The receiver apparatus according to claim 23, comprising a two-dimensional Euclidean algorithm unit that applies the two-dimensional Euclidean algorithm to a plurality of demodulation reference signals to calculate an irreducible vector.
The receiver apparatus according to claim 23, comprising a complex gain calculator that calculates a complex gain of a channel using the irreducible vector.

A reception method comprising the steps of: adding to a demodulation reference signal a signal that is an integer multiple of a predetermined width; estimating a channel in accordance with the demodulation reference signal with the signal added thereto; calculating a logarithmic likelihood ratio by soft-estimating a data signal of each of the channel estimate results corresponding to a plurality of different pieces of the signal; calculating a variance of each logarithmic likelihood ratio; and selecting the signal corresponding to the largest one of the variances.

TECHNICAL FIELD

[0001] The present invention relates to a transmitter apparatus, a receiver apparatus, a communication system, a communication method, and an integrated circuit.

BACKGROUND ART

MIMO

[0002] Many studies have been made to increase frequency efficiency to increase the speed of wireless data communication within a limited frequency band. Among them, the MIMO (Multi-Input Multi-Output) technique to increase transmission capacity per unit frequency with a plurality of antennas used concurrently gains attention. MIMO includes Single-User MIMO (SU-MIMO), in which a base-station (BS) apparatus transmits a plurality of signals to a single mobile-station (MS) device at the same timing and on the same frequency, and Multi-User MIMO (MU-MIMO), in which a base station transmits a signal to different mobile-station devices at the same timing and on the same frequency. Since SU-MIMO is unable to multiplex more streams than the number of antennas of a mobile-station device, the maximum number of streams is limited by the number of physical antennas of the mobile-station device.
On the other hand, since a base-station apparatus can have more antennas than a mobile-station device, MU-MIMO becomes necessary in order to make the most of otherwise idle antennas of the base-station apparatus. Specifications of downlink (DL) MU-MIMO using linear precoding (LP) have been formulated in LTE (Long Term Evolution) and LTE-Advanced (see Non Patent Literature 1 below). In MU-MIMO based on LP (LP MU-MIMO), a base-station apparatus orthogonalizes the transmit signals by multiplying them by a linear filter, and thus removes Multi-User Interference (MUI) between mobile-station devices. This, however, reduces the flexibility of combinations of mobile-station devices that can be spatially multiplexed. On the other hand, Nonlinear Precoding (NLP) MU-MIMO is disclosed as another method to implement spatial multiplexing. In NLP MU-MIMO, a mobile-station device performs a modulo operation to treat, as the same point, points to which a received signal is shifted in parallel by an integer multiple of a constant width (the modulo width) in the directions of the in-phase channel (I-ch) and the quadrature channel (Q-ch). In this way, the base-station apparatus can add to a modulation signal a signal of any integer multiple of the modulo width (a perturbation vector), and reduces transmission power by appropriately selecting the perturbation vector and adding the selected perturbation vector to the signal addressed to each mobile-station device (see Non Patent Literature 2 below). Since the mobile-station device performs the modulo operation on the received signal, the base-station apparatus has the freedom of adding to each modulation signal a signal of any integer multiple of the modulo width; the signal that can be added is referred to as a perturbation vector.
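The transmit-power reduction through perturbation selection described above can be illustrated with a brute-force sketch. This is a minimal sketch under stated assumptions: a precomputed linear filter W (a zero-forcing pseudo-inverse in the usage example) and a small Gaussian-integer search radius; the function name `vp_search` and its parameters are mine, not from the cited literature, and a practical encoder would use a sphere search rather than exhaustive enumeration.

```python
import itertools

import numpy as np


def vp_search(W, s, tau, radius=1):
    """Exhaustively search Gaussian-integer perturbation vectors l and
    return the one minimizing the transmit power ||W (s + tau*l)||^2."""
    n = s.shape[0]
    ints = range(-radius, radius + 1)
    best_l, best_power = None, np.inf
    # Enumerate the real and imaginary integer parts of every entry of l.
    for re in itertools.product(ints, repeat=n):
        for im in itertools.product(ints, repeat=n):
            l = np.array(re) + 1j * np.array(im)
            power = np.linalg.norm(W @ (s + tau * l)) ** 2
            if power < best_power:
                best_power, best_l = power, l
    return best_l, best_power
```

Because l = 0 is always among the candidates, the selected perturbation can never increase the transmit power relative to plain linear precoding.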
VP (Vector Perturbation) MU-MIMO is a method of searching for the perturbation vector that most improves power efficiency, in view of the channel states of all spatially multiplexed mobile-station devices. In VP MU-MIMO, the base-station apparatus bears a large amount of calculation, but VP MU-MIMO is an NLP MU-MIMO scheme that provides excellent characteristics with full transmit diversity gain (see Non Patent Literature 2 below). THP (Tomlinson-Harashima Precoding) MU-MIMO is another available method, different from VP MU-MIMO. THP MU-MIMO successively calculates the perturbation vector to be added to the signal addressed to each mobile-station device, in view of the multi-user interference that each mobile-station device has suffered. In THP MU-MIMO, the complexity of the transmission process of the base-station apparatus is low, but not all mobile-station devices may obtain full transmit diversity (see Non Patent Literature 3 below). LR-THP is THP MU-MIMO with a process called lattice reduction (LR) added thereto. LR-THP provides the full transmit diversity gain with a lower amount of calculation than that of VP MU-MIMO (see Non Patent Literature 3 below).

(DMRS)

[0010] In an NLP MU-MIMO system, the base-station apparatus needs to transmit a DMRS (DeModulation Reference Signal) to each mobile-station device. The DMRS is a signal that the base-station apparatus uses to notify each mobile-station device in advance of the amplitude and phase of the data signal on which the base-station apparatus has performed the precoding operation through NLP MU-MIMO. However, if the base-station apparatus performs on the DMRS the same non-linear precoding operation as on the data signal and then transmits the DMRS, the mobile-station device is unable to estimate the channel.
If a non-linear precoding operation is performed on the DMRS, the base-station apparatus adds a perturbation vector to the DMRS (or performs the modulo operation on the DMRS) and then transmits the DMRS, so the mobile-station device needs to perform the modulo operation on the DMRS as well. For this reason, the mobile-station device needs to know in advance the modulo width used in the modulo operation. Since the modulo width is proportional to the amplitude of the modulation signal in the received signal, the mobile-station device needs to know the reception gain of the data signal (the complex gain of the channel) subsequent to the non-linear precoding operation. However, the mobile-station device is unable to know the reception gain (the complex gain of the channel) without estimating the channel using the DMRS. More specifically, the mobile-station device is in the situation that "it is unable to acquire the reception gain without estimating the channel using the DMRS, while it is unable to estimate the channel using the DMRS without knowing that reception gain". The above-described problem thus arises. In the technique disclosed in Patent Literature 1, the DMRS is transmitted to each mobile-station device using orthogonal radio resources (regions divided in the time direction and in the frequency direction; if different data signals and different reference signals are assigned to such regions, the regions do not mutually interfere). In that case, the mobile-station device does not need to perform the modulo operation on the DMRS, and obtains the reception gain.

CITATION LIST

Patent Literature

[0013] PTL 1: Japanese Unexamined Patent Application Publication No. 2009-182894

Non Patent Literature

[0014] NPL 1: 3GPP Technical Specification 36.211 v8.9.0

NPL 2: B. M. Hochwald, C. B. Peel, A. L.
Swindlehurst, "A Vector-Perturbation Technique for Near-Capacity Multiantenna Multiuser Communication-Part II: Perturbation," IEEE Transactions on Communications, vol. 53, no. 3, pp. 537-544, March 2005

NPL 3: F. Liu, L. Jiang, C. He, "Low complexity lattice reduction aided MMSE precoding design for MIMO systems," Proceedings of ICC 2007, pp. 2598-2603, June 2007

SUMMARY OF INVENTION

Technical Problem

[0017] In the method of arranging the DMRS addressed to each mobile-station device in orthogonal radio resources as disclosed in PTL 1, radio resources dedicated to the DMRS are needed in proportion to the number of mobile-station devices, and the overhead involved in the insertion of the DMRS increases accordingly. The present invention has been developed in view of the above problem, and it is an object of the present invention to provide a technique for normally estimating the amplitude and phase (the complex gain, i.e., the reception gain) of the data signal in the mobile-station device while minimizing the increase in the overhead involved in the insertion of the DMRS, by spatially multiplexing the DMRSs in NLP MU-MIMO.

Solution to Problem

[0019] According to an aspect of the present invention, there is provided a transmitter apparatus that transmits a data signal to a plurality of receiver apparatuses at the same timing and on the same frequency. The transmitter apparatus includes a transmitter unit that transmits a demodulation reference signal addressed to a first receiver apparatus and a demodulation reference signal addressed to a second receiver apparatus different from the first receiver apparatus at the same timing and on the same frequency. In the transmitter apparatus, the spatially multiplexed demodulation reference signals (DMRSs) are transmitted to all mobile-station apparatuses using a single radio resource, so a plurality of demodulation reference signals are transmitted to all the mobile-station apparatuses while enough radio resources are reserved to arrange the data signal.
Each mobile-station apparatus (hereinafter referred to as a receiver apparatus) may perform a channel estimation operation using the plurality of demodulation reference signals. The present invention thus helps reduce the insertion loss of the demodulation reference signal. The transmitter apparatus preferably includes a non-linear precoder that adds to the demodulation reference signal a signal that is an integer multiple of a predetermined signal. Also, the transmitter apparatus may include a non-linear precoder that performs a non-linear precoding operation on the demodulation reference signal. Preferably, the transmitter unit transmits a plurality of data signals at the same timing and on the same frequency, and the non-linear precoder performs on the demodulation reference signal the same non-linear precoding operation as the non-linear precoding operation performed on the data signal. In this way, the non-linear precoding operation is performed using the same filter as the filter used on the data signal, on the same principle as the principle applied to the data signal. The transmitter apparatus may include a DMRS corrector that corrects the demodulation reference signal. The DMRS corrector preferably includes a two-dimensional Euclidean algorithm unit that performs a two-dimensional Euclidean algorithm operation on the demodulation reference signal. The two-dimensional Euclidean algorithm unit may include a difference vector calculator that subtracts from the first demodulation reference signal the second demodulation reference signal, a signal that results from rotating the second demodulation reference signal in phase by 90 degrees, a signal that results from rotating the second demodulation reference signal in phase by 180 degrees, and a signal that results from rotating the second demodulation reference signal in phase by 270 degrees.
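The difference vector calculator described above reduces one reference signal against the phase rotations of another. The following is a minimal sketch of that elementary step; the function name `difference_vector` is mine, and a full two-dimensional Euclidean algorithm would iterate such steps until no candidate is smaller than the current remainder.

```python
def difference_vector(d1: complex, d2: complex) -> complex:
    """Subtract from d1 each of d2, i*d2, -d2, and -i*d2 (phase rotations
    of d2 by 0, 90, 180, and 270 degrees) and return the difference with
    the smallest magnitude."""
    candidates = [d1 - d2 * (1j ** k) for k in range(4)]
    return min(candidates, key=abs)
```

For example, reducing 3 against 2.5i picks the 270-degree rotation (which maps 2.5i to 2.5) and leaves a remainder of 0.5.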
The present invention relates to a receiver apparatus including a perturbation vector adder that adds to a demodulation reference signal a signal that is an integer multiple of a predetermined width. The receiver apparatus preferably includes an interim channel estimator that estimates a channel in accordance with the demodulation reference signal with the signal added thereto. The receiver apparatus may include a perturbation vector candidate selector that selects a plurality of different pieces of the signal, and a perturbation vector estimator that selects one of the signals in accordance with a channel estimation result estimated by the interim channel estimator that estimates the channel using each of the signals. The receiver apparatus preferably includes a demodulator that calculates a logarithmic likelihood ratio by soft-estimating a data signal of each of the channel estimation results corresponding to the plurality of different pieces of the signal, and a perturbation vector evaluation value calculator that calculates a variance of each logarithmic likelihood ratio. The perturbation vector estimator selects the signal corresponding to the largest one of the variances. The receiver apparatus preferably includes a two-dimensional Euclidean algorithm unit that applies the two-dimensional Euclidean algorithm to a plurality of demodulation reference signals to calculate an irreducible vector. The receiver apparatus preferably includes a complex gain calculator that calculates a complex gain of a channel using the irreducible vector. The present invention also relates to a communication system.
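The variance-based candidate selection described above can be sketched as follows. This is a heavily simplified illustration, not the specified receiver: it assumes BPSK data, a single-tap channel, and the standard AWGN log-likelihood-ratio expression; all names (`select_perturbation` and its parameters) are mine.

```python
import numpy as np


def select_perturbation(dmrs_rx, dmrs_ref, data_rx, candidates, noise_var):
    """For each candidate perturbation p: form an interim channel estimate
    from the perturbed DMRS, channel-compensate the data, compute BPSK
    log-likelihood ratios, and take their variance. Return the candidate
    whose LLRs have the largest variance."""
    best_p, best_var = None, -np.inf
    for p in candidates:
        h = (dmrs_rx + p) / dmrs_ref                     # interim channel estimate
        z = data_rx / h                                  # channel compensation
        llr = 4.0 * np.abs(h) ** 2 * z.real / noise_var  # BPSK LLRs
        v = np.var(llr)
        if v > best_var:
            best_var, best_p = v, p
    return best_p
```

The sketch only shows the selection mechanics; a faithful receiver would also apply the modulo operation before demodulation, which is what makes the LLR variance sensitive to a wrong perturbation choice.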
The communication system includes a transmitter apparatus that includes a non-linear precoder that adds to a demodulation reference signal a signal that is an integer multiple of a predetermined signal, and a transmitter unit that transmits the demodulation reference signal and another signal at the same timing and on the same frequency, and a receiver apparatus that includes a perturbation vector adder that adds to the demodulation reference signal a signal that is an integer multiple of a predetermined width. The present invention also relates to a communication method. The communication method includes a transmission method that includes a step of adding to a demodulation reference signal a signal that is an integer multiple of a predetermined signal, and a step of transmitting the demodulation reference signal and another signal at the same timing and on the same frequency, and a reception method that includes a step of adding to the demodulation reference signal a signal that is an integer multiple of a predetermined width. The present invention also relates to an integrated circuit. One such integrated circuit includes a non-linear precoder that adds to a demodulation reference signal a signal that is an integer multiple of a predetermined signal, and a transmitter unit that transmits the demodulation reference signal and another signal at the same timing and on the same frequency. Another such integrated circuit includes a perturbation vector adder that adds to a demodulation reference signal a signal that is an integer multiple of a predetermined width. The present invention may also include a program that causes a computer to perform the communication method, and a computer-readable recording medium that stores the program. This description contains the contents of the specification and/or the drawings of Japanese Patent Application No. 2011-043755, on which this application is based and from which it claims priority.
BRIEF DESCRIPTION OF DRAWINGS

[0031] FIG. 1 illustrates a concept of a communication system of a first embodiment of the present invention.
FIG. 2 is a sequence chart illustrating an example of an operation of the communication system of the present embodiment.
FIG. 3 is a function block diagram illustrating a general structure of a base-station apparatus of the present embodiment.
FIG. 4 is a schematic diagram illustrating a structure example of a specific signal of the present embodiment.
FIG. 5A is a schematic diagram illustrating a structure example of frames of the present embodiment.
FIG. 5B illustrates a first structure example of frames transmitted via first through N-th antennas a101-n.
FIG. 5C illustrates a second structure example of frames transmitted via the first through N-th antennas a101-n.
FIG. 6 is a function block diagram illustrating a structure example of a mobile-station device of the present embodiment.
FIG. 7 is a function block diagram illustrating a DMRS channel estimator in detail.
FIG. 8 illustrates examples of positions of two DMRSs (DMRS 1 and DMRS 2) indicated on a signal point plane.
FIG. 9 illustrates examples of positions on the signal point plane where an irreducible vector fails to match a signal point of QPSK.
FIG. 10 illustrates a configuration example of a two-dimensional Euclidean algorithm unit.
FIG. 11 illustrates a configuration example of a DMRS corrector.
FIG. 12 illustrates a DMRS channel estimator (a) and a complex gain calculator (b).
FIG. 13 illustrates a configuration example of a non-linear precoder.
FIG. 14 is a flowchart illustrating an operation of the non-linear precoder.
FIG. 15 is a function block diagram illustrating a configuration example of a base-station apparatus of a second embodiment of the present invention.

DESCRIPTION OF EMBODIMENTS

[0048] Embodiments of the present invention are described in detail below with reference to the drawings.
First Embodiment (Variance Basis of Logarithmic Likelihood Ratio)

(Communication System)

FIG. 1 illustrates a concept of a communication system 1 of a first embodiment of the present invention. The communication system 1 includes a base-station apparatus A1 and mobile-station devices B11 through B1N (FIG. 1 illustrates an example in which the base-station apparatus A1 selects a first mobile-station device through a fourth mobile-station device B11, B12, B13, and B14 (N=4)). The base-station apparatus A1 transmits a common reference signal (CRS). Note that the reference signal used as the CRS is stored in advance by both the base-station apparatus A1 and the first through fourth mobile-station devices B11 through B14. Each of the first through fourth mobile-station devices B11 through B14 estimates a channel state based on the CRS transmitted by the base-station apparatus A1, and then notifies the base-station apparatus A1 of channel state information based on the estimated channel state. The base-station apparatus A1 transmits a DMRS and a data signal to the first through fourth mobile-station devices B11 through B14. In this case, the base-station apparatus A1 performs a precoding operation on the DMRS and the data signal, and then transmits the DMRS and data signal resulting from the precoding multiplication. In accordance with the DMRS received from the base-station apparatus A1, the multiplexed first through fourth mobile-station devices B11 through B14 estimate the channel state of an equivalent channel that includes the precoding operation as part thereof (hereinafter referred to as the equivalent channel), and acquire the data signal in accordance with equivalent channel state information indicating the channel state of the estimated equivalent channel. FIG. 2 is a sequence chart illustrating an example of an operation of the communication system 1 of FIG. 1.
(Step S101) The base-station apparatus A1 transmits the CRS to the first through fourth mobile-station devices B11 through B14. Processing proceeds to step S102. (Step S102) The first through fourth mobile-station devices B11 through B14 estimate the channel state in accordance with the CRS transmitted in step S101. Processing proceeds to step S103. (Step S103) The first through fourth mobile-station devices B11 through B14 calculate the channel state information in accordance with the channel state estimated in step S102. Processing proceeds to step S104. (Step S104) The first through fourth mobile-station devices B11 through B14 notify the base-station apparatus A1 of the channel state information calculated in step S103. Processing proceeds to step S105. (Step S105) The base-station apparatus A1 calculates a filter to be used in the non-linear precoding operation in accordance with the channel state information transmitted in step S104, and performs the non-linear precoding operation on the generated DMRS and data signal using the filter, thereby generating the precoded DMRS and data signal. Processing proceeds to step S106. (Step S106) The base-station apparatus A1 transmits the DMRS generated in step S105 to the first through fourth mobile-station devices B11, B12, B13, and B14. Processing proceeds to step S107. (Step S107) The first through fourth mobile-station devices B11 through B14 estimate the equivalent channel in accordance with the DMRS transmitted in step S106. Processing proceeds to step S108. (Step S108) The base-station apparatus A1 transmits the data signal generated in step S105 to each of the first through fourth mobile-station devices B11 through B14.
Processing proceeds to step S109. (Step S109) The first through fourth mobile-station devices B11 through B14 detect and acquire the data signal in accordance with the equivalent channel state information indicating the channel state of the equivalent channel estimated in step S107. In FIGS. 1 and 2, the first through fourth mobile-station devices B11 through B14 are four-multiplexed. In the discussion that follows, N mobile-station devices, i.e., a first mobile-station device through an N-th mobile-station device B11 through B1N, are multiplexed. The mobile-station devices B11 through B1N are collectively referred to as the mobile-station device B1n.

(Base-Station Apparatus A1)

FIG. 3 is a function block diagram illustrating a general structure of the base-station apparatus A1 of the present embodiment. As illustrated in FIG. 3, the base-station apparatus A1 includes a first antenna a101-1 through an N-th antenna a101-N, a first receiver a102-1 through an N-th receiver a102-N, a first GI (Guard Interval) remover a103-1 through an N-th GI remover a103-N, a first FFT (Fast Fourier Transform) unit a104-1 through an N-th FFT unit a104-N, a channel state information acquisition unit a105, a filter calculator a11, a first encoder a121-1 through an N-th encoder a121-N, a first modulator a122-1 through an N-th modulator a122-N, a DMRS generator a124, a specific signal constructor a125, a non-linear precoder a13, a CRS generator a141, a frame constructor a142, a first IFFT (Inverse Fast Fourier Transform) unit a143-1 through an N-th IFFT unit a143-N, a first GI inserter a144-1 through an N-th GI inserter a144-N, and a first transmitter a145-1 through an N-th transmitter a145-N. The base-station apparatus A1 of FIG. 3 includes N antennas, the first antenna a101-1 through the N-th antenna a101-N, and serves as a base-station apparatus with N mobile-station devices multiplexed (N=4 in the example of FIGS. 1 and 2). In the discussion that follows, the base-station apparatus A1 of FIG.
3 uses an orthogonal frequency division multiplexing (OFDM) scheme for the uplink and the downlink, for example, although the present invention is not limited to this. The base-station apparatus A1 may use a time division multiplexing (TDM) scheme or frequency division multiplexing (FDM) on one of the uplink and the downlink or on both. The first through N-th receivers a102-n (n=1, 2, . . . , N) receive the signals transmitted by each mobile-station device B1n (signals on a carrier frequency) via the first through N-th antennas a101-n. The signals include the channel state information. The first through N-th receivers a102-n down-convert the received signals and A/D (analog/digital) convert the down-converted signals, thereby generating baseband digital signals. The first through N-th receivers a102-n output the generated digital signals to the first through N-th GI removers a103-n. The first through N-th GI removers a103-n remove the GI from the digital signals input from the first through N-th receivers a102-n, and then output the signals with the GI removed to the first through N-th FFT units a104-n. The first through N-th FFT units a104-n perform fast Fourier transform operations on the signals input from the first through N-th GI removers a103-n, thereby generating signals in the frequency domain. The first through N-th FFT units a104-n output the generated signals in the frequency domain to the channel state information acquisition unit a105. The channel state information acquisition unit a105 demodulates the signals input from the first through N-th FFT units a104-n, and extracts the channel state information from the demodulated information. The channel state information acquisition unit a105 outputs the extracted channel state information to the filter calculator a11. A signal other than the signal of the channel state information, out of the signals output from the first through N-th FFT units a104-n, is demodulated by a controller (not illustrated).
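The processing of the GI removers a103-n and FFT units a104-n, together with their transmit-side counterparts described later, can be mirrored in a few lines. This is a minimal sketch assuming a cyclic-prefix guard interval with an FFT size of 64 and a GI of 16 samples; neither size is specified in the text.

```python
import numpy as np

N_FFT, N_GI = 64, 16  # illustrative sizes; the text does not specify them


def insert_gi(freq_symbols):
    """Transmit side (IFFT unit / GI inserter): move to the time domain and
    prepend the last N_GI samples as a cyclic-prefix guard interval."""
    block = np.fft.ifft(freq_symbols)
    return np.concatenate([block[-N_GI:], block])


def remove_gi_and_fft(rx_block):
    """Receive side (GI remover / FFT unit): drop the guard interval and
    transform the remaining N_FFT samples back to the frequency domain."""
    return np.fft.fft(rx_block[N_GI : N_GI + N_FFT])
```

On an ideal channel the two functions are exact inverses: removing the guard interval and applying the FFT recovers the original frequency-domain symbols.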
Control information in the demodulated information is used to control the base-station apparatus A1. Data other than the control information in the demodulated signal is transmitted to another base-station apparatus, a server apparatus, and the like. The filter calculator a11 calculates a filter for use in the non-linear precoding operation in accordance with the channel state information input from the channel state information acquisition unit a105. Details of the filter calculation process performed by the filter calculator a11 are described below. The filter calculator a11 inputs the calculated filter to the non-linear precoder a13. The first through N-th encoders a121-n receive the information bits (data) addressed to each mobile-station device B1n (N=4 in the example of FIGS. 1 and 2). The first through N-th encoders a121-n error-correction encode the input information bits, and then output the encoded bits to the first through N-th modulators a122-n. The first through N-th modulators a122-n modulate the encoded bits input from the first through N-th encoders a121-n, thereby generating the data signals addressed to the mobile-station devices B1n. The first through N-th modulators a122-n output the generated data signals to the specific signal constructor a125. The base-station apparatus A1 determines a modulation scheme for use in each of the first through N-th modulators a122-n in accordance with the channel state information, and outputs modulation information indicating the determined modulation scheme to the first through N-th modulators a122-n. The base-station apparatus A1 notifies the first through N-th mobile-station devices B1n of the modulation information. The DMRS generator a124 generates DMRSs to be addressed to the first through N-th mobile-station devices B1n.
The DMRS generator a124 outputs the generated DMRSs to the specific signal constructor a125. The specific signal constructor a125 associates the data signals addressed to the first through N-th mobile-station devices B1n input from the first through N-th modulators a122-n with the DMRSs addressed to the first through N-th mobile-station devices B1n input from the DMRS generator a124. The pieces of information associated by the specific signal constructor a125 and addressed to the first through N-th mobile-station devices B1n are referred to as the specific signals of the first through N-th mobile-station devices B1n. The specific signal constructor a125 outputs to the non-linear precoder a13 the specific signals of the first through N-th mobile-station devices B1n generated through the association operation. The non-linear precoder a13 performs a non-linear precoding operation on the specific signal (the data signal and the DMRS) of each of the first through N-th mobile-station devices B1n input from the specific signal constructor a125. The non-linear precoding operation performed by the non-linear precoder a13 is described in detail below. The non-linear precoder a13 outputs to the frame constructor a142 the specific signals that have been non-linearly precoded. The CRS generator a141 generates the CRS, which includes a reference signal known to the base-station apparatus A1 and the first through N-th mobile-station devices B1n, and then outputs the generated CRS to the frame constructor a142. The frame constructor a142 maps the specific signal input from the non-linear precoder a13 and the CRS input from the CRS generator a141. The frame constructor a142 may map the specific signal and the CRS to the same frame or to different frames. For example, the frame constructor a142 may map only the CRS to a given frame, or may map the CRS and the specific signal to another frame.
Note that the base-station apparatus A1 maps the CRS and the specific signal to a frame in accordance with a predetermined mapping, and that the first through N-th mobile-station devices B1n have knowledge of the mapping in advance. The frame constructor a142 outputs, to the first through N-th IFFT units a143-n, the signals, out of the mapped signals, to be transmitted via the antennas a101-n, on a per-frame basis. The first through N-th IFFT units a143-n perform an IFFT operation on the signals input from the frame constructor a142, thereby generating signals in the time domain. The first through N-th IFFT units a143-n output the generated signals in the time domain to the first through N-th GI inserters a144-n. The first through N-th GI inserters a144-n attach guard intervals to the signals input from the first through N-th IFFT units a143-n, and then output the signals with the guard intervals attached to the first through N-th transmitters a145-n. The first through N-th transmitters a145-n D/A (digital/analog) convert the signals (baseband digital signals) input from the first through N-th GI inserters a144-n, generate signals on a carrier frequency by up-converting the digital-to-analog converted signals, and transmit the generated signals via the first through N-th antennas a101-n. FIG. 4 is a schematic diagram illustrating a structure example of the specific signal of the present embodiment. In FIG. 4, the abscissa represents time. FIG. 4 illustrates the specific signal output by the specific signal constructor a125, with the specific signals of the first through N-th mobile-station devices B1n to be transmitted on the same frequency aligned on the time axis. In FIG. 4, the DMRSs addressed to the first through N-th mobile-station devices B1n are labeled "DMRS-MSn". As illustrated in FIG.
4, the specific signal is provided for each of the first through N-th mobile-station devices B1n and includes the DMRS and the data signal. For example, the signal labeled symbol S11 indicates the DMRS addressed to the mobile-station device B11 (DMRS-MS1). FIG. 4 indicates that the DMRSs addressed to the first through N-th mobile-station devices B1n are output at the same time from the specific signal constructor a125. FIG. 4 also indicates that the data signals addressed to the first through N-th mobile-station devices B1n are likewise output from the specific signal constructor a125 at the same time. As described above, the data signals assigned to the same time and the DMRSs assigned to the same time are spatially multiplexed by the non-linear precoder a13 and then transmitted from the base-station apparatus A1 at the same timing and on the same frequency (i.e., on the same subcarrier of the same OFDM symbol). FIG. 4 also indicates that the DMRS and the data signal are output from the specific signal constructor a125 at different timings. For example, the DMRSs addressed to the first through N-th mobile-station devices B1n are output at time t1 and time t2, and the data signals addressed to the first through N-th mobile-station devices B1n are output from time t3 onward. Note that the structure of the specific signal of FIG. 4 is an example only, and the present invention is not limited to this structure. For example, as illustrated in FIG. 4, the data signal is output (at time t3 and thereafter) after the DMRS addressed to the mobile-station device B1n is output. Alternatively, the specific signal constructor a125 may output the DMRS after outputting the data signal. Also, the specific signal constructor a125 may output the data signal and the DMRS alternately on the time axis, or in another sequential order. FIG. 5A is a schematic diagram illustrating a structure example of frames of the present embodiment. FIG.
5A illustrates the structure of the frames to which the frame constructor a142 maps the signals, aligned along the time axis and transmitted via the first through N-th antennas a101-n. In FIG. 5A, the CRSs transmitted via the first through N-th antennas a101-n are labeled "CRS-Txn". FIG. 5A also illustrates that each of the frames of the first through N-th antennas a101-n includes a CRS and a specific signal (the non-linearly precoded specific signal including the data signal and the DMRS). For example, the signal labeled symbol S21 indicates a CRS (CRS-Tx1) to be transmitted via the antenna a101-1. The signal labeled symbol S22 is a non-linearly precoded specific signal to be transmitted via the antenna a101-1. FIG. 5A illustrates the CRSs that are arranged in different time bands by the frame constructor a142 and are transmitted via the antennas a101-n from the base-station apparatus A1. For example, the CRS to be transmitted via the antenna a101-1 is transmitted at time t1, and the CRS to be transmitted via the antenna a101-2 is transmitted at time t2. FIG. 5A also illustrates that the CRS and the specific signal are arranged in different time bands by the frame constructor a142. For example, the CRSs to be transmitted via the antennas a101-n are arranged in a time band from time t1 through time tN, and the specific signals are arranged in a time band from time tN+1 onward. As also illustrated in FIG. 5A, the specific signals to be transmitted via the first through N-th antennas a101-n are arranged in the same time band by the frame constructor a142. FIG. 5A illustrates the frame structure for exemplary purposes only, and the present invention is not limited to this frame structure. For example, as illustrated in FIG.
5A, all the CRSs to be transmitted via the antennas a101-n are arranged in a preceding time band (earlier than time tN+1), and the specific signals are arranged in a subsequent time band (from time tN+1 onward). Alternatively, the frame constructor a142 may arrange the specific signals in the preceding time band and the CRSs in the subsequent time band. Alternatively, the frame constructor a142 may arrange the specific signal and the CRS alternately in time bands, or in another sequential order. For example, the frame constructor a142 may arrange the CRS to be transmitted via the antenna a101-1 in the time band subsequent to time tN+1 and the specific signal in the time band prior to time tN+1. Note that the base-station apparatus A1 transmits a frame having only the CRS arranged therein before starting the transmission of the specific signal. FIG. 5B and FIG. 5C illustrate structure examples of the frames to be transmitted via the first through N-th antennas a101-n. FIG. 4 and FIG. 5A illustrate the DMRS and the data signal, or the CRS and the specific signal (including the DMRS and the data signal), arranged in the time direction. As illustrated in FIG. 5B and FIG. 5C, the CRS, DMRS, and data signal may be arranged in a two-dimensional matrix along the time direction (t) and the frequency direction (f).

(Mobile-Station Device B1n)

FIG. 6 is a function block diagram illustrating a structure example of the mobile-station device B1n of the present embodiment. As illustrated in FIG. 6, the mobile-station device B1n includes an antenna b101, a receiver b102, a GI remover b103, an FFT unit b104, a signal separator b105, a CRS channel estimator b107, a DMRS channel estimator b12, a channel compensator b106, a modulo calculator b109, a demodulator b110, a decoder b111, a channel state information generator b108, an IFFT unit b131, a GI inserter b132, and a transmitter b133.
The receiver b102 receives a signal transmitted by the base-station apparatus A1 (a signal on a carrier frequency) via the antenna b101. The receiver b102 down-converts and A/D (analog/digital) converts the received signal, thereby generating a baseband digital signal. The receiver b102 outputs the generated digital signal to the GI remover b103. The GI remover b103 removes the GI from the digital signal input from the receiver b102, and outputs the signal without the GI to the FFT unit b104. The FFT unit b104 performs a fast Fourier transform operation on the signal input from the GI remover b103, thereby generating a signal in the frequency domain. The FFT unit b104 outputs the generated signal in the frequency domain to the signal separator b105. The signal separator b105 demaps the signal input from the FFT unit b104 in accordance with mapping information from the base-station apparatus A1. The signal separator b105 outputs, out of the demapped signal, a CRS to the CRS channel estimator b107 and a DMRS to the DMRS channel estimator b12. The signal separator b105 outputs a data signal to the channel compensator b106. Furthermore, the signal separator b105 inputs the data signal to the DMRS channel estimator b12. The CRS channel estimator b107 estimates a channel state in accordance with the CRS input from the signal separator b105, and then outputs information indicating the estimated channel state to the channel state information generator b108. In accordance with the DMRS input from the signal separator b105, the DMRS channel estimator b12 estimates a channel state of an equivalent channel that includes as part thereof a filter to be used in the non-linear precoding operation. The DMRS channel estimator b12 is described in detail below. The DMRS channel estimator b12 outputs equivalent channel state information indicating the channel state of the estimated equivalent channel to the channel compensator b106.
The channel compensator b106 performs channel compensation on the signal input from the signal separator b105 using the equivalent channel state information input from the DMRS channel estimator b12. The channel compensator b106 outputs the channel compensated signal to the modulo calculator b109. The modulo calculator b109 performs a modulo operation on the signal input from the channel compensator b106 in accordance with modulation information from the base-station apparatus A1. The modulo operation is expressed by the following Expression. mod(α)=α-floor((Re[α]+τ/2)/τ)×τ-i×floor((Im[α]+τ/2)/τ)×τ (1-1) Expression (1-1) indicates the modulo operation that is performed on a signal α with a modulo width τ. Also, floor(x) represents the maximum integer that does not exceed x, and Re[α] and Im[α] respectively represent the real part and the imaginary part of a complex number α. Here, i represents the imaginary unit. Also, τ represents the modulo width. If the modulation scheme of the data signal is QPSK, τ is preferably 2×2^(1/2) times the mean amplitude of the QPSK signal; if the modulation scheme of the data signal is 16QAM, τ is preferably 8×10^(-1/2) times the mean amplitude of the 16QAM signal; and if the modulation scheme of the data signal is 64QAM, τ is preferably 16×42^(-1/2) times the mean amplitude of the 64QAM signal. The modulo width may be a different value as long as the base-station apparatus and the mobile-station device use a common value. The modulo calculator b109 outputs a value as a modulo operation result to the demodulator b110. The demodulator b110 demodulates the signal input from the modulo calculator b109 in accordance with a modulation scheme indicated by the modulation information from the base-station apparatus A1. The demodulator b110 outputs to the decoder b111 the demodulated information (hard-determined encoded bit or soft-estimated value of the encoded bit). The decoder b111 acquires an information bit by decoding the information input from the demodulator b110, and then outputs the acquired information bit.
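In minimal Python form, the modulo operation of Expression (1-1) can be sketched as follows; the function name and the use of Python's built-in complex type are illustrative only, not part of the apparatus:

```python
import math

def modulo(alpha: complex, tau: float) -> complex:
    # Expression (1-1): fold the real and imaginary parts of alpha into the
    # interval [-tau/2, tau/2) with period tau.
    re = alpha.real - math.floor((alpha.real + tau / 2) / tau) * tau
    im = alpha.imag - math.floor((alpha.imag + tau / 2) / tau) * tau
    return complex(re, im)

# A QPSK point pushed off the constellation by a perturbation vector folds
# back onto the original point (tau = 4, the unit convention used in the
# signal-point plane of the second embodiment).
print(modulo((-1 + 1j) + 4, 4.0))   # prints (-1+1j)
```

Because the fold is periodic in both I and Q, the receiver can undo any perturbation vector z1τ+iz2τ added at the transmitter without knowing its value.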
The channel state information generator b108 generates the channel state information from the channel state input from the CRS channel estimator b107 (this operation is referred to as a channel state information generation process). The estimated channel state is herein referred to as a row vector h=[h1, h2, . . . , hN]. Let h1, h2, . . . , hN represent a channel state between the antenna a101-1 and the mobile-station device B1n, a channel state between the antenna a101-2 and the mobile-station device B1n, . . . , and a channel state between the antenna a101-N and the mobile-station device B1n, respectively. The channel state information generator b108 does not necessarily have to notify the base-station apparatus A1 of the row vector h exactly as the channel state information. For example, the channel state information generator b108 may normalize the row vector h by the norm |h| of the row vector in accordance with a predetermined number C, and notify the base-station apparatus A1 of the resulting normalized row vector C×h/|h|. Alternatively, the channel state information generator b108 may notify the base-station apparatus A1 of a value approximated to a predetermined value as the channel state information. The channel state information generator b108 modulates the generated channel state information, and outputs the modulated signal of the channel state information to the IFFT unit b131. The IFFT unit b131 performs an inverse fast Fourier transform operation on the signal input from the channel state information generator b108, thereby generating a signal in the time domain. The IFFT unit b131 outputs the generated signal in the time domain to the GI inserter b132. The GI inserter b132 attaches the guard interval to the signal input from the IFFT unit b131, and outputs the signal with the guard interval attached thereto to the transmitter b133. The transmitter b133 D/A (digital/analog) converts the signal (a baseband digital signal) input from the GI inserter b132.
The transmitter b133 up-converts the converted signal, thereby generating a signal on a carrier frequency. The transmitter b133 transmits the generated signal via the antenna b101. Filter Calculator a11 The filter calculator a11 of FIG. 3 constructs a channel matrix H from the channel state information input from the channel state information acquisition unit a105. H is a matrix of N rows and N columns, and a component at a p-th row and a q-th column represents a complex gain of a channel between a p-th mobile-station device B1p and a q-th antenna a101-q of the base-station apparatus (each of p and q is an integer from 1 to N). More specifically, the channel state acquired from the channel state information from each mobile-station device B1p becomes a row vector, and the channel matrix H is generated by arranging the row vectors corresponding to all the mobile-station devices as its rows. If the norm of the channel state information is normalized by the mobile-station device B1n, the base-station apparatus A1 generates the channel matrix H using the channel state information from the mobile-station device B1n, that is, using a normalized channel state at each row. The filter calculator a11 calculates an inverse matrix W (=H^(-1)) of the channel matrix H, and inputs the inverse matrix to the non-linear precoder a13. Detail of Non-Linear Precoder a13 Let sn represent each specific signal input from the specific signal constructor a125, and s represent a column vector having as its components all of s1 through sN. The specific signal herein refers to the data signal and the DMRS generated by the DMRS generator a124. As for the DMRS signal herein, the value of τ is determined independently of the data signal, but it is necessary to use a DMRS signal having a value common to the base-station apparatus and the mobile-station device.
For example, the DMRS signal may match one point of the QPSK signal, and τ=2×2^(1/2) is used even if the data signal is modulated through 16QAM. The non-linear precoder a13 searches for a combination of N-dimensional integer column vectors Z1 and Z2 that minimizes the norm of a transmit signal multiplied by the filter W. This operation may be expressed by the following expression. (Z1,Z2)=argmin_(z1,z2)|W(s+z1τ+iz2τ)| (1-2) A combination (z1, z2) that minimizes the norm |W(s+z1τ+iz2τ)| on the right side is represented by (Z1, Z2). Since each component of z1 and z2 can take any integer, it is difficult to search all the combinations. A search range of each component of z1 and z2 is thus limited to a predetermined range (for example, to an integer having the absolute value L or less: [-L, -L+1, . . . , -2, -1, 0, 1, 2, . . . , L-1, L]). A search operation is preferably performed on points as candidates centered on the signal point Ws, but a method other than this method may be used to search for a point that minimizes the norm. Here, z1τ+iz2τ is referred to as a perturbation vector. A signal x=W(s+Z1τ+iZ2τ) based on the calculated (Z1, Z2) is power-normalized. The base-station apparatus A1 needs to normalize the total transmit power of data signals within a constant number of subcarriers and a constant number of OFDM symbols (referred to as a "power normalization unit") in order to keep the transmit power constant. The power normalization unit represents the entire frame unit of FIG. 5B and FIG. 5C. The total power P1 across the power normalization unit of the power of the data signals x calculated in the non-linear precoding operation is calculated. Let P0 represent the total power assigned to the base-station apparatus A1 to transmit the data signal of one power normalization unit, and a power normalization coefficient g (=(P1/P0)^(1/2)) is calculated.
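A brute-force sketch of the search of Expression (1-2) and the subsequent power normalization, assuming N=2, a hypothetical channel matrix H, and an assumed allotted power (named P0 here); all helper names are illustrative, and a practical precoder would use a smarter search than full enumeration:

```python
import itertools, math

def mat_vec(M, v):
    # Multiply a small matrix (list of rows) by a column vector.
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def norm2(v):
    return sum(abs(x) ** 2 for x in v)

def inv2x2(H):
    # Analytic inverse of a 2x2 complex matrix, i.e. W = H^(-1).
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return [[ H[1][1] / det, -H[0][1] / det],
            [-H[1][0] / det,  H[0][0] / det]]

def nonlinear_precode(H, s, tau, L):
    # Expression (1-2): search integer vectors (z1, z2) with components in
    # [-L, L] minimizing |W(s + z1*tau + i*z2*tau)|.
    W = inv2x2(H)
    best_p, best_x = None, None
    for z in itertools.product(range(-L, L + 1), repeat=2 * len(s)):
        z1, z2 = z[:len(s)], z[len(s):]
        v = [s[k] + z1[k] * tau + 1j * z2[k] * tau for k in range(len(s))]
        x = mat_vec(W, v)
        p = norm2(x)
        if best_p is None or p < best_p:
            best_p, best_x = p, x
    return best_x

H = [[1.0 + 0.2j, 0.4 - 0.1j],
     [0.3 + 0.1j, 0.9 - 0.2j]]     # hypothetical 2x2 channel matrix
s = [1 + 1j, -1 + 1j]              # QPSK specific signals, tau = 4
x = nonlinear_precode(H, s, tau=4.0, L=2)

# Power normalization: with g = (P1/P0)^(1/2), transmitting x/g keeps the
# transmit power at the allotted value P0.
P0 = 1.0                           # assumed allotted power
g = math.sqrt(norm2(x) / P0)
x_tx = [xi / g for xi in x]
```

By construction Hx equals s plus a lattice offset whose components are integer multiples of τ (in I and Q), which is exactly what the receiver-side modulo operation removes.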
The non-linear precoder a13 multiplies the specific signal x (the data signal and DMRS) by the reciprocal of the power normalization coefficient g, and inputs the signal subsequent to the multiplication to the frame constructor a142. Each component of the signal x indicates a transmit signal to be transmitted by each of the first antenna a101-1 through the N-th antenna a101-N. Since the DMRS signal itself has been multiplied by the reciprocal of g, in the mobile-station device B1n the DMRS channel estimator b12 can estimate the channel including this factor, and the channel compensator b106 can perform the compensation operation on the amplitude of the data signal by multiplying the amplitude by g. More specifically, a signal can correctly be detected by multiplying the received data signal by g. As described above, the non-linear precoder a13 of the present invention performs on the DMRS the same non-linear precoding operation as the non-linear precoding operation performed on the data signal, including the power normalization. Detail of DMRS Channel Estimator b12 Since the base-station apparatus A1 multiplies the data signal by W (=H^(-1)), the received signal is HWs=s in an ideal environment, and no channel compensation is needed on the data signal. However, since (1) the base-station apparatus A1 has performed the power normalization, and (2) the channel state changes from when the mobile-station device B1n feeds back the channel state information to when the base-station apparatus A1 transmits the data signal, a reception gain of the data signal (a complex gain of an equivalent channel including the precoding) needs to be freshly estimated during a data signal reception using the DMRS that has undergone the same precoding operation as the precoding operation performed on the data signal. FIG. 7 illustrates the DMRS channel estimator b12 in detail.
The DMRS channel estimator b12 includes a perturbation vector candidate selector b121, a perturbation vector adder b122, an interim channel estimating unit b123, a channel compensating unit b124, a modulo calculator b125, a demodulator b126, a perturbation vector evaluation value calculator b127, and a perturbation vector estimating unit b128. (Step S21) The perturbation vector candidate selector b121 selects a perturbation vector candidate, and inputs the selected perturbation vector candidate to the perturbation vector adder b122. A perturbation vector Za serving as a candidate is expressed by Za=(z1a+iz2a)τ (1-3) where z1a and z2a are N-dimensional column vectors, and each component falls within a predetermined range (for example, an integer having the absolute value L' or less: [-L', -L'+1, . . . , -2, -1, 0, 1, 2, . . . , L'-1, L']). The number of candidates of the perturbation vector expressed in Expression (1-3) is (2L'+1)^(2N). Here, the base-station apparatus A1 and the mobile-station device B1n use a common DMRS modulo width. The candidates of the perturbation vector in Expression (1-3) are restricted in the base-station apparatus as well. Although the condition L'=L preferably holds, the condition L'<L may be acceptable to reduce the amount of calculation in the mobile-station device B1n. In THP or LR-THP to be described with reference to a third embodiment, it is difficult to determine L, and L' may be determined depending on the amount of calculation of the mobile-station device B1n. (Step S22) Next, the perturbation vector adder b122 calculates a signal q+Za by adding a perturbation vector candidate Za=(z1a+iz2a)τ to a reference signal q (a signal transmitted from the base-station apparatus and being a DMRS signal prior to the non-linear precoding operation). The perturbation vector adder b122 inputs the signal q+Za to the interim channel estimating unit b123. (Step S23) Next, the interim channel estimating unit b123 divides the received DMRS p by the signal q+Za.
The complex gain may be estimated to be h=p/(q+Za). The interim channel estimating unit b123 inputs an amplitude |h| and a phase arg(h) of the complex gain h to the channel compensating unit b124 and the perturbation vector estimating unit b128. (Step S24) The channel compensating unit b124 performs the channel compensation on the data signal in the frame having the DMRS therewithin using the amplitude and phase input from the interim channel estimating unit b123. More specifically, the channel compensating unit b124 divides the received signal by the complex gain h. The channel compensating unit b124 inputs the channel-compensated data signal to the modulo calculator b125. (Step S25) The modulo calculator b125 performs a modulo operation on the data signal channel-compensated by the channel compensating unit b124. The modulo calculator b125 uses the data signal modulo width common to the base-station apparatus A1 and the mobile-station device B1n. The modulo calculator b125 inputs the modulo calculated data signal to the demodulator b126. (Step S26) The demodulator b126 soft-estimates the input modulo operated data signal, and then inputs the soft-estimated value to the perturbation vector evaluation value calculator b127. Note that the demodulator b126 calculates the soft-estimated value as a log likelihood ratio (LLR). The LLR is calculated using the following Expressions. Lm=log((Σ_{sk∈Sm+} p[y|sk])/(Σ_{sk∈Sm-} p[y|sk])) (1-4) p[y|sk]=(1/(πσn^2))×exp(-|y-sk|^2/σn^2) (1-5) Here, y represents the modulo-operated data signal input from the modulo calculator b125, and sk represents each of the signal points of each modulation scheme. Also, Sm+ represents the set of signal points whose m-th bit under the modulation scheme is 1, and Sm- represents the set of signal points whose m-th bit is 0.
Also, σn^2 represents the sum of the variances of the noises in the I-ch and the Q-ch of each modulation signal (i.e., the complex Gaussian noise power). In accordance with Expression (1-4) and Expression (1-5), the demodulator b126 inputs, to the perturbation vector evaluation value calculator b127, the LLR corresponding to each bit assigned to each data signal. More specifically, the demodulator b126 calculates LLRs of the number equal to the number of data signals×2 in the case of QPSK, and calculates LLRs of the number equal to the number of data signals×4 in the case of 16QAM, and then inputs the resulting LLRs to the perturbation vector evaluation value calculator b127. (Step S27) The perturbation vector evaluation value calculator b127 calculates a variance of the input LLRs, and inputs the calculated variance of the LLRs to the perturbation vector estimating unit b128. The variance of the LLRs is calculated in accordance with the following Expression. σLLR^2=(1/(VM))×Σ_{v=1}^{V}Σ_{m=1}^{M}|Lm^v|^2 (1-6) Here, V represents the number of data signals soft-estimated on each candidate of the perturbation vectors. M represents the bit count assigned to each modulation scheme, and is 2 in QPSK and 4 in 16QAM. Lm^v represents the LLR of the m-th bit assigned to the v-th data signal. (Step S28) The perturbation vector estimating unit b128 selects the largest variance of the variances of the LLRs corresponding to the perturbation vectors input from the perturbation vector evaluation value calculator b127, outputs the amplitude and phase of the data signal (referred to as "equivalent channel state information") corresponding to the perturbation vector having the largest variance, and thus the DMRS channel estimator b12 inputs the equivalent channel state information to the channel compensator b106 (FIG. 6). A large variance of the LLRs indicates that the mutual information conveyed from the base-station apparatus A1 is the largest when that perturbation vector is assumed.
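Steps S21 through S28 can be sketched end to end as follows, assuming QPSK, a single spatial stream, noiseless reception, and the τ=4 signal-plane convention used in the second embodiment; all names and numeric values are illustrative:

```python
import itertools, math

QPSK = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]   # signal plane with tau = 4

def modulo(a, tau):
    # Expression (1-1) applied componentwise to I and Q.
    return complex(a.real - math.floor((a.real + tau / 2) / tau) * tau,
                   a.imag - math.floor((a.imag + tau / 2) / tau) * tau)

def llrs(y, var):
    # Expressions (1-4)/(1-5) for QPSK: bit 0 is the sign of the real part,
    # bit 1 the sign of the imaginary part of the signal point.
    p = {s: math.exp(-abs(y - s) ** 2 / var) for s in QPSK}
    out = []
    for part in (lambda c: c.real, lambda c: c.imag):
        plus = sum(p[s] for s in QPSK if part(s) > 0)
        minus = sum(p[s] for s in QPSK if part(s) < 0)
        out.append(math.log(plus / minus))
    return out

def estimate_perturbation(p_rx, q, data_rx, tau, L, var):
    # Steps S21-S28: for every candidate Za = (z1 + i*z2)*tau, form the
    # interim estimate h = p_rx/(q + Za), compensate and modulo the data,
    # and keep the candidate maximizing the LLR variance of Expression (1-6).
    best = None
    for z1, z2 in itertools.product(range(-L, L + 1), repeat=2):
        h = p_rx / (q + (z1 + 1j * z2) * tau)
        ls = [l for r in data_rx for l in llrs(modulo(r / h, tau), var)]
        metric = sum(l * l for l in ls) / len(ls)       # sigma_LLR^2
        if best is None or metric > best[0]:
            best = (metric, (z1, z2), h)
    return best[1], best[2]

# Hypothetical noiseless example: the true perturbation is (z1, z2) = (1, 1).
h_true = 0.9 * complex(math.cos(0.4), math.sin(0.4))
q = 1 + 1j                                   # DMRS before precoding
p_rx = h_true * (q + (1 + 1j) * 4)           # received DMRS
data_rx = [h_true * d for d in QPSK]         # received data symbols
(z1, z2), h_est = estimate_perturbation(p_rx, q, data_rx, tau=4.0, L=1, var=0.5)
```

With the correct candidate, the compensated data land exactly on the constellation and every LLR is uniformly large; wrong candidates scale or rotate the compensated points off the constellation, shrinking the variance of Expression (1-6).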
The absolute value of the LLR becomes larger as the "probability" that each bit is 1 or 0 is higher. Each perturbation vector is assumed, and the variance of the LLRs is calculated. The perturbation vector having the highest "probability" is estimated by selecting the perturbation vector having the largest variance of the LLRs. Since the channel compensating unit b124, the modulo calculator b125, and the demodulator b126 in the DMRS channel estimator b12 perform the same operations as those of the channel compensator b106, the modulo calculator b109, and the demodulator b110 respectively external to the DMRS channel estimator b12, the same circuit design may be shared to reduce the circuit scale. A soft-estimated value corresponding to the perturbation vector estimated by the perturbation vector estimating unit b128, out of the soft-estimated values calculated by the demodulator b126, may be used by the decoder b111. This arrangement avoids calculating the soft-estimated value twice, leading to a reduction in the amount of calculation. Advantageous Effects As described with reference to the present embodiment, the base-station apparatus spatially multiplexes the DMRSs through NLP MU-MIMO and transmits the multiplexed DMRSs. Power efficiency is thus increased, and the overhead involved in the insertion of the DMRS is reduced. Note that the data signal that is soft-estimated to estimate the perturbation vector may be a data signal as part of a frame. The perturbation vector may be estimated using an index other than the variance of the LLRs. Second Embodiment: Two-Dimensional Euclidean Algorithm In the first embodiment, the mobile-station device B1n estimates the likeliest perturbation vector from the candidates of the perturbation vectors. In this way, the DMRSs can be spatially multiplexed using NLP MU-MIMO.
In a second embodiment, two apparatuses including a base-station apparatus A2 and a mobile-station device B2n perform a process referred to as a "two-dimensional Euclidean algorithm", and the mobile-station device B2n operates with an amount of calculation lower than that of the mobile-station device B1n in the first embodiment. The base-station apparatus A2 of the present embodiment is identical to the base-station apparatus A1 of the first embodiment except that the base-station apparatus A2 includes a DMRS corrector a226. The mobile-station device B2n of the present embodiment is identical to the mobile-station device B1n of the first embodiment except that a DMRS channel estimator b22 in the mobile-station device B2n is different in operation from the DMRS channel estimator b12. The following discussion focuses on the differences between the first embodiment and the second embodiment. FIG. 15 illustrates a configuration of the base-station apparatus A2. As described above, the DMRS corrector a226 is newly added to the configuration of FIG. 3. The DMRS corrector a226 in the base-station apparatus A2 and the DMRS channel estimator b22 in the mobile-station device B2n are described in detail. Two-Dimensional Euclidean Algorithm Each of the base-station apparatus A2 and the mobile-station device B2n of the present embodiment includes a two-dimensional Euclidean algorithm unit that performs a "two-dimensional Euclidean algorithm process". The two-dimensional Euclidean algorithm unit constitutes one of the features of the present embodiment, and the principle thereof is described first. The two-dimensional Euclidean algorithm is an algorithm to which the standard Euclidean algorithm is extended. The standard Euclidean algorithm is herein referred to as a one-dimensional Euclidean algorithm. The one-dimensional Euclidean algorithm is a method of finding the greatest common divisor of a pair of integers.
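This subtraction-based method can be sketched as follows (a minimal sketch; the 15-and-36 pair worked through next serves as the example):

```python
def gcd_by_subtraction(a: int, b: int) -> int:
    # One-dimensional Euclidean algorithm: repeatedly subtract the smaller
    # number of the pair from the larger until 0 appears; the remaining
    # number is the greatest common divisor.
    while b != 0:
        if a < b:
            a, b = b, a
        a = a - b            # 36-15=21, 21-15=6, 15-6=9, 9-6=3, ...
    return a

print(gcd_by_subtraction(15, 36))   # prints 3
```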
For example, the greatest common divisor of 15 and 36 is determined here. The smaller number of 15 and 36 is subtracted from the larger number: 36-15=21. Next, in the resulting pair of 15 and 21, the smaller number is subtracted from the larger number: 21-15=6. This operation is repeated. The series of calculations is written as below: 36-15=21, 21-15=6, 15-6=9, 9-6=3, 6-3=3, 3-3=0. The algorithm ends when 0 finally appears. Of the remaining pair of 0 and the other number (here, 3), the other number is the greatest common divisor. The one-dimensional Euclidean algorithm has been described. The one-dimensional Euclidean algorithm is now extended to complex numbers. The algorithm to which the one-dimensional Euclidean algorithm is extended is the two-dimensional Euclidean algorithm. A method described herein is to calculate a "base vector (or an irreducible vector)" corresponding to the greatest common divisor of Gaussian integers if two complex numbers, each having an integer real part and an integer imaginary part (Gaussian integers), are present. As an example, the two-dimensional Euclidean algorithm is applied to two Gaussian integers of (3+i) and (-1+i). The Gaussian integer of the pair of (3+i) and (-1+i) having the larger norm (absolute value) is added to the Gaussian integers that result from multiplying the Gaussian integer having the smaller norm of the pair by +1, -1, +i, and -i (the multiplication of the Gaussian integer by -1, +i, and -i is interpreted to mean that the Gaussian integer is rotated by +180 degrees, +90 degrees, and +270 degrees, respectively). Among these four values (2+2i, 4, 2, 4+2i), the Gaussian integer having the smallest norm is 2. The same operation is performed on one of 3+i and -1+i, whichever has the smaller norm, and 2. Among these four values (1+i, 3-i, 1-i, 3+i), the Gaussian integers having the smallest norm are 1+i and 1-i. If two or more Gaussian integers have the same norm, whichever may be used.
For example, 1+i is selected here, and the same operation is performed on -1+i and 1+i. (-1+i)+(+1)×(1+i)=2i (-1+i)+(-1)×(1+i)=-2 (-1+i)+(+i)×(1+i)=-2+2i (-1+i)+(-i)×(1+i)=0 (If the two Gaussian integers have the same norm, whichever may be multiplied by +1, -1, +i, and -i.) Since 0 appears, 1+i or -1+i corresponds to the greatest common divisor of the one-dimensional Euclidean algorithm (referred to as the "irreducible vector"). The two-dimensional Euclidean algorithm has been described. The present embodiment features the DMRS channel estimation through the two-dimensional Euclidean algorithm. The principle of the channel estimation is described below. The DMRS serving as a reference in the present embodiment has to be one of the points of the QPSK signal. FIG. 8 illustrates an example of the positions of two DMRSs (referred to as DMRS 1 and DMRS 2) in a signal point plane. The base-station apparatus may now transmit the DMRS 1 at position 3+i, and the DMRS 2 at position -1+i, for example. The unit of the signal point plane is set so that the amplitude of the QPSK signal is 2^(1/2). More specifically, the signal points of the QPSK signal are (±1±i), and the modulo width is 4 (for convenience of explanation, the unit of amplitude has changed from that of the first embodiment). All of the solid circles denoting the DMRS 1 and the DMRS 2 and the blank circles other than the solid circles are QPSK signal points and points resulting from adding any perturbation vector to the QPSK signal points. The DMRS 1 is a signal that results from adding a perturbation vector +4 to -1+i of the QPSK signal. If the two-dimensional Euclidean algorithm is performed on 3+i and -1+i, a complex number 1+i as the "greatest common divisor" is obtained. This means that a minimum lattice vector (irreducible vector) of the lattice point group (solid circles and blank circles) of FIG. 8 is calculated. The irreducible vector corresponds to one of the four QPSK points.
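The two-dimensional Euclidean algorithm itself can be sketched as follows; ties between equal-norm candidates are broken by order, which the text explicitly permits:

```python
def gaussian_irreducible(a: complex, b: complex) -> complex:
    # Two-dimensional Euclidean algorithm: combine the larger-norm value with
    # the smaller one multiplied by +1, -1, +i, -i (0/180/90/270-degree
    # rotations), keep the smallest-norm result, and repeat until 0 appears.
    while b != 0:
        if abs(b) > abs(a):
            a, b = b, a
        c = min((a + b, a - b, a + 1j * b, a - 1j * b), key=abs)
        a, b = b, c
    return a

v = gaussian_irreducible(3 + 1j, -1 + 1j)   # worked example from the text
print(v)                                    # prints (1+1j)
```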
If the original signals are multiplied by a common complex factor "a", as in a(3+i) and a(-1+i), the irreducible vector is also multiplied by a, resulting in an irreducible vector a(1+i). In the mobile-station device, the DMRS 1 and the DMRS 2 are multiplied by a complex gain of the channel, namely, h(3+i) and h(-1+i). Noise is disregarded herein. If the two-dimensional Euclidean algorithm is applied to the two signals, the irreducible vector h(1+i) results. This signal is obtained by multiplying the complex gain of the channel by one of the four QPSK points to which no perturbation vector is added. Since it is known that each of the QPSK points has a norm of 2^(1/2), the absolute value |h| of the complex gain of the channel can be determined. Also, since arg(h) can be determined, the phase of the complex gain of the channel can also be determined. However, the mobile-station device has difficulty in determining which of the four QPSK points (±1±i) multiplied by h results in the signal h(1+i) obtained through the two-dimensional Euclidean algorithm unit during the DMRS channel estimation. The phase arg(h) is thus determined on a tentatively selected point (1+i). If the point multiplied by h is one of the other three points (-1+i), (1-i), and (-1-i), the actual phase difference of the complex gain is definitely one of +90 degrees, +180 degrees, and +270 degrees. The modulo operation is then performed on the DMRS 1 and the DMRS 2. The same modulo operation is performed on the I-ch and the Q-ch; it is symmetrical with respect to each axis and rotationally symmetric under 90-degree rotations. For this reason, even if arg(h) is rotated by an integer multiple of 90 degrees, the modulo operation is performed on each of the DMRS 1 and the DMRS 2 without any problems. The DMRS 1 subsequent to the modulo operation is h(1+i), and the DMRS 2 subsequent to the modulo operation is h(-1+i).
If noises μ1 and μ2 are considered, the DMRS 1 becomes h(1+i)+μ1, and the DMRS 2 becomes h(-1+i)+μ2. Since the mobile-station device has known that the DMRS 1 is (1+i) and the DMRS 2 is (-1+i) prior to the non-linear precoding operation, the mobile-station device performs the channel estimation from each DMRS as below: (h(1+i)+μ1)/(1+i)=h+μ1/(1+i) and (h(-1+i)+μ2)/(-1+i)=h+μ2/(-1+i). Here, μ1/(1+i) and μ2/(-1+i) are channel estimation errors, and can be reduced by maximum ratio combining the channel estimation results of the DMRS 1 and the DMRS 2. It should be noted herein that h(±1±i) corresponding to the irreducible vector cannot be calculated through the two-dimensional Euclidean algorithm for some combinations of DMRSs. If the DMRSs are arranged as illustrated in FIG. 9, the irreducible vector is 3+3i, and fails to match a signal point of QPSK. To avoid such a case, the base-station apparatus predicts the case of FIG. 9 through the two-dimensional Euclidean algorithm, and then changes the signal points and transmits points free from such a problem. The overview of the two-dimensional Euclidean algorithm of the present embodiment has been discussed. This process is described again referring to FIG. 10, which illustrates the two-dimensional Euclidean algorithm. As described above, the two-dimensional Euclidean algorithm unit 300 is present both in the base-station apparatus and the mobile-station device. The two-dimensional Euclidean algorithm unit of the mobile-station device is described first. The two-dimensional Euclidean algorithm unit 300 of FIG. 10 includes a vector storage unit 301, a difference vector calculator 303, a difference vector norm calculator 305, a convergence determining unit 307, and a norm calculator 309. (Step S31) The vector storage unit 301 inputs the two input DMRSs to the norm calculator 309. The two DMRSs are obtained by multiplying the DMRS transmitted from the base-station apparatus by the complex gain h of the channel, and adding noise to the DMRS subsequent to the multiplication.
(Step S32) The norm calculator 309 calculates the norms of the two input DMRSs, and inputs each norm to the vector storage unit 301. (Step S33) The vector storage unit 301 inputs the two input DMRSs and the norms corresponding thereto to the difference vector calculator 303. (Step S34) Let "a" represent the vector having the larger norm of the two input vectors and "b" represent the vector having the smaller norm (the signal obtained when the difference vector calculator 303 recursively performs a calculation on the two input DMRSs is hereinafter referred to as a "vector"). The difference vector calculator 303 calculates four vectors c1=a+b, c2=a-b, c3=a+ib, and c4=a-ib, and then inputs the four vectors to the difference vector norm calculator 305. The difference vector calculator 303 also inputs the vector b and the norm of the vector b to the vector storage unit 301. (Step S35) The difference vector norm calculator 305 calculates the norms of the four input vectors c1 through c4, inputs the vector having the smallest norm and the norm of that vector to the vector storage unit 301, and inputs the norm of the vector having the smallest norm to the convergence determining unit 307. (Step S36) The convergence determining unit 307 compares in magnitude the input norm with a predetermined positive value T. If the norm is equal to or smaller than T, the convergence determining unit 307 inputs to the vector storage unit 301 information that the two-dimensional Euclidean algorithm has been completed. The norm of 0 is not set as the convergence condition, unlike the ideal two-dimensional Euclidean algorithm, because the DMRS received by the mobile-station device contains noise and the norm handled by the convergence determining unit 307 does not end up exactly 0. However, the noise is typically smaller than the DMRS, and it is thus determined that convergence has been reached when the norm becomes smaller than the predetermined constant T.
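On the mobile-station side, the convergence test against the threshold T replaces the exact comparison with 0; a sketch of steps S31 through S38, with a hypothetical channel gain and hypothetical noise values:

```python
import math

def euclid2d_noisy(a: complex, b: complex, T: float) -> complex:
    # Steps S31-S38 with the threshold T of the convergence determining unit
    # 307: combine the larger-norm vector with the rotated smaller one, keep
    # the smallest-norm result, and stop once it falls to T or below; the
    # last vector b is output as the irreducible vector.
    while True:
        if abs(b) > abs(a):
            a, b = b, a
        c = min((a + b, a - b, a + 1j * b, a - 1j * b), key=abs)
        if abs(c) <= T:
            return b
        a, b = b, c

h = 0.8 * complex(math.cos(0.3), math.sin(0.3))   # hypothetical channel gain
p1 = h * (3 + 1j) + (0.010 + 0.005j)              # received DMRS 1 plus noise
p2 = h * (-1 + 1j) + (-0.005 + 0.010j)            # received DMRS 2 plus noise
v = euclid2d_noisy(p1, p2, T=0.3)                 # close to h*(+/-1 +/- i)
```

Since the QPSK points have norm 2^(1/2), the magnitude of the returned vector divided by 2^(1/2) approximates |h|, with the 90-degree phase ambiguity discussed above.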
The value of T is a tradeoff of the following two factors, and is preset through computer simulation. 1) If T increases, the convergence determination condition is relaxed, and the probability increases that the convergence determining unit 307 determines that the norm has converged even though the norm, absent any error caused by the noise, is not 0. 2) If T decreases, the convergence determination condition becomes rigorous, and the convergence determining unit 307 may determine, because of an error caused by noise, that the norm has not converged although it should determine that the norm has converged. The value of T is thus preset in view of the tradeoff of the two factors. (Step S37) The vector storage unit 301 inputs to the difference vector calculator 303 two norms, i.e., 1) the norm corresponding to the vector input from the difference vector calculator 303, and 2) the norm corresponding to the vector input from the difference vector norm calculator 305, and repeats the process beginning with step S34. (Step S38) Upon receiving from the convergence determining unit 307 the information indicating that the two-dimensional Euclidean algorithm has been completed, the vector storage unit 301 outputs the vector b (irreducible vector). Since the two-dimensional Euclidean algorithm unit 300 in the base-station apparatus has not multiplied the DMRS by the complex gain h of the channel and has not added noise to the DMRS, the convergence determining unit 307 sets the determination condition to T=0. Detail of DMRS Corrector a226 Described next are the detailed configuration and operation of the DMRS corrector a226 in the base-station apparatus A2 that performs the two-dimensional Euclidean algorithm. FIG. 11 illustrates a configuration example of the DMRS corrector a226. The DMRS corrector a226 includes a two-dimensional Euclidean algorithm unit 300, an irreducible vector verification unit 320, and a perturbation vector adder 340.
The DMRS corrector a226 enables the base-station apparatus A2 to detect, and thereby preclude, the problem that the irreducible vector fails to match the lattice vector. The DMRS corrector a226 acquires the DMRS with the perturbation vector added thereto by the non-linear precoder a13; the signal prior to multiplication by the filter W is (q+z1τ+iz2τ). Two DMRSs are acquired at a time, and the two are paired in advance; which is paired with which is known to the base-station apparatus A2 and the mobile-station device B2n in advance. The two-dimensional Euclidean algorithm unit 300 inputs to the irreducible vector verification unit 320 the irreducible vector resulting from applying the two-dimensional Euclidean algorithm to the two DMRSs (the irreducible vector is referred to as "d") and the two DMRSs acquired from the non-linear precoder a13. If the irreducible vector d is the DMRS signal q prior to the perturbation vector addition, i.e., one of the QPSK signals (±1±i), the irreducible vector verification unit 320 inputs the two DMRSs acquired from the non-linear precoder a13 back to the non-linear precoder a13. On the other hand, if the irreducible vector d is not the DMRS prior to the perturbation vector addition, i.e., not one of the QPSK signals (±1±i), the irreducible vector verification unit 320 inputs the two DMRSs acquired from the non-linear precoder a13 to the perturbation vector adder 340. The perturbation vector adder 340 adds a perturbation vector τ (or -τ, iτ, or -iτ) to one of the two DMRSs acquired from the non-linear precoder a13. The perturbation vector adder 340 determines, at random using a random number, which of the DMRSs the perturbation vector is to be added to. The perturbation vector adder 340 again inputs to the two-dimensional Euclidean algorithm unit 300 the two DMRSs, including the DMRS with the perturbation vector added thereto.
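The verification-and-retry loop of units 300, 320 and 340 can be sketched as below. The `two_dim_euclid` argument stands in for the two-dimensional Euclidean algorithm unit 300, and the names, the four candidate perturbations, and the use of `random` are illustrative assumptions rather than the patent's implementation:

```python
import random

QPSK = (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)    # valid irreducible vectors d

def correct_dmrs(pair, two_dim_euclid, tau):
    """Re-randomize a DMRS pair until its irreducible vector is a QPSK point
    (irreducible vector verification unit 320 + perturbation vector adder 340)."""
    while True:
        d = two_dim_euclid(*pair)             # unit 300
        if d in QPSK:                         # verification unit 320
            return pair                       # corrected pair goes back to a13
        # perturbation vector adder 340: add tau, -tau, i*tau or -i*tau
        # to one of the two DMRSs, chosen by a random number
        k = random.randrange(2)
        p = random.choice((tau, -tau, 1j * tau, -1j * tau))
        pair = (pair[0] + p, pair[1]) if k == 0 else (pair[0], pair[1] + p)
```

When the verification already succeeds on the first pass, the pair is returned unchanged, matching the pass-through branch of the text.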
The two-dimensional Euclidean algorithm unit 300 performs the two-dimensional Euclidean algorithm using the new DMRSs. From then on, the two-dimensional Euclidean algorithm unit 300, the irreducible vector verification unit 320, and the perturbation vector adder 340 repeat their operations until the irreducible vector verification unit 320 determines that the irreducible vector d is the DMRS prior to the perturbation vector addition, i.e., one of the QPSK signals (±1±i). Finally, the irreducible vector verification unit 320 outputs the two corrected DMRSs and inputs them to the non-linear precoder a13. Once the DMRS corrector a226 has corrected the DMRSs, the non-linear precoder a13 multiplies the DMRS by the filter W without searching for a new perturbation vector, and multiplies the resulting value by the reciprocal g^(-1) of the power normalization coefficient g.
Detail of DMRS Channel Estimator b12
Described in detail below are the configuration and operation of the DMRS channel estimator b12 in the mobile-station device B2n. FIG. 12(a) illustrates the DMRS channel estimator b12 in detail. The DMRS channel estimator b12 includes a two-dimensional Euclidean algorithm unit 300 and a complex gain calculator 350. As illustrated in FIG. 12(b), the complex gain calculator 350 includes an interim complex gain calculator 351, a DMRS channel compensator 352, a DMRS-modulo unit 353, and a vector divider 354. The DMRS channel estimator b12 first acquires the two DMRSs received by the mobile-station device B2n. The two DMRSs are the pair of DMRSs (referred to as p1 and p2) discussed with reference to the DMRS corrector a226 of the base-station apparatus. The two-dimensional Euclidean algorithm unit 300 performs the two-dimensional Euclidean algorithm on the two DMRSs, thereby calculating an irreducible vector p_red.
The irreducible vector p_red has already been multiplied by the complex gain h, and is thus different from the irreducible vector d calculated by the DMRS corrector a226. As described above, the irreducible vector p_red is a signal obtained by multiplying one of the four QPSK signals, with no perturbation vector added thereto, by the complex gain h of the channel, and by adding an error caused by noise to the multiplied signal. The interim complex gain calculator 351 determines a phase θ (=arg(p_red/(1+i))) on the assumption that the irreducible vector p_red is obtained by multiplying the point (1+i) of the four QPSK signals by the complex gain (even if another point, such as (-1+i), (1-i), or (-1-i), is actually involved, the phase difference from the actual complex gain is definitely one of +90 degrees, -90 degrees, or +180 degrees). Since each of the QPSK points has a norm of √2, the absolute value |h| (=|p_red|/√2) of the complex gain of the channel can be determined. The interim complex gain calculator 351 inputs the absolute value |h| and the phase θ of the complex gain of the channel thus calculated to the DMRS channel compensator 352. The DMRS channel compensator 352 performs a channel compensation on the DMRSs p1 and p2 using the input absolute value |h| of the complex gain of the channel and the input phase θ. More specifically, the amplitude of each DMRS is multiplied by 1/|h| and the phase is rotated by -θ, i.e., qk = pk·e^(-iθ)/|h| (k=1, 2). The DMRS channel compensator 352 inputs to the DMRS-modulo unit 353 the channel-compensated DMRSs q1 and q2, the absolute value |h|, and the phase θ of the complex gain of the channel. The DMRS-modulo unit 353 calculates signals e1 and e2 by performing the modulo operation on the input channel-compensated DMRSs q1 and q2. Since the DMRSs have been channel-compensated in terms of both amplitude and phase, the DMRS-modulo unit 353 can perform the modulo operation using the modulo width τ (=2√2) corresponding to a standard QPSK signal.
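Assuming noise-free inputs, the chain of units 351 through 354 can be sketched as follows. The function names, the specific modulo implementation, and the example channel value are assumptions for illustration, and, as the text notes, a 90- or 180-degree ambiguity remains if the irreducible vector actually came from a QPSK point other than 1+i:

```python
import cmath

TAU = 2 * 2 ** 0.5                  # modulo width for a standard QPSK signal

def modulo(x, tau=TAU):
    """Per-component modulo operation mapping I and Q into [-tau/2, tau/2)."""
    wrap = lambda u: u - tau * ((u + tau / 2) // tau)
    return complex(wrap(x.real), wrap(x.imag))

def estimate_gain(p1, p2, p_red, q_ref):
    """Sketch of units 351-354: estimate h from the two received DMRSs and
    the irreducible vector p_red produced by the Euclidean algorithm unit."""
    # interim complex gain calculator 351
    theta = cmath.phase(p_red / (1 + 1j))     # assume p_red came from 1+i
    mag_h = abs(p_red) / 2 ** 0.5             # QPSK points have norm sqrt(2)
    # DMRS channel compensator 352: q_k = p_k * e^(-i*theta) / |h|
    comp = cmath.exp(-1j * theta) / mag_h
    q1, q2 = p1 * comp, p2 * comp
    # DMRS-modulo unit 353: strip the perturbation lattice, re-apply channel
    e1, e2 = modulo(q1), modulo(q2)
    back = mag_h * cmath.exp(1j * theta)
    # vector divider 354: divide by the known reference point and average
    # the two estimates (a simple stand-in for the combining step)
    return (e1 * back / q_ref + e2 * back / q_ref) / 2

# Noise-free example with an assumed channel h; DMRS 1 carries a perturbation.
h = 0.8 * cmath.exp(0.3j)
q = 1 + 1j                                    # reference QPSK point
p1, p2 = h * (q + TAU), h * q
h_hat = estimate_gain(p1, p2, h * q, q)
```

In this noise-free case the perturbation TAU on p1 is removed exactly by the modulo step, so the estimate equals the assumed h.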
The DMRS-modulo unit 353 calculates signals p̂1 (=|h|·e1·e^(iθ)) and p̂2 (=|h|·e2·e^(iθ)) from the signals e1 and e2 subsequent to the modulo operation, using the absolute value |h| and the phase θ of the complex gain of the channel. The DMRS-modulo unit 353 inputs the signals p̂1 and p̂2 to the vector divider 354. The vector divider 354 estimates the complex gain h of the channel (=p̂1/q or p̂2/q) by dividing the signals p̂1 and p̂2 by a reference signal q of the DMRS received and learned in advance by the mobile-station device (the reference signal q is a signal point of the DMRS prior to the perturbation vector addition by the base-station apparatus, and is one of the four QPSK signal points). Since there are two DMRSs, two measurement values are obtained for the complex gain of the same channel. The error caused by the noise may be suppressed by maximum-ratio combining. The vector divider 354 inputs the estimated complex gain h of the channel to the channel compensator b106.
Advantageous Effects
[0192] As described with reference to the embodiment, each of the base-station apparatus A2 and the mobile-station device B2n performs the process of the two-dimensional Euclidean algorithm, so the DMRSs can be spatially multiplexed through NLP MU-MIMO using a relatively small amount of calculation in the mobile-station device B2n. The overhead involved in the insertion of the DMRS is thereby reduced.
Third Embodiment (THP)
Each of the mobile-station device B1n of the first embodiment and the mobile-station device B2n of the second embodiment performs VP with the non-linear precoder a13 and the filter calculator a11. A third embodiment uses THP (Tomlinson-Harashima precoding), which involves a smaller amount of calculation than VP. In the third embodiment, the base-station apparatus is referred to as a base-station apparatus A3, the mobile-station devices are referred to as mobile-station devices B31 through B3N, and any one of the mobile-station devices is referred to as a mobile-station device B3n.
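Because p̂1/q and p̂2/q are two measurements of the same h, combining them suppresses the noise. A deterministic toy illustration using plain averaging (the equal-SNR special case of maximum-ratio combining); the channel value and "noise" terms below are made up:

```python
# Two noisy estimates of the same complex channel gain, combined by averaging.
h = 0.8 + 0.3j                              # assumed true channel gain
n1, n2 = 0.05 - 0.02j, -0.04 + 0.03j        # fixed example noise terms
h1, h2 = h + n1, h + n2                     # the two single-DMRS estimates
h_avg = (h1 + h2) / 2                       # combined estimate
err1, err2 = abs(h1 - h), abs(h2 - h)       # individual estimation errors
err_avg = abs(h_avg - h)                    # error of the combined estimate
```

Here the combined error is the magnitude of the averaged noise, which for these example values is well below either individual error.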
In the third embodiment, the non-linear precoder is referred to as a non-linear precoder a33, and the filter calculator is referred to as a filter calculator a31. The filter calculator a31 first calculates the channel matrix H in the same manner as in the first embodiment, and then QR-decomposes H^H:
H^H = QR  (3-1)
Here, R is an upper triangular matrix, and Q is a unitary matrix. The superscript H denotes the complex conjugate transpose of a matrix. Let A represent a diagonal matrix including only the diagonal components of R^H. A linear filter P is calculated using A and Q in accordance with the following Expression.
P = QA^(-1)  (3-2)
Also, an interference coefficient filter F is calculated using A and R in accordance with the following Expression.
F = R^H·A^(-1) - I  (3-3)
Here, let I represent a unit matrix having N rows and N columns. Finally, the filter calculator a31 inputs the linear filter P and the interference coefficient filter F to the non-linear precoder a33. To perform the power normalization, the filter calculator a31 calculates the power normalization coefficient g in accordance with Expression (3-4). Let P_s represent the mean power of the modulation signal, P_vn the mean power of the data signal addressed to the n-th mobile-station device subsequent to the modulo operation, and C a diagonal matrix having the diagonal components
C = diag[P_v1, . . . , P_vN]  (3-5)
in the order from the top-left corner of the matrix. Note that P_vn changes in response to the modulo width τ. If P_s = 1, P_vn is 4/3 with QPSK (τ=2√2), 16/15 with 16QAM (τ=8/√10), and 64/63 with 64QAM (τ=16/√42). This is based on the fact that the data signals after the modulo operation are uniformly distributed within the modulo width centered on the origin. The filter calculator a31 inputs to the non-linear precoder a33, as a new linear filter P, the matrix that results from multiplying the linear filter P calculated in Expression (3-2) by g^(-1). The power normalization described with reference to the first embodiment is not performed then. FIG.
13 illustrates the configuration of the non-linear precoder a33, and FIG. 14 is a flowchart illustrating the operation of the non-linear precoder a33. The operation of the non-linear precoder a33 is described sequentially. (Step S1) The interference calculator a131 and the linear filter multiplier a134 respectively acquire the interference coefficient filter F and the linear filter P from the filter calculator a31. (Step S2) The non-linear precoder a33 substitutes 1 for n, where n represents the number of the mobile-station device whose transmit signal is being calculated. (Step S3) The non-linear precoder a33 substitutes the specific signal s1 addressed to a mobile-station device B31 for v1. (Step S4) The non-linear precoder a33 substitutes n+1 for n. In other words, n=2. (Step S5) The interference calculator a131 calculates an interference signal f2 received by a mobile-station device B32, using v1, in accordance with the following Expression.
f2 = F(2,1)·v1  (3-6)
where F(2,1) represents the component at the second row and the first column of the matrix F. (Step S6) An interference subtracter a133-2 subtracts f2 from the specific signal s2 addressed to the mobile-station device B32, thereby calculating the signal s2-f2. (Step S7) A first modulo calculator a132-2 performs the modulo operation on s2-f2, thereby calculating a signal v2. (Step S8) Since n=2 (<N), the process for the signal addressed to the next mobile-station device B33 is performed starting with step S4 (steps S4 through S7). The operations in steps S4 through S8 are repeated until n=N. For example, the calculation of the signal addressed to an n-th mobile-station device B3n is described. (Step S4) The non-linear precoder a33 increments the value of n by 1. (Step S5) The interference calculator a131 calculates an interference signal fn received by the mobile-station device B3n using v1 through v(n-1) in accordance with the following Expression.
fn = F(n,1:n-1)·[v1, v2, . . . , v(n-1)]  (3-7)
where F(n,1:n-1) represents a row vector consisting of the components at the first through (n-1)-th columns of the n-th row of the matrix F. (Step S6) An n-th interference subtracter a133-n subtracts fn from the specific signal sn addressed to the n-th mobile-station device B3n, thereby calculating sn-fn. (Step S7) A modulo calculator a132-n performs the modulo operation on sn-fn, thereby calculating a signal vn. The modulo operation reduces the transmit signal power addressed to each mobile-station device. (Step S8) If n<N, the non-linear precoder a33 performs the operation in step S4 again. If n=N, processing proceeds to step S9. (Step S9) Let x represent the signal resulting from multiplying the signal v=(v1, v2, . . . , vN) by the linear filter P. As in the first embodiment, the components of the signal x are the transmit signals transmitted via the antennas a101-1 through a101-N, respectively. The signal x is input to the frame constructor a142. Like VP, THP can also be considered to be a perturbation vector search algorithm, as described below. However, THP is not an algorithm that searches for the truly optimal perturbation vector but one that searches for a quasi-optimal perturbation vector using a smaller amount of calculation than VP. Suppose now that the non-linear precoder a33 performs the signal calculation on the assumption that it includes none of the modulo calculators a132-1 through a132-N. If the power normalization coefficient is 1, the non-linear precoder a33 performs the following calculation.
x = P{I+F}^(-1)·s  (3-8)
If Expressions (3-2) and (3-3) are substituted for P and F,
x = QA^(-1){I+(R^H·A^(-1)-I)}^(-1)·s = QA^(-1)(R^H·A^(-1))^(-1)·s = Q(R^H)^(-1)·s = (R^H·Q^H)^(-1)·s = {(QR)^H}^(-1)·s = {(H^H)^H}^(-1)·s = H^(-1)·s = Ws  (3-9)
This may be interpreted to mean that (z1, z2)=(0, 0) is substituted in Expression (1-2). The modulo calculators a132-1 through a132-N add, to the I-ch and Q-ch of the specific signal s, a signal of an integer multiple of the modulo width. For this reason, the signal x calculated by the non-linear precoder a33 of the present embodiment is also expressed by the expression x=W(s+z1τ+iz2τ). The signal x calculated by the non-linear precoder a33 is not necessarily the signal, expressed by Expression (1-2), with the optimal perturbation vector added thereto, but is a quasi-optimal signal x, because power control through the modulo operation is performed on it: with the modulo operation, the norm definitely becomes small in comparison with that of Ws. The signal x is calculated in accordance with the above-described algorithm. Unlike VP in the first embodiment, searching through all the candidate perturbation vectors becomes unnecessary, and the amount of calculation is thus reduced. In this way, this arrangement minimizes the increase in the overhead involved in the DMRS insertion by spatially multiplexing the DMRSs in the NLP MU-MIMO system, while the mobile-station device can still normally estimate the complex gain of the data signal. The quasi-optimal perturbation vector search algorithm using THP has been discussed; the same holds true for THP using ordering, or for LR-THP. The present invention is not limited to the configurations and the like illustrated in the accompanying drawings of the embodiments.
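Two of the claims above lend themselves to a quick numerical check: with the modulo calculators removed, x = P(I+F)^(-1)s collapses to H^(-1)s = Ws as in Expression (3-9), while with the modulo each receiver still recovers its signal after its own modulo operation, since x = W(s+z1τ+iz2τ); and the quoted post-modulo powers P_v follow from the uniform distribution over the modulo region. The sketch below builds a two-user example from a hand-chosen unitary Q and upper-triangular R, so H^H = QR holds by construction; the power normalization coefficient is taken as 1, and all concrete numbers and names are illustrative assumptions, not taken from the patent:

```python
# Illustrative 2-user THP example, pure Python complex arithmetic.
TAU = 2 * 2 ** 0.5                           # QPSK modulo width

def modulo(x, tau=TAU):
    """Per-component modulo mapping I and Q into [-tau/2, tau/2)."""
    wrap = lambda u: u - tau * ((u + tau / 2) // tau)
    return complex(wrap(x.real), wrap(x.imag))

c = 2 ** -0.5
Q = [[c, c], [-c, c]]                        # unitary (real orthogonal)
R = [[2, 1 + 1j], [0, 1.5]]                  # upper triangular
A = [2.0, 1.5]                               # diagonal components of R^H
P = [[Q[i][j] / A[j] for j in range(2)] for i in range(2)]        # (3-2)
F21 = R[0][1].conjugate() / A[0]             # only nonzero entry of F (3-3)
QR = [[sum(Q[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
H = [[QR[j][i].conjugate() for j in range(2)] for i in range(2)]  # H=(QR)^H

def thp_encode(s1, s2, use_modulo=True):
    """Steps S2-S9: successive interference subtraction (+ modulo), then P."""
    v1 = s1
    v2 = s2 - F21 * v1                       # (3-6), step S6
    if use_modulo:
        v2 = modulo(v2)                      # step S7
    return [P[i][0] * v1 + P[i][1] * v2 for i in range(2)]        # step S9

def receive(x):
    """Noiseless channel: y_n = sum_j H[n][j] x_j."""
    return [sum(H[i][j] * x[j] for j in range(2)) for i in range(2)]

s1, s2 = 1 + 1j, -1 + 1j                     # QPSK specific signals
y = receive(thp_encode(s1, s2))              # with modulo: s_n plus a
                                             # lattice point z1*TAU + i*z2*TAU
y_lin = receive(thp_encode(s1, s2, use_modulo=False))   # x = H^(-1) s

# Side check of the stated post-modulo powers: a uniform distribution over
# the complex modulo region of width tau has mean power 2 * tau^2 / 12.
pv_qpsk = (2 * 2 ** 0.5) ** 2 / 6            # 4/3
pv_16qam = (8 / 10 ** 0.5) ** 2 / 6          # 16/15
pv_64qam = (16 / 42 ** 0.5) ** 2 / 6         # 64/63
```

In this example the modulo actually triggers for user 2, yet applying the receiver's own modulo operation to y[1] restores s2, illustrating the x = W(s+z1τ+iz2τ) interpretation.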
The configurations and the like may be appropriately modified, without departing from the scope of the present invention, within the scope where the effects of the present invention are provided. A program to perform the functions described with reference to the embodiments may be recorded on a computer-readable recording medium. The program recorded on the recording medium may be installed onto a computer system, and the computer system may execute the program; the process of each element is thus performed. The term "computer system" here includes an OS and hardware such as peripheral devices. The "computer system" also includes a homepage providing environment (or display environment) if a WWW system is used. The term "computer-readable recording medium" refers to a portable medium, such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a recording device, such as a hard disk, built into the computer system. The "computer-readable recording medium" may also include a communication line that dynamically holds the program for a short period of time, the communication line transmitting the program via a communication channel such as a network like the Internet or a telephone line, and may further include a volatile memory in the computer system serving as a server or a client that stores the program for a predetermined period of time. The program may implement part of the above-described functions, and that part may be used in combination with a program previously recorded on the computer system.
INDUSTRIAL APPLICABILITY
[0225] The present invention finds applications in communication apparatuses.
REFERENCE SIGNS LIST
[0226] A1 . . . base-station apparatus, B11-B14 . . . mobile-station devices, a11 . . . filter calculator, a13 . . . non-linear precoder, a31 . . . filter calculator, a33 . . . non-linear precoder, a101 . . . antenna, a102 . . . receiver, a103 . . . GI remover, a104 . . . FFT unit, a105 . . . channel state information acquisition unit, a121 . . . encoder, a122 . . . modulator, a124 . . . DMRS generator, a125 . . . specific signal constructor, a133 . . . interference subtracter, a132 . . . modulo calculator, a134 . . . linear filter multiplier, a141 . . . CRS generator, a142 . . . frame constructor, a143 . . . IFFT unit, a144 . . . GI inserter, a145 . . . transmitter, a226 . . . DMRS corrector, b12 . . . DMRS channel estimator, b101 . . . antenna, b102 . . . receiver, b103 . . . GI remover, b104 . . . FFT unit, b105 . . . signal separator, b106 . . . channel compensator, b107 . . . CRS channel estimator, b108 . . . channel state information generator, b109 . . . modulo calculator, b110 . . . demodulator, b111 . . . decoder, b121 . . . perturbation vector candidate selector, b122 . . . perturbation vector adder, b123 . . . interim channel estimating unit, b124 . . . channel compensating unit, b125 . . . modulo calculator, b126 . . . demodulator, b127 . . . perturbation vector evaluation value calculator, b128 . . . perturbation vector estimating unit, b131 . . . IFFT unit, b132 . . . GI inserter, b133 . . . transmitter, 300 . . . two-dimensional Euclidean algorithm unit, 301 . . . vector storage unit, 303 . . . difference vector calculator, 305 . . . difference vector norm calculator, 307 . . . convergence determining unit, 309 . . . norm calculator, 320 . . . irreducible vector verification unit, 340 . . . perturbation vector adder, 350 . . . complex gain calculator, 351 . . . interim complex gain calculator, 352 . . . DMRS channel compensator, 353 . . . DMRS-modulo unit, 354 . . . vector divider
All the publications, patents, and patent applications cited in this description are incorporated by reference herein in their entirety.
Inventors: Hiromichi Tomeba, Osaka-shi, JP; Hiroshi Nakano, Osaka-shi, JP; Takashi Onodera, Osaka-shi, JP. Assignee: Sharp Kabushiki Kaisha. Class: Having both time and frequency assignment.
By: SharpBrains
The Power of Nothing: Could studying the placebo effect change the way we think about medicine? (The New Yorker): "For years, Ted Kaptchuk performed acupuncture at a tiny clinic in Cambridge, a few miles from his current office, at the Harvard Medical School. He opened for business in 1976, having just returned from Asia, where he had spent four years honing his craft. Not long after he arrived in Boston, he treated an Armenian woman for chronic bronchitis. A few weeks later, the woman returned with her husband and told Kaptchuk that he had "cured" her."
Jan 10, 2008
Cognitive Training Clinical Trial: Seeking Older Adults
By: Alvaro Fernandez
Yaakov Stern (on the Cognitive Reserve) have asked for help in recruiting volunteers for an exciting clinical trial. If you are based in New York City, and between the ages of 60 and 75, please consider joining this study. More information below:
Use it or Lose it? Train your Brain!
Healthy adults between the ages of 60 and 75 living in NYC are invited to join a study of mental fitness training. Qualified individuals will play a scientifically-based video game in our laboratory, and will be tested to determine the effects on attention, memory, and cognitive performance. You will earn up to $600 plus transportation costs if you complete the 3-month program. This exciting study is being performed by the Cognitive Neuroscience Division of the Sergievsky Center at Columbia University Medical Center. If interested, contact us today:
Jun 10, 2007
Sunday Afternoon Quiz
By: Caroline Latham
Here's a quick quiz to test your memory and thinking skills which should work out your temporal and frontal lobes. See how you do!
1. Name the one sport in which neither the spectators nor the participants know the score or the leader until the contest ends.
2.
What famous North American landmark is constantly moving backward?
3. Of all vegetables, only two can live to produce on their own for several growing seasons. All other vegetables must be replanted every year. What are the only two perennial vegetables?
4. What fruit has its seeds on the outside?
5. In many liquor stores, you can buy pear brandy, with a real pear inside the bottle. The pear is whole and ripe, and the bottle is genuine; it hasn't been cut in any way. How did the pear get inside the bottle?
6. Only three words in Standard English begin with the letters "dw" and they are all common words. Name two of them.
7. There are 14 punctuation marks in English grammar. Can you name at least half of them?
8. Name the one vegetable or fruit that is never sold frozen, canned, processed, cooked, or in any other form except fresh.
9. Name 6 or more things that you can wear on your feet beginning with the letter "S."
Answers To Quiz:
1. The one sport in which neither the spectators, nor the participants, know the score or the leader until the contest ends: boxing
2. The North American landmark constantly moving backward: Niagara Falls (the rim is worn down about two and a half feet each year because of the millions of gallons of water that rush over it every minute.)
3. Only two vegetables that can live to produce on their own for several growing seasons: asparagus and rhubarb.
4. The fruit with its seeds on the outside: strawberry.
5. How did the pear get inside the brandy bottle? It grew inside the bottle. (The bottles are placed over pear buds when they are small and are wired in place on the tree. The bottle is left in place for the entire growing season. When the pears are ripe, they are snipped off at the stems.)
6. Three English words beginning with "dw": dwarf, dwell, and dwindle.
7.
Fourteen punctuation marks in English grammar: period, comma, colon, semicolon, dash, hyphen, apostrophe, question mark, exclamation point, quotation marks, brackets, parenthesis, braces, and ellipses.
8. The only vegetable or fruit never sold frozen, canned, processed, cooked, or in any other form but fresh: lettuce.
9. Six or more things you can wear on your feet beginning with "s": shoes, socks, sandals, sneakers, slippers, skis, skates, snowshoes, stockings, stilts.
PS: Enjoy these 50 brain teasers to test your cognitive ability. Free, and fun for adults of any age!
Jun 1, 2007
Executive Function Workout
By: Caroline Latham
Here is a new brain teaser from puzzle master Wes Carroll.
The Fork in the Road
Start at the center number and collect another four numbers by following the paths shown (and not going backwards). Add the five numbers together. What is the lowest number you can score? This puzzle works your executive functions in your frontal lobes by using your planning skills, hypothesis testing, and logic.
PS: Enjoy these 50 brain teasers to test your cognitive ability. Free, and fun for adults of any age!
May 8, 2007
Math Brain Teaser: Concentric Shapes or The Unkindest Cut of All, Part 2 of 2
By: Caroline Latham
If you missed Part 1, also written by puzzle master Wes Carroll, you can start there and then come back here to Part 2.
Concentric Shapes: The Unkindest Cut of All, Part 2 of 2
Difficulty: HARDER
Type: MATH (Spatial)
Imagine a square within a circle within a square. The circle just grazes each square at exactly four points. Find the ratio of the area of the larger square to the smaller. In this puzzle you are working out many of the same skills as in Part 1: spatial visualization (occipital lobes), memory (temporal lobes), logic (frontal lobes), planning (frontal lobes), and hypothesis generation (frontal lobes).
Answer: Two to one.
Draw the smaller square's diagonal to see that the smaller square's diagonal is the diameter of the circle. Divide the larger square into two equal rectangular halves to see that the larger square's side is also the diameter of the circle. This means that the smaller square's diagonal equals the larger square's side. (Or, if you prefer, simply rotate the inner square by 45 degrees.) As we've seen in the earlier puzzle "The Unkindest Cut Of All," the area of the smaller square is half that of the larger, making the ratio two to one.
PS: Enjoy these 50 brain teasers to test your cognitive ability. Free, and fun for adults of any age!
May 4, 2007
Take the Senses Challenge
By: Caroline Latham
This is a very fun link to a series of 20 timed puzzles put together by the BBC. It should take you about 10 minutes or less to complete. --> Take THE SENSES CHALLENGE (BBC). You also might enjoy their Interactive Brain, which allows you to explore both the structure and function of your brain. The functions will help you learn what areas of your brain you are exercising when you do or feel certain things. They map out for you: anger, consciousness, disgust, happiness, language understanding, movement, self awareness, smell, taste, touch, breathing, coordination, fight or flight, hearing, long-term episodic memory, sadness, self control, speech production, thirst and hunger, and vision.
PS: Enjoy these 50 brain teasers to test your cognitive ability. Free, and fun for adults of any age!
Apr 27, 2007
Math Brain Teaser: The Unkindest Cut of All, Part 1 of 2
By: Caroline Latham
In honor of Mathematics Awareness Month, here is another mathematical brain bender from puzzle master Wes Carroll.
The Unkindest Cut of All, Part 1 of 2
Difficulty: HARD
Type: MATH (Spatial)
The area of a square is equal to the square of the length of one side. So, for example, a square with side length 3 has area (3^2), or 9.
What is the area of a square whose diagonal is length 5? In this puzzle you are working out your spatial visualization (occipital lobes), memory (temporal lobes), and hypothesis generation (frontal lobes). I am especially fond of these two ways to solve this problem:
1. Draw the right triangle whose hypotenuse is the square's diagonal, and whose two legs are two sides of the square. Then use the Pythagorean Theorem (a^2 + b^2 = c^2) to solve for the length of each side. Since two sides are equal, we get (a^2 + a^2 = c^2), or (2(a^2) = c^2). Since c is 5, 2(a^2) = 25, making a^2 equal to 25/2, or 12.5. Since the area of the square is a^2, we're done: it's 12.5.
2. Tilt the square 45 degrees and draw a square around it such that the corners of the original square just touch the middles of the sides of the new, larger square. The new square has sides each 5 units long (the diagonal of the smaller square), and it therefore has area 25. However, a closer inspection reveals that the area of the larger square must be exactly twice that of the smaller. Therefore the smaller square has area 25/2, or 12.5. You can now go on to Concentric Shapes: The Unkindest Cut of All, Part 2 of 2
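Both answers can be verified in a few lines; the helpers below merely restate the Pythagorean argument and are illustrative, not part of the original puzzles:

```python
# Area of a square from its diagonal d: since d^2 = a^2 + a^2 = 2*a^2,
# the area a^2 equals d^2 / 2.
def square_area_from_diagonal(d):
    return d ** 2 / 2

area = square_area_from_diagonal(5)          # the Part 1 puzzle

# Part 2: square inside a circle inside a square. The inner square's
# diagonal and the outer square's side both equal the circle's diameter.
def outer_to_inner_ratio(diameter):
    outer = diameter ** 2                    # outer side = diameter
    inner = square_area_from_diagonal(diameter)
    return outer / inner

ratio = outer_to_inner_ratio(7.0)            # any diameter gives the same ratio
```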
Logo from the Kindergarten to the University Level
Dr Károly Farkas, Head of Mathematics and Informatics Department, College of Szolnok, Hungary-5000 Szolnok, Ady E. street 9.
Ildikó Tasnádi, Informatics teacher, Hungary-1011 Budapest, Hunyadi János u. 3. itasnadi@hotmail.com
Éva Törtely, Hungary-1126 Budapest, Márvány u. 27.
Keywords: Structured thinking, problem solving, Tangram, random
Theme: Classroom practices
1. Introduction
Prof. Seymour Papert said in his book "The Children's Machine" that the children who learn nowadays are the computer generation (Papert S., 1993). This is even truer today, when the date begins with the number two in the European calendar. The skills and abilities of children considered to be among the most important are: preparedness to solve new problems, sensitivity to problems, creativity and imagination, algorithmic skills, the ability to persuade others, short-time visual and verbal memory, directed attention, and concentrated attention. On the basis of our experiments conducted over the past twenty years in Hungary, we are convinced that these skills and abilities can be successfully developed and perfected using the pedagogy developed by Papert, using the Logo programming language and Logo-pedagogy. The computer is only one tool, and it is not the most important one. We have tried using several other kinds of information technologies. Computers and the other tools are only objects. Even in the age of informatics the child, as the learner, is the most important. We prefer problems that can be solved in various ways, especially when manipulation is involved in those different ways. We support teaching with trial and response, and individual study with multiple methods. This thesis is important both in early childhood and later. We generally use the traditional and computerised methods of learning together, in parallel. The basis of our pedagogical research is improving while preserving previous values.
There are didactic activities, even today in the age of the computer, that may or should be played without a computer, with the computer used only for repetition and variation. In our view, Logo-pedagogy is more general than the computer. We accepted the advice of Papert: the turtle and its world are only one model among the possible microworlds (Papert, 1980). We are looking for methods and devices with which we can imitate and support the method of the turtle. For that reason we use several team activity games, role-playing and manual activity games, which are very useful for introducing the concepts of informatics. In the next two paragraphs we discuss the didactic line of our methods; later we discuss two elements of a method which are new in our concept. The use of Tangram in developing structured thinking is our example for it; in this case the computer is only of little consequence. On the other hand, there are many fields of teaching where using the computer is the optimal method. The demonstration of probability by the wandering of turtles is a good example of this.
2. The correct line of using our methodical elements
"One of the biggest advantages of Logo is that it can suit all ages." (Calabrese E., 1989). It is really true. This is the reason why we use Logo from the kindergarten to the university level. But of course the method, the way of using it, is not the same at different ages. There is an optimal order from our own body to the computer.
2.1. Robot Games
Playing games is necessary at every stage of human life. Playing makes learning more enjoyable and successful. We start to work with the children's own bodies, because knowledge about themselves is necessary before they can manipulate things outside themselves. Both teacher and children act as robots, which are given commands how to move. To introduce students to Logo, we developed "robot games" to help them act out what they will do later by using computers.
Seymour Papert says that the ultimate means of motivation for a child is the turtle, but we believe that students are more comfortable with the idea of a robot: a real turtle would think about obeying our orders, while a robot is always faithful to our commands. We also think it important to "detach" young children from the computer screen. Depending on the age of the learner, we give verbal instructions, first in Hungarian and later in another language. At first we use vocabulary from natural language: walk, turn, and so on; then we gradually begin to apply formal commands, slowly approaching Logo's syntax and semantics. Children play at being the robot, performing each movement on command. Before beginning the robot game we discuss the size of the steps to be taken and the unit of a turn. A step is usually a foot, and the unit of a turn for small children is a quarter turn; later, at the end of the first class or in the second class, it is 1/12 of a circle, which we call "one slice of cake". At first we ask children to enact only the basic Logo primitives: FD, RT, BK, LT. Later we work on the interpretation of results. For example, PD means that students trace the path with chalk, or use a branch to draw the path on the ground in the playground. Students create the equivalent of the HT primitive by crouching, so that the hidden robots cannot be seen between the benches. The wall of the classroom or the edge of the playground can serve as the frame of the computer screen when students learn to interpret commands such as FENCE, WINDOW, WRAP, and so on. Later, with older students, when the problems encountered are difficult, we still have the students use their own bodies to aid understanding.

2.2. Feed the Turtle

At the first level of this game, in our pedagogy, we usually play "Feed the Turtle" in the kindergarten: the other children direct the child who plays the turtle to the "feeding place".
They direct the child turtle by means of different Logo commands. The game provides very useful training in algorithms, as well as in the sense of position and direction. At the next level of the game, in the primary school, students move figures they have made, based on a model: a Turtlegarden. Later they use graph paper, abandoning the real-world game and taking a decisive step from the real to the abstract. The students then copy the trail of the turtle into a copybook. After practising with the Turtlegarden and graph paper, they start using a computer that has some model of the Turtlegarden they have been using in the game.

2.3. Match-Logo

In this game students operate a vector in a plane. They direct a "robot" along imaginary swampy ground, but the robot is able to proceed only after they have "paved" its way with arrows they have made themselves from matches or some other form of vector marker. The robot can only move in the direction of the "paving".

2.4. Brick-Logo

Brick-Logo is our name for the well-known LEGO-Logo. Brick-Logo is a pseudo-language that uses Logo commands to describe a three-dimensional turtle graphic. Students use Brick-Logo programming commands to move real objects in real space, which helps develop their dexterity, visual response time, spatial reasoning and algorithmic ability. The special advantage of the game is that it involves real three-dimensional space; for example, it is a very easy way to explain the laws of symmetry to students. One recent result of our research is that Brick-Logo can be used in higher education as well, where it is an excellent test of readiness to learn. These elements (Robot Games, Feed the Turtle, Match-Logo, Brick-Logo) and the others in our repertoire (different types of floor turtles, robots, models, logodrio, Rubik's cube, speed reading, and Logo itself) are only some examples.
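The kind of command-following that these games rehearse can be sketched as a tiny interpreter. The sketch below is hypothetical (ours, not part of the authors' materials) and uses Python rather than Logo: it models a child-turtle on a grid who understands only FD, BK, RT and LT, with the quarter turn as the unit of turning used with small children.

```python
# A minimal "robot game" interpreter: a child-turtle on a grid obeying
# FD/BK/RT/LT, with the quarter turn as the unit of turning.
# (Hypothetical sketch; the games in the paper are played with bodies
# and toys, not code.)

# Headings cycle N -> E -> S -> W; each heading is a (dx, dy) step.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def run(commands, x=0, y=0, heading=0):
    """Execute a list like ["FD 2", "RT 1", "FD 1"] and return the state."""
    for cmd in commands:
        op, n = cmd.split()
        n = int(n)
        if op == "FD":
            dx, dy = HEADINGS[heading]
            x, y = x + n * dx, y + n * dy
        elif op == "BK":
            dx, dy = HEADINGS[heading]
            x, y = x - n * dx, y - n * dy
        elif op == "RT":            # n quarter turns clockwise
            heading = (heading + n) % 4
        elif op == "LT":            # n quarter turns anticlockwise
            heading = (heading - n) % 4
        else:
            raise ValueError("unknown command: " + cmd)
    return x, y, heading

# Guide the "turtle" two steps north, then one step east to the feeding place.
print(run(["FD 2", "RT 1", "FD 1"]))   # -> (1, 2, 1)
```

On paper or playground the children themselves are the interpreter; the point of the sketch is only that each command changes a small, fully visible state (position and heading).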
Many of them are well known to the Logo community, and many other colleagues use similar methodological elements in their own teaching. We have reported on these several times: Farkas - Kőrösné (1989, 1991) and Farkas (1994, 1996); see the references. These activities take a large number of forms, depending on the age of the student. We are always looking for new objects that are comfortable to use as Logo microworlds.

3. Geometrical games - structured thinking

We hold the view that children in public education should not be taught programming; they should be taught structured thinking with the help of various games. In elementary school (and even more so in the higher years of primary school) there is less and less time left for playing, as there are few lessons and a lot has to be taught: "useful" things have to be hammered into the schoolchildren's heads. In this way the very exercises (the more entertaining ones) that could convince children that mathematics, geometry or informatics can be interesting are left out. The general experience is that most pupils view an exercise as an indivisible whole and are unable to find a solution at once. Then they categorically state "I can't do that", and after this they really are unable to solve the problem. From the beginning of teaching information science, resourceful problem-solving thinking has to be emphasised. "Great inventions solve great problems, but even the smallest problem requires some little invention. The problem you are thinking about may be simple, but if it arouses your interest, activates your inventiveness, and finally if you manage to solve it on your own, you experience the thrill and triumph of discovery. ... What is realised of it greatly depends on the mathematics teacher.
If the mathematics teacher spends most of his/her time making the pupils solve routine problems, he/she kills their interest, hampers their mental development, and misuses the favourable conditions. He has to realise that solving a mathematics problem can be as enjoyable as doing a crossword puzzle, and that a strenuous mental exercise can be as good as a game of tennis." Pólya Gy. (1957). György Pólya's book was primarily written for mathematics teachers and students of mathematics, but his words are worth bearing in mind for every teacher, and are highly recommended to teachers of information science. And indeed, if when teaching Logo we merely keep saying, for example, "repeat it four times: fifty forward, ninety to the right", then the joy of creation is lost. If Michelangelo carves chair legs, he is no more than a craftsman. We have to set a goal that can be achieved, and rely on our knowledge in carrying out the task. Suppose we would like to make the turtle draw the picture of a beautiful castle. At first sight it seems a very complicated task, but if we analyse the task and divide it into parts, it no longer seems unrealisable. We set about the task with pupils in the fourth form. We named the picture "The Enchanted Castle", and in connection with it we had a chat about the amusement park and their holidays abroad. A lesson in information science is much more than teaching computer science only! While analysing the picture, we were "learning" new words: bastion, round bastion, citadel, and we formed the following structure:

Enchanted Castle: Sun (Moon), Birds (Bats), Castle, Wall

It was at the fifth level of this structure that we reached the known geometric figures and curves. Then we only had to join them properly, and the picture was ready.

Figure 1: Castle

Similar structures can be realised with the well-known jigsaw puzzle called Tangram. Since the ancient Greeks, many scientists have been interested in the problem of cutting and joining figures.
The statement of the well-known Hungarian mathematician Farkas Bolyai (father of János Bolyai, who constructed non-Euclidean geometry) is famous in geometry: if the areas of two polygons are equal, one of them can always be divided into a finite number of polygons out of which the other one can be assembled. An internationally significant mathematical achievement of recent years is connected with this topic and is associated with the name of a Hungarian mathematician: Miklós Laczkovich solved a problem that had stood open for over sixty years, proving that a circle can be divided into a finite number of pieces which can be rearranged into a square whose area is the same as that of the circle. Of course, the pieces cannot be cut out of paper, as the transformation of the circle into a square is impossible using pieces with ordinary edges.

Figure 2: picture of Tangram

Tangram is one of the most popular jigsaw puzzles. The aim of the game is to form various patterns: geometrical figures, animals, people, objects. Certain collections contain more than 1000 pictures, mostly figurative puzzles. Everything starts with the division of a square into seven pieces. We cut out the seven pieces with the children from drawing paper, with the help of a plastic model.

Figure 3: Tangram set

The "teacher's set" is made of coloured cardboard. Tangram is very interesting from the point of view of geometry, but smaller children take more delight in assembling the various figures. Many of them are real works of art, although the artistic value comes at the expense of the puzzle (the solution is too obvious). In information science lessons it is not the level of difficulty that matters, but the structure of the pictures. Of course, we started from reality here, too: if possible, show your pupils the original version of the object, or a picture of it. One of our best lessons was when we prepared the picture of a candlestick. During the introductory conversation we spoke about candles and holidays by candlelight.
On such occasions it strikes you that some of the children are real wits. We made a picture of the candlestick. In our presentation we are going to show our Tangram programme written in MicroWorlds. In our opinion, one of the best versions of Logo today is MicroWorlds; we use MicroWorlds Pro.

4. The demonstration of probability by the wandering of turtles

The concepts of chance and probability are often unclear even for students in higher education. Demonstration and first-hand experience may be helpful here, too, and the computer, namely the Logo programming language, can play a significant part in it. If you ask a student to write down ten one-digit numbers at random, the series of numbers is something like this: 1 7 4 5 2 8 3 9 6 1. That is, the less experienced student tries to produce a random-looking sequence, but uses each figure with roughly the same frequency. By contrast, a call of the command repeat 10 [pr 1 + random 10] gives, for example: 7 9 1 9 1 3 6 2 3 4. The wandering of turtles demonstrates the role of chance very well. We show some algorithms:

to butterfly
fd 80 rt 60 + random 90
wait 2
butterfly
end

to cat
fd 5 rt random 10
wait 1
cat
end

to formica
fd 20 rt -10 + random 360
wait 2
formica
end

to boxes
repeat 4 [fd 120 rt 90]
rt random 360
wait 2
boxes
end

to Mondrian
setpensize 1 + random 10
setc random 600
fd random 100
rt 90
Mondrian
end

to trend
repeat 1000 [fd 3 rt 90 fd random 10 lt 90 fd 3 ... ... lt 90 fd random 10 rt 90]
end

to trend2 :d :k
fd :d rt 90 fd random :k lt 90
fd :d lt 90 fd random :k rt 90
trend2 :d :k
end

The turtle helps us to understand and to feel randomness. These algorithms generate curves that dynamically show the regularity and irregularity within the random. Later, in higher education, you may demonstrate chance as Futschek does, with the procedure MOVE. Futschek G.
(1993). In higher education we model randomness with the famous example of the binary tree (adapted from Abelson, 1982). The following recursive algorithm is well known in the Logo community:

to tree :a :n
if :n = 0 [stop]
fd :a / 2
lt 45
tree :a / 2 :n - 1
rt 90
tree :a / 2 :n - 1
lt 45
bk :a / 2
end

The recursive description of the tree is a V-shape with a smaller tree at each tip; each smaller tree is a V-shape with a still smaller tree at each of its tips, and so on. The algorithm, and the curve it draws, need not be symmetrical: the right and left sides can be scaled by a multiplier and its reciprocal (in the example it is 1.2). In nature, plants grow strongly towards the light.

to tree2 :a :n
if :n = 0 [stop]
fd :a / 2
lt 45
tree2 1.2 * :a / 2 :n - 1
rt 90
tree2 (1 / 1.2) * :a / 2 :n - 1
lt 45
bk :a / 2
end

Besides the directional factor (in the next example it is 1.05), another factor can be made to depend on the random:

to tree3 :a :n
if :n = 0 [stop]
fd :a / 2
lt 45
tree3 1.05 * (.25 * (2 + random 4)) * :a / 2 :n - 1
rt 90
tree3 1 / 1.05 * (.25 * (2 + random 4)) * :a / 2 :n - 1
lt 45
bk :a / 2
end

Figure 4 shows some calls.

Figure 4: Binary random trees

5. References

Calabrese E. (1991) Marta - the "intelligent" turtle. In G. Schuyten & M. Valcke (eds.): Proceedings, Second European Logo Conference, Gent, 178-196.
Farkas K. - Kőrösné M. M. (1989) How we Teach Logo in a Budapest School. In G. Schuyten & M. Valcke (eds.): Proceedings, Second European Logo Conference, Gent, 439-442.
Farkas K. - Kőrösné M. M. (1991) Informatics Games for Developing Children's Thinking Ability. Children in the Informatics Age, IV. Conference, Albena.
Farkas K. (1994) Informatics Games for Developing Children's Thinking Ability. In D. C. Johnson & B. Samways (eds.): IFIP Transactions A-34, 87-94.
Farkas K. (1996) Robot Games and Turtle Roses: Teaching Logo in Hungary. Learning and Leading With Technology, April 1996, 62-64.
Futschek G.
(1993) Exploring Recursion: from Random Walks to the Design of Mazes. In P. Georgiadis, G. Gyftodimos, Y. Kotsanis, C. Kynigos (eds.): Logo-like Learning Environments: Reflection & Prospects, Doukas School, 106-112.
Papert S. (1980) Mindstorms: Children, Computers and Powerful Ideas. Basic Books.
Papert S. (1993) The Children's Machine. Basic Books, New York.
Pólya Gy. (1957) A gondolkodás iskolája (How to Solve It: A New Aspect of Mathematical Method). Bibliotheca, Budapest.

Dr. Károly Farkas is an educator with a strong background in engineering, a Doctor of Technical Sciences and a candidate of pedagogy. Having gained a decade of experience in industrial engineering, he took up his activities as an educator. Twenty years ago he started teaching informatics to minors, the first to do so in Hungary, within elementary-school tutoring. At present more than one hundred elementary schools in Hungary offer informatics studies, and their tutors have been integrated into the "Play-The-Game-Informatics" professional team, established and guided by the author. The author has written a number of books, handouts and other papers on this topic, aimed at disseminating his experience and helping both students and teachers. He takes an active part in international research in this field as a member of several research groups, and is the Hungarian representative on the EUROLOG Scientific Committee. In recent years he has led the Department of Mathematics and Informatics of the College of Szolnok.

Ildikó Tasnádi is a computer teacher. She joined the Playful Informatics group right at the start. She teaches students of different ages, and is currently preparing the Hungarian version of MicroWorlds Pro.

Éva Törtely is a teacher and an expert in the field of teaching informatics. She has been teaching for more than two decades, mostly informatics to very young children in elementary schools.
She was one of the first to start teaching and developing Playful Informatics; many games and didactic elements in Hungary are connected with her name.
Minors and Cofactors: Row Operations (page 3 of 3)

• Find the following determinant by expanding along the row or column of your choice:

No particular row or column looks any better (easier) to expand along than the others. I think I'll do some row- (and column-) operations first, to get some more zeroes in the determinant.

Ah! That's much nicer. Now I can expand along the first row or along the first or second column, and my computations will be much easier than if I'd stuck with the original determinant. I think I'll go down the first column. The only cofactor I actually need to compute is A[2,1]:

Then the value of the determinant is 15.

I mentioned that there are other ways to manipulate determinants. You can do the other row operations that you're used to, but they change the value of the determinant. The rules are:

If you interchange (switch) two rows (or columns) of a matrix A to get B, then det(A) = -det(B).

If you multiply a row (or column) of A by some value "k" to get B, then det(A) = (1/k)det(B).

In other words, if you multiply a row (or column) without adding it to another row (or column), then you have to keep track of that multiplier, and divide it off in the end. And if you switch two rows (or columns), you have to change the sign on the answer in the end. The trouble with these rules is that it's easy to lose track of what you multiplied by and how many times you've switched rows, which is why I try to stay away from those operations. But if you're careful, these other operations can be handy. Returning to that last determinant (above), look at these row switches:

Copyright © Elizabeth Stapel 2006-2011 All Rights Reserved

I did two row switches, so I've got two sign changes, which takes me right back to the original sign. And now, expanding down the first column, I get:

Did you see that?
By manipulating the determinant into triangular form, the value of the determinant turned out to be the product of the values on the diagonal (times the two sign changes, which didn't actually change anything). Multiplying along the diagonal is much simpler than doing all the minors and cofactors. Given the opportunity, it is almost always better to do row operations and only then do the "expansion". Unless you have an instructor who absolutely insists that you expand determinants in their original form, try to do some row (and column) operations first. If you're having to do determinants by hand, doing operations first will make your life a little less messy.

We've already seen some determinant rules. Two more are as follows:

For matrices A and B, det(AB) = det(A)det(B).

If A is n-by-n, then det(kA) = k^n det(A).

We have also seen that the determinant of a triangular matrix C is just the product of the elements on the diagonal. This tells us that the determinant of the identity matrix I is det(I) = 1. And this leads to a sometimes-useful result: Any invertible matrix A has an inverse matrix A^(-1) such that (A)(A^(-1)) = (A^(-1))(A) = I. Since det(AA^(-1)) = det(I) = 1, then, whatever the value of det(A) is, the value of det(A^(-1)) must be the reciprocal of det(A). For instance, if det(A) = 2/3, then det(A^(-1)) = 3/2, so det(AA^(-1)) = det(A)det(A^(-1)) = (2/3)(3/2) = 1 = det(I).

But what if A isn't invertible? Think about it: What number has no reciprocal? The number zero. The only value of det(A) which won't allow A to be invertible is zero. From this we get the following rule: If a matrix A has no inverse, then det(A) = 0, and vice versa. You may be asked at some point to "determine if the following matrix is invertible by using determinants". The question will be asking you to remember the above rule, and to see if the determinant is zero or not. You will not be expected actually to find the inverse.
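These rules are easy to check numerically. The sketch below is an addition of ours, not part of the original page; it uses NumPy (an assumption, since the article is calculator-oriented) to verify the row-swap sign change, the row-scaling rule, the product and scalar rules, the triangular-diagonal shortcut, and the inverse-reciprocal rule on a small example matrix.

```python
import numpy as np

# A small invertible 3x3 matrix to test the determinant rules on.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

dA = np.linalg.det(A)

# Swapping two rows flips the sign: det(A) = -det(B).
A_swapped = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(A_swapped), -dA)

# Scaling one row by k multiplies the determinant by k,
# i.e. det(A) = (1/k) det(B) where B is A with a row scaled by k.
k = 5.0
A_scaled = A.copy()
A_scaled[0, :] *= k
assert np.isclose(dA, np.linalg.det(A_scaled) / k)

# Product rule: det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B), dA * np.linalg.det(B))

# Scalar rule: det(kA) = k^n det(A) for an n-by-n matrix.
n = A.shape[0]
assert np.isclose(np.linalg.det(k * A), k**n * dA)

# Triangular matrix: the determinant is the product of the diagonal.
T = np.triu(A)
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))

# Inverse rule: det(A^(-1)) = 1 / det(A).
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / dA)

print("all determinant rules verified")
```

Running a check like this on one concrete matrix is no substitute for the hand computations the page teaches, but it is a quick way to catch a misremembered sign or exponent.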
If you're not going much further in mathematics, you may be able to get away with having your calculator do most or all of your determinant computations for you. But if you're planning to go on (for instance, to differential equations or linear algebra), then make sure you learn how to do determinants by hand. You will almost certainly need the skill.

Cite this article as: Stapel, Elizabeth. "Minors and Cofactors: Row Operations." Purplemath. Available from http://www.purplemath.com/modules/minors3.htm. Accessed
Rain rules, okay!

Dr Srinivas Bhogle

Editor's note: On May 23, in London, an International Cricket Conference committee chaired by India's Sunil Gavaskar will examine and, where necessary, recalibrate the playing conditions that govern the game. The enhanced use of technology in umpiring will be taken up. So, too, will the mandatory use of lights to make up for lost time in Test cricket. Also up for review is the Duckworth/Lewis system governing the recalibration of runs and required run rates in rain-interrupted matches. That the D/L system has its advantages is universally accepted; that it occasionally throws up ridiculous situations is equally obvious. When the D/L system was questioned in the past, the argument in its favour was always TINA: There Is No Alternative. Well, now there is. An Indian, Jayadevan by name, has devised, with painstaking attention to detail, an alternative system. Various Indian umpires have looked at it and been impressed by it. Various former players of repute, including Gavaskar himself, have received the document. So we present it for you, in detail, with analysis and insight. Read it, compare it with the existing system, then let us know what you think. Who knows, this could be the cricketing law of the future...

Jayadevan's rain rules for LOI cricket

In January 2001, I received an e-mail from Frank Duckworth and Tony Lewis, who jointly developed the Duckworth/Lewis (D/L) method for resetting targets in limited-over international (LOI; earlier called ODI) cricket matches. "We hear that someone called Jayadevan from India is proposing an alternative to the D/L method," they wrote. "Do you know more about this method?" I didn't. But being Harsha Bhogle's brother has its advantages: I could, in a day or two, establish e-mail contact with V Jayadevan, who turned out to be a civil engineer from Thrissur in Kerala with a Master's degree in building technology from IIT, Madras.
Jayadevan (J) sent me a lot of material about his proposed method. Even a cursory glance at J's papers was enough to convince me that his method was truly promising. So we started corresponding by e-mail and even met one weekend in Bangalore. I also had the opportunity to discuss Jayadevan's proposed method with Duckworth and Lewis, and to play the role of moderator as J and D/L exchanged views. In this article, I will attempt a detailed introduction to Jayadevan's method. [J has recently revised his method significantly, to correct certain technical anomalies, many of them pointed out by D/L. He has also made a technical presentation to the BCCI on April 7, 2001. I understand that the BCCI has advised Jayadevan to communicate his method to the ICC.] In a subsequent article we will try to determine whether Jayadevan's method could replace the D/L method.

Introducing the Jayadevan method

Like the D/L method, the Jayadevan method recognises that a fair target must take into account information about overs remaining and wickets lost. But while Duckworth and Lewis responded to this requirement by building a sophisticated, and elegant, mathematical model, Jayadevan takes a practical engineer's approach. The Jayadevan method is essentially built around two curves. The first curve depicts the "normal" run-getting pattern, i.e. when there is no interruption and the side expects to bat its full quota of overs. The second curve (the "target" curve) indicates how the batting side should "speed up" after an interruption. The "normal" curve takes into account both the percentage of overs played and the percentage of wickets lost. The "target" curve, which is used to set revised targets, considers only the percentage of overs played. How does Jayadevan fabricate these two curves? The explanation is a trifle technical, but I would urge readers to read on, especially because Jayadevan has been quite clever.
But those of you who have no interest at all in such arguments can safely jump to the next section. To obtain the normal score curve, Jayadevan looks at samples of LOI match results, choosing in particular some close finishes. Instead of actual runs scored or wickets lost, he considers the corresponding percentage values. Jayadevan then considers the percentage of runs scored in seven phases: settling down (the first 10% of the overs, i.e. the first five overs), exploiting the field restrictions (the next 20%, overs 6-15), stabilising the innings I (the next 20%, overs 16-25), stabilising the innings II (the next 10%, overs 26-30), beginning the acceleration (the next 20%, overs 31-40), the secondary stage of acceleration (the next 10%, overs 41-45) and the final slog (the last 10%, overs 46-50). The normal score curve is then obtained by fitting a suitable regression equation for the cumulative overs % and the corresponding cumulative runs scored %.

How does Jayadevan obtain the target score curve? He looks at the percentage of runs scored in each of the seven scoring phases, arranges them in descending order of run productivity, and again obtains a regression fit for cumulative overs % versus the juxtaposed cumulative runs scored %. This clever ploy is reminiscent of the rationale behind the Australian "most productive overs" concept, which was a good idea with a flawed execution.

The Jayadevan look-up table

Like D/L, the Jayadevan method reads off values, to determine the victory target, from a table. The table has twelve columns: one for overs %, one for "target runs" %, and ten "normal runs" % columns corresponding to 0, 1, 2, ..., 9 wickets fallen (for uniformity, Jayadevan calls these 0%, 10%, 20%, ..., 90% of wickets fallen). For overs played before an interruption, we read off values from the normal runs % columns. After an interruption, we use values from the target runs % column.
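To make the table-reading procedure concrete, here is a small Python sketch of the two-spell calculation for a single interruption during Team 2's innings. This is our own reconstruction, not Jayadevan's code: the handful of table entries are values quoted in this article's worked examples, whereas a real implementation would hold the full J table (and interpolate between its rows).

```python
import math

# A sketch of Jayadevan's two-spell arithmetic (a reconstruction, not
# his code). Only table entries quoted in this article are included;
# the full J table covers every overs percentage.

# target% depends only on the percentage of overs.
TARGET_PCT = {80.0: 87.6, 66.7: 77.7, 60.0: 72.3, 50.0: 63.4, 33.3: 46.2}
# normal% depends on the overs percentage AND the wickets lost.
NORMAL_PCT = {(40.0, 0): 32.4, (40.0, 2): 36.5, (60.0, 9): 95.0}

def revised_target(team1_score, overs_played, total_overs, wickets,
                   overs_available, overs_remaining):
    """Team 2's winning target after a single mid-innings interruption.

    Team 2 batted "normally" for overs_played of total_overs, losing
    `wickets` wickets; after the break they may face overs_available of
    the overs_remaining overs that were left.
    """
    # Par score for the pre-interruption ("normal") spell.
    overs_pct = round(100.0 * overs_played / total_overs, 1)
    par1 = team1_score * NORMAL_PCT[(overs_pct, wickets)] / 100.0
    # Par score for the curtailed second spell uses the target curve.
    left_pct = round(100.0 * overs_available / overs_remaining, 1)
    par2 = (team1_score - par1) * TARGET_PCT[left_pct] / 100.0
    # Team 2 win by finishing just ahead of the combined par score.
    return math.floor(par1 + par2) + 1

# Example 2 below: 280 in 50 overs; Team 2 are 75/2 after 20 overs
# when 10 of the remaining 30 overs are lost.
print(revised_target(280, 20, 50, 2, 20, 30))   # -> 241
```

The split into par1 (normal curve, wickets-sensitive) and par2 (target curve, overs-only) is the whole of the method; everything else is table lookup.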
An extract from Jayadevan's full table appears below:

To get a feel for how Jayadevan calculates, here are two or three simple situations involving interruptions while Team 2 is batting:

Example 1: Team 1 score 280 runs in 50 overs. During the break, there is a shower and Team 2 can only bat for 40 overs. What is Team 2's target?

Following the interruption, Team 2 must bat for 40 overs out of 50, i.e. for 40/50 = 80% of the overs. Looking up the J table, we note that the "target run" % corresponding to 80% overs is 87.6%. So Team 2 must score 280 x 87.6% = 246 runs to win. [The corresponding D/L target is 253]

Example 2: Team 1 score 280 runs in 50 overs. Team 2 are 75/2 in 20 overs when there is an interruption which takes away 10 overs. What is Team 2's winning target in the 20+20=40 overs?

Team 2 have two batting spells. In the first spell of 20 out of 50 possible overs (20/50 = 40% overs) Team 2 bat "normally". Looking at the table, we note that the expected normal run % for 40% overs, with two wickets lost, is 36.5%. So, in this spell, Team 2 should have reached a score of 280 x 36.5% = 102.20. After the interruption, Team 2 can bat for only 20 out of the remaining 30 overs, so their winning target must be reduced. To reset the target, look up the target run % corresponding to 20/30 = 66.7% overs. The tabulated value is 77.7% (read off from the full J table). To stay on course, Team 2 must score 77.7% of the remaining (280-102.20) runs, i.e. (280-102.20) x 77.7% = 138.15, in their 20 overs. Team 2's winning target in 20+20 overs is therefore 102.20 + 138.15 = 241. [The corresponding D/L target is 246]

Example 3: Team 1 score 280 runs in 50 overs. Team 2 are 75/2 in 20 overs when there is an interruption which takes away 20 overs. What is Team 2's winning target in the 20+10=30 overs?

Once again, Team 2 must bat in two spells.
In the first spell of 20 overs (40% overs) of "normal" batting, they must, as in Example 2, reach 102.20 to stay on par (they have only scored 75/2, so they might already be in trouble!). In the spell after the interruption, Team 2 can bat for only 10 out of the 30 remaining overs. The target run % corresponding to 10/30 = 33.3% overs is 46.2 (read off from Jayadevan's full table). To stay on course, Team 2 must score (280-102.20) x 46.2% = 82.15 runs in these 10 overs. Team 2's winning target in 20+10 overs is therefore 102.20 + 82.15 = 185 runs (because of their slow start, Team 2 appear unlikely to win; they must now score 110 more runs in 10 overs). [The corresponding D/L target is 181]

Our explanation of the Jayadevan method will be built around examples. To cover the entire gamut of LOI situations, and to verify how the J method performs, we will look at all five possible interruption situations: (a) an interruption between Team 1's and Team 2's innings, (b) a single interruption while Team 2 are batting, (c) a single interruption while Team 1 are batting, (d) more than one interruption while Team 2 are batting, and (e) more than one interruption while Team 1 are batting. In each case, we will also indicate the corresponding D/L target.

Interruption between Team 1 and Team 2's innings

Example 1, discussed above, highlights this interruption situation. Here's another example:

Example 4: Team 1 score 250 in 50 overs. A stoppage between innings leaves Team 2 to bat for only 30 overs.

J's target run % corresponding to 30/50 = 60% overs is 72.3%. The J target is therefore 250 x 72.3% = 181. [The corresponding D/L target is 193]

Single interruption while Team 2 are batting

This is perhaps the interruption situation which occurs most often. Examples 2 and 3, discussed earlier, are instances of such a situation. Here are three more examples, including the famous instance of the 1992 World Cup semi-final between England and South Africa.
Example 5: Team 1 score 250 in 50 overs. In reply, Team 2 are 90/0 in 20 overs when there is an interruption lasting 20 overs. Team 2 return to bat the last 10 overs.

Team 2 bat in two spells. During the "normal" batting spell of 20 out of 50 (20/50 = 40%) overs, Team 2 should have scored 32.4% of the required runs (see the J table). This works out to 250 x 32.4% = 81 runs (so Team 2 are doing well, being 9 runs ahead of their "par score" of 81). In the post-interruption spell, Team 2 can bat for only 10 out of the 30 remaining overs (10/30 = 33.3%). The target run % corresponding to 33.3% overs is 46.2. Team 2's "par score" in the second spell is therefore (250-81) x 46.2% = 78.1 runs. Team 2's winning target (they win if they finish just ahead of the par score) is therefore 81 + 78.1 = 160. [The corresponding D/L target is 143]

Example 6: In reply to Team 1's 250 in 50 overs, Team 2 are 191/9 in 30 overs when the game has to be called off.

Team 2 bat "normally" until the interruption. The normal run % corresponding to 30/50 = 60% overs, with 90% of the wickets lost, is 95% (see the J table extract). Team 2's "par score" in this situation is 250 x 95% = 237.5, i.e. they should have scored 238 runs at this stage to win. Since Team 2 have only reached 191, they lose. [The D/L target is 232]

Example 7: In the 1992 World Cup semi-final, England make 252 in the stipulated 45 overs. South Africa are 231/6 after 42.5 overs when 2 overs are lost. How much must South Africa score off the last ball to win?

This celebrated example has become something of a benchmark for evaluating every target revision method. Note that South Africa have batted "normally" for 42.83/45 = 95% of the overs. The normal run % for 95% overs, with six wickets down, is 91.6%, so the par score at this stage was 252 x 91.6% = 230.83. In the post-interruption spell, Team 2 can bat for only 1 ball (0.17 overs) out of the 13 remaining balls (2.17 overs), i.e. for 0.17/2.17 = 7.8% of the overs.
The target run% corresponding to 7.8% overs is about 12% (read off from the full J table). The par score for the second spell is therefore (252-230.83) x 12% = 2.54. South Africa's winning target is therefore 230.83 + 2.54 = 234 runs. So by the J method South Africa needed 3 to win off the last ball. [D/L's target is also 3 runs off the last ball]. The table below summarises some more (hypothetical and real) examples of single interruptions while Team 2 are batting: Single interruption while Team 1 are batting The many examples that we have considered suggest that, in most cases, the D/L and J estimates are very close to each other. The alert reader must, however, have noticed that we did not work out any example where Team 1's innings was interrupted. When Team 1's innings is unexpectedly curtailed or terminated, they are usually disadvantaged with respect to Team 2 (in the D/L terminology, Team 1 have "fewer resources" compared to Team 2). This disadvantage must be compensated and that can happen in one of two ways: increase Team 2's winning target, or decrease the number of overs that Team 2 can face. There are several reasons why we must choose the first option; chiefly because the second option could decrease the number of overs that Team 2 can face to below 25, in which case there's no match! While cricket fans initially found it hard to accept targets revised upwards, mindsets now appear to be changing after the use of the D/L method for over three years. What probably bothers cricket fans more now is whether these revised targets are fair and realistic. The Jayadevan method also provides the option to revise Team 2's target upwards. Unlike the G50 criterion proposed by D/L, Jayadevan is able to use his normal run and target run percentages to directly "scale up" targets. We shall now look at some typical examples involving an upward revision of Team 2's target. 
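The single-interruption arithmetic above (Examples 5-7) can be condensed into a few lines of Python. This is a sketch only: the 32.4 and 46.2 percentages are read off Jayadevan's table, and the floor-plus-one winning-target step is my reading of the worked examples, not a rule stated explicitly in the article.

```python
import math

def j_target_single_interruption(team1_score, normal_pct, target_pct):
    """Winning target when Team 2's innings is interrupted once.

    normal_pct: normal run% for the overs faced and wickets lost at the
                interruption (from Jayadevan's table).
    target_pct: target run% for the fraction of the remaining overs that
                Team 2 can still bat (also a table look-up).
    """
    par_first = team1_score * normal_pct / 100.0        # "normal" spell
    par_second = (team1_score - par_first) * target_pct / 100.0
    par = par_first + par_second
    return par, math.floor(par) + 1                     # win = just ahead of par

# Example 5: Team 1 score 250; interruption at 20/50 overs (normal run% 32.4),
# then Team 2 bat 10 of the remaining 30 overs (target run% 46.2).
par, target = j_target_single_interruption(250, 32.4, 46.2)
print(round(par, 2), target)   # 159.08 160 -- matching the article's target of 160
```

The same two-spell pattern, with a table look-up per spell, covers Examples 2, 3 and 7 as well.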
Example 8: Team 1 are comfortably placed at 200/2 in 40 overs when their innings is suddenly interrupted. What should be Team 2's target in 40 overs? It is clear that the interruption affects Team 1 more because they were expecting to bat 50 overs. Playing at their "normal" pace, Team 1 would, after accounting for the two wickets lost, have scored 70.9% of their total runs in 40/50 = 80% overs according to the J table. If Team 1 knew from the start that it was a 40-over match, they would have been expected to score 87.6% of their total runs (target run% in 80% overs). Team 2's winning target should therefore be "scaled up" to be 200 x (87.6/70.9)% = 248 runs. [The corresponding D/L target is 252, and its calculation involves the use of the G50 factor]. Example 9: Team 1 are 125/4 in 25 overs when there is a heavy downpour. If only 25 more overs can be bowled, what should be Team 2's target in 25 overs? Having played 50% of their overs, Team 1's normal run% for the loss of 4 wickets is 51.9% (see the J table extract). Team 2's target run% in 50% overs is 63.4. Team 2's winning target is therefore 125 x (63.4/51.9)% = 153. [The corresponding D/L target is 170]. Example 10: Team 1 are in deep trouble at 125/8 in 25 overs when there is a heavy downpour. If only 25 more overs can be bowled, what should be Team 2's target in 25 overs? Having played 50% of their overs, Team 1's normal run% for the loss of 8 wickets is 87%. Team 2's target run% in 50% overs is 63.4. If we imitate the calculation in Example 9, we obtain Team 2's winning target as 125 x (63.4/87.0)% = 91.09 or 92 runs. For various reasons, Jayadevan argues that a target lower than 125 for Team 2 is not acceptable. He therefore stipulates that whenever the "scale up" factor is below 1, it should be set equal to 1. Team 2's target is therefore simply 125 x 1 plus 1, i.e. 126. [The corresponding D/L target is 103]. Example 11: Team 1 are 180/4 after 42 overs when the match is interrupted. 
Only 35 more overs can be bowled. So Team 1's innings is terminated and Team 2 must bat 35 overs. What would be Team 2's winning target? Team 1's normal score% in 84% overs and with the fall of 4 wickets is 76.0% (from the full J table). The target run% for 84% overs is 90.3. So the J target in 42 overs is 180 x (90.3/76.0)% = 213.9. Since Team 2 can play only 35 out of these 42 overs (i.e. 83% of the overs), we must consider the target run% for 83% overs which is 89.6. So Team 2's target in 35 overs is 213.9 x 89.6% = 192 runs. [The corresponding D/L target is 202]. We end this section by looking at an example where, after an interruption, Team 1 resume their innings. Example 12: Team 1 are 115/3 in 30 overs, when 10 overs are lost. The match is reduced to 40 overs for each team, and Team 1 close at 180/8 in their 40 overs. What is Team 2's target in 40 overs? During the first ("normal") phase of 30 overs (30/50 = 60% overs), involving the loss of 3 wickets, Team 1 reach 51.4% of their expected score. If there had been no interruption, Team 1 would have used the remaining 20 overs to hit off the (100-51.4) = 48.6% remaining runs. Because of the interruption, Team 1 cannot score all of these remaining 48.6% runs. But, with fewer overs remaining to expend the same number of wickets, Team 1 will now score faster. What per cent of the remaining 48.6% runs can Team 1 now score? Note that after the interruption Team 1 are left with only 10 out of the remaining 20 overs (10/20 = 50% overs). The target run% for 50% overs is 63.4%. So Team 1 would be expected to score (48.6 x 63.4)% = 30.81% of its total score in the last 10 overs. Adding things up, we conclude that the interruption caused Team 1 to score only 51.4 + 30.8 = 82.2% of its expected final score. Team 2 have no such problems. Right from the start they know that they will get only 40 (40/50 = 80%) overs. The target run% for 80% overs is 87.6% (see the J table extract).
Team 2's winning target would therefore be 180 x (87.6/82.2)% = 191 runs. This upward revision is reasonable because Team 1 were disadvantaged more by the interruption. [The corresponding D/L target is 202]. More than one interruption while Team 2 are batting A critical requirement of any LOI target resetting method is the ability to handle multiple interruptions. Most of the predecessors of the D/L method failed miserably on this count; D/L, on the other hand, performs most admirably. In this section we will see how the Jayadevan method handles multiple interruptions while Team 2 are batting. Consider the following example: Example 13: Team 1 score 250/6 in 50 overs. In reply, Team 2 are 100/3 in 20 overs when an interruption reduces Team 2's innings to 40 overs. After batting 35 overs, Team 2 are 185/6 when the match has to be stopped. Who wins: Team 1 or Team 2? In the first ("normal") spell of 20/50 = 40% overs, and following the loss of 3 wickets, Team 2 should have scored 39.7% of their total runs (see J table), yielding a par score of 250 x 39.7% = 99.25 runs (so, having reached 100/3, Team 2 are winning by a whisker at the 20-over mark). When Team 2 return for their second batting spell, they are expecting to bat for 20 out of the remaining 30 overs, i.e. for 20/30 = 66.7% of the remaining overs. The target run% corresponding to 66.7% overs is 77.7%. Team 2's par score for the second batting spell is therefore (250-99.25) x 77.7% = 117.13 runs. Unfortunately, there is a second interruption after Team 2 have batted for 35 of the 40 possible overs, and the match is called off. The par score of 117.13 for the second batting spell must therefore be scaled down. Team 2 started the second spell after already batting for 20/40 = 50% of their overs and having lost 3 of their wickets. So (looking at the normal run% in the J table) they had scored 44.3% of their total runs, and would have scored the remaining (100-44.3)% runs during this spell. 
When the second interruption occurs, Team 2 have played 35/40 = 87.5% of their overs and lost 6 wickets. So (looking at the normal run% of the full J table) they have scored 87.1% of their total runs. Instead of scoring the (100-44.3)% remaining runs, they could only score (87.1-44.3)% of the remaining runs. J therefore scales down the par score of 117.13 by (87.1-44.3)/(100-44.3) to get 117.13 x 0.77 = 90.00. Team 2's winning target is therefore 99.25 + 90.00 = 190. So at 185/6 in 35 overs, Team 2 lose. [Using D/L, Team 2 win at 183/6]. Before we proceed, let me quickly clarify a small detail (which might be bothering some readers). We write that when the first interruption occurs after 20 overs, "Team 2 have scored 39.7% of their total runs". Later, when Team 2 begin their second batting spell, we write that "they have scored 44.3% of their total runs". A "dumb" doubt can be: "how did this percentage suddenly go up?". The answer is of course simple: the loss of overs means that Team 2's final score will come down. So while Team 2's actual score doesn't go up, its percentage in relation to the total final score does go up. Example 14: In a LOI match played in April 1999 at Bridgetown, Australia scored 252/9 in 50 overs. In reply, West Indies (WI) were 138/1 after 29 overs when 10 overs were lost. WI were set a modified target of 196 in 40 overs by the D/L method which they reached comfortably, scoring 197/2 in 37 overs. Assume that WI are 172/3 after 35 overs when two more overs are lost. What would be the revised winning target for WI in 38 overs? In the first batting spell of 29/50 = 58% overs, WI bat "normally". The normal run% in 58% overs for the loss of 1 wicket is 48.4%. So the par score at this stage is 252 x 48.4% = 121.97 (so, being 16 runs ahead, WI appear to have already taken control of the match). After the interruption only 11 out of the remaining 21 overs can be bowled. The target run% corresponding to 11/21 = 58.4% overs is 65.6%.
So WI's par score for the second batting spell is (252-121.97) x 65.6% = 85.3 (the J method would therefore have set WI a higher target of 121.97+85.3 = 207.27, i.e. 208 runs). When WI start their second batting spell, they have played 29 out of the possible 40 overs and lost 1 wicket. The normal run% for 29/40 = 72.5% overs with 1 wicket lost is 62.3%. There is now a second interruption at 172/3 after 35 overs. The normal run% for 35/40 = 87.5% overs with 3 wickets lost is 80.6%. The WI par score for the second spell must therefore be scaled down as follows: (80.6-62.3) /(100-62.3) x 85.3 = 41.4. So the par score after 29+6 overs is 121.97 + 41.4 = 163.4 (since WI were 172/3, they were winning when the second interruption occurred). We must now calculate the par score for the third batting spell. In this spell, WI must bat for the last 3 of the 15 remaining overs. Our par calculation required WI to score 207.27-163.4 = 43.87 runs in the last five overs. Since only three overs can now be bowled, we are looking for a suitably scaled down value of 43.87. To ensure that his target is "internally consistent", Jayadevan recommends that the scale down factor should be the target run% for 3/15 = 20% overs (29.8%) divided by the target run% for 5/15 = 33% overs (46.16%). The WI par score for the third batting spell is therefore 43.87 x (29.8/46.16) = 43.87 x 0.646 = 28.34. Adding up the par scores for the three spells we get 121.97+41.40+28.34 = 191.71, i.e. 192 runs. (WI must therefore get 20 runs in the last 3 overs to win, which should be quite easy). More than one interruption while Team 1 are batting The following examples illustrate the situation when there are multiple interruptions while Team 1 are batting; in particular, the second example discusses the New Zealand vs South Africa LOI match played in October 2000. Example 15: In a 50-over match, Team 1 are 90/2 after 20 overs. At this stage, the match is reduced to 35 overs for each side. 
Team 1 continue batting and are 160/6 after batting 30 overs when a second interruption occurs. Team 1's innings is terminated and a winning target must be set for Team 2 in 30 overs. Team 1 start their innings batting "normally". After 20/50 = 40% overs, and the fall of two wickets, Team 1 have (see J table extract) scored 36.5% of their expected total score. Following the first interruption only 15 of the remaining 30 overs can be bowled. The target run% for 15/30 = 50% is 63.4. So, instead of scoring the remaining (100-36.5) = 63.5% in the remaining 30 overs, Team 1 will now get to score only (63.5 x 63.4)% = 40.3% of the score in the 15 remaining overs. [So, effectively, Team 1 can hope to score only 36.5 + 40.3 = 76.8% of their total score in 35 overs]. But there is now a second interruption after 10 of the 15 remaining overs have been bowled! So Team 1 will now score less than the expected 40.3% in the second batting spell. How do we scale down the value? Let us argue as we did in Example 13: when Team 1 started their second batting spell they had already batted for 20/35 = 57% of their overs and had lost 2 wickets. So (looking at the normal run% in the J table) they had scored 48.8% of their total runs, and would have scored the remaining (100-48.8)% runs during this spell. When the second interruption occurred, Team 1 had played 30/35 = 86% of their overs and lost 6 wickets. So (looking at the normal run% of the full J table) they had scored 85.7% of their total runs. Because of the second interruption, Team 1 could only score (85.7-48.8)% of the remaining runs instead of (100-48.8)%. J therefore scales down 40.3% by (85.7-48.8)/(100-48.8) to get 40.3 x 0.72 = 29.04%. So, adding up, Team 1 have the opportunity of scoring only 36.5 + 29.04 = 65.54% of their total score in 30 overs. Team 2's target run% in 30/50 = 60% overs is 72.3%.
Since Team 2 have the advantage, the J method requires their target to be scaled up; Team 2 must therefore score 160 x (72.3/65.54)% = 177 runs in their 30 overs. [The corresponding D/L target is 197]. Example 16: Batting first, New Zealand (NZ) are 81/5 in 27,2 overs when there is a one-over interruption. On resumption, NZ reach 114/5 after 32,4 overs when their innings is terminated. South Africa (SA) can bat for 32 overs in reply. What should be SA's winning target? For the first 27,2 overs (27.33/50 = 54.7% overs), NZ bat "normally". The normal run% corresponding to 54.7% overs and the loss of 5 wickets is 60.9% (from the full interpolated J table). If there had been no interruption, NZ would have been expected to hit the remaining (100-60.9) = 39.1% runs in the last 22,4 overs. Following the brief interruption, however, only 21,4 of the remaining 22,4 overs can be bowled. The target run% corresponding to 21.67/22.67 = 95.8% overs is 97.8%. So NZ can now only score (39.1 x 97.8)% = 38.24% of their total score in the second batting spell (so effectively NZ can only score 60.9 + 38.24 = 99.1% of their expected score following the one-over interruption). But NZ's second batting spell can continue only till the 32,4-over mark. NZ, therefore, did not get the opportunity to score all of its expected 38.24% runs in the second spell. So we must scale down this percentage. Note that when NZ started their second batting spell they had already batted for 27.33/49 = 55.78% of their overs and had lost 5 wickets. So (looking at the normal run% in the J table) they had scored 61.6% of their total runs, and would have scored the remaining (100-61.6)% runs during this spell. When the second interruption occurred, Team 1 had played 32.67/49 = 66.7% of their overs and lost 5 wickets. So (looking at the normal run% of the full J table) they had scored 69.0% of their total runs.
Because of the second interruption, Team 1 could only score (69.0-61.6)% of the remaining runs instead of (100-61.6)%. So NZ could, in effect, only score 38.24 x (69.0-61.6)/(100-61.6) = 7.4% in their second spell. Adding this to the 60.9% which they could score in the first spell, NZ could only score 60.9 + 7.4 = 68.3% of their expected total score in 32,4 overs. SA's target run% in 32.67/50 = 65.3% overs is 76.7% (from the full J table). So, in the same 32,4 overs, SA can score 76.7% of their total score, while NZ could only score 68.3%. NZ is disadvantaged, so SA's target must be scaled up to 114 x (76.7/68.3)% = 128.02 runs. But since SA get only 32 instead of 32,4 overs, this target must be scaled down by the target run% corresponding to 32/32.67 = 97.95%, which is 99.3%. So SA must score 128.02 x 99.3% = 128 runs to win. [The corresponding D/L target is 153]. Our exposition of the Jayadevan method is finally over. The sixteen examples that we have discussed cover practically every conceivable interruption situation in LOI cricket. A "ready reckoner" of the J method, summarizing the target resetting procedure in every situation as a set of mathematical rules, is available. In fact, early versions of this article had such a ready reckoner appended at the end of the narrative. The underlying principle of the J method (as, indeed, of the D/L method) is to consider individual batting spells of interrupted innings and calculate the par score for each spell. I believe that the greatest merit of the J method is that it gives very reasonable targets in almost all situations. The calculations can get a little involved if there are multiple interruptions, although Jayadevan reassures me that it is simply a matter of getting used to the method. One of the debates in all this target resetting business is: what should be the "trade-off" between reasonable targets and ease of calculation?
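To give a feel for the ease-of-calculation side of that trade-off, the upward-revision rule of Examples 8-10 condenses to a few lines of Python. This is a sketch only: both percentages are table look-ups, the clamp at 1 is Jayadevan's stipulation from Example 10, and the floor-plus-one step is my reading of the worked examples.

```python
import math

def scaled_up_target(team1_score, target_pct, normal_pct):
    """Team 2's winning target after Team 1's innings is curtailed.

    target_pct: Team 2's target run% for the overs they will face.
    normal_pct: Team 1's normal run% at the point of interruption
                (overs faced and wickets lost), from Jayadevan's table.
    """
    factor = max(target_pct / normal_pct, 1.0)  # Jayadevan: never scale below 1
    par = team1_score * factor
    return math.floor(par) + 1                  # winning target = just past par

print(scaled_up_target(200, 87.6, 70.9))  # Example 8  -> 248
print(scaled_up_target(125, 63.4, 51.9))  # Example 9  -> 153
print(scaled_up_target(125, 63.4, 87.0))  # Example 10 -> 126 (factor clamped to 1)
```

Multiple interruptions then chain this per-spell logic together, scaling each spell's par score down by the ratio of normal run% actually achieved to normal run% expected.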
Ten years ago (before the D/L method and its predecessors, such as the "most productive overs" method used in the 1992 World Cup, were introduced), it was simply a question of looking at which team had the higher run rate ("total runs scored divided by total overs faced"). This method often gave unreasonable targets because it didn't take into account the number of wickets lost. The D/L method achieved, for the first time, a "good" trade-off; it gave vastly more reasonable targets without making calculations too difficult. Jayadevan has achieved another "good" trade-off. He acknowledges that his calculations, especially for multiple interruption situations, are more complicated than D/L calculations, but he claims that he obtains more reasonable targets than D/L. So is the Jayadevan method good enough to replace the D/L method which is currently in use? I shall try to answer this question in my second article. It has been enjoyable to introduce the J method, although it took much longer than I expected. I am grateful to Jayadevan for responding promptly to my numerous doubts and queries. Frank Duckworth and Tony Lewis read successive drafts of this article and their detailed reactions (which I conveyed to Jayadevan) have significantly improved the J method; I thought that it was absolutely wonderful of D/L to do so when they were aware that Jayadevan was a potential rival. I would also like to thank Prem Panicker for immediately agreeing to "host" this article on Rediff. I just hope that readers don't find it too long. Is Jayadevan's proposed method better than the Duckworth/Lewis method?
Fast Asynchronous Uniform Consensus in Real-Time Distributed Systems
Jean-François Hermant and Gérard Le Lann, IEEE Transactions on Computers, vol. 51, no. 8, pp. 931-944, August 2002, doi:10.1109/TC.2002.1024740.
We investigate whether asynchronous computational models and asynchronous algorithms can be considered for designing real-time distributed fault-tolerant systems. A priori, the lack of bounded finite delays is antagonistic with timeliness requirements. We show how to circumvent this apparent contradiction, via the principle of "late binding" of a solution to some (partially) synchronous model. This principle is shown to maximize the coverage of demonstrated safety, liveness, and timeliness properties.
These general results are illustrated with the Uniform Consensus (UC) and the Real-Time UC problems, assuming processor crashes and reliable communications, considering asynchronous solutions based upon Unreliable Failure Detectors. We introduce the concept of Fast Failure Detectors and we show that the problem of building Strong or Perfect Fast Failure Detectors in real systems can be stated as a distributed message scheduling problem. A generic solution to this problem is given, illustrated considering deterministic Ethernets. In passing, it is shown that, with our construction of Unreliable Failure Detectors, asynchronous algorithms that solve UC have a worst-case termination lower bound that matches the optimal synchronous lower bound, that is, (t+1)D, where t is the maximum number of processors that may crash and D is the maximum interprocess message delay. Finally, we introduce FastUC, a novel solution to UC, that is based upon Fast Failure Detectors. FastUC has a worst-case termination time that is sublinear in tD. For most practical cases and common values of t, FastUC terminates in D, making it a worst-case time optimal solution to Real-Time UC.
Index Terms: Asynchronous computational models, partially synchronous computational models, coverage, uniform consensus, real-time distributed fault-tolerant computing, safety, liveness, timeliness, unreliable failure detectors, schedulability analysis.
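As a toy illustration only (this code is mine, not from the paper), the behaviour of an unreliable failure detector of the kind the abstract builds on can be approximated by a heartbeat-and-timeout rule: suspect a process whose last heartbeat is older than the timeout, and withdraw the suspicion if the process is heard from again.

```python
class HeartbeatFailureDetector:
    """Minimal timeout-based failure detector (illustrative sketch only).

    Suspicions are recomputed from the latest heartbeats, so a process
    that was wrongly suspected is unsuspected as soon as it is heard
    from again: the defining trait of an *unreliable* detector.
    """

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heard = {}          # pid -> time of last heartbeat

    def heartbeat(self, pid, now):
        self.last_heard[pid] = now

    def suspects(self, now):
        return {pid for pid, t in self.last_heard.items()
                if now - t > self.timeout}

d = HeartbeatFailureDetector(timeout=3)
d.heartbeat("p1", 0)
d.heartbeat("p2", 4)
print(d.suspects(now=5))   # {'p1'}: silent for 5 > 3 time units
d.heartbeat("p1", 6)       # p1 speaks again, so the suspicion is withdrawn
print(d.suspects(now=7))   # set()
```

The paper's "Fast" detectors are about bounding how late such heartbeats can be under a message-scheduling discipline; this sketch only shows the suspect/unsuspect mechanics.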
Natalie Gilka
Postdoc, Department of Mathematical Sciences, University of Copenhagen
Universitetsparken 5, 2100 Copenhagen Ø
Research Interests
I aim to understand matter at a fundamental level. To achieve this goal I move between theoretical chemistry, mathematics, and the computer implementation and application of the methods I develop. My research interests thus lie at the intersection of mathematics with the computational aspects of chemistry and physics, and with scientific computing. The main issue I am considering at this stage is: how can we establish mathematically what can be deduced from quantum mechanics about molecular systems? My current activities concern in particular what can be rigorously proved about interactions between atoms, furthermore in an extension to a relativistic model. In the longer term I see my interests in a combination of this rigorous mathematical understanding with methods of computational chemistry and physics.
Currently Postdoc at the Department of Mathematical Sciences in Copenhagen (Nov 2008 - present)
• 11/2008 - present Postdoc with Jan Philip Solovej at the University of Copenhagen (Denmark) at the Department of Mathematical Sciences. Funded by a Postdoc scholarship from the DAAD (German Academic Exchange Service)
• 01/2007 - 09/2008 Extended stay at the University of Warwick (finishing German PhD in 06/2008)
• 03/2006 - 12/2006 Research visit with Peter R. Taylor at the University of Warwick (UK - Department of Chemistry). Funded by the Marie Curie Research Network NANOQUANT as a Marie Curie Research Fellow
• 10/2003 - 06/2008 PhD studies with Christel M. Marian (University of Düsseldorf, Germany), completion with summa cum laude. Funded for a period of two years by a Chemiefonds scholarship of the "Verband der Chemischen Industrie e.V."
(consortium representing the German chemical industry)
• 08/2001 - 07/2002 Studies and first research experiences (with Thom H. Dunning) at the University of North Carolina at Chapel Hill (USA). Funded (9 months) by a stipend from the DAAD (German Academic Exchange Service)
• 10/1998 - 09/2003 Studies in chemistry at the University of Düsseldorf (Germany), completion with Diploma
Research Experience
A desire to pursue scientific investigation and a passion for the analytical approach led me early in my career to theoretical chemistry and my first experience of research with Prof. Thom H. Dunning at the University of North Carolina, where we worked on a rigorous computational investigation of basis set dependencies in calculations on diatomic systems. For my Diploma studies I combined a theoretical investigation (in relativistic quantum chemistry) with program development in the intricate task of calculating reliable g tensors in near-degenerate states of diatomic systems, with particular emphasis on the molecule AlO. I then moved on to consider more complicated consequences of relativistic quantum mechanics for my PhD, deriving and implementing computationally for the first time the expressions for spin-spin coupling between electrons in elaborate correlated molecular electronic structure calculations, supervised by Prof. Christel M. Marian in Düsseldorf. This combination of theoretical and computational work was an excellent tool for extending and broadening my abilities. Furthermore, I spent part of my PhD with Prof. Peter R. Taylor at the University of Warwick, visiting his research group under funding from a Marie Curie Research Training Network. This gave me in-depth experience with applications and the computational exploitation of molecular symmetry, and with molecular integral evaluation.
My interest and growing familiarity with molecular quantum mechanics convinced me that I would benefit from developing my background further and acquiring a different, more mathematical perspective and attendant skills. I secured financial support from the DAAD to work here at the University of Copenhagen, with Prof. Jan Philip Solovej, looking into establishing estimates in diatomic systems: an interesting and challenging extension of my previous knowledge and understanding. I am very grateful for his hospitality and for the financial support that makes my stay here possible.
• J. Tatchen, N. Gilka, C. M. Marian, Intersystem Crossing Driven by Vibronic Spin-Orbit Coupling: A Case Study on Psoralen, Phys. Chem. Chem. Phys. 9(38), 5209-5221 (2007).
• N. Gilka, J. Tatchen, and C. M. Marian, The g-tensor of AlO: Principal Problems and First Approaches, Chem. Phys. 343(2-3), 258-269 (2008).
• N. Gilka, P. R. Taylor, and C. M. Marian, Electron Spin-Spin Coupling from Multireference CI Wave Functions, J. Chem. Phys. 129(4), 044102-1-044102-19 (2008).
• C. M. Marian and N. Gilka, Performance of the Density Functional Theory/Multireference Configuration Interaction Method on Electronic Excitation of Extended π-Systems, J. Chem. Theory Comput. 4(9), 1501-1515 (2008).
• N. Gilka, Bounds on Diatomic Molecules in a Relativistic Model, J. Math. Phys. 51(3), 032303-1-032303-13 (2010).
• D. Ganyushin, N. Gilka, P. R. Taylor, C. M. Marian, and F. Neese, The Resolution of the Identity Approximation for Calculations of Spin-Spin Contribution to Zero-Field Splitting Parameters, J. Chem. Phys. 132(14), 144111-1-144111-11 (2010).
Last modified: April 2010.
RE: what RDF is not (was ...)
From: Peter F. Patel-Schneider <pfps@research.bell-labs.com>
Date: Fri, 04 Jan 2002 11:56:04 -0500
To: Mark.Birbeck@x-port.net
Cc: bdehora@interx.com, www-rdf-interest@w3.org
Message-Id:

From: Mark Birbeck <Mark.Birbeck@x-port.net>
Subject: RE: what RDF is not (was ...)
Date: Fri, 4 Jan 2002 15:38:29 -0000

> Thanks Bill ... I think! I didn't realise that there were larger and smaller
> infinities!
> But if there is a [...] number:
> .6775746636352
> I still don't understand why we can't have a URI:
> http://www.schema.org/datatypes/natural#.6775746636352
> In other words, for every natural number there is a URI.

This works for real numbers with really boring decimal expansions. You can even go further and have s.dddd,eeee,ffff, for s either + or - and dddd, eeee, ffff finite sequences of decimal digits, represent the real number whose integral part is sdddd, and whose fractional part starts eeee and then repeats ffff infinitely. This produces finite representations for all rational numbers.

> (I don't want to mix up two debates here. I wasn't actually commenting on
> data typing in my initial posting, I was merely asking why Peter was saying
> that there were a finite number of URIs.

I wasn't saying that there are a *finite* number of URIs, just that every URI is finite (or can be finitely represented).

> I think we may have been talking at
> cross purposes. I am saying that whilst there are of course a finite number
> of URIs at any one time, there are an infinite number of _possible_ URIs.
> The problem is not in finding enough, but in coming up with a convenient way
> of creating them as and when they are needed.)

This is *not* the problem at all. I'll allow you an infinite number of URIs that exist right now, just as long as each of them can be finitely represented.

> Mark

Received on Friday, 4 January 2002 11:57:01 GMT
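The repeating-decimal encoding described in the reply can be made concrete. A minimal sketch (the function name, `Fraction` struct, and parameter names are mine, not from the thread): given sign s, integral part d, pre-period e, and repeating period f, recover the rational as a reduced fraction. The period must be non-empty; use "0" for terminating decimals.

```cpp
#include <numeric>   // std::gcd (C++17)
#include <string>

// s.dddd,eeee,ffff -> reduced fraction num/den.
struct Fraction { long long num, den; };

Fraction decode(int sign, long long d, const std::string& e, const std::string& f) {
    long long pe = 1, pf = 1;
    for (char ch : e) { (void)ch; pe *= 10; }   // 10^|e|
    for (char ch : f) { (void)ch; pf *= 10; }   // 10^|f|
    long long ev = e.empty() ? 0 : std::stoll(e);
    long long fv = std::stoll(f);               // period must be non-empty
    // 0.eeee(ffff) = (ev*(10^|f| - 1) + fv) / (10^|e| * (10^|f| - 1))
    long long den = pe * (pf - 1);
    long long num = sign * (ev * (pf - 1) + fv + d * den);
    long long g = std::gcd(num < 0 ? -num : num, den);
    return { num / g, den / g };
}
```

For example, decode(1, 0, "1", "6") — i.e. 0.1666... — reduces to 1/6, which is the sense in which every rational number gets a finite representation.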
Structure - Defines statistics for the kernel's use of virtual memory.

struct vm_statistics {
    integer_t free_count;       /* total number of free pages in the system */
    integer_t active_count;     /* total number of pages currently in use and pageable */
    integer_t inactive_count;   /* number of inactive pages */
    integer_t wire_count;       /* number of pages wired in memory, cannot be paged out */
    integer_t zero_fill_count;  /* number of zero-fill pages */
    integer_t reactivations;    /* number of reactivated pages */
    integer_t pageins;          /* requests for pages from a pager (such as the i-node pager) */
    integer_t pageouts;         /* number of pages that have been paged out */
    integer_t faults;           /* number of times the vm_fault routine has been called */
    integer_t cow_faults;       /* number of copy-on-write faults */
    integer_t lookups;          /* number of object cache lookups */
    integer_t hits;             /* number of object cache hits */
};
typedef struct vm_statistics *vm_statistics_t;

The vm_statistics structure defines the statistics available on the kernel's use of virtual memory. The statistics record virtual memory usage since the kernel was booted. For related information for a specific task, see the task_basic_info structure. This structure is machine word length specific because of the memory sizes returned.

Functions: task_info, host_page_size. Data Structures: task_basic_info.
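As a sketch of how the page counters combine (this mirrors only the documented fields; the real structure and the call that fills it live in the Mach headers, not here):

```cpp
// Local mirror of the four page-state counters documented above --
// NOT the real <mach/vm_statistics.h> type, just an illustration.
struct vm_stats_like {
    int free_count, active_count, inactive_count, wire_count;
};

// Pages resident in one of the three in-use states.  Multiply any of
// these counts by the host page size (see host_page_size) for bytes.
long long resident_pages(const vm_stats_like& s) {
    return (long long)s.active_count + s.inactive_count + s.wire_count;
}

long long total_pages(const vm_stats_like& s) {
    return resident_pages(s) + s.free_count;
}
```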
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Volume of geodesic "sphere" (project)
Replies: 3   Last Post: May 1, 1998 8:45 PM

Re: Volume of geodesic "sphere" (project)
Posted: May 1, 1998 7:59 PM

In a previous article, nelsoncj@gte.net ("Clifford J. Nelson") says: the following does not define a general tetrahedron in space without equivocation! ... using homogeneous planar coordinates, you need 4 for each plane, or 3 converted to "cartesian" planar ones.

>No. I define a tetrahedron by four numbers representing the perpendicular
>displacement of the four enclosing planes, the way it says to do it in
>Synergetics 1 & 2.

member, African Civil Rights Movement -- extirpate retrocolonialism!
(No **** is good ****; eh?... http://inet.uni-c.dk/~sch-inst/pres.html)
<DELETIVES EXPLETED> http://www.tarpley.net </DELETIVES EXPLETED>
*---<Brian Hutchings, Living Space Programs, Santa Monica College>---*

Date     Subject                                      Author
5/1/98   Re: Volume of geodesic "sphere" (project)    Kirby Urner
5/1/98   Re: Volume of geodesic "sphere" (project)    Brian Hutchings
5/1/98   Re: Volume of geodesic "sphere" (project)    Clifford J. Nelson
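The "four perpendicular displacements" idea can at least be made concrete in ordinary Cartesian terms. The following is my own sketch, not the Synergetics construction: a plane with normal n at displacement d is n·x = d (d is the perpendicular distance from the origin when n is a unit vector); four such planes generically bound a tetrahedron, each vertex being the intersection of three of them, and the volume follows from a determinant.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
struct Plane { Vec3 n; double d; };   // plane n.x = d

static double det3(const Vec3& a, const Vec3& b, const Vec3& c) {
    return a[0]*(b[1]*c[2]-b[2]*c[1]) - a[1]*(b[0]*c[2]-b[2]*c[0])
         + a[2]*(b[0]*c[1]-b[1]*c[0]);
}

// Intersection point of three planes, via Cramer's rule (assumes the
// planes are in general position, i.e. the determinant is nonzero).
static Vec3 intersect(const Plane& p, const Plane& q, const Plane& r) {
    double D = det3(p.n, q.n, r.n);
    Vec3 ds{p.d, q.d, r.d}, x{};
    for (int j = 0; j < 3; ++j) {
        Vec3 a = p.n, b = q.n, c = r.n;
        a[j] = ds[0]; b[j] = ds[1]; c[j] = ds[2];   // replace column j
        x[j] = det3(a, b, c) / D;
    }
    return x;
}

double tetra_volume(const std::array<Plane, 4>& P) {
    Vec3 v[4];
    for (int i = 0; i < 4; ++i) {                    // vertex i omits plane i
        v[i] = intersect(P[(i+1)%4], P[(i+2)%4], P[(i+3)%4]);
    }
    Vec3 e1, e2, e3;
    for (int j = 0; j < 3; ++j) {
        e1[j] = v[1][j]-v[0][j];
        e2[j] = v[2][j]-v[0][j];
        e3[j] = v[3][j]-v[0][j];
    }
    return std::fabs(det3(e1, e2, e3)) / 6.0;        // |det|/6
}
```

With the planes x=0, y=0, z=0 and x+y+z=1 this returns 1/6, the volume of the unit corner tetrahedron.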
Introduction to maths online

Learning mathematics ...

... is easy for some and difficult for others. In each case it requires effort which no medium can reduce to zero. After all, mathematical fields, structures and insights, argumentation techniques and computation methods have been developed in a long historical process. Nevertheless, suitable forms of teaching and learning may help one find an approach to this formal and abstract world. It is a prominent goal of modern didactics to make clear where these efforts should be concentrated. maths online does not want to teach some kind of "new mathematics", but rather the traditional goals of this subject, supported by modern methods.

Dynamical diagrams

The applets contained in the Gallery are dynamical diagrams, i.e. interactive units reacting immediately to the user's actions, such as moving a scroll bar or typing numerical input. In contrast to many other units, which at first glance seem quite similar to ours, maths online intends to lead the way towards the necessary efforts, not away from them. Most units contain problems to solve, thus presupposing a certain state of knowledge and aiming at a well-defined goal of understanding. For example, sometimes a geometrically evident situation is coupled to the symbolic mathematical language. In order to solve the problems given in such an applet, it is necessary to "think together" both the geometric-intuitive and the abstract-symbolic description of the same object (see e.g. Coordinate system). Some of these applets are suitable for getting acquainted with mathematical notions long before the technical computational side is addressed (for example the basic notions of differential calculus in the applet On the definition of the derivative). Additionally, the moving parts of a dynamical diagram may attract the user's attention in a very specific way.
It is thus easier - as compared to traditional print media - to distinguish between important and less important elements in a graphic. A further example illustrating the new possibilities of an adequately designed maths learning tool is provided by the group of jig-saw puzzles. In everyday life, it is familiar to keep many things in one's head and to operate with them mentally. (So, if several people are discussing which movie to watch tonight, everyone has to keep track of various cinemas, addresses, movie titles, reviews and show times in order to play the "pros and cons" game.) In maths teaching this is not necessarily the case: if drawing a simple function's graph is tied to a number of technical actions and takes (at best) several minutes, the number of expressions and graphs one may handle simultaneously is strongly limited. An example like Recognize functions 1 shows how multimedia support may make a new type of problem accessible to the learner.

Self-made puzzles

On frequent demand we have opened the Puzzle workshop, in which you may design your own puzzles, save them to your computer or publish them on the web. Puzzles generated in this way may serve various educational purposes and are not even restricted to mathematics.

Interactive tests

... (in the form of multiple choice and puzzles) may help to check and improve one's knowledge and abilities (table of contents).

Maths links

Various people and institutions offer learning material and information about mathematical topics on the web. The interactive tools mostly consist of single units or short series of units. Although material for the more advanced subjects is represented rather sparsely on the web, altogether it covers far more methods and contents than a single project like maths online could present. The page Maths links and online tools collects various possibilities for including such material in the process of teaching and learning.
On the one hand, offers ordered by topic shall facilitate the search for appropriate material. (None of this material requires the local installation of additional software.) On the other hand, an extensive list of collections may serve as a starting point for the user's own discoveries. Among the maths links, you find

Online tools

such as calculators and programs for plotting 2D and 3D function graphs, differentiating and integrating. (Some of them rely on the computer algebra system Mathematica.) They are started in their own browser windows, so that they may be used simultaneously with other pages of maths online (online tools).

Suggestions for the classroom ...

... are provided in the form of project-type worksheets, suitable for work in small groups (table of contents), and shall facilitate the teachers' approach to this new way of learning. We welcome any suggestion or worksheet sent to us by the users of maths online. Most of the worksheets may be designed as web pages. In order to support users with elementary HTML knowledge in inserting mathematical symbols, we offer a maths online HTML formula tool.

The construction of maths online ...

... has been proceeding since March 1998. New material is added continuously. In order to become independent of web transmission speed, or in case of missing access to the internet, it may be reasonable to download maths online. We offer this possibility for Windows 95/98/ME/NT/2000/XP and Macintosh.

Feedback from the users

In order to optimize the contents and presentation of maths online, we would like to encourage all (teaching and learning) users to give us some feedback about their experience. If convenient, please use the questionnaire. Feedback notes are published in the Forum.
Since January 1999 maths online has had a mirror site at the Università degli Studi di Messina (one applet has been translated into Italian: sulla definizione di derivata); since June 2000 it has been available from the server of the Inter-University Institute of Macau, and since July 2001 from the page located in Belgium. Some learning units of the Gallery have been added to the database Tools 4 Teachers of the Canadian initiative Alberta Learning (see here).

Technical remarks

The units of maths online are Java applets. Hence, no download of additional software is necessary, and the required computer skills are reduced to a minimum. Moreover, most of the applets are designed such that they do not need further data after being initialized; the connection to the web may therefore be closed once the respective web page has been opened in the browser. As a consequence, the applets work only in a Java-enabled browser. (The page Optimal preferences for computer, browser and screen contains a test you can perform.) The screen resolution should be at least 800 x 600 pixels (otherwise some applet windows will be "too large"). For an optimal appearance, the color depth of your screen should be set to more than 256 colors. Are you behind a firewall or a strict proxy server which keeps Java applets out of your local net? Consult this page! The functionality of Java applets should be the same for all platforms, at least in theory. In reality, the continuous development of both Java and browser technology sometimes causes inconveniences. Should this happen, please tell us. Most parts of maths online developed before May 2000 are suitable for Netscape Navigator 3 and Microsoft Internet Explorer 4. Some applications, however (in particular the puzzles of the interactive tests), as well as the applets developed after May 2000, do not run in Netscape Navigator 3. If you use it, it is highly recommended to switch to a newer version (4 or higher).
Netscape 6 makes a small adjustment necessary in order to display the mathematical symbols. The Opera browser should display most of the pages without big problems. Here are some further details on the question Which browser?

Contact us ...

... by e-mail or web page form, snail-mail, fax, or give us a call (see Authors).
MathFiction: The Last Enemy (Peter Berry (Screenplay) / Iain B. MacDonald (Director))

In this BBC TV series, mathematician Stephen Ezard returns home from China for his brother's funeral but finds himself caught up in two simultaneous stories of high-level espionage. In one subplot, he aids (and romances) his brother's widow as she tries to determine the cause of a mysterious disease that the government wishes to cover up. In the other, he helps his ex-girlfriend (now a politician) as she tries to convince the government to adopt "T.I.A.", a computerized system that will keep track of the whereabouts and activities of every individual in Britain. In the first episode, quite a bit is said about the protagonist's mathematics. He is described as the famous author of Ezard's Theorem, the youngest Fields Medalist, and a likely recipient of the Nobel Prize. He is annoyed that people associate his work in statistics with killer bees. And, he gives a talk at the company developing T.I.A. on his geometry research, which sounds suspiciously like Hamilton's program to prove the Poincaré Conjecture. (He accepts the company's offer to fund his pure math research in exchange for his work on T.I.A., but presumably he was more motivated by his desire to use their program to locate his brother's widow.) Ezard is presented as the stereotypical, anti-social mathematician. He moved to China to avoid having to deal with people. His abrupt and condescending mannerisms are seen as a benefit to the company as it tries to use him to force the government to adopt T.I.A. officially. And, he appears to be obsessive about washing his hands, though, given the fact that a woman dies of a mysterious and possibly dangerous disease in his apartment, this may not be as crazy as one might otherwise think!
Two characters mention that Ezard always wears the same outfit (as did Einstein, according to the anecdote, in order to avoid wasting time thinking of what to wear), though in the show he begins wearing other clothes when tracers are put into his own. Finally, it is Ezard who eventually figures out the solution to the mystery. Thanks to Kevin Nooney who originally brought this work of fiction to my attention and wrote the following description, which was here before I had a chance to see the show myself. Contributed by Kevin Nooney He definitely and unambiguously is a mathematician, though as the plot progresses, it is his programming skills that become more and more stressed. There are several scenes in the early episodes that specifically mention his mathematical work and one in particular that includes his presentation to two members of a company that is involved in developing the database tracking software that is at the heart of the plot. There is also a very brief deleted scene in which he talks about soap-bubble geometry.
Math Forum Discussions - Re: Matheology § 224

Date: Mar 31, 2013 9:07 PM
Author: fom
Subject: Re: Matheology § 224

On 3/31/2013 1:44 PM, Virgil wrote:
> In article
> <3d98da78-e43c-4550-812c-6436200744ec@vh9g2000pbb.googlegroups.com>,
> "Ross A. Finlayson" <ross.finlayson@gmail.com> wrote:
>>>> In a Complete Infinite Binary Tree, every binary rational path has only
>>>> finitely many left-child nodes or only finitely many right-child nodes,
>>>> whereas every other path has infinitely many of each.
>>> That is nonsense. 0.0101010101... has infinitely many of both sorts.
>>> Regards, WM
>> Well, you see Virgil has introduced a term in context the "binary
>> rational path"
> The standard definition of a binary rational is a rational whose
> denominator is a power of 2.
> In binary place value notation, they are the infinite strings starting
> at the binary point, then having only binary digits of 0 or 1, which end
> with either a string of infinitely many 0's or infinitely many 1's.
> Thus in a Complete Infinite Binary Tree they correspond to infinite
> paths with either only finitely many 1's or only finitely many 0's.

I will not disagree with your statement concerning "binary rational path", but I did do a search and did not come up with anything useful. That does not mean much, since there are far more pages with "binary" and "rational" used in a context different from yours.

I did, however, find dyadic rationals. These "standard terms" are sometimes a pain.

What is important is that they form a dense subset in spite of not being the entire class of rationals.
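The power-of-two-denominator definition quoted above is easy to state in code. A small sketch (assumes q > 0; the function name is mine): a rational p/q is dyadic exactly when its reduced denominator is a power of two, which is the same as its binary expansion being eventually all 0's (or, in the other representation, all 1's).

```cpp
#include <numeric>   // std::gcd (C++17)

// True iff p/q is a dyadic (binary) rational.  Assumes q > 0.
bool is_dyadic(long long p, long long q) {
    long long g = std::gcd(p < 0 ? -p : p, q);
    long long d = q / g;              // reduced denominator
    return (d & (d - 1)) == 0;        // power-of-two test
}
```

So 3/8 and 6/4 (= 3/2) are dyadic, while 1/3 — the path 0.0101010101... in binary — is not.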
Find a Wilshire Park, LA Trigonometry Tutor

...I took Prealgebra in Middle School and passed with an A. I have many math books that I would pull information from when tutoring. I received an A for pre-calculus in high school and have used this material for many courses throughout college.
13 Subjects: including trigonometry, physics, calculus, geometry

...I have taught the trigonometry portion of Geometry and Algebra 2 courses. I would be able to help a student master this content area. I have taught regular statistics and AP Statistics for the past 8 to 10 years.
15 Subjects: including trigonometry, geometry, statistics, GRE

...I believe trying to forge connections between disparate phenomena can be a key to understanding both, rather than something that is only possible once both subjects have been mastered. That being said, it is always important after creative imagination to return to the fundamentals and practice f...
14 Subjects: including trigonometry, calculus, geometry, biology

...My technical expertise is in the fields of optical physics, laser science and chemical physics. I can also tutor college and grad school application writing and preparation. Thanks. I have a Master of Science in Engineering degree in Electrical Engineering from the University of Michigan, where I focused on electromagnetics, optics, photonics and electrical signal processing.
22 Subjects: including trigonometry, chemistry, physics, calculus

...During my freshman year of college, I tutored elementary school students in low-income neighborhoods through a program called Project CHILD. Since I finished my undergrad education, I have been tutoring high school students independently in algebra 2, pre-calculus and chemistry. I always enjoyed my math and science classes because they featured the most objective learning.
18 Subjects: including trigonometry, chemistry, Spanish, biology
I need help with mathematical coding.......911

Hello fellow programmers, it is Semma. I have an assignment where I have to incorporate an integer for meters, and the user has to answer these questions:

(Ex. A) What are the dimensions of your yard in meters? 50 75
(Ex. B) What are the dimensions of your house in meters? 33 25

This is what I have written:

// Questions(?)
cout<<"what are the dimensions of your yard in meteres..(?)";<<endl;
cin >>YardMeters;
cout <<"Yard in meters:"<<YardMeters;<<endl;
cout <<"What are the dimensions of your house in meters..(?)";
cin >>HouseMeters;
cout <<"House in meters:"<<HouseMeters;<<endl;

// Meter Calculation Integer
int YardMeters;
int HouseMeters;

// Calculating Dimensions
int Calculation_A;
int Calculation_B;
int Rounding_B;
int ConversionHours;
int ConversionSecond;
int ConversionMinute;

Then I have to convert yard in meters and house in meters: at a rate of 2 square meters per second it will take 1462.5 seconds. Then I have to round it to the nearest second, after which it will display 1463 seconds, which converts to 0 hour(s), 24 minute(s) and 23 second(s). So: what code can I use to make these conversions at the rate of 2 square meters per second and round to the nearest second? Thank you..........

I'm more than willing to help. But I'd like to see you make an attempt to lay out the logic, not just the output statements. When it's done you will have learned a great deal more....

Too many semicolons here:

cout<<"what are the dimensions of your yard in meteres..(?)";<<endl;
cout <<"Yard in meters:"<<YardMeters;<<endl;
cout <<"House in meters:"<<HouseMeters;<<endl;

This is correct:

cout<<"what are the dimensions of your yard in meteres..(?)"<<endl;
cout <<"Yard in meters:"<<YardMeters<<endl;
cout <<"House in meters:"<<HouseMeters<<endl;
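One way to sketch the logic the responder asks for, using the example numbers from the post (the struct and variable names are mine, not from the assignment): mowable area = yard minus house, time = area / rate, round to the nearest second, then split into hours/minutes/seconds with integer division and modulo.

```cpp
#include <cmath>   // std::lround

struct HMS { int hours, minutes, seconds; };

// rate is in square meters per second (2.0 in the assignment).
HMS mow_time(int yardW, int yardL, int houseW, int houseL, double rate) {
    int area = yardW * yardL - houseW * houseL;   // square meters to mow
    int total = (int)std::lround(area / rate);    // 1462.5 s rounds to 1463 s
    return { total / 3600, (total % 3600) / 60, total % 60 };
}
```

With the values 50x75 yard and 33x25 house this gives 0 hour(s), 24 minute(s), 23 second(s), matching the expected output in the question.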
Can You Please Help Me With The Calculations For ... | Chegg.com Can you please help me with the calculations for the three variances in this equation? Factory Overhead Cost Variances The following data relate to factory overhead cost for the production of 8,000 computers: Actual: Variable factory overhead $217,300 Fixed factory overhead 91,000 Standard: 8,000 hrs. at $35.00 280,000 If productive capacity of 100% was 13,000 hours and the factory overhead cost budgeted at the level of 8,000 standard hours was $315,000, determine the variable factory overhead Controllable Variance, fixed factory overhead volume variance, and total factory overhead cost variance. The fixed factory overhead rate was $7.00 per hour. Use the minus sign to enter favorable variances as negative numbers. Controllable variance: Volume Variance: Total Factory Overhead Cost