content
stringlengths
86
994k
meta
stringlengths
288
619
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Need help in LMC coding: Calculate the sum from 1 to a number inputted by users For example, if user inputs 5, calculate 1+2+3+4+5 and output 15... • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. @KonradZuse @Callisto ? Best Response You've already chosen the best response. I hv the flowchart idk how to convert into code? Best Response You've already chosen the best response. You could start by choosing one symbol from the flowchart and code for it. Then do it again and again until you have all the symbols coded. Then you just have to "smooth out" the code between the symbols. When in doubt, decompose (break down) the problem into smaller problems. Eventually a solution will appear. Best Response You've already chosen the best response. @ParthKohli need help!! Best Response You've already chosen the best response. inp sto value out add N brp value sub value out bra value add n hlt its not compiling.. >.< Best Response You've already chosen the best response. INP STA SUM LOOP SUB ONE STA TEMP LDA SUM ADD TEMP STA SUM LDA TEMP BRZ QUIT BRA LOOP QUIT LDA SUM OUT HLT ONE DAT 1 SUM DAT TEMP DAT Best Response You've already chosen the best response. thanks meepi, can u explains the branching part? Best Response You've already chosen the best response. Best Response You've already chosen the best response. First we store the input in SUM Then we substract 1 from the accumulator, and store the new number in TEMP next we load SUM into the accumulator, and add TEMP and we store this in SUM again then we load temp into the accumulator BRZ check if temp is 0, in which case we're finished (5 + 4 + 3 + 2 + 1 + 0, we have the sum), and if it is we branch to QUIT otherwise we use BRA to go back to the top of the loop and repeat the process Best Response You've already chosen the best response. alright so u took temp as a variable to store the new #? Best Response You've already chosen the best response. each iteration of the loop temp gets lowered by 1 and added to the sum in SUM if temp is 0 we're done temp is used as a countdown from input to 0, and each time the loop runs we add it to sum and substract 1 from it Best Response You've already chosen the best response. got it now! thank you very much for your help! I appreciated! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/511d8a08e4b03d9dd0c45c8a","timestamp":"2014-04-18T23:47:37Z","content_type":null,"content_length":"58069","record_id":"<urn:uuid:8a46e9d6-518d-4069-acbe-7c3155d383f5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Use a Pie Chart to Find Percentages and Amounts A pie chart, which looks like a divided circle, shows you how a whole object is cut up into parts. Pie charts are most often used to represent percentages. For example, the following figure is a pie chart representing Eileen’s monthly expenses. You can tell at a glance that Eileen’s largest expense is rent and that her second largest is her car. Unlike a bar graph, the pie chart shows numbers that are dependent upon each other. For example, if Eileen’s rent increases to 30% of her monthly income, she’ll have to decrease her spending in at least one other area. Here are a few typical questions you may be asked about a pie chart: • Individual percentages: What percentage of her monthly expenses does Eileen spend on food? Find the slice that represents what Eileen spends on food, and notice that she spends 10% of her income • Differences in percentages: What percentage more does she spend on her car than on entertainment? Eileen spends 20% on her car but only 5% on entertainment, so the difference between these percentages is 15%. • How much a percent represents in terms of dollars: If Eileen brings home $2,000 per month, how much does she put away in savings each month? First notice that Eileen puts 15% every month into savings. So you need to figure out 15% of $2,000. Solve this problem by turning 15% into a decimal and multiplying: 0.15 2,000 = 300 So Eileen saves $300 every month.
{"url":"http://www.dummies.com/how-to/content/use-a-pie-chart-to-find-percentages-and-amounts.html","timestamp":"2014-04-20T05:05:29Z","content_type":null,"content_length":"50949","record_id":"<urn:uuid:9ea15ac8-7d7c-4398-a867-e0aa321c5d72>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Motion Along a Straight Line In this chapter we will study one dimensional kinematics i.e. how objects move along a straight line: The following parameters will be defined: Average velocity Average Speed Instantaneous velocity Average and instantaneous acceleration We will learn how direction of acceleration with respect to direction of velocity, affects the magnitude of velocity. How to compute velocity from displacement vs. Time graph. How to compute acceleration from velocity vs. Time graph. We will develop the kinematic equations that give us the velocity and position at any time for constant acceleration. We will learn to compute displacement from velocity vs. Time graph for any value of acceleration. We will learn to compute change in velocity from acceleration vs. Time graph for any value of acceleration. What is a one dimensional motion? Observe (click play) the motion of three disks. What is common in motions of these discs? Can we say that all the three were moving in one dimension? Answer is NO. Why? When an object moves in a straight line along a single axis. Such a motion is called one-dimensional motion. Can we tell, which of the three discs were performing one dimensional motion? Blue is moving in a straight line along Vertical axis (y-axis). Green is moving in straight line along horizontal axis (x-axis). Therefore we can say that only blue and green are performing one-dimensional motion. Forces cause motion but we will not discuss cause of motion in this chapter, we will only discuss the motion itself and change in motion. These discs could represent any rigid object or a particle. What is a rigid object? An object whose shape stays same as it moves is called a rigid body or object. An ideal rigid body is nonexistent. An object in which the atomic forces are so strong that the little force needed to move it do not bend it, can be considered a rigid body. How we locate an object? We try to find its position relative to some reference point (origin or zero point) of an axis. The positive direction of the axis is in the direction of increasing numbers, which is in the right The opposite is the negative direction. Position is a vector quantity as it has direction (+ or – in one dimension) and magnitude. What is the position of Green, Orange and Blue disks? Green is -4 m, Red is -2 m and Blue is +4 m. What is the position of red disc? Move (click play) it to a new position. Displacement is the change from one position to another position . Displacement is a vector quantity with direction (+ or – in one dimension) and magnitude. How displacement is computed? Displacement is computed by subtracting initial position from final position . Symbol Δ (uppercase delta), represents a change in quantity (here it is change in position) What will be displacement if object moves from to ? What is the direction of motion? Positive sign of Δx indicates that the motion is along the positive x-direction What will be displacement if the object moves from to ? What is the direction of motion? Negative sign of Δx indicates that the motion is along the negative x-direction Displacement vs. Total Distance Watch the motion of red disk below (click play). Blue arrow is the instantaneous displacement. The disk moved from initial position to . Even though the total distance (dotted line) covered is 10 m the displacement is still Δx = 0. Now watch the motion of blue disk (click play) below. Red arrow is the displacement. The disk moved from initial position to . Even though the total distance (dotted line) covered is 8 m the displacement is Δx = 2m. The actual distance for a trip is irrelevant as far as the displacement is concerned Position time Graph Position of a moving object (blue disk) can be plotted as a function of time. Note down position of moving disk (blue) for different values of time t. These various noted values of points (t, x) can be plotted as a graph with position x(t) at a given time t on Vertical axes and time t on horizontal axes. This is called time versus position graph or position time graph. Stationary Object When the position of an object or a particle does not change with time, it is called a stationary object or particle. What we observed? In other words displacement of the particle is zero with time. Position time graph of a stationary object is parallel to time axes. The position of the object stays at x=-5 m for 10 s. Average Velocity Average velocity , is the ratio of the displacement Δx that occurs during a particular time interval Δt to that interval. On a position x versus t graph, is the slope of the straight line (Magenta line) that connects two particular points on the x (t) curve : one point is (, ), and other point is Average Speed Average speed , is the ratio of the total distance covered during a particular time interval Δt to that interval. Average speed does not include direction, it lacks any algebraic sign. Some time is the same (except for the absence of sign) as . As in the following example. In the above example Δx = 5 m and total distance covered is also 5 m. However (following example) when an object double back on its path, the two can be quite different. In the above example Δx = 2 m (red arrow) while total distance covered is 8 m. Instantaneous Velocity and Speed Instantaneous velocity v is the change in particle position x with time at a given instant. How to compute instantaneous velocity? First calculate average velocity around the given instant (red circle). Now shrink the time interval Δt (using Δt slider) closer and closer to 0. As Δt becomes shorter and shorter the average velocity approaches a limiting value, which is the instantaneous velocity (or simply velocity). Mathematically v is the derivative of x with respect to t. In graphical term, velocity v at given instant is the slope of the particle’ s position - time curve at the point representing that instant. Instantaneous speed (Speed) is the magnitude of velocity. Time vs. Velocity Graph We are given a position time graph, how can we obtain velocity time graph from this? We can measure slope of the curve at each time position (red circle). Value of slope is velocity v at that instant (time). These various noted values of points (t, v) can be plotted as a graph with velocity v(t) on Vertical axes and time t on horizontal axes. When a particle's velocity changes, the particle is said to undergo acceleration (or to accelerate). For motion along an axis, the average acceleration over a time interval Δt is defined as is the particle velocity at time and is particle velocity at . The instantaneous acceleration (or simply acceleration) is the derivative of the velocity with respect to time: Acceleration of a particle at any instant is the rate at which its velocity is changing at that instant. Units of Acceleration We found that acceleration is derivative of velocity with time. Now velocity is derivative of position x(t) with time, therefore at any instant acceleration is the second derivative of its position x(t) with respect to time. Unit of acceleration is the m/(s.s) or . Information from sign of Acceleration Positive sign of acceleration indicates it is acting to the right and negative sign indicates it is acting to the left. Just form the sign of acceleration, can we tell whether an object's speed is increasing or decreasing? Answer is "No"., Why? We need to know sign of velocity too. If the signs of the velocity and acceleration of a particle are the same, the speed of the particle increases. If the signs are opposite, the speed decreases. Time vs. Acceleration Graph We have a velocity time graph, how can we obtain acceleration time graph from this? We can measure slope of the curve at each time position (red circle). Value of slope is acceleration a at that instant (time). These various noted values of points (t, a) can be plotted as a graph with acceleration a(t) on Vertical axes and time t on horizontal axes. Constant Acceleration Let us explore how various graphs look like for different values of velocity and acceleration. When acceleration is constant, how the slope of velocity time graph change with time? It is constant. Equations of Motion (Kinematic Equations) When the acceleration is constant, the average acceleration is equal to the instantaneous acceleration. Here is the velocity at time t=0 and v is the velocity at any later time t. For constant acceleration the average velocity over a time interval t can also be given as the average of the initial velocity and final velocity v. When is the initial position and x is the position at time t, then average velocity is given as Combining above equations we get Kinematic Equations For Constant acceleration, with the help of above two equations we can find following list of equations of motion. Equation Quantity Missing All these equations are only valid for constant acceleration. Free Fall Acceleration Free fall acceleration is an example of constant acceleration. When we toss an object either up or down and could somehow eliminate the effects of air on its flight, we would find that the object accelerates downward at a constant rate. That rate is called the free-fall acceleration, and its magnitude is represented by g. The free-fall acceleration is negative: that is, downward on the y axis, toward Earth's center. The free-fall acceleration near Earth's surface is , and the magnitude of the acceleration is . Do not substitute for g. Graphical Integration in Motion Analysis Can we calculate the displacement, if we know time versus velocity graph representing motion of a particle? Yes, we can compute the displacement between any time interval ( to ) Velocity v is defined as derivative of position with time. In other words for a very small interval of time (Δt→0), the displacement Δx can be given as Example 1: Suppose the time versus velocity plot of a moving object is as follow. If we want to compute displacement from initial time to final time , we can divide this time into four equal parts of Δt=0.5s each (Select Show). Now total displacement will be the sum of displacement in these four small intervals. To increase the accuracy of our calculations, we need to make Δt as small as possible, therefore the displacement will be For further accuracy we can make Δt very small (Δt→0) and Fundamental Theorem of Calculus tells us that Example 2 : When we have a graph of an object's time versus acceleration, we can integrate on the graph to find the object's velocity at any given time (If initial or final velocity is also known). Here plot of time versus acceleration of a moving object (acceleration is not constant) is given. Graphical methods are valid, whether acceleration is constant or not.
{"url":"http://www.rakeshkapoor.us/ClassNotes/MotionAlongAStraightLine.html","timestamp":"2014-04-20T05:42:36Z","content_type":null,"content_length":"42847","record_id":"<urn:uuid:b69d8771-ba7d-46ea-b11e-f245c386ad4f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Fachbereich Physik 162 search hits Lateral quantization of spin waves in micron size magnetic wires (1997) Christoph Mathieu Jörg Jorzick Andre Frank Serguei Demokritiv Burkhard Hillebrands We report on the observation of quantized surface spin waves in periodic arrays of magnetic Ni81Fe19 wires by means of Brillouin light scattering spectroscopy. At small wavevectors (q_1 = 0 - 0.9*100000 cm^-1 ) several discrete, dispersionless modes with a frequency splitting of up to 0.9 GHz were observed for the wavevector oriented perpendicular to the wires. From the frequencies of the modes and the wavevector interval, where each mode is observed, the modes are identified as dipole-exchange surface spin wave modes of the film with quantized wavevector values determined by the boundary conditions at the lateral edges of the wires. With increasing wavevector the separation of the modes becomes smaller, and the frequencies of the discrete modes converge to the dispersion of the dipole-exchange surface mode of a continuous film. Calculation of Wannier-Bloch and Wannier-Stark states (1998) Markus Glück A. R. Kolovsky Hans Jürgen Korsch Nimrod Moseyev The paper discusses the metastable states of a quantum particle in a periodic potential under a constant force (the model of a crystal electron in a homogeneous electric ,eld), which are known as the Wannier-Stark ladder of resonances. An ecient procedure to ,nd the positions and widths of resonances is suggested and illustrated by numerical calculation for a cosine potential. Mode beating of spin wave beams in ferrimagnetic Lu2.04Bi0.96Fe5O12 films (1999) Oliver Büttner Martin Bauer Christoph Mathieu Serguei Demokritov Burkard Hillebrands P.A. Kolodin M.P. Kostylev S. Sure H. Dötsch V. Grimalsky Yu. Rapoport A.N. Slavin Absract: We report on measurements of the two-dimensional intensity distribtion of linear and non-linear spin wave excitations in a LuBiFeO film. The spin wave intensity was detected with a high-resolution Brillouinlight scatteringspectroscopy setup. The observed snake-like structure of the spin wave intensity distribution is understood as a mode beating between modes with different lateral spin wave intensity distributions. The theoretical treatment of the linear regime is performed analytically, whereas the propagation of non-linear spin waves is simulated by a numerical solution of a non-linear Schrödinger equation with suitable boundary conditions. On the Mass Difference of Neutrinos (1995) Martin Greulach We calculate a relative neutrino mass difference of Delta m / m = 6 10^-9 at the one loop level in a two flavor model. If we combine our result with recently published possible solutions to the solar neutrino problem we can estimate a neutrino mass range of m = (0,12-0,19) eV . Universal and non-universal behavior in Dirac spectra (1998) M. E. Berbenni-Bitsch M. Göckeler S. Meyer A. Schäfer J. J. M. Verbaarschot T. Wettig We have computed ensembles of complete spectra of the staggered Dirac operator using four-dimensional SU(2) gauge fields, both in the quenched approximation and with dynamical fermions. To identify universal features in the Dirac spectrum, we compare the lattice data with predictions from chiral random matrix theory for the distribution of the low-lying eigenvalues. Good agreement is found up to some limiting energy, the so-called Thouless energy, above which random matrix theory no longer applies. We determine the dependence of the Thouless energy on the simulation parameters using the scalar susceptibility and the number variance. Nonvacuum Preudoparticles, Quantum Tunneling and Metastability (1995) Jui-Qing Liang H.J.W. Müller-Kirsten It is shown that nonvacuum pseudoparticles can account for quantum tunneling and metastability. In particular the saddle- point nature of the pseudoparticles is demonstrated, and the evaluation of path-integrals in their neighbourhood. Finally the relation between instantons and bounces is used to derive a result conjectured by Bogomolny and Fateyev. BRST-Invariant Approach to Quantum Mechanical Tunneling (1995) Jui-Qing Liang H. J. W. Müller-Kirsten F. Zimmerschied Jiang-Ge Zhou A new approach with BRST invariance is suggested to cure the degeneracy problem of ill defined path integrals in the path- integral calculation of quantum mechanical tunneling effects in which the problem arises due to the occurrence of zero modes. The Faddeev-Popov procedure is avoided and the integral over the zero mode is transformed in a systematic way into a well defined integral over instanton positions. No special procedure has to be adopted as in the Faddeev-Popov method in calculating the Jacobian of the transformation. The quantum mechanical tunneling for the Sine-Gordon potential is used as a test of the method and the width of the lowest energy band is obtained in exact agreement with that of WKB calculations. Correlation between structure and magnetic anisotropies of Co on Cu(110) (1999) Jürgen Fassbender Christoph Mathieu Burkard Hillebrands G. Güntherodt R. Jungblut J. Kohlhepp M.T. Johnson D.J. Roberts G.A. Gehring Magnetic anisotropies of MBE-grown fcc Co(110)-films on Cu(110) single crystal substrates have been determined by using Brillouin light scattering(BLS) and have been correlated with the structural properties determined by low energy electron diffraction (LEED) and scanning tunneling microscopy (STM). Three regimes of film growth and associated anisotropy behavior are identified: coherent growth in the Co film thickness regime of up to 13 Å, in-plane anisotropic strain relaxation between 13 Å and about 50 Å and inplane isotropic strain relaxation above 50 Å. The structural origin of the transition between anisotropic and isotropic strain relaxation was studied using STM. In the regime of anisotropic strain relaxation long Co stripes with a preferential [ 110 ]-orientation are observed, which in the isotropic strain relaxation regime are interrupted in the perpendicular in-plane direction to form isotropic islands. In the Co film thickness regime below 50 Å an unexpected suppression of the magnetocrystalline anisotropy contribution is observed. A model calculation based on a crystal field formalism and discussed within the context of band theory, which explicitly takes tetragonal misfit strains into account, reproduces the experimentally observed anomalies despite the fact that the thick Co films are quite rough. A new look at the RST model (1996) Jui-Qing Liang H. J. W. Müller-Kirsten F. Zimmerschied Jiang-Ge Zhou The RST model is augmented by the addition of a scalar field and a boundary term so that it is well-posed and local. Expressing the RST action in terms of the ADM formulation, the constraint structure can be analysed completely. It is shown that from the view point of local field theories, there exists a hidden dynamical field 1 in the RST model. Thanks to the presence of this hidden dynamical field, we can reconstruct the closed algebra of the constraints which guarantee the general invariance of the RST action. The resulting stress tensors TSigma Sigma are recovered to be true tensor quantities. Especially, the part of the stress tensors for the hidden dynamical field 1 gives the precise expression for tSigma . At the quantum level, the cancellation condition for the total central charge is reexamined. Finally, with the help of the hidden dynamical field 1, the fact that the semi-classical static soluti on of the RST model has two independent parameters (P,M), whereas for the classical CGHS model there is only one, can be explained. Significance of zero modes in path-integral quantization of solitonic theories with BRST invariance (1996) Jui-Qing Liang H. J. W. Müller-Kirsten F. Zimmerschied Jiang-Ge Zhou D. H. Tchrakiana The significance of zero modes in the path-integral quantization of some solitonic models is investigated. In particular a Skyrme-like theory with topological vortices in (1 + 2) dimensions is studied, and with a BRST invariant gauge fixing a well defined transition amplitude is obtained in the one loop approximation. We also present an alternative method which does not necessitate evoking the time-dependence in the functional integral, but is equivalent to the original one in dealing with the quantization in the background of the static classical solution of the non-linear field equations. The considerations given here are particularly useful in - but also limited to -the one-loop approximation.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15998/start/0/rows/10/doctypefq/preprint","timestamp":"2014-04-18T03:21:57Z","content_type":null,"content_length":"51910","record_id":"<urn:uuid:939ea46b-4da5-428c-be09-8e2de8b6135a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7543212 - Low-density parity-check (LDPC) encoder This application claims priority of U.S. provisional application Ser. No. 60/609,644, filed Sep. 13, 2004, and entitled “A Radiation Tolerant, 2 Gb/s (8158,7136) Low-Density Parity-Check Encoder”, by the same inventors. This application incorporates U.S. provisional application Ser. No. 60/609,644 in its entirety by reference. The present invention relates to a method of and an apparatus for signal encoding and error correction. In particular, the present invention relates to a method of and apparatus for encoding using low-density parity-check code. A problem that is common to all data communications technologies is the corruption of data. The likelihood of error in data communications must be considered in developing a communications technology. Techniques for detecting and correcting errors in the communicated data must be incorporated for the communications technology to be useful. Error correcting codes are employed on data transmission and data storage channels to correct or avoid bit errors due to noise and other channel imperfections. As applied to information theory and coding, an error-correcting code is a code in which each data signal conforms to specific rules of construction so that departures from this construction in the received signal can generally be automatically detected and corrected. Error correcting codes are used to detect and/or correct single-bit errors, double-bit errors, and multi-bit errors. In information theory, the Shannon-Hartley theorem states the maximum amount of error-free digital data that can be transmitted over a communication link with a specified bandwidth in the presence of noise interference. The Shannon limit of a communications channel is the theoretical maximum information transfer rate of the channel. The Shannon-Hartley theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. This theorem does not describe how to construct the error-correcting method. Instead, the theorem indicates the maximum efficiency of the error-correcting method used. Two main classes of error-correcting codes are block codes and convolutional codes. Convolutional codes operate on serial data, one or a few bits at a time. Block codes operate on relatively large (typically, up to a couple of hundred bytes) message blocks. There are a number of conventional convolutional and block codes currently in use, and a number of algorithms for decoding the received coded information sequences to recover the original data. Error correction, using an error correcting code, improves the reliability of communication systems and devices. In conventional encoding methods, an encoder at the transmission end of a communication encodes an input word, such as a block or vector of a given length, to produce a codeword associated with the error correction code. A conventional transmitter stores in its memory one or a small number of algorithms to produce codewords of a certain code. At the receiving end of the communication, a decoder decodes the received codeword to generate an estimation of the original input word. A channel code is a set of codewords (e.g., binary vectors) which are easily distinguishable from one another, even in the presence of noise, so that transmission errors are avoided. To facilitate encoding and decoding, binary linear codes are generally used. This means that the set of codewords C is a certain k-dimensional subspace of the vector space F^n [2 ]of binary n-tuples over the binary field F[2]={0, 1}. Thus, there is a basis B={g[0], . . . g[k-1]} which spans C so that each cεC may be written as c=u[O]g[O]+u[1]g[1]+ . . . +u[k-1]g[k-1 ]for some {u[i]} in F[2]. More compactly, c=uG where u={u[0], u[1], . . . , u[k-1]} is the k-bit information word and G is the k×n generator matrix whose rows are the vectors {g[i]} (as is conventional in coding, all vectors are row vectors). Further, the (n-k) dimensional null space C^⊥ of G is spanned by the basis B^⊥={h[0], h[1], . . . , h[n-k-1]}. Thus, for each cεC, cH^T=0, where H is the (n−k)×n parity-check matrix whose rows are the vectors {h[i]}. The parity-check matrix H performs m=n−k separate parity checks on a received word. A low-density parity-check code (LDPC) is a linear block code for which the parity-check matrix H has a low density of I 's. The sparseness of an LDPC code's H matrix makes it amenable to a near-optimal decoding procedure called belief propagation. Using Low Density Parity Check (LDPC) codes for error correction generates parity check codes that have a predetermined number of elements having a value of one in rows and columns of the parity check codes. Parity data is then generated based on the parity check codes. That is, in the coding method based on the LDPC codes, a parity check matrix H having a predetermined number of elements that include a value of one in its rows and columns is formed, and a codeword x satisfying the equation Hx=0 is obtained. The codeword x includes original data and parity data. There are several known techniques for generating the generator matrix G. These include Hamming codes, BCH codes, and Reed-Solomon codes. Another known code is a low density parity check (LDPC) code, developed by Gallager in the early 1960's. With block codes, a parity check matrix H of size (n−1)×n exists such that the transpose of H (e.g., H^T), when multiplied by G, produces a null set, or G×H ^T=0. The decoder multiplies the received codeword c (m×G=c) by the transpose of H, e.g., c×H^T. The result, often referred to as the “syndrome,” is a 1×(n−k) matrix of all 0's if c is a valid Virtually all LDPC code design procedures involve specification of the H matrix after which an encoding procedure is devised. A generator matrix G may be obtained from H via Gauss-Jordan elimination, but the resulting encoding operation via c=uG is still too complex for long codes. To simplify the operation, a technique has been proposed in which encoding is performed directly from H. Alternatively, cyclic and quasi-cyclic (QC) LDPC codes have been proposed which lend themselves to lower-complexity shift-register based encoder implementations. The low-complexity advantage derives from the fact that the H matrix is composed of circulant submatrices. What is needed is a simpler method of utilizing LDPC codes in encoding procedures. What is also needed is a simpler method of generating a G matrix to be used in such encoding procedures. The encoder chip of the present invention uses LDPC codes to encode input message data at a transmitting end, thereby generating a series of codewords. The message data and the generated codewords are then transmitted to a receiving end. At the receiving end, the received codewords are decoded and checked for errors. To generate the codewords, the encoder applies a generator matrix G to the input message data. Therefore, the generator matrix G is needed to implement the encoder. As described above, the G matrix is obtained from an H matrix via Gauss-Jordan elimination. As such, the first step in generating a G matrix is to define an H matrix. An H matrix is initially defined as a plurality of circulant sub-matrices. The H matrix is a 16×2 configuration, and each of the sub-matrices is a circulant matrix. In general, the H matrix is an arbitrarily defined matrix. The G matrix is formed by manipulating the H matrix according to a 4-step algorithm. First, the H matrix is re-configured by shifting each column two columns to the right. Second, the H matrix from step 1 is forced to upper triangular form. Third, a determined row from the H matrix in step 2 is made circular. Fourth, a parity matrix P is determined from the H matrix in step 3, where the G matrix defined by G=[I:P]. The encoder chip includes input registers, a parity encoder, and a data output multipexor. Message data is input to the input registers, and corresponding parity data is generated by the parity encoder. The message data and the parity data are output from the data output multiplexor. FIG. 1 illustrates a block diagram of an encoder of the present invention. FIG. 2 illustrates a data flow register included within the bit unpacker. FIG. 3 illustrates a first embodiment of the parity generator from the encoder of FIG. 1. FIG. 4 illustrates a second embodiment of the parity generator from the encoder of FIG. 1. The present invention is described relative to the several views of the drawings. Where appropriate and only where identical elements are disclosed and shown in more than one drawing, the same reference numeral will be used to represent such identical elements. Embodiments of the present invention are directed to a radiation tolerant, encoding chip that implements two low-density parity-check (LDPC) codes. The first LDPC code is a (4088,3360) code (4K) which is shortened from a (4095,3367) cyclic code. The second LDPC code is a quasi-cyclic (8158,7136) code (8K). An encoder chip utilizing the present invention can be programmed with other LDPC codes. Also included on the chip is a CCSDS (Consultative Committee for Space Data Systems) standard randomizer as well as a CCSDS standard attached synchronization marker (ASM) section. The architecture for a parity generator and the derivation of the sub-code which enables this architecture is described in detail below. Code Description The (8158,7136) code is shortened from a geometry-based (8176,7154) quasi-cyclic code designed to have an error floor no higher than 10^−10 BER (bit error rate). The parent code is specified by a 1022×8176H matrix containing 32 right circulant sub-matrices as follows: H=C[1,1 ]C[1,2 ]C[1,3 ]. . . C[1,14 ]C[1,15 ]C[1,16 ] □ C[2,1 ]C[2,2 ]C[2,3 ]. . . C[2,14 ]C[2,15 ]C[2,16 ] Each C[i,j ]matrix is a 511×511 right-circulant matrix containing two “ones” per row and two “ones” per column. In general, each matrix C[i,j ]can be different, although one, some, or all of the them can be the same. The column positions of the “ones” for the first row of the sixteen C[1,j ]right-circulant matrices of one implementation of the invention, are as follows: The column positions within each first row of matrix C[1,1 ]are numbered as (0, 1, 2, . . . , 509, 510), the column positions within each first row of matrix C[1,2 ]are numbered as (511, 512, . . . , 1020, 1021), and so on for each matrix C[ij]. As such, the columns s for each “one” correspond to the numbered column positions. For example, the first row in matrix C[1,1 ]includes “ones” in column positions 0 and 176, and the first row of matrix C[1,2 ]includes “ones” in column positions 523 and 750. Since each matrix C[1,j ]is a right-circulant matrix, the second and subsequent rows follow accordingly from the first rows of each C[1,j ]matrix shown above. The column positions of the ones for the first row of the sixteen C[2,j ]right-circulant matrices are as follows: Since each matrix C[2,j ]is a right-circulant matrix, the second and subsequent rows follow accordingly from the first row of each C[2,j ]matrix shown above. Section A. Sub-Code Derivation The sub-code (8158,7136) can be derived based on the observation that the (8176,7154) code contains two degenerate rows in the H matrix. A degenerate row is a linear sum of at least one other row. The generator matrix G is formed by manipulating the H matrix according to the following 4-step algorithm: Step 1: Set up an initial H′ by column positionswapping within the H matrix. Specifically, each column within the H matrix is right-shifted two columns as follows: □ H′=C[1,15 ]C[1,16 ]C[1,1 ]. . . C[1,13 ]C[1,14 ] ☆ C[2,15 ]C[2,16 ]C[2,1 ]. . . C[2,13 ]C[2,14 ] Step 2: Force the H′ matrix from step 1 into upper triangular form. □ for each row i of H′ (i=1 to 1022) ☆ if H′[i,i ]equals 0 ○ find row where H′[j,i ]equals 1 (j=i+1 to 1022) ○ if row found ○ else ■ This row is degenerate ■ H′[i,j]=0 for j=1 to 1022 ■ H′[i,j]=1 for j=1023 to 8176 ■ go to next row ○ end if ☆ for each row j of H′ (j=i+1 to 1022) ☆ endfor □ endfor Step 3: Make row 1022 circular. Since row 1022 is degenerate, pick values for row 1022 such that row 1021 is a circulant shift of row 1022. This is done following the circular subroutine shown in section B below. Step 4. Solve for H′=[I:P′] form. foreach row i of H′ (i=1021 to 1) □ if i equals 511 □ foreach row j (j=i−1 downto 1) □ endfor The generator matrix G is then G=[I:P] where P=P′^T, and P consists of twenty-eight 511×511 right-circulant matrices such that: $G = I 0 … 0 0 P 1 , 0 P 1 , 1 0 I … 0 0 P 2 , 0 P 2 , 1 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 0 0 … I 0 P 13 , 0 P 13 , 1 0 0 … 0 I P 14 , 0 P 14 , 1$ Section B. Circular Subroutine In general, let J be an arbitrary 511×511 matrix. Row 511 of each of the 511×511 matrices is degenerate and the matrix can be made circular by applying the following algorithm: $J = J 1 , 1 J 1 , 2 … J 1 , 510 J 1 , 511 J 2 , 1 J 2 , 2 … J 2 , 510 J 2 , 511 ⋮ ⋮ ⋮ ⋮ ⋮ J 510 , 1 J 510 , 2 … J 510 , 510 J 510 , 511 J 511 , 1 J 511 , 2 … J 511 , 510 J 511 , 511$ The values for J[5],I, are calculated according to the following recursion: □ J[511,1]=0 □ foreach i (i=2 to 511) □ endfor Encoder Architecture A block diagram of an encoder of the present invention is shown in FIG. 1. The encoder 10 includes an input pipe register 12, a parity encoder 30, a parity encoder 38, an encoder output MUX 22, a data output MUX 24, a randomizer 26, and a marker control 28. The parity encoder 30 is a 4 k parity encoder and includes a 4 k input block 32, a 4 k look ahead 34, and a 4 k parity register 36. The parity encoder 38 includes a bit packer 14, a bit unpacker 16, a multiply-accumulate (MAC) control 18, and a parity generator 20. The parity encoder 38 is a 8 k parity encoder. The bit packer 14 is a 8 k 16-bit to 21-bit packer. The bit unpacker 16 is a 8 k 21-bit to 7-bit unpacker. The MAC control 18 is a 8 k MAC control. The parity generator 20 is a 8 k parity generator. Message data is input 16-bits at a time to the input pipe registers 12 via the data input bus (DI). Data to be encoded is framed by the encode enable (EE) signal. The (8158,7136) code is one of two codes on the encoder 10 and is selected by holding the select 8K (S8) input high. The message data and parity output by the encoder 10 is randomized if randomize enable (RE) is asserted. An optional internal frame marker precedes the message data if the marker enable (ME) is asserted. Data passes through the encoder 10 without being encoded using a bypass mode if bypass enable (BE) is asserted. Message data output on the data output bus (DO) is framed by output enable (OE). Valid marker (VM) flags, if correct, an external frame marker added during a bypass operation. During operation of the encoder, a 1 Gb/s bit rate was achieved using a 64 MHz system clock. Alternatively, a 2 Gb/s bit rate was achieved using a 128 MHz system clock. When no frame marker is inserted, 446 sixteen-bit data words are input to the encoder and passed out of the encoder with nine clock cycles of latency. When the two-word frame marker is prepended, data has an eleven clock cycle latency. 64 sixteen-bit words follow, of which, 63 are parity bits. The final 16-bit output word contains the last 14 bits of parity with 0's appended. Incoming message data is input to the encoder 10 16-bits at a time. A set of twenty eight 511×511 circulant sub-matrices multiply the incoming data according to c=uG where u is a block of 7136-bits of incoming message data prepended with 18 zeros. To minimize the size of the multiply-accumulate block (MAC) necessary to perform this calculation, partial product multiplication is used. Since 511= 7×73, data is packed by the bit packer 14 into a word which is an integer multiple of 7. To avoid buffering any significant portion of the incoming data, the packed word must be greater than 16-bits, the number of bits input into the encoder 10 at a time. Choosing the packed word length to be 21 minimizes the size of the multiply-accumulate block. The bit packer 14 includes a state machine and a packing register. The state machine has 22 states. The packing register is a 36-bit packing register. The state machine controls the packing of the data into the packing register according to Table I TABLE 1 Present Next Stored Data Data Excess MAC State State Data In Out Data Halt 00 01 18 + 16 − 21 = 13 01 02 13 + 16 − 21 = 8 02 03 8 + 16 − 21 = 3 03 04 3 + 16 − = 19 HALT 04 05 19 + 16 − 21 = 14 05 06 14 + 16 − 21 = 9 06 07 9 + 16 − 21 = 4 07 08 4 + 16 − = 20 HALT 08 09 20 + 16 − 21 = 15 09 0A 15 + 16 − 21 = 10 0A 0B 10 + 16 − 21 = 5 0B 0C 5 + 16 − 21 = 0 0C 0D 0 + 16 − = 16 HALT 0D 0E 16 + 16 − 21 = 11 0F 0F 11 + 16 − 21 = 6 0F 10 6 + 16 − 21 = 1 10 11 1 + 16 − = 17 HALT 11 12 17 + 16 − 21 = 12 12 13 12 + 16 − 21 = 7 13 14 7 + 16 − 21 = 2 14 15 2 + 16 − = 18 HALT 15 01 18 + 16 − 21 = 13 Since the LDPC code used by the parity encoder 38 is shortened from an (8176,7154) code to an (8158,7136) code, the 18 prepended zero's are added in the initialization state (shown as present state 00 in Table 1). The remaining 21-states of the bit packer 14 are then cycled through thereby packing the input message data. Whenever less than 21 bits are available in the packing register, the downstream processing is halted for a clock cycle, as indicated by ‘HALT’ in Table 1. The 21-bit packed data is sent from the bit packer 14 to the bit unpacker 16. Since 511=24×21+7, the 21-bit data must first be adjusted to a 7-bit format at the boundary between circulant matrices before the data is sent to the MAC control 18. The bit unpacker 16 includes a 49-bit data flow register, which is illustrated in FIG. 2. The 49-bit data flow register is used to adjust the data flow across the boundaries at each 511^th bit. The bit unpacker 16 also includes a five-state sequencer (LC[2:0]) that controls the source of data from the 49-bit data flow register to supply an input (MNZ[20:0] in FIG. 1)) to a MAC module (MAC 206 and MAC 208 in FIGS. 3 and 4) in the parity generator 20, according to Table 2. TABLE 2 LC = 0 LC = 2 LC = 4 → 1 LC = 3 → 0 MAZ [20:14] Register 3 Register 5 Register 7 → 4 Register 6 → 3 MAZ [13:7] Register 2 Register 4 Register 6 → 3 Register 5 → 2 MAZ [6:0] Register 1 Register 3 Register 5 → 2 Register 4 → 1 The bit unpacker 16 also includes a word counter (WC[4:0]) that counts the 21-bit words and flags the last word (LWC) which contains the final 7 bits of data in the first 511-bit block, the final 14 bits in the second block and the final 21 bits in the third block. The five-state sequencer (LC[2:0]) rotates through the 5 states for every three of the 511-bit blocks of data, also adjusting for halts required by the bit packer 14. The five-state sequencer is reset after the fourteenth 511-bit block of data is processed. In general, the configuration of the bit packer 14 and the bit unpacker 16 is a function of the structure of the H matrix. In the implementation described above where the H matrix comprises 511×511 matrices, the bit packer is configured as a 16-bit to 21-bit packer and the bit unpacker is configured as a 21-bit to 7-bit unpacker. In alternative implementations in which the H matrix is structured differently, the bit packer and the bit unpacker are configured according to the structure of the H matrix. In certain structures of the H matrix, for example an H matrix comprising 512×512 matrices, a bit packer and a bit unpacker are not needed, as in the case of the 512×512 matrix, 512 is divided evenly by 16-bits. A first embodiment of the parity generator 20 (FIG. 1) is illustrated in FIG. 3. The parity generator is configured into two blocks, each block including a generator polynomial register 214, 216, a multiply-accumulate arithmetic unit (MAC) 206, 208, and a parity register 210, 212. To reconstruct the generator polynomial, only the coefficients for the first rows of the right-circulant matrices need to be known. These coefficients are stored in two 14×511 generator coefficient ROMs 202, 204, which are implemented utilizing subfunction encoding. The generator coefficient ROMs 202, 204 are mux-based ROMs. A 4-bit circulant matrix counter (CMC[3:0]) maintains a count of which 511-bit block of message data is present and provides the ROM address within the generator coefficient ROMs 202, 204 to select the appropriate initial row coefficients. The 28 generator polynomials are then, in turn, generated by right-shifting the initial coefficients in the polynomial generator register 214, 216. Such a function requires 1,022 flip flops to generate and store the two generator polynomials. By observing that the same mathematics can result from simply left-shifting the parity register 210 , 212 before applying the next 21 bits of data to the MAC module 206, 208, the two 511-bit generator polynomial registers 214, 216 can be eliminated, as shown in FIG. 4. FIG. 4 shows a second embodiment of the parity generator 20. A general multiply-accumulate block (MAC) 226, 228 is used rather than a dedicated look-ahead block to allow arbitrary generator polynomials to be programmed by the coefficient ROM. The MAC 226, 228 operates on the message data n=21 bits at a time. A straight forward design of the MAC 226, 228 results in a linear delay path of n+1=22 levels. Since propagation through the MAC 226, 228 is a critical timing path for the encoder 10, the logic is transformed into a tree form with a logic depth given by the next integer greater than log[2](n)+1. Using this tree form of logic, the delay path is reduced to 6 levels of logic, greatly improving the speed of the encoder 10. The parity registers 210, 212 are controlled according to Table 3 below. TABLE 3 LWC MPC [1:0] Parity Register 210 Parity Register 212 000 Hold Hold 001 Load 21 bits Load 21 bits left shift left shift 010 Load 0's Load 0's 011 Shift 16, drag 0's Shift 16, drag MP [510:495] 1xx Load 7 bit Load 7 bit left shifted P80 left shifted P81 Five basic functions are performed by the parity register 210, 212. First, the contents are held during halt states. This corresponds to state 000 in Table 3. Second, the results of the MAC operation in MAC 206, 208 are loaded while simultaneously shifting 21 bits to the left during the first 24×21=504 bits of each 511-bit block. This corresponds to state 001 in Table 3 and allows elimination of a register for storing the next row of the G matrix. Third, the results of the MAC operation in MAC 206, 208 are loaded while simultaneously shifting 7-bits left at the boundary between 511-bit blocks of the message data. This corresponds to state 1xx in Table 3. Fourth, the parity register is reset when data is not being encoded. This corresponds to state 010 in Table 3. Fifth, since no new message data is input to the encoder 10 for at least (1022+2)/16=64 clock cycles to allow the parity bits to be shifted out, most of the parity register 210, 212 can also be used as the output shift register. A shadow register for only the most significant 80 bits is needed in the output path to allow calculations on the parity for the next code word to begin while the final five 16-bit words of parity for the previous message are shifted out. This eliminates the need for a separate 1,022-bit output register. This corresponds to state 011 in Table 3. The reduction of and functional sharing of flip flops within the encoder of the present invention results in improved efficiency. Compared with other conventional encoding schemes for quasi-cyclic LDPC codes, in particular a shift-register-adder-accumulator (SRAA) encoding scheme and a two-stage encoder encoding scheme described by Li et al. in “Efficient Encoding of Quasi-Cyclic LDPC Codes”, the encoder of the present invention uses significantly fewer flip flops, less than 5.2% of the flip flops required by the SRAA architecture and less than 18.3% of the flip flops required by the two-stage encoder of Li et al., to achieve similar throughput. When comparing the encoder of the present invention to encoding schemes with similar numbers of flip flops, the encoder for the present invention achieves significantly faster data rates. Table 4 below shows a comparison the clock-cycles and flip flop count between the encoder of the present invention and the SRAA architecture and the two-stage encoder of Li et al. TABLE 4 Architecture clock-cycles flip flop count Conventional Two-Stage 1,533 7,665 (Li et al.) 511 8,176 Conventional SRAA 7,154 2,044 511 28,616 Present Encoder 511 1,492 Before proper circuit operation can begin, the encoder 10 is initialized by bringing the reset input (R) high for at least two clock pulses. At this time, it is also necessary to bring the encoder enable (EE) inactive low to ensure no spurious messages are processed. The marker enable input (ME) is either brought high to enable the frame marker or brought low to disable the frame marker. Since the encoder 10 includes two encoders, the S8 input is held high to enable the parity encoder 38, and the S8 input is held low to enable the parity encoder 30. The marker enable input (ME) and the input S8 remain in the selected states until another initialization occurs. Once initialization is completed, the reset input (R) is brought low. Two clock pulses after the reset input (R) is brought low, circuit operation commences. The encoder 10 can be re-initialized at any time, but any messages being processed by the encoder 10 at that time are lost. Zeros are clocked into the parity generator whenever the encoder enable (EE) input is low. Encoding is performed by operating on message data presented to the encoder 10 16-bits at a time on the data input bus (DI). The first bit of the message data is presented on the most significant bit of the data input bus (DI) with the first data word. Encoder enable (EE) input is brought high coincidentally with the first word of message data to be encoded. Encoder enable (EE) input remains high until the 446 words of message data have been clocked into the encoder 10 on the data input bus (DI). If marker enable input (ME)=0, then the input message data appears on the output data bus (DO) nine clock cycles after being presented on the input data bus (DI). If marker enable input (ME)=1, then the input message data follows the prepended frame marker and appears on the output data bus (DO) eleven clock cycles after being presented on the input data bus (DI). Data is clocked in and out of the encoder 10 on the rising edge of the clock input (CK). Encoder enable input (EE) is brought low when the last message data has been clocked into the input registers 12. If ME=0, then encoder enable input (EE) remains low for at least 64 clock cycles. If ME=1, then marker enable input encoder enable input (EE) remains low for at least 66 clock cycles. While the encoder enable input (EE) remains low, 64 sixteen-bit words are clocked into the input register 12, of which 63 are parity bits. The last 14 bits of parity are DO[15:2] of the 64′ parity word. DO[0:1] of the last parity word are zeros. If marker enable input (ME)=1, holding encoder enable input (EE) low for at least 66 clock cycles allows room for the frame marker to precede the following block of message data. The preceding operation also fills the parity generator 20 with zeros. If encoder enable input (EE) is held low longer than 64/66 clock cycles, then zeros will appear on the data output bus (DO). Bringing encoder enable input (EE) high after it has been held low for 64/66 or more clock cycles starts the processing of the nest set of message data. After a reset operation, or after a message data block has been encoded, and the parity read from the encoder 10, and if marker enable input (ME)=0, a data bypass operation can occur. Message data can flow through the encoder 10 without being encoded by bringing bypass enable input (BE) high coincidentally with the first word of message data to be passed unprocessed through the encoder 10. After the nine clock cycles of latency, the message data entering on input data bus (DI) appears on output data bus (DO) and continues to pass through the chip as long as bypass enable input (BE) remains high. While bypass enable input (BE) is high, encoder enable input (EE) must be held low to hold the parity registers 210, 212 in the parity generator 20 reset. A bypass operation can be used to insert an externally generated frame marker, which is described in greater detail below. The randomizer 26 includes a sequence generator that utilizes a pseudo-random sequence generated by h(x)=x^8+x^7+x^5+x^3+1. If randomizer input enable (RE)=1, then this sequence is applied to the message data and parity data before being output the output data bus (DO). The 255-bit sequence is bitwise exclusive ORed with the code word. The sequence generator is initialized to all “ones” at the beginning of each code word. The frame marker is not affected by the randomizer 26 regardless of the state of randomizer enable input (RE). If randomizer enable input (RE)=1, then zeros appearing on the output data bus (DO) between code words are randomized. The sequence generator begins operation on the first bit of message data in the codeword. If randomizer enable input (RE)=1 and the bypass mode is also enabled (bypass enable input (BE)=1), then the message data that is bypassed is processed through the randomizer 26. An unrandomized frame marker can be included in the randomized bypassed message data by bringing randomizer enable input (RE) low coincident with the first word of unrandomized frame marker and bringing randomizer enable input (RE) high after the last word of frame marker. If marker enable input (ME)=1, then the marker control 28 generates a 32-bit frame marker that is output in two 16-bit words. This frame marker functions as the attached synchronization marker for non-turbo-coded data specified in the CCSDS standard as the following hex sequence: 1ACF FCID. This frame marker is output in the two clock cycles preceding the first word of message data in a code An externally generated frame marker can be introduced with a bypass operation. Whenever a bypass operation immediately precedes an encoding operation, if the last two 16-bit words of bypassed data are 1 ACF followed by FCID, then the valid marker output (VM) output will be a 1 as these two words are output on input data bus (DO). The latency of message data through the encoder 10 is nine clock cycles if marker enable input (ME)=0 and eleven clock cycles if marker enable input (ME)=1. Thus, nine/eleven clock cycles after encoder enable input (EE) is brought high, the output enable (OE) goes high specifying the start of message data in a valid code word. When output enable (OE) goes low, the encoder 10 is outputting the parity bits. Thus, output enable (OE) functions to frame the message data and is only high when message data is being output. The various combinations of high/low enablement bits, clock cycles, and data flow between the various components related to particular functions of the encoder are described above for exemplary purposes only. It is contemplated that other combinations can be used to achieve the same, or additional functions, of the encoder of the present invention. Radiation Tolerant (RT) chip operation is achieved through the application of Radiation Hardness By Design (RHBD) techniques. These include Single Event Upset (SEU) immune flip flop designs utilizing data storage redundancy with conflict-free fault detection and correction capability internal to each memory cell. To enable the ‘single event’ assumption in design of these cells, nodes in the flip flop cells, which if simultaneously upset would latch the effect of the upset, are separated by large enough spacings to avoid this potential problem. The separation between cells is also increased to insure that the minimum spacing required to avoid simultaneous upset is maintained between cells and rows. Elimination of the capture of Single Event Transients (SETs) occurring in the combinational logic that may arrive at a flip flop input coincident with the storage triggering clock edge is achieved through the combination of an RT design characteristic providing redundant fault tolerant inputs to the flip flop and a delay element connected between the combinational logic and one of the redundant inputs. With this design, a SET dissipates at the non-delayed input before it arrives at the delayed input, thus insuring that the transient is not presented to both redundant fault tolerant inputs simultaneously and the effect of the transient is not stored. The RT library cells are drawn with continuous guard bars through the middle of the cells. Filler cells and end cap cells are used to insure a continuous guard ring around and through the rows of cells. This guard ring eliminates Single Event Latch-up (SEL). Embodiments of the encoder are described above in terms of 511×511 right-circulant matrices. Alternatively, circulant matrices of size 512×512 can be used, which eliminates the packer and unpacker and allows the MAC to operate as a power of two, which would allow a smaller circuit and faster processing speed. Still alternatively, instead of the encoder implementing an 8K LDPC code, a 16K LDPC code can be used by making the circulant matrices powers of two. The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of the principles of construction and operation of the invention. As such, references herein to specific embodiments and details thereof are not intended to limit the scope of the claims appended hereto. It will be apparent to those skilled in the art that modifications can be made to the embodiments chosen for illustration without departing from the spirit and scope of the invention.
{"url":"http://www.google.com/patents/US7543212?ie=ISO-8859-1","timestamp":"2014-04-20T21:49:06Z","content_type":null,"content_length":"134152","record_id":"<urn:uuid:a86096a9-2e27-4421-be3d-71b9ea6c9505>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Maxwell equations in Lorenz gauge I assumed that it was static because that's what you said it was. If we introduce a time-dependent charge distribution, we have to use the retarded Green function to solve the wave equation. The solution is now [tex] V(\mathbf{x},t) = \int \frac{\delta(t'-t + |\mathbf{x}-\mathbf{x}'|/c)}{|\mathbf{x}-\mathbf{x}'|} \rho(\mathbf{x}',t') d\mathbf{x}' dt' .[/tex] This is derived in, for example, Ch 6 of Jackson. First, apologies for my poor phrasing: I didn't mean to suggest you should have read my mind. I'm familiar with the derivation of the Wave equation green function, and that's another thing that's puzzled me -- the Green function seems to say, as you point out, that the potential at point r goes from 0 to 1/r, then stays at 1/r for all time -- which not only sounds reasonable (since the charge appearing at t=0 is jut as discontinuous), but agrees with observations. So perhaps my problems are stemming from my lattice-based simulation, where the charge is acting as a constant source, adding to the V around it, and since there is no dissipation / damping, it never goes away. Or, it might also have something to do with the fact I'm only updating it in the plane, and the 2D Green function isn't a spreading delta function, but IIRC a log function of some sort. To give some background, I originally thought my simulation was broken, since V was monotonically increasing everywhere (while E stabilized to a constant value) -- but I started reading this ( ), and he mentions adding a empirically-determined damping factor at the (artificial, Sommerfield-condition-ed) boundary "to achieve the 1/r dependence"; before I'd seen this, I'd found that adding a damping term at/near the boundary had a similar effect, but rejected it outright as non-physical. Anyway, thanks for taking the time to respond. I'll play with this some more and will undoubtedly be back with more dumb questions with crucial unstated assumptions ;)
{"url":"http://www.physicsforums.com/showthread.php?p=3865286","timestamp":"2014-04-16T22:05:45Z","content_type":null,"content_length":"52021","record_id":"<urn:uuid:45a3868e-7188-4416-bfb2-f58dfc08e320>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Bohr radius [itex]\psi (x) \to 0[/itex] as [itex]x \to \infty[/itex] [itex]\psi (0) = 0[/itex] But why is that? Can anyone explain? The first condition is necessary because the wavefunction must be normalisable. Meaning it must have a finite integral over all space. If it tended to anything other than 0, then this would not be the case. The second condition is necessary because you are talking about an atom. Atoms have a positive nucleus. There is no probability of finding an electron at exactly the centre of the nucleus.
{"url":"http://www.physicsforums.com/showthread.php?p=3849763","timestamp":"2014-04-19T04:32:46Z","content_type":null,"content_length":"35619","record_id":"<urn:uuid:653bc73a-2fef-4b5a-a357-c65f2a036b43>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
st: calculating effect sizes when using svy command Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: calculating effect sizes when using svy command From "Vogt, Dawne" <Dawne.Vogt@va.gov> To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu> Subject st: calculating effect sizes when using svy command Date Tue, 7 Dec 2010 15:31:34 -0500 I have two questions related to calculating effect sizes using svyreg (pweights): First, when doing unweighted regressions in SPSS, I like to provide effect sizes for each predictor by calculating a correlation coefficient value (r) from the t values provided in the output. I like using r because it is easy for most people to interpret. Can I do the same using svyreg output? My second question is related to the first. Since there is no correlation option under the svy commands, I have been computing regressions of Y on X and X and Y and using the largest p value of the two sets of results, as recommended elsewhere. I've having trouble figuring out how to convert the results provided in the output to a correlation coefficient though. I noticed that the r value I get by taking the square root of the R squared is different from my own hand calculation of r derived from the t value provided in the regression output [sqrt of (t squared divided by t squared + df). I'm not sure which r is correct (or if either of them are correct). Thanks in advance for any guidance others may be able to offer. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-12/msg00285.html","timestamp":"2014-04-18T03:19:23Z","content_type":null,"content_length":"8673","record_id":"<urn:uuid:95f8facc-2e53-4353-ab26-f55c2ed1e86e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Optimized sum of squares josef.pktd@gmai... josef.pktd@gmai... Tue Oct 20 12:16:18 CDT 2009 On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben <gruben@bigpond.net.au> wrote: > Hi Gaël, > If you've got a 1D array/vector called "a", I think the normal idiom is > np.dot(a,a) > For the more general case, I think > np.tensordot(a, a, axes=something_else) > should do it, where you should be able to figure out something_else for > your particular case. Is it really possible to get the same as np.sum(a*a, axis) with tensordot if a.ndim=2 ? Any way I try the "something_else", I get extra terms as in np.dot(a.T, a) > Gary R. > Gael Varoquaux wrote: >> On Sat, Oct 17, 2009 at 07:27:55PM -0400, josef.pktd@gmail.com wrote: >>>>>> Why aren't you using logaddexp ufunc from numpy? >>>>> Maybe because it is difficult to find, it doesn't have its own docs entry. >> Speaking of which... >> I thought that there was a readily-written, optimized function (or ufunc) >> in numpy or scipy that calculated the sum of squares for an array >> (possibly along an axis). However, I cannot find it. >> Is there something similar? If not, it is not the end of the world, the >> operation is trivial to write. >> Cheers, >> Gaël > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-October/046103.html","timestamp":"2014-04-18T07:07:00Z","content_type":null,"content_length":"4580","record_id":"<urn:uuid:03fdb7ba-0ea6-48e8-9633-2595594ae277>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
I need help with MatLab!! March 15th 2012, 09:06 AM #1 Mar 2012 I need help with MatLab!! Hi, I have to use Matlab for numerical analysis and don't have a clue how to use it or what to do. Was wondering if I could get some help with this: Consider a population of a very large number M of individuals. The distribution n(s) representing their height (in meters) is given by a Gauss function characterised by the mean value h (hbar) of the height and the standard deviation (sigma) n(s) = (M/sigma*(sqrt(2pi))*exp(-(s-hbar).^2/2/sigma^2) This means that N[h,h+h] =Integral of n(s) ds with limits of (h,h+h) is the number of individuals whose height is between h and h > 0. (a) Write a Matlab M-function with 6 input • two heights H1 and H2, • the standard deviation, the mean and the total number of individuals, • an even number (the number of sub-intervals), and 2 outputs, the total number of individuals between the heights H1 and H2 computed by both CTR and CSR. Thanks, emcg22 Last edited by emcg22; March 15th 2012 at 09:39 AM. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-math-topics/195995-i-need-help-matlab.html","timestamp":"2014-04-18T01:18:45Z","content_type":null,"content_length":"30311","record_id":"<urn:uuid:2485ffe6-ec2a-4697-8d74-151584cad525>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
What Tree Type Is This? 01-16-2013 #1 Registered /usr Join Date Aug 2001 Newport, South Wales, UK I'm looking for the word(s) to describe a particular concept of the way I want to handle a region of memory, so I can perhaps get a handle on the algorithm required to do it (I have written something, but it seems to fail after six or so relatively small regions are allocated). Say I have a fairly large fixed-size region of memory. The size is a power of 2, allowing for the region to be visualised as a square in 2D space. I want to allocate regions of different sizes from this. They can be rectangular in shape, but the length of any side cannot exceed that of the largest free region. The sides are also a power of The process for allocating a region involves looking at a tree of elements. Each element can be divided into two equally-sized regions if required. The tree is searched for an element with the same width and height as the requested region. If the current element is too large, but has children, the search continues by evaluating the child elements. If the current element is too large, but doesn't have children, the element is split in two. The search continues by evaluating the new child elements. Also, if two adjacent elements that are subsequently freed can form a larger element, they are merged. I thought it was an R-tree, but the description doesn't seem to match what I was thinking. Is this more like Binary Space Partitioning (BSP)? I also suspect that a fair amount of sorting is required for this to be particularly efficient. My current algorithm goes down the first available node chain it can find and so doesn't cluster properly. I suck. Just some thoughts to clarify understanding. > Say I have a fairly large fixed-size region of memory. The size is a power of 2, allowing for the region to be visualised as a square in 2D space. Say 8x8 > I want to allocate regions of different sizes from this. They can be rectangular in shape, but the length of any side cannot exceed that of the largest free region. The sides are also a power of 2. Say 4x2 > The process for allocating a region involves looking at a tree of elements. Each element can be divided into two equally-sized regions if required. So we could have 2x2 and 2x2 or 4x1 and 4x1 > If the current element is too large, but doesn't have children, the element is split in two. The search continues by evaluating the new child elements. Does this happen recursively, so you might end up reducing a 4x2 into 8 individual 1x1 elements? Example, a 4x4 has been allocated. Do you now have - Three free regions, each 4x4 - A 4x4 in the top-right, and 8x4 in the bottom half - A 4x4 in the bottom-left, and 4x8 in the right half If the next request is for 2x8, which case would you end up with? - element is not split, but the free space is now non-contiguous - element is not split, but the free space is contiguous (if irregular) - element is split, but the free space is uniform and contiguous If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut. If at first you don't succeed, try writing your phone number on the exam paper. I support http://www.ukip.org/ as the first necessary step to a free Europe. Either the second or third answer is what I would expect (at the end of the insertion, adjacent regions that can be combined are combined). Which of these, is a good question, the only thing I would say is that I would favour filling from the top to the bottom. One of the first two. It's not important that the free space remains contiguous, but the allocated region must be contiguous. And if you want to know the application for this, it's packing subtextures into a full-size OpenGL texture. Just a small correction/note: if the memory must be able to be represented as a square and not just a rectangle, then you need an even power of two. For example, 8 is a power of 2, but you can't make a square out of it, not with integer side lengths. Wikipedia has a list of space partitioning data structures for you to check out: List of data structures - Wikipedia, the free encyclopedia. I scanned over most of them very quickly, and while I didn't find your exact data structure, I did feel like a quad tree was perhaps most similar to what you are looking for. Note, I'm not a data structures expert, and I did not read all articles thoroughly, so I'm sure I missed plenty. Yes, that clarifies the spec a bit. No fractional lengths. Cross-referencing the Wikipedia list with "rectangles" on Google, I came across a Stack Overflow question that was dealing with a similar problem, one of the respondents identified it as a k-d Then someone else disagreed. Hnnngggh. I think the sticking point is just how you choose to split the area of the node. The algo that I wrote yesterday just does it alternately (horz, then vert, then horz, etc.) with no logic applied, which isn't optimal. If you decide based on the shape of the first request, then the tree is biased towards similar shapes. Quadtree is probably more sensible, as splitting squares into squares creates no bias. However, if your input is nothing but rectangles then you'll have quite a bit of empty space that can't be Reread the quad tree article, the regions don't have to be squares, though they often are, for simplicity in examples or teaching. Using rectangles adds a bit of complexity, but a very small one, K-d tree was my second choice and it may work well for what you want, but I'm less familiar with them so I think I was more hesitant to suggest them. Both offer easy ways to split regions into subspaces represented by child nodes in the tree, and easy way to "merge" neighboring small regions, by simply deleting the two subtrees representing that region. Reread the quad tree article, the regions don't have to be squares, though they often are, for simplicity in examples or teaching. Using rectangles adds a bit of complexity, but a very small one, Regardless of the article....a quad tree when used in computer graphics or spatial partioniing is all about squares. A quad tree for spatial partioning will partition a region into 4 equal parts and each of those into 4 equal parts. The key is equal parts. An octree partitions a region into 8 equal parts. Quadtrees are often used in terrain based games and simulators like FSX. Octrees are often used in games for collision detection, lighting, etc. Sparse octrees can be used to represent an enormous amount of detail in voxel rendering. Quad trees are plane based and Octrees are usually volume based. Last edited by VirtualAce; 01-24-2013 at 06:57 PM. Regardless of the article....a quad tree when used in computer graphics or spatial partioniing is all about squares. A quad tree for spatial partioning will partition a region into 4 equal parts and each of those into 4 equal parts. The key is equal parts. An octree partitions a region into 8 equal parts. Quadtrees are often used in terrain based games and simulators like FSX. Octrees are often used in games for collision detection, lighting, etc. Sparse octrees can be used to represent an enormous amount of detail in voxel rendering. Quad trees are plane based and Octrees are usually volume based. A quad tree when used in computer graphics or spatial partitioning is all about...well, spatial partitioning. Quadtrees and octrees will partition a region into equal parts, except when you design them to partition a region into non-equal parts, which is perfectly valid, if atypical. Google "irregular quadtree decomposition", and you'll see plenty of applications of quadtrees that don't decompose into squares. 01-17-2013 #2 01-17-2013 #3 Registered /usr Join Date Aug 2001 Newport, South Wales, UK 01-17-2013 #4 Registered User Join Date Nov 2010 Long Beach, CA 01-17-2013 #5 Registered /usr Join Date Aug 2001 Newport, South Wales, UK 01-17-2013 #6 Registered User Join Date Nov 2010 Long Beach, CA 01-24-2013 #7 01-25-2013 #8 Registered User Join Date Nov 2010 Long Beach, CA
{"url":"http://cboard.cprogramming.com/tech-board/153760-what-tree-type.html","timestamp":"2014-04-23T23:52:34Z","content_type":null,"content_length":"74105","record_id":"<urn:uuid:5934a44d-99df-4d4b-9cd3-e9f9d8991b42>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic regression analysis for gene discovery and pattern recognition for non-cyclic short time-course microarray experiments • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Bioinformatics. 2005; 6: 106. Quadratic regression analysis for gene discovery and pattern recognition for non-cyclic short time-course microarray experiments Cluster analyses are used to analyze microarray time-course data for gene discovery and pattern recognition. However, in general, these methods do not take advantage of the fact that time is a continuous variable, and existing clustering methods often group biologically unrelated genes together. We propose a quadratic regression method for identification of differentially expressed genes and classification of genes based on their temporal expression profiles for non-cyclic short time-course microarray data. This method treats time as a continuous variable, therefore preserves actual time information. We applied this method to a microarray time-course study of gene expression at short time intervals following deafferentation of olfactory receptor neurons. Nine regression patterns have been identified and shown to fit gene expression profiles better than k-means clusters. EASE analysis identified over-represented functional groups in each regression pattern and each k-means cluster, which further demonstrated that the regression method provided more biologically meaningful classifications of gene expression profiles than the k-means clustering method. Comparison with Peddada et al.'s order-restricted inference method showed that our method provides a different perspective on the temporal gene profiles. Reliability study indicates that regression patterns have the highest reliabilities. Our results demonstrate that the proposed quadratic regression method improves gene discovery and pattern recognition for non-cyclic short time-course microarray data. With a freely accessible Excel macro, investigators can readily apply this method to their microarray data. Microarray time-course experiments allow researchers to explore the temporal expression profiles for thousands of genes simultaneously. The premise for pattern analysis is that genes sharing similar expression profiles might be functionally related or co-regulated [1]. Due to the large number of genes involved and the complexity of gene regulatory networks, clustering analyses are popular for analyzing microarray time-course data. Heuristic-based cluster analyses group genes based on distance measures; the most commonly used methods include hierarchical clustering [2], k-means clustering [3], self-organizing maps [4], and support vector machines [5]. Due to the lack of statistical properties of these heuristic-based clustering methods, statistical models, especially analysis of variance (ANOVA) models and mixed models are often implemented as a precursor to clustering to ensure the genes used for clustering are statistically meaningful [6,7]. Only genes identified to be significantly regulated by statistical models are used for further clustering. Fitting statistical models prior to clustering usually dramatically reduces the number of genes used for clustering, which in general will improve the performance of the clustering method. An alternative way of clustering is statistical model-based clustering methods, which assume that the data is from a mixture of probability distributions such as multivariate normal distributions and describe each cluster using a probabilistic model [8,9]. In microarray time-course studies, time dependency of gene expression levels is usually of primary interest. Since time can affect the gene expression levels, it is important to preserve time information in time-course data analysis. However, most methods for analyzing microarray time-course data treat time as a nominal variable rather than a continuous variable, and thus ignore the actual times at which these points were sampled. Peddada et al. (2003) proposed a method for gene selection and clustering using order-restricted inference, which preserves the ordering of time but treats time as nominal [1]. Recently, a number of algorithms treating time as a continuous variable have been introduced. Xu et al. (2002) applied a piecewise regression model to identify differentially expressed genes [10]. Both Luan and Li (2003) and Bar-Joseph et al (2003) proposed B-splines based approaches [11,12], which are appropriate for microarray data with relatively long time-course, but their application to short time-course data is questionable. New methods for analyzing short time-course microarray data are needed [13]. In this paper, we propose a model-based approach, step down quadratic regression, for gene identification and pattern recognition in non-cyclic short time-course microarray data. This approach takes into account time information because time is treated as a continuous variable. It is performed by initially fitting a quadratic regression model to each gene; a linear regression model will be fit to the gene if the quadratic term is determined to have no statistically significant relationship with time. Significance of gene differential expression and classification of gene expression patterns can be determined based on relevant F-statistics and least squares estimates. Major advantages of our approach are that it not only preserves the ordering of time but also utilizes the actual times at which they were sampled; it identifies differentially expressed genes and classifies these genes based on their temporal expression profiles; and the temporal expression patterns discovered are readily understandable and biologically meaningful. A free Excel macro for applying this method is available at http://www.mc.uky.edu/UKMicroArray/bioinformatics.htm[14]. The proposed quadratic regression method is applied to a microarray time-course study of olfactory receptor neurons [15]. Biologically meaningful temporal expression patterns have been obtained and shown to be more effective classifications than ANOVA-protected k-means clusters. Comparison with Peddada et al.'s order-restricted inference method [1] showed that our method provides a different and interesting insight into the temporal gene profiles. Reliabilities of the results from all 3 methods were assessed using a bootstrap method [16] and regression patterns were shown to have the highest Step-down quadratic regression We propose a step-down quadratic regression method for gene discovery and pattern recognition for non-cyclic short time-course microarray experiment. The first step is to fit the following quadratic regression model to the j^th gene: y[ij ]= β[0j ]+ β[1j]x + β[2j]x^2 + ε[ij ] (1) where y[ij ]denotes the expression of the j^th gene at the i^th replication, x denotes time, β[0j ]is the mean expression of the j^th gene at x = 0, β[1j ]is the linear effect parameter of the j^th gene, β[2j ]is the quadratic effect parameter of the j^th gene, and, ε[ij ]is the random error associated with the expression of the j^th gene at the i^th replication and is assumed to be independently distributed normal with mean 0 and variance α[0 ]and α[1], need to be pre-specified, where α[0 ]to is recommended to be small to reduce the false positive rate in the gene discovery and α[1 ]less stringent to control pattern classification. α[0 ]could be chosen using various multiple testing p-value adjustment procedures, for example, False Discovery Rate (FDR) [17]. The temporal gene expression patterns can be determined as follows: 1. If overall model (1) p-value >α[0], the j^th gene is considered to have no significant differential expression over time. The expression pattern of the gene is "flat". 2. If overall model (1) p-value ≤ α[0], the j^th gene will be considered to have significant differential expression over time. The patterns are then determined based on the p-values obtained from F tests (Table (Table11). Type I Sum of Squares used to construct F test for pattern determination. a. If both p-value of quadratic effect ≤ α[1 ]and p-value of linear effect ≤ α[1], the j^th gene is considered to be significant in both the quadratic and linear terms. The expression pattern of the gene is "quadratic-linear". b. If p-value of quadratic effect ≤ α[1 ]and p-value of linear effect >α[1], the j^th gene is considered to be significant only in the quadratic term. The expression pattern of the gene is c. If p-value of quadratic effect >α[1], the j^th gene is considered to be non-significant in the quadratic term. The quadratic term will be dropped and a linear regression model will be fitted to the gene: y[ij ]= β[0j ]+ β[1j]x + ε[ij ] (2) From fitting model (2), • If p-value of linear effect ≤ α[1], the j^th gene is considered to be significant in the linear term. The expression pattern of the gene is "linear". • If p-value of linear effect >α[1], the j^th gene is considered to be non-significant in the linear term. The expression pattern of the gene is "flat". The four expression patterns described above can be furthered classified into 9 patterns according to the up/down regulation of the gene expression based on the least-squares estimates (Table2).2). A flow chart for the above procedure is shown in Figure Figure1.1. This procedure can be easily applied using the Excel macro available at http://www.mc.uky.edu/UKMicroArray/bioinformatics.htm[14]. Flow chart of the quadratic regression method. The gene selection and pattern classification procedure of our quadratic regression method. y[ij ]is the expression level; x is time; β[0j], β[1j], and β[2j ]are the parameters of intercept, ... Determination of gene temporal expression patterns by the proposed regression method. Application of the quadratic regression method Normality test based on Shapiro-Wilk statistics [18] suggested that most of the 3834 present genes in the olfactory receptor neuron data do not have a significant departure from the normal distribution (Figure (Figure2).2). Therefore the quadratic regression method with normality assumption was applied to the data of 3834 present genes (Figure (Figure3),3), where α[0 ]was chosen to be 0.01 and α[1 ]to be 0.05. 798 genes were determined to have significant differential expression over time at level 0.01. Examples of 9 regression patterns are shown in Figure Figure44. Histogram of the Shapiro-Wilk p-values for normality test. The Shapiro-Wilk statistic was applied to the olfactory receptor neuron data for normality test. The horizontal axis is the Shapiro-Wilk p-values, and the vertical axis is the corresponding percentages. ... Flow chart of the filtering steps and quadratic regression analysis on the olfactory receptor neuron data. Our regression method is applied to the olfactory receptor neuron data. At first, Affymetrix quality controls, expressed sequence tags, and genes ... An illustration of the nine temporal expression patterns identified by the quadratic regression method. The horizontal axis is the log transformation of time. The vertical axis is the hybridization signals obtained from the microarrays. The blue dots ... Comparison with Peddada et al.'s method Peddada et al.'s method [1] was applied to the expression data of 3834 present genes with 8 pre-specified profiles: monotone increasing (MI); monotone decreasing (MD); 3 up-down profiles with maximum at the second, third, forth time point (UD2, UD3, UD4); and 3 down-up profiles with maximum at the second, third, forth time point (DU2, DU3, DU4). Based on 4000 bootstrap, 379 genes were classified into one of the 8 pre-specified profiles at significance level 0.01. This indicates that Peddada et al.'s method might be relatively more conservative than regression method by selecting much fewer genes at significance level 0.01. Comparisons of Peddada et al.'s profiles and regression patterns are listed in Table Table3.3. We observe that the majority of genes in MI are in LU, similarly for MD and LD, UD2 and QLCD, and DU2 and QLVU. However, each of the Peddada et al.'s profiles contains a mixture of regression patterns, and vice versa. This is reasonable because even though both methods perform gene selection and classification, they are aimed at different aspects of the temporal profiles. For example, Peddada et al.'s MI profile contains regression patterns LU, QLCU and QLVU. Although the gene expression level is increasing monotonically over time, the regression method gives more information on how it is increased: constantly (LU, Figure Figure5a,5a, Gdp2), increases faster then slower (QLCU, Figure Figure5b,5b, Ccl2), or increases slower then faster (QLVU, Figure Figure5c,5c, Prom1). Peddada et al.'s UD2 profile contains genes that are first up-regulated then down-regulated with maximum at the second time points, which could be classified as regression pattern QLCD in general (Figure (Figure5d,5d, Oazin), but it could also be classified as LD if the expression levels of all time points are close to a line (Figure (Figure5e,5e, Grik5); or classified as QC if the expression profile is close to quadratic (Figure (Figure5f,5f, Ubl1); or classified as QLCU if the expression levels of last 4 time points are much closer than those of the first time point. Similarly, Peddada et al.'s UD3 profile could be classified as regression patterns QC, QLCU, and QLCD (Figure (Figure5g,5g, Bub3; ;5h,5h, Fut9; ;5i,5i, Phgdh). Examples of genes in the comparison among regression patterns and Peddada et al.'s profiles. a. Gdp2; b. Ccl2; c. Prom1; d. Oazin; e. Grik5; f. Ubl1; g. Bub3; h. Fut9; i. Phgdh. The genes in a, b, and c all have Peddada et al.'s MI profile, but are in ... Comparisons of regression patterns and Peddada et al.'s profiles obtained from 3834 present genes. Comparison with ANOVA-protected k-means clustering ANOVA-protected k-means clustering was applied to the expression signals of 3834 present genes. Out of 3834 present genes, 770 were identified to be differentially expressed over time by one way ANOVA (overall model p-value ≤ 0.01). These 770 genes were used for classification by k-means clustering with k = 9 and the distance measure being Pearson correlation coefficient (Table (Table44). Comparisons of regression patterns and k-means clusters obtained from 770 ANOVA significant genes. In order to make the regression patterns comparable with the k-means clusters, the quadratic regression method was applied to the 770 ANOVA significant genes. Table Table44 shows the number of genes in common when comparing each regression pattern with each k-means cluster. An example of a good match between regression patterns and k-means clusters is the QLCD regression pattern and k-means cluster K1. However, in most cases, k-means clusters contain a mixture of regression patterns and the regression patterns are separated into different k-means clusters. For example, genes that have the LU regression pattern are split into 4 k-means clusters (Figure (Figure6a,6a, Bzrp; ;6b,6b, Aqp1; ;6c,6c, Prg; ;6d,6d, Hnrpl). The similarity of the temporal expression profiles in Figure Figure66 indicates that it might be more appropriate to classify these genes into the same group, which occurs using the proposed regression method. Examples in Figure Figure77 show that some k-means clusters are also mixtures of expression profiles in terms of the mean signals (green lines). For example, a down-up-down-up pattern (down-regulated at the second time point, up-regulated at the third time point, etc, in terms of mean signals) appeared in both k-means clusters K5 and K6 (green lines), but are identified to have QLVU regression pattern (Figure (Figure7c,7c, Clu; and and7d,7d, D17H6S56E-5); similarly see Figure Figure7a7a and and7b7b (a, Sfpi1; and b, Anxa2). Once again, the regression method provides better classification. Figure Figure88 is an example of genes with similar expression patterns but different initial starting time of the differential expression (Figure (Figure8a,8a, Psmb6; ;8b,8b, Adora2b). Adora2b clearly starts differential expression later than Psmb6 (see the blue dots in Figure Figure8).8). After the initial starting point (first time point for Psmb6 and second time point for Adora2b), these two genes show similar upward regulation. These two genes were classified into the same regression group, but in different k-means clusters. Based on the above analysis, our regression method is demonstrated to be more appropriate for the classification of temporal gene expression profiles than k-means method. Examples of genes with the same LU regression pattern but in different k-means clusters. a. Bzrp is an example from k-means cluster K2; b. Aqp1 is an example from k-means cluster K5; c. Prg is an example from k-means cluster K6; d. Hnrpl is an example ... Examples of genes with similar expression patterns in terms of mean signal and regression. a. Sfpi1 is an example from k-means cluster K2; b. Anxa2 is an example from k-means cluster K8; c. Clu is an example from k-means cluster K5; d. D17H6S56E-5 is ... Examples of genes with the same regression pattern but different onset of differential expression. a. Psmb6 is an example in k-means cluster K8; b. Adora2b is an example in k-means cluster K5. Adora2b clearly starts differential expression later than ... EASE functional analysis on regression patterns and k-means clusters To further explore the effectiveness of the regression method on gene classification, EASE (Expression Analysis Systematic Explorer) software was used to examine the potential relationship between the biological functions of the genes and their expression patterns [19]. EASE calculates EASE scores (Jackknife one-sided Fisher exact p-values) to identify over-represented gene categories within lists of genes. EASE analysis was applied to each of the 9 regression patterns and 9 k-means clusters that were obtained from the classification of 770 ANOVA significant genes (Table (Table4).4). The results are summarized (see Additional file 1), with part of the information shown in Tables Tables55 and and6.6. The EASE analysis demonstrates that the proposed regression method is more effective for gene classification than the k-means clustering method. Almost all of the regression patterns contain genes mainly from one biological process. For example, LU has 9 over-represented gene categories, 8 of which are involved in immune regulation (Table (Table5).5). The majority of the LU and QLVU gene categories are in the immune regulation category. This suggests that there exist multiple regulatory mechanisms within the immune regulation. The immune regulation in QLVU appears to be a more complex regulatory mechanism for the initial up-regulation of these genes due to the slow upward regulation at early time points of this regression pattern (Figure (Figure5c).5c). The EASE results for the k-means clusters shows that the over-represented gene categories of most k-means clusters are involved in more than one biological process, for example, k-means cluster K5 contains 9 over-represented gene categories, 3 involved in immune regulation, 2 involved in cell death, etc. Notice that the immune regulation category is represented in 4 k-means clusters, which suggests that the immune regulation category is more consolidated in regression patterns than in k-means clusters (Table (Table6).6). Also, by comparing EASE scores in Tables Tables55 and and6,6, one can see that the over-represented gene categories in the regression patterns have, in general, smaller EASE scores than those in the k-means clusters, which further indicates the greater effectiveness of the regression method in pattern classification. Over-represented gene categories in some regression patterns from EASE functional analysis. Over-represented gene categories in some k-means clusters from EASE functional analysis. Reliability analysis Kerr and Churchill (2001) introduced a bootstrap technique to assess the stability of clustering results [16]. We applied the same idea here to assess the reliability of regression patterns, Peddada et al.'s profiles, and k-means clusters. All 3 pattern classification methods were performed on the expression data of 770 ANOVA significant genes to make the results comparable. The reliability curves show that regression patterns have the highest reliability, and k-means clusters have the lowest reliability (Figure (Figure9).9). This suggests that the regression method provides relatively more stable pattern classifications. Reliability curves of regression patterns, Peddada et al.'s profiles, and k-means clusters. The horizontal axis is the reliability (percentage of agreement of the bootstrap results with the original result), and the vertical axis is the corresponding ... Simulation study We investigated the false positive rate (gene specific) of our method via a simulation study. The data were generated randomly from N(0,1), containing expression signals of 10000 "null" genes (no gene differentially expressed over time), with 5 time points and 3 replications per time point per gene. 50 of such data were generated. The regression approach was applied to each gene in each simulated data at α[0 ]= 0.01 and the numbers of significant genes in each of the 50 data were obtained. The average proportion of significance (average false positive rate) is 1.01% with standard deviation 0.01%. This demonstrates that the false positive rate of the regression method is accurate because 1% of 10000 genes would be expected to be significant at 0.01 level by chance. The false positive rates of the regression patterns LU, LD, QC, QV are all approximately equal to 1/6 of the average false positives, and those of QLCU, QLCD, QLVU, and QLVD are all approximately equal to 1/12 of the average false positives. The proposed step-down quadratic regression method is an effective statistical approach for gene discovery and pattern recognition. It utilizes the actual time information, and provides biologically meaningful classification of temporal gene expression profiles. Furthermore, it does not require replication at each time point, which ANOVA-type methods do require. Also, this method can identify genes with subtle changes over time and therefore discover genes that might be undetectable by other methods, eg, ANOVA-type methods. However, there are several limitations to this method. Firstly, it is designed to fit time-course data with a small number of time points. We recommend this method when there are 4 to 10 time points in the data. For an experiment with more time points, spline-type methods [11,12] could be a possible choice; for an experiment with 2 or 3 time points, ANOVA-type method is recommended. Secondly, the 9 regression patterns are rather limited considering the complexity of gene regulatory networks. For example, certain proportion of genes show cubic, "M", and "W" shaped patterns in 211 regression FLAT genes which are ANOVA significant (Table (Table4).4). These patterns could be caused by random chance, but they could also be real patterns. Fitting a higher order polynomial regression model may discover these types of genes profiles. Theoretically, one could fit a 4^th-order polynomial regression model to this data (the highest order of the polynomial one can fit is the number of time points minus one). The model with 4^th-order polynomial will work similarly to connecting the mean at each time point, therefore will provide a good fit to the data with smallest R^2 and minimum Mean Squared Error, compared with lower-order polynomials. However, the purpose of pattern analysis is to cluster the data instead of fitting models, so the quadratic fit is useful even though the goodness of fit may not be great. Also, the use of high-order polynomials (higher than the second-order) should be avoided if possible [20], particularly in cases such as this where the regression coefficients are used primarily for classification. Another issue is the transformation of the experimental time. Transformation should be considered when the sampling time is unequally spaced. The choice for the type of transformation (log-transformation, square-root transformation, etc) is not critical because the resulting pattern classification will in general not be impacted. In the reliability curves, at 95% reliability, regression patterns, Peddada et al.'s profiles, and k-means clusters have 33%, 12%, and 0% of genes, respectively; and at 80% reliability, the percentage of genes are 55%, 32%, and 0%, respectively (Figure (Figure9).9). Even though the regression patterns have the highest reliability, only 33% of genes have 95% reliabilities. We examined the overall model (1) p-values of 770 genes by the regression method and found that genes that have the smallest overall model (1) p-values all have 95% reliabilities. This suggests that we could reduce the level of significance α[0 ]to increase the stability of regression patterns. α[0 ]could be reduced using various multiple testing p-value adjustment procedures, for example, Westfall and Young's step down method [21], and False Discovery Rate (FDR) [17]. Application of the FDR method can be done as follows (assuming FDR is controlled at level of α): let p[(1) ]<p[(2) ]< … <p[(m) ]be the ordered overall model (1) p-values, start from the largest p-value p[(m)], compare each p[(i) ]with α *i/m; let k be the largest i that p[(k) ]≤ α *k/m, conclude p[(1)], …, p[(k) ]to be Both our quadratic regression method and Peddada et al.'s method serve the same overall goal: gene selection and classification. Peddada et al.'s method provides more choices of temporal profiles than our method. While our regression method offers less choice of patterns, it may provide deeper insight into the gene expression profiles than Peddada et al.'s method. Our method distinguishes patterns with different rates of change and provides more information on the relative relationship among the expression levels of all time points. For example, specifying a profile of up-down with maximum at one time point does not provide much information on the relative relationships among other time points (Figure (Figure5).5). A further refinement of Peddada et al.'s method may provide such information about the relationship of other time points besides the maximum/minimum. However, it is less likely to separate the patterns in Figure 5a, b, and and5c5c by their method. Another fact is that Peddada et al.'s method provides exactly the location of the maximum/minimum, whereas our method provides the neighborhood of the location of the maximum/minimum. Furthermore, their method is based on bootstrap, which is computationally intensive. The result of their method, for example, the reliability curves, might be improved by applying more bootstrap, which is 4000 in this paper due to the computational difficulties and time constraints. Moreover, their method depends on the ordering of time but not the actual time at which the samples were taken, whereas the regression method accounts for both. K-means is an iterative clustering algorithm [22]. The first step of this method is to randomly assign the data points to the k clusters. Next, the distance to the center of each cluster is calculated for each data point, and the data point is moved into the closest cluster. This step will be repeated until no data point is moving from one cluster to another. In k-means, the number of clusters, k, needs to be pre-specified. Researchers usually choose several different k and find the one which has the most biologically meaningful clusters. There are methods of finding the "optimal" k, for example, Bayesian Information Criterion [23]. In this paper, k was arbitrarily chosen to be 9. Since the k-means clustering does not perform well (Table (Table4;4; Figures Figures6,6, ,7,7, and and8),8), we investigated different choice of k based on the Bayesian Information Criterion and identified that the "optimal" k is 15. However, as we examined these 15 k-means clusters, the pattern classification does not seem to be improved, the same problem exists as with k = 9. For example, Prom1, Clu, and D17H6S56E-5 (Figure (Figure5c,5c, Figure Figure7c7c and and7d)7d) all have similar temporal profiles and are all classified to be QLVU, but they were separated into 3 of the 15 k-means clusters. This could be related to the distance measure used (Pearson correlation coefficient). As we discovered, genes in the same cluster do not necessarily have higher correlation than genes in different clusters. For example, Sfpi1 and Anxa2 (Figure (Figure7a7a and and7b)7b) are highly correlated (Pearson correlation coefficient is 0.9934) and their expression patterns are similar, but they are in different k-means clusters. A possible reason might be that the time-course in olfactory receptor neuron data is too short for correlation to perform well. Even though there are a total of 15 observations for each gene, correlation calculations are based on the 5 mean signals, which could be too few to describe the relationship between temporal profiles. There is also concern about using correlation as the distance measure. A large correlation coefficient does not necessarily indicate two similarly shaped profiles, nor does a small correlation coefficient necessarily indicate differently shaped profiles [1]. A number of regression algorithms have been proposed recently, which treat time as a continuous variable. Several of them are based on cubic B-splines [11,12]. B-splines are defined as a linear combination of a set of basis polynomials. In order to fit cubic B-splines to time-course data, the entire duration of experimental time needs to be divided into several segments by "knots" (the point to separate segments), and each segment will be fit by cubic polynomial. The successful application of these methods to microarray time-course data depends heavily on having a relatively large numbers of time points. The B-spline based methods will not be effective when there are a small number of time points in the time-course experiment [13]. For a data with 5 time points, cubic B-spline type methods would not be appropriate because it is recommended that there should be at least 4 or 5 experimental time points in each segment [24]. Xu et al used a piecewise quadratic regression model to identify differentially expressed genes [10]. In their approach, expression levels at 0 hour and 2 hours after treatment are fit differently from the rest of time points after treatment. Although appropriate for their data, their method cannot be applied to the dataset used in this paper. The quadratic regression method that we applied to the olfactory receptor neuron data relies on the normality assumption. This is supported by the result of the Shapiro-Wilk normality test, which indicates that most of the genes used for the analysis follow a normal distribution. This might be due to the fact that we removed genes that are called "A" (absent) by Affymetrix across all chips. "A" calls are often assigned to low expression signals, which tend to be non-normal in general. Therefore removing genes with a high proportion of "A" calls may reduce the possibility of violation of the normality assumption, which will then make the test based on distributional assumption more likely to be valid, and thus avoid computational intensive resampling procedures, for example, bootstrap and permutation. If desired, experimenters could also try various types of data transformation to make their data closer to normal when the data are shown to have large departure from normality. However, the log transformation performed on the olfactory receptor neuron data was not to reduce the possible non-normality, but solely to make a fair comparison of our regression method and k-means method because it is the default transformation in Genespring. When the normality assumption (ε[ij ]~ N(0,25] can be used to avoid the distributional assumption. For an experiment with m genes, T time points, and r replications per time point, the bootstrap procedure can be performed in the following way: form the data into a matrix of m × rT, each column in the matrix contains expressions of m genes in one chip and each row contains rT expressions of one gene; randomly draw rT columns with replacement to form a bootstrap sample; apply step-down quadratic regression procedure to the bootstrap sample to obtain F statistics from F tests; repeat the above steps 1000 times to form a bootstrap F distribution for each gene; claim a gene to be significance at level of α if its observed F statistics is greater than the upper (α / 2)^thpercentile or less than the lower (α / 2)^th percentile of its bootstrap F distribution. One concern about using bootstrap here is that the bootstrap F distribution might be too discrete due to the small number of time points. However, the fact that we are bootstrapping both the explanatory and response variables mitigates this issue by using all the data points, not just the time points. Additionally, in a small simulation study, we observed that the bootstrap F distribution is rather smooth (result not shown). The proposed step-down quadratic regression approach is shown to be effective for gene discovery and pattern recognition for non-cyclic short time-course microarray experiment. Major advantages of this method are that it preserves the actual time information, and provides a useful tool for gene identification and pattern recognition. The nine regression patterns, obtained when applied to the olfactory receptor neuron data, are shown to be more reasonable classifications compared to ANOVA-protected k-means clustering method. EASE analysis further showed that our regression patterns are more biologically meaningful than the k-means clusters. Comparison with Peddada et al.'s method showed that our method provides a different perspective on the temporal gene profiles. Reliability study indicates that regression patterns are most reliable. In conclusion, this method should improve gene discovery and pattern recognition for microarray time-course data. With the freely accessible Excel macro, investigators can readily apply this method to their research data. ANOVA-protected k-means clustering One-way ANOVA model y[ijk ]= μ[j ]+ τ[k ]+ ε[ijk ]was fitted to each gene in SAS v9, where y[ijk ]denotes the gene expression level of the j^th gene at the i^th replication of the k^th time point, μ[ j ]denotes the overall mean signal of the j^th gene, τ[k ]denotes the effect of k^th time point, ε[ijk ]denotes the random error associated with the i^th replication at the k^th time point of the j^ th gene and is assumed to be independently distributed normal with mean 0 and variance α[0 ]will be used for k-means clustering. K-means clustering was performed in Genespring V6.1 (Silicon Genetics. Redwood City, CA) with k = 9. The similarity measure was chosen to be Pearson correlation coefficient, which was calculated from vectors of length 5 containing mean signals of 3 replications at each of the 5 time points. 500 additional random clusters were tested and the best clusters were selected by the software. EASE functional analysis EASE software was used to identify the over-represented categories of genes [19]. Gene Ontology Biological Process was chosen as the categorization system in EASE analysis. A functional gene category with an EASE score of less than 0.05 is considered to be over-represented. The EASE software is available at: http://david.niaid.nih.gov/david/ease.htm. Data description The data used here are from a study of olfactory receptor neurons [15]. The goal is to investigate the induction of gene regulation at short time intervals following deafferentation of olfactory receptor neurons by target ablation at 2, 8, 16, and 48 hrs compared with the sham control. Total RNA was isolated from 3 male littermate mice per time point. Following hybridization with Affymetrix GeneChips MGU74Av2, 3 chips per time point, the signals were generated by GeneChip Analysis Suite v5.0. The data was filtered before statistical tests were performed. First, 66 Affymetrix quality control probesets and 6432 expressed sequence tags were removed. Next, the absent call (A) provided by Affymetrix was considered. 2156 genes that are called "A" across all 15 chips were removed from the data. The remaining 3834 present genes were used for the regression analysis (Figure (Figure3).3). The hybridization signals of these 3834 genes were log-transformed in Genespring. Because the time points in this experiment are not equally spaced, ln(t+1) transformation was performed to each of the 5 time points, where t stands for the time point. List of abbreviations ANOVA: analysis of variance. LD: linear down regulated regression pattern. LU: linear up regulated regression pattern. QC: quadratic concave regulated regression pattern. QV: quadratic convex regulated regression pattern. QLCD: quadratic-linear concave down regulated regression pattern. QLCU: quadratic-linear concave up regulated regression pattern. QLVD: quadratic-linear convex down regulated regression pattern. QLVU: quadratic-linear convex up regulated regression pattern. MI: monotone increasing. MD: monotone decreasing. UD2/UD3/UD4: up-down with maximum at the second/third/forth time point. DU2/DU3/DU4: down-up with maximum at the second/third/forth time point. Authors' contributions HL conducted the statistical analyses and drafted the manuscript. ST wrote the Excel macro. ASB and TVG conducted the EASE analysis. TVG and MLG provided the microarray data. AJS supervised the analysis. All authors read and approved the final manuscript. Supplementary Material Additional File 1: Over-represented gene categories in each of the regression patterns and k-means clusters from EASE functional analysis. "Ease Analysis.xls" contains 2 worksheets: worksheet "regression" contains the over-represented gene categories in each of the 9 regression patterns obtained from EASE functional analysis; worksheet "k-means" contains the over-represented gene categories in each of the 9 k-means clusters obtained from EASE functional analysis. We wish to thank Christopher P. Saunders for his help on the statistical analysis, Dr. Kuey-Chu Chen for her help in the application of EASE analysis, Dr. Shyamal D. Peddada for his help in the application of his method, and referees for their thoughtful comments. This work is supported by NIH-AG-016824-23 (TVG), NIH-1P20RR16481-03 (AJS), and NSF-EPS-0132295 (AJS). • Peddada SD, Lobenhofer EK, Li L, Afshari CA, Weinberg CR, Umbach DM. Gene selection and clustering for time-course and dose-response microarray experiments using order-restricted inference. Bioinformatics. 2003;19:834–841. doi: 10.1093/bioinformatics/btg093. [PubMed] [Cross Ref] • Eisen MB, Spellman PT, Brown PO, Botstein D. Cluster analysis and display of genome-wide expression patterns. PNAS. 1998;95:14863–14868. doi: 10.1073/pnas.95.25.14863. [PMC free article] [PubMed] [Cross Ref] • Tavazoie S, Hughes JD, Campbell MJ, Cho RJ, Church GM. Systematic determination of genetic network architecture. Nature Genet. 1999;22:281–285. doi: 10.1038/10343. [PubMed] [Cross Ref] • Tamayo P, Slonim D, Mesirov J, Zhu Q, Kitareewan S, Dmitrovsky E, Lander ES, Golub TR. Interpreting patterns of gene expression with self-organizing maps: Methods and application to hematopoietic differentiation. PNAS. 1999;96:2907–2912. doi: 10.1073/pnas.96.6.2907. [PMC free article] [PubMed] [Cross Ref] • Brown MPS, Grundy WN, Lin D, Cristianini N, Sugnet CW, Furey TS, Ares M, Jr, Haussler D. Knowledge-based analysis of microarray gene expression data by using support vector machines. PNAS. 2000; 97:262–267. doi: 10.1073/pnas.97.1.262. [PMC free article] [PubMed] [Cross Ref] • Wolfinger RD, Gibson G, Wolfinger ED, Bennett L, Hamadeh H, Bushel P, Afshari C, Paules RS. Assessing Gene Significance from cDNA Microarray Expression Data via Mixed Models. Journal of Computational Biology. 2001;8:625–637. doi: 10.1089/106652701753307520. [PubMed] [Cross Ref] • Park T, Yi S-G, Lee S, Lee SY, Yoo D-H, Ahn J-I, Lee Y-S. Statistical tests for identifying differentially expressed genes in time-course microarray experiments. Bioinformatics. 2003;19:694–703. doi: 10.1093/bioinformatics/btg068. [PubMed] [Cross Ref] • Yeung KY, Fraley C, Murua A, Raftery AE, Ruzzo WL. Model-based clustering and data transformations for gene expression data. Bioinformatics. 2001;17:977–987. doi: 10.1093/bioinformatics/ 17.10.977. [PubMed] [Cross Ref] • Pan W, Lin J, Le C. Model-based cluster analysis of microarray gene-expression data. Genome Biology. 2002;3:research0009.0001–research0009.0008. doi: 10.1186/gb-2002-3-2-research0009. [PMC free article] [PubMed] [Cross Ref] • Xu XL, Olson JM, Zhao LP. A regression-based method to identify differentially expressed genes in microarray time course studies and its application in an inducible Huntington's disease transgenic model. Hum Mol Genet. 2002;10:1977–1985. doi: 10.1093/hmg/11.17.1977. [PubMed] [Cross Ref] • Luan Y, Li H. Clustering of time-course gene expression data using a mixed-effects model with B-splines. Bioinformatics. 2003;19:474–482. doi: 10.1093/bioinformatics/btg014. [PubMed] [Cross Ref] • Bar-Joseph Z, Gerber G, Simon I, Gifford DK, Jaakkola TS. Comparing the continuous representation of time-series expression profiles to identify differentially expressed genes. PNAS. 2003;100 :10146–10151. doi: 10.1073/pnas.1732547100. [PMC free article] [PubMed] [Cross Ref] • Bar-Joseph Z. Analyzing time series gene expression data. Bioinformatics. 2004;20:2493–2503. doi: 10.1093/bioinformatics/bth283. [PubMed] [Cross Ref] • The Excel macro for the step-down quadratic regression method http://www.mc.uky.edu/UKMicroArray/bioinformatics.htm • Getchell TV, Liu H, Vaishnav RA, Kwong K, Stromberg AJ, Getchell ML. Temporal profiling of gene expression during neurogenesis and remodeling in the olfactory epithelium at short intervals after target ablation. Journal of Neuroscience Research. 2005;80:309–329. doi: 10.1002/jnr.20411. [PubMed] [Cross Ref] • Kerr MK, Churchill GA. Bootstrapping cluster analysis: Assessing the reliability of conclusions from microarray experiments. PNAS. 2001;98:8961–8965. doi: 10.1073/pnas.161273698. [PMC free article] [PubMed] [Cross Ref] • Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society Series B (Methodological) 1995;57 • SAS v9 online document. Cary, NC, USA: SAS Institute Inc; • Hosack D, Dennis G, Sherman B, Lane H, Lempicki R. Identifying biological themes within lists of genes with EASE. Genome Biology. 2003;4:R70. doi: 10.1186/gb-2003-4-10-r70. [PMC free article] [ PubMed] [Cross Ref] • Montgomery DC, Peck EA, Vining GG. Introduction to linear regression analysis. 3. John Wiley &Sons, Inc; 2001. • Westfall PH, Young SS. Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment. John Wiley &Sons, Inc; 1993. • Hartigan JA. Clustering Algorithms. John Wiley &Sons, Inc; 1975. • Schwarz G. Estimating the dimension of a model. Annals of Statistics. 1978;6:461–464. • Seber GA, Lee AJ. Linear regression analysis, second edition. John Wiley &Sons, Inc; 2003. • Efron B, Tibshirani RJ. An Introduction to the Bootstrap. Chapman and Hall; 1993. Articles from BMC Bioinformatics are provided here courtesy of BioMed Central Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1127068/?tool=pubmed","timestamp":"2014-04-17T22:02:14Z","content_type":null,"content_length":"137374","record_id":"<urn:uuid:3560d92c-3cf2-467e-8eb1-b17de2d35d2c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigenvalues of the product of two matrices Let A = P'EP Let B = Q'DQ where P'P = Q'Q = I AB= P'EPQ'DQ What we would need for the eigenvalues to be a direct product is PQ' = I and P'Q = I. But (PQ')' = QP' = I = P'Q or Q(P'P) = P'QP = Q or P = I which is a different finding then previously indicated. It is true for each corresponding eigenvalue to be a direct product, one must have PQ' = I implying that PQ'Q = Q or P=Q (as Q'Q =I), which is the prior statement (namely, the eigenvectors must be the same) for each eigenvalue k. However, it is clear that: Det(AB) = Det(A)*Det(B) = Det(P'EPQ'DQ) = Det(P'PQ'QED) = Det(I*I*ED) = Det(ED) = Det(E)*Det(D) So, in total, the product of all the eigenvalues of A*B is equal to the product of all the eigenvalues of both A and B, from which one may be able to establish search bounds for individual Note, if the Kth eigenvalue is negative, I claim that by multiplying all entries in the Kth (or, it may be some other) row by minus one, the sign of the negative eigenvalue becomes positive (or, remains negative and another eigenvalue becomes negative) as this matrix modification is known to change only the sign of the determinant (note, absolute values of all eigenvalues can, in general, change by this row negation, but their product remains constant). If the matrix can be diagonalized, this sign change can occur only by a change in sign in one (or an odd number) of the eigenvalues. Interestingly, in one matrix product instance even without any sign change operations, with both matrix A and B having positive eigenvalues, the product matrix AB have an even number of negative eigenvalues! This occurrence is apparent without direct knowledge of the eigenvalues through the Characteristic polynomial where a term indicates the negative of the trace of eigenvalue matrix, which noticeably declined in absolute value. Note, for all positive eigenvalues, the log of the Determinant may serve as a bound to the log of the largest possible eigenvalue. If we known the largest eigenvalue (via the Power Method), then the (log Det - Log Max Ei ) is an upper bound to log of the second largest eigenvalue.
{"url":"http://www.physicsforums.com/showthread.php?t=588101&page=2","timestamp":"2014-04-16T04:36:08Z","content_type":null,"content_length":"44146","record_id":"<urn:uuid:6cddd817-7402-4a03-830b-1135845648c4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Learn About Perimeter and Its Applications Learn about Perimeter and its Applications In this lesson, let’s learn how to solve application problems that deal with perimeters. So we’ll do two problems. The first one says write the number that makes the equation true. The equation given to us is the perimeter is 10 + 4 + 4 + the missing number equals 28 centimeters. So we are given the perimeter is 28 centimeters. Well, what is the perimeter? Perimeter is the length of all the sides, length of all sides or the sum of the length of all sides. It’s the distance around the figure. So what would the perimeter of this rectangle be? It would be ten centimeters plus four centimeters plus 10 centimeters plus the missing side equals 28 centimeters. So we are given that these four sides when totaled equal 28 centimeters. Let’s total up the pre-sides that are given to us. 10 + 4, 14 + 4 is 24 centimeters plus some number gives us 28 centimeters. So what could that number be? What number when added to 24 gives 28? Well, that’s four, right. 24 + 4 = 28, so the missing number must be 4 centimeters. All we did was we looked at the equation and if all four these add up to 28 and three of them add up 24, well then the missing number must be four. Let’s do another problem on perimeter. It says that this picture shows Julia tablecloth. It is six feet one side, ten feet on the other side and then six and ten feet. It says the perimeter of Bill’s tablecloth is the same as perimeter of Julia’s but Bill is shaped like a square. So before we do this, let’s compute the perimeter of Julia’s tablecloth. What is it? Well, we start with the first side which is six feet plus the second side, ten feet, third side is six feet plus the fourth side is also ten feet. What do I have?—6, 16, 22, 32 feet. So the perimeter for Julia’s tablecloth is 32 feet that I have calculated, right. So now, let’s read the second part of the question. It says the perimeter of Bill’s tablecloth is the same as Julia’s. So perimeter of Bill’s tablecloth is 32 feet because it’s the same as Julia’s. But it is shaped like a square, so if we draw a square. What do we know about a square? All four sides are equal, right. So this side is S feet in length that this is also S, this is also S. This is also S. So 32 feet is the same as S + S + S + S, right. So which number when added four times give us 32? If I take 8 feet + 8 feet + 8 feet + 8 feet, if the length of one of the sides was 8 feet, we would get 32 feet. So that’s true. So what's the length of each side of Bill’s tablecloth? Well, the length that we are looking for must be 8 feet. We could have also said this is equaled 4 times S, right. We could have divided 32 by 4 and got an S as 8 feet but they’re both the same. Key thing to remember is a square has four equal sides. So the sum of all four sides has to be 32 feet which I can do in my head that each side is 8 feet because 8 added by itself, three more times gives us 32 feet.
{"url":"http://www.healthline.com/hlvideo-5min/learn-about-perimeter-and-its-applications-285016083","timestamp":"2014-04-16T22:29:53Z","content_type":null,"content_length":"41353","record_id":"<urn:uuid:7eff0b96-799a-4e4f-9590-6066410bb5ca>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
This page lists the Learning Objectives for all lessons in Unit 7. Order of Operations The student will be able to: • Define arithmetic operations, fraction bar and vinculum. • Restate the basic rules for order of operations. • Recognize the need for such rules in mathematics. • Evaluate arithmetic expressions by applying the rules. • Apply order of operations rules to complete five interactive exercises. Operations With Exponents The student will be able to: • Examine an arithmetic expression that includes an exponent. • Restate the expanded rules for order of operations. • Examine examples in which the expanded rules are applied to arithmetic expressions. • Describe how to evaluate an arithmetic expression using the order of operations. • Restate the PEMDAS mnemonic for the order of operations. • Evaluate arithmetic expressions by applying the expanded rules. • Apply the revised rules to complete five interactive exercises. Operations With Integers The student will be able to: • Define integers. • Review the order of operations and PEMDAS. • Evaluate examples in which integers are included in arithmetic expressions. • Describe the use of parentheses and brackets to decipher expressions involving integers. • Examine examples in which fractions are included in arithmetic expressions. • Describe how division applies to expressions that include fractions. • Evaluate arithmetic expressions by applying the expanded rules. • Summarize the complete set of rules for the order of operations, including brackets and fractions. • Apply the expanded rules to complete five interactive exercises. Algebraic Expressions The student will be able to: • Define variables, numerical expressions and algebraic expressions. • Examine examples in which phrases are translated into algebraic expressions. • Examine real-world problems in which phrases are translated into algebraic expressions. • Explain the use of different letters to represent a variable. • Translate phrases in algebraic expressions. • Apply procedures to complete five interactive exercises. Algebraic Equations The student will be able to: • Define algebraic equation. • Examine examples in which open sentences are translated into algebraic equations. • Examine real-world problems in which open sentences are translated into algebraic equations. • Explain how different variables can be used to represent unknown quantities in the equations presented. • Translate open sentences in algebraic equations. • Apply procedures to complete five interactive exercises. Practice Exercises The student will be able to: • Examine ten interactive exercises for all topics in this unit. • Determine which rules and procedures are needed to complete each practice exercise. • Compute answers by applying appropriate rules and procedures. • Self-assess knowledge and skills acquired from this unit. Challenge Exercises The student will be able to: • Evaluate ten interactive exercises with word problems for all topics in this unit. • Analyze each word problem to identify the given information. • Formulate a strategy for solving each problem. • Apply strategies to solve routine and non-routine problems. • Synthesize all information presented in this unit. • Identify connections between pre-algebra and the real world. • Develop problem-solving skills. The student will be able to: • Examine the solution for each exercise presented in this unit. • Identify which solutions need to be reviewed. • Compare solutions to completed exercises. • Identify and evaluate incorrect solutions. • Repeat exercises that were incorrectly answered. • Identify areas of strength and weakness. • Decide which rules and procedures need to be reviewed from this unit. Order your CD today for Home or School. Complete Math Curriculum | Educator's Guide | Main CD Page Copyright © 2010-2013 Mrs. Glosser's Math Goodies. All Rights Reserved.
{"url":"http://www.mathgoodies.com/cd/Objectives/objectives_unit7.html","timestamp":"2014-04-18T02:58:30Z","content_type":null,"content_length":"6720","record_id":"<urn:uuid:26c9fc31-9567-4999-924b-5bb66766df8b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEKS Student User Guide for K-12 Education 5.5 Additional Features All buttons described below are available in the Learning Mode. In the Assessment Mode, certain buttons may be temporarily inactive. For online help with the use of the Answer Editor, click "Help." To print out an individualized homework sheet based on your most recent work in ALEKS, use the "Worksheet" button. Your teacher can send you messages via ALEKS. You see new messages when you log on. You can also check for messages by clicking on "Inbox" (Sec. 5.6). ALEKS provides a way to send your teacher a specific problem you are working on in ALEKS. Your teacher can choose to let you reply to messages as well. Any time you wish to look at your assessment reports, click on "Report." Choose any date from the drop-down menu and click "OK." This page gives you the options to participate in "Ask a Friend" or forward your ALEKS messages to your email account. This page also shows the total number of hours you have spent using To access any special resources posted to your class by your teacher, click on the "Resources" button. This button will only be available if resources have been posted to your class. To end your ALEKS session and exit, click the Exit button. Clicking "MyPie" gives you a pie chart summarizing your current mastery. You can use this pie chart to choose a new concept. To review past material, use the "Review" button. To search the online dictionary of mathematical terms, click "Dictionary." You can also click on hyperlinked terms in the ALEKS interface to access the Dictionary. To access the online ALEKS Calculator, use the Calculator button. This button will be inactive for material where the use of a calculator is not appropriate. When this button is inactive, do not use any calculator. To see the results of quizzes you have taken in ALEKS or to begin a quiz assigned to you by your teacher, use the "Quiz" button.
{"url":"http://www.aleks.com/user_guides/learners-k12","timestamp":"2014-04-21T12:08:40Z","content_type":null,"content_length":"34127","record_id":"<urn:uuid:08353ecd-e76f-4156-bfa2-81ba5831d3c1>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Assisted Reporting This article describes technical analysis of the Kansas voter Computer Assisted Reporting This technical note describes manipulation/analysis of Wisconsin voter registration data from June 2011. Wisconsin voter registration data can be purchased from the Wisconsin Government Accountability Board for $12,500, whic... There are a number of good open source projects for statistics and data mining, for example the software WEKA developed at the University of Waikato. The description on their website states that: Weka is a collection of machine learning algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or As many people are aware Hans Rosling is an enthusiastic swedish academic with a passion for statistics who recently presented the program The Joy of Stats. One of the great things about Hans Rosling is his presentations and the interactive graphics that he uses to make his points. Fast Tube by Casper The gapminder software Central Limit Theorem A nice illustration of the Central Limit Theorem by convolution.in R: Heaviside 0,1,0) }HH In the past two years, a growing community of R users (and statisticians in general) have been participating in two major Question-and-Answer websites: The R tag page on Stackoverflow, and Stat over flow (which will soon move to a new domain, no worries, I’ll write about it once it happens) In that time, several long (and fascinating) discussion threads where started,... The bottom line of this post is for you to go to: Stack Exchange Q&A site proposal: Statistical Analysis And commit yourself to using the website for asking and answering questions. (And also consider giving the contender, MetaOptimize a visit) * * * * Statistical analysis Q&A website is about to go into BETA A month ago I invited... Two way analysis of variance models can be fitted to data using the R Commander GUI. The general approach is similar to fitting the other types of model in R Commander described in previous posts. Fast Tube by Casper The “Statistics” menu provides access to some analysis of variance models via the “Means” sub-menu:Multi-way ANOVA – the One way analysis of variance models can be fitted to data using the R Commander GUI. The general approach is similar to fitting the other types of model in R Commander described in previous posts. Fast Tube by Casper The “Statistics” menu provides access to some analysis of variance models via the “Means” sub-menu:One-way ANOVA – the We can use the R Commander GUI to fit logistic regression models with one or more explanatory variables. There are also facilities to plot data and consider model diagnostics. The same series of menus as for linear models are used to fit a logistic regression model. Fast Tube by Casper The “Statistics” menu provides access to various
{"url":"http://www.r-bloggers.com/tag/statistical-analysis/","timestamp":"2014-04-19T04:37:53Z","content_type":null,"content_length":"37196","record_id":"<urn:uuid:5c96aa51-60a5-40c3-b407-c939f4abe8c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Puyallup Statistics Tutor Find a Puyallup Statistics Tutor ...In my journey so far I have earned: - Associate of Arts and Sciences degree from Tacoma Community College - Bachelors of Science in Biology from Washington State University - I am very excited to say I have been accepted to medical school and will begin in July of 2014!! While tutoring childr... 25 Subjects: including statistics, chemistry, physics, geometry ...Although I do not hold a teaching certificate (working on that), I am a certified Substitute teacher, I taught Algebra 2 at Choice HS, and I was an AVID tutor for three years at White River HS and Glacier MS. I am also the administrator and a volunteer tutor for a tutoring group in Buckley. If ... 11 Subjects: including statistics, calculus, geometry, algebra 1 ...Dissertation. I am highly qualified to tutor for the GED test. I am currently tutoring physics, chemistry, and mathematics at levels ranging from Algebra 1 through Calculus. 21 Subjects: including statistics, chemistry, physics, English ...I have experience in a wide range of subjects with my strong points being math, sciences and writing. I also have many years of experience with various computer systems and applications. I am currently employed as a Computer Support Technician at a moving company, as well as a writing tutor. 30 Subjects: including statistics, chemistry, reading, writing ...I have a thorough grounding in the social sciences. I have a degree from the London School of Economics, and I have two degrees in psychology. These last two degrees had a quantitative bases, and allowed me to study social psychology. 56 Subjects: including statistics, English, chemistry, GED Nearby Cities With statistics Tutor Auburn, WA statistics Tutors Bonney Lake statistics Tutors Edgewood, WA statistics Tutors Federal Way statistics Tutors Fife, WA statistics Tutors Firwood, WA statistics Tutors Kent, WA statistics Tutors Meeker, WA statistics Tutors Milton, WA statistics Tutors Pacific, WA statistics Tutors Puy, WA statistics Tutors Renton statistics Tutors Seatac, WA statistics Tutors Sumner, WA statistics Tutors Tacoma statistics Tutors
{"url":"http://www.purplemath.com/puyallup_wa_statistics_tutors.php","timestamp":"2014-04-20T19:46:19Z","content_type":null,"content_length":"23695","record_id":"<urn:uuid:3435f390-b89b-484e-9c9f-2f768d0767fc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Distance measure on weighted directed graphs up vote 2 down vote favorite There is a simple and well-defined distance measure on weighted undirected graphs, namely the least sum of edge weights on any (simple) path between two vertices. Can one devise a meaningful distance metric for weighted directed graphs? (assume non-negative weights and that the distance must be symmetric and have the triangular property) Could you not do the same thing as with undirected graphs, which gives a non-symmetric metric D, and then symmetrize, i.e. define a metric d by d(x,y) = max{D(x,y), D(y,x)}? – Tom Leinster Feb 3 '10 at 15:58 Yes...I had considered d(x,y) = (D(x,y)+D(y,x))/2 but that just didn't feel right. Did you mean 'min'? If so that seems much more satisfying for edge distance. but...I don't think it'll work for paths. – Mitch Harris Feb 3 '10 at 16:07 1 Min does not satisfy the triangle inequality, only max does. – domotorp Feb 3 '10 at 16:23 1 Mitch, I meant max (and agree with domotorp). And yes, I also had it in mind that you could use + to symmetrize, rather than max. It sounds like you have some kind of design criteria in mind for how this distance should behave. Maybe you could add something to the question explaining them? (There's an "edit" button.) – Tom Leinster Feb 3 '10 at 17:24 That min doesn't work but max does is not obvious to me (and I can't seem to think through it). Is there a short counterexample? – Mitch Harris Feb 3 '10 at 20:24 show 1 more comment 1 Answer active oldest votes Since you're asking for what is "meaningful", I think that one can then argue against some of the question. The value of metrics on graphs in combinatorial geometry as an area of pure mathematics are more or less the same as their value in applied mathematics, for instance in direction-finding software in Garmin or Google Maps. In any such setting, the triangle inequality is essential for the metric interpretation, but the symmetry condition $d(x,y) = d(y,x)$ is not. An asymmetric metric captures the idea of one-way distance. You can have perfectly interesting geometry of many kinds without the symmetry condition. For instance, you can study asymmetric Banach norms up vote such that $||\alpha x|| = \alpha ||x||$ for $\alpha > 0$, but $||x|| \ne ||-x||$, or the corresponding types of Finsler manifolds. 5 down vote On the other hand, if you want to restore symmetry, you can. For instance, $$d'(x,y) = d(x,y) + d(y,x)$$ has a natural interpretation as round-trip distance, which again works equally well for pure and applied purposes. You can also use max, directly, or even min with a modified formula. Namely, you can define $d'$ to be the largest-distance metric such that $$d'(x,y) \le \min (d(x,y),d(y,x))$$ for all $x$ and $y$. All of these have natural interpretations. For instance, the min formula could make sense for streets that are one-way for cars but two-way for add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/13987/distance-measure-on-weighted-directed-graphs?sort=votes","timestamp":"2014-04-16T11:07:19Z","content_type":null,"content_length":"56365","record_id":"<urn:uuid:14595df6-b711-48b8-877a-3d3ef7dd8031>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Mableton Calculus Tutor ...I also have extensive experience in test preparation, including PSAT, SAT, GRE and GHSGT. I am respected and work well with my students, having been selected STAR teacher by our top student. I have worked with many different ability levels, but most of my experience is teaching gifted students. 10 Subjects: including calculus, GRE, algebra 1, algebra 2 ...I have just graduated from Georgia Tech with a degree in nuclear and radiological engineering, I have been tutoring people from over the place in this topic since 2009, and I am well qualified. Try me and you will never regret! I have just graduated from Georgia Tech with a degree in nuclear an... 10 Subjects: including calculus, physics, algebra 1, ASVAB ...She explains things in a way that a non mathematically inclined person can understand. She is very intuitive to what I am having trouble grasping and what I need more help with. She is really my one weapon in my arsenal I couldn't do without. 22 Subjects: including calculus, reading, writing, physics ...I have taught complex mechanical and biomedical engineering principles to graduate and undergraduate students in formal classroom and laboratory settings. In addition I have tutored many high school and college students in foundational math and physics skills. I can help you build confidence in these difficult subjects! 15 Subjects: including calculus, physics, algebra 1, trigonometry ...I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level:- pre algebra- algebra- trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the listed cl... 16 Subjects: including calculus, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Mableton_Calculus_tutors.php","timestamp":"2014-04-19T15:11:39Z","content_type":null,"content_length":"23894","record_id":"<urn:uuid:bf4cb9da-2b9d-4a1e-8d75-2b89597eb71e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Pepperell Precalculus Tutor ...I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics. Seasonally I work with students on SAT preparation, which I love and excel at. 15 Subjects: including precalculus, calculus, physics, statistics ...I am very experienced in helping students apply the basic principles of calculus to real-life problems. I worked with a couple of chemistry students this past year. The key to my success was my expertise in physics and math. 44 Subjects: including precalculus, chemistry, writing, calculus ...I've tutored Physics I and II to many high school and college students around the Boston area. I have a degree in theoretical math from MIT. While an undergraduate at MIT, I specialized in mathematical physics -- the study of the mathematical structures and techniques that underlie theoretical physics. 47 Subjects: including precalculus, English, chemistry, reading ...Able to clearly and patiently explain concepts and methods. Able to provide solid foundation in mathematics. Experienced teaching all levels of abilities. 5 Subjects: including precalculus, calculus, algebra 1, algebra 2 ...I primarily tutored probability and statistics, single and multi variable calculus, and college algebra. I have also done a considerable amount of private tutoring in a variety of different areas of math. I am qualified to tutor nearly all areas of high school and college math and can also ass... 14 Subjects: including precalculus, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/pepperell_ma_precalculus_tutors.php","timestamp":"2014-04-20T06:32:45Z","content_type":null,"content_length":"23835","record_id":"<urn:uuid:a2117dec-7e19-4f0b-8d09-cfdac7303c71>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Congressus Numerantium From inside the book 33 pages matching line at infinity in this book Where's the rest of this book? Results 1-3 of 33 What people are saying - Write a review We haven't found any reviews in the usual places. Related books A ABC the reference triangle Art 1 1 p 1 VP Q the Qparallel image of P Art 5 5 WP inconic associated with point P Art 8 8 25 other sections not shown Other editions - View all Common terms and phrases A-vertex AABC actual trilinear distances anticevian triangle anticomplementary triangle antipedal triangle Brocard point center of perspective central line central triangle centroid cevian triangle Chapter circumcenter circumcevian triangle circumcircle circumperp triangle collinear conjugate of Xm construction cos(B cosC cross-cevian point csc(A csc(C defined eigencenter equation equilateral triangle Euler line example excentral triangle excircles Feuerbach Find trilinears functions given harmonic conjugates homothetic incenter incircle inverse isogonal isotomic conjugate line at infinity lines AA Mathematical medial midpoint Morley triangle nine-point circle orthic triangle orthocenter parallel passes through Xi pedal triangle perpendicular perspective to AABC plane of AABC point of intersection polynomial center Problem proof Prove real numbers reference triangle respectively segment sidelengths sideline BC sideline of AABC sin(A Suppose tangential triangle Theorem triangle ABC triangle centers triangle geometry triangle of type triangle of Xi trilinear coordinates vertex vertices X2-Ceva conjugate Bibliographic information A ABC the reference triangle Art 1 1 p 1 VP Q the Qparallel image of P Art 5 5 WP inconic associated with point P Art 8 8
{"url":"http://books.google.com/books?id=3IzxAAAAMAAJ&q=line+at+infinity&dq=related:ISBN091962863X&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-20T14:36:45Z","content_type":null,"content_length":"125341","record_id":"<urn:uuid:8b39dcab-e553-4ad6-bd83-154f8d543a39>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th e [Top][All Lists] [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th e From: MON KEY Subject: bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element Date: Thu, 19 Aug 2010 10:13:42 -0400 On Thu, Aug 19, 2010 at 4:42 AM, Andreas Schwab <address@hidden> wrote: >> ,---- (info "(elisp)Bool-Vector Type") >> | >> | "A "bool-vector" is a one-dimensional array of elements that must be `t' >> | or `nil'." > All elements of (make-bool-vector 0 t) are either t or nil. Prove it! Please provide one piece of code that explicitly allows accessing the `t' at elt 0 and/or distinguishing whether the elt at idx 0 is either `t' or `nil'. I can't find a way to do it. Can you? In the absence of a working example I think you have a bug on your hands. > Andreas. [Prev in Thread] Current Thread [Next in Thread] • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, MON KEY, 2010/08/18 • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, Chong Yidong, 2010/08/19 • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, MON KEY, 2010/08/19 • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, Juanma Barranquero, 2010/08/19 • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, Chong Yidong, 2010/08/19 • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, MON KEY, 2010/08/19 • bug#6878: bool-vectors of length 0 signal error when aref/aset the 0th element, Juanma Barranquero, 2010/08/19
{"url":"http://lists.gnu.org/archive/html/bug-gnu-emacs/2010-08/msg00525.html","timestamp":"2014-04-18T16:24:08Z","content_type":null,"content_length":"7970","record_id":"<urn:uuid:7f27aabf-28c5-4d3a-9232-8a48156ba4c1>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Forecasting for stationary binary time series , 710 "... Abstract. Consider a stationary real-valued time series {Xn} ∞ n=0 with a priori unknown distribution. The goal is to estimate the conditional expectation E(Xn+1|X0,..., Xn) based on the observations (X0,..., Xn) in a pointwise consistent way. It is well known that this is not possible at all value ..." Cited by 2 (2 self) Add to MetaCart Abstract. Consider a stationary real-valued time series {Xn} ∞ n=0 with a priori unknown distribution. The goal is to estimate the conditional expectation E(Xn+1|X0,..., Xn) based on the observations (X0,..., Xn) in a pointwise consistent way. It is well known that this is not possible at all values of n. We will estimate it along stopping times. , 2005 "... The forward estimation problem for stationary and ergodic time series {Xn} ∞ n=0 taking values from a finite alphabet X is to estimate the probability that Xn+1 = x based on the observations Xi, 0 ≤ i ≤ n without prior knowledge of the distribution of the process {Xn}. We present a simple procedure ..." Cited by 2 (1 self) Add to MetaCart The forward estimation problem for stationary and ergodic time series {Xn} ∞ n=0 taking values from a finite alphabet X is to estimate the probability that Xn+1 = x based on the observations Xi, 0 ≤ i ≤ n without prior knowledge of the distribution of the process {Xn}. We present a simple procedure gn which is evaluated on the data segment (X0,...,Xn) and for which, error(n) = |gn(x) − P(Xn+1 = x|X0,...,Xn) | → 0 almost surely for a subclass of all stationary and ergodic time series, while for the full class the Cesaro average of the error tends to zero almost surely and moreover, the error tends to zero in probability. Le problème d’estimation future d’une série de temps ergodique et stationnaire {Xn} ∞ n=0, qui prend ses valeures dans un alphabet fini X, est d’estimer la probabilité que Xn+1 = x, connaissant les Xi pour 0 ≤ i ≤ n mais sans connaissance préalable de la distribution du , 710 "... Bailey showed that the general pointwise forecasting for stationary and ergodic time series has a negative solution. However, it is known that for Markov chains the problem can be solved. Morvai showed that there is a stopping time sequence {λn} such that P(Xλn+1 = 1|X0,...,Xλn) can be estimated fro ..." Add to MetaCart Bailey showed that the general pointwise forecasting for stationary and ergodic time series has a negative solution. However, it is known that for Markov chains the problem can be solved. Morvai showed that there is a stopping time sequence {λn} such that P(Xλn+1 = 1|X0,...,Xλn) can be estimated from samples (X0,...,Xλn) such that the difference between the conditional probability and the estimate vanishes along these stoppping times for all stationary and ergodic binary time series. We will show it is not possible to estimate the above conditional probability along a stopping time sequence for all stationary and ergodic binary time series in a pointwise sense such that if the time series turns out to be a Markov chain, the predictor will predict eventually for all n. , 811 "... A binary renewal process is a stochastic process {Xn} taking values in {0,1} where the lengths of the runs of 1’s between successive zeros are independent. After observing X0,X1,...,Xn one would like to predict the future behavior, and the problem of universal estimators is to do so without any prio ..." Add to MetaCart A binary renewal process is a stochastic process {Xn} taking values in {0,1} where the lengths of the runs of 1’s between successive zeros are independent. After observing X0,X1,...,Xn one would like to predict the future behavior, and the problem of universal estimators is to do so without any prior knowledge of the distribution. We prove a variety of results of this type, including universal estimates for the expected time to renewal as well as estimates for the conditional distribution of the time to renewal. Some of our results require a moment condition on the time to renewal and we show by an explicit construction how some moment condition is necessary. 1. Introduction. The , 710 "... We prove several results concerning classifications, based on successive observations (X1,...,Xn) of an unknown stationary and ergodic process, for membership in a given class of processes, such as the class of all finite order Markov chains. ..." Add to MetaCart We prove several results concerning classifications, based on successive observations (X1,...,Xn) of an unknown stationary and ergodic process, for membership in a given class of processes, such as the class of all finite order Markov chains. , 808 "... Abstract—For a stationary stochastic process {Xn} with values in some set A, a finite word w ∈ A K is called a memory word if the conditional probability of X0 given the past is constant on the cylinder set defined by X −1 −K = w. It is a called a minimal memory word if no proper suffix of w is also ..." Add to MetaCart Abstract—For a stationary stochastic process {Xn} with values in some set A, a finite word w ∈ A K is called a memory word if the conditional probability of X0 given the past is constant on the cylinder set defined by X −1 −K = w. It is a called a minimal memory word if no proper suffix of w is also a memory word. For example in a K-step Markov processes all words of length K are memory words but not necessarily minimal. We consider the problem of determining the lengths of the longest minimal memory words and the shortest memory words of an unknown process {Xn} based on sequentially observing the outputs of a single sample {ξ1, ξ2,...ξn}. We will give a universal estimator which converges almost surely to the length of the longest minimal memory word and show that no such universal estimator exists for the length of the shortest memory word. The alphabet A may be finite or countable. Index Terms—Markov chains, order estimation, probability, statistics, stationary processes, stochastic processes I. , 803 "... The problem of extracting as much information as possible from a sequence of observations of a stationary stochastic process X0,X1,...Xn has been considered by many authors from different points of view. It has long been known through the work of D. Bailey that no universal estimator for P(Xn+1|X0,X ..." Add to MetaCart The problem of extracting as much information as possible from a sequence of observations of a stationary stochastic process X0,X1,...Xn has been considered by many authors from different points of view. It has long been known through the work of D. Bailey that no universal estimator for P(Xn+1|X0,X1,...Xn) can be found which converges to the true estimator almost surely. Despite this result, for restricted classes of processes, or for sequences of estimators along stopping times, universal estimators can be found. We present here a survey of some of the recent work that has been done along these lines. , 2007 "... Intermittent estimation of stationary time series. ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=12482110","timestamp":"2014-04-19T20:49:37Z","content_type":null,"content_length":"28433","record_id":"<urn:uuid:6342f7f9-3b36-48e4-b8d9-8cf65bed2f99>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Baseball Prospectus | Baseball Therapy: Is Brandon Inge Worth 10 Wins Behind Closed Doors? March 21, 2013 Baseball Therapy Is Brandon Inge Worth 10 Wins Behind Closed Doors? Subscribe for $4.95 per month All Baseball Prospectus Premium and Fantasy articles more than a year old are now free as a thank you to the entire Internet for making our work possible. Recurring subscription - cancel anytime. Purchase a $39.95 gift subscription Not a subscriber? Get exclusive content like this delivered hot to your inbox every weekday. Click here for more information on Baseball Prospectus a 33% savings over the monthly price! subscriptions or use the buttons to the right to subscribe and get instant access to the best baseball content on the web. Already a subscriber? Click here and use the blue login bar to log in. Brandon McCarthy thinks that Brandon Inge is worth 10 wins or so to a team behind closed doors. Jonny Gomes, too. Participating in a player panel at the SABR Analytics Conference earlier this month, McCarthy posited that if Inge and Gomes had been removed from the 2012 Oakland A's, they might have fallen from a 94-win team to a 70-win team, purely by virtue of being deprived of the effect the two players had in the clubhouse. According to WARP, Gomes was worth 2.2 wins last year, while Inge was worth 0.6. So, assuming that if neither had been on the team, they would have been replaced by... well, replacement level players, that means that Inge and Gomes somehow combined for 21.2 wins just by being good guys in the clubhouse. Okay, so maybe McCarthy was exaggerating. Maybe the point that he wanted to make was that Inge and Gomes were fun to be around in the clubhouse and that that helped him and other players out quite a bit. Maybe he wasn't trying to be accurate to the third decimal place—or even the tens place. He just wanted to say that he believes that these sorts of things can make a difference on the field. But it does raise a question that I seem to be visiting a lot lately. What measurable difference can a player make behind the scenes? McCarthy, now with the Arizona Diamondbacks, spoke to AZ Central's Nick Peicoro afterward and explained the effect like this: It sound[sic] stupid, but if you have a rookie that comes up and rookies are filled with self-doubt, filled with worry, and now you’re in the big leagues and you come to a team where nobody makes you feel welcome. So now you’re already nervous, you’re kind of worried about your lot, and then the guys around you, you’re not comfortable and you don’t feel like you’re one of them. You don’t feel kind of free and like you can do what you do. But if you have a guy like Jonny Gomes or Brandon Inge or someone who just comes up and is just kind of (BS-ing) with you and it just sort of loosens you up and then everyone else can kind of get in the mix... That loosens you up, which in turn the person you interactive[sic] with — there’s a whole trickle down effect to it that’s impossible to quantify but it does exist in there.” (emphasis is mine). Was that a challenge? Warning! Gory Mathematical Details Ahead! This is going to be tricky (and very gory). We don't know what Inge or Gomes (or anyone else) did behind the scenes, except in the most general terms. Whom did they help? On what day? (MLBAM folks, we really need BFFf/x up and running soon.) We do, however, know what clubhouses they've been in and who else was in there. And we know, in general, how those guys did from year to year. It's a very rough-hewn method, and we'll talk about the limitations in a bit, but we're not totally in the dark when it comes to measuring the effect of a single player on his teammates over time. To do this, I used a mixed-linear model approach. For all player seasons (batters) from 2003-2012, I coded for whether the player had Inge as a teammate (so basically, everyone who played on the Tigers for most of the last decade) in that season. I did the same for Gomes. Now, to guard against the fact that I don't want to look at raw stats and mistake the fact that Inge or Gomes just happened to play with talented players (or really bad ones), I used an AR(1) covariance matrix. The idea here is that since I'm taking repeated measures for each player, the model will adjust for the fact that if a player hit a lot of home runs last year, we will expect him to do so again next year. For the initiated, I pegged the covariance matrix to the player's age on April 1st of the year in question. In addition, Inge spent most of that time in Detroit with Comerica Park as his home base. To that end, I entered the player's team (as a proxy for home park effects) as a fixed factor in the model. Where a player played for more than one team, I entered the team for whom he had the greatest number of PA. I also entered the calendar year as a fixed factor, since offense has been slowly declining from10 years ago until now. This will correct (some) for the declining offensive environment over time. Age also went into the model as a fixed (categorical) factor, to make sure that the effects weren't just due to the aging curve. I restricted the sample to players who were in their age 23 to age 35 seasons, and also to hitters who had more than 250 PA in the season in question. I should say that these aren't perfect adjustments, but they will do the job for the moment. I should also point out that I eliminated any actual stats belonging to either Brandon Inge or Jonny Gomes. Finally, I entered whether the player had Brandon Inge as a teammate during that season. I modeled each player's strikeout rate (per PA) for the season, and looked to see, once the rest of these things had been controlled for, whether the Inge effect was significant or not. I then did the same for walk rate, HBP rate, singles rate, double/triple rate, HR rate, and outs on balls in play rate. Theoretically, this model should give us the answer to the question, "If you took an average hitter from this population that you've selected, adjusted for park, year, and age, what is the extra effect of having Brandon Inge as a teammate?" The answer for Inge was surprising. Players who played with Inge had strikeout rates about 2.3 percent higher than what might be expected and about 2.2 percent fewer singles. Uh oh. But not all hope is lost. There were some marginally significant effects for home run rate, which increased by about 0.7 percent (p = .09), walk rate, which increased by about 1.1 percent (p = .14), and outs in play, which decreased by 1.6 percent (p = .27). (Note: yes, I'm fully aware that those numbers don't fully reconcile.) If anything, the players around Brandon Inge leaned more to a three-true-outcome philosophy of hitting than their previous (and subsequent) performance would have predicted for them, adjusted roughly for park, year, and age. For Gomes, none of the effects came out significant. It doesn't look like Jonny Gomes's teammates showed systematic improvement from his mere presence. Was it Inge? Before we go further, let's acknowledge something. "I was teammates with Brandon Inge" is a pretty good proxy for "I was a member of the Detroit Tigers." While the model attempts to correct for home park (and by some manner of indirect association, team philosophy), it's important to point out that the Tigers have had a lot of well-known TTO guys who have come through Comerica Park over the past few years (Dmitri Young, Curtis Granderson, Austin Jackson, Miguel Cabrera, and um... Brandon Inge.) Because Inge and "Detroit Tiger" are nearly synonymous, the model may simply be splitting the blame for the fact that all of these TTO guys were around between Inge and the Tigers. Maybe this was a case where the Tigers brass liked to acquire guys who were already in the TTO mold and then encouraged them to become more so, either directly or indirectly. Did Inge cause them to become TTO hitters? Probably not. One statistic that the AR(1) covariance matrix gives us is the AR(1) rho, which is kind of like a multiple year-to-year correlation. TTO outcomes are among the most stable of batting statistics, and in this sample, all were above .70. When I switched the "Inge factor" over to a random effect to look at the variance composition stats, it barely registered as a driver of the variance. In defense of the thought that Inge might have had some effect, the reason that many of the effects observed were marginally significant was not due to a small observed effect, but because of a large error bar. Suppose that Brandon Inge really did have an effect on some of the players with whom he played, but not all of them. Instead of having some random noise distributed around a mean change of zero, it would be that random noise stretched out by the true effects for some of the players in the Inge sample. Measures of variance, such as standard error, would increase. Maybe Inge isn't helping everyone on the team, but maybe he's not completely inert. It's possible (although this is hardly proof) that the fact that we don't know what Inge said to whom and when makes our measure too rough to pick up the signal that's lurking underneath. It's really hard to tell exactly what's going on in this model. In a perfect world, Inge would have moved around to a few teams (much in the way Gomes has), and we might have seen the effects at each of his stops. However, we don't have that luxury here. Major League Baseball continues to ignore my requests to randomly shift players from team to team and to assign playing time to its players randomly. It would make sabermetrics a lot easier. Maybe Brandon McCarthy is right. Given what data we have, we can't really quantify the effect that Brandon Inge had on the 2012 A's. There's too much methodological noise, and I don't know that there's a way around this one. Value Above Cardboard Cutout Before we even entertain the idea of Brandon Inge being a 10-win personality, let's for a moment do a quick thought experiment. Suppose that, as Brandon McCarthy suggested, Brandon Inge really had been removed from the 2012 Oakland A's. Not completely. Let's pretend that nothing changed about the way that he played in the field, but when he crossed into the dugout and clubhouse, he turned into a cardboard cutout. He didn't talk to anyone, and did nothing, good or bad, to affect the team's morale or chemistry. From the sound of it, McCarthy said that Inge's value was in his ability to engage younger players, and to make them feel more relaxed. There's no reason not to believe McCarthy, so I'll accept that this really was the case. Now, with Inge rendered a cardboard cutout, what would happen? In the same way that we have replacement level on the field, we should look at things similarly off the field. Let's assume that there is some skill for helping other people to better acclimate to a new situation, or inspiring people, or keeping them loose. Let's assume that some players are good at it and others not. Most are somewhere in the middle. A baseball team is a small society and, like any society, it must have roles that people fill to accomplish the tasks it needs to survive. Inge filled the role of "guy who kept people loose and feeling happy." If he were replaced with a cardboard cutout, would no one have stepped into that role? Someone probably would have, even if they weren't as skilled, in the same way that someone would have stepped into the role of right fielder had Josh Reddick gotten hurt. Maybe the new clubhouse clown wouldn't have gotten quite the effect that Brandon McCarthy alleges that Brandon Inge and Jonny Gomes got, but it wouldn't be zero. And that's going to be important in both quantifying on-the-field value and the value of a clubhouse presence. We need a reasonable baseline. Assuming for a moment that McCarthy's pronouncement that Inge produced oodles of collateral on-the-field value with his people skills is completely true, let's not credit him with all of that value. If the next most-qualified guy would have done 70 percent as good a job, then Inge is "only" worth three wins. It's similar to the way that replacement level taught us that we shouldn't be massively impressed by 15 HR from a first baseman when we'd estimate that some scrub with the same amount of playing time would have hit 12. To be fair, we know what replacement level looks like on the field, but we have no idea what it is off the field. As this search for the effect of the clubhouse factor continues, we do need to keep the idea of a proper baseline in mind. Why the Search Should Continue I don't buy the idea that Brandon Inge is a 10-win player behind the scenes. But I do think that Brandon McCarthy is onto something here. There might indeed be a very rich vein of hidden value that can be mined, and it can be found when McCarthy says, "there’s a whole trickle-down effect..." Consider that to improve his own on-the-field value, a player has perhaps 500 or 600 PA to make a difference. To change the outcome of one plate appearance, his outcome rates must change by around 0.2 percent, and even going from the worst possible outcome (a strikeout) to the best (a home run) is worth roughly 1.5 runs. To produce one win, he'll need change his own individual outcome rates on both stats by more than a percentage point. That's a tall order. Not impossible, but tall. The reason that a better understanding of how team chemistry works holds great promise is the issue of scale. A player can change only himself on the field, but the trickle-down effect in the clubhouse might touch everyone on the team. Suppose that Inge was able to help several guys on the A's, and that they accounted for only half the plate appearances that the team had during the season. Since the average team sends about 6,000 hitters to the plate in a season, Inge suddenly has his finger on the scale for 3,000 PA, rather than 500. If he can move the needle on the combined strikeout rates and home run rates for the players over whom he holds sway by 0.2 percent on the whole (one PA in 500), he produces most of a win. The effects of chemistry might be small, but they can be spread over a wider swath of players. And if they are, then suddenly there's a multiplier effect, because a nice personality can affect more than just one person. Now, is it reasonable to believe that a player might be able to, once or twice in a season, pick up a friend on the team who is feeling down, and through just being a nice guy, give him a little extra boost that turns what would have been an out into a hit? If that seems reasonable, then suddenly, the value of those few extra hits starts to add up. Maybe we don't know how to measure chemistry or clubhouse behavior yet, and maybe we'll never really have the kind of data that we need to do it. But if we take a small leap of faith and trust the people who have been in clubhouses, we might then have the final piece to make a rational, statistically/numerically based argument for team chemistry being not only something that's real, but something that can be really powerful. I should properly acknowledge the inspiration of Jay Jaffe, whose piece in Sports Illustrated inspired this investigation. Russell A. Carleton is an author of Baseball Prospectus. Click here to see Russell's other articles. You can contact Russell by clicking here
{"url":"http://www.baseballprospectus.com/article.php?type=2&articleid=19944","timestamp":"2014-04-19T10:17:29Z","content_type":null,"content_length":"85661","record_id":"<urn:uuid:87158f85-73a3-40d7-b653-60bf0cd92292>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
A quick question about scalar product of vectors... I know how to find the scalar product of B*C (I think... 5, right?), but I don't really know where the 2 and 3 come into play. I've tried multiplying the values of C by 3 and then finding the scalar product, then multiplying the quantity by two, but that was incorrect. Well this was correct. Unless you made a mistake in carrying out the calculations ... Remember when you multiply the vector C by the number 3 you have to multiply each component of C by this number 3, giving you 3C = 3(-1,-1,2)=(-3,-3,6) I suggest double-checking your calculations and if this doesn't help....show us what you have done and we can most likely find your mistake. For the scalar product of B and C, five is correct. B.C = (-3,0,1).(-1,-1,2)=3+0+2=5, well done.
{"url":"http://www.physicsforums.com/showthread.php?t=228862","timestamp":"2014-04-16T19:10:56Z","content_type":null,"content_length":"33294","record_id":"<urn:uuid:c9abecc0-b9f9-488e-876a-04adb0bb533c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Projects and Sources - Juan Camilo Corena Implementation of a Boinc volunteer grid computing environment in the university campus to factor RSA-100 and RSA-110 using the GNFS algorithm. Co-author Ivan Rey. Design of a redundant descentralized backup system for LANs, it employs: threshold schemes for key redundancy, Simmetric cryptographic primitives, Rivest's all or nothing transformation and Rabin's Information Dispersal Algorithm (see implementation below in Cryptography section). A protocol for guaranteeing privacy in emails stored at locations that could be compromised such as web based e-mail services. A method for using CDMA in cryptosystems with additive homomorphisms. Co-Author: Jaime Posada. A method for load balancing in the crowds anonymous network using Ring Signatures and Linkable Democratic Group Signatures. Co-Author: Jaime Posada. An effective descentralized, low cost cryptographic alternative to methods implemented in Colombia to control the express kidnappings in taxis occurring in major colombian cities. A cache system using pieces of shared information as its source of information using a suffix tree for quick access. Source Code This is the source code section, effort has been done to write standalone implementations compatible with standard compiler distributions, in this way it is easier to incorporate the source code to an existing application and play around with the algorithms. Unless otherwise stated the source files are written in Java (jdk 6.0). I'm releasing all the source codes under BSD license , if the code is to be used in a commercial application check that there are no patents regarding the algorithms, the implementations are made for educational purposes. Special care needs to be taken specially with the Cryptographic algorithms, since they have not been tested against all possible attacks. Email me if there are any comments, bugs, improvements or anything regarding the sources. Public Key Algorithms I find public key algorithms a fascinating topic, it is one of those fantastic things in Cryptography seeming to defy common sense. Eventhough there are many of them in the literature, it is not common to see a real world implementation with algorithms other than RSA. In this section you can find the algorithms I've worked with for some reason. Classes in this section have a constructor generating keys, a constructor for encryption, a constructor for encrypting and decrypting, an encrypt method and a decrypt method. The idea of such coding strategy is to use the same class for every instance in a real world application; also all the codes include a main method showing how to use the class and some of them show the homomorphic properties of the algorithm. An extension of Goldwasser-Micali, it has additive homomorphism too, there are some cryptographic voting systems using it. Co-author: Jaime Posada. Well this isn't exactly for encryption but they started the public key idea. In this code there are methods to generate the parameters p and g, since it is slow for 1024, generate p and g, store them somewhere and use them as a public parameters in the system. The first probabilistic encryption scheme, encrypts bit by bit, it uses quadratic residues. Algorithm with additive homomorphism, very popular for cryptographic voting systems. Co-author: Jaime Posada. Algorithm based on the integer factorization problem, the encryption function is very simple and has other nice properties for oblivious transfer. The first true public encryption scheme, there are many implementations around but here is another one. Identity Based Encryption Identity Based Encryption has very appealing properties for some environments, unlike traditional public key infrastructures where there is a chain of certificates issued by certification authorities, in IBE schemes the public key for an individual is derived from its identity, to get the private key corresponding to that identity, one must go with the Public Key Generator (PKG) who can compute private keys. Most of the IBE schemes out there use elliptic curve pairings, I'm working in implementing those schemes, in the menatime here is one algorithm based in number theory. An IBE scheme based on Quadratic Residues, it is very inefficient in space. Digital Signatures The digital signature algorithm, the code lets change the parameters for p,q and the hash function. There are four constructors for the possible usage scenarios, parameter generation is kinda slow for 1024 bits, please be patient when running the code. Threshold Schemes Threshold schemes allow to share responsibility for storing or computing a function, they depend on two values, one of them is the number of entities involved and the other is how many entities are needed to compute a function or reveal a secret. Shamir's secret sharing scheme: It uses polynomial interpolation to split a secret among several parties, the code uses Gauss-Jordan in Zp to recover the secret. Zero Knowledge Proofs In zero knowledge proofs one entity called the prover, proves to an entity called the verifier the knowledge of a secret without actually telling the secret, there are interactive and non-interactive protocols. For the interactive protocols steps are numbered for prover and verifier in the source code, for the non interactive ones there are two methods only: prove and verify. In this interactive protocol there is a public parameter n. The prover wishes to prove the knowledge of the modular square roots (s) of an array (v) known to the verifier. Equallity two discrete logarithms Modular square root Information Dispersal Algorithms Information Dispersal Algorithms, allow to split a file in n slices, from those n slices any k (k<n) slices suffice to reconstruct the original file. Parameters n and k can be given at split time. The algorithm proposed by Rabin, it is optimal in terms of information overhead. For a L length file, splitted in n pieces where k suffice for reconstruction, the size of each slice is L/k. The implementation uses arithmetic in GF(2^8) and Vandermonde matrices. I wish to thank Alejandro Sotelo for the help given in finite field arithmetic and his code for inverting Vandermonde matrices in quadratic time. The code can be downloaded as a zipped Visual C++ 2008 express project. For usage instructions read the main file.
{"url":"http://www.juancamilocorena.com/home/projects","timestamp":"2014-04-19T11:58:27Z","content_type":null,"content_length":"29424","record_id":"<urn:uuid:e9796828-c4a0-44d5-b0a0-0191415fc62a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Cartesian Plane, Coordinate Plane, Coordinate Grid By Deb Russell 2 of 4 More on the Cartesian Plane Notice in the Cartesian Plane that the 2 intersecting number lines are drawn to scale. There are 4 quadrants, the positive direction is upward and to the right, the negative direction is downward and to the left.
{"url":"http://math.about.com/od/geometry/ss/cartesian_2.htm","timestamp":"2014-04-16T10:39:33Z","content_type":null,"content_length":"40305","record_id":"<urn:uuid:94792b14-d2ca-4841-b976-a7678c6ddfb2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Oral puzzles Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, #1628. What is log[3]27? Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Oral puzzles Hi anonimnystefy, The solution #1628 is correct. Neat job! #1629. Evaluate : log[100](0.01) Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, The solution #1629 is correct. Excellent! #1630. If log 27 = 1.431, then what is the value of log 9? Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, The solution #1630 I get is #1631. If log [10]2 = 0.3010, then what is the value of log [2] 10? Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi anonimnystefy and bobbym, The solution #1631 is correct. Good work! #1632. If log 2 = 0.3010 and log 3 = 0.4771, then what is the value of log[5]512? Character is who you are when no one is looking. Re: Oral puzzles In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, The solution #1632 is correct. Neat job! #1633. If log 2 = 0.30103 find the number of digits in 4^50. Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi ganesh The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Oral puzzles Hi bobbym and anonimnystefy, The solution is Neat work, bobbym! #1634. If log 2 = 0.30103, find the number of digits of 5^20. Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, The solution #1634 is correct. Neat work! #1635. Evaluate : (i) log[7]1 (ii) log[34]34 Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, The solutions (all the three parts) are correct. Good work! #1636. The speed of a boat in still water in 11 kilometers per hour and the rate of current is 4 kilometers per hour. Find the distance traveled downstream in 16 minutes. Character is who you are when no one is looking. Re: Oral puzzles In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi bobbym, The solution #1636 is perfect. Neat job! #1637. A cistern is filled by pipe A is 9 hours and the cistern can be leaked out by an exhaust pipe B in 10 hours. If both the pipes are opened, in what time would the istern be full? Character is who you are when no one is looking. Re: Oral puzzles Hi ganesh The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Oral puzzles Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Oral puzzles Hi anonimnystefy and bobbym, The solution #1637 is correct. Good work! #1638. Taps A and B can fill a bucket in 10 minutes and 15 minutes respectively. If both pipes are opened and A is closed after 3 minutes, how much further time would it take for B to fill the Character is who you are when no one is looking.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=237899","timestamp":"2014-04-17T00:51:22Z","content_type":null,"content_length":"39291","record_id":"<urn:uuid:31de704d-f59d-41e7-8054-88b2ef4ee1a8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Nef gene evolution from a single transmitted strain in acute SIV infection The acute phase of immunodeficiency virus infection plays a crucial role in determining steady-state virus load and subsequent progression of disease in both humans and nonhuman primates. The acute period is also the time when vaccine-mediated effects on host immunity are likely to exert their major effects on virus infection. Recently we developed a Monte-Carlo (MC) simulation with mathematical analysis of viral evolution during primary HIV-1 infection that enables classification of new HIV-1 infections originating from multiple versus single transmitted viral strains and the estimation of time elapsed following infection. A total of 322 SIV nef SIV sequences, collected during the first 3 weeks following experimental infection of two rhesus macaques with the SIVmac239 clone, were analyzed and found to display a comparable level of genetic diversity, 0.015% to 0.052%, with that of env sequences from acute HIV-1 infection, 0.005% to 0.127%. We confirmed that the acute HIV-1 infection model correctly identified the experimental SIV infections in rhesus macaques as "homogenous" infections, initiated by a single founder strain. The consensus sequence of the sampled strains corresponded to the transmitted sequence as the model predicted. However, measured sequential decrease in diversity at day 7, 11, and 18 post infection violated the model assumption, neutral evolution without any While nef gene evolution over the first 3 weeks of SIV infection originating from a single transmitted strain showed a comparable rate of sequence evolution to that observed during acute HIV-1 infection, a purifying selection for the founder nef gene was observed during the early phase of experimental infection of a nonhuman primate. Genetic evolution in the primary phase of HIV-1 infection has been characterized by single genome amplification and nested polymerase chain reaction (PCR) of HIV-1 genes in parallel with mathematical /computational modeling [1-3]. Major goals of such analyses include the characterization of the transmitted strains, estimating the timing of infection based on the level of sequence diversity, and distinguishing between single virus strain/variant infections (referred to hereafter as "homogenous" infection) versus two or more virus strains/variants infections (referred to hereafter as "heterogenous" infection). Heterogeneous infection is associated with faster sequence diversification and accelerated disease progression due to the rapid emergence of virus variants with enhanced replicative fitness [4-7]. To quantitatively assess whether HIV-1 infections were initiated by single or multiple viral strains, we recently developed a mathematical model and Monte-Carlo (MC) simulation model of HIV-1 evolution early in infection and applied this to the analysis of 102 individuals with acute HIV-1 infection [2]. Further, in cases of single strain (homogeneous) infections, the model provided a theoretical basis for identifying early founder (possibly transmitted) env genes. In this study, we tested the validity of our primary HIV-1 infection model using a non-human primate (NHP) model for HIV-1/AIDS. This model has played a key role in the development of candidate HIV-1 vaccines, and provided critical insights into disease pathogenesis [8-10]. Studies in the macaque/simian immunodeficiency virus (SIV) model have contributed to our understanding of the close association between the extent of virus replication during the acute phase of infection and the subsequent virus set point and disease course [11] as reported in HIV-1 infections [12-14]. Genetic evolution during SIV infection has been well documented in comparison with the evolution of HIV-1 population [15-18]. We examined evolution of the viral nef genes from a single transmitted strain. Nef, a small accessory protein, was selected because the virus can tolerate significant variability in the nef protein, as evidenced by high levels of polymorphism longitudinally throughout infection and at the population level [19-22]. We sequenced full-length nef genes longitudinally during the very early phase of SIV infection using the method of single genome amplification (SGA). The SGA method more accurately represents HIV-1 quasispecies when compared to conventional PCR amplification [1,23,24]. We showed that our sequence evolution model correctly classified the experimental SIV infections as homogeneous infections. As predicted by the model, the consensus sequence of the sampled strains from these homogeneous infections corresponded to the transmitted sequence. However, our systematic evaluation showed that a sequential decrease of the diversity within the first 3 weeks of infection was associated with a purifying selection for the transmitted sequence (and was not a consequence of the limited sample size in our analysis). Longitudinal nucleotide and amino acid mutations We visualized longitudinal sequence evolution, nucleotide and amino acid point mutations in reference to the founder nef gene/Nef protein in Figure 1. From a total of 322 nef sequences sampled from the two animals, we observed 41 nucleotide base substitutions (excluding gaps) from the infecting nef sequence of SIVmac239, within the first 21 days following virus infection; out of these 41 mutations, 10 were determined to be G-to-A hypermutation patterns with APOBEC signatures (red characters in Figure 1) [25]. However, none of these APOBEC signatures were statistically significant (p > 0.05 from a Fisher exact test, Hypermut tool http://www.hiv.lanl.gov webcite). As we predicted in our model [2], the group sequences identical to the consensus sequence indeed corresponded to the transmitted nef sequence. Limited base substitutions observed in all nef genes were sparse and did not align with each other – as we have seen in env genes sampled from HIV-1 acute subjects classified as having homogeneous infection [2]. Out of 41 total mutations, 16 mutations were synonymous and the rest were non-synonymous base substitutions. Figure 1. Nucleotide and amino acid base substitutions within 3 weeks post SIV infection. Longitudinal nucleotide (A) and amino acid (B) base substitutions from the founder nef gene/Nef protein of sequence samples taken at day 4, 7, 11 and 18 post-infection from animal r00065, which was infected intravenously with SIVmac239. C and D display base substitutions in reference to the founder sequence from the samples taken at day 7, 14, and 21 post-infection from animal r98018, which was infected by intrarectal inoculation with SIVmac239. Numbers in the left column in each figure represent the number of a specific sequence out of total sampled sequences at a given day post infection. Each clone was obtained via the method of single genome amplification. Figure 1 shows that all the mutant nef genes except one were not sampled again in the next time point, while the transmitted nef gene was conserved in sequential samples from both animals. A single mutation fixed in the sequence population from animal r00065, C-to-T at position 520, was synonymous one. We examined whether loss of mutant sequences in the sequential samples could be reproduced in the MC simulation. We sampled 30 sequences at days 6, 12, 18, and 24 post infection in the asynchronous infection MC simulation, and then counted the number of mutant sequences that remained at more than one time point, by repeating 10^2 simulations. Figure 2 shows the histogram of the observed number of mutant sequences sampled in any of the sequential time points, N[m]. The 95% confidence intervals were calculated by repeating 10^2 of 10^2 MC runs. The simulation confirmed that loss of mutant sequences is frequent. While the transmitted, founder nef gene remains as the majority of the sampled sequences throughout the early infection period, the mutant sequences are not fixed in the population due to i) only a finite number of sequences are sampled in an exponentially growing population and ii) more mutations to the mutant genes are accumulated by further reverse transcription events. Figure 2. Histogram of the observed number of mutant sequences sampled at more than one time point, N[m]. At day 6, 12, 18, and 24 post infection, 30 nef sequences were sampled. The observed number of mutant sequences which were present at more than one time point was counted from the total of 120 sequences sampled sequentially over 4 time points. For example, N[m ]= 0 denotes that no mutant sequence from the founder gene appeared at more than one time point. The histogram of N[m ]with 95% CIs was constructed by repeating 10^2 asynchronous MC infection simulations. While the founder nef gene remains as the majority of the sampled sequences, loss of mutant sequences in the serial samples was frequently observed. Dynamics of divergence, diversity, variance, maximum HD, and sequence identity Viral diversification in early infection can be probed with several quantities based on Hamming distances among the sampled sequences. Here Hamming distance denotes the number of bases at which any two sequences differ. We measured the kinetics of divergence, diversity, variance, maximum Hamming distance (HD), and sequence identity in the two experimentally infected macaques (Table 1). Divergence is defined as average Hamming distance per site from the transmitted nef gene. Diversity is defined as average intersequence Hamming distance per site, variance as variance of intersequence per base Hamming distance distribution, maximum HD as measured maximum Hamming distance between all sequence pairs, and sequence identity as the proportion of identical sequences to the transmitted strain. Table 1. Animal Information and analysis using the acute HIV-1 infection model. Figure 3 displays the kinetics of these quantities compared to the viral load dynamics for animal r00065 and animal r98018. Each measurement was in the range of the prediction made by our acute HIV-1 sequence evolution model, however, the dynamics of each quantity from the two serial samples was not consistent with that from the model prediction. For instance, the average HD from the founder nef gene, divergence, decreases from 0.018% to 0.0081% over a time interval of 11 days for animal r00065, which is opposite to the trend predicted by the model. Also the proportion of identical sequences to the transmitted one was serially elevated from day 7 to day 18, suggesting either a purifying selection back to the founder strain during the early stage of infection or stochastic fluctuations due to the limited sample size. Figure 3. Viral load kinetics and the dynamics of divergence, diversity, variance, maximum HD, and sequence identity from homogeneous SIV infection. A. Viral load kinetics of animal r00065 (r65, black) and animal r98018 (r98, red). Animal r00065, which was infected by intravenous injection, displays a greater level of viral replication in comparison with animal r98018 which was infected by intrarectal inoculation. Dynamics of divergence (B), diversity (C), variance (D), maximum HD (E), and sequence identity (F) of nef sequences from animals r00065 (black) and r98018 (red). Each average value of simulated quantity from 10^3 simulations is represented with a brown line [2]. We sampled 31 sequences at a given time point in each run. To address whether the acute stage sequence evolution in animal r00065 indeed shows a purifying selection back to the founder strain, we performed a MC simulation by starting with 41 nef sequences identical to those sampled at day 7 from animal r00065. Then we sampled 50 sequences at day 11 (4 days since the "starting" day 7) and 31 sequences at day 18 (11 days since the "starting" day 7) to replicate the experimental sampling from animal r00065. Figure 4 shows each measure of divergence, diversity, variance, and sequence identity with 95% confidence intervals from 1000 MC runs. The measured divergence at day 18, 0.0081%, from animal r00065 is located outside of the 95% confidence intervals of the predicted divergence at day 18, [0.00815%, 0.057%], denoting a violation of the model assumption, neutral evolution without selection. We conclude that the serial decrease in divergence observed in animal r00065 is reflective of a purifying selection rather than a stochastic effect from the finite size of sampling. Figure 4. Predicted divergence, diversity, variance, and sequence identity from a simulation performed by starting with 41 sampled nef sequences obtained at day 7 from animal r00065. 50 sequences at day 11 and 31 sequences at day 18 were sampled by starting a simulation with the 41 sampled nef genes that were obtained at day 7 from animal r00065. The sampling time points were chosen to reflect those used in our initial simulation (i.e., day 11 corresponds to day 4 following the "initial" infection in this simulation, and day 18 corresponds to day 11 following the "initial" infection. The measured divergence at day 18, 0.0081%, from animal r00065 is located outside of the 95% confidence intervals of the predicted divergence at day 18, [0.00815%, 0.057%]. The maximum HD of r98018 at day 21 is 5 due to the presence of a strain with 3 base substitutions from the founder strain. All three of these mutations are G to A hypermutation with APOBEC3G/F signatures [25-27], although the signatures were not found to be statistically significant (p > 0.05 from a Fisher exact test, Hypermut tool http://www.hiv.lanl.gov webcite). Nonetheless, we tentatively attribute the deviation from the prediction generated by our model to these putative APOBEC3G/F signatures. The rate of virus sequence evolution in animal r00065 was slower than in animal r98018 – even though the virus replication rate (virus load) in animal r00065 was higher than that for animal r98018. Single Variant (Homogeneous) Infection with Neutral Evolution Our MC simulation and mathematical calculation is based on the premise that the SIV sequence population diversifies through random base substitutions without any selection or recombination during the first 2–3 weeks of infection, prior to initiation of the host nef-specific immune response that could select viral escape variant. Based on this assumption, the Hamming distance distribution can be approximated as a Poission distribution which is characterized as mean (diversity) equals variance [2,28]. The equality will not be exact due to stochastic effects and sample size dependency. However, we can use the simulation output to capture these effects, and construct a conical region delimited by 95% CIs over mean and variance within which values from a sample from homogeneous infection should lie (Figure 5). If we sample more sequences, the area of the cone decreases. The two conditions for the single variant homogeneous infection without any selection or recombination are: i) measured diversity and variance of the sequence sample should be located inside the cone, between the upper and lower limits of the 95% CIs, and ii) diversity should be less than the upper limit of the 95% CIs of simulated diversity at a given time point (grey lines in Figure 5). Here the cone diagram in Figure 5 was constructed by measuring diversity and variance for 20 (red) or 60 (blue) nef genes at each time point of each MC run. We performed 5000 MC runs. All the homogeneous 7 sequence samples from the two animals satisfy the above two conditions, as Figure 5 depicts. Our model successfully classified the virus sequence pattern in the two animals as being derived from a "homogeneous" infection as opposed to a "heterogeneous" infection with two or more strains. Figure 5. Classification diagram for homogeneous infection. The diversity and the variance of the sampled sequences from animals with homogeneous infection (i.e. infections with a single founder strain without any selection pressure or recombination) are expected to be located within the conical region. Here, the red (blue) conical region represents the 95% CIs from 5 × 10^3 runs where 20 (60) sequences were sampled at each time point. The black diagonal line denotes the average relationship between diversity and variance. The grey vertical line denotes the upper limit of the 95% CIs of simulated diversity at each time point. All of the sequence sets sampled from the two primates within 3 weeks since infection were successfully classified as homogeneous infections; measured diversity and variance are located within the red and blue conical regions and the diversity is less than the upper limit of the 95% CIs of diversity at week 1 from the homogeneous infection Estimating Days since Infection: Poisson Fit For each sequence data set, which was sampled from each animal at a time point following infection, we constructed the distribution of Hamming distances from the founder strain, HD[0 ](Figure 6). The distribution of Hamming distances from the founder strain, HD[0], was calculated as a weighted sum of Binomial distributions in the asynchronous infection mathematical model. The weighted sum of Binomial was approximated as a Poisson distribution, Figure 6. Estimation of days since infection based on Hamming distance distribution. The Hamming distance (HD[0]) distribution (multiplied by the number of sampled sequences) from the founder nef strain, SIVmac239, is shown for each sequence sample from each animal (black boxes) with the best fitting Poisson distribution (red lines). The goodness-of-fit p value of each fit is listed in Table 1. The bottom right corner panel shows a comparison between actual days post infection and the estimated days since infection based on HD[0 ]distribution for animals r00065 (black) and r00098 (blue). The correlation coefficient between the actual and estimated dates post-infection for r00065 is -0.91 and for r98018 is 0.47. with the mean of where t is days post infection, ε is the HIV-1 single replication cycle error rate per base, N[B ]is the number of bases of sampled genes, and R[0 ]is the basic reproductive ratio. We used a Maximum Likelihood method to fit a Poisson distribution to the observed data, and then assessed the goodness of fit through a Chi-Square statistic. Table 1 summarizes the estimated days since infection obtained from the Poisson fit using the relationship between mean of Poisson distribution, λ[0 ]and days post infection, t in Eq. (2), along with 95% CIs obtained by bootstrapping the HD[0 ]distribution 10^5 times. All of the 7 samples yielded a goodness-of-fit p-value of greater than 0.5, suggesting that measured HD[0 ]statistically follows a Poisson distribution. In this goodness of fit test the null hypothesis was that the two distributions tested were statistically the same, hence a low p-value would yield rejection of the null hypothesis. Analysis of all the sequence samples showed that the actual number of days elapsed following infection for the sequence samples fell within the 95% CIs of estimated days post infection by a Poisson fit to the HD[0 ] distribution (Table 1). However, as we expected from the observed decrease in divergence and the increase in sequence identity as infection progresses, the correlation coefficient between actual days since infection and the estimated days post infection (based on the Poisson fit for animal r00065) was -0.91. The correlation coefficient for animal r98018 was 0.47. The present study was undertaken to explore the applicability of a recently developed model for primary HIV-1 infection, to the analysis of acute SIV infection in rhesus macaques [2]. The level of measured diversity ranged from 0.015% to 0.052% during primary SIV infection, before set point, which is comparable to the range of measured diversity, 0.005% to 0.127%, from 68 single strain infected patients at the primary stage of HIV-1 infection [2]. Analysis of the SIV nef sequences showed that the MC simulation model was able to successfully classify 7 sequence samples, from two animals during the first 3 weeks following experimental infection of two rhesus macaques with SIVmac239, as homogeneous infection. We also confirmed that the consensus virus sequence in these animals was identical to the transmitted nef sequence of the infecting SIVmac239. We observed an unexpected decline in the divergence and the diversity from animal r00065 at an early point following infection. We first hypothesized that the serial decline in the divergence might be due to fluctuations arising from the limited sample size, 31–50 sequences per time point. To address this concern, we performed a second simulation, starting with the actually sampled 41 nef genes obtained at day 7 from animal r00065 (which showed the divergence of 0.018%). The MC simulation was performed with the assumption of neutral evolution, and 31 sequences were sampled at day 18. The measured 95% CIs of the divergence from such 1000 simulations provided the basis for the rejection of the null hypothesis (neutral evolution without selection), implying a preferential selection process for the founder strain. We conclude that the decrease in the divergence observed in animal r00065 is reflective of a purifying selection rather than a stochastic effect due to small sample size. We speculate that the purifying selection can be explained as a result of either: (i) lower fitness of the emerging mutant viruses relative to the founder virus, or (ii) selective loss of mutant sequences due to linked, unfavorable changes elsewhere in the genome (i.e., the phenomenon of hitchhiking [29,30]). The roles of Nef in viral fitness, such as promoting viral replication and infectivity and interfering T cell activation, have been well documented [31-33]. The time points in our study were chosen to precede the emergence of cytotoxic T cell lymphocyte (CTL) escape variants. As we expected, Figure 1 shows that all the mutants from the inoculated SIVmac239 nef gene are different each other, at the predicted amino acid level. This is not consistent with the expected outcome of CTL pressure, which classically results in changes confined within one or at most a handful of immunodominant epitopes. The main expected impact of CTL-induced changes on the model can be linked with a deviation from a star-like phylogeny [34], the absence of outgrowth in a particular mutant lineage. We have presented an examination of the property of star phylogeny in Figure 7 where all the 7 samples from two macaques satisfy the expected relationship for star-like phylogeny, diversity = 2 × divergence. The relationship arises from the property that the intersequence hamming distance frequency distribution coincides with the self-convolution of the frequency distribution of the hamming distances from the founder virus. The property of star-like phylogeny was preserved in all the samples from animal r00065 which displayed a sequential decrease in the divergence and the diversity (i.e., a purifying selection). Under the purifying selection preferential for the founder strain, a star-like phylogeny can be retained since there is no outgrowth in a particular mutant lineage except the center of the star, the founder virus. Figure 7. Examination of star-like phylogeny. The star-phylogeny can be examined by testing whether the level of diversity is two times of the level of divergence, which occurs when there is neutral selection in the absence of selective pressure for specific mutant strains. All of the 7 samples from animals r00065 and r98018 satisfy the relationship, diversity = 2 × divergence (blue line). We observed that rapid viral replication kinetics were not necessarily associated with a greater rate of sequence evolution. Animal r00065 displayed a greater level of viral replication in comparison to animal r98018 while less diversification of nef genes was observed in animal r00065. We interrogated the relationship between HIV-1 sequence diversity and viral load from 28 subjects with homogeneous HIV-1 infection in Fiebig stage II, where viral RNA and p24 antigens are positive without detectable HIV-1 serum antibodies [2]. We observed little correlation between plasma viral load and diversity (σ^2 = 0.18) in HIV-1 acute infection. Disconnect between the replication rate and the rate of evolution during early SIV and HIV infections may be partly explained by the unusual small effective population size, which has been estimated ranging from 10^3 to 10^4 [35-38]. The effective population size is defined from the process of transforming an actual, census population into a neutral, constant size population with non-overlapping generations. The difference between the effective population size and the real size can arise from many factors such as varying population size, purifying or diversifying selection and the existence of subpopulation. These factors should be associated with low level of correlation between viral load and the level of diversity in acute HIV-1 and SIV infections. Another aspect we may consider is that low level of correlation might be explained within our model scheme where the reproductive ratio and the generation time are set as independent parameters. Viral sequence diversity is influenced more strongly by generation time and to much lesser extent by the reproductive ratio. Hence for a given viral generation time, if the reproductive ratio changes significantly, the ramp-up slope of infected cell varies accordingly while the rate of sequence diversification remains relatively stable, implying little correlation between the rate of evolution and the rate of replication. For instance, our calculation from the asynchronous infection model study shows that when we change the basic reproductive ratio from 6 to 12, the ramp-up slope of infected cells increases 45% but the slope of diversity increases only 6%. With the assumption that the basic reproductive ratio varies considerably among acute HIV-1 subjects, for example by the level of activated CD4 T cell at the transmission, we may observe a great level of variation in the viral load but less in the sequence diversity. Under this circumstance, a minor correlation can be detected at the population level with another factor for dampening the correlation, fluctuations arising from the limited sample size of genes. An important caveat to the work reported here is that a limited number of clones were examined at specific time points in only 2 SIV infected animals. SGA sequencing is resource-intensive, precluding the use of more animals and time points in this study. In the future, next-generation pyrosequencing technologies [39] may facilitate the examination of far greater numbers of SIV sequences with economy that is impossible to achieve with Sanger-based sequencing. We expect that the acute infection model will be refined and improved as additional sequences become available. This study verifies the robust nature of our MC simulation model for primary HIV-1 infection, and shows that it can be successfully applied to the analysis of acute SIV infection in rhesus macaques. The model predicted the level of SIV sequence diversification during the acute phase of SIVmac239 infection in two rhesus macaques, and it correctly identified "homogenous" virus transmission in this model system. SIV acute sequence samples confirmed that the consensus sequence of each sample was indeed the transmitted strain. Finally, a sequential decrease in viral diversity was observed during the first 3 weeks of infection in one macaque, and was found to be due to a purifying selection for the transmitted sequence. Animals and SIVmac239 challenge Two rhesus macaques were experimentally infected with the clonal SIV isolate SIVmac239, derived from a molecular clone [40]. The SIVmac239 inoculum was sequenced by non limiting dilution PCR. The sequence of the infecting strain was identical to the clone from which it was derived with potential small errors during in vitro amplification. We have indicated the limitation in the revised manuscript. However, we note that our method is the best way for obtaining the clonal nature of the infecting inoculum as far as we can. Animal r00065 (r65) was infected with 100 TCID[50 ]SIVmac239 by intravenous injection. Animal r00098 (r98) was infected by intrarectal inoculation with 10 MID[50 ]SIVmac239. Viral RNA was isolated from frozen plasma samples from animal r00065 collected at days 4, 7, 11, and 18 following virus infection. From animal r00098, viral RNA was isolated from frozen plasma samples collected at days 4, 7, 21 during infection. Virally-infected animals were cared for according to the regulations of the University of Wisconsin Institutional Animal Care and Use Committee, and the NIH. Viral RNA isolation and cDNA synthesis Viral RNA was isolated from each animal at defined time points following infection. Cell-free plasma was prepared from EDTA anticoagulated whole blood by ficoll density gradient centrifugation. Viral RNA isolation was performed using the QIAamp MinElute Virus Spin Kit (QIAGEN, Valencia, CA) according to the manufacturer's instructions. Single strand cDNA was generated using oligo dT primers and the Superscript III reverse transcription kit (Invitrogen, Carlsbad, California, USA) according to the manufacturer's instructions. Limiting Dilution and nested PCR cDNA template was diluted to ~1 viral genome per microliter. The dilution factor necessary to achieve single viral genomes was defined as the template dilution for which only 30% of reactions produced a product. According to a Poisson distribution, the cDNA dilution that yields PCR products in no more than 30% of wells contains one amplifiable cDNA template per positive PCR more than 80% of the time. This was empirically determined using a dilution series and varied between samples and cDNA preps. The dilution series and PCR reactions were set up using a QIAGEN BR3000 liquid handling robot (QIAGEN, Valencia, CA). All PCR reactions used Phusion High-Fidelity polymerase (Finnzymes, Espoo, Finland). A nested PCR approach was used for all amplifications. The following primers designed to amplify a region of the viral Nef gene were used for the first round of PCR: 5'-CAAAGAAGGAGACGGTGGAG-3' and 5'-CATCAAGAAAGTGGGCGTTC-3'. Second round PCR was conducted using 2 ul of the first round PCR product and the following internal primers were used for nested PCR: 5'-TCAGCAACTGCAGAACCTTG-3' and 5'-CGTAACATCCCCTTGTGGAA-3'. For all PCR reactions, the following conditions were used: 98C for 30 s, 30 cycles of: 98C for 5 s, 63C for 1 s and 72C for 10 s, followed by 72C for 5 min. PCR products were run on a 1.5% agaroe gel. PCR products were purified using the Chargeswitch kit (Invitrogen, Carlsbad, Calfornia, USA) according to the manufacturer's instructions. Samples were bi-directionally sequenced susing ET-terminator chemistry on an Applied Biosystems 3730 Sequencer (Applied Biosystems, Foster City, California, USA) and the internal primers described above. DNA sequence alignments were performed using CodonCode Aligner version 2.0 (CodonCode Corporation, Dedham, Massachusetts, USA). Modeling Sequence Evolution in Primary HIV-1/SIV Infection The details of our model for characterizing sequence evolution in acute HIV-1 infection will be described by Lee et al. (HY Lee, EE Giorgi, BF Keele, B Gaschen, GS Athreya, JF Salazar-Gonzalez, KT Pham, PA Geopfert, JM Kilby, MS Saag, EL Delwart, MP Busch, BH Hahn, GM Shaw, BT Korber, T Bhattacharya, and AS Perelson, Modeling Sequence Evolution in Acute HIV-1 Infection, submitted for publication). We provide here an overview of the salient features of the model and its underlying assumptions. After transmission we assume that a systematic infection starts with a single infected cell in a new host. The number of secondary infections caused by one infected cell placed in a population of cells fully susceptible to infection is called the basic reproductive number, R[0]. The available data in humans infected with HIV-1 and in monkeys infected with SIV and SHIV show that virus grows exponentially until a viral load peak is attained a few weeks after infection [41-43]. Following the peak, viral levels decline and establish a set-point. At the set-point each infected cell, on average, successfully infects one other cell during its lifetime. We assumed a homogeneous infection in which the virus grows exponentially with no selection pressure, no recombination, and a constant mutation rate across positions and across lineages. Cell infections occur randomly by the viruses released from an infected cell. Viral production starts on average about 24 hours after a cell is initially infected [44,45], and most likely continues until cell death. While each of the R[0 ]infections could occur at different times, we took a first step in assessing the role of asynchrony by assuming the infections occur at two different times. The average time to new infection defines the viral generation time, τ. Each new infection entails a single round of reverse transcription introducing errors in the proviral DNAs with the number of mutations given by the Binomial distribution, Binom(n; N[B], ε), where n is the number of new base substitutions. Binomial distribution implies that base substitutions occur independently with the probability of ε at each site of SIV genome with the length N[B ]in each reverse transcription cycle. The Monte-Carlo model explicitly emulates all the new infection procedures with mutations, tracking the population of proviral nef genes of the infected cells by introducing base substitutions as infection propagates in a new host. In Ref. [2], we determined that the MC simulation and the mathematical model showed a good agreement with the level of sequence diversity sampled from acute HIV-1 subjects presumably infected with a single variant. Based on the prediction made by the model, the group of identical sequences, usually the consensus sequence of sampled strains, was presumed to be the initial founder strain established by the systematic infection in each host. The parameters used in the acute HIV-1 model were: i) the average generation time of productively infected cells, defined as the average time interval between the infection of a target cell and the subsequent infection of new cells by progeny virions, estimated as 2 days [44], ii) HIV-1 single cycle forward mutation rate, estimated as ε = 2.16 × 10^-5 per site per cycle [46], and iii) the basic reproductive ratio, defined as the number of newly infected cells that arise from any one infected cell when almost all cells are uninfected, estimated as R[0 ]= 6[41]. In the asynchronous infection model, the first time at which a newly infected cell infects other cells, τ, is chosen as 1.5 days. The length of nef gene, N[B], we simulated is 792. We used these parameter values to analyze our data set. For example, calculated R[0 ]values during primary SIV infection from viral ramp-up slope ranged from 2.2 to 68 [43], which justifies the choice of R[0 ]= 6. Improvement of the model requires more accurate estimations for these basic parameters during SIV early infection. The mutation rate, ε, and the generation time, τ, control the rate of increase in divergence and hence diversity. The larger the mutation rate, the faster the genomes mutate, hence the steeper the growth in diversity. The greater the generation time, the slower the genomes diversify, hence the smaller the growth in diversity. The slope of diversification is approximately proportional to ε/τ. On the other hand, R[0 ]mainly controls the growth in the infected cell population size. As the viral population grows, the number of cells one infected cell infects decreases due to the fact that fewer cells are available for infection. The basic reproductive ratio, R[0], affects the rate of evolution in a relatively minor way. Low values (e.g. 2 ≤ R[0 ]≤ 4), slow down the growth in the infected cell population, thus affecting the speed of evolution. For example, from R[0 ]= 6 to R[0 ]= 2 there is a 15.9% increase in the slope of diversity. On the other hand, for R[0 ]≥ 6, the dependence of the rate of diversification on R[0]is reduced. The slope of diversity increases by 5.5% as we increase R[0 ]from 6 to 10. The dynamics of diversity do not depend on the number of initial infected cells. Once we sample a finite number of sequences from the MC simulation at a given time, we first measure the Hamming distance (HD[0]) between each sampled sequence and the founder sequence and the Hamming distance (HD) between sequences sampled at the same time. Here Hamming distance is the number of base substitutions between two sequences. Based on the calculated HD[0 ]and HD, we define the basic measurements for quantifying the evolution of HIV-1 sequence populations. Divergence is defined as the average HD[0 ]per base from the initial founder strain; diversity is defined as the average intersequence Hamming distance per base among sequence pairs at a given time; variance is defined as the variance of the intersequence per base HD distribution; maximum HD is defined as the measured maximum HD between all sequence pairs sampled, and sequence identity is defined as the proportion of sequences identical to the founder strain. Both the MC simulation and mathematical calculation showed that divergence, diversity, and variance increase linearly as a function of time and sequence identity decays exponentially as a function of time [Fig. 2]. These behaviours are characteristics of neutral evolution, characterized as Poisson distribution and star-phylogeny topology. It has been shown that the distribution of pairwise genetic distances is an approximate Poisson in the evolution of mitochondrial DNA [28]. To address the issue of the finite size of samples, we repeated MC simulations sampling a finite number of nef genes at a given time and computed 95% CIs for each quantity. Then we examined whether the measurement of SIV nef gene samples was compatible with the model prediction or not. To infer the number of days elapsed since infection based on sampled strains, first we fit the Poisson distribution to the observed distribution of Hamming distances between sampled nef genes and the transmitted nef gene; we then determined the mean of the Poisson distribution and calculated days post infection using Eq. (2). A key property of the Poisson distribution arising from neutral evolution without selection and recombination is that the level of diversity is comparable to that of variance. We used this property to examine whether sampled strains had evolved from a single founder strain or not. In each MC run, we obtained the values of diversity and variance from the sampled sequences with a given sample size at each time and located those values in the plane of diversity and variance. By repeating MC simulations, we collected all the values of diversity and variance and computed 95% CIs in the plane of diversity and variance. The computed 95% CIs form a conical region within which diversity and variance of the sampled sequences from the animal with homogeneous infection (i.e. infections with a single founder strain without any selection pressure or recombination) are expected to be located [Figure 5]. As we sample more, the conical region becomes smaller [Figure 5]. Another requirement for homogeneous infection is that the sequence diversity should be less than the upper limit of the 95% CIs of the diversity at a given time following infection with a single virus strain. Authors' contributions BNB and DHO performed the animal experiment and nef gene SGA sequencing. HL performed the sequence data analysis and model simulations. EEG and ALA were responsible for the statistical analysis including the Poisson fit. PC, BK, SD, and HL were responsible for design and writing of the manuscript. All authors read and approved the final manuscript. We thank B. T. Korber, B. F. Keele, T. Bhattacharya, and A. S. Perelson for critical reading and comments and M. Draheim for technical support. This publication was supported by NIAID/NIH grant AI083115, NIH grant AI049781, NCRR/NIH grant P51 RR000167, Research Facilities Improvement Program grant numbers RR15459-01 and RR020141-01, University of Rochester Developmental Center for AIDS research (NIH P30AI078498), and NIH P01 AI056356. 1. Salazar-Gonzalez JF, Bailes E, Pham KT, Salazar MG, Guffey MB, Keele BF, Derdeyn CA, Farmer P, Hunter E, Allen S, et al.: Deciphering Human Immunodeficiency Virus Type 1 Transmission and Early Envelope Diversification by Single Genome Amplification and Sequencing. J Virol 2008, 82:3952-70. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 2. Keele BF, Salazar-Gonzalez JF, Pham KT, Salazar MG, Sun C, Grayson T, Decker JM, Wei X, Wang S, Goepfert PA, et al.: Identification and characterization of transmitted and early founder virus envelopes in primary HIV-1 Infection. Proc Natl Acad Sci USA 2008, 105:7552-7557. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 3. Gottlieb GS, Heath L, Nickle DC, Wong KG, Leach SE, Jacobs B, Gezahegne S, van 't Wout AB, Jacobson LP, Margolick JB, Mullins JI: HIV-1 variation before seroconversion in men who have sex with men: analysis of acute/early HIV infection in the multicenter AIDS cohort study. J Infect Dis 2008, 197:1011-1015. PubMed Abstract | Publisher Full Text 4. Kuyl AC, Cornelissen M: Identifying HIV-1 dual infections. Retrovirology 2007, 4:67. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text 5. Gottlieb GS, Nickle DC, Jensen MA, Wong KG, Grobler J, Li F, Liu SL, Rademeyer C, Learn GH, Karim SS, et al.: Dual HIV-1 infection associated with rapid disease progression. Lancet 2004, 363:619-622. PubMed Abstract | Publisher Full Text 6. Gottlieb GS, Nickle DC, Jensen MA, Wong KG, Kaslow RA, Shepherd JC, Margolick JB, Mullins JI: HIV type 1 superinfection with a dual-tropic virus and rapid progression to AIDS: a case report. Clin Infect Dis 2007, 45:501-509. PubMed Abstract | Publisher Full Text 7. Costa LJ, Mayer AJ, Busch MP, Diaz RS: Evidence for Selection of more Adapted Human Immunodeficiency Virus Type 1 Recombinant Strains in a Dually Infected Transfusion Recipient. Virus Genes 2004, 28:259-272. PubMed Abstract | Publisher Full Text 8. Haigwood NL: Predictive value of primate models for AIDS. AIDS Rev 2004, 6:187-198. PubMed Abstract 9. Hu SL: Non-human primate models for AIDS vaccine research. Curr Drug Targets Infect Disord 2005, 5:193-201. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 10. Lackner AA, Veazey RS: Current concepts in AIDS pathogenesis: insights from the SIV/macaque model. Annu Rev Med 2007, 58:461-476. PubMed Abstract | Publisher Full Text 11. Staprans SI, Dailey PJ, Rosenthal A, Horton C, Grant RM, Lerche N, Feinberg MB: Simian immunodeficiency virus disease course is predicted by the extent of virus replication during primary J Virol 1999, 73:4829-4839. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 12. Mellors JW, Kingsley LA, Rinaldo CR Jr, Todd JA, Hoo BS, Kokka RP, Gupta P: Quantitation of HIV-1 RNA in plasma predicts outcome after seroconversion. Ann Intern Med 1995, 122:573-579. PubMed Abstract | Publisher Full Text 13. Mellors JW, Rinaldo CR Jr, Gupta P, White RM, Todd JA, Kingsley LA: Prognosis in HIV-1 infection predicted by the quantity of virus in plasma. Science 1996, 272:1167-1170. PubMed Abstract | Publisher Full Text 14. Centlivre M, Sala M, Wain-Hobson S, Berkhout B: In HIV-1 pathogenesis the die is cast during primary infection. Aids 2007, 21:1-11. PubMed Abstract | Publisher Full Text 15. Overbaugh J, Bangham CR: Selection forces and constraints on retroviral sequence variation. Science 2001, 292:1106-1109. PubMed Abstract | Publisher Full Text 16. Rybarczyk BJ, Montefiori D, Johnson PR, West A, Johnston RE, Swanstrom R: Correlation between env V1/V2 region diversification and neutralizing antibodies during primary infection by simian immunodeficiency virus sm in rhesus macaques. J Virol 2004, 78:3561-3571. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 17. Allen TM, O'Connor DH, Jing P, Dzuris JL, Mothe BR, Vogel TU, Dunphy E, Liebl ME, Emerson C, Wilson N, et al.: Tat-specific cytotoxic T lymphocytes select for SIV escape variants during resolution of primary viraemia. Nature 2000, 407:386-390. PubMed Abstract | Publisher Full Text 18. O'Connor DH, Allen TM, Vogel TU, Jing P, DeSouza IP, Dodds E, Dunphy EJ, Melsaether C, Mothe B, Yamamoto H, et al.: Acute phase cytotoxic T lymphocyte escape is a hallmark of simian immunodeficiency virus infection. Nat Med 2002, 8:493-499. PubMed Abstract | Publisher Full Text 19. Lichterfeld M, Yu XG, Cohen D, Addo MM, Malenfant J, Perkins B, Pae E, Johnston MN, Strick D, Allen TM, et al.: HIV-1 Nef is preferentially recognized by CD8 T cells in primary HIV-1 infection despite a relatively high degree of genetic diversity. AIDS 2004, 18:1383-1392. PubMed Abstract | Publisher Full Text 20. Ueno T, Motozono C, Dohki S, Mwimanzi P, Rauch S, Fackler OT, Oka S, Takiguchi M: CTL-mediated selective pressure influences dynamic evolution and pathogenic functions of HIV-1 Nef. J Immunol 2008, 180:1107-1116. PubMed Abstract | Publisher Full Text 21. Huang KJ, Wooley DP: A new cell-based assay for measuring the forward mutation rate of HIV-1. J Virol Methods 2005, 124:95-104. PubMed Abstract | Publisher Full Text 22. Kirchhoff F, Easterbrook PJ, Douglas N, Troop M, Greenough TC, Weber J, Carl S, Sullivan JL, Daniels RS: Sequence variations in human immunodeficiency virus type 1 Nef are associated with different stages of disease. J Virol 1999, 73:5497-5508. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 23. Palmer S, Kearney M, Maldarelli F, Halvas EK, Bixby CJ, Bazmi H, Rock D, Falloon J, Davey RT Jr, Dewar RL, et al.: Multiple, linked human immunodeficiency virus type 1 drug resistance mutations in treatment-experienced patients are missed by standard genotype analysis. J Clin Microbiol 2005, 43:406-413. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 24. Shriner D, Rodrigo AG, Nickle DC, Mullins JI: Pervasive genomic recombination of HIV-1 in vivo. Genetics 2004, 167:1573-1583. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 25. Harris RS, Liddament MT: Retroviral restriction by APOBEC proteins. Nat Rev Immunol 2004, 4:868-877. PubMed Abstract | Publisher Full Text 26. Simon V, Zennou V, Murray D, Huang Y, Ho DD, Bieniasz PD: Natural variation in Vif: differential impact on APOBEC3G/3F and a potential role in HIV-1 diversification. PLoS Pathog 2005, 1:e6. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 27. Bourara K, Liegler TJ, Grant RM: Target cell APOBEC3C can induce limited G-to-A mutation in HIV-1. PLoS Pathog 2007, 3:1477-1485. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 28. Slatkin M, Hudson RR: Pairwise comparisons of mitochondrial DNA sequences in stable and exponentially growing populations. Genetics 1991, 129:555-562. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 29. Smith JM, Haigh J: The hitch-hiking effect of a favourable gene. Genet Res 1974, 23:23-35. PubMed Abstract 30. Charlesworth D, Morgan MT, Charlesworth B: The effect of linkage and population size on inbreeding depression due to mutational load. Genet Res 1992, 59:49-61. PubMed Abstract 31. Miller MD, Warmerdam MT, Gaston I, Greene WC, Feinberg MB: The human immunodeficiency virus-1 nef gene product: a positive factor for viral infection and replication in primary lymphocytes and J Exp Med 1994, 179:101-113. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 32. Sinclair E, Barbosa P, Feinberg MB: The nef gene products of both simian and human immunodeficiency viruses enhance virus infectivity and are functionally interchangeable. J Virol 1997, 71:3641-3651. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 33. Arien KK, Verhasselt B: HIV Nef: role in pathogenesis and viral fitness. Curr HIV Res 2008, 6:200-208. PubMed Abstract | Publisher Full Text 34. Brown AJ: Analysis of HIV-1 env gene sequences reveals evidence for a low effective number in the viral population. Proc Natl Acad Sci USA 1997, 94:1862-1865. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 35. Achaz G, Palmer S, Kearney M, Maldarelli F, Mellors JW, Coffin JM, Wakeley J: A robust measure of HIV-1 population turnover within chronically infected individuals. Mol Biol Evol 2004, 21:1902-1912. PubMed Abstract | Publisher Full Text 36. Shriner D, Liu Y, Nickle DC, Mullins JI: Evolution of intrahost HIV-1 genetic diversity during chronic infection. Evolution 2006, 60:1165-1176. PubMed Abstract 37. Rouzine IM, Coffin JM: Linkage disequilibrium test implies a large effective population number for HIV in vivo. Proc Natl Acad Sci USA 1999, 96:10758-10763. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 38. Ronaghi M, Uhlen M, Nyren P: A sequencing method based on real-time pyrophosphate. Science 1998, 281:363-365. PubMed Abstract | Publisher Full Text 39. Kestler H, Kodama T, Ringler D, Marthas M, Pedersen N, Lackner A, Regier D, Sehgal P, Daniel M, King N, et al.: Induction of AIDS in rhesus monkeys by molecularly cloned simian immunodeficiency Science 1990, 248:1109-1112. PubMed Abstract | Publisher Full Text 40. Stafford MA, Corey L, Cao Y, Daar ES, Ho DD, Perelson AS: Modeling plasma virus concentration during primary HIV infection. J Theor Biol 2000, 203:285-301. PubMed Abstract | Publisher Full Text 41. Mattapallil JJ, Douek DC, Hill B, Nishimura Y, Martin M, Roederer M: Massive infection and loss of memory CD4+ T cells in multiple tissues during acute SIV infection. Nature 2005, 434:1093-1097. PubMed Abstract | Publisher Full Text 42. Nowak MA, Lloyd AL, Vasquez GM, Wiltrout TA, Wahl LM, Bischofberger N, Williams J, Kinter A, Fauci AS, Hirsch VM, Lifson JD: Viral dynamics of primary viremia and antiretroviral therapy in simian immunodeficiency virus infection. J Virol 1997, 71:7518-7525. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 43. Perelson AS, Neumann AU, Markowitz M, Leonard JM, Ho DD: HIV-1 dynamics in vivo: virion clearance rate, infected cell life-span, and viral generation time. Science 1996, 271:1582-1586. PubMed Abstract | Publisher Full Text 44. Markowitz M, Louie M, Hurley A, Sun E, Di Mascio M, Perelson AS, Ho DD: A novel antiviral intervention results in more accurate assessment of human immunodeficiency virus type 1 replication dynamics and T-cell decay in vivo. J Virol 2003, 77:5037-5038. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 45. Mansky LM, Temin HM: Lower in vivo mutation rate of human immunodeficiency virus type 1 than that predicted from the fidelity of purified reverse transcriptase. J Virol 1995, 69:5087-5094. PubMed Abstract | Publisher Full Text | PubMed Central Full Text Sign up to receive new article alerts from Retrovirology
{"url":"http://www.retrovirology.com/content/6/1/57","timestamp":"2014-04-19T17:02:58Z","content_type":null,"content_length":"157698","record_id":"<urn:uuid:7f7887d3-a973-4f7e-9227-074cb8cc5ec3>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Some comments on functional self-reducibility and the NP hierarchy , 1993 "... Robust computation--a radical approach to fault-tolerant database access--was explicitly defined one decade ago, and in the following year this notion was presented at ICALP in Antwerp. A decade's study of robust computation by many researchers has determined which problems can be fault-tolerantl ..." Add to MetaCart Robust computation--a radical approach to fault-tolerant database access--was explicitly defined one decade ago, and in the following year this notion was presented at ICALP in Antwerp. A decade's study of robust computation by many researchers has determined which problems can be fault-tolerantly solved via access to databases of many strengths. This paper surveys these results and mentions some interesting unresolved issues. "... Most theoretical definitions about the complexity of manipulating elections focus on the decision problem of recognizing which instances can be successfully manipulated, rather than the search problem of finding the successful manipulative actions. Since the latter is a far more natural goal for man ..." Add to MetaCart Most theoretical definitions about the complexity of manipulating elections focus on the decision problem of recognizing which instances can be successfully manipulated, rather than the search problem of finding the successful manipulative actions. Since the latter is a far more natural goal for manipulators, that definitional focus may be misguided if these two complexities can differ. Our main result is that they probably do differ: If integer factoring is hard, then for election manipulation, election bribery, and some types of election control, there are election systems for which recognizing which instances can be successfully manipulated is in polynomial time but producing the successful manipulations cannot be done in polynomial time.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=454115&sort=cite&start=10","timestamp":"2014-04-20T22:12:02Z","content_type":null,"content_length":"14092","record_id":"<urn:uuid:c185de63-7a70-4304-8824-a05a868c2a3b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Major in Mathematics? There are many reasons Some choose the subject because it is beautiful, others because it is practical and yet others for some combination of these reasons. Mathematics is wonderful training for the mind, a great training if you are thinking of being a lawyer for example, and it has an increasing range of applications, from the study of the genome and climate change to financial mathematics and medical statistics. Mathematics has always been the language of physics, and, with the advent of computer modeling, is an essential component of almost all science. An undergraduate degree in mathematics opens the door to many professions and to graduate study in a wide variety of disciplines. The American Math Society's site shows the variety of careers open to people who major in math-related fields. Come talk to one of the Barnard mathematics faculty members if you have any questions. If you are interested in possibilities for summer mathematical research, check out our research opportunities listing. This page also contains some links to various organizations for women in mathematics. Changes to the Majors: see the online Catalog for full details The Mathematics Major now allows substitutions in both the Modern Analysis and Modern Algebra sequences. There is a new Major called the Major in Mathematical Sciences. This major combines the elements of Mathematics, Computer Science and Statistics. It is designed to prepare students for employment in business, administration, and finance, and also give excellent background for someone planning graduate study in a social science field. Students who plan to obtain a teaching qualification in mathematics should plan their course of study carefully with an advisor, since courses that are too far from mathematics do not count towards certification. Calculus Coordinator: Dusa McDuff Prof. McDuff is happy to answer any kind of question about Calculus, either conceptual or about placement in sequence. Please come to her open office hours on Tuesdays 1:30-3:30 in Math 612 (CU) or email her at dmcduff@barnard.edu for an appointment. Computer Science Computer Science Coordinator: Dave Bayer Prof. Bayer is happy to answer any kind of question about Computer Science. Please come to his office during office hours, or email him for an appointment. We have started a Computer Science Help Room and forum for Barnard students interested in computer science. Left Column Copy Department of Math 333 Milbank 3009 Broadway New York NY, 10027-6598 T: 212.854.3577 F: 212.854.7491 Email: Susan Campbell Faculty Office Hours Daniela De Silva: TBA Dusa McDuff: T 1:30-3:30, and by appointment Walter Neumann: MW 10:30-11:30
{"url":"https://math.barnard.edu/","timestamp":"2014-04-18T05:32:35Z","content_type":null,"content_length":"21094","record_id":"<urn:uuid:d3285a68-593e-4045-a3f8-f40bd86384c9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Provided by: mass -- L2 scalar product form(const space& V, const space& V, "mass"); form(const space& M, const space& V, "mass"); form (const space& V, const space& V, "mass", const domain& gamma); form_diag(const space& V, "mass"); Assembly the matrix associated to the L2 scalar product of the finite element space V. m(u,v) = | u v dx / Omega The V space may be either a P0, P1, P2, bubble, P1d and P1d finite element spaces for building a form see form(3). The use of quadrature formulae is sometime usefull for building diagonal matrix. These approximate matrix are eay to invert. This procedure is available for P0 and P1 approximations. Notes that when dealing with discontinuous finite element space, i.e. P0 and P1d, the corresponding mass matrix is block diagonal, and the inv_mass form may be usefull. When two different space M and V are supplied, assembly the matrix associated to the projection operator from one finite element space M to space V. m(q,v) = | q v dx / Omega for all q in M and v in V. This form is usefull for instance to convert discontinuous gradient components to a continuous approximation. The transpose operator may also be usefull to performs the opposite operation. The following $V$ and $M$ space approximation combinations are supported for the mass form: P0-P1, P0-P1d, P1d-P2, P1-P1d and P1-P2. The following piece of code build the mass matrix associated to the P1 geo g("square"); space V(g, "P1"); form m(V, V, "mass"); The use of lumped mass form write also: form_diag md(V, "mass"); The following piece of code build the projection form: geo g("square"); space V(g, "P1"); space M(g, "P0"); form m(M, V, "mass"); Assembly the matrix associated to the L2 scalar product related to a boundary domain of a mesh and a specified polynomial approximation. These forms are usefull when defining non-homogeneous Neumann or Robin boundary conditions. Let W be a space of functions defined on Gamma, a subset of the boundary of the whole domain Omega. m(u,v) = | u v dx / Gamma for all u, v in W. Let V a space of functions defined on Omega and gamma the trace operator from V into W. For all u in W and v in V: mb(u,v) = | u gamma(v) dx / Gamma For all u and v in V: ab(u,v) = | gamma(u) gamma(v) dx / Gamma The following piece of code build forms for the P1 approximation, assuming that the mesh contains a domain named boundary: geo omega ("square"); domain gamma = omega.boundary(); space V (omega, "P1"); space W (omega, gamma, "P1"); form m (W, W, "mass"); form mb (W, V, "mass"); form ab (V, V, "mass", gamma);
{"url":"http://manpages.ubuntu.com/manpages/precise/man7/mass.7rheolef.html","timestamp":"2014-04-17T18:44:15Z","content_type":null,"content_length":"7755","record_id":"<urn:uuid:2aa063f7-df38-4db9-ae84-005558cea7fd>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi Size refers to the number of nodes in a parse tree. Generally speaking, you can think of size as code length. This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
{"url":"http://www.mathworks.com/matlabcentral/cody/problems/1325-special-matrix/solutions","timestamp":"2014-04-20T01:03:16Z","content_type":null,"content_length":"132213","record_id":"<urn:uuid:6420b1d9-c0d6-4764-9d49-2fcabb609eb9>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
linear differential! September 17th 2012, 05:11 AM #1 Jul 2011 linear differential! $(\exp^{\frac{y}{x}}-\frac{y}{x}\exp^{\frac{y}{x}}+\frac{1}{1+x^2})dx+\ exp^{\frac{y}{x}}dy=0$ have reduce $(1-\frac{y}{x}+\frac{\exp^{\frac{y}{x}}}{1+x^2})dx+dy =0$ pls am stock here, whether am on the right part or not, help! Re: linear differential! $(\exp^{\frac{y}{x}}-\frac{y}{x}\exp^{\frac{y}{x}}+\frac{1}{1+x^2})dx+\ exp^{\frac{y}{x}}dy=0$ have reduce $(1-\frac{y}{x}+\frac{\exp^{\frac{y}{x}}}{1+x^2})dx+dy =0$ pls am stock here, whether am on the right part or not, help! When you divide by \displaystyle \begin{align*} e^{\frac{y}{x}} \end{align*} you should actually get \displaystyle \begin{align*} \left[ 1 - \frac{y}{x} + \frac{1}{\left( 1 + x^2 \right) e^{\frac {y}{x}}} \right] dx + dy = 0 \end{align*}. Re: linear differential! You understand, do you not, that this is NOT a "linear" equation? Re: linear differential! Re: linear differential! $-\frac{(1+x^2)\exp^v}{1+(1+x^2)\exp^v}dv=\frac{1}{x }dx$ by substitution, i got that above. Do i need further substitution? Re: linear differential! I would recommend $u= e^v$. (Notation: either " $e^v$" or "exp(v)" but not " $exp^v$".) September 17th 2012, 05:40 AM #2 September 17th 2012, 05:47 AM #3 MHF Contributor Apr 2005 September 17th 2012, 05:48 AM #4 September 17th 2012, 06:34 AM #5 Jul 2011 September 17th 2012, 07:52 AM #6 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/differential-equations/203598-linear-differential.html","timestamp":"2014-04-16T04:42:21Z","content_type":null,"content_length":"50750","record_id":"<urn:uuid:fb1ea76a-d64f-4b40-8999-f9443f9e3693>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Since the post mortem thread in A3 was kinda interesting lets have a go at the other two problems as well This problem was an interesting one for me, I figured its a threading contest, i bet we get to use threads! However the best solution seemed to be single threaded. (Weird!) The best optimalisation I could come up with was a lookup table for every 5x5 block on the field you could determine the inner 9 cells by just looking in a big ass table, turning this problem from a heavy test problem into a simple lookup problem. and by running a single 5x5 block it was easy to determine possible spots to move to in the next cycle. One last optimalisation was to check the current cell's location compared to the target cell and try the possible directions that would get us closer (or atleast in the right direction) to the target cell first. I kinda misread or mis understood the problem description and figured there would be more points for performance then for the 'shortest' route (boy was that a mistake) and opted to have a go at trying to be fastest, looking back yeah big mistake, but hey on timing points alone still got 12 points out of it. (warning due to the lookup table download is about 16 megs) RSS Top Wed, 06/29/2011 - 03:42 I like these post mortums as well, so thanks to all for sharing. I made a number of optimisations to the grid to improve performance of large grids: * Implemented as a bit fields to save memory, cache and bandwidth. * Implemented a process routine that used the bit field to process more in one go. A bit like the idea behind using SSE. * Implemented a hardwired byte based 5x5 grid, like Lazydodo did, that just processes the inner 3x3. I would rip this 5x5 from the large grid. Calculated the next move and iteration and if any of them resulted in a live cell then iterate the whole grid, reinsert the 3x3 grid and save for later proessing. For the threading each thread had its own collection of work. Work being a live cell position and a grid. If a thread ran out of work it would steal a job from another thread. This kept the cross talk between threads low. I started off with each work collection just being a queue of work but changed it to be an array of queues. New work was always taken from the lowest queue with work. New work was push onto the array based on a scoring system. The score was based on the live cell distance from the target and the path length of the present solution. So work that was closer to the solution got done sooner unless it was getting a long path. This gave me a quick route to find A solution and also pushed the infinte loops out of the way of more valid routes. But was not so good at finding the shortest path as it is basically a depth first search with prunning. I made the same mistake and placed effort more on being able to handle large problems and scaling than on finding the shortest route. Bih ouch. Would have been nice to see some larger problems. Code attached. (Linux, GCC, pthreads) Attachment Size Download mazeOfLife.v3.tgz 20.85 KB Wed, 06/29/2011 - 04:39 I also try to solve large problems and use task queue for threads (as duncanhopkins). My problem is a test with short paths. Algorithm (russian forum): Wed, 06/29/2011 - 09:54 hi all, Here is my small write up for the maze of life problem. My starting point was to implement the A* algorithm in order to solve the problem, using the lenght of the path already done and the distance to the goal as heuristic. The only problem at this point is that since every movement generates a lot of new paths and solutions are far from be straight lines, trying to find the shortest path in all cases can be very very slow. So my decision was not to worry about shortest path and replace my initial heuristic with an aproximation taken after solving some small puzzles. Since I dont have any way to verify that my new heuristic is optimistic, I dont have any way to ensure that I find the shortest path. But this idea worked pretty well in practice, and with my single threaded version was working fine. TBB has priority queue (for my open set) and hashmap (for my closed set) thread safe containers, so to move from the single threaded version to the threaded version seemed to be easy. There are two task that could be parallelized. First, the movement computation. So, when I pop the information of a position from my open list, I create eight new parallel tasks (one task per every direction). The other point that could be paralellized was the pop from the open set. Limit the amount of threads popping from the open set Is very important because if you go too deep in the priority queue, you are for sure doing extra work. For this reason I decided to have only 6 threads popping elements from the open set. The Final points in the implementation was to decide when a maze is big enough to be resolved using multiple threads, and when the open set has elements enough to ensure that many threads popping elements doesnt harm. About the solution source: (Windows/Linux, intel compiler/MSVC, TBB) Solution attached Attachment Size Download MazeOfLife.cpp 17.29 KB Thu, 06/30/2011 - 22:08 I almost hate to talk about Maze of Life because it was such an interesting problem and my solution had potential but regrettably did not scale to multiple threads. Part of my difficulty stemmed from getting a late start on the contest and having only 9 days to complete Maze of Life. Working the full 21 days really helps on any problem. My top level design is analagous to a chess-playing program. It needs these three components: 1. A compact representation of board positions 2. A way to quickly generate new positions 3. A payoff function to evaluate the quality of a position To address the first two requirements I turned to the substantial body of literature on Game of Life programming. My data structure is "sparse" in that it only represents regions that contain living cells. Each cell is represented by a bit (0 = dead, 1 = alive) in a 64 bit integer which, in turn, covers a horizontal run of 64 cells. Advancing one generation in the Game of Life uses bitwise operations to process 64 cells in parallel. Now that I know more about SSE, I could probably up the word size to 128 and get 128-way parallelism. It wouldn't have mattered much for this contest because 80% of the test grids were smaller than 64 in width. The payoff function is interesting because it can totally change the complexion of the program. It scores each position based on how likely it is to lead to the desired solution - lower numbers being better. I wrote two payoffs, greedy and optimal: • Greedy returns the approximate Manhattan distance from the intelligent cell to the goal. It actually weights the major axis of the Manhattan distance so heavily that the minor axis distance is only used to break ties. • Optimal is like the A* algorithm. It returns the distance already traveled plus the best case number of remaining steps. This is an optimistic estimate of the length of a path through the current The main algorithm uses a heap (or priority queue) of positions, sorted by payoff, to work on the most promising leads first: push initial position onto the heap while position at top of heap is not at the goal pop position p from the heap for each direction d in 0 to 8 if p + d is in bounds and p[d] is a free cell position n = p with intelligent cell moved in direction d advance n to next generation if the intelligent cell survived push n onto the heap delete p report the solution That's my basic single-threaded solution. With the greedy payoff function it finds sub-optimal solutions quickly. With the optimal payoff, it always finds the shortest path but can take a long time depending on the complexity of the problem. I just ran both versions of my single-threaded program on a home quad core and got these unofficial results for the 10 official test problems: Dataset Greedy time (in milliseconds) Greedy path length Optimal time (in milliseconds) Optimal path length 01_test-example-1.txt 0.295 6 0.292 6 02_test-example-2.txt 0.309 13 0.476 11 03_test-example-3.txt 0.257 11 1.618 10 04_test-example-4.txt 0.293 18 0.423 11 05_test-example-5.txt 0.261 14 0.466 13 06_test-20x20-glider.txt 0.307 16 7.000 16 07_DoDo_14x14-9385-19.txt 0.516 34 7.649 19 08_DoDo_25x25-63474-39.txt 0.931 54 DNF* N/A 09_DoDo_150x150-223533-55.txt 21.456 132 DNF N/A 10_DoDo_300x300-373546-15.txt 22.754 38 241.873 16 (* longer than 2 minutes.) I suspect most of you would see similar results with single-threaded programs. These problems are so small that starting threads and partitioning the load is a waste of time. Anyway, all attempts to multithread this thing only made it slower. The more threads I started the slower it ran, regardless of threading strategy. Now that I look at it, the problem may have been that each iteration of the loop allocates several new positions and only deletes one. It's constantly growing in terms of dynamic memory, and those newly allocated positions have to be loaded into the data cache. Hmm. My first attempt at multi-threading simply protected the heap with a mutex and turned all the threads loose on the main loop. I guess that's why I'm in the Apprentice division! When that failed, I assumed it was due to contention for the mutex. After a series of other failures, I found a paper on Parallel Best N-Block First by Ethan Burns et al. PBNF is a clever algorithm that can be applied to Maze of Life. I won't cover the details here but recommend reading the paper(s). The basic idea is that the game grid is divided into equal-sized rectangular zones (typically squares), say of size 4x4 cells. Each thread stakes out a zone and only processes positions where the intelligent cell is in that zone. The active zones are chosen to be sufficiently far apart so that if the intelligent cell moves out of one thread's zone, it will enter a passive zone (i.e. one not currently being processed by a thread). This precludes the need for mutexes on most data structures. Periodically each thread checks to see if there is a more promising zone that it should be working on. PBNF can be configured to find optimal paths or bounded sub-optimal paths (i.e. length within a certain percentage of the optimal path length). This was the solution that I submitted, using the greedy payoff function as its heuristic. It still didn't speed up with threads but this is the Threading Challenge. It wouldn't be in the spirit of the competition to submit a single-threaded solution. - Rick LaMont Attachment Size Download nblock.cpp 39.01 KB Fri, 07/01/2011 - 20:37 Quoting jmfernandez TBB has priority queue (for my open set) and hashmap (for my closed set) thread safe containers Wow. You actually implemented a closed set. I figured that the chances of a position repeating itself (all cells in same state and intelligent cell in same location) were negligible so it wasn't worth storing old positions and looking them up in a hash table. Did you run any metrics to see how often the closed set pruned a branch of your search tree? decide when a maze is big enough to be resolved using multiple threads Is that the part where it compares the total grid size to 50 * 50? Probably a good idea since 8 of the datasets were smaller than that. Congratulations on your strong performance. - Rick Sat, 07/02/2011 - 05:09 Hi Rick, Answering your first question, even for the shortest grid, the closed lets you save some time. But its also true that this saved time seems to grow in a linear fashion so It is not a critical point, and I think that to use shared containers is not a problem because the computation of the every new grid state takes time enough. I also tried to use per-thread closed sets in order to avoid thread contention and I realized that closed set contention is not very important. The big problem is that the memory can be a bottleneck. I though that I was going to need to store only CRC32 (or something like that)of every maze in the closed set, but finally I didnt implement it. About your second question. Maybe I didnt explain it properly. As I wrote, there are two points where the algorithm can be parallelized: Open set popping and new maze computation and pushing. But there are one third poing who also can be parallelized in some concrete situations. If the maze is big enough, you can calculate the new maze status using multiple threads. The 50x50 thing is related to this point. In my algorithm there are two points who can be working in parallel: The new maze status calculation, who depends in the maze size, and the open set popping, who depends on if we have reached a open set concrete size. And there is other part who is always done in parallel: When a element is popped from the open set, the different directions are alwayscomputed in parallel. Thanks! But I am not sure that this is a good implementation. I didnt have time enough in any of the contest problems, and I would have liked to do more measures and more interesting proof of concepts. So I think that all my implementations can be seriously improved. [I edit this entry to add a final comment. I would have liked to have more time in order to clean a little bit my source code, who in this problem is specially horrible ;o)] Sat, 07/02/2011 - 10:15 My solution went through a bit of an up-down process, as I initially mis-read the problem and solved slightly the wrong thing, so it was pretty close to the deadline when I finally got to optimising things. It was also something like 1am the night before submission when I finally got around to testing larger grids - and since my solution was a breadth-first search, it had little hope for lazydodo's tests. Given more time, I probably would've made a simultaneous depth-first score-based algorithm to run alongside, but there you go. As it was, my performance turned out very well for the smaller tests, and I noticed that if I'd made four forum posts I technically would have won - but that's neither here nor there, as I didn't solve it all and am not bothered about that :) I got lucky with the scoring system that time, and may not with the others. Technically, here's roughly what my solution did: - Store the current list of grids to process as a vector, with each grid as just a 2D array of bools - Use tbb::parallel_for on the vector to divide up the grids by thread - For each grid, calculate all possible moves (up to nine), grow the grid from each, hash it, and store it to a local output vector - Once all threads have been processed, clear the main vector and copy each of the local ones into it, provide their hash isn't already present in a processed set (i.e. pass on successful insert) - Of course there was an early out once a thread found a solution. A 2D array of bools was the fastest of the data types I tried - but it was good for the 7x7 grids, and probably not so much for the 200x200s. parallel_for worked out pretty well for threading, though I just manually locked things when I needed to, as that was faster than the various concurrent objects. More details are in the code :) Attachment Size Download maze.tgz 9.67 KB Login to leave a comment.
{"url":"https://software.intel.com/en-us/forums/topic/283078","timestamp":"2014-04-21T10:08:35Z","content_type":null,"content_length":"75789","record_id":"<urn:uuid:97e0c3e6-6466-4f58-a66e-a8fa83462bca>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
An efficient Monte Carlo approach for optimizing communication constrained decentralized estimation networks Üney, Murat and Çetin, Müjdat (2009) An efficient Monte Carlo approach for optimizing communication constrained decentralized estimation networks. In: 17th European Signal Processing Conference (EUSIPCO-2009), Glasgow, Scotland PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader We consider the design problem of a decentralized estimation network under communication constraints. The underlying low capacity links are modeled by introducing a directed acyclic graph where each node corresponds to a sensor platform. The operation of the platforms are constrained by the graph such that each node, based on its measurement and incoming messages from parents, produces a local estimate and outgoing messages to children. A Bayesian risk that captures both estimation error penalty and cost of communications, e.g. due to consumption of the limited resource of energy, together with constraining the feasible set of strategies by the graph, yields a rigorous problem definition. We adopt an iterative solution that converges to an optimal strategy in a person-byperson sense previously proposed for decentralized detection networks under a team theoretic investigation. Provided that some reasonable assumptions hold, the solution admits a message passing interpretation exhibiting linear complexity in the number of nodes. However, the corresponding expressions in the estimation setting contain integral operators with no closed form solutions in general. We propose particle representations and approximate computational schemes through Monte Carlo methods in order not to compromise model accuracy and achieve an optimization method which results in an approximation to an optimal strategy for decentralized estimation networks under communication constraints. Through an example, we present a quantification of the trade-off between the estimation accuracy and the cost of communications where the former degrades as the later is increased. Repository Staff Only: item control page
{"url":"http://research.sabanciuniv.edu/13320/","timestamp":"2014-04-19T12:00:09Z","content_type":null,"content_length":"18623","record_id":"<urn:uuid:6bb585bc-20cd-4538-8256-9d08c661c543>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
10th Class Grading System AP - SSC Grading Points - Grade Process AP SSC/10th Class Grading System 2012 AP SSC Results 2012 has been released in the year 2012 on 24th May 2012. SSC Board has been released Grading system this year for the first time in Andhra Pradesh . The grade will be calculated based on marks in SSC /10th class public exam.You can see the Grading system, provided in table for 6 subjects. Hindi subject has separate grade than other 5 subjects. Based on the range of marks scored the grade will be calculated .Telugu,English,Social,Maths and Science subjects has the same value of grade. Even as teachers welcome the grading system adopted from this year for the 10th class examinations, a section of parents and students point out that the method adopted for grading will be disadvantageous to them. Unlike the CBSE grading system that is static, the State government has adopted the nine-point ‘relative grading’ system. Subject grades and aggregate grades will be assigned to the candidates depending on their relative performance in the subjects. It means a particular candidate’s grade will not only depend on his/her performance but also on how others perform. In the CBSE mode the grade is decided on the percentage scored by the candidates. Under the new system, the top 12.5 per cent of the passed candidates will get A1 grade while students below 12.5 per cent and above 25 per cent will get A2 grade. Students who figure below 25 per cent and above 37.5 per cent will get B1 grade. Candidates,who get the marks range between 92-100 then his grade is A1. If you get between 83 to 91 marks,then grade decrease to A2. Here A1 is greater than A2.There are totally 9 grades available from A1 to E.If you get Grade A1 then you will get 10 Grade Points.For Grade A2 the grade points are 9. If your Grade follows like A1+A2+A1+A1+A2+A1 in 6 subjects.Then your Grade Points calculation calculate as 10+9+10+10+9+10=58/6=9.66 Here your Grade Points=9.66. The below listed table explained the complete process of Grading system for 10th class marks. │ SSC – Grade Point Average (GPA) System │ │Marks range in Each Subject English, Telugu, Mathematics, Science, Social │Marks range in Hindi│Grade│Grade Point│ │ 92-100 │ 90-100 │ A1 │ 10 │ │ 83-91 │ 80-89 │ A2 │ 9 │ │ 75-82 │ 70-79 │ B1 │ 8 │ │ 67-74 │ 60-69 │ B2 │ 7 │ │ 59-66 │ 50-59 │ C1 │ 6 │ │ 51-58 │ 40-49 │ C2 │ 5 │ │ 43-50 │ 30-39 │ D1 │ 4 │ │ 35-42 │ 20-29 │ D2 │ 3 │ │ 34-0 │ 19-0 │ E │ – │ 1. indhu : my gpa is 8.5 can anyone tell me what will be my total percentage □ prashanth : 8.5 means not % bcoz so many schools got 10 GPA so if u think it as % iys wrong ……in web site by ur total grade u can gess ur marks in manabadi.co.in……pls inform to others …… accrdng to me ir marks are nearly 475 ok pls reply for my quesion □ vikas : plz send me u r mobile no.i will tell u details 2. Tech Inspiration : Thank you for your info it worked a lot.Thanks bro. i am trying to call getting u, but its getting not rechable.so i droped am email.pls can u check and tel the total marks. his hallticket number was 1317131281.he got 9.3 grade.can u tel me total marks…??? i hallticket number 1317131281.he got 9.3 grade can u tel the total marks,..? 3. murali : my gpa is 9.2 can any one tell me what will be my totoal percentage □ Tech Inspiration : hI @MURALI your total percentage is 92% amd marks 552 Congratulations Thank you. If you have any concern follow here.are mail me @ yadavnow@gmail.com or call on: 8374222688 ☆ prashanth : 9.2 Means not % bcoz so many schools got 10 GPA if u think it as % then highest marks are 600/600 okayyyy by ur total grade u can know ur marks in manabadi.co.in…..accrdng to me ur total is nearly 510 okayyyyy …..!!!!!!! 4. Tech Inspiration : Hey Indu, Hi Congratulations Your percentage is 85% and your marks are 510. You can contact here for further clarification or can call me on 8374222688 or mail me on yadavnow@gmail.com □ Dedeepya : I am Dedeepya. My GPA is 9.7. Please tell me what will be the my total marks? ☆ Tech Inspiration : @deedepya Congratulation you have got 9.7 = 582 marks.Thank you. If you have any concern follow here.are mail me at: yadavnow@gmail.com or call on: 8374222688 5. vamsi : my gpa is 9.7 plz tell me my total percentage and total marks □ Tech Inspiration : @vamsi Congratulation you have got 9.7 = 582 marks.Thank you. If you have any concern follow here.are mail me @ yadavnow@gmail.com or call on: 8374222688 6. surya : I’ve got 85% can u plz tell my grade □ Tech Inspiration : @SURYA YOUR GRADE IS 8.5 and marks are 510 7. MANUKAR : my gpa is 9.2, what will be my percentage and how many marks 8. sandee : My name is sandeep i hav got 8.5grade it means wats my marks and which grade wil i get a2 or b1 □ sanjay : i think its “E” 9. d.samreen : MY GP IS 9.5. CAN I PLZ KNOW MY PERCENTAGE AND OVERALL GRADE? CAN U PLZ TELL THE PROCEDURE HOW TO FIND OUT? 10. vamsi : my friend got gpa is 8.7 plz tell me his total percentage and marks 11. vamsi : my friend percentage is 7.3 percent plz tell her marks and total percentage 12. gautham : I got 9.3 what is my percentage and grade 13. Vamshi : My GPA is 9.2 , then what will be my grade is my grade A1 or A2 14. rahul : My GPA is 8.5 , please can you tell my percentage or marks 15. mariya : hi my gpa score is 8.7 please tell me my total marks 16. pradeep : my gpa is 9.3 please tell me the percentage and total marks 17. kailash : from 92 to 100 we get 10 points then how could we confirm our total.for example 93,93,93,93,93,93,in all subjects we r rewarded 10 points .is our total 600 ? 18. Anesh : Hey i am a student of vignan school my gpa is 10 den wats my total is it 600 if so how can it b if nt den how 9.2 can b 552 19. lokesh : hello iam lokesh my gpa is 9.5 . please tell me what will be the total 20. lokesh : hello iam lokesh igot 9.5 gpa what will the total □ harsha : approx 570-580 21. sowdamini : hi, i got G.P.A 9.2. will you please tell my percentage and the marks i secured? □ hemA : MEMU ADHEY BAADA PADUTHUNNAMU AMMA NAADI KUDE 9.2 NADI MEDA TANTAANU 22. sowdamini : hi, my G.P.A is 9.2. so, please tell me my percentage and the marks i secured. 23. bhanu : can anyone tell me my total marks .my GPA is 9.7.did i have reached 570 24. sai sumanth : i got 9.3 it means how many marks i got 25. ayshwarya : i have got 8.8 GPA plz can u tellme my total marks 26. Mona Patel : Hi…what percentage is for 9.7 ?? 27. chANDU : hi guys if u got 9.7 menas just u think that u got blow 550 blow only …… 28. suryanarayana : hi my gpa score is 9.3,so please tell me my percentage and marks? 29. kishore : mY GPA IS 8.8,, PLZZ TEL ME HOW MANY MARKS I GOT?? 30. kishore : MY GPA IS 8.8 ,HOW MANY MARKS I GOT PLZ TEL ME?? 31. pooja : this is pooja. i got 9.7 gpa. pls send statistics of ssc results. like how many students got 10 gpa, 9.9 , 9.8 9.7 etc., □ sreeja : but can u plzz tell me hw 2 calculate the total from grade is it 600 total 4 1o grade??? plzz tell me.. n hw can we knw our marks 4m the gpa?? 32. usha : my gpa is 9.5 what’s my overall grade ? plz tell me soon… □ ammulu : my gpa is 6 what is my total? 33. Samrin : I got 8.0 GPA..can u tell me what is my total and percentage..?????? 34. BHANU : I GOT 9.0 GPA .CAN YOU TELL ME MY GRADE???????? 35. Samar : I got 9.2 GPA in 2013 results , can you tell me my total and percnentage? 36. vivek : estimate my marks i got 8.7 37. varun : hii i got 8.0 what will my total marks 38. Thrishal Anand : I’ve got 9.7 and can you let me know what grade did I get? like A1 or A2? 39. kumaraswamy : it is wrost grade system in 10th class why use for grade system??????????????????? all student are confusion purpose only 40. harsha : i got 8.8 so how many marks i got and my percentage my grade……… can u plz tell me 41. annanya : hey i got b1 , b1, b2 b2 and c1 grades wat is my cgpa will u plzz te ll me Speak Your Mind Cancel reply
{"url":"http://fullysolved.com/ap-ssc-grading-system/","timestamp":"2014-04-20T11:32:12Z","content_type":null,"content_length":"80745","record_id":"<urn:uuid:62a94625-d5ee-4dbc-a312-78d01f2b4fa2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Edgewater, NJ Algebra 1 Tutor Find an Edgewater, NJ Algebra 1 Tutor ...I've been teaching and tutoring for over 15 years in several subjects including pre-algebra, algebra I & II, geometry, trigonometry, statistics, math analysis, pre-calculus, calculus (AP), number theory, SAT Math/Critical Reading/Writing, programming, web design, earth science, biology, chemistry... 83 Subjects: including algebra 1, chemistry, calculus, geometry ...I have always been a lover of mathematics and have had experience tutoring and mentoring New York City high school students in writing while I was in college. I have a passion for learning and hope to spread that love of learning to others.I began learning the language in the fall of 2011 throug... 5 Subjects: including algebra 1, geometry, Japanese, Portuguese ...Overall, I prefer a non-stressful approach to establish a baseline from which to go on with each individual student. Preparation is also key and having the right material to work on is very important. Depending on the subject (and especially for computer subjects) before the first session, I wi... 9 Subjects: including algebra 1, algebra 2, trigonometry, logic ...I am also able to advise techniques for drawing diagrams on the logic games section, and well as general logical rules to help understand the arguments and constraints in each problem. I have tutored the GRE at least 20 times. I am able to tutor students of all skill levels, from the most eleme... 32 Subjects: including algebra 1, calculus, physics, statistics ...Mr. Hill has played in smaller ensembles coached by Robert Marsteller, Mitchell Lurie, and Jane Taylor and in orchestras led by Léo Arnaud, Hans Beer, Lawrence Christianson, Ingolf Dahl, Lukas Foss, and William Schaefer. He was principal horn with the 26th United States Army Band at Fort Wadswo... 8 Subjects: including algebra 1, geometry, algebra 2, prealgebra
{"url":"http://www.purplemath.com/edgewater_nj_algebra_1_tutors.php","timestamp":"2014-04-18T08:23:57Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:19c1a9d6-6ffc-4475-9e9c-405065dd8e68>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Making Trains' printed from http://nrich.maths.org/ Laura made a train with the Cuisenaire rods. Her train was made from three rods which were all different colours. It was the same length as three pink rods: Laura's train looked like this: Can you make a different train, the same length as Laura's, with three rods which are all different colours? Can you make one the same length using no colours that Laura used? Rob made a train that was the same length as Laura's using four differently coloured rods. How did he do it? Charlene made a train which was as short as it could be using four differently coloured rods. Can you find a rod that is the same length as Charlene's train? Ben made a train using four differently coloured rods the same length as two black rods and a light green one: He made it without using a white rod. How did he do it? You may like to use this interactivity to try out your ideas. Click on 'Rods' to choose your rods: Full Screen Version This text is usually replaced by the Flash movie.
{"url":"http://nrich.maths.org/4331/index?nomenu=1","timestamp":"2014-04-17T01:19:51Z","content_type":null,"content_length":"5084","record_id":"<urn:uuid:e07d4059-7b46-46fc-95bd-a9dc0ce705fe>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach Science and Technology of Nuclear Installations Volume 2011 (2011), Article ID 941239, 10 pages Research Article CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach ^1Electricité de France R&D Division, 6 Quai Watier, 78400 Chatou Cedex, France ^2INCKA, 85, Avenue Pierre Grenier, 92100 Boulogne Billancourt, France Received 14 March 2011; Revised 3 May 2011; Accepted 6 May 2011 Academic Editor: Giorgio Galassi Copyright © 2011 S. Mimouni et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces compensate surface tension force and then droplets slide over the wall and form a liquid film. This approach allows taking into account simultaneously the mechanical drift between the droplet and the gas, the heat and mass transfer on droplets in the core of the flow and the condensation/evaporation phenomena on the walls. As concern the homogeneous approach, the motion of the liquid film due to the gravitational forces is neglected, as well as the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN Saclay). Computational results compare favorably with experimental data, particularly for the Helium and steam volume fractions. 1. Introduction Condensation heat transfer in the presence of noncondensable gases is a relevant phenomenon in many industrial applications, including nuclear reactors. In particular, during the course of a hypothetical severe accident in a nuclear pressurized water reactor (PWR), hydrogen may be produced by the reactor core oxidation and distributed into the reactor containment according to convective flows, water steam wall condensation, and interaction with the spraying droplets. In order to assess the risk of detonation generated by a high local hydrogen concentration, hydrogen distribution in the containment vessel has to be known. The TOSQAN experimental programme [1] has been created to simulate typical accidental thermal hydraulic flow conditions of the reactor containment. The heat and mass exchanges between the spray droplets and the gas with thermal hydraulic conditions representative of this hypothetical severe accident have been studied in [2]. The aim of this work is, thus, to focus on wall condensation. To evaluate the condensation modelling of containment codes, ISP47 test was performed in the TOSQAN facility (OECD). The TOSQAN facility is a large enclosure devoted to simulate typical accidental thermal hydraulic flow conditions in PWR containment (Section 4). It is highly instrumented with nonintrusive optical diagnostics. Therefore, it is particularly suitable for nuclear safety CFD code This issue has already been addressed by using computational fluid dynamics (CFD) codes as CFX code [3]. In these calculations, the flow is modelled as a single-phase and the condensation acts as a sink of mass and energy. In this approach, the liquid film and the influence of the noncondensable gas layer are reduced to a simple sink term. On the other hand, the use of explicit correlations to evaluate heat and mass transfer processes, though it represents a feasible approach for large experimental facilities and reactor plant containments, partly ignores the useful information provided by the detailed CFD models in relation to local conditions. Another modelling is proposed in [4] with the FLUENT code. With this approach, heat and mass correlations are replaced by using “fundamental” physical laws. But, in that case, a very fine computational grid is required; the adopted two-dimensional grid discretizes the vessel gas region of TOSQAN experiment in about 28500 cells (average size of 1cm) instead of 4800 for the former case (average size of 2.6cm). The main objective of the paper is to propose a novel condensation model based on “fundamental” physical laws without requiring a very fine computational grid; 7500 cells are used for TOSQAN ISP47 test, and the grid is uniform. In reactor applications, droplets at the wall come from vapor condensation or sprays. The computation of heat and mass transfer between a spray and a gas mixture has already been addressed [2]. In fact, thanks to a code-to-experiment benchmark based on 2 tests of the TOSQAN facility [5], we successfully evaluated the ability of the code to reproduce the droplet heat and mass transfer on one hand (TOSQAN 101 case) and the gas entrainment and atmosphere mixing by the spray on the other hand (TOSQAN 113 case). A novel model dedicated to the droplet evaporation at the wall was also proposed [2]. As a consequence, the vapor condensation model can be seen as an extension of the previous model. Moreover, it is of primary importance to take into account both evaporation and condensation phenomena. In fact, Andreani et al. [6] underline that depending on the break location and the geometry of the containment, liquid films could flow into dry regions where the liquid would evaporate. If walls are hotter than the liquid film, this would result in an enhanced evaporation rate. The two-phase flow approach adopted in the paper allows taking into account simultaneously the mechanical drift between the droplet and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. But, the calculations of the wall condensation with a homogeneous model (as implemented in Code_Saturne) (Archambeau, 2004), of widespread use for engineering studies, give also reasonable results; as a consequence, the comparison between these two methods allows to underline their advantages and drawbacks, respectively. The paper is organized as follows. First, we describe briefly the set of equations solved in the NEPTUNE_CFD and Code_Saturne codes. In the last part, the two-phase flow model and the homogenous models are compared and validated by simulating the TOSQAN ISP47 test on global and local variables. Both models have been already validated against COPAIN test [7]. 2. The Numerical Solver and Physical Modeling: NEPTUNE_CFD Code The solver belongs to the well-known class of pressure-based methods. It is able to simulate multicomponent multiphase flows by solving a set of three balance equations for each field (fluid component and/or phase) [8, 9]. These fields can represent many kinds of multiphase flows: distinct physical components (e.g., gas, liquid, and solid particles), thermodynamic phases of the same component (e.g., liquid water and its vapour), distinct physical components, some of which split into different groups (e.g., water and several groups of different diameter bubbles), and different forms of the same physical components (e.g., a continuous liquid field, a dispersed liquid field, a continuous vapour field, a dispersed vapour field). The solver is implemented in the NEPTUNE software environment [10, 11], which is based on a finite volume discretization, together with a collocated arrangement for all variables. The data structure is totally face based which allows the use of arbitrary shaped cells (tetraedra, hexaedra, prisms, pyramids, …) including nonconformal meshes. The main interest of the numerical method is the so-called “volume fraction-pressure-energy cycle” that ensures mass and energy conservation and allows strong interface source term coupling [12]. Mass balance equations, momentum balance equations, and total enthalpy balance equations are solved for each phase. The gas turbulence is taken into account by the classical k-ε model. The droplet diameter evolution is calculated from an equation of transport on the density of drops. Additional equations are added to take into account the noncondensable gases (air and helium). As concerns the interfacial momentum transfer terms, the only force exerted on droplet is the drag force. Small droplets stick at the wall and large drop slide along the wall under the competition between the surface tension and the gravity force. As a consequence, the gas velocity near the wall does not tend to zero but to the droplets velocity because of the drag force. This is a major difference between single-phase and two-phase flow approach [7]. As concerns the heat and mass transfer between droplets and the wall, it is based on the balance of heat and mass transfer between a drop and the gas mixture surrounding the drop using the correlations of Ranz and Marshall [13] which are of widespread use. The model of drop-wall interaction which was developed and implemented is written as a symmetric extension of the nucleate boiling model at the wall and uses as a starting point the model of mass transfer in the core flow. To establish this model, we made the following assumptions:the drops which accumulate on the walls take a hemispherical form;there is no nucleate boiling inside the drops at the wall;the drops which impact the walls successively see a stage of cooling (resp., heating) and a stage of condensation (resp., evaporation);the droplets stick to the wall (no rebound), or slide along the wall. The total heat flux exchanged between the wall and the flow is split into four terms:a single-phase flow convective heat flux at the fraction of the wall area unaffected by the presence of droplets (heat transfer between the gas and the wall);a single-phase flow convective heat flux at the fraction of the wall area affected by the presence of a liquid film (heat transfer between the liquid film and the wall);a single-phase flow heat flux to decrease (resp., increase) the droplet temperature and reach the wall temperature (resp., the saturation state) (heat transfer between the droplets and the wall);a condensation (resp., vaporisation) heat flux. Details can be found in [2]. An extensive validation process has been achieved in [7] against the COPAIN experiment, and mesh sensitivity has been found acceptable. 3. Homogeneous Gas Dynamic Model Used in Code_Saturne The motion of gases and heat transfer in containment enclosures can be described by the general momentum, partial masses, and energy conservation equations. The predominant physical phenomena driving the distribution and heat transfer of fluids within containment enclosures are the following.Mixing and/or segregation of gas whose velocity, density, and temperature are different.“Swelling” of containment: the compressibility of gas is taken into account, even if the flow velocities are low.Laminar and controlled combustion of hydrogen in recombiners, in order to limit the concentration of this gas.Condensation of steam on cold structure surfaces, which has the main effect of limiting the pressure rise. The general momentum, partial masses, and energy conservation equations describing these phenomena can be simplified, and stiffness due to the presence of physics having very different characteristic length and time scales can be removed or relaxed. Steam condensation on the walls of the containment enclosure plays a key role in the dynamic and heat transfer. The heat and mass sink terms of gases due to condensation are modeled through correlations based on heat and mass transfer analogy of Chilton-Colburn type. The liquid film is not modeled, and it is assumed that vapor and noncondensable gases are in direct contact with the wall. The modelling of the heat transfer by condensation of steam in liquid can be found in [14]. 4. TOSQAN ISP47 Test 4.1. TOSQAN Experiments The TOSQAN experiment (Figure 1) is a closed cylindrical vessel (7m^3, i.d. 1.5m, total height of 4.8m, and condensing height of 2m) into which steam or noncondensable gases are injected through a vertical pipe located on the vessel axis. This vessel has thermostatically controlled walls so that steam condensation may occur on one part of the wall (the condensing wall), the other part being superheated (the noncondensing wall). The entire transient of the ISP47 test lasted about 18000s. During certain phases of the experiment, steady states were reached when the steam condensation rate became equal to the steam injection rate, while all boundary conditions (in particular, wall temperatures and steam injection rates) were kept constant. The boundary conditions during different steady states were different. The boundary conditions are summed up in Table 1 [1]. 4.2. Numerical Setup The initial conditions for the thermo-fluid-dynamic variables necessary to start the simulation of the transient were evaluated through a preliminary calculation, with no mass flow rate at the inlet section and with only air present inside the vessel. The mean upper (resp., lower) noncondensing wall temperature is maintained constant and equal to 122.0°C ±1 (resp., 123.5°C ±1) during the whole test. Gas temperature, volume fractions, and gas velocity measurements are available on TOSQAN at different heights . The flow is assumed to be axisymmetric so that a two-dimensional axisymmetric mesh is used. Two-dimensional representations of axial velocity and gas temperature with NEPTUNE_CFD are illustrated on Figure 2. Computations with NEPTUNE_CFD have been performed on two kinds of meshing: a grid with 7500 cells (average size of 2cm) and a fine grid with 32000 cells (average size of 1cm). Results are similar (Figure 5) between “standard” (7500 cells) and fine mesh (32000 cells). Hence, the subsequent computations with NEPTUNE_CFD are performed on the “standard” grid (7500 cells). Calculations performed with Code_Saturne use about 1700 cells. The cells are of hexahedral shape. 4.3. Results and Discussion In this section, experimental values are compared to the values calculated with the NEPTUNE_CFD code with a two-phase flow approach. In some figures, values calculated with Code_Saturne (homogenous approach) have been added and named “saturne”. The evolution of the relative pressure during the whole transient is illustrated by Figure 3 and compares quite favourably with experimental data. This figure gives a general idea of the successive 4.3.1. Gas Temperature Profiles The gas temperature compares favourably with experimental results in the lower part of the TOSQAN vessel but is overestimated in the upper part in plume or jet-plume configuration (Figures 8, 9, 14, and 16). In jet configuration, the gas temperature profiles are in reasonable agreement with the experimental data, including near the wall (Figures 10 and 11). We recall that a plume is a column of one gas moving through another. Several effects control the motion of the fluid, including momentum, diffusion, and buoyancy (for density-driven flows). When momentum effects are more important than density differences and buoyancy effects, the plume is usually described as a jet. 4.3.2. Gas Velocity Profiles The gas temperature results are correlated to the gas velocity that is correctly predicted in the steady state 2 (Figure 13) whereas discrepancies are observed for the steady state 1 (Figure 4). Steam mean mass flow rate at steady state 1 is 1.11g/s. At the injection mean temperature, namely, 126°C, the vapor density is kg/m^3. The internal diameter of the injection tube is mm. We deduce the vapor velocity at outlet (m) of the injection tube by which leads to m/s. This value is coherent with the radial profile of the axial velocity at m where a peak along the axis is observed (Figure 4). If we assume that condensation may occur in the core flow, then droplets may form (wet vapor). Because of the mass flow rate conservation, the gas velocity at injection is lower, and the comparison calculated/experimental values is improved for the velocity profiles. But, with condensation in the core flow, calculations show that the gas temperature is globally overestimated in the vessel. As a consequence, more investigations are still needed to check if the mass transfer in the core flow can be neglected. Particularly, the heat and mass transfer in the core flow strongly depends on the droplets diameter for which the initial values are crucial. Another reason could explain the discrepancies about the vertical gas velocity, the modelling of turbulence in buoyant jet configuration, since the empirical constants of the turbulence models are fitted to jet configurations. In fact, the axial gas velocity profile is in reasonable agreement with the experimental data for the steady state 2 (Figure 13). But, the axial gas velocity profiles (Figure 15) for the steady state 3 and 4 (plume jet configuration like the steady state 1) are also in reasonable agreement with the experimental data; hence, discrepancies are not only due to the turbulence modelling. 4.3.3. Helium and Vapor Volume Fraction Vapor volume fraction globally compares favourably with the experimental results (Figures 6, 7, 12, 17, and 18). Hence, the two-phase flow approach proposed to predict vapor condensation on a cooled surface in the TOSQAN ISP47 test is successfully validated in terms of condensation flux whereas discrepancies remain for the heat flux between the wall and the gas mixture in plume configurations. However, these discrepancies should have no impact on safety considerations according to [1]. Moreover, in the homogenous approach (cheap in computational time), the liquid film is not modeled and it is assumed that vapor and noncondensable gases are in direct contact with the wall. Thus, the gas velocity tends to zero at the wall. In the two-phase approach (expensive in computational time), vapor condensates at wall and forms a liquid film. Thus, the gas velocity tends to the liquid film velocity near the wall. As a consequence, the stratification calculated with the two-phase flow approach is accurately calculated in Figure 18 whereas discrepancies are observed with the homogeneous As a consequence, the helium volume fraction profiles are in good agreement with the experimental data (Figures 17 and 18) because the mixture density equals the sum of vapour, air, and helium density. Nevertheless, the accuracy prediction of the global condensate liquid is only a necessary condition. In fact, at s, the helium injection is stopped, and, hence, the mass of helium is constant in the vessel. In most of numerical CFD codes, the helium mass balance equation is usually solved after the mass, momentum and energy balance equations which leads to a numerical error on the helium mass conservation. This numerical error can be neglected for short physical times but can exceed 20% for long transient calculations. Therefore, in a word, the noncondensable gases (air and helium) mass balance equations are solved inside the so-called “volume fraction-pressure-energy cycle” that ensures mass conservation. These results are relevant for safety considerations, given that in applications, hydrogen (explosive gas) is produced in nuclear power plan containment at accident conditions instead of helium. NEPTUNE_CFD results and Code_Saturne results are in good agreement globally. 5. Conclusion A large amount of steam and hydrogen gas is expected to be released within the dry containment of a pressurized water reactor (PWR), after the hypothetical beginning of a severe accident leading to the melting of the core. The accurate modeling of gas distribution in a PWR containment concerns phenomena such as wall condensation, hydrogen accumulation, gas stratification, and transport in the different compartments of the containment. The paper presents numerical assessments of CFD solvers NEPTUNE_CFD and Code_Saturne and is focused on the analysis and the understanding of gas stratification and transport phenomena. We have presented in this paper the wall condensation modelling implemented in NEPTUNE_CFD, a three dimensional two-fluid code dedicated to nuclear reactor applications. A novel model dedicated to the droplet evaporation at the wall was proposed in [2] and generalized in this work to the vapor condensation on a cooled surface. Thanks to a code-to-experiment benchmark based on the COPAIN facility, we successfully evaluated the ability of the codes to reproduce the vapor condensation at wall in a previous work [7]. In this paper, both codes are validated and compared with experimental data corresponding to the TOSQAN ISP47 test. The obtained computational results compare fairly well with experimental data and other computational results obtained with others codes as CFX [3] and FLUENT [4] codes. Moreover, during the course of a severe accident in a pressurized water reactor (PWR), spray systems are used in containment in order to limit overpressure, to enhance the gas mixing in case of the presence of hydrogen, and to drive down the fission products. Hence, vapor condensation on a cooled surface and spray systems act simultaneously during the course of a severe accident. The two-phase flow approach proposed in the paper allows to simulate both phenomena simultaneously. Predictions regarding axial velocity do not agree in some cases because of turbulence modelling. One alternative in further studies might be to use reynolds stress transport model to deal with turbulence modelling (Mimouni and Archambeau, 2010), [15]. Future work will also concern mesh sensitivity studies comprising structured mesh (hexahedra) or unstructured mesh (tetrahedron). This work has been achieved in the framework of the PAGODES2 project financially supported by EDF (Electricité de France). The NEPTUNE_CFD code is being developed in the framework of the NEPTUNE project financially supported by CEA (Commissariat à l’Energie Atomique), EDF (Electricité de France), IRSN (Institut de Radioprotection et de Sûreté Nucléaire), and AREVA-NP. 1. J. Vendel, J. Malet, A. Bentaib et al., “Conclusions of the ISP-47 containment thermal-hydraulics,” in Proceedings of the 12th International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH '07), Pittsburgh, Pa, USA, September-October 2007. 2. S. Mimouni, J.-S. Lamy, J. Lavieville, S. Guieu, and M. Martin, “Modelling of sprays in containment applications with A CMFD code,” Nuclear Engineering and Design, vol. 240, no. 9, pp. 2260–2270, 2010. View at Publisher · View at Google Scholar 3. I. Kljenak, M. Babić, B. Mavko, and I. Bajsić, “Modeling of containment atmosphere mixing and stratification experiment using a CFD approach,” Nuclear Engineering and Design, vol. 236, no. 14–16, pp. 1682–1692, 2006. View at Publisher · View at Google Scholar 4. N. Forgione and S. Paci, “Computational analysis of vapour condensation in presence of air in the TOSQAN facility,” in Proceedings of the 11th International Topical Meeting on Nuclear Reactor Thermal-Hydraulics (NURETH '05), Avignon, France, October 2005. 5. J. Malet, L. Blumenfeld, S. Arndt, et al., “Sprays in containment : final results of the SARNET spray benchmark,” in Proceedings of the 3rd European Review Meeting on Severe Accident Research (ERMSAR '08), Nesseber, Bulgaria, September 2008. 6. M. Andreani, D. Paladino, and T. George, “On the unexpectedly large effect of the re-vaporization of the condensate liquid film in two tests in the PANDA facility revealed by simulations with the GOTHIC code,” in Proceedings of the XCFD4NRS Workshop, Grenoble, France, September 2008. 7. S. Mimouni, A. Foissac, and J. Lavieville, “CFD modelling of wall steam condensation by a two-phase flow approach,” Nuclear Engineering and Design. In press. View at Publisher · View at Google 8. M. Ishii, Thermo-Fluid Dynamic, Theory of Two Phase, Collection de la Direction des Etudes et Recherches d'Electricite de France, Eyrolles, Paris, France, 1975. 9. J.-M. Delhaye, M. Giot, and M. L. Riethmuller, Thermal-Hydraulics of Two-Phase Systems for Industrial Design and Nuclear Engineering, Hemisphere and McGraw Hill, Washington, DC, USA, 1981. 10. A. Guelfi, D. Bestion, M. Boucker et al., “NEPTUNE: a new software platform for advanced nuclear thermal hydraulics,” Nuclear Science and Engineering, vol. 156, no. 3, pp. 281–324, 2007. 11. S. Mimouni, M. Boucker, J. Laviéville, A. Guelfi, and D. Bestion, “Modelling and computation of cavitation and boiling bubbly flows with the NEPTUNE_CFD code,” Nuclear Engineering and Design, vol. 238, no. 3, pp. 680–692, 2008. View at Publisher · View at Google Scholar 12. N. Mechitoua, J. Lavieville, et al., “An unstructured finite volume solver for 2-phase water/vapor flows modelling based on an elliptic oriented fractional step method,” in Proceedings of the 10th International Topical Meeting on Nuclear Reactor Thermal-Hydraulics (NURETH '03), Seoul, South Korea, October 2003. 13. W. E. Ranz and W. R. Marschall, “Evaporation from drops,” Chemical Engineering Progress, vol. 48, pp. 173–180, 1952. 14. N. Mechitoua, S. Mimouni, et al., “CFD modeling of the test 25 of the PANDA experiment, experimental validation and application of CFD and CMFD codes to nuclear reactor safety issues,” in Proceedings of the XCFD4NRS Workshop, Washington, DC, USA, September 2010. 15. S. Mimouni, F. Archambeau, M. Boucker, J. Lavieville, and C. Morel, “A second order turbulence model based on a Reynolds stress approach for two-phase boiling flow. Part 1: application to the ASU-annular channel case,” Nuclear Engineering and Design, vol. 240, no. 9, pp. 2233–2243, 2010. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/stni/2011/941239/","timestamp":"2014-04-17T10:08:53Z","content_type":null,"content_length":"92605","record_id":"<urn:uuid:6142eea3-ffac-4140-9edc-36d830e94924>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Riemann Sums, Simpson's Rule This applet will graph a given function on an interval [a, b], display various graphs that arise in the numerical evaluation of integrals, and evaluate the associated sums. The graphing screen is large and you will have to move from this text screen to the graphing screen by means of the mouse while learning how to use the applet. After the applet window is loaded you will see: a graphing area at the top, left of the screen; a text area at the top, right, and three text fields labeled f(x), a b, and M at the bottom left. The default value of f(x) is sin(1/x). It is being graphed on the interval [1, 2]. The graph is constructed by plotting f(x) at M = 80 equally spaced x values in [a, b], and then connecting the 80 points (x, f(x)) by straight lines. f(x) field is where the function under consideration is to be entered. The usual syntax rules apply here: the variable has the name x; the carat, ^, is used for exponentiation; multiplication must be denoted by the symbol *; all the functions you would see in a ordinary calculus course sin(x), tan(x), exp(x), ln(x), etc may be used; function arguments must be included within a pair of parentheses. Further details on syntax are given below. The domain is any pair of real numbers a b with a < b. The entry M is assumed to be a positive integer. To start, click on the "graph f(x)" button; this produces the graph of the function. The function is graphed in black. The x-axis, and the two lines from the x-axis to the function at the end points appear in purple. Moving to the right, the Partition with N = text field specifies the number of subintervals in which to divide the interval [a, b]. The default value is 6. The subintervals all have the same length, (b -a)/N. Continuing, if you press the button labeled "right" the applet will display the 6 rectangles whose height is the value of the function at the right hand endpoint of each subinterval. The type, the integer N, and the numerical value of the associated riemann sum are printed in the text area. Analogous actions occur when left, mid-point, and random buttons are pressed. When the trapezoid button is pressed, the trapezoid rule is applied. It may be difficult to see the straight lines connecting the successive function lines. To confirm that straight lines are used f(x) by sin(x), a b by 0 3.14159, press "graph f(x)", reset N to 4, and then press "trapezoid" again. Pressing the "Simpson" button yields an approximation of the "area under the curve" by parabolas. Again, unless N is small, say N = 2 or N = 4, it will be difficult to distinguish the graph of f(x) from the graph oof the approximating polynomial. However, the numerical values of the trapezoidal sums and the simpson sums illustrate the relative accuracy of the simpson algorithm. Incidentally, in Simpson's method, the integer N must be an even integer. If N is odd when you click on the simpson button, the program will let you know. Built-In Functions If you click on "Examples" at the top of the Riemann screen, and then click on Example1 the program will load the function f(x)= abs(x) + sqrt(x) + ln(1+x) + exp(x)+ arcsin(x/3). (It may be necessary to scroll across the text field to see all of them) The main point here is that the program the usual elementary functions (polynomials, the trigonometric functions, the absolute value function, the logarithm to the base e, the exponential function e^ x, and the inverse trigonometric functions) may be used in the program. The functions ln(x) must be defined on a positive domain; sqrt(x) must be defined for x >= 0. Exponents and Negative Domains Click on Examples|Example2 , click on Example2, and then click on "graph f(x)". This graph illustrates, in part, how exponents are handled in the applet. First of all, an exponent must be an integer, a decimal, or a rational number. Functions of x are not allowed. (You can handle, say, x^x, by entering it as exp(x*ln(x))). Second, if the domain consists of positive numbers there are no problems with the evaluation of the functions. If the domain contains negative numbers then two cases can occur: a. If a number to be plotted is real, as in the case x^(2/3) of Example 2, it will be calculated. b. If the expression is imaginary, as in the cases x^(1/2) or x^(0.333) where x is negative, then the action depends on your browser. If (in case (b)) you are using the Netscape browser no graph will appear, but question remarks will appear on the left side of the graphing area. A brief description of the error can then be found by clicking on Netscape's Communicator|Tools|Java Counsel menu entries. If you are using Microsoft's Internet Explorer, all imaginary numbers are graphed as +1. The program will produce error messages when some common errors are made. If you click on Examples|Syntax1 and then click on "graph f(x)" an error message will appear on the graphing screen. It will say that the number of left parenthesis is not equal to the number of right ones. So correct the error (by erasing the extra parenthesis at the end of the definition of f(x)) and click again. You will get another error message: the 2x needs to have a multiplication sign between the 2 and the x. Insert one and click again. Another error: A multiplication sign is needed between a closing and opening parenthesis. Insert one. This time the program works. To continue click on Examples|Syntax2 and then click on "graph f(x)". You will learn that you can enter on two distinct numbers for the domain. Erase the last "3" and click again. You will find you still don't have exactly two numbers. The problem this time is the space between the minus sign and the 1. Erase an click. A graph will appear.
{"url":"http://www.math.ucla.edu/~ronmiech/Java_Applets/Riemann/","timestamp":"2014-04-19T01:47:33Z","content_type":null,"content_length":"7748","record_id":"<urn:uuid:ec855a9f-dad2-4aea-aae7-e8aa915fe728>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/kellyking/answered","timestamp":"2014-04-18T23:28:51Z","content_type":null,"content_length":"102583","record_id":"<urn:uuid:4d3f3223-4461-4d14-945b-75ede09c3a84>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] numpy docs dependency problem in Ubuntu Benjamin Root ben.root@ou.... Fri Feb 11 09:56:41 CST 2011 On Friday, February 11, 2011, Jouni K. Seppänen <jks@iki.fi> wrote: > [Crossposting to matplotlib devel list] > Robert Kern <robert.kern@gmail.com> writes: >> On Thu, Feb 10, 2011 at 11:22, Barry Warsaw <barry@python.org> wrote: >>> Here's the problem: for Ubuntu, we've had to disable the building of >>> the numpy documentation package, because its dependencies violate >>> Ubuntu policy. Numpy is in our "main" archive but the documentation >>> depends on python-matplotlib, which lives in our "universe" >>> archive. Such cross archive dependencies break the build. >>> We can't put python-matplotlib in main because of *its* dependencies. >> As a digression, I think the python-matplotlib dependencies could be >> significantly reduced. For a number of use cases (this is one of them, >> but there are others), you don't need any GUI backend. Independent of >> this issue, it would be great to be able to install python-matplotlib >> in a headless server environment without pulling in all of those GUI >> bits. Looking at the list of the hard dependencies, I don't understand >> why half of them are there. >> http://packages.ubuntu.com/natty/python/python-matplotlib > Would it make sense to split out each interactive backend to its own > Ubuntu package, e.g. python-matplotlib-tk, etc? Each of these would > depend on the relevant toolkit packages, and python-matplotlib would > have a much shorter list of dependencies. > -- > Jouni K. Seppänen > http://www.iki.fi/jks > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion@scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion There are a lot of advantages to this idea, and I wonder if it might make distributions easier and allow fuller control by the user. In particular, kubuntu could default to using the qt backend while regular ubuntu could use gym. However, how practical is this to implement? What does it require from us upstream? Ben Root More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2011-February/054899.html","timestamp":"2014-04-16T07:49:01Z","content_type":null,"content_length":"5656","record_id":"<urn:uuid:e9e252d4-1cf3-40d8-92ad-b0d0b6140325>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Should I switch to a 64-bit OS to compensate for Octave's limitations? (self.numerical) submitted ago by PeoriaJohnson sorry, this has been archived and can no longer be voted on Recent Stanford ML Class "graduate," here. In that course, Octave is used for programming assignments. Now I'd like to use it for some analyses involving large datasets (millions of entries), but I'm running into an error that most googling indicates is due to a limitation of 32-bit Octave(?) Here's what happens when I try to transpose a matrix. octave:23> load("-ascii", "training40k_matrix_correct.m") octave:24> size(training40k_matrix) ans = octave:25> size(training40k_matrix') error: memory exhausted or requested size too large for range of Octave's index type -- trying to return to prompt Is trading out my OS ( Ubuntu 11.04 - GNU/Linux 2.6.38-13-generic-pae i686 ) for a 64-bit version going to solve this? Anybody have experience with that? Is there a simpler alternative? all 3 comments [–]micro_cam0 points1 point2 points ago sorry, this has been archived and can no longer be voted on Well if you just need to transpose a matrix you can easily write a python (or whatever) script to do it from one file to another without keeping the whole thing in memory at once. You will eventually need to do things like this anyways if you keep considering larger and larger datasets, but it makes the code much more complicated and a 64 bit os with lots of ram will hold this off for a while. [–]joelthelion0 points1 point2 points ago sorry, this has been archived and can no longer be voted on
{"url":"http://www.reddit.com/r/numerical/comments/nlecc/should_i_switch_to_a_64bit_os_to_compensate_for/","timestamp":"2014-04-19T02:59:10Z","content_type":null,"content_length":"53889","record_id":"<urn:uuid:81f6b89e-08b2-4bbd-b100-2ce70e948421>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Oak Lawn Algebra Tutor Find an Oak Lawn Algebra Tutor ...I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece. Thus I bring first hand knowledge to your history studies. 41 Subjects: including algebra 2, algebra 1, English, chemistry ...The Math section of the ACT can be difficult for many reasons. I help students by making sure that they get questions that they have "earned" through years of study in high school. I also can teach them things they don't know with just enough detail to allow them to apply the material on the exam, and I can re-teach them material they may have forgotten. 24 Subjects: including algebra 2, algebra 1, calculus, GRE ...Topics included set theory, graph theory, combinatorics. Since then I have worked as a TA for "Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are common in discrete math. Other topics in which... 22 Subjects: including algebra 2, algebra 1, calculus, geometry ...I have been using Microsoft Outlook at my primary email platform for over 5 years in my career. It is the standard email platform that companies use and I am well-versed in the benefits, as well as, disadvantages of using it. I have been using the Microsoft Windows operating system since 1999. 36 Subjects: including algebra 1, English, reading, writing ...Please allow me to help improve their knowledge in these fascinating subjects. Sincerely, LorenI believe I am well qualified to tutor in the field of Genetics. I have a Bachelors degree in Genetics & Development from the University of Illinois (Champaign/Urbana). I have worked several years in medical research labs using Molecular Biology techniques. 10 Subjects: including algebra 1, geometry, biology, elementary (k-6th)
{"url":"http://www.purplemath.com/oak_lawn_algebra_tutors.php","timestamp":"2014-04-18T03:49:41Z","content_type":null,"content_length":"23878","record_id":"<urn:uuid:c96ddee4-d880-49eb-ac43-fd329c4a45ef>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Plausible and genuine extensions of L'Hospital's Rule J. Marshall Ash1 Summary: Plausible and genuine extensions of L'Hospital's Rule J. Marshall Ash1 DePaul University Chicago, IL 60614 Allan Berele2 DePaul University Chicago, IL 60614 Stefan Catoiu DePaul University Chicago, IL 60614 1 A plausible extension Roughly speaking, L'Hospital's Rule says that if f(x)=g(x) is indeterminate and if x is large, then f(x)=g(x) is approximately equal to f0(x)=g0(x). Also, the limit- comparison test says that if an is approximately equal to bn then an converges if and only if bn converges. Combining these two imprecisely stated theorems, we would like to predict the convergence of Source: Ash, J. Marshall - Department of Mathematical Sciences, DePaul University Collections: Mathematics Summary: Plausible and genuine extensions of L'Hospital's Rule J. Marshall Ash1 DePaul University Chicago, IL 60614 Allan Berele2 DePaul University Chicago, IL 60614 Stefan Catoiu DePaul University Chicago, IL 60614 1 A plausible extension Roughly speaking, L'Hospital's Rule says that if f(x)=g(x) is indeterminate and if x is large, then f(x)=g(x) is approximately equal to f0(x)=g0(x). Also, the limit- comparison test says that if an is approximately equal to bn then P an converges if and only if P bn converges. Combining these two imprecisely stated theorems, we would like to predict the convergence of Source: Ash, J. Marshall - Department of Mathematical Sciences, DePaul University
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/1011/0163042.html","timestamp":"2014-04-18T22:00:42Z","content_type":null,"content_length":"7689","record_id":"<urn:uuid:f43ef7e2-2c20-4f80-9bd8-f3a99bedf8b7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof Designer Proof Designer is a Java applet that writes outlines of proofs in elementary set theory, under the guidance of the user. It is designed to help students learn to write proofs. Before you can use Proof Designer, you may have to make some adjustments to the configuration of your computer or browser. The documents listed below will help you set up your computer to use Proof Designer and then learn how Proof Designer works. It may take a minute for Java to start up, and for the Proof Designer applet to be loaded. Once the applet has been loaded, a button labeled “Write a Proof” should appear below this line. If it does not appear, refer to the instructions for setting up your computer to use Proof Designer.
{"url":"http://www.cs.amherst.edu/~djv/pd/pd.html","timestamp":"2014-04-16T04:18:04Z","content_type":null,"content_length":"3007","record_id":"<urn:uuid:20bd052d-ec0c-4974-a623-326149eb6632>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
WHAT IS THE PLOT TRIANGLE FOR oLIVER TWIST. PLZ ANSWER FAST. I NEED IT BY WEDNESDAY. - Homework Help - eNotes.com WHAT IS THE PLOT TRIANGLE FOR oLIVER TWIST. PLZ ANSWER FAST. I NEED IT BY WEDNESDAY. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/what-plot-triangle-oliver-twist-plz-answer-fast-297888","timestamp":"2014-04-18T23:50:14Z","content_type":null,"content_length":"25746","record_id":"<urn:uuid:388189f2-e182-4477-be2e-ed29b7f135c6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Parameter estimation April 15th 2011, 08:58 AM #1 Oct 2009 Parameter estimation Let X_1,X_2,...,X_n be iid Bernoulli(p). Also let T be Binomial(n, p). So I found that they both have the same moment generating function, and I found the mean and variance of T. The question asks: Let n=20 and (x_1+...+x_n)=9. Using the method of moments, find an estimate of theta = log(p/(1-p)). I'm not sure how to proceed from here. Let X_1,X_2,...,X_n be iid Bernoulli(p). Also let T be Binomial(n, p). So I found that they both have the same moment generating function, and I found the mean and variance of T. The question asks: Let n=20 and (x_1+...+x_n)=9. Using the method of moments, find an estimate of theta = log(p/(1-p)). I'm not sure how to proceed from here. With method of moments, the game is to equate the empirical moments with the theoretical moments. Set Xbar = p, then just plug that in to get the estimate of theta. So would the answer be -0.08? use p-hat as 9/20 to estimate p in theta ln (.45/.55) Thanks, I got it! The next part of the question which I'm not sure of is: Obtain a 95% confidence interval for theta based on the MLE of theta. April 15th 2011, 02:10 PM #2 Senior Member Oct 2009 April 15th 2011, 04:07 PM #3 Oct 2009 April 15th 2011, 11:23 PM #4 April 18th 2011, 09:55 AM #5 Oct 2009
{"url":"http://mathhelpforum.com/advanced-statistics/177772-parameter-estimation.html","timestamp":"2014-04-18T01:26:53Z","content_type":null,"content_length":"40084","record_id":"<urn:uuid:3121a271-5949-49e5-93be-77f4390e85a1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix where every subset of rows has maximal rank up vote 3 down vote favorite I am looking for a class of matrices $M(n(m), m, k(m), \phi)$ with the following properties: 1. M is $n \times m$ where $n(m) > m$. 2. Every subset of rows of size $k$ has (maximal) rank $m$. 3. $n(m)$ grows slowly. 4. $m \leq k(m) \leq (1 - \phi) n(m)$ and $k(m)$ is as close as possible to $m$. and preferably, the matrix is binary. Has this been studied before? I appreciate any pointers. I know that a random uniform binary matrix with $n(m) \approx \frac{2}{1-\phi}(m + log 1/\delta)$ and $k(m) = m$ satisfies the properties above with probability $1 - \delta$ (for large enough $m$). Also, any probabilistic suggestions that perform better than the uniform random binary matrix are welcome. (I apologize if my question turns out to be elementary, I also appreciate any help with properly tagging the question.) matrices random-matrices coding-theory 1 I think I'd look for information on so-called MDS codes. (MDS = maximum distance separable.) I do not think these are the same thing, but they should lead you closer to what you want. – Chris Godsil Apr 5 '13 at 14:36 1 Something very similar is happening with so called fountain codes. IIRC they have results of the type: in a carefully designed matrix any $k(m)=(1+\varepsilon)*m$ rows will work with a high probability. Some constructions are known. Not exactly what you want, but may be worth checking out (in coding theory the roles of the rows and columns would be opposite to yours, but I'm sure you'll manage :-) – Jyrki Lahtonen Apr 7 '13 at 20:44 MDS codes would be nice, but there aren't any non-trivial binary ones. Furthermore, then $n(m)$ could never exceed the size of the field (+1). – Jyrki Lahtonen Apr 7 '13 at 20:46 @ChrisGodsil Thanks! I am looking into MDS codes, and while it seems that they are not exactly what I want they are giving pointers to some useful leads.. @JyrkiLahtonen if I am not mistaken, fountain codes are using random matrices. What I refer to in the question is in fact one simplistic form of fountain codes (simplified Luby transform). I am primarily looking for something deterministic. Better fountain codes use different distributions to improve encoding/decoding efficiency.. – aelguindy Apr 8 '13 at 12:12 If the problem with MDS-codes is that there maximum length is $q+1$ ($q=|F|$), and you need to be able to make them longer, then an option is algebraic-geometry codes (aka Goppa codes). If you use 1 a maximal curve of genus $g$, then the resulting codes are "within (the constant) $g$ of being MDS". The maximal attainable length (when $q$ is a square) is $q+1+2g\sqrt{q}$. Several families of such curves are known. But here typically $q$ is a bit larger, $q=16, 64, 256,\ldots$ – Jyrki Lahtonen Apr 9 '13 at 10:33 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged matrices random-matrices coding-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/126621/matrix-where-every-subset-of-rows-has-maximal-rank","timestamp":"2014-04-19T15:13:47Z","content_type":null,"content_length":"54016","record_id":"<urn:uuid:4086fdd0-747d-41d2-a4a0-24d719e56434>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Riemann isometry vs Euclidean bi-Lipschitz mapping up vote 1 down vote favorite Assume that $\gamma$ is a rectifiable Jordan curve in the complex plane of length $2\pi$. Then there exists a Riemann isometry $f$ between $\gamma$ and the unit circle $T$. My question is, does this isometry provides the minimal constant of bi-Lipschitz mappings (w.r.t. euclidean metric) between $\gamma$ and $T$ (provided that the last set is non-empty). I guess that by the "constant" of a bi-Lipschitz mapping, you mean its distortion, right? – Benoît Kloeckner Jun 7 '13 at 13:50 add comment 1 Answer active oldest votes The formulation of the question could be better; I had to guess what could you had in mind and likely I made it wrong. up vote 0 down Imagine a curve which pass very close to itself say at the points $A$ and $B$, but both arcs from $A$ to $B$ are nearly round and both have length near $\pi$. If you map it vote accepted isometrically, the distortion is very bad for $A$ and $B$ and you can decrease distortion by shrinking the image of one of the arcs. add comment Not the answer you're looking for? Browse other questions tagged geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/133066/riemann-isometry-vs-euclidean-bi-lipschitz-mapping","timestamp":"2014-04-20T10:51:11Z","content_type":null,"content_length":"50136","record_id":"<urn:uuid:19667cb9-1fbb-4807-8e76-a589c88fb260>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] The liar and the semantics of set theory Roger Bishop Jones rbj at rbjones.com Fri Sep 20 04:52:16 EDT 2002 On Friday 20 September 2002 1:21 am, Rupert McCallum wrote: > --- Roger Bishop Jones <rbj at rbjones.com> wrote: > > The liar "paradox" was used by Tarski to show that > > arithmetic truth is not definable in arithmetic. > > > > It is natural to suppose that a similar result obtains in > > relation to set theory, viz. that set theoretic truth is not > > definable in set theory. > > However, I am not aware of any argument (using the > > liar or otherwise) which conclusively demonjstrates > > this conjecture. > Tarski's argument may be generalized as follows. > Theorem. (Undefinability of truth). It looks like you didn't read the rest of my message. If I understand you correctly, you are showing at length what I conceded, viz that if the semantics is given by a single interpretation then a liar=like construction will show that truth is not definable. However, my question is about what happens if we give set theory a loose semantics by stipulating a set of more than one "intended interpretation" (e.g. all standard models). Then any sentence which does not have the same truth value in all these interpetations does not have a truth value under this semantics and nor does its negation, so the liar construction fails to show that this method of defining set theoretic truth cannot be expressed in set theory. Roger Jones More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-September/005923.html","timestamp":"2014-04-21T03:07:15Z","content_type":null,"content_length":"4088","record_id":"<urn:uuid:948a96c8-5955-475c-bd15-a6c37c07eead>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Differentiation with physics Actually I just realised that the formulas you gave are applicable in this case, so I'm a bit confused now whether your lecturer wants you to use differentiation or not. I'm tempted to advise you to try both ways, but let's go for the first one first, since you have found those formulas. What you should remember for 2D problems is that you need to consider the X and Y directions separately. So looking at what are the initial velocity u and acceleration a in the X-direction? How about the Y-direction (u , a Then what can you say about the formulas v , v for the velocity ?
{"url":"http://www.physicsforums.com/showthread.php?t=690480","timestamp":"2014-04-18T23:22:29Z","content_type":null,"content_length":"41184","record_id":"<urn:uuid:6b34d1cd-bd97-42f9-8cf9-fb89690c3094>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Wave Equation after Reflection Hello all, 1. The problem statement, all variables and given/known data We can represent a mechanical transverse wave by Y=Asin(kx-wt+∅). Now imagine this wave travelling (towards right as velocity is positive) and meeting up with two cases Case 1) Rigid wall. Case 2) Free end. The way gets reflected completely( ignoring transmission or any other losses). Now my question is what is the new equation of wave.? 3. The attempt at a solution In all text books I have studied with, (Resnick Halliday Krane being one of them) for equation of wave as Y=Asin(kx-wt+∅), reflected wave (from free end) is written as Y=Asin(wt+kx+∅) and (for rigid we add a phase difference of pi) to make it y=Asin(wt+kx+∅+pi). From what I realise this is simply done to make the velocity negative. Now my question is we could have made the velocity negative even by writing the equation as y=Asin(-kx-wt+∅) (for free end) and y=Asin(-kx-wt+∅+pi) for rigid end. What prompts us to use the earlier mentioned equations more? whose coefficient has to be negated to form the reflected wave and why is it so? Also we could represent the original wave by Y=Asin(wt-kx+∅2) {∅2 is a different phase constant} In this case what will be the equation of reflected wave(from free end)? Will it be y=Asin(wt+kx+∅2) or y =Asin(-wt-kx-∅2) Thanks a ton
{"url":"http://www.physicsforums.com/showthread.php?p=3811811","timestamp":"2014-04-17T00:56:57Z","content_type":null,"content_length":"26723","record_id":"<urn:uuid:c49fd805-38ad-4cee-8b1d-ae3d8f2f1007>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Composition as a Sets of Ordered Pairs September 30th 2009, 07:49 PM #1 Sep 2009 Composition as a Sets of Ordered Pairs I've come across a challenging problem in my test review: Given f = {(a, b), (b, a), (c, b)}, a function from X = {a, b, c} to X: (a) Write f o f and f o f o f as sets of ordered pairs. (b) Define f^n = f o f o ... o f to be the n-fold composition of f with itself. Write f^9 and f^623 as sets of ordered pairs. I have no idea how to approach this. I know that f o f and f o f o f are f(f) and f(f(f)), but I can't see how I can compose that. Where in f can I insert f? Part (b) also looks daunting. Any help would be appreciated! Last edited by john192; September 30th 2009 at 08:15 PM. I think I've figured out (a): f(a) = b; f(b) = a; f(c) = b, so f(f) = {(b, a), (a, b), (b, a)}, and f(f(f)) = {(a, b), (b, a), (c, a)} But I'd love for someone to confirm this. I'm still having trouble with part (b) also. Thanks for any help! You need ti rethink the whole problem. Here is how it works. $f \circ f(a) = f(f(a)) = f(b) = a\, \Rightarrow \,(a,a) \in f \circ f$ $\begin{gathered}<br /> f \circ f = \left\{ {(a,a),(b,b),(c,a)} \right\} \hfill \\<br /> f \circ f \circ f = \left\{ {(a,b),(b,a),(c,b)} \right\} \hfill \\<br /> f \circ f \circ f \circ f = \left \{ {(a,a),(b,b),(c,a)} \right\} \hfill \\ <br /> \end{gathered}$ September 30th 2009, 08:27 PM #2 Sep 2009 October 1st 2009, 02:37 AM #3
{"url":"http://mathhelpforum.com/discrete-math/105350-composition-sets-ordered-pairs.html","timestamp":"2014-04-20T17:17:01Z","content_type":null,"content_length":"36323","record_id":"<urn:uuid:42331a45-f8f0-4d92-8a48-c691d3e66e8c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: liter ›› Measurement unit: liter Full name: liter Plural form: liters Symbol: L Alternate spelling: litres Category type: volume Scale factor: 0.001 ›› SI unit: cubic meter The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 1000 liter. Valid units must be of the volume type. You can use this form to select from known units: I'm feeling lucky, show me some random units ›› Definition: Litre The litre (spelled liter in American English and German) is a metric unit of volume. The litre is not an SI unit, but (along with units such as hours and days) is listed as one of the "units outside the SI that are accepted for use with the SI." The SI unit of volume is the cubic metre (m³). ›› Sample conversions: liter liter to kiloliter liter to milliliter liter to gallon [US, liquid] liter to cubic millimetre liter to cubic foot liter to yard liter to barrel [UK, wine] liter to litro liter to cubic cubit [ancient egypt] liter to trillion cubic metre
{"url":"http://www.convertunits.com/info/liter","timestamp":"2014-04-16T13:29:03Z","content_type":null,"content_length":"23907","record_id":"<urn:uuid:e93d0de7-ae92-4382-862c-628ed4e71158>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple example:How to use foreach and doSNOW packages for parallel computation. February 6, 2011 By teramonagi I checked whether this example was run collectly or not in Windows XP(32bit) only ! In R language, the members at Revolution R provide foreach and doSNOW packages for parallel computation. these packages allow us to compute things in parallel. So, we start to install these packages. In foreach package, you can write the codes which are run not only in parallel but also in sequence. And, these are as following. Next, we make clusters by doSNOW package for the purpose of parallel computation. Because I have dual core machine, I specify two as the number of clusters. Now, We are ready to compute things in parallel. It is easy for us to do that by foreach package. You only have to change "%do%" into "%dopar%". I compared the performance of parallel comutation to single computation as following. (I'm sorry that some terms are written in Japanese!) You can understand the result of parallel computation is about twice as fast as single computation do !!! Reference(including PDF) daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/simple-examplehow-to-use-foreach-and-dosnow-packages-for-parallel-computation-2/","timestamp":"2014-04-18T03:30:51Z","content_type":null,"content_length":"52009","record_id":"<urn:uuid:f8c2ab11-efa0-4aaa-a6db-b4d2f078ab26>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate the Density of Air at Different Pressures and Temperatures Using the Ideal Gas Law • Introduction The density of air varies significantly with both pressure and temperature, so how do you find the density of air at different pressures and temperatures as needed for applications such as drag force calculations, frictional loss for flow in pipes or ducts, or air velocity determination with a pitot tube. The ideal gas equation and the ideal gas constant, which express the ideal gas law provide a means for calculating air density for different pressures and temperatures. Although this article is primarily about determining the density of air, the density of other gases at known temperature and gas pressure can also be estimated using the ideal gas law in the same way. • The Ideal Gas Law Equation You may be familiar with the ideal gas law in the following commonly used form: PV = nRT. This equation gives the relationship among the pressure, P, volume, V, and temperature, T, of n moles of an ideal gas, using an ideal gas constant, R. We can introduce the density of the gas into the equation by making use of the fact that molecular weight (MW) has units of mass/mole, and thus that n = m/MW, and that the ideal gas law can be written as: PV = (m/MW)RT. Noting that m/V is density, ρ, the equation can be written as P(MW) = (m/V)RT = ρRT. Solving for density gives the following equation for the density of an ideal gas in terms of its MW, pressure and temperature. ρ = (MW)P/RT with commonly used U.S. units as follows: ρ = density of the gas in slugs/ft^3, MW = molecular weight of the gas in slugs/slugmole (or kg/kgmole, etc.) (NOTE: MW of air = 29), P = absolute gas pressure in psia (NOTE: Absolute pressure equals pressure measured by a gauge plus atmospheric pressure.), T = absolute temperature of the gas in ^oR (NOTE: ^oR = ^oF + 459.67) R = ideal gas constant = 345.23 psia-ft^3/slugmole-^oR. If we can treat air as an ideal gas, then the ideal gas law in this form can be used to calculate the density of air at different pressures and temperatures. An Excel spreadsheet works well for ideal gas law/gas density calculations. For a downloadable Excel template to calculate gas density with the ideal gas law, see the article, "Excel Templates for Venturi and Orifice Flow Meter Calculations." • But, Can I Use the Ideal Gas Law to Calculate the Density of Air? This is a good question to ask, because the air in a pipe friction loss, drag force, or pitot tube calculation, is indeed a real gas, not an ideal gas. Fortunately, however, many real gases behave almost exactly like an ideal gas over a wide range of temperatures and pressures. The ideal gas law doesn't work well for very low temperatures (approaching the critical temperature of the gas) or very high gas pressures (approaching the critical pressure of the gas). For many practical, real situations, however, the ideal gas law gives quite accurate values for the density of air (and many other gases) at different pressures and temperatures. • SI Units for the Ideal Gas Law The units on the ideal gas constant, R, can be converted to any convenient set of units, making the ideal gas law useable with any consistent set of units. For SI units the ideal gas law parameters are as follows: ρ = density in kg/m^3, P = absolute gas pressure in pascals (N/m^2), T = absolute temperature in ^oK (NOTE: ^oK = ^oC + 273.15) R = ideal gas constant = 8314.5 Joules/kgmole-K • Example: Calculation of the Density of Air at Different Pressures and Temperatures Example 1: Calculate the density of air at 75^oF and a pressure of 14.9 psia. Solution: T = 75 + 459.67^oR = 534.25^oR; substituting values into the ideal gas law: ρ = (MW)P/RT = (29)(14.9)/[(345.23)(534.25)] (slugs/slugmole)(psia)/[(psia-ft^3/slugmole-^oR)(^oR)] = 0.002343 slugs/ft^3 Example 2: Calculate the density of air at 45^oC and a pressure of 100,000 pascals absolute Solution: T = 45 + 273.15 K = 318.15 K; substituting values into the ideal gas law: p = (MW)P/RT = (29)(100,000)/(8314.5)(318.15) (kg/kgmole)(N/m^2)/[(joules/kgmole-K)(K)] = 1.10 kg/m^3 • References For Further Information: 1. Bengtson, Harlan H., Flow Measurement in Pipes and Ducts, An online course for Professional Engineers, http://www.online-pdh.com/engcourses/file.php/90/ 2. Munson, B. R., Young, D. F., & Okiishi, T. H., Fundamentals of Fluid Mechanics, 4^th Ed., New York: John Wiley and Sons, Inc, 2002. 3. Applied Thermodynamics ebook, http://www.taftan.com/thermodynamics/
{"url":"http://www.brighthubengineering.com/hydraulics-civil-engineering/58510-find-the-density-of-air-with-the-ideal-gas-law/","timestamp":"2014-04-18T02:58:33Z","content_type":null,"content_length":"47119","record_id":"<urn:uuid:0595dbed-b49c-4224-8218-aa3d36661725>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
oint Presentations Presentation Summary : t-Test Comparing Means From Two Sets of Data Steps For Comparing Groups Assumptions of t-Test Dependent variables are interval or ratio. The population from which ... Source : http://www.uta.edu/faculty/ricard/stats/Lectures/Chapter%208/Chapter%208%20t-Test.ppt The t - test - University of South Florida PPT Presentation Summary : The t-test Inferences about Population Means Questions How are the distributions of z and t related? Given that construct a rejection region. Source : http://luna.cas.usf.edu/%7Embrannic/files/regression/7%20The%20t-test.ppt Presentation Summary : t-test Testing Inferences about Population Means Learning Objectives Compute by hand and interpret Single sample t Independent samples t Dependent samples t Use SPSS ... Source : http://luna.cas.usf.edu/%7Embrannic/files/resmeth/labs/lab5_ttest.ppt T-Tests in SAS - School of Public Health PPT Presentation Summary : T-Tests in SAS One-sample T-Test Matched Pairs T-Test Two-sample T-Test Introduction Suppose you want to answer the following questions: Does a new headache medicine ... Source : http://www.biostat.umn.edu/~susant/Lab6415/Lab3.ppt Two Sample t-Test Practice – 1 - University of Texas at Austin PPT Presentation Summary : Independent Samples T-Test Practice Problem 1a This question asks whether the statement “Survey respondents who were male had completed more years of school than ... Source : http://www.utexas.edu/courses/schwab/sw318_spring_2004/SolvingProblems/Class22_TwoSamplesTTest.ppt t-test - University of North Texas PPT Presentation Summary : t-test Mechanics Z-score If we know the population mean and standard deviation, for any value of X we can compute a z-score Z-score tells us how far above or below ... Source : http://www.unt.edu/rss/class/mike/5700/ttestMechanics.ppt Presentation Summary : Chapter 9 Hypothesis Testing Using the Two Sample t-test Related Samples Two-sample t-test Independent vs. Related Samples If we have two samples and we want to know ... Source : http://faculty.txwes.edu/jbrown06/course9/documents/9-2Relatedt-tests.ppt Independent Samples t-Test (or 2-Sample t-Test) PPT Presentation Summary : Independent Samples t-Test (or 2-Sample t-Test) Advanced Research Methods in Psychology - lecture - Matthew Rockloff When to use the independent samples t-test The ... Source : http://psychologyaustralia.homestead.com/index_files/04a_Independent_Sample_t-Test.ppt Presentation Summary : The t Test for Dependent Samples You do a t test for dependent samples the same way you do a t test for a single sample, except that: You use difference scores. Source : http://www.unm.edu/~marley/statppt/fall06/002/day12.ppt T-Test (difference of means test) - CSU Bakersfield PPT Presentation Summary : T-Test (difference of means test) T-Test = used to compare means between two groups. Level of measurement: DV (Interval/Ratio) IV (Nominal—groups) Source : http://www.csub.edu/~pjennings/stat.400/ttest.ppt Pooled Variance t Test - West Virginia University PPT Presentation Summary : Pooled Variance t Test Tests means of 2 independent populations having equal variances Parametric test procedure Assumptions Both populations are normally distributed Source : http://www.be.wvu.edu/divmim/mgmt/glblakely/homepage/BADM621/StatSlides5.ppt Frequency Distributions - University of Texas at Austin PPT Presentation Summary : One-sample T-Test of a Population Mean Key Points about Statistical Test Sample Homework Problem Solving the Problem with SPSS Logic for One-sample T-Test of Source : http://www.utexas.edu/courses/schwab/sw388r6_fall_2006/SolvingProblems/Homework%20Problems%20-%20One-sample%20T-Test.ppt The Paired t-Test (A.K.A. Dependent Samples t-Test, or t-Test ... PPT Presentation Summary : The Paired t-Test (A.K.A. Dependent Samples t-Test, or t-Test for Correlated Groups) Advanced Research Methods in Psychology - lecture – Matthew Rockloff Source : http://psychologyaustralia.homestead.com/index_files/07b_Paired_t-Test.ppt Presentation Summary : The one sample t-test November 14, 2006 From Z to t… In a Z test, you compare your sample to a known population, with a known mean and standard deviation. Source : http://www.unm.edu/~marley/statppt/fall06/002/day11.ppt Presentation Summary : Chapter 8 Hypothesis Testing Using the One-sample t-test A Step Back Assumptions of the one-sample z-test All hypothesis testing procedures are going to require that ... Source : http://faculty.txwes.edu/jbrown06/course9/documents/8Single-samplet-test.ppt Presentation Summary : Dependent t-Test CJ 526 Statistical Analysis in Criminal Justice Overview Dependent Samples Repeated-Measures When to Use a Dependent t-Test Two Dependent Samples ... Source : http://cstl-hhs.semo.edu/cveneziano/Dependent-t-Test.ppt T-test - University of Texas at El Paso PPT Presentation Summary : EDRS Educational Research & Statistics Most common and popular statistical test when comparing TWO sample means. T-tests, though used often with means, can be used on ... Source : http://utminers.utep.edu/mtcortez/downloads/Stats/t-test%20edrs%205305%20presentation.ppt Z-Tests, T-Tests, Correlations - University of Texas at El Paso PPT Presentation Summary : T-Tests and Chi2 Does your sample data reflect the population from which it is drawn from? Single Group Z and T-Tests The basic goal of these simple tests is to show ... Source : http://utminers.utep.edu/crboehmer/T-Tests%20&%20Chi2.ppt Two sample t - test - Welcome! | MGH Biostatistics Center PPT Presentation Summary : Title: Two sample t-test Author: bhealy Last modified by: bhealy Created Date: 4/24/2006 5:23:51 PM Document presentation format: On-screen Show Company Source : http://hedwig.mgh.harvard.edu/biostatistics/sites/default/files/public/bhealy_tutorials/lecture9twosamplettest.ppt Presentation Summary : Overview Two paired samples: Within-Subject Designs-Hypothesis test-Confidence Interval-Effect Size Two independent samples: Between-Subject Designs Source : http://www.columbia.edu/cu/psychology/courses/S1610q/lectures/08TwoSampleTests.ppt 1-Sample t-test - Illinois State University PPT Presentation Summary : 1-Sample t-test Mon, Apr 5th T-test purpose Z test requires that you know from pop Use a t-test when you don’t know the population standard deviation. Source : http://psychology.illinoisstate.edu/ktschne/psy138/Onesamp_t.ppt Presentation Summary : t-Test t-Test Two groups of random samples from sample population prior to experiment: group 1 = group 2 any differences can be attributed to: sampling error ... Source : http://iris.nyit.edu/~pdouris/ResearchI/G-T-Test-10.ppt Presentation Summary : We will therefore read the t-test information for an equal variance independent t-test. The most important piece of information is the two-tailed p-value. Source : http://www.gifted.uconn.edu/siegle/research/t-test/ReadingT-testwithExcel.pps If you find powerpoint presentation a copyright It is important to understand and respect the copyright rules of the author. Please do not download if you find presentation copyright. If you find a presentation that is using one of your presentation without permission, contact us immidiately at
{"url":"http://www.xpowerpoint.com/ppt/t-test.html","timestamp":"2014-04-17T15:32:16Z","content_type":null,"content_length":"22037","record_id":"<urn:uuid:4e978de2-4ce8-4208-86ed-5b5462d6c6f8>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
LinearOperators[FactoredAnnihilator] - construct completely factorable annihilator Calling Sequence FactoredAnnihilator(expr, x, case) expr - an algebraic expression x - the name of the independent variable case - a parameter indicating the case of the equation ('differential' or 'shift') • Given an algebraic expression expr, the LinearOperators[FactoredAnnihilator] function returns a completely factored Ore operator that is an annihilator for expr. That is, applying this operator to expr yields zero. If such an operator does not exist, the function returns . • A completely factored Ore operator is an operator that can be factored into a product of factors of degree at most one. • A completely factored Ore operator is represented by a structure that consists of the keyword FactoredOrePoly and a sequence of lists. Each list consists of two elements and describes a first degree factor. The first element provides the zero degree coefficient and the second element provides the first degree coefficient. For example, in the differential case with a differential operator D, FactoredOrePoly([-1, x], [x, 0], [4, x^2], [0, 1]) describes the operator . • There are routines in the package that convert between Ore operators and the corresponding Maple expressions. See LinearOperators[converters]. • To get the completely factorable annihilator, the expression expr must be a d'Alembertian term. The main property of a d'Alembertian term is that it is annihilated by a linear operator that can be written as a composition of operators of the first degree. The set of d'Alembertian terms has a ring structure. The package recognizes some basic d'Alembertian terms and their ring-operation closure terms. The result of the substitution of a rational term for the independent variable in the d'Alembertian term is also a d'Alembertian term. See Also LinearOperators, LinearOperators[Apply], LinearOperators[converters] Abramov, S. A., and Zima, E. V. "Minimal Completely Factorable Annihilators." In Proceedings of ISSAC '97, pp. 290-297. Edited by Wolfgang Kuchlin. New York: ACM Press, 1997. Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=LinearOperators/FactoredAnnihilator","timestamp":"2014-04-18T00:22:34Z","content_type":null,"content_length":"96414","record_id":"<urn:uuid:b6d7ef91-d230-45c8-91ff-335b28d44167>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
The elementary geometry of a triangular world with hexagonal circles Victor Pambuccian Department of Integrative Studies, Arizona State University - West Campus, Phoenix, AZ 85069-7100, U.S.A. e-mail: pamb@math.west.asu.edu Abstract: We provide a collinearity based elementary axiomatics of optimal quantifier complexity $\forall\exists\forall\exists$ for the geometry inside a triangle and reprove that collinearity cannot be defined in terms of segment congruence, the metric being Hilbert's projective metric. Keywords: Hilbert geometry inside a triangle, segment congruence, collinearity, quantifier complexity Classification (MSC2000): 51F20, 51A30, 51A45, 51G05 Full text of the article: Electronic version published on: 26 Feb 2008. This page was last modified: 28 Jan 2013. © 2008 Heldermann Verlag © 2008–2013 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/BAG/vol.49/no.1/11.html","timestamp":"2014-04-20T03:25:09Z","content_type":null,"content_length":"3977","record_id":"<urn:uuid:d99c8e5a-9985-4e75-82d1-2fd24164bc63>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
An isomorphism from a set onto itself. An automorphism group of a group G is the group formed by the automorphisms of G (bijections from G to itself which preserve the multiplication). Similarly, one can consider the automorphism groups of other structures such as rings and graphs, by looking at bijections which preserve their mathematical structure. Related category
{"url":"http://www.daviddarling.info/encyclopedia/A/automorphism.html","timestamp":"2014-04-19T15:24:12Z","content_type":null,"content_length":"5657","record_id":"<urn:uuid:10efe509-778b-47a1-bb17-b9483a040768>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Question concerning curl for finding a conservative force field Hello all, I understand the fact that the principles LaTeX Code: F= \\nabla \\phi . LaTeX Code: \\nabla \\times F = 0 . must apply in order for a force field to be conservative however what i dont get is why showing: LaTeX Code: f_y= g_x, f_z= h_x, g_z= h_y where subscripts are what you are taking the derivative with respect to. is a means to prove the above laws are in effect. i assume it has to do with the fact that subtracting the partial derivatives will give you zero. thank you very much for any help
{"url":"http://www.physicsforums.com/showpost.php?p=2369950&postcount=1","timestamp":"2014-04-20T21:21:13Z","content_type":null,"content_length":"9031","record_id":"<urn:uuid:8409efc3-9057-4454-a640-da8772e64afe>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
36-402, Undergraduate Advanced Data Analysis Cosma Shalizi 36-402, Undergraduate Advanced Data Analysis Spring 2012 Tuesdays and Thursdays, 10:30--11:50 Porter Hall 100 This is the page for the 2012 class. You are probably looking for the 2013 class. The goal of this class is to train students in using statistical models to analyze data — as data summaries, as predictive instruments, and as tools for scientific inference. We will build on the theory and applications of the linear model, introduced in 36-401, extending it to more general functional forms, and more general kinds of data, emphasizing the computation-intensive methods introduced since the 1980s. After taking the class, when you're faced with a new data-analysis problem, you should be able to (1) select appropriate methods, (2) use statistical software to implement them, (3) critically evaluate the resulting statistical models, and (4) communicate the results of their analyses to collaborators and to non-statisticians. Graduate students from other departments wishing to take this course should register for it under the number "36-608". 36-401, or consent of the instructor. The latter is only granted under very unusual circumstances. Professor Cosma Shalizi cshalizi [at] cmu.edu 229 C Baker Hall Teaching assistants Ms. Stefa Etchegaray Mr. Mingyu Tang Mr. Zachary Kurtz Mr. Cong Lu Topics, Notes, Readings Model evaluation: statistical inference, prediction, and scientific inference; in-sample and out-of-sample errors, generalization and over-fitting, cross-validation; evaluating by simulating; the bootstrap; penalized fitting; mis-specification checks Yet More Linear Regression: what is regression, really?; review of ordinary linear regression and its limits; extensions Smoothing: kernel smoothing, including local polynomial regression; splines; additive models; kernel density estimation Generalized linear and additive models: logistic regression; generalized linear models; generalized additive models. Latent variables and structured data: principal components; factor analysis and latent variables; latent cluster/mixture models; graphical models in general Causality: graphical causal models; identification of causal effects from observations; estimation of causal effects; discovering causal structure Dependent data: Markov models for time series without latent variables; hidden Markov models for time series with latent variables; longitudinal, spatial and network data See the end for the current lecture schedule, subject to revision. Lecture notes will be linked there, as available. Course Mechanics Homework will be 60% of the grade, two midterms 10% each, and the final 20%. The homework will give you practice in using the techniques you are learning to analyze data, and to interpret the analyses. There will be twelve weekly homework assignments, nearly one every week; they will all be due on Tuesdays at the beginning of class, through Coursekit, and will all count equally, totaling 60% of your grade. The lowest three homework grades will be dropped; consequently, no late homework will be accepted for any reason. Communicating your results to others is as important as getting good results in the first place. Every homework assignment will require you to write about that week's data analysis and what you learned from it. This portion of the assignment will be graded, along with the other questions, but failure to do the writing (or to show signs of a serious attempt at it) will result in an automatic zero for that assignment. As always, raw computer output and R code is not acceptable, but should be put in an appendix to each assignment. Homework may be submitted either as a PDF (preferred) or as a plain text file (.txt). If you prepare your homework in Word, be sure to submit a PDF file; .doc, .docx, etc., files will not be graded. Unlike PDF or plain text, Word files do not display consistently across different machines, different versions of the program on the same machine, etc., so not using them eliminates any doubt that what we grade differs from what you think you wrote. Word files are also much more of a security hole than PDF or (especially) plain text. Finally, it is obnoxious to force people to buy commercial, closed-source software just to read what you write. (It would be obnoxious even if Microsoft paid you for marketing its wares that way, but it doesn't.) There will be two take-home mid-term exams (10% each), due at 10:30 am on March 6th and April 17th. You will have one week to work on each midterm. There will be no homework in those weeks. There will also be a take-home final exam (20%), due at 10:30 am on May 15, which you will have two weeks to do. Exams must also be submitted through Coursekit, under the same rules as homework. Quality Control To help control the quality of the grading, every week (after the first week of classes), six students will be selected at random, and will meet with the professor for ten minutes each, to explain their work and to answer questions about it. Grades on that week's assignment (possibly including the take-home exams) will be revised (up or down) by the professor in light of performance during these sessions. You may be selected on multiple weeks, if that's how the random numbers come up. Office Hours Prof. Shalizi will hold office hours Mondays, 1:30--3:30 pm, in Baker Hall 229A, or by appointment. TA office hours will be Thursdays, 3:30--4:30 pm in Porter Hall A20A. If you want help with computing, please bring your laptop. We will be trying a replacement for Blackboard called "Coursekit" for announcements, turning in homework, grades, and a discussion forum. An e-mail with an invitation to the system was sent to your Andrew address; here is our Coursekit website. Assignments and notes will be posted here, with links on Coursekit; solutions will be distributed as hard-copies, in class. Julian Faraway, Extending the Linear Model with R (Chapman Hall/CRC Press, 2006, ISBN 978-1-58488-424-8), and Paul Teetor, The R Cookbook (O'Reilly Media, 2011, ISBN 978-0-596-80915-7) will both be required. (Faraway's page on the book, with help and errata.) Venables and Ripley's Modern Applied Statistics with S (Springer, 2003; ISBN 9780387954578) will be optional. The campus bookstore should have copies of all of these. In addition to the textbooks, you are expected to read the notes. They contain valuable information which goes beyond what is in the texts, which you will need to understand to do the assignments and the exams. Collaboration, Cheating and Plagiarism Feel free to discuss all aspects of the course with one another, including homework and exams. However, the work you hand in must be your own. You must not copy mathematical derivations, computer output and input, or written descriptions from anyone or anywhere else, without reporting the source within your work. Unacknowledged copying will lead at the very least to an automatic zero on that assignment, and possibly severe disciplinary action. Please read the CMU Policy on Cheating and Plagiarism, and don't plagiarize. Physically Disabled and Learning Disabled Students The Office of Equal Opportunity Services provides support services for both physically disabled and learning disabled students. For individualized academic adjustment based on a documented disability, contact Equal Opportunity Services at eos [at] andrew.cmu.edu or (412) 268-2012. R is a free, open-source software package/programming language for statistical computing. You should have begun to learn it in 36-401 (if not before), and this class presumes that you have. Many of the assignments will require you to use it. You can expect no assistance from the instructors with any other programming language or statistical software. If you are not able to use R, or do not have ready, reliable access to a computer on which you can do so, let me know at once. Here are some resources for learning R: • The official intro, "An Introduction to R", available online in HTML and PDF • John Verzani, "simpleR", in PDF • Quick-R. This is primarily aimed at those who already know a commercial statistics package like SAS, SPSS or Stata, but it's very clear and well-organized, and others may find it useful as well. • Patrick Burns, The R Inferno. "If you are using R and you think you're in hell, this is a map for you." • Thomas Lumley, "R Fundamentals and Programming Techniques" (large PDF) • Paul Teetor, The R Cookbook, explains how to use R to do many, many common tasks. (It's like the inverse to R's help: "What command does X?", instead of "What does command Y do?"). It is one of the required texts, and is available at the campus bookstore. • There are now many books about R. Some recommendable ones: Even if you know how to do some basic coding (or more), you should read the page of Minimal Advice on Programming. Some handouts on stuff all of you should already know, but where evidently some of you could use refreshers: This was the schedule for the 2012 class. You are probably looking for the 2013 class. The complete notes (large PDF) January 17 (Tuesday): Lecture 1, Introduction to the class Statistics is the science which studies methods for learning from imperfect data. Regression is a statistical model of functional relationships between variables. Getting relationships right means being able to predict well. The least-squares optimal prediction is the expectation value; the conditional expectation function is the regression function. The regression function must be estimated from data; the bias-variance trade-off controls this estimation. Ordinary least squares revisited as a smoothing method. Other linear smoothers: nearest-neighbor averaging, kernel-weighted averaging. Reading: Notes, chapter 1; Faraway, chapter 1 (especially up to p. 17) Homework 1: assignment, data January 19 (Thursday): Lecture 2, The truth about linear regression Using Taylor's theorem to justify linear regression locally. Collinearity. Consistency of ordinary least squares estimates under weak conditions. Linear regression coefficients will change with the distribution of the input variables: examples. Why R^2 is usually a distraction. Linear regression coefficients will change with the distribution of unobserved variables (omitted variable effects). Errors in variables. Transformations of inputs and of outputs. Utility of probabilistic assumptions; the importance of looking at the residuals. What "controlled for in a linear regression" really means. Reading: Notes, chapter 2 (R); Faraway, chapter 1 January 24 (Tuesday): Lecture 3, Evaluation: Error and inference Goals of statistical analysis: summaries, prediction, scientific inference. Evaluating predictions: in-sample error, generalization error; over-fitting. Cross-validation for estimating generalization error and for model selection. Justifying model-based inferences. Reading: Notes, chapter 3 (R) Homework 1 due: solutions Homework 2: assignment, R, penn-select.csv data file January 26 (Thursday): Lecture 4, Smoothing methods in regression The bias-variance trade-off tells us how much we should smooth. Adapting to unknown roughness with cross-validation; detailed examples. How quickly does kernel smoothing converge on the truth? Using kernel regression with multiple inputs. Using smoothing to automatically discover interactions. Plots to help interpret multivariate smoothing results. Average predictive comparisons. Reading: Notes, chapter 4 (R); Faraway, section 11.1 Optional readings: Hayfield and Racine, "Nonparametric Econometrics: The np Package"; Gelman and Pardoe, "Average Predictive Comparisons for Models with Nonlinearity, Interactions, and Variance Components" [PDF] January 31 (Tuesday): Lecture 5, The Bootstrap Quantifying uncertainty by looking at sampling distributions. The bootstrap principle: sampling distributions under a good estimate of the truth are close to the true sampling distributions. Parametric bootstrapping. Non-parametric bootstrapping. Many examples. When does the bootstrap fail? Reading: Notes, chapter 5 (R for figures and examples; pareto.R; wealth.dat) Homework 2 due Homework 3 assigned: assignment, nampd.csv data set February 2 (Thursday): Lecture 6, Heteroskedasticity, weighted least squares, and variance estimation Weighted least squares estimates. Heteroskedasticity and the problems it causes for inference. How weighted least squares gets around the problems of heteroskedasticity, if we know the variance function. Estimating the variance function from regression residuals. An iterative method for estimating the regression function and the variance function together. Locally constant and locally linear modeling. Lowess. Reading: Notes, chapter 6; Faraway, section 11.3 February 7 (Tuesday): Lecture 7, Splines Kernel regression controls the amount of smoothing indirectly by bandwidth; why not control the irregularity of the smoothed curve directly? The spline smoothing problem is a penalized least squares problem: minimize mean squared error, plus a penalty term proportional to average curvature of the function over space. The solution is always a continuous piecewise cubic polynomial, with continuous first and second derivatives. Altering the strength of the penalty moves along a bias-variance trade-off, from pure OLS at one extreme to pure interpolation at the other; changing the strength of the penalty is equivalent to minimizing the mean squared error under a constraint on the average curvature. To ensure consistency, the penalty/constraint should weaken as the data grows; the appropriate size is selected by cross-validation. An example with the data, including confidence bands. Writing splines as basis functions, and fitting as least squares on transformations of the data, plus a regularization term. A brief look at splines in multiple dimensions. Splines versus kernel regression. Reading: Notes, chapter 7; Faraway, section 11.2 Homework 3 due Homework 4: Assignment February 9 (Thursday): Lecture 8, Additive models The curse of dimensionality limits the usefulness of fully non-parametric regression in problems with many variables: bias remains under control, but variance grows rapidly with dimensionality. Parametric models do not have this problem, but have bias and do not let us discover anything about the true function. Structured or constrained non-parametric regression compromises, by adding some bias so as to reduce variance. Additive models are an example, where each input variable has a "partial response function", which add together to get the total regression function; the partial response functions are unconstrained. This generalizes linear models but still evades the curse of dimensionality. Fitting additive models is done iteratively, starting with some initial guess about each partial response function and then doing one-dimensional smoothing, so that the guesses correct each other until a self-consistent solution is reached. Examples in R using the California house-price data. Conclusion: there is hardly ever any reason to prefer linear models to additive ones, and the continued thoughtless use of linear regression is a scandal. Reading: Notes, chapter 8; Faraway, chapter 12 February 14 (Tuesday): Lecture 9, Writing R Code (By popular demand.) R programs are built around functions: pieces of code that take inputs or arguments, do calculations on them, and give back outputs or return values. The most basic use of a function is to encapsulate something we've done in the terminal, so we can repeat it, or make it more flexible. To assure ourselves that the function does what we want it to do, we subject it to sanity-checks, or "write tests". To make functions more flexible, we use control structures, so that the calculation done, and not just the result, depends on the argument. R functions can call other functions; this lets us break complex problems into simpler steps, passing partial results between functions. Programs inevitably have bugs: debugging is the cycle of figuring out what the bug is, finding where it is in your code, and fixing it. Good programming habits make debugging easier, as do some tricks. Avoiding iteration. Re-writing code to avoid mistakes and confusion, to be clearer, and to be more flexible. Reading: Notes, chapter 9 Optional reading: Slides from 36-350, introduction to statistical computing, especially through lecture 15. Homework 4 due Homework 5: assignment, R February 16 (Thursday): Lecture 10, Testing Regression Specifications Non-parametric smoothers can be used to test parametric models. Forms of tests: differences in in-sample performance; differences in generalization performance; whether the parametric model's residuals have expectation zero everywhere. Constructing a test statistic based on in-sample performance. Using bootstrapping from the parametric model to find the null distribution of the test statistic. An example where the parametric model is correctly specified, and one where it is not. Cautions on the interpretation of goodness-of-fit tests. Why use parametric models at all? Answers: speed of convergence when correctly specified; and the scientific interpretation of parameters, if the model actually comes from a scientific theory. Mis-specified parametric models can predict better, at small sample sizes, than either correctly-specified parametric models or non-parametric smoothers, because of their favorable bias-variance characteristics; an example. Reading: Notes, chapter 10 February 21 (Tuesday): Lecture 11, More about Hypothesis Testing The logic of hypothesis testing: significance, power, the will to believe, and the (shadow) price of power. Severe tests of hypotheses: severity of rejection vs. severity of acceptance. Common abuses. Confidence sets as the "dual" to hypothesis tests. Crucial role of sampling distributions. Examples, right and wrong. Reading: Notes, chapter 11 Homework 5 due Homework 6: assignment, strikes.csv data set February 23 (Thursday): Lecture 12, Logistic regression Modeling conditional probabilities; using regression to model probabilities; transforming probabilities to work better with regression; the logistic regression model; maximum likelihood; numerical maximum likelihood by Newton's method and by iteratively re-weighted least squares; comparing logistic regression to logistic-additive models. Reading: Notes, chapter 12; Faraway, chapter 2 (omitting sections 2.11 and 2.12) February 28 (Tuesday): Lecture 13, Generalized linear models and generalized additive models Poisson regression for counts; iteratively re-weighted least squares again. The general pattern of generalized linear models; over-dispersion. Generalized additive models. Reading: Notes, first half of chapter 13; Faraway, section 3.1, chapter 6 Homework 6 due Midterm 1: assignment. Your data-set has been e-mailed to you. March 1 (Thursday): Lecture 14, GLM and GAM Examples Building a weather forecaster for Snoqualmie Falls, Wash., with logistic regression. Exploratory examination of the data. Predicting wet or dry days form the amount of precipitation the previous day. First logistic regression model. Finding predicted probabilities and confidence intervals for them. Comparison to spline smoothing and a generalized additive model. Model comparison test detects significant mis-specification. Re-specifying the model: dry days are special. The second logistic regression model and its comparison to the data. Checking the calibration of the second Reading: Notes, second half of chapter 13; Faraway, chapters 6 and 7 (continued from previous lecture) March 6 (Tuesday): Lecture 15, Multivariate Distributions Reminders about multivariate distributions. The multivariate Gaussian distribution: definition, relation to the univariate or scalar Gaussian distribution; effect of linear transformations on the parameters; plotting probability density contours in two dimensions; using eigenvalues and eigenvectors to understand the geometry of multivariate Gaussians; conditional distributions in multivariate Gaussians and linear regression; computational aspects, specifically in R. General methods for estimating parametric distributional models in arbitrary dimensions: moment-matching and maximum likelihood; asymptotics of maximum likelihood; bootstrapping; model comparison by cross-validation and by likelihood ratio tests; goodness of fit by the random projection trick. Reading: Notes, chapter 14 Midterm 1 due March 8 (Thursday): Lecture 16, Density Estimation The desirability of estimating not just conditional means, variances, etc., but whole distribution functions. Parametric maximum likelihood is a solution, if the parametric model is right. Histograms and empirical cumulative distribution functions are non-parametric ways of estimating the distribution: do they work? The Glivenko-Cantelli law on the convergence of empirical distribution functions, a.k.a. "the fundamental theorem of statistics". More on histograms: they converge on the right density, if bins keep shrinking but the number of samples per bin keeps growing. Kernel density estimation and its properties: convergence on the true density if the bandwidth shrinks at the right rate; superior performance to histograms; the curse of dimensionality again. An example with cross-country economic data. Kernels for discrete variables. Estimating conditional densities; another example with the OECD data. Some issues with likelihood, maximum likelihood, and non-parametric estimation. Reading: Notes, chapter 15 March 13 and 15: Spring break March 20 (Tuesday): Lecture 17, Simulation Simulation: implementing the story encoded in the model, step by step, to produce something data-like. Stochastic models have random components and so require some random steps. Stochastic models specified through conditional distributions are simulated by chaining together random variables. How to generate random variables with specified distributions. Simulation shows us what a model predicts (expectations, higher moments, correlations, regression functions, sampling distributions); analytical probability calculations are short-cuts for exhaustive simulation. Simulation lets us check aspects of the model: does the data look like typical simulation output? if we repeat our exploratory analysis on the simulation output, do we get the same results? Simulation-based estimation: the method of simulated moments. Reading: Notes, chapter 16; R Homework 7: assignment, n90_pol.csv data March 22 (Thursday): Lecture 18, Relative Distributions and Smooth Tests of Goodness-of-Fit Applying the right CDF to a continuous random variable makes it uniformly distributed. How do we test whether some variable is uniform? The smooth test idea, based on series expansions for the log density. Asymptotic theory of the smooth test. Choosing the basis functions for the test and its order. Smooth tests for non-uniform distributions through the transformation. Dealing with estimated parameters. Some examples. Non-parametric density estimation on [0,1]. Checking conditional distributions and calibration with smooth tests. The relative distribution idea: comparing whole distributions by seeing where one set of samples falls in another distribution. Relative density and its estimation. Illustrations of relative densities. Decomposing shifts in relative Reading: Notes, chapter 17 Optional reading: Bera and Ghosh, "Neyman's Smooth Test and Its Applications in Econometrics"; Handcock and Morris, "Relative Distribution Methods" March 27 (Tuesday): Lecture 19, Principal Components Analysis Principal components is the simplest, oldest and most robust of dimensionality-reduction techniques. It works by finding the line (plane, hyperplane) which passes closest, on average, to all of the data points. This is equivalent to maximizing the variance of the projection of the data on to the line/plane/hyperplane. Actually finding those principal components reduces to finding eigenvalues and eigenvectors of the sample covariance matrix. Why PCA is a data-analytic technique, and not a form of statistical inference. An example with cars. PCA with words: "latent semantic analysis"; an example with real newspaper articles. Visualization with PCA and multidimensional scaling. Cautions about PCA; the perils of reification; illustration with genetic maps. Reading: Notes, chapter 18; pca.R, pca-examples.Rdata, and cars-fixed04.dat Homework 7 due Homework 8: assignment March 29 (Thursday): Lecture 20, Factor Analysis Adding noise to PCA to get a statistical model. The factor analysis model, or linear regression with unobserved independent variables. Assumptions of the factor analysis model. Implications of the model: observable variables are correlated only through shared factors; "tetrad equations" for one factor models, more general correlation patterns for multiple factors. (Our first look at latent variables and conditional independence.) Geometrically, the factor model says the data have a Gaussian distribution on some low-dimensional plane, plus noise moving them off the plane. Estimation by heroic linear algebra; estimation by maximum likelihood. The rotation problem, and why it is unwise to reify factors. Other models which produce the same correlation patterns as factor models. Reading: Notes, chapter 19; factors.R and sleep.txt April 3 (Tuesday): Lecture 21, Mixture Models From factor analysis to mixture models by allowing the latent variable to be discrete. From kernel density estimation to mixture models by reducing the number of points with copies of the kernel. Probabilistic formulation of mixture models. Geometry: planes again. Probabilistic clustering. Estimation of mixture models by maximum likelihood, and why it leads to a vicious circle. The expectation-maximization (EM, Baum-Welch) algorithm replaces the vicious circle with iterative approximation. More on the EM algorithm: convexity, Jensen's inequality, optimizing a lower bound, proving that each step of EM increases the likelihood. Mixtures of regressions. Other extensions. Reading: Notes, first half of chapter 20 Homework 8 due Homework 9: assignment, MOM_data_full.txt April 5 (Thursday): Lecture 22, Mixture Model Examples and Complements Precipitation in Snoqualmie Falls revisited. Fitting a two-component Gaussian mixture; examining the fitted distribution; checking calibration. Using cross-validation to select the number of components to use. Examination of the selected mixture model. Suspicious patterns in the parameters of the selected model. Approximating complicated distributions vs. revealing hidden structure. Using bootstrap hypothesis testing to select the number of mixture components. Reading: Notes, second half of chapter 20; mixture-examples.R April 10 (Tuesday): Lecture 23, Graphical Models Conditional independence and dependence properties in factor models. The generalization to graphical models. Directed acyclic graphs. DAG models. Factor, mixture, and Markov models as DAGs. The graphical Markov property. Reading conditional independence properties from a DAG. Creating conditional dependence properties from a DAG. Statistical aspects of DAGs. Reasoning with DAGs; does asbestos whiten teeth? Reading: Notes, chapter 21 Homework 9 due Midterm 2: assignment April 12 (Thursday): Lecture 24, Graphical Causal Models Probabilistic prediction is about passively selecting a sub-ensemble, leaving all the mechanisms in place, and seeing what turns up after applying that filter. Causal prediction is about actively producing a new ensemble, and seeing what would happen if something were to change ("counterfactuals"). Graphical causal models are a way of reasoning about causal prediction; their algebraic counterparts are structural equation models (generally nonlinear and non-Gaussian). The causal Markov property. Faithfulness. Performing causal prediction by "surgery" on causal graphical models. The d-separation criterion. Path diagram rules for linear models. Reading: Notes, chapter 22 April 17 (Tuesday): Lecture 25, Identifying Causal Effects from Observations Reprise of causal effects vs. probabilistic conditioning. "Why think, when you can do the experiment?" Experimentation by controlling everything (Galileo) and by randomizing (Fisher). Confounding and identifiability. The back-door criterion for identifying causal effects: condition on covariates which block undesired paths. The front-door criterion for identification: find isolated and exhaustive causal mechanisms. Deciding how many black boxes to open up. Instrumental variables for identification: finding some exogenous source of variation and tracing its effects. Critique of instrumental variables: vital role of theory, its fragility, consequences of weak instruments. Irremovable confounding: an example with the detection of social influence; the possibility of bounding unidentifiable effects. Summary recommendations for identifying causal effects. Reading: Notes, chapter 23 Midterm 2 due Homework 10: assignment April 19 (Thursday): Carnival April 24 (Tuesday): Lecture 26, Estimating Causal Effects from Observations Estimating graphical models: substituting consistent estimators into the formulas for front and back door identification; average effects and regression; tricks to avoid estimating marginal distributions; propensity scores and matching and propensity scores as computational short-cuts in back-door adjustment. Instrumental variables estimation: the Wald estimator, two-stage least-squares. Summary recommendations for estimating causal effects. Reading: Notes, chapter 24 Homework 10 due Homework 11: assignment, sesame.csv April 26 (Thursday): Lecture 27, Discovering Causal Structure from Observations How do we get our causal graph? Comparing rival DAGs by testing selected conditional independence relations (or dependencies). Equivalence classes of graphs. Causal arrows never go away no matter what you condition on ("no causation without association"). The crucial difference between common causes and common effects: conditioning on common causes makes their effects independent, conditioning on common effects makes their causes dependent. Identifying colliders, and using them to orient arrows. Inducing orientation to enforce consistency. The SGS algorithm for discovering causal graphs; why it works. The PC algorithm: the SGS algorithm for lazy people. What about latent variables? Software: TETRAD and pcalg; examples of working with pcalg. Limits to observational causal discovery: universal consistency is possible (and achieved), but uniform consistency is not. Reading: Notes, chapter 25 May 1 (Tuesday): Lecture 28, Time Series I What time series are. Properties: autocorrelation or serial correlation; strong and weak stationarity. The correlation time, the world's simplest ergodic theorem, effective sample size. The meaning of ergodicity: a single increasing long time series becomes representative of the whole process. Conditional probability estimates; Markov models; the meaning of the Markov property. Autoregressive models, especially additive autoregressions; conditional variance estimates. Bootstrapping time series. Trends and de-trending. Reading: Notes, chapter 26; R for examples; gdp-pc.csv Homework 11 due Final exam: assignment; macro.csv May 3 (Thursday): Lecture 29, Time Series II Cross-validation for time series. Change-points and "structural breaks". Moving averages: spurious correlations (Yule effect) and oscillations (Slutsky effect). State-space or hidden Markov models; moving average and ARMA models as state-space models. The EM algorithm for hidden Markov models; particle filtering. Multiple time series: "dynamic" graphical models; "Granger" causality (which is not causal); the possibility of real causality. Reading: Notes, chapter 27; Faraway, section 9.1 May 15 Final exam due
{"url":"http://www.stat.cmu.edu/~cshalizi/uADA/12/","timestamp":"2014-04-21T10:21:00Z","content_type":null,"content_length":"42864","record_id":"<urn:uuid:65582268-86eb-4e80-a405-cef24d4286e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Advogato: Blog for vicious Visualizing complex singularities I needed a way to visualize which t get hit for a polynomial such as $t^2+zt+z=0$ when z ranges in a simple set such as a square or a circle. That is, really this is a generically two-valued function above the z plane. Of course we can’t just graph it since we don’t have 4 real dimensions (I want t and z to of course be complex). For each complex z, there are generically two complex t above it. So instead of looking for existing solutions (boring, surely there is a much more refined tool out there) I decided it is the perfect time to learn a bit of Python and checkout how it does math. Surprisingly well it turns out. Look at the code yourself. You will need numpy, cairo, and pygobject. I think except for numpy everything was installed on fedora. To change the polynomial or drawing parameters you need to change the code. It’s not really documented, but it should not be too hard to find where to change things. It’s less than 150 lines long, and you should take into considerations that I’ve never before written a line of python code, so there might be some things which are ugly. I did have the advantage of knowing GTK, though I never used Cairo before and I only vaguely knew how it works. It’s probably an hour or two’s worth coding, the rest of yesterday afternoon was spent on playing around with different polynomials. What it does is randomly pick z points in a rectangle, by default real and imagnary parts going from -1 to 1. Each z point has a certain color assigned. On the left hand side of the plot you can see the points picked along with their colors. Then it solves the polynomial and plots the two (or more if the polynomial of higher degree) solutions on the right with those colors. It uses the alpha channel on the right so that you get an idea of how often a certain point is picked. Anyway, here is the resulting plot for the polynomial given above: I am glad to report (or not glad, depending on your point of view) to report that using the code I did find a counterexample to a Lemma I was trying to prove. In fact the counterexample is essentially the polynomial above. That is, I was thinking you’d probably have to have hit every t inside the “outline” of the image if all the roots were 0 at zero. It turns this is not true. In fact there exist polynomials where t points arbitrarily close to zero are not hit even if the outline is pretty big (actually the hypothesis in the lemma were more complicated, but no point in stating them since it’s not true). For example, $t^2+zt+\frac{z}{n}=0$ doesn’t hit a whole neighbourhood of the point $t=-\frac{1}{n}$. Below is the plot for $n=5$. Note that as n goes to infinity the singularity gets close to $t(t+z) = 0$ which is the union of two complex lines. By the way, be prepared the program eats up quite a bit of ram, it’s very inefficient in what it does, so don’t run it on a very old machine. It will stop plotting points after a while so that it doesn’t bring your machine to its knees if you happen to forget to hit “Stop”. Also it does points in large “bursts” instead of one by one. Update: I realized after I wrote above that I never wrote a line of python code that I did write a line of python code before. In my evince/vim/synctex setup I did fiddle with some python code that I stole from gedit, but I didn’t really write any new code there rather than just whacking some old code I did not totally understand with a hammer till it fit in the hole that I needed (a round peg will go into a square hole if hit hard enough).
{"url":"http://www.advogato.org/person/vicious/diary/319.html","timestamp":"2014-04-18T18:25:16Z","content_type":null,"content_length":"9717","record_id":"<urn:uuid:7600b192-883f-48fe-b492-4a218b42660f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory of Hypergeometric Functions On page 231 of Indiscrete Thoughts, Gian-Carlo Rota’s fascinating collection of essays, one finds the following (to borrow one of Rota’s phrases) Sybilline pronouncement, which must originally have appeared in one of his book reviews in Advances in Mathematics: Hypergeometric functions are one of the paradises of nineteenth-century mathematics that remain unknown to mathematicians of our day. Hypergeometric functions of several variables are an even better paradise: they will soon crop up in just about everything. If events have not quite yet justified this prediction, it is largely because there are too many possible definitions. It is necessary to make progress in other areas of mathematics to see which multivariate objects that might plausibly claim to be hypergeometric are cropping up at least in something. The book under review deals with this issue, and was originally published in Japan in 1994. The second author died — much too young — the following year. This translation, due to Kenji Iohara, appeared in 2010. While it won’t win any awards for English prose style, the sense is always clear. According to the first author “the contents are almost the same as the original except for a minor revision. (…) As for the references, we just added several that are directly related to the contents of this book.” Aomoto is perhaps best known for his extension of the Selberg integral. I met him in graduate school one summer when he came to Wisconsin to visit Dick Askey; he lectured on this subject to an audience of two, which might tend to contradict the previous sentence, but anyway that’s how I will always think of him. His beautiful argument may be found in section 8.2 of Special Functions, by Andrews, Askey, and Roy. The first chapter is a brief review of the univariate theory. In some introductory remarks on page 1 the authors observe that, although the Gamma function is not uniquely determined by the functional equation Γ(z+1)=zΓ(z), it can be found from this equation if we specify its behavior at infinity, and they add that “this is our basic idea to treat hypergeometric functions in this book.” In section 1.2 the authors give the standard definition of a [2]F[1] hypergeometric series, but with some nonstandard notation. Hypergeometric series are built up from shifted factorials, which are products of the form a(a+1)…(a+n–1) for some complex number a, where this means 1 if n=0. Such products are usually abbreviated by (a)[n]. Although these are often called Pochhammer symbols, this is not a very good name — Pochhammer used this notation, but for binomial coefficients, not for these products. The authors make the still more curious decision to use (a;n) instead of (a)[n] and to nevertheless call this a Pochhammer symbol. (They also consistently use √–1 instead of i.) If one has the gamma function then one doesn’t actually need a new notation, since (a)[n] = Γ(a+n)/Γ(a). In section 1.1 the authors make this quotient of gamma functions match Stirling’s formula, and thereby derive Euler’s infinite product for the Gamma function; this is an implementation of the idea of the previous paragraph. A [p]F[q] hypergeometric series is a power series where the coefficient of x^n/n! has p shifted factorials in the numerator and q in the denominator. Except for the [1]F[0], which is the binomial series, the most common type is the [2]F[1], so much so that this book (and many other books, mostly older ones) suppresses the subscripts 2 and 1 after their first appearance. The gamma function is ubiquitous in the theory, e.g. in Euler’s integral representation (section 1.3), which gives the [2]F[1] as a combination of gamma functions times a generalization of the beta integral. The authors write “one of our purposes is to extend this integral to higher-dimensional cases and to reveal systematically the structure of generalized hypergeometric functions.” At the end of this introductory chapter (page 19) the authors give a nice road map for the rest of the book. To carry out their program they require a superstructure of abstractions that one does not normally associate with special functions, and this is outlined in chapter 2: affine varieties, ordinary and twisted de Rham homology and cohomology, exact sequences of sheaves, logarithmic differential forms, filtrations, and arrangements of hyperplanes. One had better have a good background in most of the above (I regret to say that I do not) to expect to get much out of this book. In chapter 3 the authors discuss a family of generalized hypergeometric series of type (n+1, m+1), where m and n are positive integers with n2F[1]. More encouragingly, when n=1 and m=4 it reduces to a classical two variable generalization of the [2]F[1] due to Appell, and for n=1 and general m to a multivariate version of Appell’s series introduced by Lauricella. Some other bivariate hypergeometric series studied by Horn are also special cases. Integrals like that of Selberg arise as integral representations of these generalized hypergeometric functions over hyperplane In the fourth and final chapter the authors prove a classical theorem of G. D. Birkhoff on the asymptotic behavior of solutions of difference equations, and apply it to the foregoing theory. This is the idea foreshadowed on page 1. The book also has four appendices. The first sketches the theory of a class of hypergeometric series slightly more general than the (n+1, m+1) type. The second contains a brief discussion of the Selberg integral. The third is background for the fourth, which was written by Toshitake Kohno. It brings in the Knizhnik–Zamolodchikov (KZ) equation and some other ideas from mathematical physics. This book is an outstanding achievement, but it is not for the general reader, nor even for experts on classical hypergeometric series unless their mathematical education has been broader than mine. Warren Johnson (warren.johnson@conncoll.edu) is visiting assistant professor of mathematics at Connecticut College.
{"url":"http://www.maa.org/publications/maa-reviews/theory-of-hypergeometric-functions","timestamp":"2014-04-17T21:51:05Z","content_type":null,"content_length":"101287","record_id":"<urn:uuid:99f4c07c-0f62-4441-b39f-03396b302583>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
ODE-Determining whether differential and all partial derivates are continuous March 6th 2013, 01:42 AM #1 Mar 2013 ODE-Determining whether differential and all partial derivates are continuous Please help! I have been given the condition: U is a ball of radius r>0 in R^n, it is centred at a. x is an element in U given length (x-a)<r The ordinary differential equation: dx/dt = v(t,x), where x=(x1, x2, ..., xn)^T is an element in R^n. If U is continuous and all partial derivatives are continuous in U. Let |t-t0)<T, then dx/dt=v(t,x) has a unique solution satisfying x(t0)=a defined on interval J which is part of R. Determine whether the following equations satisfy the above condition: (a): dx/dt=Ax^(1/2), where A is a constant (b): dx/dt=x^2 Any help on proving whether the equations satisfy this condition will be VERY appreciated, or even just some information on the topic/process that will be helpful! Re: ODE-Determining whether differential and all partial derivates are continuous Mark joy wont be too happy if he sees this. March 6th 2013, 01:55 PM #2 Mar 2013
{"url":"http://mathhelpforum.com/differential-equations/214304-ode-determining-whether-differential-all-partial-derivates-continuous.html","timestamp":"2014-04-20T21:22:55Z","content_type":null,"content_length":"32535","record_id":"<urn:uuid:186e581a-30e1-4928-8ddc-149c245fe2e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple example:How to use foreach and doSNOW packages for parallel computation. February 6, 2011 By teramonagi I checked whether this example was run collectly or not in Windows XP(32bit) only ! In R language, the members at Revolution R provide foreach and doSNOW packages for parallel computation. these packages allow us to compute things in parallel. So, we start to install these packages. In foreach package, you can write the codes which are run not only in parallel but also in sequence. And, these are as following. Next, we make clusters by doSNOW package for the purpose of parallel computation. Because I have dual core machine, I specify two as the number of clusters. Now, We are ready to compute things in parallel. It is easy for us to do that by foreach package. You only have to change "%do%" into "%dopar%". I compared the performance of parallel comutation to single computation as following. (I'm sorry that some terms are written in Japanese!) You can understand the result of parallel computation is about twice as fast as single computation do !!! Reference(including PDF) daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/simple-examplehow-to-use-foreach-and-dosnow-packages-for-parallel-computation-2/","timestamp":"2014-04-18T03:30:51Z","content_type":null,"content_length":"52009","record_id":"<urn:uuid:f8c2ab11-efa0-4aaa-a6db-b4d2f078ab26>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Hows about a little help here The mode is the most common number. Here, it is 600 because it appears three times. I'm quite surprised you didn't know that, John. For the rest of 2: 1. Just add up all of the numbers for each month. 2. Take your answer to 1) and divide it by 12 (the number of months) 3. Arrange the numbers in order and pick the middle one. Here, that is halfway between the 6th and 7th ones, so just take the mean of those. 4. As above, it is 600 because it is the most common.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=22129","timestamp":"2014-04-18T16:31:27Z","content_type":null,"content_length":"14431","record_id":"<urn:uuid:e9954f30-e3d2-4313-93d0-b691fb20d9e5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
10th grade Honors Math Student does not know basic math... January 30th 2013, 10:53 AM #1 Jan 2013 10th grade Honors Math Student does not know basic math... I have a 15 yr old 10th grade HONORS student that cannot do basic Math. She currently maintains an A average in her Algebra II Honors and Geometry Honors courses. We have recently begun asking simple questions, what is 10% of $250, if you aretraveling at 60 mph, how far can you travel in 1 hour, etc etc. She has no idea how to begin an answer. Although she has been an honors student throughout her education and maintained an A (sometimes B) average, she does poorly onstandardized testing, 50 percentile or below. We now feel like we understand why, she does not truly understand math concepts, even the most basic math. We suspect she is taught mostly "cheats" and "short cuts" and repeats patterns to solve problems and with theuse of a calculator no longer has a need to know basic multiplication. My question is, how do we begin at-home remediation? Is there a source that could help organize our journey from basic math to where she currently learning? Our only efforts are printing practice sheets of elementary percentage, fraction, multiplicationworksheets. This has proven difficult in finding enough info in one place and in tracking progress. Tackling this is extremely overwhelming and I am hoping for a real guide to how to do this correctly. Thank you for any information you can provide. Re: 10th grade Honors Math Student does not know basic math... since i don't know the student's actual proficiencies, nor where her particular weaknesses lie, my recommendation would be to go to https://www.khanacademy.org/exercisedashboard and take some of the exercises there. start at the top and have her work her way down. my guess, based on the scant information you have provided is: a lack of knowing how to manipulate fractions. a poor understanding of the distributive law. lack of seeing that percentages ARE ratios, and what they MEAN. unfortunately, with our given system of arithmetic (arabic numerals in a base 10 system of digits), a certain amount of memorization is mandatory. you simply cannot multiply 24 x 37 if you do not know what 4 x 7 is. some understanding of the "laws of arithmetic" is in order, too. does she know what "additive identity" and "multiplicative identity", or "associativity" and "commutativity" mean, and how to use them to solve problems symbolically (like with x's and stuff)? these rules aren't just "more meaningless stuff to learn" but are "labor-saving tools" if you know 4 x 7, you don't NEED to remember what 7 x 4 is, if you remember the "commutative law". frankly, i am surprised she does as well in her classes as she has been. one suspects the school system has been kinder to her than warranted, and it does not sound like she is anywhere near prepared for "hard courses" like analytic geometry or calculus (which is FAR tougher than calculating a 15% discount on a pair of $20 shoes). Re: 10th grade Honors Math Student does not know basic math... Thank you for the reply, I will start on that site. The school systems (a private school education grades 1-8 and highly ranked small public for 9-10) has definitely been quicker to say "some kids just don't test well" than truly question the high classroom grades vs low standardized test scores. And with her honors ranking, she is on track to begin AP Calculus next school year (which I believe is early for her age). It is very difficult to tell a teenager they are not as brilliant as they have been made to believe, and to try to start from the beginning. Thanks again. Re: 10th grade Honors Math Student does not know basic math... One should make an effort to involve the child in day today activities and introduce concepts of variable, constant , multiplication division etc., in an informal way. Then there are wonderful learning materials available in 3D stereo which can be of great help. Eureka.in is a wonderful product and will prove to be of great help. You can see by visiting their site: designmate(I) pvt ltd.: Designmate - 3D Education Software - Education Portal for K12 School Education - Ahmedabad, India Re: 10th grade Honors Math Student does not know basic math... If there's one word of advice, as a university student who was top in high school, please tell your child to MEMORIZE TRIGONOMETRIC FUNCTIONS. Please tell your child to memorize that. Calculus is a breeze if you just memorize those rules. My biggest mistake in high school hands down. Just trying not to making history repeat itself for another student. God bless. Last edited by Steelers72; January 30th 2013 at 09:03 PM. January 30th 2013, 11:25 AM #2 MHF Contributor Mar 2011 January 30th 2013, 12:22 PM #3 Jan 2013 January 30th 2013, 08:42 PM #4 Super Member Jul 2012 January 30th 2013, 08:57 PM #5 Dec 2012
{"url":"http://mathhelpforum.com/math-topics/212283-10th-grade-honors-math-student-does-not-know-basic-math.html","timestamp":"2014-04-18T15:21:21Z","content_type":null,"content_length":"43436","record_id":"<urn:uuid:89a0eca7-79fd-428b-9b57-f18265b263bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I invested $4000 in a bank who offers 5 percent monthly interest (the interest covers the total amount of money currently present in the bank including the previously added interest). How much money would I have after 40 months • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50fbd49ce4b010aceb331cfd","timestamp":"2014-04-18T16:24:00Z","content_type":null,"content_length":"39764","record_id":"<urn:uuid:635fa3dc-4db1-4a1f-b556-a6bad981a6b7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Vortex simulations of 2D turbulence in confined domains Seminar Room 1, Newton Institute This talk considers the evolution of 2D turbulence in a confined domain with slip boundary conditions imposed. Several domain shapes are considered, both regular (e.g. a circle, a square) and irregular (e.g. random coastlines). The CASL (Contour-Advective Semi-Lagrangian) technique is employed, taking as the initial condition a random assembly of vortex patches. It is known that the initial angular momentum is important in determining whether the very long time state is dipolar or monopolar when no-slip boundary conditions are considered. Although this is not the case for slip boundary conditions, the initial total circulation plays a similar role. We explore the dependence of the final state on the initial circulation for a range of geometries. Some examples are shown where trapping of vortices in 'bays', caused by domain-scale interactions, can influence the long time evolution. Recent work on 2D turbulence in a periodic box has shown how the rate of dipole/ monopole interactions can be related to the time rate of decay of enstrophy at intermediate to long times (once the initial very strong interactions are over). The presence of the domain boundary provides a mechanism for enhancing the rate of such interactions, as monopoles of opposite sign tend to hug the boundaries, travel in opposite directions and then meet to form dipoles which are then launched into the flow. Accordingly, we consider the effect of domain shape on the frequency of dipole formation and dipole/monopole interactions and the corresponding rate of decay of enstrophy. Finally, we discuss extensions of this work to the 3D quasi-geostrophic case. The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/HRT/seminars/2008121011301.html","timestamp":"2014-04-17T00:49:38Z","content_type":null,"content_length":"7373","record_id":"<urn:uuid:80144272-9ed9-4698-adf7-aab40871bfcc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
On the set of infinite measures up vote 1 down vote favorite My question is about the structure of the set of infinite Borel measures on compact metric spaces invariant with respect to a homeomorphism. Let $T$ be a homeomorphism of a compact metric space $X$ and $M(T)$ the set of $T$-invariant Borel measures on $X$, i.e., for any $\mu$ from $M(T)$ one has $\mu(TA) = \mu(A)$ for any Borel set $A$. Denote by $M_1(T)$ the subset of probability Borel measures and let $M_\infty(T)$ denote the subset of infinite $\sigma$-finite measures from $M(T)$. Remark that the set $M_1(T)$ is always non-empty; on the other hand, the set $M_\infty(T)$ may be empty in some cases. For instance, if $T$ is minimal (every $T$-orbit is dense in $X$), then $M_\infty(T)= \emptyset$. It is well known that $M_1(T)$ is a Choquet simplex whose extreme points are ergodic measures. Question: What can be said about $M_\infty(T)$? Is this set a Choquet simplex? (I'm sorry if this is trivial or well known) ergodic-theory measure-theory 1 I think if you take a sum of $\delta$ measures along any (aperiodic) orbit then you always get an infinite measure. Not a nice one because it's not locally finite. – Anthony Quas Jun 22 '12 at 1 Actually I think just about everything goes wrong. There's a nice topology on the set of probability measures, thanks to the Riesz representation theorem. The same isn't true for the infinite measures. The probability measures have a natural normalization (so that only convex combinations make sense). On the other hand, a positive multiple of an infinite invariant measure is another infinite invariant measure. – Anthony Quas Jun 23 '12 at 8:11 Anthony, thanks for your comments. You are right, infinite atomic invariant measures always exist. Of course, the case of continuous infinite measures is of main interest. Do you know whether there are any results about erodic decomposition of dynamical systems with infinite invariant measures? – SIB Jun 23 '12 at 14:12 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged ergodic-theory measure-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/100388/on-the-set-of-infinite-measures","timestamp":"2014-04-17T19:17:21Z","content_type":null,"content_length":"50026","record_id":"<urn:uuid:198bad40-5aff-4a69-bbea-6142b5b1c1db>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - Subdivision trouble 10-23-2012, 04:47 PM If I subdivide the above polygon I get rectangular polygons. If I add a cut down the middle so I have two square polygons it will no longer subdivide. Basically I'm being forced to use rectangular polygons, since this will be a sheet with the cloth simulator you can see that I need square polys. I'm following a video tutorial and in the video this problem doesn't exist, he cuts then subdivides without a problem. Any ideas?
{"url":"http://forums.cgsociety.org/archive/index.php/t-1076675.html","timestamp":"2014-04-18T08:45:05Z","content_type":null,"content_length":"5801","record_id":"<urn:uuid:01986017-30d4-4a1e-8185-7f5e2d80ddbe>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Irrelevant patents on elliptic-curve cryptography D. J. Bernstein Authenticators and signatures A state-of-the-art Diffie-Hellman function Irrelevant patents on elliptic-curve cryptography Popular rumor states that elliptic-curve cryptography is a patent minefield. However, I'm not aware of any patents that apply to Curve25519. Special primes Bender and Castagnoli in 1990.06 wrote ``2^{127}+24933 is prime'' and commented that this was ``convenient in computer arithmetic,'' in particular for elliptic-curve cryptography. My prime 2^{255}-19 is convenient for the same reasons. Popular rumor states that convenient primes are covered by a subsequent Crandall patent: US patent 5159632, filed 1991.09.16, granted 1992.10.27. What the patent actually claims is elliptic-curve cryptography with primes p ``such that mod p arithmetic is performed in a processor using only shift and add operations.'' Similar comments apply to US patent 5271061, filed 1991.09.17, granted 1993.12.14; and US patent 5463690, filed 1991.09.17, granted 1995.10.31. Reduction modulo NIST's primes 2^{224}-2^{96}+1 et al. is often handled with shift and add operations, raising the question of whether the patent is valid; but reduction modulo my prime 2^{255}-19 is handled with multiplications, which the patent does not claim to cover. It should, in any case, be obvious to the reader that a patent cannot cover prime shapes published two years before the patent was filed. Point compression Miller in 1986, in the paper that introduced elliptic-curve cryptography, suggested compressing a public key (x,y) to simply x: ``Finally, it should be remarked, that even though we have phrased everything in terms of points on an elliptic curve, that, for the key exchange protocol (and other uses as one-way functions), that only the x-coordinate needs to be transmitted. The formulas for multiples of a point cited in the first section make it clear that the x-coordinate of a multiple depends only on the x-coordinate of the original point.'' This is exactly the compression method that I use. Popular rumor states that point compression is covered by a subsequent Vanstone-Mullin-Agnew patent: US patent 6141420, filed 1994.07.29, granted 2000.10.31. What the patent actually claims are (1--28) encryption using an elliptic curve over a finite field of characteristic 2 with elements represented on a normal basis; (29, 36) communicating (x,y) on a curve by communicating x and having the receiver somehow compute y; (30--35, 37--41) communicating x and ``identifying information'' of y, such as one bit; and (42--52) some secret-key encryption mechanisms. My Curve25519 software never computes y, so it is not covered by the patent. It should, in any case, be obvious to the reader that a patent cannot cover compression mechanisms published seven years before the patent was filed. Fixed-base exponentiation Yao in 1976 published a fast algorithm to compute many powers of a fixed base. Pippenger in 1976 published an even faster algorithm. Yao's algorithm, with two slight improvements published later by Knuth, was reinvented by Brickell, Gordon, McCurley, and Wilson in 1993 and patented: US patent 5299262, filed 1992.08.13, granted 1994.03.29. A very small part of Pippenger's algorithm was also reinvented by Brickell, Gordon, McCurley, and Wilson. A slightly larger part of Pippenger's algorithm was then reinvented by Lim and Lee in 1994 and patented: US patent 5999627, filed 1995.06.06, granted 1999.12.07. My Curve25519 software does not use any of these algorithms.
{"url":"http://cr.yp.to/ecdh/patents.html","timestamp":"2014-04-18T03:20:21Z","content_type":null,"content_length":"3963","record_id":"<urn:uuid:03862536-0673-4a89-b9d8-89aee71a3f1b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Sure you can program in Scala just like you would in Java and get all the advantages of the cleaner syntax. But if you really want to explore the power of Scala you should try some functional One way to tell if you are programming in a largely functional manner is if you prefer val declarations to var declarations and if you prefer immutable to mutable classes. How you might create a matrix library in a functional programming style To implement matrices, let's leverage the immutable lists generated by Scala's default List(...) call. Arbitrarily choosing a row-major representation, we will represent a row of a matrix as as a list of doubles. For convenience let's create a type alias for that: type Row = List[Double] A matrix can then be a list of rows: type Matrix = List[Row] One fundamental operation we will be doing is a dot product. Let's implement this in map-reduce style: def dotProd(v1:Row,v2:Row) = v1.zip( v2 ).map{ t:(Double,Double) => t._1 * t._2 }.reduceLeft(_ + _) The zip method combines together two parallel lists of numbers into a single list of pairs of numbers. The map method multiplies the elements of each of these pairs together. The reduceLeft method sums the numbers of the list to create a single number. OK, now let's see how we can create some of the standard matrix functions. First here is how we can transpose a matrix: def transpose(m:Matrix):Matrix = if(m.head.isEmpty) Nil else m.map(_.head) :: transpose(m.map(_.tail)) Notice this is a recursive function. It uses the construct firstRow :: remainderOfRows to form the list of rows that is the output matrix. It calculates the first row of the output matrix by using map to get the first element of each row of the input matrix. It gets the remainder of the rows by recursively getting the transpose of the matrix formed by the tail of each input row (i.e. every element except the first one). How about matrix multiplication? That turns out to be pretty straightforward using standard Scala "for comprehensions": def mXm( m1:Matrix, m2:Matrix ) = for( m1row <- m1 ) yield for( m2col <- transpose(m2) ) yield dotProd( m1row, m2col ) Here we directly implement the matrix multiplication formula, iterating through the rows of the first matrix and the columns of the second matrix, and calculating the dot-product of each. Note that to iterate over the columns of a matrix we actually iterate over the rows of the transpose of the matrix. However it would be nice to be able to use standard mathematical operators like A*B and A^T when coding with matrices. Well, we can using the power of Scala's operator identifiers, infix syntax, and implicit conversion: case class RichMatrix(m:Matrix){ def T = transpose(m) def *(that:RichMatrix) = mXm( this.m, that.m ) implicit def pimp(m:Matrix) = new RichMatrix(m) The user of this library need never explicitly instantiate any RichMatrix objects -- they are automatically created from List[Double[Double]] object whenever the T or * method is called on it. This allows you to do things like: val M = List( List( 1.0, 2.0, 3.0 ), List( 4.0, 5.0, 6.0 ) val MT = List( List( 1.0, 4.0 ), List( 2.0, 5.0 ), List( 3.0, 6.0 ) M.T must_== MT val A = List(List( 2.0, 0.0 ), List( 3.0,-1.0 ), List( 0.0, 1.0 ), List( 1.0, 1.0 )) val B = List(List( 1.0, 0.0, 2.0 ), List( 4.0, -1.0, 0.0 )) val C = List(List( 2.0, 0.0, 4.0 ), List( -1.0, 1.0, 6.0 ), List( 4.0, -1.0, 0.0 ), List( 5.0, -1.0, 2.0 )) A * B must_== C While we are at it we might as well add a few more convenience methods to the RichMatrix: case class RichMatrix(m:Matrix){ def apply(i:Int,j:Int) = m(i)(j) def rowCount = m.length def colCount = m.head.length def toStr = "\n"+m.map{ _.map{"\t" + _}.reduceLeft(_ + _)+"\n" }.reduceLeft(_ + _) The apply method is a special method invoked when the parenthesis operator is applied to an object. It allows you to, for example, do c(1,3) to get the number on the second row, fourth column of the matrix. Note that this method should not be used in performance-sensitive inner loops because it is not efficient. A matrix uses linked-lists as storage so the element access is O(N) not O(1) as would be the case if the matrix used arrays. The rowCount and colCount methods are I hope obvious. The toStr returns a string representation of the matrix in a nice tabular format that is much easier to than the default toString method if List[List[Double]], Note how this is done with two nested map/reduce pairs. Now, creating a matrix using the List( List(...), List(...), ... ) syntax is fine for small matrices, but what if we want to generate large matrices? One way to do this is: object Matrix{ def apply( rowCount:Int, colCount:Int )( f:(Int,Int) => Double ) = ( for(i <- 1 to rowCount) yield ( for( j <- 1 to colCount) yield f(i,j) ).toList This allows you do do things like: // 100x200 matrix containing random elements val A = Matrix(100,200) { (i:Int,j:Int) => // 5x5 identity matrix val I = Matrix(5,5) { (i:Int,j:Int) => if(i==j) 1.0 else 0.0 Note that everything above is pure-functional: all the objects are immutable and each function is implemented as a single expression. That's it for now. There is obviously a lot more we would have to add to make this a useful, production-quality library, (If you want to try this yourself, you can download the complete source code and the BDD-style regression test that demonstrates its use.)
{"url":"http://www.scalaclass.com/book/export/html/1","timestamp":"2014-04-18T20:43:34Z","content_type":null,"content_length":"28746","record_id":"<urn:uuid:4b474d28-3209-4965-9b58-29af61be4838>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Downers Grove Geometry Tutor ...I've spent the last six years teaching math (and English) prerequisite courses at a small private nursing college. I still tutor there and lead workshops for the school's admission test. Also, I recently started tutoring ACT students at a local branch of a successful national tutoring center. 17 Subjects: including geometry, English, reading, grammar ...I enjoyed these activities so much, in fact, that I decided to broaden my education outreach efforts and start tutoring, focusing mainly on physics and math. My goals are to improve each student's conceptual understanding of a subject along with his/her specific problem-solving skills. Because ... 10 Subjects: including geometry, calculus, physics, algebra 1 ...My master's project was on Remote monitoring of engine health using non intrusive methods. Dear Students, I currently work as Sr. Technical Specialist at Case New Holland Industrial in Burr Ridge, IL. 16 Subjects: including geometry, chemistry, physics, calculus ...Prealgebra is the precursor to future Algebra classes and essentially all Math courses. Work here can begin with simple multiplication and division, working up to beginning Algebraic equations. The main focus however, is generally in word problems. 11 Subjects: including geometry, calculus, algebra 1, algebra 2 ...My Bachelor's is in Mathematics Education from Purdue University and I also have a Masters in Applied Mathematics from DePaul University. I am able to tutor all Math subjects from Pre-Algebra to Calculus and Basic Microsoft Excel. I am available in the Roselle Area after 7pm on weekdays and weekends as needed. 10 Subjects: including geometry, calculus, statistics, algebra 1
{"url":"http://www.purplemath.com/downers_grove_geometry_tutors.php","timestamp":"2014-04-18T18:58:24Z","content_type":null,"content_length":"24004","record_id":"<urn:uuid:194146c0-da95-45ff-a20f-041f2e008914>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
22 December 1952 Mr. Roger Hayward 920 Linda Vista Pasadena 2, Calif. Dear Roger: Here is the information about the drawings that I need for the nucleic acid paper, in connection with the NFIP work. The cylindrical coordinates for 20 atoms are given on the accompanying sheet. Here ρ is the radial distance from the axis of the cylinder (the axis of the helix), Ф is the azimuthal angle, and z is the coordinate in the direction of the axis. The coordinates are on a right-handed system, so that increasing z and increasing Ф generates a right-handed screw. The first five coordinates, for P and four oxygen atoms, refer to the phosphate groups. The four oxygen atoms are at the corners of a tetrahedron, which is, however, not a regular tetrahedron. I think that so far as your drawings are concerned it would be satisfactory, however, to indicate the tetrahedron as a regular tetrahedron, without trying to take the relatively small distortions into consideration. The next three sets of coordinates for C[5'], C[4'], and C[3'], represent three atoms which connect the upper right outer corner of an tetrahedron with the lower left outer corner of the tetrahedron that is in the layer above, and rotated 105° to the right. I suggest that the first drawing indicate one helix, formed by one tetrahedron, the link to another tetrahedron in the layer above, that tetrahedron, the link to the tetrahedron in the layer above that, and so on. This helix is generated by taking the tetrahedron represented by the four oxygen atoms, the inner edge being formed by oxygen atoms with radius 2.00 A, and generating the next tetrahedron by translating the distance 3.40 A along the z axis and rotating through the angle 105°. This helical molecule executes seven turns in 24 tetrahedra. The pitch is 81.6 - that is, 24. x 3.40 A. I suggest that 24 - probably better 25 - tetrahedra be shown in outline, with arcs connecting them together, the arcs representing the atoms C[5'], C[4'], and C[3']. This would be just a diagrammatic representation. It might be worth while to show the axis of the molecule, as a straight line, and to draw a radius out from the axis to the center of the inner edge of each tetrahedron. For the second figure, a representation of the three-chain structure should be drawn. This can consist of 25 tetrahedra and the connecting Mr. Hayward 22/12/52 arcs, as in the first figure, and the same repeated after rotation through 120° and 240°. The molecule consists of three helical chains that are related to one another by the operations of a three-fold axis. It would be good also to have a perspective drawing showing all 20 atoms, plus five more - the phosphate group to which the first phosphate group is connected by the chain of atoms C[5'], C[4'], and C[3']. In this drawing bonds should be shown connecting the atoms. The phosphorus atom is bonded to the four oxygen atoms that constitute the tetrahedron about it. Then O[III] is bonded to C5'. The other bonds are C[5']-C[4'], C[4']-C[3'], C[3']-C[2'], C[2']-C[1'], C[1']-O[1'], O[1']-C[4'] (forming a five-membered ring), C[2']-O[2'], C[1']-N[3], N[3]-C[4], C[4]-C[5], C[5]-C[6], C[6]-N[6], C[6] -N[1], N[1]-C[2], C[2]-O[2], C[2]-N[3]. The other five atoms to be shown, the other phosphate tetrahedron, are obtained from the coordinate for the phosphorus and the surrounding four oxygen atoms by increasing z by 3.40 A and increasing by 105°. I think that it would be wise in this figure to show the helical axis, and perhaps the radii from the axis to the centers of the inner edges of the two Please telephone me if there is any question in your mind about the nature of this structure, or if you think that I have made some mistake in this description. Sincerely yours, Linus Pauling: W
{"url":"http://scarc.library.oregonstate.edu/coll/pauling/dna/corr/sci9.001.17-lp-hayward-19521222-transcript.html","timestamp":"2014-04-17T02:29:17Z","content_type":null,"content_length":"13814","record_id":"<urn:uuid:50145bb0-b3ea-4b3d-be10-4d1d2f4e83ee>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Google Answers: Equation to know where the Sun is at a given place at a given date-time The goal is to predict the apparent position of the Sun in the Earth sky at a given date, a given hour and a given latitude. Let alpha be the angle between North and the direction of the Sun, and beta then angle of the Sun above the horizon, L the latitude, l the longitude, and t the date-time. I'm looking for the function F : ( albha, beta ) = F( L , t ) For exemple, right now, I am at 48°49'N 02°17'23 E, and we are the 15th of november 2006, time is 1035 GMT. **Roughly** observed, the Sun is at 165° (North being zero) and 22° (horizon being zero) I want F that could have told me : F( 48°49' , 2°17', 2006/11/15, 1035) = (165, 22)
{"url":"http://answers.google.com/answers/threadview/id/782886.html","timestamp":"2014-04-18T03:26:37Z","content_type":null,"content_length":"16909","record_id":"<urn:uuid:f7475607-c9c6-4e4a-9874-25ab793ea494>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Jones: Two-step trending [Archive] - Actuarial Outpost 02-04-2005, 02:26 PM In Jones' paper on trend, he discusses both a one-step and a two-step trending procedure. The one-step trends from the average written date of the experience period to the average written date of the future policy period. No problems here. The two-step does the same thing, except splitting the trending period at the average date for the latest trend point. Can someone explain what this point is? He describes the first step factor as the average on-level written premium from the latest year divided by the average on-level earned premium for the specific year. What does this represent? Where does the average date for the latest trend point come in? It seems to me that the two-step method simply adds an invented step that's not necessary, but I'm obviously confused somewhere. Thanks for your help.
{"url":"http://www.actuarialoutpost.com/actuarial_discussion_forum/archive/index.php/t-47324.html","timestamp":"2014-04-18T03:00:15Z","content_type":null,"content_length":"45246","record_id":"<urn:uuid:c4c1b2a2-7474-4005-9eef-080b05967fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Item analysis An item analysis involves many statistics that can provide useful information for improving the quality and accuracy of multiple-choice or true/false items (questions). Some of these statistics are: Item difficulty: the percentage of students that correctly answered the item. • Also referred to as the p-value. • The range is from 0% to 100%, or more typically written as a proportion of 0.0 to 1.00. • The higher the value, the easier the item. • Calculation: Divide the number of students who got an item correct by the total number of students who answered it. • Ideal value: Slightly higher than midway between chance (1.00 divided by the number of choices) and a perfect score (1.00) for the item. For example, on a four-alternative, multiple-choice item, the random guessing level is 1.00/4 = 0.25; therefore, the optimal difficulty level is .25 + (1.00 - .25) / 2 = 0.62. On a true-false question, the guessing level is (1.00/2 = .50) and, therefore, the optimal difficulty level is .50+(1.00-.50)/2 = .75 • P-values above 0.90 are very easy items and should be carefully reviewed based on the instructor’s purpose. For example, if the instructor is using easy “warm-up” questions or aiming for student mastery, than some items with p values above .90 may be warranted. In contrast, if an instructor is mainly interested in differences among students, these items may not be worth testing. • P-values below 0.20 are very difficult items and should be reviewed for possible confusing language, removed from subsequent exams, and/or identified as an area for re-instruction. If almost all of the students get the item wrong, there is either a problem with the item or students were not able to learn the concept. However, if an instructor is trying to determine the top percentage of students that learned a certain concept, this highly difficult item may be necessary. Item discrimination: the relationship between how well students did on the item and their total exam score. • Also referred to as the Point-Biserial correlation (PBS) • The range is from –1.00 to 1.00. • The higher the value, the more discriminating the item. A highly discriminating item indicates that the students who had high exams scores got the item correct whereas students who had low exam scores got the item incorrect. • Items with discrimination values near or less than zero should be removed from the exam. This indicates that students who overall did poorly on the exam did better on that item than students who overall did well. The item may be confusing for your better scoring students in some way. • Acceptable range: 0.20 or higher • Ideal value: The closer to 1.00 the better • Calculation: Χ[C] = the mean total score for persons who have responded correctly to the item Χ[Τ] = the mean total score for all persons p = the difficulty value for the item q = (1 – p) S. D. Total = the standard deviation of total exam scores Reliability coefficient: a measure of the amount of measurement error associated with a exam score. • The range is from 0.0 to 1.0. • The higher the value, the more reliable the overall exam score. • Typically, the internal consistency reliability is measured. This indicates how well the items are correlated with one another. • High reliability indicates that the items are all measuring the same thing, or general construct (e.g. knowledge of how to calculate integrals for a Calculus course). • With multiple-choice items that are scored correct/incorrect, the Kuder-Richardson formula 20 (KR-20) is often used to calculate the internal consistency reliability. K = number of items p = proportion of persons who responded correctly to an item (i.e., difficulty value) q = proportion of persons who responded incorrectly to an item (i.e., 1 – p) σ ^2 [x] = total score variance • Three ways to improve the reliability of the exam are to 1) increase the number of items in the exam, 2) use items that have high discrimination values in the exam, 3) or perform an item-total statistic analysis • Acceptable range: 0.60 or higher • Ideal value: 1.00 Item-total statistics: measure the relationship of individual exam items to the overall exam score. Currently, the University of Texas does not perform this analysis for faculty. However, one can calculate these statistics using SPSS or SAS statistical software. 1. Corrected item-total correlation □ This is the correlation between an item and the rest of the exam, without that item considered part of the exam. □ If the correlation is low for an item, this means the item isn't really measuring the same thing the rest of the exam is trying to measure. 2. Squared multiple correlation □ This measures how much of the variability in the responses to this item can be predicted from the other items on the exam. □ If an item does not predict much of the variability, then the item should be considered for deletion. 3. Alpha if item deleted □ The change in Cronbach's alpha if the item is deleted. □ When the alpha value is higher than the current alpha with the item included, one should consider deleting this item to improve the overall reliability of the exam. Item-total statistic table Summary for scale: Item-total statistics Mean = 46.1100 S.D. = 8.26444 Valid n = 100 Cronbach alpha = .794313 Standardized alpha = .800491 Average inter-item correlation = .297818 Variable Mean if Var. if S.D. if Corrected item-total Squared multiple Alpha if deleted deleted deleted Correlation correlation deleted ITEM1 41.61000 51.93790 7.206795 .656298 .507160 .752243 ITEM2 41.37000 53.79310 7.334378 .666111 .533015 .754692 ITEM3 41.41000 54.86190 7.406882 .549226 .363895 .766778 ITEM4 41.63000 56.57310 7.521509 .470852 .305573 .776015 ITEM5 41.52000 64.16961 8.010593 .054609 .057399 .824907 ITEM6 41.56000 62.68640 7.917474 .118561 .045653 .817907 ITEM7 41.46000 54.02840 7.350401 .587637 .443563 .762033 ITEM8 41.33000 53.32110 7.302130 .609204 .446298 .758992 ITEM9 41.44000 55.06640 7.420674 .502529 .328149 .772013 ITEM10 41.66000 53.78440 7.333785 .572875 .410561 .763314 By investigating the item-total correlation, we can see that the correlations of items 5 and 6 with the overall exam are . 05 and .12, while all other items correlate at .45 or better. By investigating the squared-multiple correlations, we can see that again items 5 and 6 are significantly lower than the rest of the items. Finally, by exploring the alpha if deleted, we can see that the reliability of the scale (alpha) would increase to .82 if either of these two items were to be deleted. Thus, we would probably delete these two items from this exam. Deleting item process: To delete these items, we would delete one item at a time, preferably item 5 because it can produce a higher exam reliability coefficient if deleted, and re-run the item-total statistics report before deleting item 6 to ensure we do not lower the overall alpha of the exam. After deleting item 5, if item 6 still appears as an item to delete, then we would re-perform this deletion process for the latter item. Distractor evaluation: Another useful item review technique to use. The distractor should be considered an important part of the item. Nearly 50 years of research shows that there is a relationship between the distractors students choose and total exam score. The quality of the distractors influence student performance on an exam item. Although the correct answer must be truly correct, it is just as important that the distractors be incorrect. Distractors should appeal to low scorers who have not mastered the material whereas high scorers should infrequently select the distractors. Reviewing the options can reveal potential errors of judgment and inadequate performance of distractors. These poor distractors can be revised, replaced, or removed. One way to study responses to distractors is with a frequency table. This table tells you the number and/or percent of students that selected a given distractor. Distractors that are selected by a few or no students should be removed or replaced. These kinds of distractors are likely to be so implausible to students that hardly anyone selects them. • Definition: The incorrect alternatives in a multiple-choice item. • Reported as: The frequency (count), or number of students, that selected each incorrect alternative • Acceptable Range: Each distractor should be selected by at least a few students • Ideal Value: Distractors should be equally popular • Interpretation: □ Distractors that are selected by a few or no students should be removed or replaced □ One distractor that is selected by as many or more students than the correct answer may indicate a confusing item and/or options • The number of people choosing a distractor can be lower or higher than the expected because: □ Partial knowledge □ Poorly constructed item □ Distractor is outside of the area being tested Additional information DeVellis, R. F. (1991). Scale development: Theory and applications. Newbury Park: Sage Publications. Field, A. (2006). Research Methods II: Reliability Analysis. Retrieved August 5, 2006 from The University of Sussex Web site: http://www.sussex.ac.uk/Users/andyf/reliability.pdf Haladyna. T. M. (1999). Developing and validating multiple-choice exam items, 2nd ed. Mahwah, NJ: Lawrence Erlbaum Associates. Suen, H. K. (1990). Principles of exam theories. Hillsdale, NJ: Lawrence Erlbaum Associates. Yu, A. (n.d) Using SAS for Item Analysis and Test Construction. Retrieved August 5, 2006 from Arizona State University Web site: http://seamonkey.ed.asu.edu/~alex/teaching/assessment/alpha.html
{"url":"http://www.utexas.edu/academic/ctl/assessment/iar/students/report/itemanalysis.php","timestamp":"2014-04-21T12:48:30Z","content_type":null,"content_length":"23000","record_id":"<urn:uuid:c879ef3b-b606-4209-9900-831edb8dcba1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
A Box-Constrained Optimization Algorithm With Negative Curvature Directions and Spectral Projected Gradients A Box-Constrained Optimization Algorithm With Negative Curvature Directions and Spectral Projected Gradients (2001) Download Links Other Repositories/Bibliography by E. G. Birgin , J.M. Martínez author = {E. G. Birgin and J.M. Martínez}, title = {A Box-Constrained Optimization Algorithm With Negative Curvature Directions and Spectral Projected Gradients}, year = {2001} A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face a strategy that deals eciently with negative curvature is employed. Global convergence results are given. Numerical results are presented. Keywords: box constrained minimization, active set methods, spectral projected gradients, dogleg path methods. AMS Subject Classication: 49M07, 49M10, 65K, 90C06, 90C20. 1 133 Nonmonotone Spectral Projected Gradient Methods on Convex Sets - Birgin, Martínez, et al. 123 CUTE: Constrained and unconstrained testing environment - Toint - 1995 82 The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem - Raydan - 1997 62 Global convergence of a class of trust region algorithms for optimization with simple bounds - Toint - 1988 52 On the Barzilai and Borwein choice of the steplength for the gradient method - Raydan - 1993 47 A new trust region algorithm for bound constrained minimization - Friedlander, Martínez, et al. - 1994 31 Augmented Lagrangian with adaptive precision control for quadratic programming with equality constraints - Dostal, Friedlander, et al. - 1999 31 On the maximization of a concave quadratic function with box constraints - Friedlander, Martínez - 1994 28 Gradient method with retards and generalizations - Friedlander, Martinez, et al. - 1999 26 Estimation of the optical constants and the thickness of thin films using unconstrained optimization - BIRGIN, CHAMBOULEYRON, et al. - 1999 22 Box constrained quadratic programming with controlled precision of auxiliary problems and applications - Dostál - 1996 21 Restricted optimization: a clue to a fast and accurate implementation of the Common Reflection Surface method - Birgin, Biloti, et al. - 1999 21 Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters - Gomes, Maciel, et al. - 1999 18 An adaptive algorithm for bound constrained quadratic minimization, Investigación Operativa 7 - Bielchowsky, Friedlander, et al. - 1997 17 Validation of an Augmented Lagrangian algorithm with a Gauss-Newton Hessian approximation using a set of hardspheres problems - Krejić, Martínez, et al. - 2000 16 A New Method for Large-scale Box Constrained Convex Quadratic Minimization Problems - Friedlander, Martínez, et al. - 1995 16 Determination of thickness and optical constants of a-Si:H films from transmittance data - Mulato, Chambouleyron, et al. - 2000 14 Solution of contact problems by FETI domain decomposition with natural coarse space projection - Dostál, Gomes, et al. - 1611 11 On the solution of finite-dimensional variational inequalities using smooth optimization with simple bounds - Andreani, Friedlander, et al. - 1995 11 Comparing the numerical performance of two trust-region algorithms for large-scale bound-constrained minimization - Diniz-Ehrhardt, Gomes-Ruggiero, et al. - 1997 10 Solution of coercive and semicoercive contact problems by FETI domain decomposition - Dostal, Friedlander, et al. - 1998 10 Solution of linear complementarity problems using minimization with simple bounds - Friedlander, Martínez, et al. - 1995 9 Nonmonotone strategy for minimization of quadratics with simple constraints - Diniz-Ehrhardt, Dostal, et al. - 2001 9 On the Resolution of Large Scale Linearly Constrained Convex Minimization Problems - Friedlander, Mart'inez, et al. - 1994 9 A class of indefinite dogleg path methods for unconstrained minimization - Zhang, Xu - 1999 9 Estimation of the optical constants and the thickness of thin using unconstrained optimization - Birgin, Chambouleyron, et al. - 1999 8 Adaptive precision control in quadratic programming with simple bounds and/or equality constraints - Dostál, Friedlander, et al. - 1998 8 1995], A new strategy for solving variational inequalities on bounded polytopes, Numerical Functional Analysis and Optimization 16 - Friedlander, Mart'inez, et al. - 1995 7 On the Solution of the Extended Linear Complementarity Problems, Linear Algebra and its Applications 281 - Andreani, Martinez - 1998 7 Solving complementarity problems by means of a new smooth constrained nonlinear solver - Andreani, Mart'inez - 1999 7 Solution of General Linear Complementarity Problem using smooth optimization and its application to bilinear programming and LCP - Fernandes, Friedlander, et al. - 2001 7 BOX-QUACAN and the implementation of Augmented Lagrangian algorithms for minimization with inequality constraints - Mart'inez - 2000 6 On the reformulation of nonlinear complementarity problems using the Fischer-Burmeister function - Andreani, Mart'inez - 1999 6 A globally convergent augmented Lagrangean algorithm for optimization with general constraints and simple bounds - Toint - 1991 4 A Curvilinear Search using Tridiagonal Secant Updates for Unconstrained Optimization - Dennis, Echebest, et al. - 1991 3 Reformulation of variational inequalities on a simplex and compactification of complementarity problems - Andreani, Mart'inez - 2000 3 BOX-QUACAN and the implementation of Augmented Lagrangian algorithms for minimization with inequality constraints - Martnez - 2000 2 A class of inde dogleg path methods for unconstrained minimization - Zhang, Xu - 1999 1 On the solution of variational inequalities using smooth optimization with simple bounds - Andreani, Friedlander, et al. - 1997
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.1183","timestamp":"2014-04-17T14:03:41Z","content_type":null,"content_length":"35656","record_id":"<urn:uuid:b9e25041-a019-4b56-8992-f63a03b42b73>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Philip Hartman Classics in Applied Mathematics 38 Ordinary Differential Equations covers the fundamentals of the theory of ordinary differential equations (ODEs), including an extensive discussion of the integration of differential inequalities, on which this theory relies heavily. In addition to these results, the text illustrates techniques involving simple topological arguments, fixed point theorems, and basic facts of functional analysis. Unlike many texts, which supply only the standard simplified theorems, Ordinary Differential Equations presents the basic theory of ODEs in a general way, making it a valuable reference. This SIAM reissue of the 1982 second edition covers invariant manifolds, perturbations, and dichotomies, making the text relevant to current studies of geometrical theory of differential equations and dynamical systems. In particular, Ordinary Differential Equations includes the proof of the Hartman-Grobman theorem on the equivalence of a nonlinear to a linear flow in the neighborhood of a hyperbolic stationary point, as well as theorems on smooth equivalences, the smoothness of invariant manifolds, and the reduction of problems on ODEs to those on "maps" (Poincaré). Ordinary Differential Equations is based on the author's lecture notes from courses on ODEs taught to advanced undergraduate and graduate students in mathematics, physics, and engineering. The book, which remains as useful today as when it was first published, includes an excellent selection of exercises varying in difficulty from routine examples to more challenging problems. These exercises show extensions of the techniques in question and serve to introduce the reader to the literature in this area. New to this SIAM Classics edition is an Errata section listing corrections to minor errors in the 1982 edition. An extensive bibliography (up to 1980) is provided. Readers should have knowledge of matrix theory and the ability to deal with functions of real variables. Foreword to the Classics Edition; Preface to the First Edition; Preface to the Second Edition; Errata; I: Preliminaries; II: Existence; III: Differential Inqualities and Uniqueness; IV: Linear Differential Equations; V: Dependence on Initial Conditions and Parameters; VI: Total and Partial Differential Equations; VII: The Poincaré-Bendixson Theory; VIII: Plane Stationary Points; IX: Invariant Manifolds and Linearizations; X: Perturbed Linear Systems; XI: Linear Second Order Equations; XII: Use of Implicity Function and Fixed Point Theorems; XIII: Dichotomies for Solutions of Linear Equations; XIV: Miscellany on Monotomy; Hints for Exercises; References; Index To request an examination copy or desk copy of this book, please use our online request form at www.siam.org/catalog/adopt.php. 2002 / xx + 612 pages / Softcover / ISBN-13: 978-0-898715-10-1 / ISBN-10: 0-89871-510-5 / List Price $77.50 / SIAM Member Price $54.25 / Order Code CL38
{"url":"http://ec-securehost.com/SIAM/CL38.html","timestamp":"2014-04-19T06:51:36Z","content_type":null,"content_length":"6960","record_id":"<urn:uuid:02430b57-1f6f-4d2b-89a2-e4858a5f8c62>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Twitter waterflow problem and loeb The Waterflow Problem I recently saw I Failed a Twitter Interview which features the following cute problem. Consider the following picture: Fig. 1 In Fig. 1, we have walls of different heights. Such pictures are represented by an array of integers, where the value at each index is the height of the wall. Fig. 1 is represented with an array as Now imagine it rains. How much water is going to be accumulated in puddles between walls? For example, if it rains in Fig 1, the following puddle will be formed: Fig. 2 No puddles are formed at edges of the wall, water is considered to simply run off the edge. We count volume in square blocks of 1×1. Thus, we are left with a puddle between column 1 and column 6 and the volume is 10. Write a program to return the volume for any array. My Reaction I thought, this looks like a spreadsheet problem, and closed the page, to get on with my work. Last thing I need right now is nerd sniping. A week or so later I saw A functional solution to Twitter’s waterflow problem which presented a rather concise and beautiful approach to solving the problem. I present it here, in the style that I water :: [Int] -> Int water h = sum (zipWith (-) (zipWith min (scanl1 max h) (scanr1 max h)) This is the second fastest algorithm in this page, clocking in at a mean 2.624748 ms for a random list of 10000 elements. See Benchmarks for more details. An efficient algorithm can be achieved with a trivial rewrite of Michael Kozakov’s Java solution is: {-# LANGUAGE BangPatterns #-} {-# LANGUAGE ViewPatterns #-} import qualified Data.Vector as V import Data.Vector ((!),Vector) water :: Vector Int -> Int water land = go 0 0 (V.length land - 1) 0 0 where go !volume !left !right (extend left -> leftMax) (extend right -> rightMax) | left < right = if leftMax >= rightMax then go (volume + rightMax - land!right) left (right - 1) leftMax rightMax else go (volume + leftMax - land!left) (left + 1) right leftMax rightMax | otherwise = volume extend i d = if land!i > d then land!i else d This is expectedly the fastest algorithm in this page, clocking in at a mean of 128.2953 us for a random vector of 10000 elements. But I still thought my spreadsheet idea was feasible. My approach In a similar way to Philip Nilsson, I can define the problem as it comes intuitively to me. As I saw it in my head, the problem can be broken down into “what is the volume that a given column will hold?” That can be written like this: volume[0] = 0 volume[|S|-1] = 0 volume[i] = min(left[i-1],right[i+1])−height[i] Where left and right are the peak heights to the left or right: left[0] = height[0] left[i] = max(height[i],left[i-1]) right[|S|-1] = height[|S|-1] right[i] = max(height[i],right[i+1]) That’s all. A visual example An example of i is: We spread out in both directions to find the “peak” of the columns: [S:7:S] 7 [S::S] 6 [S:5:S] [S::S] [S::S] 4 [S::S] [S::S] 3 [S::S] 2 [S::S] 2 [S::S] [S::S] 1 [S::S] How do we do that? We simply define the volume of a column to be in terms of our immediate neighbors to the left and to the right: 7 7 6 5 4 [S:3:S] 2 2 [S::S] [S:1:S] [S::S] A X B X is defined in terms of A and B. A and B are, in turn, are defined in terms of their immediate neighbors. Until we reach the ends: 7 7 [S:6:S] 5 [S::S] 4 [S::S] 3 [S::S] [S:2:S] 2 [S::S] [S::S] 1 [S::S] A X Y B The ends of the wall are the only ones who only have one side defined in terms of their single neighbor, which makes complete sense. Their volume is always 0. It’s impossible to have a puddle on the edge. A’s “right” will be defined in terms of X, and B’s “left” will be defined in terms of Y. But how does this approach avoid infinite cycles? Easy. Each column in the spreadsheet contains three values: 1. The peak to the left. 2. The peak to the right. 3. My volume. A and B below depend upon eachother, but for different slots. A depends on the value of B’s “right” peak value, and B depends on the value of A’s “left” value: 7 7 6 5 4 [S:3:S] 2 2 [S::S] 1 [S::S] A B The height of the column’s peak will be the smallest of the two peaks on either side: And then the volume of the column is simply the height of the peak minus the column’s height: Enter loeb I first heard about loeb from Dan Piponi’s From Löb’s Theorem to Spreadsheet Evaluation some years back, and ever since I’ve been wanting to use it for a real problem. It lets you easily define a spreadsheet generator by mapping over a functor containing functions. To each function in the container, the container itself is passed to that function. Here’s loeb: loeb :: Functor f => f (f b -> b) -> f b loeb x = fmap (\f -> f (loeb x)) x (Note, there is a more efficient version here.) So as described in the elaboration of how I saw the problem in my head, the solution takes the vector of numbers, generates a spreadsheet of triples, defined in terms of their neighbors—exept edges—and then simply makes a sum total of the third value, the volumes. import Control.Lens import qualified Data.Vector as V import Data.Vector ((!),Vector) water :: Vector Int -> Int water = V.sum . V.map (view _3) . loeb . V.imap cell where cell i x xs | i == 0 = edge _2 | i == V.length xs - 1 = edge _1 | otherwise = col i x xs where edge ln = set l (view l (col i x xs)) (x,x,0) where l r = cloneLens ln r col i x xs = (l,r,min l r - x) where l = neighbor _1 (-) r = neighbor _2 (+) neighbor l o = max x (view l (xs ! (i `o` 1))) It’s not the most efficient algorithm—it relies on laziness in an almost perverse way, but I like that I was able to express exactly what occured to me. And loeb is suave. It clocks in at a mean of 3.512758 ms for a vector of 10000 random elements. That’s not too bad, compared to the scanr/scanl. This is was also my first use of lens, so that was fun. The cloneLens are required because you can’t pass in an arbitrary lens and then use it both as a setter and a getter, the type becomes fixed on one or the other, making it not really a lens anymore. I find that pretty disappointing. But otherwise the lenses made the code simpler. Update with comonads & pointed lists Michael Zuser pointed out another cool insight from Comonads and reading from the future (Dan Piponi’s blog is a treasure trove!) that while loeb lets you look at the whole container, giving you absolute references, the equivalent corecursive fix (below wfix) on a comonad gives you relative references. Michael demonstrates below using Jeff Wheeler’s pointed list library and Edward Kmett’s comonad library: import Control.Comonad import Control.Lens import Data.List.PointedList (PointedList) import qualified Data.List.PointedList as PE import Data.Maybe instance Comonad PointedList where extend = PE.contextMap extract = PE._focus water :: [Int] -> Int water = view _2 . wfix . fmap go . fromMaybe (PE.singleton 0) . PE.fromList go height context = (lMax, total, rMax) get f = maybe (height, 0, height) PE._focus $ f context (prevLMax, _, _) = get PE.previous (_ , prevTotal, prevRMax) = get PE.next lMax = max height prevLMax rMax = max height prevRMax total = prevTotal + min lMax rMax - height I think if I’d’ve heard of this before, this solution would’ve come to mind instead, it seems entirely natural! Sadly, this is the slowest algorithm on the page. I’m not sure how to optimize it to be better. Update on lens Russell O’Connor gave me some hints for reducing the lens verbiage. First, eta-reducing the locally defined lens l in my code removes the need for the NoMonomorphismRestriction extension, so I’ve removed that. Second, a rank-N type can also be used, but then the type signature is rather large and I’m unable to reduce it presently without reading more of the lens library. Update on loeb On re-reading the comments for From Löb’s Theorem to Spreadsheet Evaluation I noticed Edward Kmett pointed out we can get better laziness sharing with the following definition of loeb: loeb :: Functor f => f (f b -> b) -> f b loeb x = xs where xs = fmap ($ xs) x For my particular water function, this doesn’t make much of a difference, but there is a difference. See the next section. Time travelling solution Michael Zuser demonstrated a time travelling solution based on the Tardis monad: import Control.Monad import Control.Monad.Tardis water :: [Int] -> Int water = flip evalTardis (minBound, minBound) . foldM go 0 go total height = do modifyForwards $ max height leftmax <- getPast rightmax <- getFuture modifyBackwards $ max height return $ total + min leftmax rightmax - height Sami Hangaslammi submitted this fast version (clocks in at 33.80864 us): import Data.Array waterVolume :: Array Int Int -> Int waterVolume arr = go 0 minB maxB where (minB,maxB) = bounds arr go !acc lpos rpos | lpos >= rpos = acc | leftHeight < rightHeight = segment leftHeight 1 acc lpos contLeft | otherwise = segment rightHeight (-1) acc rpos contRight leftHeight = arr ! lpos rightHeight = arr ! rpos contLeft acc' pos' = go acc' pos' rpos contRight acc' pos' = go acc' lpos pos' segment limit move !acc' !pos cont | delta <= 0 = cont acc' pos' | otherwise = segment limit move (acc' + delta) pos' cont delta = limit - arr ! pos' pos' = pos + move Changing the data structure to a vector brings this down to 26.79492 us. Here are benchmarks for all versions presented in this page. benchmarking water/10000 mean: 2.624748 ms, lb 2.619731 ms, ub 2.630364 ms, ci 0.950 std dev: 85.45235 us, lb 78.33856 us, ub 103.2333 us, ci 0.950 found 54 outliers among 1000 samples (5.4%) 51 (5.1%) high mild 3 (0.3%) high severe variance introduced by outliers: 79.838% variance is severely inflated by outliers benchmarking water_loeb/10000 mean: 3.512758 ms, lb 3.508533 ms, ub 3.517332 ms, ci 0.950 std dev: 70.77688 us, lb 65.88087 us, ub 76.97691 us, ci 0.950 found 38 outliers among 1000 samples (3.8%) 24 (2.4%) high mild 14 (1.4%) high severe variance introduced by outliers: 59.949% variance is severely inflated by outliers benchmarking water_loeb'/10000 mean: 3.511872 ms, lb 3.507492 ms, ub 3.516778 ms, ci 0.950 std dev: 74.34813 us, lb 67.99334 us, ub 84.56183 us, ci 0.950 found 43 outliers among 1000 samples (4.3%) 34 (3.4%) high mild 9 (0.9%) high severe variance introduced by outliers: 62.285% variance is severely inflated by outliers benchmarking water_onepass/10000 mean: 128.2953 us, lb 128.1194 us, ub 128.4774 us, ci 0.950 std dev: 2.864153 us, lb 2.713756 us, ub 3.043611 us, ci 0.950 found 18 outliers among 1000 samples (1.8%) 17 (1.7%) high mild 1 (0.1%) high severe variance introduced by outliers: 64.826% variance is severely inflated by outliers benchmarking water_array/10000 mean: 33.80864 us, lb 33.76067 us, ub 33.86597 us, ci 0.950 std dev: 844.9932 ns, lb 731.3158 ns, ub 1.218807 us, ci 0.950 found 29 outliers among 1000 samples (2.9%) 27 (2.7%) high mild 2 (0.2%) high severe variance introduced by outliers: 69.836% variance is severely inflated by outliers benchmarking water_vector/10000 mean: 26.79492 us, lb 26.75906 us, ub 26.83274 us, ci 0.950 std dev: 595.1865 ns, lb 559.8929 ns, ub 652.5076 ns, ci 0.950 found 21 outliers among 1000 samples (2.1%) 18 (1.8%) high mild 3 (0.3%) high severe variance introduced by outliers: 64.525% variance is severely inflated by outliers benchmarking water_comonad/10000 collecting 1000 samples, 1 iterations each, in estimated 28315.44 s benchmarking water_tardis/10000 collecting 1000 samples, 1 iterations each, in estimated 28.98788 s I never bothered waiting around for the comonad or the time traveller ones to complete. They’re slow, let’s say that. Here’s the source code to the benchmarks, let me know if I made a mistake in the benchmark setup.
{"url":"http://chrisdone.com/posts/twitter-problem-loeb","timestamp":"2014-04-18T18:33:01Z","content_type":null,"content_length":"28739","record_id":"<urn:uuid:d0ed19a2-10a1-47d8-bdb2-343fd4cbe719>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
A Primer on Regression Methods for Decoding cis-Regulatory Logic Citation: Das D, Pellegrini M, Gray JW (2009) A Primer on Regression Methods for Decoding cis-Regulatory Logic. PLoS Comput Biol 5(1): e1000269. doi:10.1371/journal.pcbi.1000269 Editor: Fran Lewitter, Whitehead Institute, United States of America Published: January 30, 2009 This is an open-access article distributed under the terms of the Creative Commons Public Domain declaration which stipulates that, once placed in the public domain, this work may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. Funding: DD and JWG were supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under contract DE-AC02-05CH11231, and by the National Institutes of Health, National Cancer Institute grant U54 CA 112970 to JWG. Competing interests: The authors have declared that no competing interests exist. Importance of cis-Regulatory Elements The rapidly emerging field of systems biology is helping us to understand the molecular determinants of phenotype on a genomic scale [1]. Cis-regulatory elements are major sequence-based determinants of biological processes in cells and tissues [2]. For instance, during transcriptional regulation, transcription factors (TFs) bind to very specific regions on the promoter DNA [2],[3] and recruit the basal transcriptional machinery, which ultimately initiates mRNA transcription (Figure 1A). Figure 1. Basic Tenets of Modeling cis-Regulation Using a Regression Approach. (A) A schematic of transcriptional regulation is shown. Motifs 1, 2, and 3 are bound by their respective TFs and thus are active, while motif 4 is not. Furthermore, TFs 1 and 2 are shown to be interacting. (B) Box plots of the logarithm of expression ratio (E[g]/E[gC]) of genes containing the MCB element ACGCGT (marked as >0, group 1) and genes that do not contain the element (marked as 0, group 2) are shown for the alpha arrest experiment [8] of yeast cell-cycle. The ratio E[g]/E[gC] is the expression of the gene relative to its average across all time points. During 21 min (G1/S phase), there is a statistically significant difference (p<1.0e-16, t-test) in expression level between the genes in groups 1 and 2. Average log[2](E[g]/E[gC]) of these two groups is 0.27 and −0.02, respectively. During the 35 min (G2/M phase), there is no such association (p = 0.02, average log[2](E[g]/E[gC]) = 0.04 vs 0.01). This type of approach is elucidated in detail in [57]. (C) The same data as in (B) is shown, except that the motif counts are no longer binary. There is a statistically significant association between the motif count and expression during the 21 min (p = 3.3e-12 (F-test), y = −0.02+0.28x), but not during the 35 min (p = 0.006, y = 0.01+0.04x) time point. Each point in the figure represents a gene, characterized by a count of ACGCGT in its promoter (x-axis) and log[2](expression ratio) (y-axis). Learning cis-Regulatory Elements from Omics Data A vast amount of work over the past decade has shown that omics data can be used to learn cis-regulatory logic on a genome-wide scale [4]–[6]—in particular, by integrating sequence data with mRNA expression profiles. The most popular approach has been to identify over-represented motifs in promoters of genes that are coexpressed [4],[7],[8]. Though widely used, such an approach can be limiting for a variety of reasons. First, the combinatorial nature of gene regulation is difficult to explicitly model in this framework. Moreover, in many applications of this approach, expression data from multiple conditions are necessary to obtain reliable predictions. This can potentially limit the use of this method to only large data sets [9]. Although these methods can be adapted to analyze mRNA expression data from a pair of biological conditions, such comparisons are often confounded by the fact that primary and secondary response genes are clustered together—whereas only the primary response genes are expected to contain the functional motifs [10]. A set of approaches based on regression has been developed to overcome the above limitations [11]–[32]. These approaches have their foundations in certain biophysical aspects of gene regulation [26], [33]–[35]. That is, the models are motivated by the expected transcriptional response of genes due to the binding of TFs to their promoters. While such methods have gathered popularity in the computational domain, they remain largely obscure to the broader biology community. The purpose of this tutorial is to bridge this gap. We will focus on transcriptional regulation to introduce the concepts. However, these techniques may be applied to other regulatory processes. We will consider only eukaryotes in this tutorial. Regression Methods for Learning the Active cis-Regulatory Elements What is a Regression Method? A regression method is essentially a curve-fitting approach. When there is one observed variable (y-axis) and one predictor variable (x-axis), regression consists of drawing a line or a curve that best fits the data. The challenge arises when there are multiple candidate predictors, among which only a selected few are relevant. This is the case for cis-regulation, where relatively few cis -elements are differentially activated between two conditions while the number of candidate elements is large [2],[5]. Regression methods provide efficient ways to select this set of active elements via a curve-fitting exercise. How To Learn Which cis-Regulatory Elements Are Active Let us consider the case of a single cis-element, a DNA word. Before we introduce the regression method, let us first proceed by dividing the genes into two groups, according to whether a gene has the word in its promoter or not. If under a biological condition the expression levels of genes in these two groups are significantly different from each other, it implies that the cis-element is most likely bound by its cognate TF, which is regulating its target genes. In other words, the cis-element is active. However, if there is no significant difference in expression between these two groups, then, analogously, the cis-element is likely inactive. Furthermore, if the genes with the cis-motif have higher expression levels on average than those without the motif, then the TF is an activator, and in the reverse situation an inhibitor. The case of the MCB element, a G1/S regulator of the yeast cell-cycle [8], is illustrated in Figure 1B. We observe that there is indeed a statistically significant association between the presence of the MCB element and mRNA expression in the G1/S phase of the cell-cycle (p<1.0e-16), but not in the G2/M phase (p = 0.02). Furthermore, this analysis indicates that the MCB element has an activating role in the G1/S phase, as expected [8]. A regression approach is a generalized version of the method described above. Here, the data is not binary any more. Instead, we plot the actual motif counts against the mRNA levels for all genes genome-wide (Figure 1C). To examine if there is any association between the occurrence of the MCB element and mRNA expression, we fit a straight line through these data points. Next, we check if the observed linear fit to the data could be obtained by random chance. If the fit is statistically significant, then the motif is considered active, just as in the binary data above, and inactive otherwise. Furthermore, if the slope of the fitted line is positive, then the element is an activator—a high number of elements are indicative of high expression on average, while fewer or no copies imply low expression. For the MCB element (Figure 1C), we notice that the fit is significant in the G1/S phase, but not in the G2/M phase, as expected of a G1/S-specific element. The positive slope of the line indicates that the MCB element is an activator. The best fit shown in Figure 1C leads to a direct quantitative relation between the logarithm of observed expression E[g] and motif count n[g] of any gene g [11]:(1) where C indicates a reference condition. The parameters a and b, the intercept and slope of the line, respectively, are estimated from the input data via a least squares fit. a and b are constant across all genes. We can use Equation 1 to estimate how much of the mRNA expression levels are explained by this motif. We note that expression data from one experimental condition and one control condition are used in this analysis. How To Learn Multiple cis-Regulatory Elements Under any specific condition, multiple cis-elements are usually active [2],[36],[37]. Moreover, cis-regulation has been shown to be inherently combinatorial. Thus, often distinct combinations of such elements regulate the genes. To learn which specific combinations are active out of the many possible candidate elements, the simplest strategy is to repeat the above curve-fitting procedure for each such element. The elements that meet a significance threshold are considered to be active. However, this simple approach does not account for combinatorial regulation. Namely, it does not specify which particular elements act collectively to regulate gene expression. To overcome this limitation, we build a multivariate model (Equation 2 below with d[12] = 0). This involves two steps: (a) feature selection, i.e., identifying which specific elements are active, and (b) model building, i.e., specifying the regression model involving these elements. These two steps may be executed simultaneously [11]. Alternatively, one can first select the cis-element features, and then build a regression model using these features [13]. A representative flowchart for multivariate modeling is shown in Figure 2. The elements that appear in a multivariate model are, then, hypothesized to be functional [11],[13]. Figure 2. A Flow Chart for Modeling Combinatorial cis-Regulation Using Regression Methods. The steps are shown for constructing a model with linear functions; however, with some small modifications, they are applicable to nonlinear functions as well. P[motif] indicates the p-value of the association of the best motif with mRNA expression. P[motif]>p[0] is one possible termination condition. Other alternative strategies can also be used instead. In this example, feature selection and model building are done simultaneously. An additional complexity is that functional interactions among TFs are often essential to transcriptional control [2]. This is especially true in higher organisms. In regression models, we introduce the interactions via a product of word counts. This reflects the fact that a pair of elements has a stronger effect than the sum of the elements in the pair. The strategy for including these terms is similar to the methodology described above [12]. For example, to describe the three motifs and interactions between motifs 1 and 2 in Figure 1A, the equation would be [12]:(2) n[ig] is the count of motif i for gene g. The parameters a, b[1], b[2], b[3], and d[12] are learnt from the data, again using a least squares fit. d[12]>0 implies a synergistic interaction, while d [12]<0, a competitive interaction. How To Model Regulation by Degenerate Motifs cis-Regulatory elements are often not simple words, especially in higher eukaryotes. Instead, the cis-elements bound by a specific TF may have small differences in their sequences in different promoters [4]–[6]. This variability, referred to as degeneracy of the motifs, is often represented by a position weight matrix (PWM) [3],[5]. PWMs are probabilistic representations of cis-motifs ( Figure 3). Figure 3. Position Weight Matrix (PWM) Logo for E2F-1. The sequence logo for the PWM of E2F-1, a key transcription factor for regulating the mammalian cell-cycle, is shown (http://jaspar.genereg.net/). The figure shows the bases that may occur at each position of this 8-nucleotide long motif. The height of each base quantifies the bits of information content, which is related to the probability of its occurrence at that position [3]. For example, there is a 100% chance of observing a T at position 1, while at position 8, a 90% chance of observing a C, and a 10% of G. To use PWMs in regression methods, we would first score each promoter sequence against each PWM. The probabilities of each base at each position are used to compute the scores. These scores are related to the binding affinity of a TF for the DNA sequence [3],[35],[38]. There are multiple scoring schemes available 13,18,22,33 (see also [3],[35],[39]), but often the maximum score of a PWM for each promoter is used. We then use the same regression methods described above to construct a model, but with PWM scores instead of word counts. JASPAR [40] and TRANSFAC [41],[42] are among the most popular databases of PWMs. However, PWMs may also be generated using de novo motif discovery tools [4],[13],[43]. Nonlinear models. Although one can use linear methods with PWM scores [13], such methods are not ideal since the relation between motif scores and gene expression is not always linear. Furthermore, previous studies indicate that linear methods may not be optimal for modeling degenerate motifs when interactions are included [11]. This is a significant limitation since interactions among degenerate motifs are pervasive in mammalian transcriptional regulation [2],[5]. Instead, based on biophysical models, we expect the transcriptional response to be sigmoidal [44],[45] (Figure 4A). To account for such complexities, nonlinear methods have been developed. We model the expression ratios in terms of sums of sigmoidal functions of PWM scores [28],[31], or, alternatively, their variants, linear splines [15],[22]. Linear splines are related to sigmoidal functions by a logarithmic transformation (Figure 4B). They allow more efficient modeling when data is sparse since they require fewer parameters, while sigmoidal functions yield a more accurate model when sufficient data is available. The modeling procedure is similar to multivariate linear regression (see above). For the example shown in Figure 4C, we obtain an equation of the form:(3) Here, s denotes the PWM score. is a linear spline function or a sigmoidal function in s. Because of the increased number of fitting parameters, these more complex models require that we control for overfitting of the data. Although the implementation details are beyond the scope of this tutorial, they involve various forms of cross-validation (see the references in Table 1). These overfitting effects can also be significant in multivariate linear models with interactions. Because PWM scores are related to binding affinities, and sigmoidal functions model the essential biophysics of transcriptional regulation, these nonlinear approaches have strong biophysical underpinnings [26],[33],[34],[46]. Figure 4. Nonlinear Regression Models of cis-Regulation. (A) mRNA expression (E[g]) as a function of TF binding free energy often has a sigmoidal pattern. There is an activation threshold, below which the transcriptional response is flat. Above the threshold, it grows exponentially, and finally saturates. For an inhibitory pattern, the curve is inverted along the y-axis. PWM scores are proportional to the binding free energies. (B) A logarithmic transformation of the sigmoidal function leads to a sum of linear splines. Each linear spline function has the shape of a hockey stick: It is zero below (or above) a threshold, called knot, and rises linearly above (or below) it. The smoothness of the transition from the flat part to the exponential part of the curve is not modeled in linear splines. A linear model is realized if the activation threshold is ignored, i.e., the sigmoidal function is replaced by an exponential function in (A). In a linear spline approach, the target determination threshold is set to the knot [22] or the gene activation threshold. While for the sigmoidal function, the threshold is typically set by the point at which the curve reaches half its maximal value [28]. (C) A model comprising linear splines for three functional motifs and one interacting motif pair is shown. Table 1. Regression Tools for cis-Regulatory Element Identification Currently Reported in the Literature. How To Identify Target Genes In a regression method, the input is a candidate motif. Thus, once we have identified the active motif, we have an additional task of determining which genes are targets of the cognate TF. Thus, in contrast to coexpression-based approaches where we assume that groups of co-expressed genes are co-regulated, co-regulation of genes is inferred in this approach a posteriori in regression methods. In the case of DNA words, it may seem that all promoters containing an instance of the word will always be bound by its partner TF. However, such a word may represent only the core of the motif. Thus, to discriminate the true targets, additional sequence information flanking the core motif may be essential [17],[32]. The challenge with the PWM scores is that they are generally continuous and nonzero (on a scale from zero to one, zero indicating that the motif is absent). Thus, most promoters often contain a low-scoring instance of each PWM. This is especially true for motifs of high degeneracy, as in humans [5]. Nonlinear regression methods provide a straightforward solution to select which instances of the motifs are active, since they allow one to define a cutoff threshold [22],[28] for each motif—promoters scoring above the threshold are then the targets, while those below are not (Figure 4B). There are alternative strategies to target determination, which are either more complex [23],[24],[31] or require information from ChIP-chip data [16],[25]. How To Assess the Statistical Significance of the Fit A popular metric to assess the quality of a regression model is how much of the variation in the expression data it can explain. This is parameterized as R^2, sometimes referred to as the percent reduction in variance [11]:(4) where V[original] is the variance in the input expression data, and V[residual] is the variance of the differences between the input expression data and the fitted model. V[residual] represents the unexplained part of the variation in expression data. R^2 is directly related to the F-statistic [47], which is often used to evaluate the significance of the fit. Validity of the Premises A large number of studies have shown that the motifs identified by regression methods are indeed functional motifs. The organisms where these methods have been applied include yeast [11]–[13], [15]– [18], [20], [21], [23], [25], [28]–[30],[32],[48], C. Elegans [32], Drosophila [14],[31], and human [22],[30]. Some of this work has been previously reviewed [26],[34],[49], and we refer to these publications for details. Which Kinds of Problems Can These Methods Be Applied to? In this tutorial, we have focused on transcriptional regulation. However, regression methods may also be applied to other stages of gene regulation that are mediated by cis-elements. Regression approaches have been used to model chromatin remodeling [28], 3′ UTR mediated mRNA stability [50], and the regulation of alternative splicing of pre-mRNAs[27]. These methods can also be applied to DNA binding data, such as those generated by ChIP-chip [16],[51], DamID [14], or PBM [21],[52] experiments. In these cases, the binding ratios from TF binding profiles may be used in place of either expression ratios or motif scores, depending on the application. Available Software Based on Regression Methods We have summarized the currently available software based on regression along with their key features in Table 1. The basic aspects of a regression method can be easily implemented in R or MATLAB. In this tutorial, we have described the basic aspects of regression methods. These are complementary to alternative approaches for motif discovery, such as comparative genomics [53]–[55] or motif over-representation methods [4],[56]. In particular, regression methods are optimal for evaluating the activity of cis-elements among a set of candidate elements. They are better suited for modeling combinatorial regulation and nonlinear responses and are more closely tied to the biophysical models of transcriptional regulation. With some modifications, regression methods can also be adapted for de novo motif discovery [21],[25],[50]. Finally, although most regression methods are used to model the observed changes in gene expression between a pair of conditions, recently this methodology has been extended to include information from multiple conditions as well [29]. We thank Sam Ng for a careful reading of the manuscript. Note Added in Proof During the preparation of this manuscript, a new regression approach based on the Fast Orthogonal Search (FOS) method [58] was published to identify active cis-regulatory elements. As new algorithms get published, we will continue to maintain an updated version of Table 1 on our Web site http://vision.lbl.gov/People/ddas/RegressionPrimer/.
{"url":"http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000269&imageURI=info%3Adoi%2F10.1371%2Fjournal.pcbi.1000269.g002","timestamp":"2014-04-25T09:48:38Z","content_type":null,"content_length":"167064","record_id":"<urn:uuid:c018093a-81e1-493a-aef5-4ff90d84b079>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Tools Introduction: Software like Mathematica® and Math CAD® are powerful computational and programming tools. In this assignment you will become familiar with the basic features of Math CAD®, a program that is particularly attractive to secondary school mathematics teachers because the text editing features resemble standard mathematical form rather than standard computer form. Before you get started: • Review Getting Started Under the Help Menu from within Mathcad • Review the Topics Under the Help Menu • Use Balloon Help to Identify all icons on the screen. Click on the top icon to get additional palates. Assignment: Demonstrate competence using MathCAD and Geometer's Sketchpad by providing a printout of the following. Use the text tool to identify your work MATHCAD (Mathsoft Corporation) | (sample files) (1) Calculations: Five different calculations using a variety of functions. (2) Variables: Write three or more different equations relevant to your subject such as those shown below or those on the Constants and Equations Page. Solve each equation for one or more specific │Biology │Chemistry │Physics │ │ │ │ │ │Logistic Growth Curve │Force between charges │Relativity │ │ │ │ │ │Where I= rate of increase │F=Force between two point charges│E=energy │ │K=carrying capacity │k=constant of proportionality │m=mass │ │r=intrinsic rate of increase=(b-d)=births-deaths │Q1 and Q2 =charges │c=speed of light│ │N=number of individuals in the population │V=potential difference │ │ (3) Iterative Calculations: Solve an equation for a range of variables. (4) Graphs: Create a graph of a function relative to the subject you teach. (5) Integration/Summation: Use the integral and summation functions to solve problems of your choice. (6) Implications for Teaching: (a) What are the implications of tools such as Mathematica and MathCAD for the teaching and learning of mathematics? What are the issues, problems, benefits, challenges, and dangers? (b) Should mathematics teachers be required to have mastery of such programs? Why or why not? (c) How should such software be incorporated into the mathematics or science curriculum? GEOMETER'S SKETCHPAD Key Curriculum Press. Download Trial Version of Geometer's Sketchpad. (7) Solving real-life problems: (a) Illustrate the concept of Pi using Geometer's sketchpad. (b) Determining the epicenter of an earthquake: Knowing that P waves travel approximately 740 km/minute faster than S waves, determine the epicenter of an earthquake that produced the seismograms shown. (Please review the online simulation first). □ Copy a map of showing three of the cities into Geometer's Sketchpad, and use the circle tool to draw circles around each with a radius equivalent to the distance to the epicenter. □ Download an Excel file to calculate distances between the epicenter and various recording stations. □ You will need to develop an appropriate scale for your map using air travel distance values. □ The location where all three circles meet is the epicenter. Include a screen shot showing how you determined the epicenter using Geometer's Sketchpad. (c) Using Geometer's sketchpad develop a lesson to show how to calculate the height of a building or the diameter of the moon. The average distance to the moon is 384,402 km. Include graphics in your drawing. (d) Using the Sketchpad, develop a lesson to illustrate one or more theorems. Retrieve a map of the appropriate region of the world from an online map service. (Pacific ocean maps courtesy of CIA Factbook and USGS)
{"url":"http://www.csun.edu/science/courses/646/assignments/old/assign_math_tools.html","timestamp":"2014-04-17T15:51:11Z","content_type":null,"content_length":"18614","record_id":"<urn:uuid:aaa1fe96-92dc-486d-b0d6-fdbe3acc736c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Untunable Metropolis - Statistical Modeling, Causal Inference, and Social Science Untunable Metropolis Posted by Andrew on 31 July 2011, 9:00 am Michael Margolis writes: What are we to make of it when a Metropolis-Hastings step just won’t tune? That is, the acceptance rate is zero at expected-jump-size X, and way above 1/2 at X-exp(-16) (i.e., machine precision I’ve solved my practical problem by writing that I would have liked to include results from a diffuse prior, but couldn’t. But I’m bothered by the poverty of my intuition. And since everything I’ve read says this is an issue of efficiency, rather than accuracy, I wonder if I could solve it just by running massive and heavily thinned chains. My reply: I can’t see how this could happen in a well-specified problem! I suspect it’s a bug. Otherwise try rescaling your variables so that your parameters will have values on the order of magnitude of 1. To which Margolis responded: I hardly wrote any of the code, so I can’t speak to the bug question — it’s binomial kriging from the R package geoRglm. And there are no covariates to scale — just the zero and one of the binomial distribution. But I will look into rescaling the spatial units forthwith. And he tried it and it worked! P.S. Don’t forget the folk theorem of statistical computing. Recent Comments • Anonymous on One-tailed or two-tailed? • Rahul on One-tailed or two-tailed? • Jonathan (another one) on One-tailed or two-tailed? • Rahul on One-tailed or two-tailed? • question on One-tailed or two-tailed? • Anonymous on One-tailed or two-tailed? Filed under Bayesian Statistics, Statistical computing Comments are closed | Permalink
{"url":"http://andrewgelman.com/2011/07/31/untunable_metro/","timestamp":"2014-04-19T17:01:48Z","content_type":null,"content_length":"19991","record_id":"<urn:uuid:3c60819f-e98f-44e8-8a37-9f991efcf05b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Electromagnetic phenomena From Wikisource Electromagnetic phenomena in a system moving with any velocity smaller than that of light (1904) Electromagnetic phenomena in a system moving with any velocity smaller than that of light By Prof. H. A. Lorentz § 1. The problem of determining the influence exerted on electric and optical phenomena by a translation, such as all systems have in virtue of the Earth's annual motion, admits of a comparatively simple solution, so long as only those terms need be taken into account, which are proportional to the first power of the ratio between the velocity of translation w and the velocity of light c. Cases in which quantities of the second order, i.e. of the order $\frac{w^{2}}{c^{2}}$, may be perceptible, present more difficulties. The first example of this kind is Michelson's well known interference-experiment, the negative result of which has led FitzGerald and myself to the conclusion that the dimensions of solid bodies are slightly altered by their motion through the aether. Some new experiments in which a second order effect was sought for have recently been published. Rayleigh^[1] and Brace^[2] have examined the question whether the Earth's motion may cause a body to become doubly refracting; at first sight this might be expected, if the just mentioned change of dimensions is admitted. Both physicists have however come to a negative result. In the second place Trouton and Noble^[3] have endeavoured to detect a turning couple acting on a charged condenser, whose plates make a certain angle with the direction of translation. The theory of electrons, unless it be modified by some new hypothesis, would undoubtedly require the existence of such a couple. In order to see this, it will suffice to consider a condenser with aether as dielectricum. It may be shown that in every electrostatic system, moving with a velocity $\mathfrak{w}$,^[4] there is a certain amount of electromagnetic momentum. If we represent this, in direction and magnitude, by a vector $\mathfrak{G}$, the couple in question will be determined by the vector product^[5] $\left[\mathfrak{G}.\mathfrak{w}\right]$. (1) Now, if the axis of z is chosen perpendicular to the condenser plates, the velocity w having any direction we like, and if U is the energy of the condenser, calculated on the ordinary way, the components of $\mathfrak{G}$ are given^[6] by the following formulae, which are exact up to the first order: $\mathfrak{G}_{x}=\frac{2U}{c^{2}}\mathfrak{w}_{x},\quad \mathfrak{G}_{y}=\frac{2U}{c^{2}}\mathfrak{w}_{y},\quad \mathfrak{G}_{z}=0$. Substituting these values in (1), we get for the components of the couple, up to terms of the second order, These expressions show that the axis of the couple lies in the plane of the plates, perpendicular to the translation. If α is the angle between the velocity and the normal to the plates, the moment of the couple will be $\frac{U}{c^{2}}w^{2}\sin\ 2\alpha$; it tends to turn the condenser into such a position that the plates are parallel to the Earth's motion. In the apparatus of Trouton and Noble the condenser was fixed to the beam of a torsion-balance, sufficiently delicate to be deflected by a couple of the above order of magnitude. No effect could however be observed. § 2. The experiment of which I have spoken are not the only reason for which a new examination of the problems connected with the motion of the Earth is desirable. Poincaré^[7] has objected to the existing theory of electric and optical phenomena in moving bodies that, in order to explain Michelsons's negative result, the introduction of a new hypothesis has been required, and that the same necessity may occur each time new facts will be brought to light. Surely, this course of inventing special hypothesis for each new experimental result is somewhat artificial. It would be more satisfactory, if it were possible to show, by means of certain fundamental assumptions, and without neglecting terms of one order of magnitude or another, that many electromagnetic actions are entirely independent of the motion of the system. Some years ago, I have already sought to frame a theory of this kind.^[8] I believe now to be able to treat the subject with a better result. The only restriction as regards the velocity will be that it be smaller than that of light. § 3. I shall start from the fundamental equations of the theory of electrons.^[9] Let $\mathfrak{d}$ be the dielectric displacement in the aether, $\mathfrak{h}$ the magnetic force, $\varrho$ the volume-density of the charge of an electron, $\mathfrak{v}$ the velocity of a point of such a particle, and $\mathfrak{f}$ the electric force, i.e. the force, reckoned per unit charge, which is exerted by the aether on a volume-element of an electron. Then, if we use a fixed system of coordinates, $\begin{cases} div\ \mathfrak{d}=\varrho,\quad div\ \mathfrak{h}=0,\\ rot\ \mathfrak{h}=\frac{1}{c}\left(\dot{\mathfrak{d}}+\varrho\mathfrak{v}\right),\\ rot\ \mathfrak{d}=-\frac{1}{c}\dot{\ (2) mathfrak{h}},\\ \mathfrak{f}=\mathfrak{d}+\frac{1}{c}\left[\mathfrak{v.h}\right].\end{cases}$. I shall now suppose that the system as a whole moves in the direction of x with a constant velocity w, and I shall denote by $\mathfrak{u}$ any velocity a point of an electron may have in addition to this, so that If the equations (2) are at the same time referred to axes moving with the system, they become $div\ \mathfrak{d}=\varrho,\quad div\ \mathfrak{h}=0$, $\frac{\partial\mathfrak{h}_{z}}{\partial y}-\frac{\partial\mathfrak{h}_{y}}{\partial z}=\frac{1}{c}\left(\frac{\partial}{\partial t}-w\frac{\partial}{\partial x}\right)\mathfrak{d}_{x}+\frac{1}{c}\ $\frac{\partial\mathfrak{h}_{x}}{\partial z}-\frac{\partial\mathfrak{h}_{z}}{\partial x}=\frac{1}{c}\left(\frac{\partial}{\partial t}-w\frac{\partial}{\partial x}\right)\mathfrak{d}_{y}+\frac{1}{c}\ $\frac{\partial\mathfrak{h}_{y}}{\partial x}-\frac{\partial\mathfrak{h}_{x}}{\partial y}=\frac{1}{c}\left(\frac{\partial}{\partial t}-w\frac{\partial}{\partial x}\right)\mathfrak{d}_{z}+\frac{1}{c}\ $\frac{\partial\mathfrak{d}_{z}}{\partial y}-\frac{\partial\mathfrak{d}_{y}}{\partial z}=-\frac{1}{c}\left(\frac{\partial}{\partial t}-w\frac{\partial}{\partial x}\right)\mathfrak{h}_{x}$, $\frac{\partial\mathfrak{d}_{x}}{\partial z}-\frac{\partial\mathfrak{d}_{z}}{\partial x}=-\frac{1}{c}\left(\frac{\partial}{\partial t}-w\frac{\partial}{\partial x}\right)\mathfrak{h}_{y}$, $\frac{\partial\mathfrak{d}_{y}}{\partial x}-\frac{\partial\mathfrak{d}_{x}}{\partial y}=-\frac{1}{c}\left(\frac{\partial}{\partial t}-w\frac{\partial}{\partial x}\right)\mathfrak{h}_{z}$, § 4. We shall further transform these formulae by a change of variables. Putting $\frac{c^{2}}{c^{2}-w^{2}}=k^{2}$, (3) and understanding by l another numerical quantity, to be determined further on, I take as new independent variables $x'=klx,\quad y'=ly,\quad z'=lz$, (4) $t'=\frac{l}{k}t-kl\frac{w}{c^{2}}x$, (5) and I define two new vectors $\mathfrak{d}'$ and $\mathfrak{h}'$ by the formulae for which, on account of (3), we may also write $\begin{cases} \mathfrak{d}_{x}=l^{2}\mathfrak{d}_{x}^{'},\quad\mathfrak{d}_{y}=kl^{2}\left(\mathfrak{d}_{y}^{'}+\frac{w}{c}\mathfrak{h}_{z}^{'}\right),\quad\mathfrak{d}_{z}=kl^{2}\left(\mathfrak {d}_{z}^{'}-\frac{w}{c}\mathfrak{h}_{y}^{\mathfrak{'}}\right),\\ \mathfrak{h}_{x}=l^{2}\mathfrak{h}_{x}^{'},\quad\mathfrak{h}_{y}=kl^{2}\left(\mathfrak{h}_{y}^{'}-\frac{w}{c}\mathfrak{d}_{z}^{'}\ (6) As to the coefficient , it is to be considered as a function of , whose value is 1 for , and which, for small values of , differs from unity no more than by an amount of the second order. The variable t' may be called the local time; indeed, for k=1, l=1 it becomes identical with what I have formerly understood by this name. If, finally, we put $\frac{1}{kl^{3}}\varrho=\varrho'$, (7) $k^{2}\mathfrak{u}_{x}=\mathfrak{u}_{x}^{'},\quad k\mathfrak{u}_{y}=\mathfrak{u}_{y}^{'},\quad k\mathfrak{u}_{z}=\mathfrak{u}_{z}^{'}$, (8) these latter quantities being considered as the components of a new vector $\mathfrak{u}'$, the equations take the following form: \left.\begin{align} & div'\ \mathfrak{d}'=\left(1-\frac{wu_{x}'}{c^{2}}\right)\varrho',\quad div'\ \mathfrak{h}'=0,\\ & rot'\ \mathfrak{h'}=\frac{1}{c}\left(\frac{\partial\mathfrak{d}'}{\partial (9) t'}+\varrho'\mathfrak{u}\right),\\ & rot'\ \mathfrak{d}'=-\frac{1}{c}\frac{\partial\mathfrak{h}'}{\partial t'}, \end{align}\right\} \left.\begin{align} & \mathfrak{f}_{x}=l^{2}\mathfrak{d}_{x}^{'}+l^{2}\frac{1}{c}\left(\mathfrak{u}_{y}^{'}\mathfrak{h}_{z}^{'}-\mathfrak{u}_{z}^{'}\mathfrak{h}_{y}^{'}\right)+l^{2}\frac{w}{c^ {2}}\left(\mathfrak{u}_{y}^{'}\mathfrak{d}_{y}^{'}-\mathfrak{u}_{z}^{'}\mathfrak{d}_{z}^{'}\right),\\ & \mathfrak{f}_{y}=\frac{l}{k}^{2}\mathfrak{d}_{y}^{'}+\frac{l}{k}^{2}\frac{1}{c}\left(\ mathfrak{u}_{z}^{'}\mathfrak{h}_{x}^{'}-\mathfrak{u}_{x}^{'}\mathfrak{h}_{z}^{'}\right)-\frac{l}{k}^{2}\frac{w}{c^{2}}\mathfrak{u}_{x}^{'}\mathfrak{d}_{y}^{'},\\ & \mathfrak{f}_{z}=\frac{l}{k}^ (10) {'}\mathfrak{d}_{z}^{'}. \end{align}\right\} The meaning of the symbols div' and rot' in (9) is similar to that of div and rot in (2); only, the differentiations with respect to x, y, z are to be replaced by the corresponding ones with respect to x', y', z' . § 5. The equations (9) lead to the conclusion that the vectors $\mathfrak{d}'$ and $\mathfrak{h}'$ may be represented by means of a scalar potential φ and a vector potential $\mathfrak{a}'$. These potentials satisfy the equations 1) $\triangle'\varphi'-\frac{1}{c^{2}}\frac{\partial^{2}\varphi'}{\partial t'^{2}}=-\varrho'$, (11) $\triangle'\mathfrak{a}'-\frac{1}{c^{2}}\frac{\partial^{2}\mathfrak{a}'}{\partial t'^{2}}=-\frac{1}{c^{2}}\varrho'\mathfrak{u}'$. (12) and in terms of them $\mathfrak{d}'$ and $\mathfrak{h}'$ are given by 1) M.E; ?? 4 and 10 $\mathfrak{d}'=-\frac{1}{c}\frac{\partial\mathfrak{a}'}{\partial t'}-grad'\ \varphi'+\frac{w}{c}\ grad'\ \mathfrak{a}_{x}^{'}$, (13) $\mathfrak{h}'=rot'\mathfrak{a}'$. (14) The symbol $\triangle'$ is an abbreviation for $\frac{\partial^{2}}{\partial x'^{2}}+\frac{\partial^{2}}{\partial y'^{2}}+\frac{\partial^{2}}{\partial z'^{2}}$ and $grad'\ \varphi'$ denotes for a vector whose components are $\frac{\partial\varphi'}{\partial x'},\ \frac{\partial\varphi'}{\partial y'},\ \frac{\partial\varphi'}{\partial z'}$. The expresson $grad'\ \mathfrak{a}_{x}^{'}$ has a similar meaning. In order to obtain the solution of (11) and (12) in a simple form, we may take x', y', z' as the coordinates of a point P' in a space S', and ascribe to this point, for each value of t' , the values of $\varrho',\ \mathfrak{u}',\ \varphi',\ \mathfrak{a}'$, belonging to the corresponding point P (x, y, z) of the electromagnetic system. For a definite value t' of the fourth independent variable, the potentials φ' and $\mathfrak{a}'$ in the point of the system or in the corresponding point P' of the space S' , are given by^[10] $\varphi'=\frac{1}{4\pi}\int\frac{\left[\varrho'\right]}{r'}dS'$ (15) $\mathfrak{a}'=\frac{1}{4\pi c}\int\frac{\left[\varrho'\mathfrak{u}'\right]}{r'}dS'$. (16) Here dS' is an element of the space S', r' its distance form P' and the brackets serve to denote the quantity $\varrho'$ and the vector $\varphi',\ \mathfrak{u}'$, such as they are in the element dS' , for the value $t'-\frac{r'}{c}$ of the fourth independent variable. Instead of (15) and (16) we may also write, taking into account (4) and (7), $\varphi'=\frac{1}{4\pi}\int\frac{\left[\varrho\right]}{r}dS$ (17) $\mathfrak{a}'=\frac{1}{4\pi c}\int\frac{\left[\varrho\mathfrak{u}'\right]}{r}dS$ (18) the integrations now extending over the electromagnetic system itself. It should be kept in mind that in these formulae r' does not denote the distance between the element dS and the point (x, y, z) for which the calculation is to be performed. If the element lies at the point (x[1],y[1]},z[1]), we must take It is also to be remembered that, if we wish to determine φ and $\mathfrak{a}'$ for the instant, at which the local time in P is t' , we must take $\varrho$ and the vector $\varphi,\ \mathfrak{u}'$, such as they are in the element dS at the instant at which the local time of that element is $t'-\frac{r'}{c}$. § 6. It will suffice for our purpose to consider two special cases. The first is that of an electrostatic system, i.e. a system having no other motion but the translation with the velocity w. In this case $\mathfrak{u}'=0$, and therefore, by (12), $\mathfrak{a}'=0$. Also, φ' is independent of t' , so that the equations (11), (13) and (14) reduce to $\begin{cases} \triangle'\varphi'=-\varrho',\\ \mathfrak{d}'=-grad'\ \varphi',\ \mathfrak{h}'=0.\end{cases}$ (19) After having determined the vector b' by means of these equations, we know also the electric force acting on electrons that belong to the system. For these the formulae (10) become, since u'=0 $\mathfrak{f}_{x}=l^{2}\mathfrak{d}_{x}^{'},\quad\mathfrak{f}_{y}=\frac{l^{2}}{k}\mathfrak{d}_{y}^{'},\quad\mathfrak{f}_{z}=\frac{l^{2}}{k}\mathfrak{d}_{x}^{'}$. (20) The result may be put in a simple form if we compare the moving system Σ with which we are concerned, to another electrostatic system Σ' which remains at rest and into which Σ is changed, if the dimensions parallel to the axis of x are multiplied by kl, and the dimensions which have the direction of y or that of z, by l, a deformation for which (kl, l, l) is an appropriate symbol. In this new system, which we may suppose to be placed in the above mentioned space S' , we shall give to the density the value $\varrho'$, determined by (7), so that the charges of corresponding elements of volume and of corresponding electrons are the same in Σ and Σ' . The we shall obtain the forces acting on the electrons of the moving system Σ, if we first determine the corresponding forces in Σ' , and next multiply their components in the direction of the axis of x by l^2, and their components perpendicular to that axis by $\frac{l^{2}}{k}$. This is conveniently expressed by the formula $\mathfrak{F}\left(\Sigma\right)=\left(l^{2},\ \frac{l^{2}}{k},\ \frac{l^{2}}{k}\right)\mathfrak{F}\left(\Sigma'\right)$ (21) It is further to be remarked that, after having found by (19), we can easily calculate the electromagnetic momentum in the moving system, or rather its component in the direction of the motion. Indeed, the formula shows that Therefore, by (6), since $\mathfrak{h}'=0$ $\mathfrak{G}_{x}=\frac{k^{2}l^{4}w}{c^{2}}\int\left(\mathfrak{d}_{y}^{'2}+\mathfrak{d}_{z}^{'2}\right)dS=\frac{klw}{c^{2}}\int\left(\mathfrak{d}_{y}^{'2}+\mathfrak{d}_{z}^{'2}\right)dS'$. (22) § 7. Our second special case is that of a particle having an electric moment, i.e. a small space S, with a total charge $\int\varrho\ dS=0$, but with such a distribution of density, that the integrals $\int\varrho\ x\ dS,\ \int\varrho\ y\ dS,\ \int\varrho\ z\ dS$ have values differing from 0. Let x, y, z be the coordinates, taken relatively to a fixed point A of a particle, which may be called its centre, and let the electric moment be defined as a vector $\mathfrak{p}$ whose components $\mathfrak{p}_{x}=\int\varrho\ x\ dS,\quad\mathfrak{p}_{y}=\int\varrho\ y\ dS,\quad p_{z}=\int\varrho\ z\ dS$ (23) $\frac{d\mathfrak{p}_{x}}{dt}=\int\varrho\ \mathfrak{u}_{x}dS,\quad\frac{d\mathfrak{p}_{y}}{dt}=\int\varrho\ \mathfrak{u}_{y}dS,\quad\frac{d\mathfrak{p}_{z}}{dt}=\int\varrho\ \mathfrak{u}_{z}dS$ (24) Of course, if x, y, z are treated as infinitely small, $\mathfrak{u}_{x},\ \mathfrak{u}_{y},\ \mathfrak{u}_{z}$ must be so likewise. We shall neglect squares and products of these six quantities. We shall now apply the equations (17) to the determination of the scalar potential φ' for an exterior point P(x, y, z), at finite distance from the polarized particle, and for the instant at which the local time of this point has some definite value t' . In doing so, we shall give the symbol $\left[\varrho\right]$, which, in (17), relates to the instant at which local time in dS is $t'-\frac {r'}{c}$, a slightly different meaning. Distinguishing by $r_{0}^{'}$ the value of r' for the centre A, we shall understand by $\left[\varrho\right]$ the value of the density existing in the element dS at the point (x, y, z), at the instant t[0] at which the local time of A is $t'-\frac{r_{0}^{'}}{c}$. It may be seen from (5) that this instant precedes that for which we have to take the numerator in (17) by $k^{2}\frac{w}{c^{2}}x+\frac{k}{l}\frac{r_{0}^{'}-r^{'}}{c}=k^{2}\frac{w}{c^{2}}x+\frac{k}{l}\frac{1}{c}\left(x\frac{\partial r'}{\partial x}+y\frac{\partial r'}{\partial y}+z\frac{\partial r'}{\ partial z}\right)$ units of time. In this last expression we may put for the differential coefficients their values at the point A. In (17) we have now to replace $\left[\varrho\right]$ by $\left[\varrho\right]+k^{2}\frac{w}{c^{2}}x\left[\frac{\partial\varrho}{\partial t}\right]+\frac{k}{l}\frac{1}{c}\left(x\frac{\partial r'}{\partial x}+y\frac{\partial r'}{\partial y}+z\frac{\ (25) partial r'}{\partial z}\right)\left[\frac{\partial\varrho}{\partial t}\right]$, where $\left[\frac{\partial\varrho}{\partial t}\right]$ relates again to the time t[0]. Now, the value of t' for which the calculations are to be performed having been chosen, this time t[0] will be a function of the coordinates x, y, z of the exterior point P. The value of $\left[\varrho\right]$ will therefore depend on these coordinates in such a way that $\left[\frac{\partial[\varrho]}{\partial x}\right]=-\frac{k}{l}\frac{1}{c}\frac{\partial r'}{\partial x}\left[\frac{\partial\varrho}{\partial t}\right]$, etc., by which (25) becomes $\left[\varrho\right]+k^{2}\frac{w}{c^{2}}x\left[\frac{\partial\varrho}{\partial t}\right]-\left(x\frac{\partial[\varrho]}{\partial x}+y\frac{\partial[\varrho]}{\partial y}+z\frac{\partial[\varrho]} {\partial z}\right)$. Again, if henceforth we understand by r' what has above been called $r_{0}^{'}$, the factor $\frac{1}{r'}$ must be replaced by $\frac{1}{r'}-x\frac{\partial}{\partial x}\left(\frac{1}{r'}\right)-y\frac{\partial}{\partial y}\left(\frac{1}{r'}\right)-z\frac{\partial}{\partial z}\left(\frac{1}{r'}\right)$, so that after all, in the integral (17), the element dS is multiplied by $\frac{\left[\varrho\right]}{r}+k^{2}\frac{w}{c^{2}}\frac{x}{r}\left[\frac{\partial\varrho}{\partial t}\right]-\frac{\partial}{\partial x}\frac{x[\varrho]}{r'}-\frac{\partial}{\partial y}\frac{y[\ varrho]}{r'}-\frac{\partial}{\partial z}\frac{z[\varrho]}{r'}$. This is simpler than the primitive form, because neither r' , nor the time for which the quantities enclosed in brackets are to be taken, depend on x, y, z. Using (23) and remembering that $\int\ varrho\ dS=0$, we get $\varphi'=k^{2}\frac{w}{4\pi c^{2}r'}\left[\frac{\partial\mathfrak{p}_{x}}{\partial t}\right]-\frac{1}{4\pi}\left\{ \frac{\partial}{\partial x}\frac{\left[\mathfrak{p}_{x}\right]}{r'}+\frac{\partial} {\partial y}\frac{\left[\mathfrak{p}_{y}\right]}{r'}+\frac{\partial}{\partial z}\frac{\left[\mathfrak{p}_{z}\right]}{r'}\right\}$, a formula in which all the enclosed quantities are to be taken for the instant at which the local time of the centre of the particle is $t'-\frac{r'}{c}$. We shall conclude these calculations by introducing a new vector $\mathfrak{p}'$, whose components are $\mathfrak{p}_{x}^{'}=kl\mathfrak{p}_{x,}\quad\mathfrak{p}_{y}^{'}=l\mathfrak{p}_{y,}\quad\mathfrak{p}_{z}^{'}=l\mathfrak{p}_{z}$ (26) passing at the same time to x', y', z', t' as independent variables. The final result is $\varphi'=\frac{w}{4\pi c^{2}r'}\frac{\partial\left[\mathfrak{p}_{x}^{'}\right]}{\partial t'}-\frac{1}{4\pi}\left\{ \frac{\partial}{\partial x'}\frac{\left[\mathfrak{p}_{x}^{'}\right]}{r'}+\frac{\ partial}{\partial y'}\frac{\left[\mathfrak{p}_{y}^{'}\right]}{r'}+\frac{\partial}{\partial z'}\frac{\left[\mathfrak{p}_{z}^{'}\right]}{r'}\right\}$. As to the formula (18) for the vector potential, its transformation is less complicate, because it contains the infinitely small vector $\mathfrak{u}'$. Having regard to (8), (24), (26) and (5), I $\mathfrak{a}'=\frac{1}{4\pi cr'}\frac{\partial\left[\mathfrak{p^{'}}\right]}{\partial t'}$. The field produced by the polarized particle is now wholly determined. The formula (13) leads to $\mathfrak{d}'=-\frac{1}{4\pi c^{2}}\frac{\partial^{2}}{\partial t^{'2}}\frac{[\mathfrak{p}']}{r'}+\frac{1}{4\pi}grad'\left\{ \frac{\partial}{\partial x'}\frac{\left[\mathfrak{p}_{x}^{'}\right]} (27) {r'}+\frac{\partial}{\partial y'}\frac{\left[\mathfrak{p}_{y}^{'}\right]}{r'}+\frac{\partial}{\partial z'}\frac{\left[\mathfrak{p}_{z}^{'}\right]}{r'}\right\}$ and the vector $\mathfrak{h}'$ is given by (14). We may further use the equations (20), instead of the original formulae (10), if we wish to consider the forces exerted by the polarized particle on a similar one placed at some distance. Indeed, in the second particle, as well as in the first, the velocities $\mathfrak{u}$ may be held to be infinitely small. It is to be remarked that the formulae for a system without translation are implied in what precedes. For such a system the quantities with accents become identical to the corresponding ones without accents; also k=1 and l=1. The components of (27) are at the same time those of the electric force which is exerted by one polarized particle on another. § 8. Thus far we have only used the fundamental equations without any new assumptions. I shall now suppose that the electrons, which I take to be spheres of radius R in the state of rest, have their dimensions changed by the effect of a translation, the dimensions in the direction of motion becoming kl times and those in perpendicular direction l times smaller. In this deformation, which may be represented by $\left(\frac{1}{kl},\ \frac{1}{l},\ \frac{1}{l}\right)$, each element of volume is understood to preserve its charge. Our assumption amounts to saying that in an electrostatic system Σ, moving with a velocity w, all electrons are flattened ellipsoids with their smaller axes in the direction of motion. If now, in order to apply the theorem of §6, we subject the system to the deformation (kl, l, l), we shall have again spherical electrons of radius R. Hence, if we alter the relative position of the centres of the electrons in Σ by applying the deformation (kl, l, l), and if, in the points thus obtained, we place the centres of electrons that remain at rest, we shall get a system, identical to the imaginary system Σ' , of which we have spoken in § 6. The forces in this system and those in Σ will bear to each other the relations expressed by (21). In the second place I shall suppose that the forces between uncharged particles, as well as those between such particles and electrons, are influenced by a translation in quite the same way as the electric forces in an electrostatic system. In other terms, whatever be the nature of the particles composing a ponderable body, so long as they do not move relatively to each other, we shall have between the forces acting in a system (Σ' ) without, and the same system (Σ) with a translation, the relation specified in (21), if, as regards the relative position of the particles, Σ' is got from Σ by the deformation (kl, l, l), or Σ from Σ' by the deformation $\left(\frac{1}{kl},\ \frac{1}{l},\ \frac{1}{l}\right)$. We see by this that, as soon as the resulting force is 0 for a particle in Σ' , the same must be true for the corresponding particle in Σ. Consequently, if, neglecting the effects of molecular motion, we suppose each particle of a solid body to be in equilibrium under the action of the attractions and repulsions exerted be its neighbours, and if we take for granted that there is but one configuration of equilibrium, we may draw the conclusion that the system Σ' , if the velocity w is imparted to it, will of itself change into the system Σ. In other terms, the translation will produce the deformation $\left(\frac{1}{kl},\ \frac{1}{l},\ \frac{1}{l}\right)$. The case of molecular motion will be considered in § 12. It will easily be seen that the hypothesis that has formerly been made in connexion with MICHELSON'S experiment, is implied in what has now been said. However, the present hypothesis is more general because the only limitation imposed on the motion is that its velocity be smaller than that of light. § 9. We are now in a position to calculate the electromagnetic momentum of a single electron. For simplicity's sake I shall suppose the charge e to be uniformly distributed over the surface, so long as the electron remains at rest. Then, a distribution of the same kind will exist in the system with which we are concerned in the last integral of (22). Hence $\int\left(\mathfrak{d}_{y}^{'2}+\mathfrak{d}_{z}^{'2}\right)dS'=\frac{2}{3}\int b^{'2}dS'=\frac{e^{2}}{6\pi}\int_{R}^{\infty}\frac{dr}{r^{2}}=\frac{e^{2}}{6\pi R}$, $\mathfrak{G}_{x}=\frac{e^{2}}{6\pi c^{2}R}klw$. It must be observed that the product kl is a function of w and that, for reasons of symmetry, the vector $\mathfrak{G}$ has the direction of the translation. In general, representing by $\mathfrak{w} $ the velocity of this motion, we have the vector equation $\mathfrak{G}=\frac{e^{2}}{6\pi c^{2}R}kl\mathfrak{w}$. (28) Now, every change in the motion of a system will entail a corresponding change in the electromagnetic momentum and will therefore require a certain force, which is given in direction and magnitude by $\mathfrak{F}=\frac{d\mathfrak{G}}{dt}$. (29) Strictly speaking, the formula (28) may only be applied in the case of a uniform rectilinear translation. On account of this circumstance — though (29) is always true — the theory of rapidly varying motions of an electron becomes very complicated, the more so, because the hypothesis of § 8 would imply that the direction and amount of the deformation are continually changing. It is even hardly probable that the form of the electron will be determined solely by the velocity existing at the moment considered. Nevertheless, provided the changes in the state of motion be sufficiently slow, we shall get a satisfactory approximation by using (28) at every instant. The application of (29) to such a quasi-stationary translation, as it has been called by Abraham,^[11] is a very simple matter. Let, at a certain instant, $\mathfrak{j}_{1}$, be the acceleration in the direction of the path, and $\ mathfrak{j}_{2}$, the acceleration perpendicular to it. Then the force $\mathfrak{F}$ will consist of two components, having the directions of these accelerations and which are given by $\mathfrak{F}_{1}=m_{1} \mathfrak{j}_{1}$ and $\mathfrak{F}_{2} =m_{2} \mathfrak{j}_{2}$ $m_{1}=\frac{e^{2}}{6\pi c^{2}R}\frac{d(klw)}{dw}$ and $m_{2}=\frac{e^{2}}{6\pi c^{2}R}kl$. (30) Hence, in phenomena in which there is an acceleration in the direction of motion, the electron behaves as if it had a mass m[1], those in which the acceleration is normal to the path, as if the mass were m[2]. These quantities m[1] and m[2] may therefore properly be called the "longitudinal" and "transverse" electromagnetic masses of the electron. I shall suppose that there is no other, no "true" or "material" mass. Since k and l differ from unity by quantities of the order $\frac{w^{2}}{c^{2}}$, we find for very small velocities $m_{1}=m_{2}=\frac{e^{2}}{6\pi c^{2}R}$. This is the mass with which we are concerned, if there are small vibratory motions of the electrons in a system without translation. If, on the contrary, motions of this kind are going on in a body moving with the velocity w in the direction of the axis of x, we shall have to reckon with the mass m[1], as given by (30), if we consider the vibrations parallel to that axis, and with the mass m[2] , if we treat of those that are parallel to OY or OZ. Therefore, in short terms, referring by the index Σ to a moving system and by Σ' to one that remains at rest, $m(\Sigma)=\left(\frac{d(klw)}{dw},\ kl,\ kl\right)m(\Sigma')$. (31) § 10. We can now proceed to examine the influence of the Earth's motion on optical phenomena in a system of transparent bodies. In discussing this problem we shall fix our attention on the variable electric moments in the particles or "atoms" of the system. To these moments we may apply what has been said in § 7. For the sake of simplicity we shall suppose that, in each particle, the charge is concentrated in a certain number of separate electrons, and that the "elastic" forces that act on one of these and, conjointly with the electric forces, determine its motion, have their origin within the bounds of the same atom. I shall show that, if we start from any given state of motion in a system without translation, we may deduce from it a corresponding state that can exist in the same system after a translation has been imparted to it, the kind of correspondence being as specified in what follows. a. Let A', A[2]', A[3]' , etc. be the centres of the particles in the system without translation (Σ'); neglecting molecular motions we shall take these points to remain at rest. The system of points A, A[2], A[3], etc., formed by the centres of the particles in the moving system Σ, is obtained from A', A[2]', A[3]' , etc. by means of a deformation $\left(\frac{1}{kl},\ \frac{1}{l},\ \frac{1}{l}\ right)$. According to what has been said in § 8, the centres will of themselves take these positions A, A[2], A[3], etc. if originally, before there was a translation, they occupied the positions A', A[2]', A[3]' , etc. We may conceive any point P' in the space of the system Σ' to be replaced by the above deformation, so that a definite point P of Σ corresponds to it. For two corresponding points P' and P we shall define corresponding instants, the one belonging to P' , the other to P, by stating that the true time at the first instant is equal to the local time, as determined by (5) for the point P, at the second instant. By corresponding times for two corresponding particles we shall understand times that may be said to correspond, if we fix our attention on the centres A' and A of these particles. b. As regards the interior state of the atoms, we shall assume that the configuration of a particle A in Σ at a certain time may be derived by means of the deformation $\left(\frac{1}{kl},\ \frac{1} {l},\ \frac{1}{l}\right)$ from the configuration of the corresponding particle in Σ' , such as it is at the corresponding instant. In so far as this assumption relates to the form of the electrons themselves, it is implied in the first hypothesis of § 8. Obviously, if we start from a state really existing in the system Σ' , we have now completely defined a state of the moving system Σ. The question remains however, whether this state will likewise be a possible one. In order to judge this, we may remark in the first place that the electric moments which we have supposed to exist in the moving system and which we shall denote by $\mathfrak{p}$ will be certain definite functions of the coordinates x, y, z of the centres A of the particles, or, as we shall say, of the coordinates of the particles themselves, and of the time t. The equations which express the relations between $\mathfrak{p}$ on one hand and x, y, z, t on the other, may be replaced by other equations, containing the vectors $\mathfrak{p}'$ defined by (25) and the quantities x',y',z',t' defined by (4) and (5). Now, by the above assumptions a and b, if in a particle A of the moving system, whose coordinates are x, y, z, we find an electric moment $\mathfrak{p}$ at the time t, or at the local time t', the vector $\mathfrak{p}'$ given by (26) will be the moment which exists in the other system at the true time t' in a particle whose coordinates are x', y', z' . It appears in this way that the equations between $\mathfrak{p}'$, x', y', z', t' are the same for both systems, the difference being only this, that for the system Σ' without translation these symbols indicate the moment, the coordinates and the true time, whereas their meaning is different for the moving system, $\mathfrak{p}'$, x', y', z', t' being here related to the moment $\mathfrak{p}$, the coordinates x, y, z and the general time t in the manner expressed by (26), (4) and (5). It has already been stated that the equation (27) applies to both systems. The vector $\mathfrak{d}'$ will therefore be the same in Σ' and Σ, provided we always compare corresponding places and times. However, this vector has not the same meaning in the two cases. In Σ' it represents the electric force, in Σ it is related to this force in the way expressed by (20). We may therefore conclude that the electric forces acting, in Σ and in Σ' , on corresponding particles at corresponding instants, bear to each other the relation determined by (21). In virtue of our assumption b, taken in connexion with the second hypothesis of § 8, the same relation will exist between the "elastic" forces; consequently, the formula (21) may also be regarded as indicating the relation between the total forces, acting on corresponding electrons, at corresponding instants. It is clear that the state we have supposed to exist in the moving system will really be possible if, in Σ and Σ' , the products of the mass in and the acceleration of an electron are to each other in the same relation as the forces, i. e. if $m\mathfrak{j}(\Sigma)=\left(l^{2},\ \frac{l^{2}}{k},\ \frac{l^{2}}{k}\right)m\mathfrak{j}(\Sigma')$. (32) Now, we have for the accelerations $\mathfrak{j}(\Sigma)=\left(\frac{l}{k^{3}},\ \frac{l}{k^{2}},\ \frac{l}{k^{2}}\right)\mathfrak{j}(\Sigma')$ (33) as may be deduced from (4) and (5), and combining this with (32), we find for the masses $m(\Sigma)=(k^{3}l,\ kl,\ kl)m(\Sigma')$. If this is compared to (31), it appears that, whatever be the value of l, the condition is always satisfied, as regards the masses with which we have to reckon when we consider vibrations perpendicular to the translation. The only condition we have to impose on l is therefore But, on account of (3), so that we must put $\frac{dl}{dw}=0,\ l=const$. The value of the constant must be unity, because we know already that, for w=0, l=1. We are therefore led to suppose that the influence of a translation on the dimensions (of the separate electrons and of a ponderable body as a whole) is confined to those that have the direction of the motion, these becoming k times smaller than they are in the state of rest. If this hypothesis is added to those we have already made, we may be sure that two states, the one in the moving system, the other in the same system while at rest, corresponding as stated above, may both be possible. Moreover, this correspondence is not limited to the electric moments of the particles. In corresponding points that are situated either in the aether between the particles, or in that surrounding the ponderable bodies, we shall find at corresponding times the same vector $\mathfrak{d}'$ and, as is easily shown, the same vector $\mathfrak{h}'$. We may sum up by saying : If, in the system without translation, there is a state of motion in which, at a definite place, the components of $\mathfrak{p}$, $\mathfrak{d}$, and $\mathfrak{h}$ are certain functions of the time, then the same system after it has been put in motion (and thereby deformed) can be the seat of a state of motion in which, at the corresponding place, the components of $\mathfrak{p}'$, $\mathfrak{d}'$, and $\mathfrak{h}'$ are the same functions of the local time. There is one point which requires further consideration. The values of the masses m[1], and m[2] having been deduced from the theory of quasi-stationary motion, the question arises, whether we are justified in reckoning with them in the case of the rapid vibrations of light. Now it is found on closer examination that the motion of an electron may be treated as quasi-stationary if it changes very little during the time a light-wave takes to travel over a distance equal to the diameter. This condition is fulfilled in optical phenomena, because the diameter of an electron is extremely small in comparison with the wave-length. § 11. It is easily seen that the proposed theory can account for a large number of facts. Let us take in the first place the case of a system without translation, in some parts of which we have continually $\mathfrak{p}=0$, $\mathfrak{d}=0$ and $\mathfrak{h}=0$. Then, in the corresponding state for the moving system, we shall have in corresponding parts (or, as we may say, in the same parts of the deformed system) $\mathfrak{p}'=0$, $\mathfrak{d}'=0$ and $\mathfrak{h}'=0$. These equations implying $\mathfrak{p}=0$, $\mathfrak{d}=0$, $\mathfrak{h}=0$, as is seen by (26) and (6), it appears that those parts which are dark while the system is at rest, will remain so after it has been put in motion. It will therefore be impossible to detect an influence of the Earth's motion on any optical experiment, made with a terrestrial source of light, in which the geometrical distribution of light and darkness is observed. Many experiments on interference and diffraction belong to this class. In the second place, if in two points of a system, rays of light of the same state of polarization are propagated in the same direction, the ratio between the amplitudes in these points may be shown not to be altered by a translation. The latter remark applies to those experiments in which the intensities in adjacent parts of the field of view are compared. The above conclusions confirm the results I have formerly obtained by a similar train of reasoning, in which however the terms of the second order were neglected. They also contain an explanation of MICHELSONS's negative result, more general and of somewhat different form than the one previously given, and they show why RAYLEIGH and BRACE could find no signs of double refraction produced by the motion of the Earth. As to the experiments of TROUTON and NOBLE, their negative result becomes at once clear, if we admit the hypotheses of §8. It may be inferred from these and from our last assumption (§ 10) that the only effect of the translation must have been a contraction of the whole system of electrons and other particles constituting the charged condenser and the beam and thread of the torsion-balance. Such a contraction does not give rise to a sensible change of direction. It need hardly be said that the present theory is put forward with all due reserve. Though it seems to me that it can account for all well established facts, it leads to some consequences that cannot as yet be put to the test of experiment. One of these is that the result of MICHELSON'S experiment must remain negative, if the interfering rays of light are made to travel through some ponderable transparent body. Our assumption about the contraction of the electrons cannot in itself be pronounced to be either plausible or inadmissible. What we know about the nature of electrons is very little and the only means of pushing our way farther will be to test such hypotheses as I have here made. Of course, there will be difficulties, e.g. as soon as we come to consider the rotation of electrons. Perhaps we shall have to suppose that in those phenomena in which, if there is no translation, spherical electrons rotate about a diameter, the points of the electrons in the moving system will describe elliptic paths, corresponding, in the manner specified in § 10, to the circular paths described in the other case. § 12. It remains to say some words about molecular motion. We may conceive that bodies in which this has a sensible influence or even predominates, undergo the same deformation as the systems of particles of constant relative position of which alone we have spoken till now. Indeed, in two systems of molecules Σ' and Σ, the first without and the second with a translation, we may imagine molecular motions corresponding to each other in such a way that, if a particle in Σ' has a certain position at a definite instant, a particle in Σ occupies at the corresponding instant the corresponding position. This being assumed, we may use the relation (33) between the accelerations in all those cases in which the velocity of molecular motion is very small as compared to w. In these cases the molecular forces may be taken to be determined by the relative positions, independently of the velocities of molecular motion. If, finally, we suppose these forces to be limited to such small distances that, for particles acting on each other, the difference of local times may be neglected, one of the particles, together with those which lie in its sphere of attraction or repulsion, will form a system which undergoes the often mentioned deformation. In virtue of the second hypothesis of § 8 we may therefore apply to the resulting molecular force acting on a particle, the equation (21). Consequently, the proper relation between the forces and the accelerations will exist in the two cases, if we suppose that the masses of all particles are influenced by a translation to the same degree as the electromagnetic masses of the electrons. § 13. The values (30) which I have found for the longitudinal and transverse masses of an electron, expressed in terms of its velocity, are not the same as those that have been formerly obtained by Abraham. The ground for this difference is solely to be sought in the circumstance that, in his theory, the electrons are treated as spheres of invariable dimensions. Now, as regards the transverse mass, the results of Abraham have been confirmed in a most remarkable way by Kaufmann's measurements of the deflexion of radium-rays in electric and magnetic fields. Therefore, if there is not to be a most serious objection to the theory I have now proposed, it must be possible to show that those measurements agree with my values nearly as well as with those of Abraham. I shall begin by discussing two of the series of measurements published by Kaufmann^[12] in 1902. From each series he has deduced two quantities η and ζ, the "reduced" electric and magnetic deflexions, which are related as follows to the ratio $\beta=\frac{w}{c}$: $\beta=k_{1}\frac{\zeta}{\eta}, \psi(\beta)=\frac{\eta}{k_{2}\zeta^{2}}$. (34) Here ψ(β) is such a function, that the transverse mass is given by $m_{2}=\frac{3}{4}\cdot\frac{e^{2}}{6\pi c^{2}R}\psi(\beta)$ (35) whereas k[1] and k[2] are constant in each series. It appears from the second of the formulae (30) that my theory leads likewise to an equation of the form (35); only Abraham's function ψ(β) must be replaced by Hence, my theory requires that, if we substitute this value for ψ(β) in (34), these equations shall still hold. Of course, in seeking to obtain a good agreement, we shall be justified in giving to k [1] and k[2] other values than those of Kaufmann, and in taking for every measurement a proper value of the velocity w, or of the ratio β. Writing $sk_{1},\ \frac{3}{4}k_{2}^{'}$ and β' for the new values, we may put (34) in the form $\beta'=sk_{1}\frac{\zeta}{\eta}$ (36) $\left(1-\beta^{'2}\right)^{-1/2}=\frac{\eta}{k',\zeta^{2}}$ (37) Kaufmann has tested his equations by choosing for k[1] such a value that, calculating β and k[2] by means of (34), he got values for this latter number that remained constant in each series as well as might be. This constancy was the proof of a sufficient agreement. I have followed a similar method, using however some of the numbers calculated by Kaufmann. I have computed for each measurement the value of the expression $k_{2}^{'}=\left(1-\beta^{'2}\right)^{-1/2}\psi(\beta)k_{2}$ (38) that may be got from (37) combined with the second of the equations (34). The values of ψ(β) and k[2], have been taken from Kaufmann's tables and for β' I have substituted the value he has found for β, multiplied by s, the latter coefficient being chosen with a view to obtaining a good constancy of (38). The results are contained in the following tables, corresponding to the tables III and IV in Kaufmann's paper. │ III. s=0,933. │ │β │ψ(β) │k[2] │β' │k[2]'│ │0.851 │2.147│1.721│0.794│2.246│ │0.766 │1.86 │1.736│0.715│2.258│ │0.727 │1.78 │1.725│0.678│2.256│ │0.6615 │1.66 │1.727│0.617│2.256│ │0.6075 │1.595│1.655│0.567│2.175│ │ IV. s=0,954. │ │β │ψ(β) │k[2]│β' │k[2]'│ │0.963│3.23 │8.12│0.919│10.36│ │0.949│2.86 │7.99│0.905│9.70 │ │0.933│2.73 │7.46│0.800│9.28 │ │0.883│2.31 │8.32│0.842│10.36│ │0.830│2.06 │8.13│0.792│10.23│ │0.801│1.96 │8.13│0.764│10.28│ │0.777│1.89 │8.04│0.741│10.20│ │0.752│1.83 │8.02│0.717│10.22│ The constancy of k[2]' is seen to come out no less satisfactory than that of k[2], the more so as in each case the value of s has been determined by means of only two measurements. The coefficient has been so chosen that for these two observations, which were in Table III the first and the last but one, and in Table IV the first and the last, the values of k[2]' should be proportional to those of k[2]. I shall next consider two series from a later publication by Kaufmann,^[13] which have been calculated by Runge^[14] by means of the method of least squares, the coefficients k[1], and k[2] having been determined in such a way, that the values of η, calculated, for each observed ζ, from Kaufmann's equations (34), agree as closely as may be with the observed values of η. I have determined by the same condition, likewise using the method of least squares, the constants a and b in the formula which may be deduced from my equations (36) and (37). Knowing a and b, I find β for each measurement by means of the relation For two plates on which Kaufmann had measured the electric and magnetic deflexions, the results are as follows, the deflexions being given in centimeters. I have not found time for calculating the other tables in Kaufmann's paper. As they begin, like the table for Plate 15, with a rather large negative difference between the values of η which have been deduced from the observations and calculated by Runge, we may expect a satisfactory agreement with my formulae. │ Plate N°. 15. a=0,06489, b=0,3039. │ │ │ η │ β │ │ ├────────┬─────────────────┬─────┬─────────────────┬─────┼─────────────┤ │ ζ │ │ │ │ │ │Calculated by│ │ │Observed│Calculated by R. │Diff.│Calculated by L. │Diff.├──────┬──────┤ │ │ │ │ │ │ │R. │L. │ │0.1495│0.0388 │0.0404 │-16 │0.0400 │-12 │0.987 │0.951 │ │0.199 │0.0548 │0.0550 │-2 │0.0552 │-4 │0.964 │0.918 │ │0.2475│0.0716 │0.0710 │+6 │0.0715 │+1 │0.930 │0.881 │ │0.296 │0.0896 │0.0887 │+9 │0.0895 │+1 │0.889 │0.842 │ │0.3435│0.1080 │0.1081 │-1 │0.1090 │-10 │0.847 │0.803 │ │0.391 │0.1290 │0.1297 │-7 │0.1305 │-15 │0.804 │0.763 │ │0.437 │0.1524 │0.1527 │-3 │0.1532 │-8 │0.763 │0.727 │ │0.4825│0.1788 │0.1777 │+11 │0.1777 │+11 │0.724 │0.692 │ │0.5265│0.2033 │0.2039 │-6 │0.2033 │0 │0.688 │0.660 │ │ Plate N°. 19. a=0,05867, b=0,2591. │ │ │ η │ β │ │ ├────────┬─────────────────┬─────┬─────────────────┬─────┼─────────────┤ │ ζ │ │ │ │ │ │Calculated by│ │ │Observed│Calculated by R. │Diff.│Calculated by L. │Diff.├──────┬──────┤ │ │ │ │ │ │ │R. │L. │ │0.1495│0.0404 │0.0388 │+16 │0.0379 │+25 │0.990 │0.954 │ │0.199 │0.0529 │0.0527 │+2 │0.0522 │+7 │0.969 │0.923 │ │0.247 │0.0678 │0.0675 │+3 │0.0674 │+4 │0.939 │0.888 │ │0.296 │0.0834 │0.0842 │-8 │0.0844 │-10 │0.902 │0.849 │ │0.3435│0.1019 │0.1022 │-3 │0.1026 │-7 │0.862 │0.811 │ │0.391 │0.1219 │0.1222 │-3 │0.1226 │-7 │0.822 │0.773 │ │0.437 │0.1429 │0.1434 │-5 │0.1437 │-8 │0.782 │0.736 │ │0.4825│0.1660 │0.1665 │-5 │0.1664 │-4 │0.744 │0.702 │ │0.5265│0.1916 │0.1906 │+10 │0.1902 │+14 │0.709 │0.671 │ § 14. I take this opportunity for mentioning an experiment that has been made by Trouton^[15] at the suggestion of FitzGerald, and in which it was tried to observe the existence of a sudden impulse acting on a condenser at the moment of charging or discharging; for this purpose the condenser was suspended by a torsion-balance, with its plates parallel to the Earth's motion. For forming an estimate of the effect that may be expected, it will suffice to consider a condenser with aether as dielectricum. Now, if the apparatus is charged, there will be (§ 1) an electromagnetic momentum (Terms of the third and higher orders are here neglected). This momentum being produced at the moment of charging, and disappearing at that of discharging, the condenser must experience in the first case an impulse $-\mathfrak{G}$ and in the second an impulse $+\mathfrak{G}$. However Trouton has not been able to observe these jerks. I believe it may be shown (though his calculations have led him to a different conclusion) that the sensibility of the apparatus was far from sufficient for the object Trouton had in view. Representing, as before, by U the energy of the charged condenser in the state of rest, and by U+U' the energy in the state of motion, we have by the formulae of this paper, up to the terms of the second order, an expression, agreeing in order of magnitude with the value used by Trouton for estimating the effect. The intensity of the sudden jerk or impulse will therefore be $\frac{U'}{w}$. Now, supposing the apparatus to be initially at rest, we may compare the deflexion α, produced by this impulse, to the deflexion α' which may be given to the torsion-balance by means of a constant couple K, acting during half the vibration time. We may also consider the case in which a swinging motion has already been set up: then the impulse, applied at the moment in which the apparatus passes through the position of equilibrium, will alter the amplitude by a certain amount β and a similar effect β' may be caused by letting the couple K act during the swing from one extreme position to the other. Let T be the period of swinging and l the distance from the condenser to the thread of the torsion-balance. Then it is easily found that $\frac{\alpha}{\alpha'}=\frac{\beta}{\beta'}=\frac{\pi U'l}{KTw}$. (39) According to Trouton's statements U' amounted to one or two ergs, and the smallest couple by which a sensible deflexion could be produced was estimated at 7,5 C. G. S.-units. If we substitute this value for K and take into account that the velocity of the Earth's motion is 3 × 10^6 c.M. per sec., we immediately see that (39) must have been a very small fraction. This work is in the public domain in the United States because it was published before January 1, 1923. The author died in 1928, so this work is also in the public domain in countries and areas where the copyright term is the author's life plus 80 years or less. This work may also be in the public domain in countries and areas with longer native copyright terms that apply the rule of the shorter term to foreign works.
{"url":"http://en.wikisource.org/wiki/Electromagnetic_phenomena","timestamp":"2014-04-17T02:03:26Z","content_type":null,"content_length":"115730","record_id":"<urn:uuid:9ceeed0a-4515-4bd3-9cf3-8734367f0a5c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 609 A box contains 4 red and 3 blue chips. What is the probability when 3 chips are selected randomly that all 3 chips will be red if select chips with replacement? A batch of 350 raffle tickets contains 4 winning tickets. You buy 4 tickets. What is the probability you have no winning tickets? b.) all winning tickets? c.) at least 1 winning ticket? d.) At least 1 non-winning ticket? A box contains 4 red and 3 blue poker chips. What is the probability when 3 are selected randomly that all 3 wil be red if we select chips with replacement? a.) Without replacement? The offices of president, vice president, seceretary, and treasurer for a club will be filled from a pool of 12 candidates. Five of them are members of the debate team. What is the probability that all the offices are filled by memebers of the debate team? At a certain college, 15% of the students receieved an A in their required math cousre, 10% receieved an A in their required chemistry course, and 5% receieved an A in both. A student is selected at random. a.) What is the probability that he received an A in either chemistry ... At a certain college, 15% of the students received an A in their required math course, 10% receieved an A in their required chemistry course, and 5% received an A in both. A student is selected at random. a.) What is the probability that he received an A in either chemistry or... Two flower seeds are randomly selected from a package that contains five seeds for red flowers and three seeds for white flowers. a.) What is the probability that both seeds will result in red flowers? b.) What is the probability that one of each color is selected? c.) What is... You forget to set your alarm with a probaility of .3. If you set the alarm it rings with the probabilty of .8. If the alarm rings, it will wake you on time to make your first class with the probability of .9. If the alarm does not ring, you wake in time for your first class wi... A person owns both her own business and her own home and purchases a cariety of insurances to protect them. In any one year the probability of an insurance claim on the business is .4 and on the home is .05. Assuming that these are independent events, find the probability that... At a certain college, 15% of the students received an A in their required math course, 10% received an A in their required Chemistry course, 5% received an A in both. A student is selected at random. a.) What is the probability tha he receieved an A in either Chemistry or Math... The probability that a certain door is locked is.6. The key to the door is one of five unidentified keys hanging on a key rack. Two keys are randomly selected before approaching the door. What is the probability that the door may be opened without returning for another key? Two flower seeds are randomly selected from a package tha tcontains five seeds for red flowers and three seeds for white flowers. a.) What is the probability that both seeds will result in red flowers? b.) What is the probability that one of each color is selected? c.) What is... A box of marbles contains an equal number of red marbles and yellow marbles but twice as many green marbles as red marbles. Draw one marble from the box and observe its color. Assign probabilities to the elements. Events A, B, and C are defines on sample space S. Their corresponding sets of sample points do not intersect and their union is S. Further, event B is twice as likely to occur as event A, and event C is twice as likely to occur as event B. Determine the probability of each of ... Suppose a box of marbles contains an equal number of red marbles and yellow marbles but twice as amny green marbles as red marbles. Draw one marble from the box and observe its color. Assign probabilitites to the elements. Suppose a box of marbles contains an equal number or red and yellow marbles but twice as many green marbles as red marbles. Draw one marble from the box and observe its color. Assign probabilities to the elements. medical coding & billing Can someone plesae help me. I need to to answer these four quetions. I got something, but I'm not sure if Igot it right. The questions are, How can eliminating abbreviations reduce errors?. Should written policies be developed for abbreviation usage? If yes, what should th... formulates a mitigation plan for your specific environmental problem. 7th grade Truck Company charges $40 per day & an additional $2 for each mile driven. Make a table of the charges for renting a truck for 5 days and driving it 0.25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 275, and 300 miles. You have just started work for a small company, FitCo, that develops private fitness clubs in small towns. FitCo buys or leases a local hotel or motel, then renovates to provide a gym, swimming pool, sauna, Jacuzzi, and a small café where patrons can buy juices, smoothi... Start with an x column and next to it have a y column so for every x answer you have there will be a corresponding y answer. For example start with assigning x=1, plug it in the equation and find out what y equals. Then assign x=2 then plug it in and find out what y is, and so... Assume that the economy is already in a recession, and both the President and Congress have decided to do something to restore the economy. Both agree that lowering taxes would not be a good idea, but do believe that it is in the best interest of the economy to increase govern... adult education 6pt3c = medical records Compare cost control strategies of employer-sponsored (employers buy from insurance companies) to self-funded (employers cover costs of benefits) health plans. Include the following factors: Riders Enrollment periods Provider networks Third party administrators Also discuss ho... What is the perimeter of a rectangle if the length is 7/10 in. and the width is 1/10 in.? Find the area of the triangle below 5in 7in The distance from Philadelphia to Sea Isle is 100 mi. A car was driven this distance using tires with a radius of 14 in. How many revolutions of each tire occurred on the trip? (Hint: one revolution is equivalent to one circumference distance traveled. Thus, need to find how m... How are the data elements contained in the HIPAA 837 claim form similar to the CMS-1500, and how does each form relate to the claims process? In your opinion, do the similarities between HIPAA 837 and CMS-1500 complicate or simplify the claims process? I need help coding this situation: A 67-year old Medicare patietn presents to the office, exhiting symptoms of HIV infection. After detailed examination, symptoms are determined to be advanced AIDS with manifestation of Kaposi's sarcoma and other opportunistic infections. ... Can the remainder in division ever be equal to the divisor? please explain? Health insurance plan My kids just got approved for healthcare coverage through medicaid. They are asking me to choose a plan from the following: Staywell Health Plan of Florida, United Healthcare of Florida, AMERIGROUP Community Care, HealthEase of Florida, Inc., MediPass. Please if someone knows ... Jim can fill a pool carrying buckets of water in 30 minutes. Sue can do the same job in 45 minutes. Tony can do the same job in 1 ½ hours. How quickly can all three fill the pool together? A. 12 minutes B. 15 minutes C. 21 minutes D. 23 minutes E. 28 minutes Dr. Bob's computerized records have been compromised by a hacker, resulting in damages to patients. Which of the following is most true? A. Dr. Bob may be liable if he didn't have proper firewall or other protections. B. Dr. Bob won't be liable. C. Dr. Bob will be... infants and children Which of the following is true? Fertilization usually takes place in the uterus. Women can get pregnant only on the day of ovulation. Males produce an average of 300 million sperm a day. Sperm survive for only one day after intercourse Anatomy and Physiology I don't understand the relationship with question I have posted. Thank you Anatomy and Physiology a 73 year old woman suffers a severe cerebral hemorrhage after minor trauma. Blood work reveals that her blood clotting ability is impaired. Disease of which abdominal organ may contrbute to this? HSM 270 Need assessment qualitative approaches quantitative approaches For which decision areas is the financial manager responsible? For which decision areas is the financial manager responsible? what can you lie on sit on and comb you hair with ??? Who won the battles and lost the battles against the Patriots? Active volcanos are most likely to form at a)transform boundaries b)divergent boundaries c)the center of continents] d)convergent oceanic-continental boundaries The ______ is an exampleof a tranform boundary. a)Aappalachian Mountains b)Himalayasa c)Mid-Atlantic Ridge d)San And... Identify and name any rhetorical Snooker's lumber can convert logs into either lumber or plywood. In a given day, the mill turns out twice as many units of plywood as lumber. It makes a profit of $30 on a unit of lumber and $45 on a unit of plywood. How many of each unit must be produced and sold in order... I do not watch T.V. much, so I do not know many T.V. personalities. I need one for each theory relevant to Freud, Rogers, and Jung. Can you help me find three T.V. characters? If you replace the equal sign of an equation and put an inequality sign in its place, is there ever a time when the same value will be a solution to both the equation and inequality? managers lose some control as the number of .stockholders increases. Business English Thank you Ms. Sue for giving me the boost. If you may help me understand the hierarchal structre of a company to better contribute tasks among employees based on their similar skills and background. please if you may expalain to me what will be the position of the 4 employees ... Business English I have to write a short argument to my supervisor to convince her to accept the changes that will improve productivity in my office. Background Your office manager, Daisy Miller retired three months ago. You were promoted to office manager for DMD Company where you began as a ... computer services Can anyone tell what website I need to go to for this question, Should individuals and organizations with access to the databases be identified to the patient? All I will need is to know what web site to go to. thanks it will be very helpful to me. computer services Can anyone help me on finding the answer to this question. Wehn should the computerized medical database be online to the computer terminal? Just let me know what web site I need to go to. computer services I need help to find more information about the computer service bureau Can anyone help me? What do I need to look under? The question I have to answer is "When the computer service bureau destroys or erases records, should the erasure be verified by the bureau to the phyi... ETH 125/ Cultural Diversity trends in immigrations will continue to shape demographics. What will the population llok like in the year 2050? Math--kind of how do you solve 14n+2-7n=37 Describe the measures of central tendency and measures of dispersion for the following data set: A = {8,1,2,9,3,2,8,1,2} The answers in the back of my book are pi/3,pi, and 5pi/3 Find the solutions for sec^2x+secx=2 that fit into the interval [0, 2pi] I figured out that pi/3 (after factoring and switching things around) was a solution, but then I got stuck on cos x = 1/3. We have to solve for the problem below to make sure that it is equivalent to 2csc(x). However, I keep getting 2/2sin(x) and not 2/sin(x), which is what 2csc(x) is. tan(x)/(1+sec(x))+ (1+sec(x))/tan (x)= 2csc(x) Need help with setting up an equation for this problem: A boat is sailing due east parellel to the shoreline at a speed of 10 miles per hour.At a given time the bearing to the lighthouse is S 70 degrees E, and 15 minutes later the bearing is S 63 degrees E. The lighthouse is l... I need help with this problem: The bearing from the Pine Knob fire tower to the Colt Station fire tower is N 65 degrees E, and the two towers are 30 kilometers apart. A fire spotted by rangers in each tower has a bearing of N 80 degrees E from Pine Knob and S 70 degrees E from... What is the name of the structure above the cellular level? advanced math I'm not sure what you mean because I thought I was clear with my question. But thank you for your help anyway. advanced math Thank you. advanced math Therefore, the vertex of the problem I previously asked would be 1,0. Is that right? advanced math Thank you no name. :) advanced math I did the math for the previous problem I posted, using this formula: -b+ or - the square root of b^2-4ac/2a. When I did this for -x^2+2x-1, I got 0/-2. I did the math and the x and y intercept came out to be -2 and -2. But when I graphed this on my graphic calculator, the po... advanced math Thank you. advanced math Hi, I am trying to solve graphing quadratic equations. One of the problems is written as -x^2= -2x-1. So if I wanted to put this equation in the proper quadratic equation form, would it be -2x-1-x^2, or -x^2+-2x-1? I'm really not sure. English 12 this question is based on the epic of gilgamesh. How important is Enkidu's role in the battle with Humbaba? please don't post links to sites..I checked everywhere already! can someone read and give their thoughts on my paer The apartheid was much like the Jim Crow laws found here in the United States, but it was much worst. In South Africa blacks had to deal with a lot of hardships that no one should have to put up with. The government ran this ... math please help no it like this 6x^2y^3+9x^2y^3 ________________ 3x^2y^2 math please help someone please show how to do Simplify, if possible: 6x^2y^3+9x^2y^3/3x^2y^2 Quantiative Methods CarpetPlus sells and installs floor covering for commercial buildings. A CarpetPlus account executive, was just awarded the contract for five jobs. This executive now must assign a CarpetPlus installation crew to each of the five jobs. Because the commission, the executive wil... Quantiative Methods Western Family Steakhouse offers a variety of low-cost meals and quick service. Other than management, the steakhouse operates with two full-time employees who work 8 hours per day. The rest of the employees are part-time employees who are scheduled for 4 hours shifts during p... Quantiative Methods Industrial Designs has been awarded a contract to design a label for a new wine produced by Lake View Winery. The Co estimates that 150 hours will be required to complete the project. The firm's three graphics designers available for assisgment to this project ar Lisa, a s... What is a²+6a-72+0 ? How many meters are in one Hectometer? How many Meters are in one Kilometer? How many grams are in one Centigram? How many Grams are in one Kilogram? business calculas An apple orchard has an average yield of 38 bushels of apples/tree if the density is 24 trees/acre. For each unit increase in tree density, the yield decreases by 2 bushels. Letting x denote the number of trees beyond 24/acre, find a function in x that gives the yield of apples write a memo explaining how you would enhance communication during a business trip to another country. We'll be glad to critique your memo. Thank you for using the Jiskha Homework Help Forum. Since you say a "memo" you might like the following site from Perdue on... Graph the function y = -2x + . Tell whether or not each ordered pair lies on the line. Explain. (0,1) (-5,3) (1,-1) (-2,5) Graph it. Then you can see if the points are on the line. Tell wheher each represents a funtion. Explain minutes= 60 times hours A math function doing something to a value like adding, subtracting, multiplying or dividing, as you have in your formula: minutes = 60 * hours. I hope this helps. Thanks for asking. Environmental Science List at least two benefits and two drawbacks of each form of perst control for: Pesticides, Genetic engineering, Biological control, insect birth control through sterilization, Insect sex attractants (pheromones) Insect harmones, Food irradiation, and Intergrated Pest Manageme... Identify cliamte and topographical factors that(a)intensify air pollution and (b) help reduced air polluction. What sources of water pollution can you identify? How are people and the environment affected by the physical and economic effects of the air and water pollution? Tha... Identifying Fallacies Give example of fallacy from newspaper editorial and another one from an opinion magazine. We do not do your homework for you, but here is a great site for explaining fallacies. http://www.nizkor.org /features/fallacies/ http://www.unc.edu/depts/wcweb/handouts/fallacies.html Cl... give an example of a charismatic leader. Why this leader? Osama Bin Laden. Barak Obama Adolf Hitler Rev Sun Myung Moon Sgt. Alvin C. York, US Army General Robert E. Lee, CSA General Bob Pursley, USAF General Curtis Lemay, USAF Admiral Hyman Rickover, USN You can do an internet... Writing Assignment Please Help!!! As a class assignment, I have to write a memo with myself being the personel manager informing employees that effective January 1, their dental premiums will be increasing from $22 per month to $24 per month. Which is a $2 increase. Can anyone please help me? I am having diffi... Social Science Consider racial imbalances in education, the economy, family life, housing and crimical justice system, and the health care and politics, of these societal challenges facing modern African American, which do you think are most difficult to overocme and why? of these societal c... How can you use or organize persuasive messages? You need to consider your own interests when reading a persuasive message. If the message is from your boss, you should probably follow it to the letter if you want to keep your job. If the persuasive message is from an advertis... Business English How do you outline informative and positive message and negative message and persuasive messages? http://www.google.com/search?q=how+to+outline&rls=com.microsoft:en-us:IE-SearchBox&ie=UTF-8&oe=UTF-8& sourceid=ie7 An outline is an outline. Whether it's outlining positive, ne... Is there a big difference between 10k and 14k gold chains? My husband wants a chain he can wear all the time. Should I get 10k or 14k? http://www.24carat.co.uk/whyeighteencaratgoldisbetter.html http: //en.wikipedia.org/wiki/Carat_(purity) Thank you for using the Jiskha Homework... word riddle help please to find the clue you must search the text Cause in the Late evening when dark is dim and Peaches are gray walk 1 north and then head east, till you find the pretty thing and tell me what it is! Without the text that needs to be searched, we cannot help you. As the directions s... critical thinking how can readers distinguish beteen prejudicial and non-prejudicial use of rhetorical devices This is an excellent site on propaganda techniques: http://www.propagandacritic.com/ Be sure to also check out the links here -- all from previous postings of the same question: ------... What does the term 'craft technique' mean? Example:"What is one author's craft technique found in this text that makes the writing lively?' See my answer to your question below. =) What is a story with one or more level of meanings? i need help on learning... Are multi cellular organisma more advanced that unicellular organisma. Why or why not Mutlicellular organisms have cells that have specialized in various functions. Would you call that more advanced? Since this is not my area of expertise, I searched Google under the key words... Can you give me some input on the short story "the other wife" I've never read this story, but when I entered "the other wife" (including the quotation marks) in www.google.com, here is the top result: http://www.storybites.com/colettewife.htm =) what is the justicationf for john rawls maxamin rule. How is it compare and contrast with liertarianism and egalitarianism? above you spelled "liertarianism" do you mean "libertarianism" Maybe the searchs below will provide some information: http://www.goog... Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=SHARON&page=6","timestamp":"2014-04-18T18:56:31Z","content_type":null,"content_length":"33065","record_id":"<urn:uuid:f32f5793-99e9-42f6-90a1-21680dd7a8d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Circle Geometry: Incenter perpendicular to Chord? July 3rd 2010, 06:40 PM #1 Jul 2009 Circle Geometry: Incenter perpendicular to Chord? Bisector of the angles A,B,C of a triangle ABC meet the circumcircle of the triangle in points P, Q and R respectively and intersect at the point T. Prove that Ap is perpendicular to RQ, BQ is the perpendicular to RP and CR is the perpendicular to QP. I tried to prove that T is the centre of the circle. Then i would be perpendicular to chord. But i have no clue Bisector of the angles A,B,C of a triangle ABC meet the circumcircle of the triangle in points P, Q and R respectively and intersect at the point T. Prove that Ap is perpendicular to RQ, BQ is the perpendicular to RP and CR is the perpendicular to QP. I tried to prove that T is the centre of the circle. Then i would be perpendicular to chord. But i have no clue That's because T doesn't necessarily have to be the center of the circumcircle. However, it is the incenter of the triangle. (the incenter is the intersection of the internal angle bisectors). It's been a while since I took my transformation geometry course, but I would probably begin with an accurate drawing using a compass and straight edge. My intuition tells me that you need to perform a dilation transformation with center T with factor -TQ/TB. then something follows from there. Hello, Lukybear! I assume you have a good sketch. Bisectors of the angles $A,B,C$ of $\Delta ABC$ meet the circumcircle of the triangle in points $P, Q, R$ respectively and intersect at point $T.$ Prove that: . $AP \perp RQ,\; BQ \perp RP,\;CR \perp QP.$ Let the angles at $A$ be $\alpha.$ That is: . $\angle BAP = \angle CAP = \alpha.$ Then: . $\text{arc}(BP) = \text{arc}(PC) = 2\alpha$ Let the angles at $B$ be $\beta.$ That is: . $\angle ABQ = \angle CBQ = \beta.$ Then: . $\text{arc}(AQ) = \text{arc}(QC) = 2\beta$ Let the angles at $C$ be $\gamma.$ that is: . $\angle ACR = \angle BCR = \gamma.$ Then: . $\text{arc}(AR) = \text{arc}(RB) = 2\gamma$ In $\Delta ABC\!:\;2\alpha + 2\beta + 2\gamma \:=\:180^o \quad\Rightarrow\quad \alpha + \beta + \gamma \:=\:90^o$ Let $AP$ and $RQ$ intersect at $U.$ From the "intersecting chords theorem", . . we have: . $\angle AU\!R \;=\;\dfrac{\text{arc}(AR) + \text{arc}(QP)}{2}$ Hence: . $\angle AU\!R \:=\:\dfrac{2\gamma + (2\beta + 2\alpha)}{2} \;=\;\alpha + \beta + \gamma \:=\:90^o$ Therefore: . $AP \perp RQ.$ In a similar fashion, prove that $BQ \perp RP$ and $CR \perp QP.$ July 3rd 2010, 09:34 PM #2 July 4th 2010, 03:35 AM #3 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/geometry/150041-circle-geometry-incenter-perpendicular-chord.html","timestamp":"2014-04-20T23:51:09Z","content_type":null,"content_length":"43387","record_id":"<urn:uuid:7e91b347-da4c-4589-9e01-273bb3dd98e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: multilevel ordered logit cond prob in gllapred [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: multilevel ordered logit cond prob in gllapred From Michael Ingre <Michael.Ingre@ipm.ki.se> To statalist@hsphsun2.harvard.edu Subject Re: st: multilevel ordered logit cond prob in gllapred Date Thu, 24 Mar 2005 12:57:29 +0100 I did something similar a while ago using -nlcom- on an estimated ordered ologit with -gllamm-. An example of how I did it is presented below using auto.dta. // Michael sysuse auto // Create level 2 ID gen make2 = substr(make, 1, index(make, " ")) egen pick =tag(make2) gen id=_n if pick sort make2 id replace id = id[_n-1] if id == . // Make turn>40 a dichotomous predictor variable gen turn40 = turn > 40 // estimate gllamm model on rep78 with turn40 as predictor gllamm rep78 turn40 , i(id) link(ologit) fam(bin) adapt // Use local macros to temporary hold your input values local pv1 = 1 // predictor value 1 local pv2 = 0 // predictor value 2 local cut = 11 // Choose cut point i.e. which level of the dependent variable you want to predict // calculate log of the ratio of the two probabilities predicted from pv1/pv2 nlcom ln( (exp(`pv1' * [rep78]turn40 - [_cut`cut']_cons) / /// (1 + exp(`pv1' * [rep78]turn40 - [_cut`cut']_cons))) / /// (exp(`pv2' * [rep78]turn40 - [_cut`cut']_cons) / /// (1 + exp(`pv2' * [rep78]turn40 - [_cut`cut']_cons)))) // Store the results from -nlcom- matrix b = r(b) matrix V = r(V) // Values needs to be exponentiated to describe RRR di "RRR: " exp(b[1,1]) di "Lower 95% CI " exp(b[1,1] - sqrt(V[1,1])*1.96) di "Upper 95% CI " exp(b[1,1] + sqrt(V[1,1])*1.96) On 2005-03-23, at 20:49, INAGAMI,SANAE wrote: I have looked at the gllamm manual. I just don't know how to write the I want to know the probability of each ordered outcome conditional on a categorical predictor variable being 1. Then with the same command, I want to change the predictor variable to 0. Then I would like to set up a relative risk of outcomes by creating a ratio of the probabilities (1 vs 0). Then I would like to come up with confidence intervals for those ratios. I can do this in ordered logistic stata, but I can't do it with glamm because I think I set up the syntax incorrectly. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ Michael Ingre , PhD student & Research Associate Department of Psychology, Stockholm University & National Institute for Psychosocial Medicine IPM Box 230, 171 77 Stockholm, Sweden * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2005-03/msg00800.html","timestamp":"2014-04-18T08:21:44Z","content_type":null,"content_length":"8304","record_id":"<urn:uuid:c697ca69-5bfb-41e3-b719-b45f5548dba5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Branch-Circuit, Feeder and Service Calculations, Part II An essential part in the life of an electrician is performing load calculations. Determining what size conductors and overcurrent protective devices to install is something most electricians do on a daily basis. Specifications for calculating branch-circuit, feeder and service loads are in Article 220 of the National Electrical Code (NEC). This article is divided into five parts. Part I covered general requirements for calculation methods (Electrical Contractor, March 2006). Branch-circuit load calculation methods are covered in Part II. Calculation methods for feeders and services are covered in Parts III and IV. Part V provides calculation methods for farms. This month’s column is the second part in a series dedicated to clarifying the load calculation requirements stipulated in Article 220. Sections 220.3 and 220.5 are included in the general specifications in Article 220. New to the 2005 edition of the NEC is 220.3 and Table 220.3. Other articles contain load-calculation requirements in specialized appli-cations. These references are in Table 220.3. These requirements are in addition to, or modifications of, those within Article 220. For example, while load-calculation requirements for electric welders are not listed in Article 220, they are in Article 630. Table 220.3 provides the specific sections in Article 630 pertaining to the ampacity calculations for electric welders. [630.11, 630.31] Unless other voltages are specified, for purposes of calculating branch-circuit and feeder loads, nominal system voltages of 120, 120/240, 208Y/120, 240, 347, 480Y/277, 480, 600Y/347 and 600 volts shall be used. [220.5(A)] When performing calculations in Article 220, it is permissible to round a fraction of an ampere to the nearest integer or whole number. Where calculations result in a fraction of an ampere that is less than 0.5, such fractions shall be permitted to be dropped. [220.5(B)] For example, after performing a load calculation, the ending result is 122.32 amperes. Because of 220.5(B), the fraction of an ampere that is less than 0.5 can be dropped. Because this fraction can be dropped, the new result is 122 amperes. If the ending result of the calculation is 122.5, the fraction must be rounded up. Because 0.5 must be rounded up to the next whole number, the new result is now 123 amperes. Part II (220.10 through 220.18) of Article 220 covers branch- circuit load calculations. Although Part I provides general requirements for the article, 220.10 provides general requirements for calculating branch-circuit loads in Part II. Branch-circuit loads shall be calculated as shown in 220.12, 220.14, and 220.16. [220.10] 220.12 Lighting Loads for Specified Occupancies The actual specifications for load calculations begin in 220.12. Table 220.12 provides unit loads for a variety of occupancies. Unit loads are provided in both volt-amperes per square foot and volt-amperes per square meter. Unit loads range from ¼ to 3½ volt-amperes per square foot. Some examples of general lighting unit loads include 1 volt-ampere per square foot for churches, 2 volt-amperes per square foot for hospitals, 3 volt-amperes per square foot for dwelling units, and 3½ volt-amperes per square foot for banks and office buildings (see Figure 1). The floor area for each floor shall be calculated from the outside dimensions of the building, dwelling unit or other area involved. [220.12] Regardless of the wall thickness, the floor area for the occupancy must be calculated from the outside dimensions. For example, the general lighting load is needed for a dwelling unit. The inside dimensions, measured from inside wall to inside wall, are 25 by 40 feet. Each exterior wall measures 6 inches deep. Because the exterior walls must be included, the calculated floor area is 1,066 square feet (26 x 41 = 1,066). Table 220.12 stipulates a minimum unit load of 3 volt-amperes per square foot for dwelling units. The minimum general lighting load for the dwelling in this example is 3,198 volt-amperes (1,066 x 3 = 3,198, see Figure 2). As stated in the last sentence in 220.12, certain areas in dwelling units can be omitted from the calculated floor area. It is not necessary to include open porches, garages, or unused or unfinished spaces not adaptable for future use. For example, the general lighting load is needed for a dwelling unit. The outside dimensions are 30 by 50 feet. This dwelling includes an open front porch that is within the boundaries of the outside dimensions. The front porch measures 5 by 8 feet. The total area, including the open porch, is 1,500 square feet (30 x 50 = 1,500). The porch’s area is 40 square feet (5 x 8 = 40). After deducting the area of the open porch, the calculated area is 1,460 square feet (1,500 – 40 = 1,460). After applying the minimum unit load per square foot, the minimum general lighting load for this dwelling is 4,380 volt-amperes (1,460 x 3 = 4,380, see Figure 3). Calculating specialized areas separately is not permissible in one-family dwellings and individual dwelling units of two-family and multifamily dwellings. [Table 220.12] Table 220.12 contains unit loads for areas within the main occupancy. These areas include assembly halls and auditoriums at 1 volt-ampere per square foot; halls, corridors, closets and stairways at ½ volt-ampere per square foot; and storage spaces at ¼ volt-ampere per square foot. It is permissible to calculate these specialized areas at lesser unit loads. For example, a bank has a total area of 20,000 square feet. The following are part of the bank’s total area: hallways, corridors and stairways that total 1,600 square feet; storage spaces that measure 400 square feet; and a 3,000-square-foot assembly hall (see Figure 4). First, calculate the specialized areas within the bank. The general lighting load for this bank’s hallways, corridors, and stairways is 800 volt-amperes (1,600 x 0.5 = 800). The general lighting load for storage spaces is 100 volt-amperes (400 x 0.25 = 100). The general lighting load for the assembly hall is 3,000 volt-amperes (3,000 x 1 = 3,000). The total area for hallways, corridors, stairways, storage spaces and the assembly hall is 5,000 square feet (1,600 + 400 + 3,000 = 5,000). Next, calculate the general lighting load for the remaining area in the bank. After deducting the specialized areas in the bank, the remaining area is 15,000 square feet (20,000 – 5,000 = 15,000). The general lighting load for the main area of the bank is 52,500 volt-amperes (15,000 x 3.5 = 52,500). Finally, add together all the general lighting loads in this bank. The general lighting load (not counting receptacle loads) for this bank is 56,400 volt- amperes (800 + 100 + 3,000 + 52,500 = 56,400, see Figure 5). Had the specialized areas within the bank not been calculated separately, the general lighting load would have been 70,000 volt-amperes (20,000 x 3.5 = 70,000). Calculating the general lighting load is one step in a multistep process when determining branch-circuit, feeder and service loads. Although the lighting load in this bank is a continuous load, it has not been multiplied by 125 percent. There is no requirement in Article 220 to multiply continuous loads by 125 percent. When sizing conductors and overcurrent protection for feeders and services, the general lighting load will be combined with other steps. At that point, all continuous loads must be multiplied by 125 percent. Next month’s column continues the discussion of load calculations. EC MILLER, owner of Lighthouse Educational Services, teaches classes and seminars on the electrical industry. He is the author of “Illustrated Guide to the National Electrical Code” and NFPA’s “Electrical Reference.” He can be reached at 615.333-3336, charles@charlesRmiller.com, or www.charlesRmiller.com. Comment Count Comments
{"url":"http://www.ecmag.com/section/codes-standards/branch-circuit-feeder-and-service-calculations-part-ii?qt-issues_block=1","timestamp":"2014-04-16T14:46:22Z","content_type":null,"content_length":"60397","record_id":"<urn:uuid:d9285cac-775b-493f-8327-8b64554c3bf3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Distortion in JFET input stage circuits Distortion analysis of basic JFET circuits This page has been initiated by John Curl in Blowtorch thread at www.diyaudio.com. We have started to analyze distortion spectra of single JFET and differential JFET pair both for ideal and real devices. Our goal is to continue with more sophisticated structures like complementary differential pair and differential pair with folded cascode. Let me repeat the results for single JFET and differential JFET pair at first 1. Single JFET 1.1. Single JFET, zero lambda For single JFET, the analyzed circuit is as follows: The device used is 2SK389_V. The spice model of this device has lambda = 0, i.e. the transfer function meets square law. The model is as follows: Distortion spectrum of this circuit with the model shown is shown in Link 1 .This is a distortion of JFET drain current for constant Vds. It contains only pure 2nd harmonic component, with value of some 2.5% for given model, operating current, and input voltage. 1.2. Single JFET, non-ideal device For the same circuit (Fig. 1) we use a model of non-ideal device. This model has non-zero lambda (channel-length modulation) : Now distortion spectrum is shown in Link 2 . We can see that 3rd and 4rd harmonic component appeared, as a result of non-zero channel-length modulation. The reason of appearance of new harmonic components can be derived from JFET transfer function: where the effect of lambda can be seen. 1.3. Single JFET, zero lambda, emitter degeneration Now we analyze single device with zero lambda (meets square law transfer function), but with emitter (source) resistor added. For this circuit, distortion is in Link 3 . 3rd harmonic has appeared, as a result of voltage drop on R4 effect on Id (equation 1). 1.4. Single JFET, zero lambda, no degeneration, drain resistor In case we increase value of R1 to become a drain load, for the reason of voltage output, again we create higher harmonics than 2nd. The reason is again seen in equation 1, Vds is not constant, the (1 + lambda.Vds) becomes non-zero, and Id does not follow pure square law. 2. JFET differential pair 2.1. JFET differential pair, lambda=0, single-ended input, single-ended output We start our JFET differential pair analysis with single-ended input and single-ended output, very usual praxis in audio amplifiers. The input signal is voltage V3, output is current i(R2), with R2 value negligible. Distortion of such circuit is shown in Link 4 . It is pure 3rd harmonic of some 0.1% for given conditions. 2.2. JFET differential pair, non-zero lambda For same conditions as in article 2.1., only JFET model has non-zero lambda (model 2), we can see distortion in Link 5 . Now, 2nd harmonic has appeared and also very small 5th. 2.3. JFET differential pair, balanced input, balanced output Now we have JFET differential pair with true differential input and true differential output. It sees voltage difference at the input and current difference at the output. First we assume devices with lambda=0. Distortion spectrum is in Link 6 . It is pure 3rd harmonic, 0.03%. Now we take non-ideal device with lambda = 1m (model 2), distortion is in Link 7 . It is again only pure 3rd harmonic. That means, for true differential pair, even with nonideal devices, no even harmonics were created. The condition is symmetry of input, balanced output and same devices. We increase output current now to 2mA, everything else same. Distortion is in Link 8 . Distortion is odd harmonics only, but we can see very small 5th, 0.00025%, has appeared. 3. Complementary differential pair 3.1. Complementary differential pair, fully balanced, current output is our next step: Distortion of this circuit is in Link 9 . Distortion is pure 3rd, and decreased to 0.006%, for 1mA output current. In case we replace the I1 constant current source by short, as recommended by John Curl, we get substantial reduction of distortion to absolutely negligible value, see Link 10 . However, we need some means to control drain current, as it depends on Vto of JFEts, and with short it may become too high. Or we use another device grade, for BL we get Id = 10mA approx, like here in the Link 10. 4. Gain control for complementary differential pair To control the gain, we introduce a resistor between left and right halves of the circuit. The R7 resistor creates local feedback, the higher the resistor value, the lower gain. Let's see effect of resistor value to distortion spectrum. 4.1. R7 = 10R Distortion spectrum for R7 = 10R is shown in Link 11. It is pure 3rd harmonic of 0.0007%, nothing else is seen till -180dB. 4.2. R7 = 100R Distortion spectrum for R7 = 100R is shown in Link 12. The 3rd is reduced to 0.0002%, but we can see higher harmonics start to appear, though very low in magnitude. So this kind of local feedback reduces overall distortion, but creates new harmonic components again. 5. Influence of matching of N and P halves in complementary differential pair Let us return to complementary differential pair with N and P halves directly connected. In case of 2SK389BL/2SJ109BL model devices, Id = 9.285mA. We shall now investigate distortion for output current 10mA, see Link 13. Distortion rised and we can see also 5th harmonic has appeared. We also monitor all 4 drain currents. As we can see, they are not equal, but N a P halves differ. The reason is different threshold voltage and transconductance of N and P jfets, see model parameters: *** N Channel JFET .MODEL 2SK389BL NJF (VTO=-955.83m BETA=12.7099m LAMBDA=1m RS=10.3203 + CGD=11.2066p CGS=19.1141p PB=3.80937 KF=1.217079E-019 AF=502.753m) *** P Channel JFET .MODEL 2SJ109BL PJF (VTO=-418.894m BETA=83.8035m LAMBDA=1m RS=11.0887 + CGD=71.6975p CGS=61.6464p PB=1.85382 KF=7.879067E-018 AF=499.474m) We shall try to make the models more similar by keeping VTO=-955.83m and BETA=12.7099m for both N and P jfet. Now let us try distortion again, see Link 14. The 5th harmonic disappeared, 3rd is reduced and all 4 drain currents are same. So, to keep the lowest distortion and avoid higher harmonics, we need to have N and P jfets as equal as possible. to be continued .... Pavel Macura, 04/2008
{"url":"http://web.telecom.cz/macura/diyaudio/jfetdist.htm","timestamp":"2014-04-20T03:18:46Z","content_type":null,"content_length":"9250","record_id":"<urn:uuid:1539e19d-c29f-486f-8e06-04042abfc1c1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
New Chicago, IN Math Tutor Find a New Chicago, IN Math Tutor ...I have over ten years experience tutoring students in reading including working with an hearing impaired student. I teach students how to break words into syllables which makes them more manageable and easier to read. I also work on comprehension skills. 10 Subjects: including geometry, GRE, algebra 1, algebra 2 I have spent in excess of 30 years in the chemical and environmental industry as an industrial trainer, research engineer, supervisor and manager. I have authored technical articles and made numerous presentations to both technical and public audiences. I believe that in order for anyone to unders... 13 Subjects: including algebra 2, biology, calculus, physics ...I went through the process myself, successfully gaining admission to Northside College Prep, and I am fully up to date on the current CPS protocol for applying to these schools. I am able to walk you through the application process, give you an overview of prospective schools, and provide academ... 38 Subjects: including algebra 1, reading, trigonometry, statistics ...I have taught statistics for over 15 years at the University level, as well as tutoring individual students and small groups for more than two decades. My BA in Physics and Math, as well as my MBA have helped me relate well to the bio-sciences, health-care and social sciences. The math is the same, and I have learned the "bio- terminology". 11 Subjects: including statistics, probability, algebra 1, algebra 2 ...Mathematics is my passion and the fundamentals are my specialty, from pre-algebra thru algebra and analytic geometry. Many award winning students in the field of Math. I stress the importance knowing and speaking the language of mathematics. 8 Subjects: including linear algebra, ACT Science, algebra 1, algebra 2 Related New Chicago, IN Tutors New Chicago, IN Accounting Tutors New Chicago, IN ACT Tutors New Chicago, IN Algebra Tutors New Chicago, IN Algebra 2 Tutors New Chicago, IN Calculus Tutors New Chicago, IN Geometry Tutors New Chicago, IN Math Tutors New Chicago, IN Prealgebra Tutors New Chicago, IN Precalculus Tutors New Chicago, IN SAT Tutors New Chicago, IN SAT Math Tutors New Chicago, IN Science Tutors New Chicago, IN Statistics Tutors New Chicago, IN Trigonometry Tutors Nearby Cities With Math Tutor Beverly Shores Math Tutors Boone Grove Math Tutors Gary, IN Math Tutors Hebron, IN Math Tutors Hobart, IN Math Tutors Kouts Math Tutors La Crosse, IN Math Tutors Lake Station Math Tutors Leroy, IN Math Tutors Lowell, IN Math Tutors Ogden Dunes, IN Math Tutors Pottawattamie Park, IN Math Tutors Wanatah Math Tutors Wheeler, IN Math Tutors Whiting, IN Math Tutors
{"url":"http://www.purplemath.com/new_chicago_in_math_tutors.php","timestamp":"2014-04-20T19:51:24Z","content_type":null,"content_length":"23946","record_id":"<urn:uuid:6d4e2375-289e-47f3-a4d5-714c07fd8919>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
Newton, MA Algebra 2 Tutor Find a Newton, MA Algebra 2 Tutor ...I also coached. I have worked with infants of 6 mos to senior citizens to providing therapeutic lessons to those with disabilities. I have a WSI and lifeguard certification. 19 Subjects: including algebra 2, chemistry, calculus, Spanish ...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. 16 Subjects: including algebra 2, French, elementary math, algebra 1 ...Over the last three years I have been either tutoring or working in classroom settings. In 2010 I earned my Massachusetts Physics Teaching License. I have a Bachelors of Science in Materials Science and Engineering from MIT and an MBA from Babson College. 12 Subjects: including algebra 2, chemistry, calculus, physics ...I have helped numerous students master both the foundations and the specific skills taught in a variety of calculus courses. Geometry is the study and exploration of logic and visual reasoning. There are numerous tools and facts that you need to understand and then you build on them throughout the year. 23 Subjects: including algebra 2, physics, calculus, statistics I am a recent graduate of MIT with 6 years of experience working with a wide range of students in grades 8-12 to improve SAT scores in the Reading, Math, and Writing sections. I have successfully tutored students at all skill levels and have achieved measurable success in all cases. I am a patient... 18 Subjects: including algebra 2, chemistry, physics, calculus Related Newton, MA Tutors Newton, MA Accounting Tutors Newton, MA ACT Tutors Newton, MA Algebra Tutors Newton, MA Algebra 2 Tutors Newton, MA Calculus Tutors Newton, MA Geometry Tutors Newton, MA Math Tutors Newton, MA Prealgebra Tutors Newton, MA Precalculus Tutors Newton, MA SAT Tutors Newton, MA SAT Math Tutors Newton, MA Science Tutors Newton, MA Statistics Tutors Newton, MA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Auburndale, MA algebra 2 Tutors Brighton, MA algebra 2 Tutors Brookline, MA algebra 2 Tutors Cambridge, MA algebra 2 Tutors Newton Center algebra 2 Tutors Newton Centre, MA algebra 2 Tutors Newton Highlands algebra 2 Tutors Newton Upper Falls algebra 2 Tutors Newtonville, MA algebra 2 Tutors Roxbury, MA algebra 2 Tutors Somerville, MA algebra 2 Tutors Waban algebra 2 Tutors Waltham, MA algebra 2 Tutors Watertown, MA algebra 2 Tutors West Newton, MA algebra 2 Tutors
{"url":"http://www.purplemath.com/Newton_MA_Algebra_2_tutors.php","timestamp":"2014-04-19T15:04:11Z","content_type":null,"content_length":"24036","record_id":"<urn:uuid:3c4f5ebe-3ced-4adf-87f0-3663fbb71924>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you decide what to read? I tried a mathematical approach about five or seven years ago and discovered I had forgotten most of the math I learned 40 or 50 years ago. So tensor mathematics turned out not to be SO interesting to me now that I wanted to recover undergraduate and graduate mathematics...time is short...I now use expert interpretations from posters here about what happens in that math. So I switched to half a dozen or so books to get started.....for the general public, light on math, like RELATIVITY, Albert Einstein [1954] THE FABRIC OF THE COSMOS, Brian Greene, THE BLACK HOLE WAR, Leonard Susskind, PARALLEL WORLDS, Michio Kaku, THE TROUBLE WITH PHYSICS, Lee Smolin, THE NATURE OF SPACE AND TIME, Hawking and Penrose, THE INFLATIONARY UNIVERSE, Allan Guth, WARPED PASSAGES, Lisa Randall [lots of particle theory and explanations] Ended up readin maybe two or three dozen books while aboard my boat summers in Maine after I retired. That's when I had time. Buy them used...like at Amazonbooks...cheap. I compare descriptions with what I can easily find online, like Wikipedia, Ned Wright, Mathpages, etc. From time to time I read an ARXIV research paper recommended in these forums. And when I come across an 'aha!!' description, into my notes it goes!! Here is one recent forums discussion I found really interesting:
{"url":"http://www.physicsforums.com/showthread.php?p=4189103","timestamp":"2014-04-17T21:32:13Z","content_type":null,"content_length":"33072","record_id":"<urn:uuid:d8c58e1e-d06f-460f-abef-a3fd2efa0128>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
What kinds of Lot Acceptance Sampling Plans (LASPs) are there? 6. Process or Product Monitoring and Control 6.2. Test Product for Acceptability: Lot Acceptance Sampling 6.2.2. What kinds of Lot Acceptance Sampling Plans (LASPs) are there? LASP is a sampling A lot acceptance sampling plan (LASP) is a sampling scheme and a set of rules for making decisions. The decision, based on counting the number of defectives in a sample, can be to scheme and a set accept the lot, reject the lot, or even, for multiple or sequential sampling schemes, to take another sample and then repeat the decision process. of rules Types of LASPs fall into the following categories: acceptance plans to choose from • Single sampling plans: One sample of items is selected at random from a lot and the disposition of the lot is determined from the resulting information. These plans are usually denoted as (\(n,c\)) plans for a sample size \(n\), where the lot is rejected if there are more than \(c\) defectives. These are the most common (and easiest) plans to use although not the most efficient in terms of average number of samples needed. • Double sampling plans: After the first sample is tested, there are three possibilities: 1. Accept the lot 2. Reject the lot 3. No decision If the outcome is (3), and a second sample is taken, the procedure is to combine the results of both samples and make a final decision based on that information. • Multiple sampling plans: This is an extension of the double sampling plans where more than two samples are needed to reach a conclusion. The advantage of multiple sampling is smaller sample sizes. • Sequential sampling plans: This is the ultimate extension of multiple sampling where items are selected from a lot one at a time and after inspection of each item a decision is made to accept or reject the lot or select another unit. • Skip lot sampling plans: Skip lot sampling means that only a fraction of the submitted lots are inspected. Definitions of Deriving a plan, within one of the categories listed above, is discussed in the pages that follow. All derivations depend on the properties you want the plan to have. These are basic Acceptance described using the following terms: Sampling terms • Acceptable Quality Level (AQL): The AQL is a percent defective that is the base line requirement for the quality of the producer's product. The producer would like to design a sampling plan such that there is a high probability of accepting a lot that has a defect level less than or equal to the AQL. • Lot Tolerance Percent Defective (LTPD): The LTPD is a designated high defect level that would be unacceptable to the consumer. The consumer would like the sampling plan to have a low probability of accepting a lot with a defect level as high as the LTPD. • Type I Error (Producer's Risk): This is the probability, for a given (\(n,c\)) sampling plan, of rejecting a lot that has a defect level equal to the AQL. The producer suffers when this occurs, because a lot with acceptable quality was rejected. The symbol \(\alpha\) is commonly used for the Type I error and typical values for \(\alpha\) range from 0.2 to 0.01. • Type II Error (Consumer's Risk): This is the probability, for a given (\(n,c\)) sampling plan, of accepting a lot with a defect level equal to the LTPD. The consumer suffers when this occurs, because a lot with unacceptable quality was accepted. The symbol \(\beta\) is commonly used for the Type II error and typical values range from 0.2 to 0.01. • Operating Characteristic (OC) Curve: This curve plots the probability of accepting the lot (Y-axis) versus the lot fraction or percent defectives (X-axis). The OC curve is the primary tool for displaying and investigating the properties of a LASP. • Average Outgoing Quality (AOQ): A common procedure, when sampling and testing is non-destructive, is to 100 % inspect rejected lots and replace all defectives with good units. In this case, all rejected lots are made perfect and the only defects left are those in lots that were accepted. AOQs refer to the long term defect level for this combined LASP and 100 % inspection of rejected lots process. If all lots come in with a defect level of exactly \(p\), and the OC curve for the chosen (\(n,c\)) LASP indicates a probability \(p_a\) of accepting such a lot, over the long run the AOQ can easily be shown to be: $$ \mbox{AOQ} = \frac{p_a p (N - n)}{N} \, ,$$ where \(N\) is the lot size. • Average Outgoing Quality Level (AOQL): A plot of the AOQ (Y-axis) versus the incoming lot \(p\) (X-axis) will start at 0 for \(p=0\), and return to 0 for \(p = 1\) (where every lot is 100 % inspected and rectified). In between, it will rise to a maximum. This maximum, which is the worst possible long term AOQ, is called the AOQL. • Average Total Inspection (ATI): When rejected lots are 100 % inspected, it is easy to calculate the ATI if lots come consistently with a defect level of \(p\). For a LASP (\ (n,c\)) with a probability \(p_a\) of accepting a lot with defect level \(p\), we have $$ \mbox{ATI} = n + (1-p_a)(N-n) \, , $$ where \(N\) is the lot size. • Average Sample Number (ASN): For a single sampling LASP (\(n,c\)) we know each and every lot has a sample of size \(n\) can be calculated assuming all lots come in with a defect level of \(p\). A plot of the ASN, versus the incoming defect level \(p\), describes the sampling efficiency of a given LASP scheme. The final choice Making a final choice between single or multiple sampling plans that have acceptable properties is a matter of deciding whether the average sampling savings gained by the various is a tradeoff multiple sampling plans justifies the additional complexity of these plans and the uncertainty of not knowing how much sampling and inspection will be done on a day-by-day basis.
{"url":"http://www.itl.nist.gov/div898/handbook/pmc/section2/pmc22.htm","timestamp":"2014-04-16T15:59:22Z","content_type":null,"content_length":"12681","record_id":"<urn:uuid:a10ecc86-1753-4b13-bdc5-86e49b38e0df>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Ways to roll 6 dice such that there are exactly 4 unique numbers? July 22nd 2010, 10:02 AM #1 Jul 2010 Ways to roll 6 dice such that there are exactly 4 unique numbers? This one has me stumped. I've written a monte-carlo simulator, and probability of rolling such a combination is something like 50.1% My thinking is that there are C(6, 4)=15 ways to have 4 unique numbers, and for each of these there are C(6+4-1, 6)=84 ways to have 6 dice land on these numbers. but this gives 1260 ways, which is completely unreasonable when there are C(6+6-1,6)=462 ways to roll 6 dice. I just can't wrap my brain around this one. Any ideas? Im not sure what you mean by "4 unique numbers" "a total of 4 different numbers appear in the set of 6" (eg 112234) "a total of 4 numbers appear 1 time, and one other number occurs twice" (eg 123455) edit: wrong answer Last edited by SpringFan25; July 22nd 2010 at 11:00 AM. Here is a different approach.. I interpreted the question as: you roll 6 dice, and there are exactly four of {1,2,3,4,5,6} represented among what is rolled. So it's either in the form {a,b,c,d,a,a} or {a,b,c,d,a,b}. Like you said we have C(6,4) ways to get {a,b,c,d} For {a,b,c,d,a,a}, there are 4 ways to choose "a" and then $\displaystyle \binom{6}{3,1,1,1}=\frac{6!}{3!}$ ways to permute. For {a,b,c,d,a,b} we have C(4,2) ways to choose {a,b} and $\displaystyle \binom{6}{2,2,1,1}=\frac{6!}{2!2!}$ ways to permute. So it's C(6,4) * ( C(4,1) * 6!/3! + C(4,2) * 6!/2!/2! ) ways, and divide that by 6^6. This comes to 325/648. July 22nd 2010, 10:49 AM #2 MHF Contributor May 2010 July 22nd 2010, 11:05 AM #3
{"url":"http://mathhelpforum.com/statistics/151703-ways-roll-6-dice-such-there-exactly-4-unique-numbers.html","timestamp":"2014-04-20T11:56:08Z","content_type":null,"content_length":"36726","record_id":"<urn:uuid:f21976cd-ab0c-4538-b985-c55a937750a6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Hidden in Einstein’s Math: Faster-than-Light Travel? Scientists have extended Einstein's equations for faster-than-light travel. Here a three-dimensional (right) graph shows the relationship between three different velocities: v, u and U, where v is the velocity of a second observer measured by a first observer, u is the velocity of a moving particle measured by the second observer, and U is the relative velocity of the particle to the first Credit: Hill, Cox/Proceedings of the Royal Society A Although Einstein's theories suggest nothing can move faster than the speed of light, two scientists have extended his equations to show what would happen if faster-than-light travel were possible. Despite an apparent prohibition on such travel by Einstein’s theory of special relativity, the scientists said the theory actually lends itself easily to a description of velocities that exceed the speed of light. "We started thinking about it, and we think this is a very natural extension of Einstein's equations," said applied mathematician James Hill, who co-authored the new paper with his University of Adelaide, Australia, colleague Barry Cox. The paper was published Oct. 3 in the journal Proceedings of the Royal Society A: Mathematical and Physical Sciences. Special relativity, proposed by Albert Einstein in 1905, showed how concepts like speed are all relative: A moving observer will measure the speed of an object to be different than a stationary observer will. Furthermore, relativity revealed the concept of time dilation, which says that the faster you go, the more time seems to slow down. Thus, the crew of a speeding spaceship might perceive their trip to another planet to take two weeks, while people left behind on Earth would observe their passage taking 20 years. Yet special relativity breaks down if two people's relative velocity, the difference between their respective speeds, approaches the speed of light. Now, Hill and Cox have extended the theory to accommodate an infinite relative velocity. [Top 10 Implications of Faster-Than-Light Neutrinos] Interestingly, neither the original Einstein equations, nor the new, extended theory can describe massive objects moving at the speed of light itself. Here, both sets of equations break down into mathematical singularities, where physical properties can't be defined. "The actual business of going through the speed of light is not defined," Hill told LiveScience. "The theory we've come up with is simply for velocities greater than the speed of light." In effect, the singularity divides the universe into two: a world where everything moves slower than the speed of light, and a world where everything moves faster. The laws of physics in these two realms could turn out to be quite different. In some ways, the hidden world beyond the speed of light looks to be a strange one indeed. Hill and Cox's equations suggest, for example, that as a spaceship traveling at super-light speeds accelerated faster and faster, it would lose more and more mass, until at infinite velocity, its mass became zero. "It's very suggestive that the whole game is different once you go faster than light," Hill said. Despite the singularity, Hill is not ready to accept that the speed of light is an insurmountable wall. He compared it to crossing the sound barrier. Before Chuck Yeager became the first person to travel faster than the speed of sound in 1947, many experts questioned whether it could be done. Scientists worried that the plane would disintegrate, or the human body wouldn't survive. Neither turned out to be true. Fears of crossing the light barrier may be similarly unfounded, Hill said. "I think it's only a matter of time," he said. "Human ingenuity being what it is, it's going to happen, but maybe it will involve a transportation mechanism entirely different from anything presently Follow Clara Moskowitz on Twitter @ClaraMoskowitz or LiveScience @livescience. We're also on Facebook & Google+.
{"url":"http://www.space.com/17951-einstein-relativity-faster-than-light-travel.html","timestamp":"2014-04-20T21:04:31Z","content_type":null,"content_length":"45617","record_id":"<urn:uuid:bc832d10-b504-4b58-9524-d55efdee329c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"}